Taylor and Maclaurin Series

Right when we thought we had seen it all, it's time to take a look at the big boys: Taylor and Maclaurin. We like old Brook and Colin, they made calculus class just a little bit easier—at least when it comes to series. They made the discovery that...drumroll...if you can dodge a wrench, you can dodge a ball. Oh, wait, that's gym class. 

Taylor and Maclaurin actually found that if we can find the derivatives of a function, we can find its power series. That is, if the function happens to have derivatives of all orders.

Think back to tangent line approximations.

If the function f is differentiable at a, we can draw a line through the point (a, f(a)) with slope f'(a). For x close to a, the value of f at x will be close to the value of the line at x:

f(x) ≈ f(a) + f'(a)(x – a)

That line is called the tangent line, and it approximates the function f near a.

Sample Problem

Find the approximate value of √5.

Answer. Let f(x) = √x. We know f(4) and f'(4), and 5 is close to 4, so we'll draw the tangent line approximation to f at 4 and use the value of the tangent line when x = 5 as an approximation for √5.

To draw a line we need a point and a slope. We have the point

(4, f(4)) = (4, 2).

The slope of f at that point is

f'(4) = 1/(2√4) = 1/4

Putting this together, we have the point (4, 2) and the slope 1/4, so the tangent line is

y = 2 + (1/4)(x – 4)

Graphing the tangent line and the function f on one graph, this looks like a reasonable approximation. Close to x = 4 the line and the function look very similar.

When x = 5 the value on the line is y = 2.25, so we can make the approximation

√5 ≈ 2.25

A calculator will tell you √5 ≈ 2.2360679... (or something like that), which is pretty close to 2.25, so our tangent line approximation seems to have hit the nail on the head.
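
If you'd like to double-check the arithmetic, here's a minimal Python sketch of the same calculation (the helper names f, f_prime, and tangent_line are ours, not anything standard):

```python
import math

def f(x):
    return math.sqrt(x)

def f_prime(x):
    # derivative of sqrt(x) is 1/(2*sqrt(x))
    return 1 / (2 * math.sqrt(x))

def tangent_line(x, a):
    # value at x of the tangent line to f at a: f(a) + f'(a)(x - a)
    return f(a) + f_prime(a) * (x - a)

print(tangent_line(5, a=4))   # 2.25
print(math.sqrt(5))           # 2.2360679...
```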

A tangent line approximation works better for some functions than others.

When functions are really curvy, the tangent line approximation is only useful for points really close to the point (a, f(a)) we used to draw the line.

Lucky for us, there's something we can do about this. We can use more derivatives to get a better approximation. We hope you wore your derivation pants today.

With more derivatives we can build an approximation to f that will curve the same direction f curves, as we'll see in the next example.

We'll deal with the case where a = 0 first, since that's easiest.

Taylor Polynomials

Let f(x) = cos(x). Since f(0) = 1 and f'(0) = -sin(0) = 0, a linear approximation to f(x) near 0 is given by the function

g(x) = 1 + 0(x – 0) = 1.

This is a horizontal line, which doesn't look like a very good approximation.

We'd like to find an approximation to f(x) = cos(x) that's better than the linear one.

In particular we'd like to find a function g(x) of the form

g(x) = a + bx + cx^2

that has the same value, slope, and second derivative as f when x = 0.

Since we want g(0) and f(0) to be the same and we know

g(0) = a + b(0) + c(0)^2 = a

f(0) = cos(0) = 1

we must have a = 1 in order to have g(0) = f(0).

Next, we want g'(x) and f'(x) to be the same. Calculate the derivatives of both functions:

g'(x) = b + 2cx

f'(x) = -sin(x)

so g'(0) = b and f'(0) = -sin(0) = 0. In order for these to be the same we must have b = 0.

Finally, we want g^(2)(0) = f^(2)(0), so we'll calculate the second derivatives of both functions and see what we have to do to make them equal.

g^(2)(0) = 2c

f^(2)(0) = -cos(0) = -1

To have these equal we must have c = -1/2. We've found a, b, and c, so we know what g has to be now:

g(x) = 1 - (1/2)x^2

To check that this answer makes sense, we can graph both f and g on the same graph. If we did things right, g will look a lot like f near 0.

And it does.

Since g curves, it's able to have the same shape as f near 0, giving us a better approximation to f.
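
If you'd rather check numbers than squint at a graph, here's a minimal Python sketch (our own throwaway comparison, nothing standard) that lines up f(x) = cos(x) against g(x) = 1 - x^2/2 at a few points near 0:

```python
import math

def g(x):
    # the quadratic approximation to cos(x) near 0
    return 1 - 0.5 * x**2

for x in (0.0, 0.1, 0.25, 0.5):
    print(f"x = {x:<4}  cos(x) = {math.cos(x):.6f}  g(x) = {g(x):.6f}")
```

The two columns agree to several decimal places near 0 and drift apart as x gets bigger, which is exactly what we expect from an approximation centered at 0.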

We can generalize the idea of matching up derivatives to include as many derivatives as we like. Suppose we want a function

g(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + ... + a_nx^n

that approximates f(x) for x close to 0. We want g and f to be equal at 0, and we also want their derivatives at 0 to be equal for as many derivatives as g has. This means we need

g(0) = a_0 = f(0)

and g^(i)(0) = f^(i)(0) for each i from 1 up to n.

Let's take some derivatives of g.

g^(1)(x) = a_1 + 2a_2x + 3a_3x^2 + ... + na_nx^(n – 1)

g^(2)(x) = 2a_2 + (3·2)a_3x + ... + n(n – 1)a_nx^(n – 2)

g^(3)(x) = (3·2)a_3 + ... + n(n – 1)(n – 2)a_nx^(n – 3)

In order to have g^(1)(0) = f^(1)(0) we need

g^(1)(0) = a_1 = f^(1)(0)

In order to have g^(2)(0) = f^(2)(0) we need

g^(2)(0) = 2a_2 = f^(2)(0)

so

a_2 = f^(2)(0)/2

In order to have g^(3)(0) = f^(3)(0) we need

g^(3)(0) = 3!a_3 = f^(3)(0)

so

a_3 = f^(3)(0)/3!

See where we're going?

If g(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + ... + a_nx^n satisfies g^(i)(0) = f^(i)(0) for i up to n, let's find a_n.

Continuing the pattern we started, by the time we get up to the nth derivative we'll have

g^(n)(x) = n!a_n.

Then to have the nth derivatives of f and g agree at 0, we need

g^(n)(0) = n!a_n = f^(n)(0)

so

a_n = f^(n)(0)/n!

Now that we've worked out the coefficients, we can see the function g looks like this:

g(x) = f(0) + f^(1)(0)x + (f^(2)(0)/2!)x^2 + (f^(3)(0)/3!)x^3 + ... + (f^(n)(0)/n!)x^n

Remember to put the x terms back in. This function g is called the nth-degree Taylor polynomial for f.

Since we're using the values of f and its derivatives at zero, we say the polynomial g(x) is centered at 0. We can use g to approximate the value of (x) when x is close to 0.

This is the formula you'll want to remember and highlight and put a big box around: The nth-degree Taylor polynomial for the function f, centered at 0, is

g_n(x) = f(0) + f^(1)(0)x + (f^(2)(0)/2!)x^2 + ... + (f^(n)(0)/n!)x^n

Whenever you're asked to find a Taylor polynomial centered at 0, calculate derivatives, evaluate at 0, and plug them into this formula.

Then graph the original function f and the function g on the same axes. If they don't look pretty much the same near x = 0, go back and check your arithmetic.

The higher the degree of a Taylor polynomial, the better the approximation it gives near the center.

Be Careful: Evaluate the derivatives at 0 before plugging them into the formula.
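
If you'd like the computer to do the derivative-taking for you, here's a minimal sketch using the sympy library (assuming it's installed; the helper name taylor_at_zero is ours, not a sympy function):

```python
import sympy as sp

x = sp.symbols('x')

def taylor_at_zero(f, n):
    # nth-degree Taylor polynomial centered at 0:
    # the sum of f^(i)(0)/i! * x^i for i = 0, 1, ..., n
    return sum(sp.diff(f, x, i).subs(x, 0) / sp.factorial(i) * x**i
               for i in range(n + 1))

print(taylor_at_zero(sp.cos(x), 2))   # 1 - x**2/2, matching the example above
```

The key step is the .subs(x, 0): each derivative gets evaluated at 0 before it's multiplied by x^i, which is exactly the "Be Careful" warning above.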

Sample Problem

Find the second-degree Taylor polynomial for f(x) = x^3 + 5x^2 + 2x + 7 near x = 0.

Answer.

Find the first two derivatives:

f'(x) = 3x^2 + 10x + 2

f^(2)(x) = 6x + 10

Evaluate the function and its derivatives at 0:

f(0) = 7

f'(0) = 2

f^(2)(0) = 10

Now plug these values into the formula for a degree 2 Taylor polynomial:

g_2(x) = 7 + 2x + (10/2!)x^2 = 7 + 2x + 5x^2

If we graph f(x) and g_2(x), the picture looks reasonable. The two functions have roughly the same shape near 0.

We need to be careful not to get ahead of ourselves. If we put the derivatives into the Taylor polynomial formula before we evaluate them at 0, we get a mess.

This is the incorrect way:

h_2(x) = f(x) + f'(x)x + (f^(2)(x)/2!)x^2 = (x^3 + 5x^2 + 2x + 7) + (3x^2 + 10x + 2)x + ((6x + 10)/2)x^2

In this case, if we graph h_2 and f on the same graph, they don't really look the same.
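
To see the difference concretely, here's a small self-contained sympy sketch (again, our own code) that builds the degree-2 polynomial both ways:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 + 5*x**2 + 2*x + 7

# right way: evaluate each derivative at 0, then attach the x^i
g2 = sum(sp.diff(f, x, i).subs(x, 0) / sp.factorial(i) * x**i for i in range(3))

# wrong way: leave the derivatives as functions of x and plug them straight in
h2 = sum(sp.diff(f, x, i) / sp.factorial(i) * x**i for i in range(3))

print(sp.expand(g2))   # 5*x**2 + 2*x + 7
print(sp.expand(h2))   # 7*x**3 + 20*x**2 + 4*x + 7, which doesn't even match f's slope at 0
```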

The Maclaurin Series: Approximations to f Near x = 0

If we let a Taylor polynomial keep going forever instead of cutting it off at a particular degree, we get a Taylor series. A Taylor series centered at 0 is also called a Maclaurin series. If you're asked "find the Maclaurin series for f(x)," this means the same thing as "find the Taylor series for f(x) near 0."

The formula for the Maclaurin series of f(x) is

f(0) + f^(1)(0)x + (f^(2)(0)/2!)x^2 + (f^(3)(0)/3!)x^3 + ...

because we take the formula for a Taylor polynomial centered at zero and let it keep on going. Forever.

It was important to graph the original function and the Taylor polynomial to make sure the answer looked okay. The same is true for Maclaurin series. When finding a Maclaurin series, graph the original function and the first few terms of the Maclaurin series and make sure the graph looks right.

A good result has the polynomial looking like the function close to 0. If the polynomial doesn't look like the function close to 0, that's not so good, and it's time to go back and check the work.
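
A numerical spot-check works just as well as a graph. Here's a minimal sympy sketch (e^x is our pick of example function; series and removeO are real sympy features) that compares a function with the first few terms of its Maclaurin series:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)

# first four terms of the Maclaurin series; removeO() drops the O(x**4) remainder
p = sp.series(f, x, 0, 4).removeO()   # 1 + x + x**2/2 + x**3/6

for val in (0.1, 0.5, 1.0):
    print(val, float(f.subs(x, val)), float(p.subs(x, val)))
```

The two columns should be close for small values of x; if they aren't, something went wrong with the series.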

The Taylor Series: Approximations to f Near x = a

In general, Taylor polynomials don't have to be centered at 0. We can pick any a we like and then approximate a function f for values of x near that a.

A Taylor polynomial g centered at a or near x = a is a polynomial that has the same value and shape as f at x = a. Visually, g will look like f near x = a. We started this section with a linear (tangent line) approximation to the function f(x) = √x near x = 4.

In the earlier part of this section we wanted f and g to look the same near 0; now we want them to look the same near a (which may or may not be 0).

A Taylor series for f centered at a is the infinite series we get if we let the Taylor polynomial just keep going.

We could walk through the construction of this next formula, but you will probably never be asked for it. If you really want to know where it comes from you can figure it out similarly to how we found the formula for a Taylor polynomial near x = 0.

The nth degree Taylor polynomial for f at a is

g_n(x) = f(a) + f^(1)(a)(x – a) + (f^(2)(a)/2!)(x – a)^2 + ... + (f^(n)(a)/n!)(x – a)^n

and the Taylor series for f at a is what we get if we let the Taylor polynomial just keep on going:

f(a) + f^(1)(a)(x – a) + (f^(2)(a)/2!)(x – a)^2 + (f^(3)(a)/3!)(x – a)^3 + ...

Only two things have changed from before:

(1) we have f^(n)(a) instead of f^(n)(0), and

(2) we have (x – a)^n instead of x^n.
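
To tie this back to where we started, here's a minimal sympy sketch (the helper taylor_at is ours) that builds Taylor polynomials for f(x) = √x centered at a = 4:

```python
import sympy as sp

x = sp.symbols('x')

def taylor_at(f, a, n):
    # nth-degree Taylor polynomial for f centered at a:
    # the sum of f^(i)(a)/i! * (x - a)^i for i = 0, 1, ..., n
    return sum(sp.diff(f, x, i).subs(x, a) / sp.factorial(i) * (x - a)**i
               for i in range(n + 1))

f = sp.sqrt(x)
p1 = taylor_at(f, 4, 1)               # the tangent line 2 + (x - 4)/4 from the first example
p2 = taylor_at(f, 4, 2)               # one more term: 2 + (x - 4)/4 - (x - 4)**2/64

print(p1.subs(x, 5), p2.subs(x, 5))   # 9/4 and 143/64
print(sp.sqrt(5).evalf())             # 2.2360679..., and 143/64 = 2.234375 is even closer than 2.25
```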