Narf the Mouse

A question about the Taylor series and polynomials


Narf the Mouse    322
In a previous thread, the Taylor series was suggested for solving polynomials. I've finally gotten around to trying it, only to run into a lot of problems. Problems which would make more sense given the context: the Taylor series was suggested after I posted a polynomial solver that finds polynomial roots, so I assumed the Taylor series also finds roots. However, if what I'm thinking is correct, the Taylor series approximates f(x) for a given x, which is very different from what I'm aiming for.

If that is correct and the Taylor series is not useful for my purposes (solving f(x) = 0, in other words, finding roots), what would be a good, fast way of finding them? I've already implemented Brent's method, which Wikipedia suggested. Thanks.

Emergent    982
I'm not sure what you're referring to. The Taylor series isn't a "method" and it doesn't "solve" for anything. It's just a way to approximate a function at a point using its derivatives.

AFAIK, Brent's method converges to a root of a polynomial if you start with an interval known to bracket it (i.e., the polynomial changes sign at the endpoints). Would you instead like to find all the roots, or to figure out which intervals to use?

If so, there's a very simple algorithm to find all the distinct real roots; I don't know its name, but IIRC Dave Eberly posted it a while back, and in essence it works like this: You repeatedly differentiate the polynomial until you've got a line, and then work your way back up...

I'll describe it in a second; first, however, I need to mention that the technique also needs a bound on the location of real roots. One is due to Cauchy. Call the polynomial 'p' and suppose it is of the form

p(x) = x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0

for real coefficients a_{n-1}, ..., a_0. Then Cauchy's bound states that all roots lie in the interval [-1 - A, 1 + A], where A is the largest absolute value of a coefficient. I'll also define B = 1+A, so I can just say that all the roots are in [-B,B].
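As a quick illustration, here's the bound in Python (the function name and coefficient layout are my own, not standard library API):

```python
def cauchy_bound(coeffs):
    """Return B such that all roots of the monic polynomial
    x^n + coeffs[n-1] x^(n-1) + ... + coeffs[1] x + coeffs[0]
    lie in [-B, B].  coeffs holds a_0 ... a_{n-1}, lowest degree
    first; the leading coefficient 1 is implicit."""
    return 1.0 + max(abs(a) for a in coeffs)

# p(x) = x^2 - x - 6 = (x + 2)(x - 3) has roots -2 and 3.
B = cauchy_bound([-6.0, -1.0])  # B = 7.0, and indeed -2, 3 are in [-7, 7]
```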

Now, on to the derivatives. Consider the sequence of polynomials

p_n = p
p_{n-1} = D p_n
...
p_1 = D p_2

where 'D' denotes differentiation. You can do this on a computer; it's just operations on the coefficients.
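For example, with the coefficients stored lowest degree first, differentiation is a one-line transform on the array (a Python sketch; the name is mine):

```python
def differentiate(coeffs):
    """Differentiate a polynomial given as [a_0, a_1, ..., a_n]
    (lowest degree first): d/dx of sum a_k x^k is sum k*a_k x^(k-1)."""
    return [k * a for k, a in enumerate(coeffs)][1:]

# p(x) = x^3 - 2x + 1  ->  p'(x) = 3x^2 - 2
d = differentiate([1.0, -2.0, 0.0, 1.0])  # [-2.0, 0.0, 3.0]
```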

Now, consider the polynomial of degree 1, a line, p_1: it has one root, which you can find analytically; call it r_1. Once you've found it, you know that p_2 is monotonic on [-B, r_1] and [r_1, B], and hence each of these intervals contains a real root if and only if p_2 differs in sign at its endpoints. If an interval does contain a root, you can reliably find it by e.g. the bisection method (the false position method and Brent's algorithm should also work here). (Note also that the endpoints themselves can happen to be roots.)

Now that you know the zeros of p_2, you know the intervals in which p_3 is monotonic. Hence you can bracket the roots of p_3 and find them reliably. Once you have the roots of p_3, you move up to p_4, and so on, until you've found all the zeros of p_n.
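Putting the cascade together, here's a rough Python sketch of the procedure as described above (function names, tolerances, and the simple handling of repeated roots are my own choices; treat it as an illustration, not production code):

```python
def real_roots(coeffs, eps=1e-12):
    """Distinct real roots of p(x) = sum coeffs[k] * x^k (degree >= 1,
    coefficients lowest degree first), via the derivative cascade.
    Repeated roots get only simple tolerance handling."""

    def evaluate(c, x):  # Horner's rule
        r = 0.0
        for a in reversed(c):
            r = r * x + a
        return r

    def derivative(c):
        return [k * a for k, a in enumerate(c)][1:]

    def bisect(c, lo, hi, flo):  # assumes a sign change on [lo, hi]
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            fmid = evaluate(c, mid)
            if abs(fmid) < eps:
                return mid
            if (flo < 0) != (fmid < 0):
                hi = mid
            else:
                lo, flo = mid, fmid
        return 0.5 * (lo + hi)

    # Cauchy's bound: all roots of p lie in [-B, B].  (By Gauss-Lucas,
    # the same B also bounds the roots of every derivative of p.)
    lead = coeffs[-1]
    B = 1.0 + max(abs(a / lead) for a in coeffs[:-1])

    # The chain p_n, p_{n-1} = D p_n, ..., down to the linear p_1.
    chain = [list(coeffs)]
    while len(chain[-1]) > 2:
        chain.append(derivative(chain[-1]))

    # Solve the line analytically, then walk back up the chain: the
    # roots of p_{k-1} cut [-B, B] into intervals on which p_k is
    # monotonic, so each interval holds at most one bracketable root.
    roots = [-chain[-1][0] / chain[-1][1]]
    for c in reversed(chain[:-1]):
        cuts = [-B] + sorted(r for r in roots if -B < r < B) + [B]
        roots = []
        for lo, hi in zip(cuts, cuts[1:]):
            flo, fhi = evaluate(c, lo), evaluate(c, hi)
            if abs(flo) < eps:
                roots.append(lo)  # the endpoint is itself a root
            elif abs(fhi) < eps:
                pass              # hi is the next interval's lo, or B (below)
            elif (flo < 0) != (fhi < 0):
                roots.append(bisect(c, lo, hi, flo))  # exactly one root here
        if abs(evaluate(c, B)) < eps:
            roots.append(B)
        roots = sorted(roots)
    return roots
```

For instance, x^2 - x - 6 = (x + 2)(x - 3) gives real_roots([-6.0, -1.0, 1.0]) ≈ [-2.0, 3.0].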

Narf the Mouse    322
Perhaps I worded things a bit less than clearly. So, in response, "Thereby solving that function (To some degree of precision) for input X".

Thank you for that algorithm. It looks like a much more evolved version of the algorithm I have and should result in a considerable improvement.

Not sure where you've got the word "method" from.

(Edited to alter questionable grammar. Not in a good mood)

Emergent    982
Quote:
Original post by Narf the Mouse
Perhaps I worded things a bit less than clearly. So, in response, "Thereby solving that function (To some degree of precision) for input X".


Yep, got that.


Quote:
(Edited to alter questionable grammar. Not in a good mood)


Missed that. Anyway, good luck.

Narf the Mouse    322
Thanks - My eventual goal with this is calculating things like course matches for spaceships, missiles and such; I'm building libraries towards game making, as a hobby.

A thought - a modified Runge-Kutta (modified towards picking the lowest number, perhaps with divide-by-average/multiply-by-average first and last steps) could probably converge rather fast per step, but would it be any faster than just doing four bisections?

