Newton polynomial

Newton polynomials (named after their inventor, Isaac Newton) are polynomials used for polynomial interpolation. Rather than solving the huge Vandermonde matrix equation obtained in polynomial interpolation by Gaussian elimination, we notice that we can do clever tricks by writing the polynomial in a different way. Given a data set:

:<math>(t_0,y_0), (t_1,y_1), \ldots, (t_n,y_n)</math>

where no two <math>t_i</math> are the same, we assume the <math>y_i</math> are values of a function <math>f</math> at the points <math>t_i</math>, that is, <math>y_i = f(t_i)</math>. From the uniqueness of the interpolating polynomial (see [[polynomial interpolation]]) we know that there exists a unique polynomial of degree at most <math>n</math> that passes through all these points, and we write it thus:

:<math>\begin{matrix} P(t) &=& c_0 + c_1(t-t_0) + c_2(t-t_0)(t-t_1) + \\ & & \cdots + c_n(t-t_0)(t-t_1) \cdots (t-t_{n-1}) \end{matrix}</math>

or, more formally:

:<math>P(t) = \sum_{j=0}^{n} c_j \prod_{i=0}^{j-1} (t-t_i)</math>

where the empty product for <math>j=0</math> is taken to be 1.
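
The Newton form is cheap to evaluate with a running product. The following is a minimal illustrative sketch in Python (the function name <code>newton_eval</code> and the code itself are ours, not part of the article), assuming the coefficients <math>c_j</math> are already known:

 # Illustrative sketch: evaluate P(t) = sum_j c_j * prod_{i<j} (t - t_i).
 def newton_eval(c, t_nodes, t):
     """Evaluate the Newton-form polynomial at t, given coefficients
     c[0..n] and interpolation nodes t_nodes[0..n]."""
     result = 0.0
     basis = 1.0  # running product (t - t_0)...(t - t_{j-1}); empty product is 1
     for j in range(len(c)):
         result += c[j] * basis
         basis *= t - t_nodes[j]
     return result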

We notice that:

:<math>\begin{matrix} P(t_0) &=& c_0 &=& y_0 \\ P(t_1) &=& c_0 + c_1(t_1-t_0) &=& y_1 \\ P(t_2) &=& c_0 + c_1(t_2-t_0) + c_2(t_2-t_0)(t_2-t_1) &=& y_2 \\ & \vdots & \end{matrix}</math>

And the equation system may be written in matrix form:

:<math>\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & t_1-t_0 & 0 & \cdots & 0 \\ 1 & t_2-t_0 & (t_2-t_0)(t_2-t_1) & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & t_n-t_0 & (t_n-t_0)(t_n-t_1) & \cdots & \prod_{i=0}^{n-1}(t_n-t_i) \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}</math>

And the solution giving the coefficients, by forward substitution, is thus:

:<math>\begin{matrix} c_0 &=& y_0 \\ c_1 &=& \frac{y_1-y_0}{t_1-t_0} \\ c_2 &=& \frac{\frac{y_2-y_1}{t_2-t_1}-\frac{y_1-y_0}{t_1-t_0}}{t_2-t_0} \\ & \vdots & \end{matrix}</math>

and these expressions quickly grow to unrealistic proportions. Thus we need to find a better way to retrieve the coefficients <math>c_k</math>. It turns out there exists a recursive formula for doing this in an efficient way.
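
To make the triangular structure concrete, here is a short sketch (again ours, not from the article) that solves the system above by forward substitution, which is exactly what the closed-form expressions above unfold into:

 # Illustrative sketch: solve the lower-triangular system above by
 # forward substitution.  Row i reads
 #   y_i = sum over j <= i of c_j * (t_i - t_0) ... (t_i - t_{j-1})
 def newton_coeffs_by_substitution(t, y):
     c = []
     for i in range(len(t)):
         acc = y[i]
         basis = 1.0  # running product (t_i - t_0)...(t_i - t_{j-1})
         for j in range(i):
             acc -= c[j] * basis
             basis *= t[i] - t[j]
         c.append(acc / basis)  # basis is now prod over k < i of (t_i - t_k)
     return c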

Divided Differences

We see that the polynomial <math>P_k</math> interpolating the first <math>k+1</math> data points may be defined recursively, thus:

:<math>\begin{matrix} P_0(t) &=& c_0 \\ P_k(t) &=& P_{k-1}(t) + c_k (t-t_0)(t-t_1) \cdots (t-t_{k-1}) \end{matrix}</math>

so that <math>P_k</math> interpolates the function in the points <math>t_0, t_1, \ldots, t_k</math>. The coefficients depend only on the obtained function values of <math>f</math> (our <math>y_i</math>): <math>c_0</math> depends only on <math>y_0</math>, <math>c_1</math> only on <math>y_0</math> and <math>y_1</math>, and in general <math>c_k</math> only on <math>y_0, y_1, \ldots, y_k</math>. We therefore define a notation for the divided differences:

:<math>c_k = [y_0, y_1, \ldots, y_k]</math>

This gives us the formal recursive definition of the divided differences:

:<math>\begin{matrix} \left[ y_i \right] &=& y_i \\ \left[ y_i, \ldots, y_{i+k} \right] &=& \frac{\left[ y_{i+1}, \ldots, y_{i+k} \right] - \left[ y_i, \ldots, y_{i+k-1} \right]}{t_{i+k} - t_i} \end{matrix}</math>

So that, for example:

:<math>[y_0, y_1] = \frac{y_1 - y_0}{t_1 - t_0}</math>

:<math>[y_0, y_1, y_2] = \frac{[y_1, y_2] - [y_0, y_1]}{t_2 - t_0}</math>
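
The recursive definition translates directly into code. A minimal sketch of ours (with hypothetical names; note this naive recursion recomputes subproblems, which the tabular form below avoids):

 # Illustrative sketch of the recursive definition:
 # dd(t, y, i, k) computes the divided difference [y_i, ..., y_{i+k}].
 def dd(t, y, i, k):
     if k == 0:
         return y[i]  # base case: [y_i] = y_i
     return (dd(t, y, i + 1, k - 1) - dd(t, y, i, k - 1)) / (t[i + k] - t[i])

 # For example, dd(t, y, 0, 1) is [y_0, y_1] = (y_1 - y_0) / (t_1 - t_0).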

These are not easily grasped when put like this, but once the divided differences are arranged in tabular form, things look simpler. Here, for example, is the table for a data set with <math>n = 3</math> (and we know that <math>y_i = f(t_i)</math> for all <math>i</math>):

:<math>\begin{matrix} t_0 & y_0 & & & \\ & & [y_0,y_1] & & \\ t_1 & y_1 & & [y_0,y_1,y_2] & \\ & & [y_1,y_2] & & [y_0,y_1,y_2,y_3] \\ t_2 & y_2 & & [y_1,y_2,y_3] & \\ & & [y_2,y_3] & & \\ t_3 & y_3 & & & \end{matrix}</math>

Along the top diagonal of this table you will find the coefficients, as <math>c_k = [y_0, y_1, \ldots, y_k]</math>. Insert these into the formula at the top and you have your unique interpolation polynomial:

:<math>P(t) = c_0 + c_1(t-t_0) + c_2(t-t_0)(t-t_1) + c_3(t-t_0)(t-t_1)(t-t_2)</math>

expanding into:

:<math>P(t) = [y_0] + [y_0,y_1](t-t_0) + [y_0,y_1,y_2](t-t_0)(t-t_1) + [y_0,y_1,y_2,y_3](t-t_0)(t-t_1)(t-t_2)</math>

This is usually called the Newton form of the interpolation polynomial.
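
To make the tabular scheme concrete, here is a short sketch (our own illustration, not part of the article) that builds the table column by column and reads the coefficients <math>c_k = [y_0, \ldots, y_k]</math> off its top diagonal:

 # Illustrative sketch: build the divided-difference table column by column.
 # Column k holds [y_i, ..., y_{i+k}] for i = 0 .. n-k.
 def divided_difference_table(t, y):
     n = len(t)
     table = [list(y)]  # column 0: [y_i] = y_i
     for k in range(1, n):
         prev = table[k - 1]
         col = [(prev[i + 1] - prev[i]) / (t[i + k] - t[i])
                for i in range(n - k)]
         table.append(col)
     # The top entry of each column is a coefficient c_k = [y_0, ..., y_k].
     coeffs = [table[k][0] for k in range(n)]
     return table, coeffs

Each entry is computed from the two entries to its left in the written table, so the whole scheme costs <math>O(n^2)</math> operations.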

Example

to be written
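
As an illustrative stand-in (arbitrary sample data, not from the article, reusing the sketches from the earlier sections):

 # Arbitrary illustrative data: these points lie on the line y = 3t + 3.
 t = [1.0, 2.0, 4.0, 5.0]
 y = [6.0, 9.0, 15.0, 18.0]

 table, c = divided_difference_table(t, y)
 print(c)                       # [6.0, 3.0, 0.0, 0.0]: higher differences vanish
 print(newton_eval(c, t, 3.0))  # 12.0, matching 3*3 + 3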

See also