Newton polynomials (named after their inventor Isaac Newton) are polynomials used for polynomial interpolation. Rather than solving the huge Vandermonde matrix equation that arises in polynomial interpolation by Gaussian elimination, we notice that we can do clever tricks by writing the polynomial in a different way. Given a data set:

:<math>(t_0,y_0), (t_1,y_1), \ldots, (t_n,y_n) \,</math>

where no two <math>t_i</math> are the same, we assume the <math>y_i</math> are the values of a function <math>f</math> at the corresponding points, <math>y_i = f(t_i)</math>. We know from the uniqueness of polynomial interpolation that there exists a unique polynomial of degree at most <math>n</math> that passes through all these points, and we write it thus:

:<math>\begin{matrix} P(t) &=& c_0 + c_1(t-t_0) + c_2(t-t_0)(t-t_1) + \\ & & \cdots + c_n(t-t_0)(t-t_1) \cdots (t-t_{n-1}) \end{matrix}</math>

or more formally:

:<math>P(t) = \sum_{i=0}^{n} c_i \prod_{j=0}^{i-1} (t-t_j) \,</math>

where the empty product for <math>i=0</math> is taken to be 1. We notice that:

:<math>P(t_0) = c_0, \quad P(t_1) = c_0 + c_1(t_1-t_0), \quad P(t_2) = c_0 + c_1(t_2-t_0) + c_2(t_2-t_0)(t_2-t_1), \quad \ldots</math>

and the equation system <math>P(t_i) = y_i</math> may be written:

:<math>\begin{pmatrix} 1 & & & \\ 1 & t_1-t_0 & & \\ \vdots & \vdots & \ddots & \\ 1 & t_n-t_0 & \cdots & (t_n-t_0)(t_n-t_1) \cdots (t_n-t_{n-1}) \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix}</math>

And the solution giving the coefficients <math>c_i</math> is obtained by forward substitution:

:<math>c_0 = y_0, \quad c_1 = \frac{y_1 - y_0}{t_1 - t_0}, \quad \ldots</math>

but written out explicitly this equation system will quickly grow to unrealistic proportions. Thus we need to find a better way to retrieve the coefficients <math>c_i</math>. It turns out there exists a recursive formula for doing this in an efficient way.
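Before moving on, the triangular system above can already be solved mechanically by forward substitution. As an illustration (not part of the original text; the function name is our own), a minimal Python sketch:

```python
# Solve the lower-triangular Newton system by forward substitution:
# y_i = sum_{k<=i} c_k * prod_{j<k} (t_i - t_j), solved for c_i row by row.
def newton_coeffs_triangular(t, y):
    c = []
    for i in range(len(t)):
        acc = 0.0    # value at t_i of the terms already determined
        basis = 1.0  # running product prod_{j<k} (t_i - t_j)
        for k in range(i):
            acc += c[k] * basis
            basis *= t[i] - t[k]
        c.append((y[i] - acc) / basis)
    return c
```

For the three points (0, 1), (1, 3), (2, 7) this yields the coefficients of <math>1 + 2t + t(t-1)</math>.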
== Divided Differences ==
We see that the polynomial <math>P_k(t)</math> for a certain <math>k</math> may be defined recursively, thus:

:<math>P_0(t) = c_0 \,</math>

:<math>P_k(t) = P_{k-1}(t) + c_k(t-t_0)(t-t_1) \cdots (t-t_{k-1}) \,</math>

so that <math>P_k</math> interpolates the function <math>f</math> in the points <math>t_0, t_1, \ldots, t_k</math>. The coefficients depend only on the obtained function values of <math>f</math> (our <math>y_i</math>), so it is natural to note that <math>c_0</math> depends only on <math>f(t_0)</math>, <math>c_1</math> only on <math>f(t_0)</math> and <math>f(t_1)</math>, and in general <math>c_k</math> only on <math>f(t_0), \ldots, f(t_k)</math>, and we define a notation for the divided differences:

:<math>c_k = f[t_0, t_1, t_2, \ldots, t_k] \,</math>

This gives us the formal definition of the divided differences:

:<math>f[t_i] = f(t_i) = f_i = y_i \,</math>

:<math>f[t_i, t_{i+1}, \ldots, t_{i+k}] = \frac{f[t_{i+1}, t_{i+2}, \ldots, t_{i+k}] - f[t_i, t_{i+1}, \ldots, t_{i+k-1}]}{t_{i+k} - t_i} \,</math>

So that for example:

:<math>f[t_0, t_1] = \frac{f_1 - f_0}{t_1 - t_0}</math>

:<math>f[t_0, t_1, t_2] = \frac{f[t_1, t_2] - f[t_0, t_1]}{t_2 - t_0}</math>
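The recursive definition translates directly into code that fills the difference table one column at a time. A minimal Python sketch (not from the original text; 0-based indices, and the function name is our own):

```python
# Build the divided-difference table column by column:
# f[t_i,...,t_{i+k}] = (f[t_{i+1},...,t_{i+k}] - f[t_i,...,t_{i+k-1}]) / (t_{i+k} - t_i)
def divided_differences(t, y):
    table = [list(y)]  # column k holds all f[t_i, ..., t_{i+k}]
    for k in range(1, len(t)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (t[i + k] - t[i])
                      for i in range(len(t) - k)])
    # The Newton coefficients c_k = f[t_0, ..., t_k] head each column.
    return [col[0] for col in table]
```

Each column needs one entry fewer than the one before it, so the whole table costs <math>O(n^2)</math> divisions rather than solving a linear system from scratch.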
These formulas are not easily grasped when put like this, but once they are arranged in tabular form things look simpler. Here, for example, is the table for a data set of five points (and we know that <math>f_i = f(t_i)</math> for all <math>i</math>):

{|
|-
| <math>t_0 \,</math> || <math>f_0 \,</math> || || || ||
|-
| <math>t_1 \,</math> || <math>f_1 \,</math> || <math>f[t_0,t_1] \,</math> || || ||
|-
| <math>t_2 \,</math> || <math>f_2 \,</math> || <math>f[t_1,t_2] \,</math> || <math>f[t_0,t_1,t_2] \,</math> || ||
|-
| <math>t_3 \,</math> || <math>f_3 \,</math> || <math>f[t_2,t_3] \,</math> || <math>f[t_1,t_2,t_3] \,</math> || <math>f[t_0,t_1,t_2,t_3] \,</math> ||
|-
| <math>t_4 \,</math> || <math>f_4 \,</math> || <math>f[t_3,t_4] \,</math> || <math>f[t_2,t_3,t_4] \,</math> || <math>f[t_1,t_2,t_3,t_4] \,</math> || <math>f[t_0,t_1,t_2,t_3,t_4] \,</math>
|}

On the diagonal of this table you will find the coefficients, as <math>c_k = f[t_0, t_1, \ldots, t_k] \,</math>. Insert these into the formula at the top and you have your unique interpolation polynomial:
:<math>P_n(t) = \sum_{i=0}^{n} f[t_0, t_1, \ldots, t_i] \prod_{j=0}^{i-1} (t - t_j)</math>

expanding into:

:<math>P_n(t) = f[t_0] + f[t_0,t_1](t-t_0) + \cdots + f[t_0,\ldots,t_n](t-t_0)(t-t_1) \cdots (t-t_{n-1}) \,</math>

This is usually called the Newton form of the interpolation polynomial.
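Once the coefficients are known, the Newton form can be evaluated cheaply with nested multiplication, analogous to Horner's scheme. A minimal Python sketch (not from the original text; the function name is our own, and `knots` are the interpolation points <math>t_i</math>):

```python
# Evaluate P(t) = c_0 + c_1 (t - t_0) + ... + c_n (t - t_0) ... (t - t_{n-1})
# by nested multiplication (Horner-like), from the innermost term outward.
def newton_eval(c, knots, t):
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (t - knots[k]) + c[k]
    return result
```

Note that only <math>t_0, \ldots, t_{n-1}</math> are actually used; each evaluation costs <math>n</math> multiplications and <math>2n</math> additions.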
== Example ==
- to be written
== See Also ==