In mathematics, divided differences is an algorithm historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.[1]
Divided differences is a recursive division process. Given a sequence of data points [math]\displaystyle{ (x_0, y_0), \ldots, (x_{n}, y_{n}) }[/math], the method calculates the coefficients of the interpolation polynomial of these points in the Newton form.
Given n + 1 data points [math]\displaystyle{ (x_0, y_0),\ldots,(x_{n}, y_{n}) }[/math] where the [math]\displaystyle{ x_k }[/math] are assumed to be pairwise distinct, the forward divided differences are defined as: [math]\displaystyle{ \begin{align} \mathopen[y_k] &:= y_k, && k \in \{ 0,\ldots,n\} \\ \mathopen[y_k,\ldots,y_{k+j}] &:= \frac{[y_{k+1},\ldots , y_{k+j}] - [y_{k},\ldots , y_{k+j-1}]}{x_{k+j}-x_k}, && k\in\{0,\ldots,n-j\},\ j\in\{1,\ldots,n\}. \end{align} }[/math]
To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of j above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding x-values: [math]\displaystyle{ \begin{matrix} x_0 & y_0 = [y_0] & & & \\ & & [y_0,y_1] & & \\ x_1 & y_1 = [y_1] & & [y_0,y_1,y_2] & \\ & & [y_1,y_2] & & [y_0,y_1,y_2,y_3]\\ x_2 & y_2 = [y_2] & & [y_1,y_2,y_3] & \\ & & [y_2,y_3] & & \\ x_3 & y_3 = [y_3] & & & \\ \end{matrix} }[/math]
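As a concrete illustration of this recursion, here is a minimal Python sketch that fills in the table column by column. The function name `divided_differences` and the list-of-lists layout are choices of this example, not a standard interface.

```python
def divided_differences(xs, ys):
    """Return the table of divided differences, column by column:
    table[j][k] holds [y_k, ..., y_{k+j}]."""
    n = len(xs) - 1
    table = [list(ys)]  # column j = 0: [y_k] = y_k
    for j in range(1, n + 1):
        prev = table[j - 1]
        # Each entry is the difference of the two neighbouring entries in
        # the previous column, divided by the span of x-values it covers.
        col = [(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
               for k in range(n - j + 1)]
        table.append(col)
    return table
```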
Note that the divided difference [math]\displaystyle{ [y_k,\ldots,y_{k+j}] }[/math] depends on the values [math]\displaystyle{ x_k,\ldots,x_{k+j} }[/math] and [math]\displaystyle{ y_k,\ldots,y_{k+j} }[/math], but the notation hides the dependency on the x-values. If the data points are given by a function f, [math]\displaystyle{ (x_0, y_0), \ldots, (x_n, y_n) =(x_0, f(x_0)), \ldots, (x_n, f(x_n)), }[/math] one sometimes writes the divided difference in the notation [math]\displaystyle{ f[x_k,\ldots,x_{k+j}] \ \stackrel{\text{def}}= \ [f(x_k),\ldots,f(x_{k+j})] = [y_k,\ldots,y_{k+j}]. }[/math] Other notations for the divided difference of the function f on the nodes [math]\displaystyle{ x_0, \ldots, x_n }[/math] are: [math]\displaystyle{ f[x_0,\ldots,x_n]=\mathopen[x_0,\ldots,x_n]f= \mathopen[x_0,\ldots,x_n;f]= D[x_0,\ldots,x_n]f. }[/math]
Divided differences for [math]\displaystyle{ k=0 }[/math] and the first few values of [math]\displaystyle{ j }[/math]: [math]\displaystyle{ \begin{align} \mathopen[y_0] &= y_0 \\ \mathopen[y_0,y_1] &= \frac{y_1-y_0}{x_1-x_0} \\ \mathopen[y_0,y_1,y_2] &= \frac{\mathopen[y_1,y_2]-\mathopen[y_0,y_1]}{x_2-x_0} = \frac{\frac{y_2-y_1}{x_2-x_1}-\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0} = \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)} \\ \mathopen[y_0,y_1,y_2,y_3] &= \frac{\mathopen[y_1,y_2,y_3]-\mathopen[y_0,y_1,y_2]}{x_3-x_0} \end{align} }[/math]
Thus, the table corresponding to these terms, up to two columns, has the following form: [math]\displaystyle{ \begin{matrix} x_0 & y_{0} & & \\ & & {y_{1}-y_{0}\over x_1 - x_0} & \\ x_1 & y_{1} & & {{y_{2}-y_{1}\over x_2 - x_1}-{y_{1}-y_{0}\over x_1 - x_0} \over x_2 - x_0} \\ & & {y_{2}-y_{1}\over x_2 - x_1} & \\ x_2 & y_{2} & & \vdots \\ & & \vdots & \\ \vdots & & & \vdots \\ & & \vdots & \\ x_n & y_{n} & & \\ \end{matrix} }[/math]
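For example, reusing the sketch above on the four points (0, 1), (1, 2), (2, 5), (3, 16):

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 16.0]
table = divided_differences(xs, ys)
# The top entry of each column is a coefficient of the Newton form:
# [y_0], [y_0,y_1], [y_0,y_1,y_2], [y_0,y_1,y_2,y_3]
print([col[0] for col in table])  # [1.0, 1.0, 1.0, 1.0]
```

These are the coefficients of the Newton form [math]\displaystyle{ p(x) = 1 + x + x(x-1) + x(x-1)(x-2) }[/math], which indeed interpolates the four points.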
The divided difference scheme can be put into an upper triangular matrix: [math]\displaystyle{ T_f(x_0,\dots,x_n)= \begin{pmatrix} f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\ 0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\ 0 & 0 & f[x_2] & \ldots & f[x_2,\dots,x_n] \\ \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & f[x_n] \end{pmatrix}. }[/math]
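The matrix [math]\displaystyle{ T_f }[/math] can be assembled directly from the table computed earlier; the helper below (a hypothetical name local to this example) places column [math]\displaystyle{ j }[/math] of the table on the [math]\displaystyle{ j }[/math]-th superdiagonal.

```python
import numpy as np

def divided_difference_matrix(xs, ys):
    """Entry (k, k + j) = [y_k, ..., y_{k+j}]: column j of the divided
    difference table goes on the j-th superdiagonal."""
    n = len(xs) - 1
    table = divided_differences(xs, ys)
    T = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        for k in range(n + 1 - j):
            T[k, k + j] = table[j][k]
    return T
```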
For these matrices the following holds: [math]\displaystyle{ \begin{align} T_{f+g}(x) &= T_f(x) + T_g(x), \\ T_{\lambda f}(x) &= \lambda\, T_f(x), \\ T_{f \cdot g}(x) &= T_f(x) \cdot T_g(x). \end{align} }[/math] The last identity encodes the Leibniz rule for divided differences; reading off the top-right entry of the matrix product gives [math]\displaystyle{ (f\cdot g)[x_0,\dots,x_n] = \sum_{r=0}^{n} f[x_0,\dots,x_r] \cdot g[x_r,\dots,x_n]. }[/math]
The matrix [math]\displaystyle{ J = \begin{pmatrix} x_0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & x_1 & 1 & 0 & \cdots & 0 \\ 0 & 0 & x_2 & 1 & & 0 \\ \vdots & \vdots & & \ddots & \ddots & \\ 0 & 0 & 0 & 0 & \; \ddots & 1\\ 0 & 0 & 0 & 0 & & x_n \end{pmatrix} }[/math] contains the divided difference scheme for the identity function with respect to the nodes [math]\displaystyle{ x_0,\dots,x_n }[/math], thus [math]\displaystyle{ J^m }[/math] contains the divided differences for the power function with exponent [math]\displaystyle{ m }[/math]. Consequently, you can obtain the divided differences for a polynomial function [math]\displaystyle{ p }[/math] by applying [math]\displaystyle{ p }[/math] to the matrix [math]\displaystyle{ J }[/math]: If [math]\displaystyle{ p(\xi) = a_0 + a_1 \cdot \xi + \dots + a_m \cdot \xi^m }[/math] and [math]\displaystyle{ p(J) = a_0 + a_1\cdot J + \dots + a_m\cdot J^m }[/math] then [math]\displaystyle{ T_p(x) = p(J). }[/math] This is known as Opitz' formula.[2][3]
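A quick numerical check of Opitz' formula, using the same assumed helpers as above: build [math]\displaystyle{ J }[/math], evaluate a polynomial at it as a matrix polynomial, and compare with the divided difference scheme.

```python
import numpy as np

xs = [0.0, 1.0, 2.0, 3.0]
# J: nodes on the diagonal, ones on the first superdiagonal.
J = np.diag(xs) + np.diag(np.ones(len(xs) - 1), k=1)

# Apply p(xi) = 2 + 3*xi + xi**3 to J as a matrix polynomial.
pJ = 2 * np.eye(len(xs)) + 3 * J + np.linalg.matrix_power(J, 3)

ys = [2 + 3 * x + x**3 for x in xs]
assert np.allclose(pJ, divided_difference_matrix(xs, ys))
```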
Now consider increasing the degree of [math]\displaystyle{ p }[/math] to infinity, i.e. turn the Taylor polynomial into a Taylor series. Let [math]\displaystyle{ f }[/math] be a function which corresponds to a power series. You can compute the divided difference scheme for [math]\displaystyle{ f }[/math] by applying the corresponding matrix series to [math]\displaystyle{ J }[/math]: If [math]\displaystyle{ f(\xi) = \sum_{k=0}^\infty a_k \xi^k }[/math] and [math]\displaystyle{ f(J)=\sum_{k=0}^\infty a_k J^k }[/math] then [math]\displaystyle{ T_f(x)=f(J). }[/math]
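For instance, since [math]\displaystyle{ \exp(J) }[/math] is the matrix series [math]\displaystyle{ \sum_k J^k/k! }[/math], its top row should reproduce the divided differences [math]\displaystyle{ f[x_0], f[x_0,x_1], \ldots }[/math] of the exponential function. A sketch using SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

xs = [0.0, 0.5, 1.0, 1.5]
J = np.diag(xs) + np.diag(np.ones(len(xs) - 1), k=1)

top_row = expm(J)[0]  # f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]
table = divided_differences(xs, np.exp(xs).tolist())
assert np.allclose(top_row, [col[0] for col in table])
```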
Expanding the recursion term by term yields an explicit expanded form of the divided differences as a weighted sum of the function values: [math]\displaystyle{ \begin{align} f[x_0] &= f(x_0) \\ f[x_0,x_1] &= \frac{f(x_0)}{(x_0-x_1)} + \frac{f(x_1)}{(x_1-x_0)} \\ f[x_0,x_1,x_2] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)} \\ f[x_0,x_1,x_2,x_3] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)\cdot(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)\cdot(x_1-x_3)} +\\ &\quad\quad \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)\cdot(x_2-x_3)} + \frac{f(x_3)}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\ f[x_0,\dots,x_n] &= \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j-x_k)} \end{align} }[/math]
With the help of the polynomial function [math]\displaystyle{ \omega(\xi) = (\xi-x_0) \cdots (\xi-x_n) }[/math] this can be written as [math]\displaystyle{ f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}. }[/math]
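The expanded form gives a direct, non-recursive way to evaluate a single divided difference. A minimal sketch (function names are this example's own), checked against the recursive table from earlier:

```python
import math

def divided_difference_expanded(xs, ys):
    """[y_0, ..., y_n] as the weighted sum  sum_j f(x_j) / omega'(x_j)."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        # omega'(x_j) = product over k != j of (x_j - x_k)
        omega_prime = math.prod(xj - xk for k, xk in enumerate(xs) if k != j)
        total += yj / omega_prime
    return total

# Matches the top-right entry [y_0,...,y_3] = 1 computed above.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 16.0]
assert abs(divided_difference_expanded(xs, ys) - 1.0) < 1e-12
```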
If [math]\displaystyle{ x_0\lt x_1\lt \cdots\lt x_n }[/math] and [math]\displaystyle{ n\geq 1 }[/math], the divided differences can be expressed as[4] [math]\displaystyle{ f[x_0,\ldots,x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\;B_{n-1}(t) \, dt }[/math] where [math]\displaystyle{ f^{(n)} }[/math] is the [math]\displaystyle{ n }[/math]-th derivative of the function [math]\displaystyle{ f }[/math] and [math]\displaystyle{ B_{n-1} }[/math] is a certain B-spline of degree [math]\displaystyle{ n-1 }[/math] for the data points [math]\displaystyle{ x_0,\dots,x_n }[/math], given by the formula [math]\displaystyle{ B_{n-1}(t) = \sum_{k=0}^n \frac{(\max(0,x_k-t))^{n-1}}{\omega'(x_k)}. }[/math]
This is a consequence of the Peano kernel theorem; it is called the Peano form of the divided differences and [math]\displaystyle{ B_{n-1} }[/math] is the Peano kernel for the divided differences, all named after Giuseppe Peano.
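The Peano form can be checked numerically. In the sketch below (all names local to this example) we take [math]\displaystyle{ f(t) = t^3 }[/math] on four nodes, so [math]\displaystyle{ n = 3 }[/math], [math]\displaystyle{ f^{(3)} \equiv 6 }[/math], and the divided difference must equal the leading coefficient 1:

```python
import math
from scipy.integrate import quad

xs = [0.0, 1.0, 2.0, 3.0]
n = len(xs) - 1  # n = 3 for f(t) = t**3, so f''' = 6

def omega_prime(j):
    return math.prod(xs[j] - xs[k] for k in range(len(xs)) if k != j)

def B(t):
    # Peano kernel: the B-spline of degree n - 1 from the formula above
    return sum(max(0.0, xs[k] - t) ** (n - 1) / omega_prime(k)
               for k in range(len(xs)))

# points=xs tells quad about the spline knots, where B has kinks.
integral, _ = quad(lambda t: 6.0 * B(t), xs[0], xs[-1], points=xs)
assert abs(integral / math.factorial(n - 1) - 1.0) < 1e-8
```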
When the data points are equally spaced we obtain the special case called forward differences. They are easier to calculate than the more general divided differences.
Given n + 1 data points [math]\displaystyle{ (x_0, y_0), \ldots, (x_n, y_n) }[/math] with [math]\displaystyle{ x_{k} = x_0 + k h,\ \text{ for } \ k=0,\ldots,n \text{ and fixed } h\gt 0, }[/math] the forward differences are defined as [math]\displaystyle{ \begin{align} \Delta^{(0)} y_k &:= y_k,\qquad k=0,\ldots,n \\ \Delta^{(j)}y_k &:= \Delta^{(j-1)}y_{k+1} - \Delta^{(j-1)}y_k,\qquad k=0,\ldots,n-j,\ j=1,\dots,n, \end{align} }[/math] whereas the backward differences are defined as [math]\displaystyle{ \begin{align} \nabla^{(0)} y_k &:= y_k,\qquad k=0,\ldots,n \\ \nabla^{(j)}y_k &:= \nabla^{(j-1)}y_{k} - \nabla^{(j-1)}y_{k-1},\qquad k=j,\ldots,n,\ j=1,\dots,n. \end{align} }[/math] Thus the forward difference table is written as: [math]\displaystyle{ \begin{matrix} y_0 & & & \\ & \Delta y_0 & & \\ y_1 & & \Delta^2 y_0 & \\ & \Delta y_1 & & \Delta^3 y_0\\ y_2 & & \Delta^2 y_1 & \\ & \Delta y_2 & & \\ y_3 & & & \\ \end{matrix} }[/math] whereas the backward difference table is written as: [math]\displaystyle{ \begin{matrix} y_0 & & & \\ & \nabla y_1 & & \\ y_1 & & \nabla^2 y_2 & \\ & \nabla y_2 & & \nabla^3 y_3\\ y_2 & & \nabla^2 y_3 & \\ & \nabla y_3 & & \\ y_3 & & & \\ \end{matrix} }[/math]
The relationship between divided differences and forward differences is[5] [math]\displaystyle{ [y_j, y_{j+1}, \ldots , y_{j+k}] = \frac{1}{k!h^k}\Delta^{(k)}y_j, }[/math] whereas for backward differences: [math]\displaystyle{ [{y}_{j}, y_{j-1},\ldots,{y}_{j-k}] = \frac{1}{k!h^k}\nabla^{(k)}y_j. }[/math]
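A short sketch verifying the forward difference relation, again with example-local names and reusing `divided_differences` from earlier:

```python
import math

def forward_differences(ys):
    """cols[j][k] holds the j-th forward difference Δ^(j) y_k."""
    cols = [list(ys)]
    for _ in range(1, len(ys)):
        prev = cols[-1]
        cols.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
    return cols

h = 0.5
xs = [1.0 + k * h for k in range(4)]
ys = [x**2 for x in xs]
fd = forward_differences(ys)
dd = divided_differences(xs, ys)  # recursive scheme from earlier
# [y_0,...,y_k] should equal Δ^(k) y_0 / (k! h^k) for every k.
for k in range(4):
    assert math.isclose(dd[k][0], fd[k][0] / (math.factorial(k) * h**k))
```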
Original source: https://en.wikipedia.org/wiki/Divided_differences.