In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials.[1][2] The method was published by Charles William Clenshaw in 1955. It is a generalization of Horner's method for evaluating a linear combination of monomials. It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.[3]
In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions [math]\displaystyle{ \phi_k(x) }[/math]:
[math]\displaystyle{ S(x) = \sum_{k=0}^n a_k \phi_k(x), }[/math]
where [math]\displaystyle{ \phi_k,\; k=0, 1, \ldots }[/math] is a sequence of functions that satisfy the linear recurrence relation
[math]\displaystyle{ \phi_{k+1}(x) = \alpha_k(x)\,\phi_k(x) + \beta_k(x)\,\phi_{k-1}(x), }[/math]
where the coefficients [math]\displaystyle{ \alpha_k(x) }[/math] and [math]\displaystyle{ \beta_k(x) }[/math] are known in advance.
The algorithm is most useful when [math]\displaystyle{ \phi_k(x) }[/math] are functions that are complicated to compute directly, but [math]\displaystyle{ \alpha_k(x) }[/math] and [math]\displaystyle{ \beta_k(x) }[/math] are particularly simple. In the most common applications, [math]\displaystyle{ \alpha(x) }[/math] does not depend on [math]\displaystyle{ k }[/math], and [math]\displaystyle{ \beta }[/math] is a constant that depends on neither [math]\displaystyle{ x }[/math] nor [math]\displaystyle{ k }[/math].
To perform the summation for a given series of coefficients [math]\displaystyle{ a_0, \ldots, a_n }[/math], compute the values [math]\displaystyle{ b_k(x) }[/math] by the "reverse" recurrence formula:
[math]\displaystyle{ \begin{align} b_{n+1}(x) &= b_{n+2}(x) = 0, \\ b_k(x) &= a_k + \alpha_k(x)\,b_{k+1}(x) + \beta_{k+1}(x)\,b_{k+2}(x), \qquad k = n, n-1, \ldots, 1. \end{align} }[/math]
Note that this computation makes no direct reference to the functions [math]\displaystyle{ \phi_k(x) }[/math]. After computing [math]\displaystyle{ b_2(x) }[/math] and [math]\displaystyle{ b_1(x) }[/math], the desired sum can be expressed in terms of them and the simplest functions [math]\displaystyle{ \phi_0(x) }[/math] and [math]\displaystyle{ \phi_1(x) }[/math]:
[math]\displaystyle{ S(x) = \phi_0(x)\,a_0 + \phi_1(x)\,b_1(x) + \beta_1(x)\,\phi_0(x)\,b_2(x). }[/math]
See Fox and Parker[4] for more information and stability analyses.
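The reverse recurrence and the final combination step can be sketched generically in Python. The interface below (coefficient callables `alpha(k, x)` and `beta(k, x)`, basis evaluators `phi0` and `phi1`) is an illustrative choice of ours, not from the original paper:

```python
def clenshaw(a, alpha, beta, phi0, phi1, x):
    """Evaluate S(x) = sum_{k=0}^n a[k] * phi_k(x) by Clenshaw's algorithm.

    alpha(k, x) and beta(k, x) give the recurrence coefficients in
    phi_{k+1}(x) = alpha(k, x) * phi_k(x) + beta(k, x) * phi_{k-1}(x);
    phi0 and phi1 evaluate the first two basis functions.
    """
    n = len(a) - 1
    b1 = b2 = 0.0  # b_{n+1} and b_{n+2} start at zero
    for k in range(n, 0, -1):  # reverse recurrence down to b_1
        b1, b2 = a[k] + alpha(k, x) * b1 + beta(k + 1, x) * b2, b1
    # S(x) = phi_0(x)*a_0 + phi_1(x)*b_1(x) + beta_1(x)*phi_0(x)*b_2(x)
    return phi0(x) * a[0] + phi1(x) * b1 + beta(1, x) * phi0(x) * b2
```

For example, with `alpha(k, x) = 2*x`, `beta(k, x) = -1`, `phi0(x) = 1`, and `phi1(x) = x` this evaluates a Chebyshev series.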
A particularly simple case occurs when evaluating a polynomial of the form
[math]\displaystyle{ S(x) = \sum_{k=0}^n a_k x^k. }[/math]
The functions are simply
[math]\displaystyle{ \phi_0(x) = 1, \qquad \phi_k(x) = x^k = x\,\phi_{k-1}(x), }[/math]
and are produced by the recurrence coefficients [math]\displaystyle{ \alpha(x) = x }[/math] and [math]\displaystyle{ \beta = 0 }[/math].
In this case, the recurrence formula to compute the sum is
[math]\displaystyle{ b_k(x) = a_k + x\,b_{k+1}(x), }[/math]
and, in this case, the sum is simply
[math]\displaystyle{ S(x) = a_0 + x\,b_1(x) = b_0(x), }[/math]
which is exactly the usual Horner's method.
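Written out, this special case collapses to the familiar Horner loop; a minimal sketch (the helper name is ours):

```python
def horner(a, x):
    """Evaluate a[0] + a[1]*x + ... + a[n]*x**n.

    This is Clenshaw's algorithm with alpha = x and beta = 0.
    """
    b = 0.0
    for coeff in reversed(a):
        b = coeff + x * b  # b_k = a_k + x * b_{k+1}
    return b  # b_0(x) = S(x)
```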
Consider a truncated Chebyshev series
[math]\displaystyle{ p_n(x) = a_0 + a_1 T_1(x) + a_2 T_2(x) + \cdots + a_n T_n(x). }[/math]
The coefficients in the recursion relation for the Chebyshev polynomials, [math]\displaystyle{ T_{k+1}(x) = 2x\,T_k(x) - T_{k-1}(x) }[/math], are
[math]\displaystyle{ \alpha(x) = 2x, \qquad \beta = -1, }[/math]
with the initial conditions
[math]\displaystyle{ T_0(x) = 1, \qquad T_1(x) = x. }[/math]
Thus, the recurrence is
[math]\displaystyle{ b_k(x) = a_k + 2x\,b_{k+1}(x) - b_{k+2}(x), }[/math]
and the final sum is
[math]\displaystyle{ p_n(x) = a_0 + x\,b_1(x) - b_2(x). }[/math]
One way to evaluate this is to continue the recurrence one more step, and compute
[math]\displaystyle{ b_0(x) = 2a_0 + 2x\,b_1(x) - b_2(x) }[/math]
(note the doubled [math]\displaystyle{ a_0 }[/math] coefficient) followed by
[math]\displaystyle{ p_n(x) = \tfrac{1}{2}\bigl[b_0(x) - b_2(x)\bigr]. }[/math]
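A sketch of the Chebyshev case, using the doubled-[math]\displaystyle{ a_0 }[/math] variant of the final step (the function name is illustrative):

```python
def chebyshev_clenshaw(a, x):
    """Evaluate a[0] + a[1]*T_1(x) + ... + a[n]*T_n(x)."""
    b1 = b2 = 0.0
    for k in range(len(a) - 1, 0, -1):
        b1, b2 = a[k] + 2 * x * b1 - b2, b1  # b_k = a_k + 2x b_{k+1} - b_{k+2}
    # Continue one more step with a doubled a[0], then halve:
    # p_n(x) = (b_0(x) - b_2(x)) / 2
    b0 = 2 * a[0] + 2 * x * b1 - b2
    return 0.5 * (b0 - b2)
```

At [math]\displaystyle{ x = 0.5 }[/math] with coefficients [math]\displaystyle{ (1, 2, 3) }[/math], for instance, this returns [math]\displaystyle{ 1 + 2 T_1(0.5) + 3 T_2(0.5) = 1 + 1 - 1.5 = 0.5 }[/math].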
Clenshaw summation is extensively used in geodetic applications.[2] A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form
[math]\displaystyle{ m(\theta) = C_0\,\theta + C_1 \sin\theta + C_2 \sin 2\theta + \cdots + C_n \sin n\theta. }[/math]
Leaving off the initial [math]\displaystyle{ C_0\,\theta }[/math] term, the remainder is a summation of the appropriate form. There is no leading term because [math]\displaystyle{ \phi_0(\theta) = \sin 0\theta = \sin 0 = 0 }[/math].
The recurrence relation for [math]\displaystyle{ \sin k\theta }[/math] is
[math]\displaystyle{ \sin(k+1)\theta = 2 \cos\theta \sin k\theta - \sin(k-1)\theta, }[/math]
making the coefficients in the recursion relation
[math]\displaystyle{ \alpha(\theta) = 2\cos\theta, \qquad \beta = -1, }[/math]
and the evaluation of the series is given by
[math]\displaystyle{ \begin{align} b_{n+1}(\theta) &= b_{n+2}(\theta) = 0, \\ b_k(\theta) &= C_k + 2\cos\theta\,b_{k+1}(\theta) - b_{k+2}(\theta), \qquad k = n, n-1, \ldots, 1. \end{align} }[/math]
The final step is made particularly simple because [math]\displaystyle{ \phi_0(\theta) = \sin 0 = 0 }[/math], so the end of the recurrence is simply [math]\displaystyle{ b_1(\theta)\sin\theta }[/math]; the [math]\displaystyle{ C_0\,\theta }[/math] term is added separately:
[math]\displaystyle{ m(\theta) = C_0\,\theta + b_1(\theta)\sin\theta. }[/math]
Note that the algorithm requires only the evaluation of two trigonometric quantities [math]\displaystyle{ \cos \theta }[/math] and [math]\displaystyle{ \sin \theta }[/math].
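A sketch of the sine-series summation (the function name is ours); [math]\displaystyle{ \cos\theta }[/math] and [math]\displaystyle{ \sin\theta }[/math] are each computed once, regardless of [math]\displaystyle{ n }[/math]:

```python
import math

def sine_series(C, theta):
    """Evaluate C[0]*theta + sum_{k=1}^n C[k]*sin(k*theta) by Clenshaw."""
    two_cos = 2 * math.cos(theta)  # alpha = 2*cos(theta), beta = -1
    b1 = b2 = 0.0
    for k in range(len(C) - 1, 0, -1):
        b1, b2 = C[k] + two_cos * b1 - b2, b1
    # phi_0 = sin(0) = 0, so the sum collapses to b_1 * sin(theta);
    # the C[0]*theta term is added separately.
    return C[0] * theta + b1 * math.sin(theta)
```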
Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write
[math]\displaystyle{ m(\theta_1) - m(\theta_2) = C_0 (\theta_1 - \theta_2) + \sum_{k=1}^n 2 C_k \sin\bigl(\tfrac12 k (\theta_1 - \theta_2)\bigr) \cos\bigl(\tfrac12 k (\theta_1 + \theta_2)\bigr). }[/math]
Clenshaw summation can be applied in this case[5] provided we simultaneously compute [math]\displaystyle{ m(\theta_1)+m(\theta_2) }[/math] and perform a matrix summation,
[math]\displaystyle{ \mathsf M(\theta_1,\theta_2) = \begin{bmatrix} \bigl(m(\theta_1) + m(\theta_2)\bigr)/2 \\ \bigl(m(\theta_1) - m(\theta_2)\bigr)/(\theta_1 - \theta_2) \end{bmatrix} = C_0 \begin{bmatrix} \mu \\ 1 \end{bmatrix} + \sum_{k=1}^n C_k \mathsf F_k(\theta_1,\theta_2), }[/math]
where
[math]\displaystyle{ \delta = \tfrac12(\theta_1 - \theta_2), \qquad \mu = \tfrac12(\theta_1 + \theta_2), }[/math]
and
[math]\displaystyle{ \mathsf F_k(\theta_1,\theta_2) = \begin{bmatrix} \cos k\delta \sin k\mu \\ \dfrac{\sin k\delta}{\delta} \cos k\mu \end{bmatrix}. }[/math]
The first element of [math]\displaystyle{ \mathsf M(\theta_1,\theta_2) }[/math] is the average value of [math]\displaystyle{ m }[/math] and the second element is the average slope. [math]\displaystyle{ \mathsf F_k(\theta_1,\theta_2) }[/math] satisfies the recurrence relation
[math]\displaystyle{ \mathsf F_{k+1}(\theta_1,\theta_2) = \mathsf A(\theta_1,\theta_2)\,\mathsf F_k(\theta_1,\theta_2) - \mathsf F_{k-1}(\theta_1,\theta_2), }[/math]
where
[math]\displaystyle{ \mathsf A(\theta_1,\theta_2) = 2 \begin{bmatrix} \cos\mu\cos\delta & -\delta\sin\mu\sin\delta \\ -\dfrac{\sin\mu\sin\delta}{\delta} & \cos\mu\cos\delta \end{bmatrix} }[/math]
takes the place of [math]\displaystyle{ \alpha }[/math] in the recurrence relation, and [math]\displaystyle{ \beta=-1 }[/math]. The standard Clenshaw algorithm can now be applied to yield
[math]\displaystyle{ \begin{align} \mathsf B_{n+1} &= \mathsf B_{n+2} = \mathsf 0, \\ \mathsf B_k &= C_k \mathsf I + \mathsf A \mathsf B_{k+1} - \mathsf B_{k+2}, \qquad k = n, n-1, \ldots, 1, \\ \mathsf M(\theta_1,\theta_2) &= C_0 \begin{bmatrix} \mu \\ 1 \end{bmatrix} + \mathsf B_1 \mathsf F_1(\theta_1,\theta_2), \end{align} }[/math]
where [math]\displaystyle{ \mathsf B_k }[/math] are 2×2 matrices. Finally we have
[math]\displaystyle{ \frac{m(\theta_1) - m(\theta_2)}{\theta_1 - \theta_2} = \mathsf M_2(\theta_1, \theta_2), }[/math]
the second element of [math]\displaystyle{ \mathsf M(\theta_1,\theta_2) }[/math].
This technique can be used in the limit [math]\displaystyle{ \theta_2 = \theta_1 = \mu }[/math] and [math]\displaystyle{ \delta = 0\, }[/math] to simultaneously compute [math]\displaystyle{ m(\mu) }[/math] and the derivative [math]\displaystyle{ dm(\mu)/d\mu }[/math], provided that, in evaluating [math]\displaystyle{ \mathsf F_1 }[/math] and [math]\displaystyle{ \mathsf A }[/math], we take [math]\displaystyle{ \lim_{\delta\rightarrow0}(\sin k \delta)/\delta = k }[/math].
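The matrix form can be sketched as follows, assuming [math]\displaystyle{ \theta_1 \ne \theta_2 }[/math] (handling the [math]\displaystyle{ \delta \to 0 }[/math] limit would require substituting [math]\displaystyle{ \sin k\delta/\delta \to k }[/math]); all names are illustrative:

```python
import math

def mean_and_slope(C, theta1, theta2):
    """Return ((m(t1)+m(t2))/2, (m(t1)-m(t2))/(t1-t2)) for
    m(theta) = C[0]*theta + sum_{k=1}^n C[k]*sin(k*theta),
    via the 2x2-matrix Clenshaw recurrence. Requires theta1 != theta2.
    """
    mu = 0.5 * (theta1 + theta2)
    delta = 0.5 * (theta1 - theta2)
    cc = math.cos(mu) * math.cos(delta)
    ss = math.sin(mu) * math.sin(delta)
    # A takes the place of alpha in the recurrence; beta = -1.
    A = [[2 * cc, -2 * delta * ss],
         [-2 * ss / delta, 2 * cc]]
    B1 = [[0.0, 0.0], [0.0, 0.0]]  # B_{n+1}
    B2 = [[0.0, 0.0], [0.0, 0.0]]  # B_{n+2}
    for k in range(len(C) - 1, 0, -1):
        # B_k = C_k * I + A @ B_{k+1} - B_{k+2}
        Bk = [[C[k] * (i == j)
               + sum(A[i][l] * B1[l][j] for l in range(2))
               - B2[i][j]
               for j in range(2)] for i in range(2)]
        B1, B2 = Bk, B1
    # F_1 = [cos(delta)*sin(mu), (sin(delta)/delta)*cos(mu)]
    F1 = [math.cos(delta) * math.sin(mu),
          math.sin(delta) / delta * math.cos(mu)]
    mean = C[0] * mu + B1[0][0] * F1[0] + B1[0][1] * F1[1]
    slope = C[0] + B1[1][0] * F1[0] + B1[1][1] * F1[1]
    return mean, slope
```

The nested-list arithmetic stands in for a proper matrix library; with NumPy the inner comprehension would be a single matrix product.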
Original source: https://en.wikipedia.org/wiki/Clenshaw algorithm.