A finite-difference method for solving Cauchy's problem for systems of first-order differential equations
$$ y ^ \prime = f ( x , y ),\ y ( x _ {0} ) = y _ {0} . $$
When integrating over a grid with constant step $ x _ {n} = x _ {0} + nh $, the computational formulas may be based either a) on extrapolation,
$$ y _ {n + 1 } = y _ {n} + h \sum _ {\lambda = 0 } ^ { k } u _ {- \lambda } f ( x _ {n - \lambda } , y _ {n - \lambda } ) ; $$
or b) on interpolation
$$ y _ {n + 1 } = y _ {n} + h \sum _ {\lambda = - 1 } ^ { {k } - 1 } v _ {- \lambda } f ( x _ {n - \lambda } , y _ {n - \lambda } ) . $$
For a given $ k $, the latter formula gives more accurate results, but a non-linear system of equations must be solved in order to find $ y _ {n+1} $.
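As an illustrative sketch (not part of the original article), the extrapolation formula a) can be implemented as follows for $ k = 1 $, using the coefficients $ u _ {0} = 3/2 $, $ u _ {-1} = -1/2 $ listed further below for the Adams–Bashforth family; the function name, the test problem and the choice $ k = 1 $ are assumptions made only for the example.

```python
import numpy as np

def adams_bashforth2(f, x0, y0, y1, h, n_steps):
    """Extrapolation formula a) with k = 1 (coefficients u_0 = 3/2, u_{-1} = -1/2):
        y_{n+1} = y_n + h*(3/2*f(x_n, y_n) - 1/2*f(x_{n-1}, y_{n-1})).
    The extra starting value y_1 must be supplied separately (see the remark
    on starting values below)."""
    xs = x0 + h * np.arange(n_steps + 1)
    ys = [y0, y1]
    fs = [f(xs[0], y0), f(xs[1], y1)]
    for n in range(1, n_steps):
        ys.append(ys[n] + h * (1.5 * fs[n] - 0.5 * fs[n - 1]))
        fs.append(f(xs[n + 1], ys[n + 1]))
    return xs, np.array(ys)

# Example: y' = -y, y(0) = 1, with y_1 taken here from the exact solution.
xs, ys = adams_bashforth2(lambda x, y: -y, 0.0, 1.0, np.exp(-0.1), 0.1, 50)
print(ys[-1], np.exp(-xs[-1]))   # numerical vs. exact value at x = 5
```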
In practical work, an approximation $ y _ {n+1} ^ {(0)} $ is first found by the extrapolation formula a), and one or two corrections are then performed according to the formula
$$ y _ {n+1} ^ {(i+1)} = y _ {n} + h \sum _ {\lambda = 0 } ^ { k-1 } v _ {- \lambda } f ( x _ {n - \lambda } , y _ {n - \lambda } ) + hv _ {1} f ( x _ {n + 1 } , y _ {n + 1 } ^ {(i)} ) , $$
These iterations converge if $ h | v _ {1} | \| ( \partial f / \partial y ) \| < 1 $. The starting values $ y _ {1} , \dots, y _ {k} $ for Adams' method, which are needed to begin the calculations by formula a), must be found by some special method. The error in the solution can be written as
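A hedged sketch of this correction step for $ k = 1 $, with the interpolation coefficients $ v _ {1} = v _ {0} = 1/2 $ (the trapezoidal rule listed further below); the helper name and the default of two corrections are assumptions for illustration, not prescriptions of the article.

```python
def adams_correct(f, x_next, y_n, f_n, h, y_pred, corrections=2, v1=0.5, v0=0.5):
    """Correction iterations for k = 1 with v_1 = v_0 = 1/2 (trapezoidal rule):
        y_{n+1}^{(i+1)} = y_n + h*(v_0*f(x_n, y_n) + v_1*f(x_{n+1}, y_{n+1}^{(i)})),
    started from the extrapolated value y_pred = y_{n+1}^{(0)}."""
    y = y_pred
    for _ in range(corrections):           # "one or two corrections" in practice
        y = y_n + h * (v0 * f_n + v1 * f(x_next, y))
    return y
```

In the extrapolation sketch above, this routine would replace the appended value $ y _ {n+1} $; by the condition just stated, the iteration contracts whenever $ h | v _ {1} | \| \partial f / \partial y \| < 1 $.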
$$ \chi _ {k+1} \left ( \int\limits _ { x _ 0 } ^ {{ x } _ {n}} \Omega ( x , t ) y ^ {(k+2)} ( t ) dt \right ) h ^ {k+1} + O ( h ^ {k+2} ) , $$
where $ \Omega (x, t) $ is the solution of the system
$$ \frac{d \Omega ( x , t ) }{dx} = f _ {y} ( x , y ( x ) ) \Omega ( x , t ) $$
with the initial condition $ \Omega (t, t) = E $ (the identity matrix).
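For instance (an illustration added here, not from the original text), for the scalar equation $ y ^ \prime = -ay $, $ a > 0 $, considered below, one has $ f _ {y} = -a $ and hence

$$ \frac{d \Omega ( x , t ) }{dx} = - a \Omega ( x , t ) ,\ \ \Omega ( t , t ) = 1 ,\ \ \textrm{ so that } \ \Omega ( x , t ) = e ^ {- a ( x - t ) } , $$

so error contributions made at earlier points are damped as the integration proceeds, in agreement with the remark on absolutely-stable solutions below.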
The structure of the term $ O (h ^ {k+2} ) $ is such that, for small values of $ h $, it is usually uniformly small compared with the main term on large intervals of integration. This means that Adams' method can be applied over a large integration interval in the case of an absolutely-stable solution of the differential problem. In particular, as distinct from the Milne method, it may be used for finding stable periodic solutions of differential equations. The standard Adams procedure for integration with automatic step selection is considerably more complicated than the corresponding Runge–Kutta procedure, since the step-changing algorithm is more involved and the selection of the starting values $ y _ {1} , \dots, y _ {k} $ is not standardized.
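One common, though by no means standardized, choice is to generate the starting values with a one-step method such as the classical fourth-order Runge–Kutta method; the following sketch is an illustration under that assumption, with a hypothetical helper name.

```python
def rk4_starting_values(f, x0, y0, h, k):
    """Generate starting values y_1, ..., y_k by k steps of the classical
    fourth-order Runge-Kutta method (one possible choice; not prescribed here)."""
    ys = [y0]
    for n in range(k):
        x, y = x0 + n * h, ys[-1]
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return ys                                   # [y_0, y_1, ..., y_k]

print(rk4_starting_values(lambda x, y: -y, 0.0, 1.0, 0.1, 2))   # ~[1, e^{-0.1}, e^{-0.2}]
```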
In the case of the equation $ y ^ \prime = -ay $, $ a > 0 $, the extrapolation formula a) has the form
$$ y _ {n+1} = y _ {n} + h \sum _ {\lambda = 0 } ^ { k } u _ {- \lambda } ( - ay _ {n - \lambda } ) . $$
Particular solutions of this equation are $ y _ {n} = C \mu ^ {n} $, where $ \mu $ is a root of the equation
$$ \mu ^ {k+1} = \mu ^ {k} - ah \sum _ {\lambda = 0 } ^ { k } u _ {- \lambda } \mu ^ {k - \lambda } . $$
If $ ah \sum _ {\lambda = 0 } ^ {k} | u _ {- \lambda } | > 2 $, then this equation has a root with $ | \mu | > 1 $, and the rounding-off errors increase rapidly. When integrating with automatic step selection, this results in an unjustified diminution of the step size. Nevertheless, in most cases Adams' method proves to be more economical than the Runge–Kutta method. The method was first introduced by J.C. Adams in 1855.
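This behaviour can be checked numerically; the following sketch (an illustration added here) computes the roots $ \mu $ for $ k = 2 $ with the Adams–Bashforth coefficients $ u _ {0} = 23/12 $, $ u _ {-1} = -4/3 $, $ u _ {-2} = 5/12 $ listed below.

```python
import numpy as np

u = [23 / 12, -4 / 3, 5 / 12]              # u_0, u_{-1}, u_{-2} for k = 2 (Adams-Bashforth)

def char_roots(ah, u):
    """Roots mu of  mu^{k+1} = mu^k - a*h * sum_lambda u_{-lambda} * mu^{k-lambda}."""
    k = len(u) - 1
    coeffs = np.zeros(k + 2)               # polynomial coefficients, highest power first
    coeffs[0] = 1.0                        # mu^{k+1}
    coeffs[1] = -1.0 + ah * u[0]           # mu^k
    coeffs[2:] = ah * np.asarray(u[1:])    # mu^{k-1}, ..., mu^0
    return np.roots(coeffs)

for ah in (0.1, 1.0):
    print(ah, max(abs(char_roots(ah, u))))
# a*h = 0.1: all |mu| < 1 (largest ~0.9); a*h = 1.0: a root with |mu| ~ 1.8 appears,
# consistent with the criterion a*h * sum |u_{-lambda}| > 2 (here sum |u| = 11/3).
```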
The Adams method is a special $ k $- or $ ( k + 1 ) $-step multistep method. It yields explicit difference equations (predictor formula) in the extrapolation case a) and implicit difference equations (corrector formula) in the interpolation case b). Certain special cases are known as Adams–Bashforth methods, viz. the series of explicit methods $ k = 0 $, $ u _ {0} = 1 $ (Euler's method); $ k = 1 $, $ u _ {0} = 3 / 2 $, $ u _ {-1} = - 1/2 $; $ k = 2 $, $ u _ {0} = 23 / 12 $, $ u _ {-1} = - 4 / 3 $, $ u _ {-2} = 5 / 12 $; etc.
A certain series of implicit methods is known as Adams–Moulton methods, viz. $ k = 1 $, $ v _ {1} = 1 / 2 $, $ v _ {0} = 1 / 2 $ (the trapezoidal rule); $ k = 2 $, $ v _ {1} = 5 / 12 $, $ v _ {0} = 2 / 3 $, $ v _ {-1} = - 1 / 12 $; etc.
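These coefficients are consistent with integrating, over one step, the Lagrange interpolation polynomial of $ f $ through the corresponding nodes; the following sketch (assuming SymPy is available) reproduces the values above under that standard construction and is added here only as a check.

```python
import sympy as sp

s = sp.symbols('s')

def adams_weights(offsets):
    """Integrate over one step (s = (x - x_n)/h in [0, 1]) the Lagrange basis
    polynomials of f taken at the nodes x_n - j*h, j in `offsets`; the weights
    are returned in the order of `offsets`."""
    return [sp.integrate(sp.prod([(s + i) / (i - j) for i in offsets if i != j]), (s, 0, 1))
            for j in offsets]

print(adams_weights([0, 1, 2]))     # Adams-Bashforth, k = 2: [23/12, -4/3, 5/12]
print(adams_weights([-1, 0, 1]))    # Adams-Moulton,  k = 2: [5/12, 2/3, -1/12]
```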