In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem.[1][2] A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts.[3] In perturbation theory, the solution is expressed as a power series in a small parameter [math]\displaystyle{ \varepsilon }[/math].[1][2] The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of [math]\displaystyle{ \varepsilon }[/math] usually become smaller. An approximate "perturbation solution" is obtained by truncating the series, usually by keeping only the first two terms: the solution to the known problem and the "first-order" perturbation correction.
Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines.
Perturbation theory develops an expression for the desired solution in terms of a formal power series, known as a perturbation series, in some "small" parameter that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution due to the deviation from the initial problem. Formally, we have for the approximation to the full solution [math]\displaystyle{ \ A\ , }[/math] a series in the small parameter (here called [math]\displaystyle{ \varepsilon }[/math]), like the following:
[math]\displaystyle{ A = A_0 + \varepsilon A_1 + \varepsilon^2 A_2 + \varepsilon^3 A_3 + \cdots }[/math]
In this example, [math]\displaystyle{ \ A_0\ }[/math] would be the known solution to the exactly solvable initial problem, and the terms [math]\displaystyle{ \ A_1, A_2, A_3, \ldots \ }[/math] represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small [math]\displaystyle{ \ \varepsilon\ }[/math] these higher-order terms in the series generally (but not always) become successively smaller. An approximate "perturbative solution" is obtained by truncating the series, often by keeping only the first two terms, expressing the final solution as a sum of the initial (exact) solution and the "first-order" perturbative correction [math]\displaystyle{ A \approx A_0 + \varepsilon A_1 ~. }[/math]
Some authors use big O notation to indicate the order of the error in the approximate solution: [math]\displaystyle{ \; A = A_0 + \varepsilon A_1 + \mathcal{O}\bigl(\ \varepsilon^2\ \bigr) ~. }[/math][2]
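As a concrete illustration (a standard textbook example, not tied to any particular application), consider the root near [math]\displaystyle{ x = 1 }[/math] of the quadratic equation [math]\displaystyle{ x^2 + \varepsilon x - 1 = 0 ~. }[/math] Setting [math]\displaystyle{ \varepsilon = 0 }[/math] gives the solvable problem [math]\displaystyle{ x^2 - 1 = 0 }[/math] with root [math]\displaystyle{ x_0 = 1 }[/math]; substituting the series ansatz and matching powers of [math]\displaystyle{ \varepsilon }[/math] order by order yields [math]\displaystyle{ x = 1 - \tfrac{1}{2}\varepsilon + \tfrac{1}{8}\varepsilon^2 + \mathcal{O}\bigl(\varepsilon^3\bigr) ~, }[/math] which agrees with the expansion of the exact root [math]\displaystyle{ x = \tfrac{1}{2}\bigl( \sqrt{\varepsilon^2 + 4\,} - \varepsilon \bigr) ~. }[/math]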
If the power series in [math]\displaystyle{ \ \varepsilon\ }[/math] converges with a nonzero radius of convergence, the perturbation problem is called a regular perturbation problem.[1] In regular perturbation problems, the asymptotic solution smoothly approaches the exact solution.[1] However, the perturbation series can also diverge, and the truncated series can still be a good approximation to the true solution if it is truncated at the point where its terms are smallest. This is called an asymptotic series. If the perturbation series is divergent or not a power series (for example, if the asymptotic expansion must include non-integer powers such as [math]\displaystyle{ \ \varepsilon^{1/2}\ }[/math] or negative powers such as [math]\displaystyle{ \ \varepsilon^{-2}\ }[/math]) then the perturbation problem is called a singular perturbation problem.[1] Many special techniques in perturbation theory have been developed to analyze singular perturbation problems.[1][2]
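For example (again a standard illustrative case rather than one drawn from applications), the quadratic equation [math]\displaystyle{ \varepsilon x^2 + x - 1 = 0 }[/math] poses a singular perturbation problem: at [math]\displaystyle{ \varepsilon = 0 }[/math] it degenerates into the linear equation [math]\displaystyle{ x - 1 = 0 }[/math], and one of its two roots is lost. The surviving root has the regular expansion [math]\displaystyle{ x = 1 - \varepsilon + \mathcal{O}\bigl(\varepsilon^2\bigr) }[/math], while the lost root escapes to infinity as [math]\displaystyle{ x = -\varepsilon^{-1} - 1 + \mathcal{O}(\varepsilon) }[/math], so its expansion requires the negative power [math]\displaystyle{ \varepsilon^{-1} }[/math].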
The earliest use of what would now be called perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: for example the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun.[4]
Perturbation methods start with a simplified form of the original problem, which is simple enough to be solved exactly. In celestial mechanics, this is usually a Keplerian ellipse. Under Newtonian gravity, an ellipse is exactly correct when there are only two gravitating bodies (say, the Earth and the Moon), but not quite correct when there are three or more objects (say, the Earth, Moon, Sun, and the rest of the Solar System), and not quite correct when the gravitational interaction is described using general relativity.
Keeping the above example in mind, one follows a general recipe to obtain the perturbation series. The perturbative expansion is created by adding successive corrections to the simplified problem. The corrections are obtained by forcing consistency between the unperturbed solution and the equations describing the system in full. Write [math]\displaystyle{ D }[/math] for this collection of equations; that is, let the symbol [math]\displaystyle{ D }[/math] stand in for the problem to be solved. Quite often, these are differential equations; hence the letter "D".
The process is generally mechanical, if laborious. One begins by writing the equations [math]\displaystyle{ D }[/math] so that they split into two parts: some collection of equations [math]\displaystyle{ D_0 }[/math] which can be solved exactly, and some additional remaining part [math]\displaystyle{ \varepsilon D_1 }[/math] for some small [math]\displaystyle{ \varepsilon \ll 1 }[/math]. The solution [math]\displaystyle{ A_0 }[/math] (to [math]\displaystyle{ D_0 }[/math]) is known, and one seeks the general solution [math]\displaystyle{ A }[/math] to [math]\displaystyle{ D = D_0 + \varepsilon D_1 }[/math].
Next the approximation [math]\displaystyle{ A\approx A_0+\varepsilon A_1 }[/math] is inserted into [math]\displaystyle{ \varepsilon D_1 }[/math]. This results in an equation for [math]\displaystyle{ A_1 }[/math], which, in the general case, can be written in closed form as a sum over integrals over [math]\displaystyle{ A_0 }[/math]. Thus, one has obtained the first-order correction [math]\displaystyle{ A_1 }[/math], and [math]\displaystyle{ A\approx A_0+\varepsilon A_1 }[/math] is a good approximation to [math]\displaystyle{ A }[/math]. It is a good approximation precisely because the parts that were ignored are of size [math]\displaystyle{ \varepsilon^2 }[/math]. The process can then be repeated to obtain corrections [math]\displaystyle{ A_2 }[/math], and so on.
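The order-by-order matching described above is mechanical enough to hand to a computer algebra system. The following is a minimal sketch in Python with SymPy, applied to the simple quadratic example [math]\displaystyle{ x^2 + \varepsilon x - 1 = 0 }[/math] from earlier (an illustration of the bookkeeping only, not of any particular physical problem):

```python
import sympy as sp

# Small parameter and unknown expansion coefficients
eps = sp.symbols('epsilon')
x0, x1, x2 = sp.symbols('x0 x1 x2')

# Perturbative ansatz, truncated at second order:
#   x = x0 + eps*x1 + eps**2*x2 + O(eps**3)
x = x0 + eps * x1 + eps**2 * x2

# Full problem D:  x**2 + eps*x - 1 = 0
# (D0: x**2 - 1 = 0 is exactly solvable; eps*D1 = eps*x is the perturbation)
D = sp.expand(x**2 + eps * x - 1)

# Force consistency order by order: the coefficient of each power
# of eps must vanish separately.
sol = {}
sol[x0] = sp.solve(D.coeff(eps, 0), x0)[1]            # x0 = 1  (choose the + root)
sol[x1] = sp.solve(D.coeff(eps, 1).subs(sol), x1)[0]  # x1 = -1/2
sol[x2] = sp.solve(D.coeff(eps, 2).subs(sol), x2)[0]  # x2 = 1/8

print(sol)  # {x0: 1, x1: -1/2, x2: 1/8},  i.e.  x ~ 1 - eps/2 + eps**2/8
```

Each order yields a linear equation for the next coefficient in terms of those already found, which is why the procedure is mechanical even as the expressions grow unwieldy.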
In practice, this process rapidly explodes into a profusion of terms, which become extremely hard to manage by hand. Isaac Newton is reported to have said, regarding the problem of the Moon's orbit, that "It causeth my head to ache."[5] This unmanageability has forced perturbation theory to develop into a high art of managing and writing out these higher-order terms. One of the fundamental breakthroughs in controlling the expansion is the Feynman diagram, which allows perturbation series to be written down diagrammatically.
Perturbation theory has been used in a large number of different settings in physics and applied mathematics. Examples of the "collection of equations" [math]\displaystyle{ D }[/math] include algebraic equations,[6] differential equations (e.g., the equations of motion[7] and commonly wave equations), thermodynamic free energy in statistical mechanics, radiative transfer,[8] and Hamiltonian operators in quantum mechanics.
Examples of the kinds of solutions that are found perturbatively include the solution of the equation of motion (e.g., the trajectory of a particle), the statistical average of some physical quantity (e.g., average magnetization), and the ground-state energy of a quantum mechanical problem.
Examples of exactly solvable problems that can be used as starting points include linear equations, including linear equations of motion (harmonic oscillator, linear wave equation), statistical or quantum-mechanical systems of non-interacting particles (or in general, Hamiltonians or free energies containing only terms quadratic in all degrees of freedom).
Examples of systems that can be solved with perturbations include systems with nonlinear contributions to the equations of motion, interactions between particles, and terms of higher powers in the Hamiltonian/free energy.
For physical problems involving interactions between particles, the terms of the perturbation series may be displayed (and manipulated) using Feynman diagrams.
Perturbation theory was first devised to solve otherwise intractable problems in the calculation of the motions of planets in the solar system. For instance, Newton's law of universal gravitation explained the gravitation between two astronomical bodies, but when a third body is added, the problem was, "How does each body pull on each?" Newton's equation only allowed the motion of two bodies to be analyzed exactly. The gradually increasing accuracy of astronomical observations led to increasing demands on the accuracy of solutions to Newton's gravitational equations, which led several notable 18th- and 19th-century mathematicians, such as Lagrange and Laplace, to extend and generalize the methods of perturbation theory.
These well-developed perturbation methods were adopted and adapted to solve new problems arising during the development of quantum mechanics in 20th-century atomic and subatomic physics. Paul Dirac developed quantum perturbation theory in 1927 to evaluate when a particle would be emitted in radioactive elements. This result was later named Fermi's golden rule.[9][10] Perturbation theory in quantum mechanics is fairly accessible, as the quantum notation allows expressions to be written in fairly compact form, thus making them easier to comprehend. This resulted in an explosion of applications, ranging from the Zeeman effect to the hyperfine splitting in the hydrogen atom.
Despite the simpler notation, perturbation theory applied to quantum field theory still easily gets out of hand. Richard Feynman developed the celebrated Feynman diagrams by observing that many terms repeat in a regular fashion. These terms can be replaced by dots, lines, squiggles and similar marks, each standing for a term, a denominator, an integral, and so on; thus complex integrals can be written as simple diagrams, with absolutely no ambiguity as to what they mean. The one-to-one correspondence between the diagrams and specific integrals is what gives them their power. Although originally developed for quantum field theory, it turns out the diagrammatic technique is broadly applicable to all perturbative series (although, perhaps, not always so useful).
In the second half of the 20th century, as chaos theory developed, it became clear that unperturbed systems were in general completely integrable systems, while the perturbed systems were not. This promptly led to the study of "nearly integrable systems", of which the KAM torus is the canonical example. At the same time, it was also discovered that many (rather special) non-linear systems, which were previously approachable only through perturbation theory, are in fact completely integrable. This discovery was quite dramatic, as it allowed exact solutions to be given. This, in turn, helped clarify the meaning of the perturbative series, as one could now compare the results of the series to the exact solutions.
The improved understanding of dynamical systems coming from chaos theory helped shed light on what was termed the small denominator problem or small divisor problem. It was observed in the 19th century (by Poincaré, and perhaps earlier) that sometimes 2nd and higher order terms in the perturbative series have "small denominators". That is, they have the general form [math]\displaystyle{ \psi_n V\phi_m / (\omega_n -\omega_m) }[/math] where [math]\displaystyle{ \psi_n }[/math], [math]\displaystyle{ V }[/math] and [math]\displaystyle{ \phi_m }[/math] are some complicated expressions pertinent to the problem to be solved, and [math]\displaystyle{ \omega_n }[/math] and [math]\displaystyle{ \omega_m }[/math] are real numbers; very often they are the energies of normal modes. The small divisor problem arises when the difference [math]\displaystyle{ \omega_n -\omega_m }[/math] is small, causing the perturbative correction to blow up, becoming as large as or perhaps larger than the zeroth-order term. This situation signals a breakdown of perturbation theory: it stops working at this point, and cannot be expanded or summed any further. In formal terms, the perturbative series is an asymptotic series: a useful approximation for a few terms, but ultimately inexact. The breakthrough from chaos theory was an explanation of why this happened: the small divisors occur whenever perturbation theory is applied to a chaotic system. The one signals the presence of the other.
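A familiar instance of this general form (quoted here from standard quantum mechanics as an illustration) is the second-order correction to an energy level in Rayleigh–Schrödinger perturbation theory, [math]\displaystyle{ E_n^{(2)} = \sum_{m \neq n} \frac{ \left| \langle m | V | n \rangle \right|^2 }{ E_n - E_m } ~, }[/math] which blows up whenever an unperturbed level [math]\displaystyle{ E_m }[/math] comes close to [math]\displaystyle{ E_n }[/math]: near-degenerate levels produce precisely the small denominators described above.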
Since the planets are very remote from each other, and since their masses are small compared with the mass of the Sun, the gravitational forces between the planets can be neglected, and the planetary motion is considered, to a first approximation, as taking place along Kepler's orbits, which are defined by the equations of the two-body problem, the two bodies being the planet and the Sun.[11]
As astronomical data came to be known with much greater accuracy, it became necessary to consider how the motion of a planet around the Sun is affected by other planets. This was the origin of the three-body problem; thus, in studying the system Moon–Earth–Sun, the mass ratio between the Moon and the Earth was chosen as the small parameter. Lagrange and Laplace were the first to advance the view that the constants which describe the motion of a planet around the Sun are "perturbed", as it were, by the motion of other planets and vary as a function of time; hence the name "perturbation theory".[11]
Perturbation theory was investigated by the classical scholars—Laplace, Poisson, Gauss—as a result of which the computations could be performed with a very high accuracy. The discovery of the planet Neptune in 1846 by Urbain Le Verrier, based on the deviations in motion of the planet Uranus (he sent the coordinates to Johann Gottfried Galle, who successfully observed Neptune through his telescope), represented a triumph of perturbation theory.[11]
The standard exposition of perturbation theory is given in terms of the order to which the perturbation is carried out: first-order perturbation theory or second-order perturbation theory, and whether the perturbed states are degenerate, which requires singular perturbation. In the singular (degenerate) case, extra care must be taken, and the theory is slightly more elaborate.
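In the simplest, non-degenerate case, the leading correction takes a compact form (quoted here from standard quantum mechanics texts as an illustration): for a Hamiltonian [math]\displaystyle{ H = H_0 + \varepsilon V }[/math] with unperturbed eigenstates [math]\displaystyle{ H_0 |n\rangle = E_n |n\rangle }[/math], the energy of the [math]\displaystyle{ n }[/math]-th level shifts to [math]\displaystyle{ E_n + \varepsilon \langle n | V | n \rangle + \mathcal{O}\bigl(\varepsilon^2\bigr) ~, }[/math] so to first order only the expectation value of the perturbation in the unperturbed state is needed.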
Many of the ab initio quantum chemistry methods use perturbation theory directly or are closely related methods. Implicit perturbation theory[12] works with the complete Hamiltonian from the very beginning and never specifies a perturbation operator as such. Møller–Plesset perturbation theory uses the difference between the Hartree–Fock Hamiltonian and the exact non-relativistic Hamiltonian as the perturbation. The zero-order energy is the sum of orbital energies. The first-order energy is the Hartree–Fock energy and electron correlation is included at second-order or higher. Calculations to second, third or fourth order are very common and the code is included in most ab initio quantum chemistry programs. A related but more accurate method is the coupled cluster method.
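As a concrete illustration (the standard textbook expression, in spin orbitals, rather than anything specific to a particular program), the leading Møller–Plesset correction, MP2, adds to the Hartree–Fock energy the second-order term [math]\displaystyle{ E_{\text{MP2}} = \frac{1}{4} \sum_{i,j \,\in\, \text{occ}} \; \sum_{a,b \,\in\, \text{virt}} \frac{ \left| \langle ij \| ab \rangle \right|^2 }{ \varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b } ~, }[/math] where [math]\displaystyle{ \varepsilon_p }[/math] are orbital energies and [math]\displaystyle{ \langle ij \| ab \rangle }[/math] are antisymmetrized two-electron integrals; note the orbital-energy denominators, the same small-denominator structure discussed earlier.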
A shell-crossing (sc) occurs in perturbation theory when matter trajectories intersect, forming a singularity.[13] This limits the predictive power of physical simulations at small scales.
Original source: https://en.wikipedia.org/wiki/Perturbation_theory.