In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector [math]\displaystyle{ \delta \mathbf{Z}_0 }[/math] diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by
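[math]\displaystyle{ |\delta\mathbf{Z}(t)| \approx e^{\lambda t} |\delta\mathbf{Z}_0|, }[/math]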
where [math]\displaystyle{ \lambda }[/math] is the Lyapunov exponent.
The rate of separation can be different for different orientations of the initial separation vector. Thus, there is a spectrum of Lyapunov exponents—equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic (provided some other conditions are met, e.g., phase space compactness). Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time.
The exponent is named after Aleksandr Lyapunov.
The maximal Lyapunov exponent can be defined as follows:
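[math]\displaystyle{ \lambda = \lim_{t \to \infty} \lim_{|\delta\mathbf{Z}_0| \to 0} \frac{1}{t} \ln \frac{|\delta\mathbf{Z}(t)|}{|\delta\mathbf{Z}_0|}. }[/math]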
The limit [math]\displaystyle{ |\delta \mathbf{Z}_0| \to 0 }[/math] ensures the validity of the linear approximation at any time.[1]
For a discrete-time system (maps or fixed-point iterations) [math]\displaystyle{ x_{n+1} = f(x_n) }[/math], for an orbit starting with [math]\displaystyle{ x_0 }[/math] this translates into:
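[math]\displaystyle{ \lambda = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \ln |f'(x_i)|. }[/math]

As a minimal illustration of this formula (the function name and default parameters are illustrative), the following Python sketch estimates the maximal Lyapunov exponent of the logistic map [math]\displaystyle{ x_{n+1} = r x_n (1 - x_n) }[/math] by averaging [math]\displaystyle{ \ln |f'(x_i)| }[/math] along an orbit:

```python
import math

def logistic_mle(r=4.0, x0=0.2, n=100_000):
    """Estimate the maximal Lyapunov exponent of the logistic map
    x -> r*x*(1 - x) by averaging ln|f'(x_i)| along an orbit."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # |f'(x)| = |r(1 - 2x)|
        x = r * x * (1.0 - x)
    return acc / n

print(logistic_mle())  # approximately 0.693 = ln 2 for r = 4
```

For [math]\displaystyle{ r = 4 }[/math] the exponent is known to be exactly [math]\displaystyle{ \ln 2 }[/math], which provides a direct check of the estimate.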
For a dynamical system with evolution equation [math]\displaystyle{ \dot{x}_i = f_i(x) }[/math] in an n–dimensional phase space, the spectrum of Lyapunov exponents
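[math]\displaystyle{ \{ \lambda_1, \lambda_2, \ldots, \lambda_n \}, }[/math]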
in general, depends on the starting point [math]\displaystyle{ x_0 }[/math]. However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one. (For Hamiltonian systems, which do not have attractors, this is not a concern.) The Lyapunov exponents describe the behavior of vectors in the tangent space of the phase space and are defined from the Jacobian matrix
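[math]\displaystyle{ J_{ij}(t) = \left. \frac{d f_i(x)}{d x_j} \right|_{x(t)}; }[/math]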
this Jacobian defines the evolution of the tangent vectors, given by the matrix [math]\displaystyle{ Y }[/math], via the equation
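[math]\displaystyle{ \dot{Y} = J Y, }[/math]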
with the initial condition [math]\displaystyle{ Y_{ij}(0) = \delta_{ij} }[/math]. The matrix [math]\displaystyle{ Y }[/math] describes how a small change at the point [math]\displaystyle{ x(0) }[/math] propagates to the final point [math]\displaystyle{ x(t) }[/math]. The limit
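[math]\displaystyle{ \Lambda = \lim_{t \to \infty} \frac{1}{2t} \log\left( Y(t)\, Y^{\mathrm{T}}(t) \right) }[/math]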
defines a matrix [math]\displaystyle{ \Lambda }[/math] (the conditions for the existence of the limit are given by the Oseledets theorem). The Lyapunov exponents [math]\displaystyle{ \lambda_i }[/math] are defined by the eigenvalues of [math]\displaystyle{ \Lambda }[/math].
The set of Lyapunov exponents will be the same for almost all starting points of an ergodic component of the dynamical system.
To introduce the Lyapunov exponents, consider a fundamental matrix [math]\displaystyle{ X(t) }[/math] consisting of the linearly independent solutions of the first-order approximation of the system; for example, for the linearization along a stationary solution [math]\displaystyle{ x_0 }[/math] of a continuous system, the fundamental matrix is [math]\displaystyle{ \exp\left( \left. \frac{ d f^t(x) }{dx} \right|_{x_0} t\right) }[/math]. The singular values [math]\displaystyle{ \{\alpha_j\big(X(t)\big)\}_{1}^{n} }[/math] of the matrix [math]\displaystyle{ X(t) }[/math] are the square roots of the eigenvalues of the matrix [math]\displaystyle{ X(t)^*X(t) }[/math]. The largest Lyapunov exponent [math]\displaystyle{ \lambda_{\mathrm{max}} }[/math] is defined as follows:[2]
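[math]\displaystyle{ \lambda_{\mathrm{max}} = \max_j \limsup_{t \to \infty} \frac{1}{t} \ln \alpha_j\big(X(t)\big). }[/math]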
A. M. Lyapunov proved that if the system of the first approximation is regular (e.g., all systems with constant and periodic coefficients are regular) and its largest Lyapunov exponent is negative, then the solution of the original system is asymptotically Lyapunov stable. Later, O. Perron showed that the requirement of regularity of the first approximation is essential.
In 1930, O. Perron constructed an example of a second-order system in which the first approximation has negative Lyapunov exponents along a zero solution of the original system but, at the same time, this zero solution of the original nonlinear system is Lyapunov unstable. Furthermore, in a certain neighborhood of this zero solution almost all solutions of the original system have positive Lyapunov exponents. It is also possible to construct a converse example, in which the first approximation has positive Lyapunov exponents along a zero solution of the original system while this zero solution of the original nonlinear system is Lyapunov stable.[3][4] The effect of sign inversion of Lyapunov exponents of solutions of the original system and the system of first approximation with the same initial data was subsequently called the Perron effect.[3][4]
Perron's counterexample shows that a negative largest Lyapunov exponent does not, in general, indicate stability, and that a positive largest Lyapunov exponent does not, in general, indicate chaos.
Therefore, time-varying linearization requires additional justification.[4]
If the system is conservative (i.e., there is no dissipation), a volume element of the phase space will stay the same along a trajectory. Thus the sum of all Lyapunov exponents must be zero. If the system is dissipative, the sum of Lyapunov exponents is negative.
If the system is a flow and the trajectory does not converge to a single point, one exponent is always zero—the Lyapunov exponent corresponding to the eigenvalue of [math]\displaystyle{ \Lambda }[/math] with an eigenvector in the direction of the flow.
The Lyapunov spectrum can be used to give an estimate of the rate of entropy production, of the fractal dimension, and of the Hausdorff dimension of the considered dynamical system.[5] In particular from the knowledge of the Lyapunov spectrum it is possible to obtain the so-called Lyapunov dimension (or Kaplan–Yorke dimension) [math]\displaystyle{ D_{KY} }[/math], which is defined as follows:
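[math]\displaystyle{ D_{KY} = k + \sum_{i=1}^{k} \frac{\lambda_i}{|\lambda_{k+1}|}, }[/math]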
where [math]\displaystyle{ k }[/math] is the maximum integer such that the sum of the [math]\displaystyle{ k }[/math] largest exponents is still non-negative. [math]\displaystyle{ D_{KY} }[/math] represents an upper bound for the information dimension of the system.[6] Moreover, the sum of all the positive Lyapunov exponents gives an estimate of the Kolmogorov–Sinai entropy, according to Pesin's theorem.[7] Along with widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach based on the direct Lyapunov method with special Lyapunov-like functions.[8] The Lyapunov exponents of a bounded trajectory and the Lyapunov dimension of an attractor are invariant under diffeomorphisms of the phase space.[9]
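As an illustration of this definition (the function name is illustrative), the following Python sketch computes [math]\displaystyle{ D_{KY} }[/math] from a given spectrum:

```python
import numpy as np

def kaplan_yorke_dimension(spectrum):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum:
    D_KY = k + (lambda_1 + ... + lambda_k) / |lambda_{k+1}|, where k is
    the largest integer with a non-negative partial sum."""
    lam = np.sort(np.asarray(spectrum, dtype=float))[::-1]  # descending order
    partial = np.cumsum(lam)
    nonneg = np.flatnonzero(partial >= 0.0)
    if nonneg.size == 0:
        return 0.0              # even the largest exponent is negative
    k = nonneg[-1] + 1          # number of exponents in the sum
    if k == lam.size:
        return float(k)         # phase-space volume does not contract
    return k + partial[k - 1] / abs(lam[k])

print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))  # about 2.06
```

Applied to the classical Lorenz attractor, whose spectrum at the standard parameter values is approximately (0.906, 0, −14.572), this yields [math]\displaystyle{ D_{KY} \approx 2.06 }[/math].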
The multiplicative inverse of the largest Lyapunov exponent is sometimes referred to in the literature as the Lyapunov time, and defines the characteristic e-folding time. For chaotic orbits, the Lyapunov time is finite, whereas for regular orbits it is infinite.
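For example, for the classical Lorenz attractor, with [math]\displaystyle{ \lambda_{\mathrm{max}} \approx 0.906 }[/math] at the standard parameter values, the Lyapunov time is roughly [math]\displaystyle{ 1/0.906 \approx 1.1 }[/math] time units.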
Generally the calculation of Lyapunov exponents, as defined above, cannot be carried out analytically, and in most cases one must resort to numerical techniques. An early example, which also constituted the first demonstration of the exponential divergence of chaotic trajectories, was carried out by R. H. Miller in 1964.[10] Currently, the most commonly used numerical procedure estimates the matrix [math]\displaystyle{ \Lambda }[/math] by averaging several finite-time approximations of the limit defining [math]\displaystyle{ \Lambda }[/math].
One of the most used and effective numerical techniques for calculating the Lyapunov spectrum of a smooth dynamical system relies on periodic Gram–Schmidt orthonormalization of the Lyapunov vectors, which avoids the misalignment of all the vectors along the direction of maximal expansion.[11][12][13][14] The Lyapunov spectra of various models have been described,[15] and source codes for nonlinear systems such as the Hénon map, the Lorenz equations, a delay differential equation, and so on have been published.[16][17][18] A sketch of the procedure is given below.
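The following Python code is a minimal sketch of this procedure for the Hénon map (the function name and defaults are assumptions for illustration, not taken from the cited sources). A set of orthonormal tangent vectors is evolved with the Jacobian of the map and reorthonormalized at every step via a QR decomposition: the Q factor realizes the Gram–Schmidt step, while the diagonal of the R factor records the stretching along each direction.

```python
import numpy as np

def lyapunov_spectrum_henon(a=1.4, b=0.3, n_iter=100_000, n_transient=1_000):
    """Estimate the Lyapunov spectrum of the Henon map
    (x, y) -> (1 - a*x**2 + y, b*x) by QR reorthonormalization
    of tangent vectors (Benettin-style algorithm)."""
    x, y = 0.1, 0.1
    for _ in range(n_transient):       # let the orbit settle onto the attractor
        x, y = 1.0 - a * x * x + y, b * x

    Q = np.eye(2)                      # orthonormal tangent vectors
    log_stretch = np.zeros(2)          # accumulated log stretching factors
    for _ in range(n_iter):
        J = np.array([[-2.0 * a * x, 1.0],
                      [b,            0.0]])  # Jacobian at the current point
        x, y = 1.0 - a * x * x + y, b * x
        Q, R = np.linalg.qr(J @ Q)     # evolve tangent vectors, reorthonormalize
        log_stretch += np.log(np.abs(np.diag(R)))
    return log_stretch / n_iter        # exponents, largest first

print(lyapunov_spectrum_henon())       # approximately [ 0.42, -1.62 ]
```

A useful consistency check: the two estimated exponents should sum to [math]\displaystyle{ \ln b \approx -1.20 }[/math], the logarithm of the absolute value of the map's constant Jacobian determinant, as expected for a dissipative system.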
For the calculation of Lyapunov exponents from limited experimental data, various methods have been proposed. However, there are many difficulties with applying these methods, and such problems should be approached with care. The main difficulty is that the data do not fully explore the phase space; rather, they are confined to the attractor, which has very limited (if any) extension along certain directions. These thinner or more singular directions within the data set are the ones associated with the more negative exponents. The use of nonlinear mappings to model the evolution of small displacements from the attractor has been shown to dramatically improve the ability to recover the Lyapunov spectrum,[19][20] provided the data have a very low level of noise. The singular nature of the data and its connection to the more negative exponents has also been explored.[21]
Whereas the (global) Lyapunov exponent gives a measure for the total predictability of a system, it is sometimes of interest to estimate the local predictability around a point [math]\displaystyle{ x_0 }[/math] in phase space. This may be done through the eigenvalues of the Jacobian matrix [math]\displaystyle{ J^0(x_0) }[/math]. These eigenvalues are also called local Lyapunov exponents.[22] (A word of caution: unlike the global exponents, these local exponents are not invariant under a nonlinear change of coordinates.)
The term conditional Lyapunov exponent is normally used in the context of synchronization of chaos, in which two systems are coupled, usually in a unidirectional manner, so that there is a drive (or master) system and a response (or slave) system. The conditional exponents are those of the response system with the drive system treated simply as the source of a (chaotic) drive signal. Synchronization occurs when all of the conditional exponents are negative.[23]