As soon as scientists realized that the evolution of physical systems can be described in terms of mathematical equations, the stability of the various dynamical regimes was recognized as a matter of primary importance. The interest in this question was motivated not only by general curiosity, but also by the need to know, in the nineteenth century, to what extent the behavior of suitable mechanical devices remains unchanged once their configuration has been perturbed. As a result, illustrious scientists such as Lagrange, Poisson, Maxwell and others thought deeply about ways of quantifying stability, both in general and in specific contexts. The first exact definition of stability was given by the Russian mathematician Aleksandr Lyapunov, who addressed the problem in his doctoral thesis of 1892, where he introduced two methods; the first is based on the linearization of the equations of motion and gave rise to what have later been termed Lyapunov exponents (LEs) (Lyapunov 1992).
LEs measure the growth rates of generic perturbations, in a regime where their evolution is ruled by linear equations,
\[
\dot{\bf u} = {\bf J}(t)\, {\bf u} \ ,
\tag{1}
\]
where \(\bf u\) is an \(N\)-dimensional vector and \({\bf J}\) is a (time-dependent) \( N\times N \) matrix. In some contexts, such as that of linear stochastic differential equations, \({\bf J}\) fluctuates because of the presence of disorder or multiplicative noise (Arnold 1986).
More commonly, in the context of deterministic dynamical systems, \({\bf J}\) is the Jacobian of a suitable velocity field \(\bf F\), computed along a trajectory \({\bf x}(t)\) that satisfies the ordinary differential equation
\[
\dot{\bf x} = {\bf F}({\bf x}) \ .
\]
If \({\bf x}(t)={\bf x}_0\) is a solution (i.e. if \({\bf F}({\bf x}_0)=0\)), then the stability of this fixed point is quantified by the eigenvalues of the (constant) operator \({\bf J}\). In this simple case, the LEs \(\lambda_i\) are the real parts of the eigenvalues: they measure the exponential contraction/expansion rate of infinitesimal perturbations. A slightly more complicated example is that of a periodic orbit \({\bf x}(t+T) = {\bf x}(t)\). In this case, it is necessary to integrate Eq. (1) over a time \(T\), to obtain the discrete-time evolution operator \(\bf M\),
\[
{\bf u}(t+T) = {\bf M}\, {\bf u}(t) \ .
\]
From the eigenvalues \(m_i\) of \(\bf M\), one can thereby determine the Floquet exponents \(\mu_i=(\ln m_i)/T\); the LEs \(\lambda_i\) are their real parts.
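As a concrete illustration of the fixed-point case, the following minimal sketch computes the LEs from the eigenvalues of a constant Jacobian; the Lorenz system and its standard parameters are an illustrative choice, not part of the discussion above.

```python
# Minimal sketch: at a fixed point, the LEs are the real parts of the
# eigenvalues of the (constant) Jacobian J. Here J is evaluated at the
# origin of the Lorenz system (an illustrative choice of model).
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
J = np.array([[-sigma, sigma,  0.0],
              [   rho,  -1.0,  0.0],
              [   0.0,   0.0, -beta]])   # Jacobian of the Lorenz flow at (0, 0, 0)

lyap = np.sort(np.linalg.eigvals(J).real)[::-1]
print(lyap)   # one positive exponent: the origin is unstable for rho > 1
```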
Since trajectories are not, in general, periodic, a different approach is required. The most general definition involves the computation of the eigenvalues \(\alpha_i\) of yet another matrix, namely \({\bf M}(t){\bf M}^T(t)\), where \({\bf M}(t)\) is the evolution operator obtained by integrating Eq. (1) up to time \(t\). A typical instance of the behavior of the \(\alpha_i\) is illustrated in the upper part of Figure 1. From the knowledge of the \(\alpha_i\), one naturally introduces the finite-time LEs as
\[
\lambda_i(t) = \frac{\ln \alpha_i(t)}{2t} \ .
\]
Since \(\lambda_i(t)\) is, in general, a fluctuating quantity (see the lower part of Figure 1), it is necessary to consider the infinite-time limit to determine the asymptotic (in time) behaviour. This leads to the following definition of the LEs,
\[
\lambda_i = \limsup_{t\to\infty} \lambda_i(t) \ ,
\]
where the \(\limsup\) is considered to account for the worst possible fluctuations: this is important whenever the stability of a given regime must be assessed. The Oseledets multiplicative ergodic theorem guarantees that the limit exists and that the LEs are the same for almost every initial condition (with respect to the underlying invariant measure) (Oseledets 1968).
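A minimal sketch of this definition for a discrete-time example follows; the Hénon map, its parameters and the observation times are illustrative choices. The finite-time exponents are obtained from the singular values of the product of Jacobians, i.e. from the eigenvalues of \({\bf M}(t){\bf M}^T(t)\).

```python
# Sketch: finite-time exponents lambda_i(t) = ln(alpha_i)/(2t), where the alpha_i
# are the eigenvalues of M(t) M^T(t), i.e. the squared singular values of the
# product M(t) of Jacobians along a Henon-map trajectory (illustrative model).
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])

def henon_jacobian(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0],
                     [b, 0.0]])

x = np.array([0.1, 0.1])
for _ in range(1000):                        # discard the transient
    x = henon(x)
M = np.eye(2)
for t in range(1, 11):
    M = henon_jacobian(x) @ M                # M(t): tangent evolution over t steps
    x = henon(x)
    s = np.linalg.svd(M, compute_uv=False)   # alpha_i = s_i**2
    print(t, np.log(s) / t)                  # fluctuating finite-time exponents
# For longer times the direct product becomes ill-conditioned: this is why the
# QR/Gram-Schmidt procedure described below is preferred in practice.
```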
It is interesting to note that, while it makes sense to determine the imaginary part of the Lyapunov exponents for fixed points and periodic orbits, this question cannot, in general, be addressed for an aperiodic motion. In fact, the \(\alpha_i\) are, by definition, real quantities and there is no way to extend the definition to include rotations. One can at most introduce a rotation number, to characterize the rotation of a generic perturbation around the reference trajectory (Ruelle 1985).
In practice, Lyapunov exponents can be computed by exploiting the natural tendency of an \(n\)-dimensional volume to align along the most expanding \(n\)-dimensional subspace. From the expansion rate of an \(n\)-dimensional volume, one obtains the sum of the \(n\) largest Lyapunov exponents. Altogether, the procedure requires evolving \(n\) linearly independent perturbations, and one is faced with the problem that all vectors tend to align along the same direction. However, as shown in the late 1970s, this numerical instability can be counterbalanced by periodically orthonormalizing the vectors with the Gram-Schmidt procedure (Benettin et al. 1980, Shimada and Nagashima 1979) or, equivalently, with a QR decomposition. As a result, the LEs \(\lambda_i\), naturally ordered from the largest to the most negative one, can be computed: altogether, they are referred to as the Lyapunov spectrum.
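A minimal sketch of this procedure for a simple discrete-time system follows; the Hénon map, its parameters and the iteration counts are illustrative choices.

```python
# Sketch of the QR (Gram-Schmidt) procedure for the full Lyapunov spectrum,
# illustrated on the Henon map; parameters and iteration counts are illustrative.
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])

def henon_jacobian(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0],
                     [b, 0.0]])

def lyapunov_spectrum(n_steps=100_000, n_transient=1_000):
    x = np.array([0.1, 0.1])
    for _ in range(n_transient):                     # relax onto the attractor
        x = henon(x)
    Q = np.eye(2)                                    # orthonormal set of perturbations
    log_sums = np.zeros(2)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(henon_jacobian(x) @ Q)   # evolve and re-orthonormalize
        log_sums += np.log(np.abs(np.diag(R)))       # accumulate local expansion rates
        x = henon(x)
    return log_sums / n_steps                        # ordered Lyapunov spectrum

print(lyapunov_spectrum())   # approximately (0.42, -1.62) for these parameters
```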
The knowledge of the LEs allows determining additional invariants such as the fractal dimension of the underlying attractor and its dynamical entropy.
The Kaplan-Yorke formula (Kaplan and Yorke 1979) provides an upper bound for the information dimension of the attractor,
\[
D_{KY} = J + \frac{\Lambda_J}{|\lambda_{J+1}|} \ ,
\]
where \(\Lambda_j\equiv \sum_{i=1}^j\lambda_i\) and \(J\) is the largest \(j\)-value such that \(\Lambda_j>0\). This equation can be understood in the following way. A strictly positive \(\Lambda_j\) implies that the hyper-volume of a generic \(j\)-dimensional box diverges while spreading over the attractor. This implies that the dimension is larger than \(j\), since it is like asking to measure the "length" of a square: the length of a line covering the square is obviously infinite! For the same reason, \(\Lambda_j<0\) signals that the dimension is smaller than \(j\). Altogether, one can view the Kaplan-Yorke formula as a linear interpolation between the largest \(j\) such that \(\Lambda_j>0\) and the smallest \(j\) such that the opposite is true (the procedure is schematically reproduced in Figure 2). In general, \(D_{KY}\) provides an upper bound to the information dimension, but in three-dimensional flows (two-dimensional maps) and in random dynamical systems it has been proved to coincide with it.
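A minimal sketch of this interpolation follows; the three-exponent spectrum used in the example is a typical Lorenz-like value quoted only for illustration.

```python
# Sketch: Kaplan-Yorke dimension from a Lyapunov spectrum ordered from the
# largest to the most negative exponent.
import numpy as np

def kaplan_yorke_dimension(spectrum):
    lam = np.sort(np.asarray(spectrum, dtype=float))[::-1]
    Lambda = np.cumsum(lam)                  # Lambda_j = sum of the j largest exponents
    positive = np.nonzero(Lambda > 0)[0]
    if positive.size == 0:                   # no expanding volume at all
        return 0.0
    J = positive[-1] + 1                     # largest j (1-based) with Lambda_j > 0
    if J == lam.size:                        # every partial sum is positive
        return float(lam.size)
    return J + Lambda[J - 1] / abs(lam[J])   # linear interpolation between J and J+1

# Illustrative Lorenz-like spectrum (0.9, 0, -14.6): D_KY is slightly above 2.
print(kaplan_yorke_dimension([0.9, 0.0, -14.6]))
```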
The Kaplan-Yorke formula also provides approximate information on the number of active degrees of freedom. In fact, in typical dissipative models, the phase-space dimension is infinite, but the number of independent variables that are necessary to uniquely identify the different points of the attractor is finite and sometimes even small.
Another dynamical invariant that is connected with the LEs is the Kolmogorov-Sinai entropy \(H_{KS}\), which measures the growth rate of the entropy due to the exponential instability of the chaotic motion. In this case, the relationship is expressed by the Pesin formula (Pesin 1977)
\[
H_{KS} = \sum_{\lambda_i>0} \lambda_i \ ,
\]
where the sum is restricted to the strictly expanding directions (see Figure 2 for a schematic representation). In order to take into account the possible fractal structure along the unstable directions (this happens in the case of repellors, i.e. transient chaos), this formula must be extended to
\[
H_{KS} = \sum_{\lambda_i>0} d_i\, \lambda_i \ ,
\]
where \(d_i\) represents the fractal dimension along the \(i\)th direction (in standard chaotic attractors \(d_i=1\)).
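The corresponding computation is elementary; the sketch below accepts optional partial dimensions \(d_i\) for the extended formula, and the example spectrum is again the illustrative one used above.

```python
# Sketch: Pesin-type estimate of H_KS as the (d_i-weighted) sum of the
# positive Lyapunov exponents.
import numpy as np

def ks_entropy(spectrum, d=None):
    lam = np.asarray(spectrum, dtype=float)
    d = np.ones_like(lam) if d is None else np.asarray(d, dtype=float)  # d_i = 1 for standard attractors
    expanding = lam > 0                       # restrict to the expanding directions
    return float(np.sum(d[expanding] * lam[expanding]))

print(ks_entropy([0.9, 0.0, -14.6]))          # 0.9 for the illustrative spectrum
```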
As schematically illustrated in Figure 1, the finite-time LE fluctuates. The central limit theorem guarantees that such fluctuations vanish (as \(1/\sqrt{t}\)) when time goes to infinity. However, the so-called generalized LE (Fujisaka 1983, Benzi et al. 1985)
\[
\mathcal{L}(q) = \lim_{t\to\infty} \frac{1}{qt}\, \ln \left\langle \left(\frac{|{\bf u}(t)|}{|{\bf u}(0)|}\right)^{q} \right\rangle
\]
(in this section, for simplicity, we drop the dependence on the index \(i\)) is sensitive to such fluctuations. It is easy to see that in the limit \(q\to 0\) the usual LE definition is recovered.
The same problem can be approached in a more transparent way by expressing the probability \(P(\lambda,t)\) that a trajectory of length \(t\) is characterized by an exponent \(\lambda\) (in the limit of finite but large enough \(t\)) in terms of the large-deviation function \(g(\lambda)\),
\[
P(\lambda,t) \sim {\rm e}^{-g(\lambda)\, t} \ .
\]
\(g(\lambda)\) is a nonnegative function with a typically quadratic minimum at the usual LE \(\overline \lambda\), where \(g(\overline \lambda) = 0\). This condition implies that the probability of observing \(\lambda=\overline \lambda\) does not vanish (exponentially) for increasing time. \(g(\lambda)\) and \(\mathcal{L}(q)\) are related to one another by a Legendre transform.
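A minimal sketch of how \(\mathcal{L}(q)\) can be estimated from a sample of finite-time exponents follows; the Gaussian sample used below is purely synthetic, standing in for exponents measured over windows of length \(t\).

```python
# Sketch: generalized exponents L(q) estimated from samples of the finite-time
# exponent lambda(t), each measured over a window of length t_window.
import numpy as np

def generalized_les(finite_time_les, t_window, qs):
    lam = np.asarray(finite_time_les, dtype=float)
    out = []
    for q in qs:
        if abs(q) < 1e-12:
            out.append(lam.mean())                       # q -> 0: usual LE
        else:
            # L(q) = (1/(q t)) ln < exp(q t lambda(t)) >
            out.append(np.log(np.mean(np.exp(q * t_window * lam))) / (q * t_window))
    return np.array(out)

# Synthetic example: Gaussian fluctuations of the finite-time exponent;
# L(q) then grows roughly linearly with q, with a slope set by their variance.
samples = np.random.default_rng(0).normal(loc=0.9, scale=0.1, size=100_000)
print(generalized_les(samples, t_window=10.0, qs=[-1.0, 0.0, 1.0, 2.0]))
```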
The large-deviation function \(g\) is a powerful tool to detect deviations from a perfectly hyperbolic behaviour (for instance, by discovering that, even for a positive LE, the domain of definition of \(g\) extends to negative \(\lambda\) values, as a result of homoclinic tangencies).
Generalized LEs are important for establishing the connection with different definitions of fractal dimension: for instance, the correlation dimension, which is measured by implementing the Grassberger-Procaccia algorithm (Grassberger and Procaccia 1983), is connected with \(\mathcal{L}(1)\).
For simplicity, we refer to one-dimensional lattices of length \(N\) and assume that a single variable \(x_i\) is defined on each lattice site. As a result, the phase-space dimension is \(N\). There are two natural limits that one wishes to consider: the thermodynamic and the continuum limit. In the former case, we let \(N\) go to infinity by increasing the number of sites while leaving their mutual distance constant. In the latter case, \(N\) is increased by reducing the spatial separation. In the thermodynamic limit, it has been observed and proven that the LEs come closer to each other in such a way that it makes sense to speak of a Lyapunov spectral density \(\lambda(\rho)\), where \(\rho = i/N\) is the rescaled index (Ruelle 2004, Grassberger 1989).
The existence of a limit Lyapunov spectrum can be interpreted as evidence of the extensive character of space-time chaos. In fact, it means that the entropy \(H_{KS}\) and the fractal dimension \(D_{KY}\) are proportional to the system size. In other words, the dynamics in sufficiently separated regions (of the physical space) are independent of one another.
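The sketch below illustrates this extensivity numerically for a diffusively coupled lattice of logistic maps; the model, coupling, sizes and iteration counts are illustrative choices. The spectra obtained for two different sizes roughly collapse when plotted against the rescaled index.

```python
# Sketch: Lyapunov spectra of a coupled logistic-map lattice for two sizes;
# plotted against the rescaled index (i - 1/2)/N they roughly collapse,
# a signature of extensive space-time chaos (all parameters illustrative).
import numpy as np

def cml_spectrum(N, eps=0.1, a=4.0, n_steps=5_000, seed=0):
    rng = np.random.default_rng(seed)
    f  = lambda x: a * x * (1.0 - x)
    df = lambda x: a * (1.0 - 2.0 * x)
    couple = lambda y: (1 - eps) * y + 0.5 * eps * (np.roll(y, 1, axis=0)
                                                    + np.roll(y, -1, axis=0))
    x = rng.random(N)
    for _ in range(1_000):                        # relax onto the attractor
        x = couple(f(x))
    Q = np.linalg.qr(rng.normal(size=(N, N)))[0]  # N orthonormal tangent vectors
    log_sums = np.zeros(N)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(couple(df(x)[:, None] * Q))   # tangent step + QR
        log_sums += np.log(np.abs(np.diag(R)))
        x = couple(f(x))
    return np.sort(log_sums / n_steps)[::-1]

for N in (16, 32):
    print(N, cml_spectrum(N)[:4], "...")
```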
In the continuum limit, additional (negative) exponents appear, which characterize the fast relaxation phenomena occurring on short spatial scales.
Lyapunov exponents have been introduced with the goal of characterizing the time evolution of perturbations of lumped dynamical systems. However, in spatially extended systems it is important to describe the spatial evolution as well. A first generalization of the LE is obtained by introducing the convective exponent, which describes the growth of an initially localized perturbation (Deissler and Kaneko 1987),
\[
L(v) = \lim_{t\to\infty} \frac{1}{t}\, \ln |u(x=vt,t)| \ ,
\]
where \(v=x/t\) identifies the world line along which the evolution is measured and the initial perturbation \(u(x,0)\) is restricted to some finite interval around \(x=0\).
In chaotic systems with left-right symmetry, \(L(v)\) is symmetric too and attains its maximum value at zero velocity; \(L(0)\) coincides with the standard maximum LE (see Figure 3, left panel). As the velocity increases (in absolute value), \(L(v)\) decreases and eventually becomes negative beyond some critical value \(v_0\), which can be interpreted as the maximal propagation velocity of (infinitesimal) perturbations.
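A minimal sketch of this measurement for a diffusively coupled logistic-map lattice follows (the same kind of illustrative model as above): an initially localized perturbation is evolved in tangent space and its growth is read off along different world lines \(x=vt\).

```python
# Sketch: convective exponents L(v) from the tangent-space spreading of an
# initially localized perturbation in a coupled logistic-map lattice
# (lattice size, coupling, map and times are illustrative choices).
import numpy as np

N, eps, a = 1024, 0.3, 4.0
f  = lambda x: a * x * (1.0 - x)
df = lambda x: a * (1.0 - 2.0 * x)
couple = lambda y: (1 - eps) * y + 0.5 * eps * (np.roll(y, 1) + np.roll(y, -1))

def convective_exponents(t_max=200, velocities=np.linspace(0.0, 1.0, 11)):
    rng = np.random.default_rng(1)
    x = rng.random(N)
    for _ in range(1_000):                           # relax onto the attractor
        x = couple(f(x))
    u = np.zeros(N); u[0] = 1.0                      # perturbation localized at site 0
    log_norm = 0.0
    for _ in range(t_max):
        u = couple(df(x) * u)                        # tangent dynamics
        x = couple(f(x))
        norm = np.linalg.norm(u)
        log_norm += np.log(norm); u /= norm          # keep track of the overall growth
    return [(log_norm + np.log(abs(u[int(round(v * t_max))]) + 1e-300)) / t_max
            for v in velocities]

print(convective_exponents())    # decreases with v; L(0) approximates the maximum LE
```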
Whenever there is no left-right symmetry, it may happen that only perturbations propagating with some finite velocity do expand. In such cases, one speaks of convective instabilities (see the right panel in Figure 3). If the system is open, it locally relaxes back to the previous equilibrium state, once the perturbation has travelled away.
Convective exponents are an example of the additional information that can be extracted by implementing the so-called chronotopic approach (Lepri et al. 1996), which is based on the growth rate of perturbations with an exponential spatial profile, \(u(x) = {\rm e}^{\mu x}u_\mu(x)\) (standard LEs are obtained for \(\mu =0\)). By assuming a generic \(\mu\)-value in the original evolution equations in tangent space, one can determine the generalized temporal Lyapunov spectrum \(\lambda(\rho,\mu)\).
The convective exponents can be obtained by Legendre transforming \(\lambda(0,\mu)\), i.e.
\[
L(v) = \lambda(0,\mu) - \mu v \ , \qquad v = \frac{d\lambda(0,\mu)}{d\mu} \ .
\]
The corresponding geometrical construction is presented in Figure 4. Notice that one can equivalently proceed from \(L(v)\) to \(\lambda(0,\mu)\), in which case \(\mu\) is determined from the slope \(dL/dv\).
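Numerically, the transform amounts to an extremization over the sampled \(\mu\) values; a minimal sketch follows (the synthetic, diffusion-like curve is an assumption used only to test the routine, and signs depend on the convention adopted for the exponential profile).

```python
# Sketch: numerical Legendre transform from a sampled lambda(0, mu) curve to the
# convective spectrum L(v); the min/max and signs depend on the convention
# adopted for the exponential profile e^{mu x}.
import numpy as np

def legendre_to_Lv(mu, lam, velocities):
    """mu, lam: samples of lambda(0, mu); returns L(v) on the given velocities."""
    mu, lam = np.asarray(mu, dtype=float), np.asarray(lam, dtype=float)
    return np.array([np.min(lam - mu * v) for v in velocities])

# Synthetic, diffusion-like example: lambda(0, mu) = lam0 + D mu^2 / 2
mu = np.linspace(-3.0, 3.0, 601)
lam0, D = 0.5, 1.0
Lv = legendre_to_Lv(mu, lam0 + 0.5 * D * mu**2, np.linspace(-1.5, 1.5, 7))
print(Lv)          # close to lam0 - v^2 / (2 D), the expected parabola
```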
By exchanging the role of the space and time variables, one can define the complementary spatial Lyapunov exponents \(\mu(\lambda,\rho)\). In one-dimensional systems, it has been conjectured that the two kinds of spectra are related to one another and follow from the existence of a superinvariant (as it is independent of the space-time parametrization) entropy potential (Lepri et al. 1997).
While the LEs correspond to the limit eigenvalues of a suitable product of matrices, there is no corresponding unique set of eigenvectors, as they depend on the current position of the phase point. In fact, this dependence reflects the typically nonlinear shape of both the stable and the unstable manifolds. One cannot, however, directly invoke the vectors \({\bf V}_i\) arising from the Gram-Schmidt orthogonalization procedure, as they are not covariant: the vector \({\bf V}_i({\bf x})\) defined in \(\bf x\) is not transformed into \({\bf V}_i({\bf y})\) when \(\bf x\) is mapped onto \(\bf y\). A proper definition requires generalizing the concept of eigenvectors of linear operators (Eckmann and Ruelle 1985). Roughly speaking, the covariant vectors can be obtained by iterating forward and backward along the same trajectory, so as to identify the \(i\)th vector \({\bf W}_i\) as the (backward) most expanding direction within the (forward) most expanding subspace of dimension \(i\). Effective algorithms for the determination of the covariant vectors have been proposed only recently (Wolfe and Samelson 2007, Ginelli et al. 2007).
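A minimal sketch of the backward procedure of Ginelli et al. (2007) follows, again for the illustrative Hénon map: the QR factors from the forward iteration are stored, and an upper-triangular coefficient matrix is then iterated backward.

```python
# Sketch of the dynamical algorithm for covariant Lyapunov vectors:
# forward QR iteration (storing Q_t and R_t), then backward iteration of an
# upper-triangular coefficient matrix C; illustrated on the Henon map.
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])

def henon_jacobian(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0], [b, 0.0]])

n_steps = 2_000
x = np.array([0.1, 0.1])
for _ in range(1_000):                       # relax onto the attractor
    x = henon(x)
Qs, Rs, Q = [], [], np.eye(2)
for _ in range(n_steps):                     # forward pass: store the QR factors
    Q, R = np.linalg.qr(henon_jacobian(x) @ Q)
    s = np.sign(np.diag(R)); Q, R = Q * s, s[:, None] * R   # enforce positive diagonal
    Qs.append(Q); Rs.append(R)
    x = henon(x)

C = np.triu(np.ones((2, 2)))                 # generic upper-triangular coefficients
W = [None] * n_steps
for t in range(n_steps - 1, -1, -1):         # backward pass: C_{t-1} ~ R_t^{-1} C_t
    W[t] = Qs[t] @ C                         # covariant vectors (columns) at step t
    C = np.linalg.solve(Rs[t], C)
    C /= np.linalg.norm(C, axis=0)           # normalize each column
# The vectors are well converged only far from both ends of the stored window.
```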
In some cases it is useful, if not even necessary, to consider finite-amplitude perturbations. Apart from experimental time series, where, in the absence of a model, one is forced to consider finite distances, it is useful to extend the concept of Lyapunov exponents to regimes where nonlinearities are possibly relevant.
Finite-amplitude exponents may be defined in the following way. Given any two nearby trajectories, let \(\Delta(t)\) denote their mutual distance and measure the times \(t_n\) when \(|\Delta(t_n)|\) crosses (for the first time) a series of exponentially spaced thresholds \(\theta_n\) (\(\theta_n = r \theta_{n-1}\) - see Figure 5).
By averaging the time separation between consecutive crossings over different pairs of trajectories, one obtains the finite-amplitude Lyapunov exponent (Aurell et al. 1996)
\[
\ell(\theta_n) = \frac{\ln r}{\langle t_n-t_{n-1}\rangle} \ .
\]
For small enough thresholds, one recovers the usual (maximum) Lyapunov exponent, while for large amplitudes, \(\ell\) saturates to zero, since a perturbation cannot be larger than the size of the accessible phase-space. In the intermediate range, \(\ell\) tells us how the growth of a perturbation is affected by nonlinearities.
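A minimal sketch of this construction follows, again for the illustrative Hénon map; the thresholds, threshold ratio, ensemble size and time cap are arbitrary choices.

```python
# Sketch: finite-amplitude (finite-size) Lyapunov exponent for the Henon map,
# from the first-crossing times of exponentially spaced thresholds theta_n.
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])

def fsle(theta_min=1e-9, r=np.sqrt(2.0), n_thresholds=40, n_pairs=500, seed=1):
    rng = np.random.default_rng(seed)
    thresholds = theta_min * r ** np.arange(1, n_thresholds + 1)
    crossing_times = []
    for _ in range(n_pairs):
        x = np.array([0.1, 0.1])
        for _ in range(int(rng.integers(100, 1_000))):   # random point on the attractor
            x = henon(x)
        d = rng.normal(size=2); d *= theta_min / np.linalg.norm(d)
        y, t, times, n = x + d, 0, [], 0
        while n < n_thresholds and t < 10_000:
            x, y, t = henon(x), henon(y), t + 1
            while n < n_thresholds and np.linalg.norm(x - y) >= thresholds[n]:
                times.append(t); n += 1                  # first crossing of threshold n+1
        if n == n_thresholds:
            crossing_times.append(times)                 # keep only complete records
    mean_t = np.mean(crossing_times, axis=0)
    dt = np.diff(np.concatenate(([0.0], mean_t)))        # <t_n - t_{n-1}>
    return thresholds, np.log(r) / dt

theta, ell = fsle()
print(ell[:5], "...")    # close to the maximum LE for such small thresholds
```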
Since the definition of the finite-amplitude LE involves neither an infinite-time limit nor that of infinitesimal perturbations, it is not mathematically well posed: the result depends on the selection of the variables. Nevertheless, it may profitably be used to extract useful information on the presence of collective dynamics, where one would like to distinguish between the stability of microscopic and macroscopic perturbations, or in the presence of different time scales, when some directions saturate very rapidly.
LEs prove useful in various contexts. Within dynamical systems, LEs, besides providing a detailed characterization of chaotic dynamics, can help to assess various forms of synchronization (Pikovsky 2007). Another context where LEs help to clarify the underlying dynamics is chaotic advection, i.e. the evolution of particles transported by a (possibly time-dependent) velocity field,
\[
\dot{\bf x} = {\bf v}({\bf x},t) \ ,
\]
where \({\bf x}(t)\) denotes the Lagrangian trajectory of a generic particle in the physical space. In this case, the existence of a positive Lyapunov exponent is synonymous with chaotic mixing (Ottino 1989).
Another prominent example is Anderson localization of the eigenfunctions \( \psi(x)\) of the Schrödinger equation in the presence of disorder. In this case, the object of study is the spatial dependence of \( \psi(x)\) (see also the section on the chronotopic approach). In one-dimensional systems, in the tight-binding approximation, \( x \) is an integer variable and the spatial evolution corresponds to multiplying by a \(2\times2\) random matrix. This is the so-called transfer-matrix approach: the invariance under spatial reversal implies that the two (spatial) LEs are opposite to each other. The most important result is that the positive LE coincides with the inverse of the localization length \( \ell_c \) (Borland 1963, Furstenberg 1963). The transfer-matrix approach can also be applied in higher-dimensional spaces, in which case the inverse localization length coincides with the smallest positive LE.
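A minimal sketch of the one-dimensional transfer-matrix computation follows, for the tight-binding recursion \(\psi_{n+1} = (E-\epsilon_n)\psi_n - \psi_{n-1}\) with unit hopping; the energy, disorder strength and chain length are illustrative choices.

```python
# Sketch: inverse localization length as the Lyapunov exponent of a product of
# random 2x2 transfer matrices for the 1D Anderson (tight-binding) model,
# with i.i.d. site energies uniform in [-W/2, W/2].
import numpy as np

def inverse_localization_length(E=0.0, W=1.0, n_sites=100_000, seed=0):
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for eps in rng.uniform(-W / 2, W / 2, size=n_sites):
        T = np.array([[E - eps, -1.0],
                      [1.0, 0.0]])           # transfer matrix of a single site
        v = T @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)           # accumulate the expansion rate
        v /= norm                            # renormalize to avoid overflow
    return log_growth / n_sites              # positive spatial LE = 1 / ell_c

print(inverse_localization_length())   # small for weak disorder: ell_c of order 100 sites
```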