A stochastic dynamical system is a dynamical system subjected to the effects of noise. Such effects of fluctuations have been of interest for over a century since the seminal work of Einstein (1905). Fluctuations are classically referred to as "noisy" or "stochastic" when their suspected origin implicates the action of a very large number of variables or "degrees of freedom". For example, the action of many water molecules on the motion of a large protein can be seen as noise. In principle the equations of motion for such high-dimensional dynamics can be written and studied analytically and numerically. In practice, however, one can study a system subjected to the action of this large number of variables by coupling its deterministic equations of motion to a "noise" term that simply mimics their perpetual action.
The coupling of noise to nonlinear deterministic equations of motion can lead to non-trivial effects (Schimansky-Geier 1985; Horsthemke 1985; Haenggi, Talkner and Borkovec, 1990; Haenggi and Marchesoni 2005). For example, noise can stabilize unstable equilibria and shift bifurcations, i.e. the parameter value at which the dynamics change qualitatively (Arnold 2003). Noise can lead to transitions between coexisting deterministic stable states or attractors. More interestingly still, noise can induce new stable states that have no deterministic counterpart. At the very least, noise excites internal modes of oscillation in both linear and nonlinear systems. In the latter case, it can even enhance the response of a nonlinear system to external signals (Jung, 1993; Gammaitoni et al., 1998; Lindner et al. 2004).
It is often thought that the action of noise merely amounts to a blurring of trajectories of the deterministic system. That is indeed the case for "observational" or "measurement" noise. However, in nonlinear systems where noise acts as a driving force, noise can drastically modify the deterministic dynamics. We discuss these issues using a basic level of description which couples a stochastic process to a deterministic equation of motion: the stochastic differential equation (SDE).
Noise is modeled as a random variable \(\eta(t)\) that fluctuates aperiodically in time. This variable takes on a different set of values every time we sample it. However, to be a useful quantity for describing the real world, this random variable should have well-defined statistical properties, which are hopefully experimentally accessible. Examples are a transition probability density from a given state \( \eta_o \) at time \( t_o \) to a value \( \eta \) at time \( t\ ,\) i.e. \(\rho(\eta,t; \eta_o,t_o)\ ,\) the stationary limit of this density function \(\rho_s(\eta)\) in the limit \( t-t_o \rightarrow \infty \ ,\) and a two-point autocorrelation function \(C(t,s)=\langle \eta(t)\eta(s) \rangle\ .\)
Stationary processes are those for which the state variable at different times has the same statistics; in other words, all joint probability densities are invariant under time translation. In this case, the two-point correlation function is a function of the interval \( |t-s| \ ,\) rather than of the two arguments separately. It is then related to the power spectral density of the process by Fourier transform, via the Wiener-Khintchine relations, which are valid for stationary processes. Its characteristic decay time gives a measure of the noise "correlation time", i.e. of the level of correlation between successive values of the stochastic process. The brackets denote averaging over an infinite ensemble of possible realizations of the noise in the theoretical case, or over a finite ensemble in an experimental or numerical setting. Such ensemble averages can be replaced by time averages if the system is ergodic. The integral of the autocorrelation function over all times is the noise intensity.
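These relations can be checked numerically. Below is a minimal Python sketch (all names and parameter values are illustrative choices, not from the original text) that estimates the autocorrelation of a long stationary record by a time average, and compares the resulting noise intensity with the power spectral density at zero frequency; a discrete AR(1) sequence serves as a stand-in for a stationary noise process.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Long record of a stationary zero-mean process; a discrete AR(1) sequence
# with coefficient 0.9 and unit-variance innovations serves as a stand-in
n = 1 << 14
eta = np.zeros(n)
for i in range(n - 1):
    eta[i + 1] = 0.9 * eta[i] + rng.standard_normal()

# Ergodicity: estimate the autocorrelation C(k) by a time average
C = np.correlate(eta, eta, mode="full")[n - 1:] / n

# Noise intensity = sum of C over all lags (truncated once C has decayed);
# by the Wiener-Khintchine relations this equals the spectral density at
# zero frequency, which for this AR(1) process is 1/(1 - 0.9)**2 = 100
intensity = C[0] + 2.0 * C[1:200].sum()
print(intensity)   # should be close to 100
```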
Note also that the density can be time-dependent, some of its moments may not be defined, and the correlation function may depend on both its arguments, or only on their difference if the process is stationary. Figure 1 shows different realizations of a noise process known as the Wiener process. The density \(\rho(\eta,t)\) can be constructed using a large number of realizations.
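As a minimal sketch of this construction (parameter values are illustrative), one can generate an ensemble of Wiener process realizations from independent Gaussian increments and histogram their values at a fixed time to estimate \(\rho(\eta,t)\ :\)

```python
import numpy as np

rng = np.random.default_rng(seed=1)

T, dt = 10.0, 1e-2       # total time and time step (illustrative values)
n_steps = int(T / dt)
n_real = 5000            # number of realizations in the ensemble

# Wiener process: W(0) = 0 and independent Gaussian increments of variance dt
dW = np.sqrt(dt) * rng.standard_normal((n_real, n_steps))
W = np.cumsum(dW, axis=1)

# Estimate the density rho(eta, t) at t = T from the ensemble of endpoints;
# for the Wiener process it should approach a Gaussian of variance T
hist, edges = np.histogram(W[:, -1], bins=60, density=True)
```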
One of the difficulties with modeling noise is that we may not have access to the noise variable itself, but rather, to a state variable perturbed by one or more sources of noise. Thus, one may have to make assumptions about the nature of the noise and its coupling to the dynamical state variables. The accuracy of these assumptions can later be assessed by looking at the agreement between the predictions of the resulting model and the experimental data.
In the case of observational noise, the dynamical system evolves deterministically, but the measurements on this system are contaminated by noise. For example, consider a one-dimensional dynamical system described by one state variable \(x\) with the following time evolution: \[\tag{1} \frac{dx}{dt} = a(x;\mu ) \]
where \(\mu\) is a parameter. Then observational noise corresponds to the measurement of \(y(t) \equiv F(x(t)) +\eta (t)\) where \(F\) is a static function. Here the measurement \(y\ ,\) but not the evolution of the system \(x\ ,\) is affected by the presence of noise. While this is often an important source of noise, and the simplest to deal with mathematically, it is also the most boring form of noise: it merely blurs deterministic solutions of \(x(t)\ .\)
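A minimal Python sketch of this situation (the linear flow, the identity readout function and the noise level are illustrative choices) integrates a noise-free dynamics and then adds measurement noise to the recorded signal only:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Noise-free dynamics dx/dt = -x, integrated with explicit Euler
dt, n_steps = 0.01, 1000
x = np.empty(n_steps)
x[0] = 1.0
for n in range(n_steps - 1):
    x[n + 1] = x[n] - x[n] * dt   # the dynamics itself is deterministic

# Observational noise affects only the recorded signal y = F(x) + eta,
# here with F(x) = x and a small Gaussian measurement error
eta = 0.05 * rng.standard_normal(n_steps)
y = x + eta                        # y blurs x(t) but does not alter it
```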
Noise can also affect the parameter \(\mu\ .\) This "parametric" noise can mimic the fluctuations in the environment of the system, such as pressure or reagent concentration. Mathematically, we can model this type of noisy forcing by setting \(\mu=\langle\mu\rangle+\eta(t)\ ,\) where \(\langle\mu\rangle\) denotes the constant average value of the coupling parameter and \(\eta(t)\) represents the fluctuations. The deterministic evolution equation Eq.(1) becomes a stochastic differential equation (SDE) in the presence of the noisy forcing, and can lead to either of the following two possibilities.
A classification exists according to how the noise and dynamical variable interact. If the coefficient of \(\eta\) in the evolution equation is independent of the state \(x\) of the system: \[\tag{2} \frac{dx}{dt} = a(x;\langle\mu \rangle ) + \eta (t), \]
the noise is said to be "additive". In other words, the noise is simply added to the deterministic part of the dynamics.
Alternatively, one can have multiplicative noise, for which the coefficient of the noise depends on the value of one or many state variables. In such a case, the evolution equation would take the form: \[\tag{3} \frac{dx}{dt} = a(x;\langle\mu\rangle)+b(x)\eta (t). \]
Now the strength of the noise is dependent on \(x(t) \ ,\) so if e.g. \(b(x)\) is large at time \(t\ ,\) the effect of noise will be large at that time. Note that the functions \(a\) and \(b\) depend on time by virtue of their dependence on the state variable \(x(t)\ .\)
A commonly used simple process is Gaussian white noise, denoted as \(\xi(t)\ ,\) where "Gaussian" refers to the shape of the distribution, whereas "white" refers to the fact that the autocorrelation vanishes for \(t \neq s\ ,\) i.e. the correlation time is zero. Equivalently, the Fourier transform of the autocorrelation (the power spectrum) is flat. Specifically, the autocorrelation is \(\langle\xi(t )\xi(t')\rangle=\delta(t-t')\ .\) The noise \(\xi (t)\) is an extremely "spiky" looking function, which in fact is nowhere differentiable. It is the time derivative of the Wiener process (also known as Brownian motion), i.e. \[ \frac{dW}{dt} = \xi(t) \] with \(W(0)=0\) (by definition). The Wiener process is a Markov process that begins at zero and has Gaussian transition probabilities. The intensity of the white noise is the integral of the autocorrelation function, here equal to 1; this can be scaled by a factor \(D\) to achieve other intensities. Gaussian white noise is a good approximation to a colored noise process (see below) in the case where the characteristic time scales of the deterministic system are much larger than the noise correlation time. This is called the quasi-white approximation.
The Ornstein-Uhlenbeck process (OU) was proposed to model the velocity of a particle executing Brownian motion (its position is then obtained by integration). It is the only stationary Markovian process that is Gaussian and a diffusion process. Its realizations are continuous, and successive values are correlated exponentially. This latter property makes the OU process a "colored" noise. It is characterized by two parameters: its variance \(\sigma^2\) and its correlation time \(\tau\ .\) This noise is particularly useful to investigate the effects of the noise correlation time on the evolution of a nonlinear system. Different scalings of the OU process are used in the literature. A common scaling for the time evolution equation of the OU noise \(y\) is: \[ \frac{dy}{dt} = -\frac{y}{\tau} + \sqrt{\frac{2\sigma^2}{\tau}}\xi(t) \] where \(\xi\) is a zero-mean Gaussian white noise with autocorrelation function \(\langle \xi(t)\xi(s)\rangle = \delta (t-s)\ .\) The mean of \(y\) is zero, and its stationary density is \[ p(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp \left[ -\frac{y^2}{2\sigma^2}\right] \,, \] which is independent of the correlation time. Its autocorrelation function is: \[ C(t,s)=C(|t-s |)=\langle y(t)y(s)\rangle = \sigma^2 \exp\left[ -|t-s |/\tau \right] \]
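Because the OU transition probabilities are Gaussian with known mean and variance, the process can be simulated exactly for any time step. Here is a minimal Python sketch (parameter values are illustrative) using this exact one-step update; the sample variance should approach \(\sigma^2\ :\)

```python
import numpy as np

rng = np.random.default_rng(seed=2)

tau, sigma = 1.0, 0.5       # correlation time and standard deviation
dt, n_steps = 0.01, 200_000

# Exact one-step update for the OU process (valid for any dt):
# y(t+dt) = y(t)*exp(-dt/tau) + Gaussian of variance sigma^2*(1 - exp(-2dt/tau))
decay = np.exp(-dt / tau)
noise_std = sigma * np.sqrt(1.0 - decay**2)

y = np.empty(n_steps)
y[0] = sigma * rng.standard_normal()   # start in the stationary density
for n in range(n_steps - 1):
    y[n + 1] = decay * y[n] + noise_std * rng.standard_normal()

print(y.var())   # should approach sigma**2 = 0.25
```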
The noise intensity is the integral of the autocorrelation over all times - and by the Wiener-Khintchine relations, the value of the power spectral density at the origin \(S(0)\ .\) For the OU process in this scaling, this intensity is \(\sigma^2\tau\ .\) The variance of the OU process is equal to \(C(0)\ ,\) i.e. the integral of the power spectral density; here it is equal to \(\sigma^2\ .\) One can explore limits of the parameters by keeping either the intensity or the variance constant. For example, in Fig.2 are shown two realizations of the OU process for two values of the noise correlation time at constant intensity, i.e. \(\sigma^2\tau = const.\) Continuing this process for smaller correlation times yields the Gaussian white noise limit.
Another important type of noise is the Poisson dichotomous process, characterized by a discrete two-state space \(\xi(t)=\pm \Delta\) with an exponentially decaying autocorrelation function. It is also referred to as the random telegraph signal. This type of noise is thus described by its amplitude \(\Delta\) and its correlation time \(\tau\ .\) One of its advantages is that it leads to an exact stationary probability density for one-variable dynamical systems (see e.g. L'Heureux and Kapral 1988). In this case it is possible to obtain a linear equation which describes the time-evolution of the probability density. This equation is more complicated than the Fokker-Planck equation (see below), but the stationary density can still be obtained analytically, at least for the case where there is only one dynamical variable. In contrast, no exact evolution equation for the probability density of the state variable can be obtained for the case of the OU noise. Also, there exists a limit of dichotomous noise that is white shot noise - a sequence of delta-function spikes - from which it is possible to transform to Gaussian white noise.
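A minimal Python sketch of a random telegraph signal (parameter values are illustrative; identifying the switching rate per state with \(1/(2\tau)\) assumes the convention \(C(t)=\Delta^2 e^{-|t|/\tau}\)):

```python
import numpy as np

rng = np.random.default_rng(seed=3)

delta, tau = 1.0, 0.5      # amplitude and correlation time (illustrative)
rate = 1.0 / (2.0 * tau)   # flip rate per state, so C(t) = delta**2*exp(-|t|/tau)
dt, n_steps = 0.001, 100_000

xi = np.empty(n_steps)
state = delta if rng.random() < 0.5 else -delta
for n in range(n_steps):
    # switch state with probability rate*dt in each small time step
    if rng.random() < rate * dt:
        state = -state
    xi[n] = state
```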
An important distinction arises for stochastic differential equations that involve Gaussian white noise. A rewriting of a stochastic differential equation in integral form yields \[ x(t) = x(0) + \int_0^t dt' a(x(t'))+\int_0^t dW(t') b(x(t')) \,. \] where we have removed the explicit dependence on parameters for simplicity. There are in principle an infinite number of interpretations of the stochastic integral on the right hand side, depending on the choice of the position of \(t'\) in the limiting procedure that takes a finite sum to an integral of the non-differentiable Wiener process (see e.g. Gardiner 1985). Two main interpretations are used in the literature, known as the Ito and Stratonovich interpretations. It is important to establish which interpretation or "calculus" one assumes at the outset of an analysis, as this may influence the resulting form of the stochastic differential equation (SDE). It will further affect the kind of calculus to use upon making variable changes. The Stratonovich calculus obeys the usual laws of calculus (such as for changes of variables), but this is not the case for the Ito calculus. Nevertheless it is possible to convert from one form of calculus to the other, and to restate an SDE in one form into the other (as well as their corresponding Fokker-Planck equations - see below). The properties obtained with both calculi are identical when the Gaussian white noise is additive. Further, it is necessary to use a numerical integration method (Kloeden and Platen 1992) that is compatible with the chosen calculus in order to match up simulation to theory. For example, the explicit Euler method is compatible with the Ito interpretation. In the following text the Ito calculus is used.
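The difference between the two calculi can be seen numerically for multiplicative noise. The following Python sketch (the linear SDE \(dx/dt = ax + bx\,\xi(t)\) and all parameter values are illustrative) integrates the same equation with an Ito-compatible explicit Euler scheme and with a Stratonovich-compatible Heun-type scheme; for this example the Ito mean behaves as \(e^{at}\) while the Stratonovich mean behaves as \(e^{(a+b^2/2)t}\ :\)

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Illustrative linear multiplicative SDE: dx/dt = a*x + b*x*xi(t)
a, b = -1.0, 0.5
dt, n_steps, n_real = 1e-3, 1000, 20_000
x_ito = np.full(n_real, 1.0)
x_str = np.full(n_real, 1.0)

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_real)
    # Ito: explicit Euler-Maruyama, coefficients evaluated at the left endpoint
    x_ito = x_ito + a * x_ito * dt + b * x_ito * dW
    # Stratonovich: Heun-type predictor-corrector (midpoint-like evaluation)
    x_pred = x_str + a * x_str * dt + b * x_str * dW
    x_str = x_str + 0.5 * a * (x_str + x_pred) * dt + 0.5 * b * (x_str + x_pred) * dW

# At t = 1: <x>_Ito ~ exp(-1) = 0.37, while <x>_Strat ~ exp(-1 + 0.125) = 0.42
print(x_ito.mean(), x_str.mean())
```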
One may be interested in the behavior of individual trajectories, features of which can be compared to those from experimental measurements, or in the evolution of probability densities. The SDE approach is concerned with the former, and involves either exact or approximate analytical solutions, or numerical solutions; the Fokker-Planck approach (or more generally, the Chapman-Kolmogorov approach - see below) focusses on time-dependent probability densities. The SDE approach can also be used to compute densities relevant to the latter approach.
The SDE of the type seen in Eq.(3) defines a "Langevin equation", which is particularly interesting in the case where \(a(x,\langle\mu\rangle)\) is nonlinear in \(x\ .\) One can numerically integrate such a nonlinear Langevin equation with flow \(a(x,\langle\mu\rangle )\) using a simple Euler-Maruyama method (Sancho et al. 1982; Kloeden and Platen 1992) with a fixed time step \(\Delta t\ :\) \[ x(t+\Delta t) = x(t) + a(x,t;\langle\mu\rangle) \Delta t + b(x,t)\sqrt{\Delta t}\, \Delta W_n. \] where we have added an explicit time dependence in the functions \(a\) and \(b\) for generality (for example they can include external sinusoidal forcing). The variables \({\Delta W_n}\) are known as increments of the Wiener process; in this scaling they are independent zero-mean Gaussian random numbers of unit variance, generated for example by using a pseudo-random number generator in combination with the Box-Muller algorithm. Such generators are in fact high-dimensional chaotic systems. Such algorithms must be "seeded", i.e., provided with an initial condition. One of the results of the theory of Gaussian white noise is that the random term in this scheme is multiplied by the square root of the time step rather than by the time step itself. Appropriate averages of the relevant properties can be computed over many realizations, each with a different seed. More accurate numerical algorithms for solving SDEs are also available, and are easily generalized to multiplicative noise processes of the type shown in Eq.(3) (see e.g. Sancho et al., 1982; Kloeden and Platen, 1992; Honeycutt and Fox 1992).
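A minimal Python implementation of this scheme (the bistable flow \(a(x)=x-x^3\) with additive noise, and all parameter values, are illustrative choices, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(seed=5)   # the generator must be seeded

def euler_maruyama(a, b, x0, dt, n_steps):
    """Integrate dx/dt = a(x,t) + b(x,t)*xi(t) in the Ito sense."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        t = n * dt
        dW = rng.standard_normal()    # zero-mean, unit-variance Gaussian number
        x[n + 1] = x[n] + a(x[n], t) * dt + b(x[n], t) * np.sqrt(dt) * dW
    return x

# Illustrative bistable flow a(x) = x - x**3 with additive noise of intensity D
D = 0.1
traj = euler_maruyama(lambda x, t: x - x**3,
                      lambda x, t: np.sqrt(2.0 * D),
                      x0=0.0, dt=1e-3, n_steps=100_000)
# traj exhibits noise-induced transitions between the states x = -1 and x = +1
```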
One can study how an ensemble of initial conditions, characterized by an initial density, propagates under the action of the SDE Eq.(3). The evolution of this density is governed by a deterministic partial differential equation in the density variable \(\rho (x,t)\ ,\) known as the Fokker-Planck equation (Gardiner 1985, Risken 1989). In one-dimension, this equation reads: \[\tag{4} \frac{\partial \rho}{\partial t} = \frac{1}{2} \frac{\partial^2\left[ b^2(x)\rho\right]}{\partial x^2} - \frac {\partial\left[ a(x)\rho\right]}{\partial x} \]
Note that the probabilistic aspect of the problem has been shifted from individual realizations of a SDE to a probability density that evolves according to a deterministic linear evolution equation.
Setting the left hand side of Eq.(4) to zero and solving the resulting ordinary differential equation yields the stationary density, \(\rho_s \equiv \rho (x,\infty )\ .\) For simple systems, it is possible to calculate \(\rho_s\ ,\) and sometimes even \(\rho (x,t)\ .\) However, for general nonlinear problems, usually one can at best approximate \(\rho_s\ .\) In either case, a knowledge of the stationary probability density gives us a wealth of statistical information in the asymptotic regime. One can also solve the Fokker-Planck equation numerically, or adopt the Langevin approach, and estimate the density directly from realizations of the SDE.
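For one-variable systems with zero-flux boundary conditions and additive noise \(b=\sqrt{2D}\ ,\) the stationary solution of Eq.(4) is \(\rho_s(x) \propto \exp\left[\int^x a(x')\,dx'/D\right]\ .\) A minimal Python sketch for the illustrative bistable flow \(a(x)=x-x^3\) used above:

```python
import numpy as np

# Stationary density of dx/dt = a(x) + sqrt(2D)*xi(t) with a(x) = x - x**3:
# rho_s(x) ∝ exp( (x**2/2 - x**4/4) / D ), from integrating a(x)/D
D = 0.1
x = np.linspace(-2.0, 2.0, 401)
rho = np.exp((x**2 / 2 - x**4 / 4) / D)
rho /= rho.sum() * (x[1] - x[0])   # normalize numerically on the grid

# rho has two maxima, at x = +1 and x = -1: the two "stochastic states";
# a histogram of long Euler-Maruyama trajectories should converge to this curve
```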
The foundation for the study of stochastic dynamical systems is built on the Chapman-Kolmogorov equation which, for a continuous state space for the variable \(x\) and the time ordering \(t_1\ge t_2 \ge t_3\ ,\) reads: \[ p(x_1,t_1|x_3,t_3)=\int \,dx_2 p(x_1,t_1|x_2,t_2) p(x_2,t_2|x_3,t_3) \,. \] It embodies the Markov assumption according to which singly (as opposed to multiply) conditioned probabilities are sufficient to compute transition probabilities in state space. This is a statement of the fact that only the present state is necessary to compute the future state; the past is irrelevant for this computation. This Markov assumption is not strictly valid in any physical setting, where the immediate history will play a role in the future evolution (see Gardiner 1985; Risken 1989). Nevertheless, the mathematical idealization that is the Markov process is useful for describing reality.
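As a standard consistency check (a textbook example, not part of the original text), the Gaussian transition density of the Wiener process, \(p(x,t|y,s) = e^{-(x-y)^2/[2(t-s)]}/\sqrt{2\pi(t-s)}\ ,\) satisfies the Chapman-Kolmogorov equation, since the convolution of two Gaussians is again a Gaussian whose variance is the sum of the variances: \[ \int dx_2\, \frac{e^{-(x_1-x_2)^2/[2(t_1-t_2)]}}{\sqrt{2\pi (t_1-t_2)}}\, \frac{e^{-(x_2-x_3)^2/[2(t_2-t_3)]}}{\sqrt{2\pi (t_2-t_3)}} = \frac{e^{-(x_1-x_3)^2/[2(t_1-t_3)]}}{\sqrt{2\pi (t_1-t_3)}}\,. \]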
The differential form of the Chapman-Kolmogorov equation can also be studied. For a one-dimensional system it reads (multivariate forms can be found in Gardiner 1985; Risken 1989): \[ \frac{\partial p(z,t|y,s)}{\partial t} = - \frac{\partial}{\partial z} \left[a(z,t)p(z,t|y,s)\right] + \frac{1}{2} \frac{\partial^2}{\partial z^2} \left[ b(z,t) p(z,t|y,s)\right] + \int\, dx W(z|x,t) p(x,t|y,s) - \int\, dx W(x|z,t)p(z,t|y,s) \,\,, \] where the conditional quantity \(W(x|z,t)\ ,\) a transition probability for discrete jumps in the state space, must be distinguished from the Wiener process used above. This equation assumes the following definitions, with \(\epsilon=|x-z |\ :\) \[ \lim_{\Delta t\rightarrow 0} p(x,t+\Delta t|z,t)/\Delta t = W(x|z,t) \] \[ \lim_{\Delta t\rightarrow 0} \frac{1}{\Delta t} \int_{\epsilon} dx (x-z) p(x,t+\Delta t|z,t) = a(z,t) + O(\epsilon ) \] \[ \lim_{\Delta t\rightarrow 0} \frac{1}{\Delta t} \int_{\epsilon} dx (x-z)^2 p(x,t+\Delta t|z,t) = b(z,t)+O(\epsilon ) \,. \] It can be shown that the quantities \(a\) and \(b\) are associated with continuous motion, while \(W\) is associated with discontinuous motion.
The differential form of the Chapman-Kolmogorov equation in which \(a(z,t)=b(z,t)=0\) is known as a Master equation. Its solutions are constant between finite jumps whose statistics are governed by \(W(x|z,t)\ .\) The jumps occur at discrete time points, and accordingly this process is sometimes called a jump process.
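A minimal Python sketch of such a jump process (a state that jumps by \(\pm 1\) at a constant total rate; the rate, the jump rule and all parameter values are illustrative stand-ins for a general \(W(x|z,t)\)):

```python
import numpy as np

rng = np.random.default_rng(seed=6)

# The state stays constant for exponentially distributed waiting times,
# then jumps by +1 or -1 with equal probability (constant total rate W0)
W0, T = 1.0, 100.0
t, x = 0.0, 0
times, states = [t], [x]
while t < T:
    t += rng.exponential(1.0 / W0)          # waiting time until the next jump
    x += 1 if rng.random() < 0.5 else -1    # jump destination drawn from W
    times.append(t)
    states.append(x)
```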
For \(b=W=0\ ,\) the differential form of the Chapman-Kolmogorov equation is known as the Liouville equation, which describes completely deterministic motion as in classical mechanics.
If the quantity \(W(x|z,t)\) is zero, the differential form of the Chapman-Kolmogorov equation reduces to the Fokker-Planck equation. The process then has continuous paths. The quantities \(a\) and \(b\) are known as the drift and diffusion of the process, identified respectively with the functions \(a\) and \(b^2\) in Eq.(3).
Noise-induced states are a nontrivial effect of noise. Their study requires the prior definition of the notion of a "state" in a stochastic sense, distinct from that of a "state variable". A stochastic state is the analogue of an attractor in a deterministic dynamical system. Specifically, it is a value of the dynamical variable at which the stationary probability density has a maximum. There can be more than one such state. These states may be the same as in the noiseless case. However, the positions and even the number of stochastic states may differ from the deterministic case.
In the case where the number of stochastic states is larger than the number of stable deterministic fixed points, one speaks of the creation of stochastic states by noise. Examples across a variety of disciplines in the natural sciences can be found in the book by Horsthemke and Lefever (1984), and in Schimansky-Geier et al. (1985) and Hanggi et al. (2005). Generally noise reveals the presence of nearby bifurcations by producing behavior that is stereotyped for a given bifurcation (Wiesenfeld 1985). It can produce stochastic versions of various deterministic phenomena such as phase locking (Longtin and Chialvo, 1998) in neurons and other excitable systems, in which it can also create very long time scales, e.g. a "noise-induced" memory (Chialvo et al., 2000).
As mentioned in the introduction, noise can have high-dimensional deterministic origins. In fact, a pseudo-random number generator is one such system operating in discrete time, i.e. it is a high-dimensional map operating in a chaotic regime. Chaotic systems share many properties with noisy systems (Lasota and Mackey, 1994), such as their ability to synchronize (Pikovsky et al., 2001). A recent review of the effect of chaotic dynamics as "deterministic Brownian motion" on other dynamical systems can be found in Mackey and Tyran-Kaminska (2006).