A measure of the long-range dependence of a time series
The Hurst exponent is used as a measure of long-term memory of time series. It relates to the autocorrelations of the time series, and the rate at which these decrease as the lag between pairs of values increases. Studies involving the Hurst exponent were originally developed in hydrology for the practical matter of determining optimum dam sizing for the Nile River's volatile rain and drought conditions, which had been observed over a long period of time.[1][2] The name "Hurst exponent", or "Hurst coefficient", derives from Harold Edwin Hurst (1880–1978), who was the lead researcher in these studies; the use of the standard notation H for the coefficient also relates to his name.
In fractal geometry, the generalized Hurst exponent has been denoted by H or Hq in honor of both Harold Edwin Hurst and Ludwig Otto Hölder (1859–1937) by Benoît Mandelbrot (1924–2010).[3] H is directly related to fractal dimension, D, and is a measure of a data series' "mild" or "wild" randomness.[4]
The Hurst exponent is referred to as the "index of dependence" or "index of long-range dependence". It quantifies the relative tendency of a time series either to regress strongly to the mean or to cluster in a direction.[5] A value of H in the range 0.5–1 indicates a time series with long-term positive autocorrelation, meaning that the decay in autocorrelation is slower than exponential, following a power law; for the series itself, this means that a high value tends to be followed by another high value and that future excursions to still higher values are likely. A value in the range 0–0.5 indicates a time series with long-term switching between high and low values in adjacent pairs, meaning that a single high value will probably be followed by a low value and that the value after that will tend to be high again, with this tendency to switch between high and low values persisting far into the future, also following a power law. A value of H = 0.5 indicates short memory, with (absolute) autocorrelations decaying exponentially quickly to zero.
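To make the power-law statement concrete, consider fractional Gaussian noise, a standard model whose dependence structure is governed by a Hurst exponent H (used here only as an illustration, not as part of Hurst's original analysis). Its autocorrelation at lag k decays asymptotically as

ρ(k) ~ H(2H − 1) k^(2H − 2) as k → ∞,

so for 1/2 < H < 1 the correlations are positive and decay so slowly that they are not summable (long memory), for 0 < H < 1/2 the leading coefficient is negative, reflecting the switching behaviour described above, and for H = 1/2 the values are uncorrelated.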
The Hurst exponent, H, is defined in terms of the asymptotic behaviour of the rescaled range as a function of the time span of a time series as follows:[6][7]

E[R(n)/S(n)] = C·n^H as n → ∞,

where R(n) is the range of the first n cumulative deviations from the mean, S(n) is their standard deviation, E denotes the expected value, n is the time span of the observation (the number of data points in the time series), and C is a constant.
For self-similar time series, H is directly related to the fractal dimension, D, where 1 < D < 2, such that D = 2 − H. The values of the Hurst exponent vary between 0 and 1, with higher values indicating a smoother trend, less volatility, and less roughness.[8]
For more general time series or multi-dimensional processes, the Hurst exponent and fractal dimension can be chosen independently, as the Hurst exponent represents structure over asymptotically longer periods, while fractal dimension represents structure over asymptotically shorter periods.[9]
A number of estimators of long-range dependence have been proposed in the literature. The oldest and best-known is the so-called rescaled range (R/S) analysis popularized by Mandelbrot and Wallis[3][10] and based on previous hydrological findings of Hurst.[1] Alternatives include detrended fluctuation analysis (DFA), periodogram regression,[11] aggregated variances,[12] the local Whittle estimator,[13] and wavelet analysis,[14][15] both in the time domain and the frequency domain.
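As an illustration of one of these alternatives, the following is a minimal Python sketch of detrended fluctuation analysis, assuming NumPy is available and that the input is a noise-like series (such as fractional Gaussian noise), for which the DFA scaling exponent coincides with H; the function name dfa_hurst and the choice of scales are illustrative, not a reference implementation.

import numpy as np

def dfa_hurst(x, order=1):
    # Profile: cumulative sum of the mean-adjusted series.
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())
    n = len(y)
    # Illustrative choice of window sizes (roughly log-spaced).
    scales = np.unique(np.logspace(np.log10(10), np.log10(n // 4), 20).astype(int))
    flucts = []
    for s in scales:
        n_seg = n // s
        segments = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Root-mean-square deviation from a local polynomial trend in each window.
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2))
               for seg in segments]
        flucts.append(np.mean(rms))
    # Slope of log F(s) versus log s estimates the scaling exponent.
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# Example: white noise should give an exponent close to 0.5.
print(dfa_hurst(np.random.default_rng(0).standard_normal(8192)))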
To estimate the Hurst exponent, one must first estimate the dependence of the rescaled range on the time span n of observation.[7] A time series of full length N is divided into a number of nonoverlapping shorter time series of length n, where n takes values N, N/2, N/4, ... (in the convenient case that N is a power of 2). The average rescaled range is then calculated for each value of n.
For each such partial time series of length n, X = X_1, X_2, ..., X_n, the rescaled range is calculated as follows:[6][7]

1. Calculate the mean: m = (1/n)(X_1 + X_2 + ... + X_n).
2. Create a mean-adjusted series: Y_t = X_t − m, for t = 1, 2, ..., n.
3. Calculate the cumulative deviate series: Z_t = Y_1 + Y_2 + ... + Y_t, for t = 1, 2, ..., n.
4. Compute the range: R(n) = max(Z_1, Z_2, ..., Z_n) − min(Z_1, Z_2, ..., Z_n).
5. Compute the standard deviation: S(n) = sqrt((1/n) Σ_{i=1}^{n} (X_i − m)^2).
6. Calculate the rescaled range R(n)/S(n) and average over all the partial time series of length n.
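A minimal Python sketch of these steps, assuming NumPy; the helper names rescaled_range and average_rs are illustrative.

import numpy as np

def rescaled_range(x):
    # Steps 1-5 above for a single window of length n.
    x = np.asarray(x, dtype=float)
    m = x.mean()           # mean
    y = x - m              # mean-adjusted series
    z = np.cumsum(y)       # cumulative deviate series Z_t
    r = z.max() - z.min()  # range R(n)
    s = x.std()            # standard deviation S(n)
    return r / s

def average_rs(series, n):
    # Step 6: average R(n)/S(n) over the non-overlapping windows of length n.
    series = np.asarray(series, dtype=float)
    windows = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
    return np.mean([rescaled_range(w) for w in windows])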
The Hurst exponent is estimated by fitting the power law E[R(n)/S(n)] = C·n^H to the data. This can be done by plotting log[R(n)/S(n)] as a function of log n and fitting a straight line; the slope of the line gives H. Such a graph is called a pox plot. However, this graphical approach is known to produce biased estimates of the power-law exponent, and for small n there is a significant deviation from the 0.5 slope. A more principled approach is to fit the power law in a maximum-likelihood fashion.[16] Anis and Lloyd[17] derived the theoretical (i.e., for white noise) expected values of the R/S statistic as a function of n.
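Continuing the sketch above (again only as an illustration, and assuming the average_rs helper defined earlier), the slope of a least-squares line through the points (log n, log R/S) gives the estimate of H; for white noise the estimate should come out near 0.5, up to the small-sample bias just mentioned.

import numpy as np

rng = np.random.default_rng(0)
series = rng.standard_normal(4096)                    # white noise, so H ≈ 0.5 is expected
ns = [2 ** k for k in range(4, 12)]                   # window sizes 16, 32, ..., 2048
rs = [average_rs(series, n) for n in ns]              # uses the helper sketched above
H_est, log_C = np.polyfit(np.log(ns), np.log(rs), 1)  # slope = H, intercept = log C
print(H_est)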
No asymptotic distribution theory has been derived for most of the Hurst exponent estimators so far. However, Weron[18] used bootstrapping to obtain approximate functional forms for confidence intervals of the two most popular methods, i.e., for the Anis-Lloyd[17] corrected R/S analysis and for DFA. These confidence intervals depend on the series length N. In both cases, only subseries of sufficient length were considered for estimating the Hurst exponent, since shorter subseries lead to a high variance of the R/S estimates.
The basic Hurst exponent can be related to the expected size of changes, as a function of the lag between observations, as measured by E(|X_{t+τ} − X_t|²). For the generalized form of the coefficient, the exponent 2 here is replaced by a more general term, denoted by q.
A variety of techniques exist for estimating H; however, assessing the accuracy of the estimation can be a complicated issue. Mathematically, in one technique, the Hurst exponent can be estimated such that[19][20]

H_q = H(q),

for a time series g(t), t = 1, 2, ..., may be defined by the scaling properties of its structure functions S_q(τ):

S_q(τ) = ⟨|g(t + τ) − g(t)|^q⟩_T ~ τ^(q·H(q)),

where q > 0, τ is the time lag, and averaging is over the time window T ≫ τ, usually the largest time scale of the system.
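A minimal Python sketch of this structure-function estimator, assuming NumPy and assuming g(t) is a random-walk-like series (e.g., a price or cumulative-sum series) rather than its increments; the function name generalized_hurst and the choice of lags are illustrative.

import numpy as np

def generalized_hurst(g, q=2.0, max_tau=100):
    # Structure function S_q(tau) = <|g(t + tau) - g(t)|^q>, averaged over the window.
    g = np.asarray(g, dtype=float)
    taus = np.arange(1, min(max_tau, len(g) // 4))
    sq = [np.mean(np.abs(g[tau:] - g[:-tau]) ** q) for tau in taus]
    # S_q(tau) ~ tau^(q·H(q)), so the log-log slope divided by q estimates H(q).
    slope, _ = np.polyfit(np.log(taus), np.log(sq), 1)
    return slope / q

# Example: ordinary Brownian motion (cumulative sum of white noise) should give H(2) ≈ 0.5.
g = np.cumsum(np.random.default_rng(1).standard_normal(10000))
print(generalized_hurst(g, q=2))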
Practically, in nature, there is no limit to time, and thus H is non-deterministic as it may only be estimated based on the observed data; e.g., the most dramatic daily move upwards ever seen in a stock market index can always be exceeded during some subsequent day.[21]
In the above mathematical estimation technique, the function H(q) contains information about averaged generalized volatilities at scale τ (only q = 1, 2 are used to define the volatility). In particular, the H_1 exponent indicates persistent (H_1 > 1/2) or antipersistent (H_1 < 1/2) behavior of the trend.
In the above definition two separate requirements are mixed together as if they were one.[24] Here are the two independent requirements: (i) stationarity of the increments, x(t + T) − x(t) = x(T) − x(0) in distribution; this is the condition that yields long-time autocorrelations. (ii) Self-similarity of the stochastic process then yields variance scaling, but is not needed for long-time memory. For example, both Markov processes (i.e., memory-free processes) and fractional Brownian motion scale at the level of one-point densities (simple averages), but neither scales at the level of pair correlations or, correspondingly, of the two-point probability density.
An efficient market requires a martingale condition, and unless the variance is linear in time this produces nonstationary increments, x(t + T) − x(t) ≠ x(T) − x(0). Martingales are Markovian at the level of pair correlations, meaning that pair correlations cannot be used to beat a martingale market. Stationary increments with nonlinear variance, on the other hand, induce the long-time pair memory of fractional Brownian motion that would make the market beatable at the level of pair correlations. Such a market would necessarily be far from "efficient".
An analysis of economic time series by means of the Hurst exponent, using rescaled range and detrended fluctuation analysis, was conducted by the econophysicist A. F. Bariviera.[25] This paper studies the time-varying character of long-range dependence and, thus, of informational efficiency.
Matlab code for computing R/S, DFA, periodogram regression and wavelet estimates of the Hurst exponent and their corresponding confidence intervals is available from RePEc: https://ideas.repec.org/s/wuu/hscode.html
^ ab Hurst, H. E. (1951). "Long-term storage capacity of reservoirs". Transactions of the American Society of Civil Engineers. 116: 770. doi:10.1061/TACEAT.0006518.
^ Mandelbrot, Benoit B.; Wallis, James R. (1969-10-01). "Robustness of the rescaled range R/S in the measurement of noncyclic long run statistical dependence". Water Resources Research. 5 (5): 967–988. Bibcode:1969WRR.....5..967M. doi:10.1029/WR005i005p00967. ISSN 1944-7973.
^ Geweke, J.; Porter-Hudak, S. (1983). "The Estimation and Application of Long Memory Time Series Models". Journal of Time Series Analysis. 4 (4): 221–238. doi:10.1111/j.1467-9892.1983.tb00371.x.
^ Beran, J. (1994). Statistics for Long-Memory Processes. Chapman and Hall.
^ Riedi, R. H. (2003). "Multifractal processes". In P. Doukhan, G. Oppenheim, and M. S. Taqqu (eds.), Theory and Applications of Long-Range Dependence, pp. 625–716. Birkhäuser.
^ ab Annis, A. A.; Lloyd, E. H. (1976-01-01). "The expected value of the adjusted rescaled Hurst range of independent normal summands". Biometrika. 63 (1): 111–116. doi:10.1093/biomet/63.1.111. ISSN 0006-3444.
^ McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H. (2008). "Martingales, Detrending Data, and the Efficient Market Hypothesis". Physica A. 387: 202. Open access preprint: arXiv:0710.2583.