Harmonic distribution

Harmonic
[Figure: probability density function]
[Figure: cumulative distribution function]
Notation [math]\displaystyle{ \mathrm{Harm}(m,a)\, }[/math]
Parameters m ≥ 0, a ≥ 0
Support x > 0
PDF [math]\displaystyle{ \frac{1}{2xK_{0}(a)}\exp\left(-\frac{a}{2} \left(\frac{x}{m}+\frac{m}{x} \right)\right) }[/math]
Mean [math]\displaystyle{ m\frac{K_1(a)}{K_0(a)} }[/math]
Median m
Mode [math]\displaystyle{ \frac{m(\sqrt{a^2+1}-1)}{a} }[/math]
Variance [math]\displaystyle{ m^2\left(1+\frac{2K_1(a)}{K_0(a)a}-\frac{K_1^2(a)}{K_0^2(a)}\right) }[/math]
Skewness [math]\displaystyle{ \frac{K_0^2(a)K_3(a)-3K_0(a)K_1(a)K_2(a)+2K_1^3(a)}{(K_0(a)K_2(a)-K_1^2(a))^{3/2}} }[/math]
Kurtosis (see text)

In probability theory and statistics, the harmonic distribution is a continuous probability distribution. It was discovered by Étienne Halphen, who had become interested in the statistical modeling of natural events. His practical experience in data analysis motivated him to pioneer a new system of distributions that provided sufficient flexibility to fit a large variety of data sets. Halphen restricted his search to distributions whose parameters could be estimated using simple statistical approaches. He then introduced what he called the harmonic distribution or harmonic law. The harmonic law is a special case of the generalized inverse Gaussian distribution family when [math]\displaystyle{ \gamma=0 }[/math].

History

One of Halphen's tasks, while working as a statistician for Électricité de France, was the modeling of the monthly flow of water in hydroelectric stations. Halphen realized that his problem could not be solved within the Pearson system of probability distributions, which was inadequate for his purpose despite its remarkable properties. Halphen's objective was therefore to obtain a probability distribution with two parameters, subject to an exponential decay both for large and small flows.

In 1941, Halphen decided that, in suitably scaled units, the density of X should be the same as that of 1/X.[1] From this consideration, Halphen derived the harmonic density function. This density, nowadays known as a hyperbolic distribution, has been studied by Rukhin (1974) and Barndorff-Nielsen (1978).[2]

The harmonic law is the only two-parameter family of distributions that is closed under change of scale and under reciprocals and for which the maximum likelihood estimator of the population mean is the sample mean (Gauss's principle).[3]

In 1946, Halphen realized that flexibility could be improved by introducing an additional parameter. His efforts led him to generalize the harmonic law and obtain the generalized inverse Gaussian distribution density.[1]

Definition

Notation

The harmonic distribution will be denoted by [math]\displaystyle{ \operatorname{Harm}(m,a) }[/math]. When a random variable X is distributed according to a harmonic law, the scale parameter m is the population median and a is the shape parameter:

[math]\displaystyle{ X\ \sim\operatorname{Harm}(m,a)\, }[/math]

Probability density function

The density function of the harmonic law, which depends on two parameters,[3] has the form,

[math]\displaystyle{ f(x;m,a)= \frac{1}{2xK_0(a)}\exp\left(-\frac{a}{2}\left(\frac{x}{m}+\frac{m}{x}\right)\right) }[/math]

where

  • [math]\displaystyle{ K_0(a) }[/math] denotes the modified Bessel function of the third kind with index 0,
  • [math]\displaystyle{ m \gt 0, }[/math]
  • [math]\displaystyle{ a \gt 0. }[/math]
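
Below is a minimal sketch of how this density could be evaluated in Python, assuming SciPy is available; the helper name harmonic_pdf is illustrative, and K_0 is computed with scipy.special.kv:

  import numpy as np
  from scipy.special import kv          # modified Bessel function of the third kind
  from scipy.integrate import quad

  def harmonic_pdf(x, m, a):
      """Density of Harm(m, a) on x > 0."""
      return np.exp(-0.5 * a * (x / m + m / x)) / (2.0 * x * kv(0, a))

  # Sanity check: the density should integrate to one.
  total, _ = quad(harmonic_pdf, 0, np.inf, args=(1.0, 2.0))
  print(total)  # approximately 1.0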

Properties

Moments

To derive an expression for the non-central moment of order r, the integral representation of the Bessel function can be used.[4]

[math]\displaystyle{ \mu'_r = \int_0^\infty x^r f(x;m,a) \, dx= m^r \frac{K_r(a)}{K_0(a)} }[/math]

where:

  • r denotes the order of the moment.

Hence the mean and the succeeding three moments about it are

Order Moment Cumulant
1 [math]\displaystyle{ \mu_1 = m\frac{K_1(a)}{K_0(a)} }[/math] [math]\displaystyle{ \mu }[/math]
2 [math]\displaystyle{ \mu_2 = m^2\left(\frac{K_2(a)}{K_0(a)}-\frac{K_1^2(a)}{K_0^2(a)}\right) }[/math] [math]\displaystyle{ \sigma^2 }[/math]
3 [math]\displaystyle{ \mu_3 = m^3 \left(\frac{K_3(a)}{K_0(a)}-3\frac{K_1(a)K_2(a)}{K_0^2(a)} + 2\frac{K_1^3(a)}{K_0^3(a)}\right) }[/math] [math]\displaystyle{ k_3 }[/math]
4 [math]\displaystyle{ \mu_4 = m^4\left(\frac{K_4(a)}{K_0(a)}-4\frac{K_1(a)K_3(a)}{K_0^2(a)} + 6\frac{K_1^2(a)K_2(a)}{K_0^3(a)} - 3\frac{K_1^4(a)}{K_0^4(a)}\right) }[/math] [math]\displaystyle{ k_4 }[/math]
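
The raw moments combine into the central moments of this table in the usual way. A minimal sketch in Python, assuming SciPy; the helper names are illustrative:

  from scipy.special import kv

  def raw_moment(r, m, a):
      # mu'_r = m^r K_r(a) / K_0(a)
      return m**r * kv(r, a) / kv(0, a)

  def central_moments(m, a):
      m1 = raw_moment(1, m, a)                                   # mean
      m2 = raw_moment(2, m, a) - m1**2                           # variance
      m3 = raw_moment(3, m, a) - 3*m1*raw_moment(2, m, a) + 2*m1**3
      m4 = (raw_moment(4, m, a) - 4*m1*raw_moment(3, m, a)
            + 6*m1**2*raw_moment(2, m, a) - 3*m1**4)
      return m1, m2, m3, m4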

Skewness

Skewness is the third central moment divided by the 3/2 power of the variance. For the harmonic distribution it is[4]

[math]\displaystyle{ \gamma_1=\frac{\mu_3}{\mu_2^{3/2}}=\frac{K_0^2(a)K_3(a)-3K_0(a)K_1(a)K_2(a) + 2K_1^3(a)}{(K_0(a)K_2(a)-K_1^2(a))^{3/2}} }[/math]
  • Always [math]\displaystyle{ \gamma_1\gt 0 }[/math], so the mass of the distribution is concentrated on the left (the distribution is right-skewed).

Kurtosis

The coefficient of kurtosis is the fourth central moment divided by the square of the variance. For the harmonic distribution it is[4]

[math]\displaystyle{ \gamma_2=\frac{\mu_4}{\mu_2^2} = \frac{K_0^3(a)K_4(a)-4K_0^2(a)K_1(a) K_3(a) + 6K_0(a) K_1^2(a) K_2(a)-3K_1^4(a)}{(K_0(a)K_2(a)-K_1^2(a))^2} }[/math]
  • Always [math]\displaystyle{ \gamma_2\gt 0 }[/math], so the distribution has an acute peak around the mean and fatter tails.
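
Both coefficients depend only on a, as standardized moments must. A minimal sketch evaluating the two closed forms above, assuming SciPy (the function names are illustrative):

  from scipy.special import kv

  def harmonic_skewness(a):
      k0, k1, k2, k3 = (kv(n, a) for n in range(4))
      return (k0**2*k3 - 3*k0*k1*k2 + 2*k1**3) / (k0*k2 - k1**2)**1.5

  def harmonic_kurtosis(a):
      k0, k1, k2, k3, k4 = (kv(n, a) for n in range(5))
      return (k0**3*k4 - 4*k0**2*k1*k3 + 6*k0*k1**2*k2 - 3*k1**4) / (k0*k2 - k1**2)**2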

Parameter estimation

Maximum likelihood estimation

The likelihood function is

[math]\displaystyle{ L(a,m)= \prod_{i=1}^n f(x_i\mid a,m)= \prod_{i=1}^n \frac{1}{2x_i K_0(a)} \exp\left[-\frac{a}{2} \left(\frac{x_i}{m}+\frac{m}{x_i}\right)\right]. }[/math]

The log-likelihood function is then

[math]\displaystyle{ \ell(a,m) = \ln L(a,m)= -n\ln(2K_0(a)) - \sum_{i=1}^n \ln x_i - \frac{a}{2m} \sum_{i=1}^n x_i -\frac{am}{2}\sum_{i=1}^n \frac{1}{x_i}. }[/math]

From the log-likelihood function, the likelihood equations are

[math]\displaystyle{ \frac{\partial\ell}{\partial a} = -n\frac{K_0'(a)}{K_0(a)} - \frac{1}{2m} \sum_{i=1}^n x_i - \frac{m}{2} \sum_{i=1}^n \frac{1}{x_i}=0, }[/math]
[math]\displaystyle{ \frac{\partial\ell}{\partial m} = \frac{a}{2m^2} \sum_{i=1}^n x_i -\frac{a}{2} \sum_{i=1}^n \frac{1}{x_i} = 0. }[/math]

Since [math]\displaystyle{ K_0'(a)=-K_1(a) }[/math], these equations admit only a numerical solution for a, but the maximum likelihood estimators satisfy

[math]\displaystyle{ \hat{m}=\sqrt{\frac{\bar{H}}{\bar{H}_{-1}}}; \qquad \sqrt{\bar{H}\bar{H}_{-1}}=\frac{K_1(\hat{a})}{K_0(\hat{a})}, }[/math]

where [math]\displaystyle{ \bar{H}=\frac{1}{n}\sum_{i=1}^n x_i }[/math] is the sample mean and [math]\displaystyle{ \bar{H}_{-1}=\frac{1}{n}\sum_{i=1}^n \frac{1}{x_i} }[/math] is the sample mean of the reciprocals.
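
A minimal sketch of this fitting procedure in Python, assuming SciPy. The exponentially scaled Bessel function scipy.special.kve is used because kve(1, a)/kve(0, a) equals K_1(a)/K_0(a) while avoiding underflow for large a; the root bracket passed to brentq is an assumption:

  import numpy as np
  from scipy.special import kve
  from scipy.optimize import brentq

  def fit_harmonic_mle(x):
      x = np.asarray(x, dtype=float)
      h = x.mean()                 # sample mean, H-bar
      h_inv = (1.0 / x).mean()     # sample mean of the reciprocals, H-bar_{-1}
      m_hat = np.sqrt(h / h_inv)
      target = np.sqrt(h * h_inv)  # >= 1 by the AM-GM inequality
      # K_1(a)/K_0(a) decreases from +infinity (a -> 0) to 1 (a -> infinity),
      # so a root exists whenever the observations are not all identical.
      a_hat = brentq(lambda a: kve(1, a) / kve(0, a) - target, 1e-10, 1e6)
      return m_hat, a_hat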

Method of moments

The mean and the variance for the harmonic distribution are,[3][4]

[math]\displaystyle{ \begin{cases} \mu = m\frac{K_1(a)}{K_0(a)} \\ \sigma^2 = m^2 \left( 1+\frac{2K_1(a)}{K_0(a)a}-\frac{K_1^2(a)}{K_0^2(a)}\right) \end{cases} }[/math]

Note that

[math]\displaystyle{ \sigma^2 = \mu^2\left(\frac{K_0(a)}{K_1(a)}\right)^2+\frac{2K_0(a)\mu^2}{K_1(a)a}- \mu^2 }[/math]

The method of moments consists of solving the following equations:

[math]\displaystyle{ \begin{cases} \bar{H}=m\frac{K_1(a)}{K_0(a)} \\ s^2= \bar{H}^2 \left( \frac{K_0(a)}{K_1(a)} \right)^2+\frac{2K_0(a)\bar{H}^2}{K_1(a)a}- \bar{H}^2 \end{cases} }[/math]

where [math]\displaystyle{ s^2 }[/math] is the sample variance and [math]\displaystyle{ \bar{H} }[/math] is the sample mean. Solving the second equation we obtain [math]\displaystyle{ \hat{a} }[/math], and then we calculate [math]\displaystyle{ \hat{m} }[/math] using

[math]\displaystyle{ \hat{m}=\frac{\bar{H}K_0(\hat{a})}{K_1(\hat{a})}. }[/math]
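
A minimal sketch of the method of moments in Python, assuming SciPy; as above, kve is the exponentially scaled Bessel function, and the bracket for the root search is an assumption:

  import numpy as np
  from scipy.special import kve
  from scipy.optimize import brentq

  def fit_harmonic_moments(x):
      x = np.asarray(x, dtype=float)
      h, s2 = x.mean(), x.var(ddof=1)

      def excess(a):
          r = kve(0, a) / kve(1, a)    # = K_0(a)/K_1(a)
          # second moment equation: model variance minus sample variance
          return h**2 * (r**2 + 2.0 * r / a - 1.0) - s2

      a_hat = brentq(excess, 1e-10, 1e6)
      m_hat = h * kve(0, a_hat) / kve(1, a_hat)
      return m_hat, a_hat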

Related distributions

The harmonic law is a sub-family of the generalized inverse Gaussian (GIG) distribution. The densities of the GIG family have the form

[math]\displaystyle{ f(x\mid m,a,\gamma)= \frac{x^{\gamma-1}}{2m^\gamma K_\gamma(a)}\exp\left[-\frac{a}{2} \left(\frac{x}{m}+\frac{m}{x}\right)\right] }[/math]

The density of the generalized inverse Gaussian distribution family corresponds to the harmonic law when [math]\displaystyle{ \gamma=0 }[/math].[3]
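
This correspondence can be checked numerically against SciPy's geninvgauss, whose density is x^(p-1) exp(-b(x + 1/x)/2) / (2 K_p(b)), so that Harm(m, a) should coincide with geninvgauss(p=0, b=a, scale=m); a minimal sketch:

  import numpy as np
  from scipy.stats import geninvgauss
  from scipy.special import kv

  m, a = 2.0, 1.5
  x = np.linspace(0.5, 8.0, 4)
  harm = np.exp(-0.5 * a * (x / m + m / x)) / (2.0 * x * kv(0, a))
  print(np.allclose(harm, geninvgauss.pdf(x, p=0, b=a, scale=m)))  # True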

When [math]\displaystyle{ a }[/math] tends to infinity, the harmonic law can be approximated by a normal distribution. This can be shown by demonstrating that if [math]\displaystyle{ a }[/math] tends to infinity, then [math]\displaystyle{ U=\sqrt{a}\left(\frac{X}{m}-1\right) }[/math], a linear transformation of X, converges in distribution to the standard normal distribution [math]\displaystyle{ N(0,1) }[/math].

This explains why the normal distribution can be used successfully for certain data sets of ratios.[4]
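
A numerical check of this limit, assuming SciPy: map the harmonic density through the transformation U = sqrt(a)(X/m - 1) and compare it with the standard normal density for a moderately large a:

  import numpy as np
  from scipy.special import kv
  from scipy.stats import norm

  m, a = 1.0, 400.0
  u = np.linspace(-3.0, 3.0, 7)
  x = m * (1.0 + u / np.sqrt(a))                  # invert the transformation
  pdf_x = np.exp(-0.5 * a * (x / m + m / x)) / (2.0 * x * kv(0, a))
  pdf_u = pdf_x * m / np.sqrt(a)                  # change-of-variables Jacobian
  print(np.max(np.abs(pdf_u - norm.pdf(u))))      # small; shrinks as a grows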

Another related distribution is the log-harmonic law, which is the probability distribution of a random variable whose logarithm follows a harmonic law.

This family has an interesting property: the Pitman estimator of the location parameter does not depend on the choice of the loss function. Only two statistical models satisfy this property: one is the normal family of distributions and the other is a three-parameter statistical model which contains the log-harmonic law.[2]

References

  1. Kotz, Samuel L. (1982–1989). Encyclopedia of Statistical Sciences. 5. pp. 3059–3061, 3069–3072.
  2. Rukhin, A. L. (1978). "Strongly symmetrical families and statistical analysis of their parameters". Journal of Soviet Mathematics 9 (6): 886–910. doi:10.1007/BF01092900.
  3. Puig, Pere (2008). "A note on the harmonic law: A two-parameter family of distributions for ratios". Statistics and Probability Letters 78 (3): 320–326. doi:10.1016/j.spl.2007.07.024.
  4. Perrault, L.; Bobée, B.; Rasmussen, P. F. (1999). "Halphen distribution system. I: Mathematical and statistical properties". Journal of Hydrologic Engineering 4 (3): 189–199. doi:10.1061/(ASCE)1084-0699(1999)4:3(189).




Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Harmonic_distribution