
    Heyde theorem

    From HandWiki

    In the mathematical theory of probability, Heyde's theorem is a characterization theorem for the normal distribution (the Gaussian distribution): it characterizes normality by the symmetry of the conditional distribution of one linear form of independent random variables given another. The theorem was proved by C. C. Heyde in 1970.

    Formulation

    Let [math]\displaystyle{ \xi_j, j = 1, 2, \ldots, n, n \ge 2 }[/math] be independent random variables. Let [math]\displaystyle{ \alpha_j, \beta_j }[/math] be nonzero constants such that [math]\displaystyle{ \frac{\beta_i}{\alpha_i} + \frac{\beta_j}{\alpha_j} \ne 0 }[/math] for all [math]\displaystyle{ i \ne j }[/math]. If the conditional distribution of the linear form [math]\displaystyle{ L_2 = \beta_1\xi_1 + \cdots + \beta_n\xi_n }[/math] given [math]\displaystyle{ L_1 = \alpha_1\xi_1 + \cdots + \alpha_n\xi_n }[/math] is symmetric, then all the random variables [math]\displaystyle{ \xi_j }[/math] have normal distributions (Gaussian distributions).
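    The Gaussian case can be checked concretely: for independent zero-mean Gaussians whose variances satisfy [math]\displaystyle{ \sum_j \alpha_j \beta_j \operatorname{Var}(\xi_j) = 0 }[/math], the forms [math]\displaystyle{ L_1 }[/math] and [math]\displaystyle{ L_2 }[/math] are jointly Gaussian and uncorrelated, hence independent, so the conditional distribution of [math]\displaystyle{ L_2 }[/math] given [math]\displaystyle{ L_1 }[/math] is symmetric about 0. The following simulation is a minimal sketch of this (the constants and variable names are illustrative choices, not part of the theorem):

    ```python
    import numpy as np

    # alpha = (1, 1), beta = (2, -1): beta1/alpha1 + beta2/alpha2 = 1 != 0, as required.
    # Variances chosen so Cov(L1, L2) = sum(alpha_j * beta_j * Var(xi_j)) = 2 - 2 = 0.
    rng = np.random.default_rng(0)
    n = 200_000
    xi1 = rng.normal(0.0, 1.0, n)           # Var(xi1) = 1
    xi2 = rng.normal(0.0, np.sqrt(2.0), n)  # Var(xi2) = 2

    L1 = xi1 + xi2       # L1 = alpha1*xi1 + alpha2*xi2
    L2 = 2*xi1 - xi2     # L2 = beta1*xi1 + beta2*xi2

    # L1 and L2 are jointly Gaussian and uncorrelated, hence independent, so the
    # conditional law of L2 given L1 is N(0, 6), which is symmetric about 0.
    print(np.cov(L1, L2)[0, 1])                # should be close to 0
    print(L2[np.abs(L1 - 1.0) < 0.05].mean())  # close to 0 within a thin slice of L1
    ```

    The theorem states the nontrivial converse: the symmetry of the conditional distribution alone forces every [math]\displaystyle{ \xi_j }[/math] to be Gaussian.
    
    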

    References

    · C. C. Heyde, "Characterization of the normal law by the symmetry of a certain conditional distribution," Sankhya, Ser. A, 32, No. 1, 115–118 (1970).

    · A. M. Kagan, Yu. V. Linnik, and C. R. Rao, Characterization Problems in Mathematical Statistics, Wiley, New York (1973).



    This article is licensed under CC BY-SA 3.0.
    Original source: https://handwiki.org/wiki/Heyde_theorem