A method for determining a probability distribution by its moments (cf. Moment). Theoretically the method of moments is based on the uniqueness of the solution of the moment problem: If the numbers $\alpha_1, \alpha_2, \dots$ are the moments of a distribution $F$, that is,

$$\alpha_k = \int_{-\infty}^{\infty} x^k \, dF(x), \qquad k = 1, 2, \dots,$$

and the moment problem for this sequence has a unique solution, then the distribution $F$ is uniquely recovered from its moments (cf. Moment problem).
The use of the method of moments in the proof of limit theorems in probability theory and mathematical statistics is based on the correspondence between moments and the convergence of distributions: If $F_1, F_2, \dots$ are distribution functions all of whose moments are finite, and if for every $k = 1, 2, \dots$ the $k$-th moment of $F_n$ converges, as $n \to \infty$, to the corresponding moment $\alpha_k$ of a distribution $F$ that is uniquely determined by its moments, then $F_n$ converges weakly to $F$.
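For illustration (an example added here, not part of the original article), this correspondence gives the classical Poisson approximation of the binomial distribution. If $X_n$ has the binomial distribution with parameters $n$ and $\lambda/n$, then its factorial moments satisfy

$$\mathsf{E}\,X_n(X_n-1)\cdots(X_n-k+1) = n(n-1)\cdots(n-k+1)\left(\frac{\lambda}{n}\right)^{k} \longrightarrow \lambda^{k}, \qquad n \to \infty,$$

and $\lambda^{k}$ is the $k$-th factorial moment of the Poisson distribution with parameter $\lambda$; since the Poisson distribution is uniquely determined by its moments, the binomial distributions converge weakly to it.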
The method of moments in mathematical statistics is one of the general methods for finding statistical estimators of unknown parameters of a probability distribution from results of observations. The method of moments was first used to this end by K. Pearson (1894) to solve the problem of the approximation of an empirical distribution by a system of Pearson distributions (cf. Pearson distribution). The procedure in the method of moments is this: The moments of the empirical distribution are determined (the sample moments), equal in number to the number of parameters to be estimated; they are then equated to the corresponding moments of the probability distribution, which are functions of the unknown parameters; the system of equations thus obtained is solved for the parameters and the solutions are the required estimates. In practice the method of moments often leads to very simple calculations. Under fairly general conditions the method of moments allows one to find estimators that are asymptotically normal, have mathematical expectation that differs from the true value of the parameter only by a quantity of order $1/n$, and have standard deviation of order $1/\sqrt{n}$, where $n$ is the number of observations. However, estimators obtained by the method of moments are, in general, less efficient than maximum-likelihood estimators.
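As a minimal sketch of this procedure (an illustration added here, not taken from the original article), the following Python fragment estimates the two parameters of a gamma distribution by equating the sample mean and variance to their theoretical counterparts; the function name, parameter values and sample size are chosen only for the example.

import numpy as np

def gamma_moment_estimates(sample):
    # Method-of-moments estimates for a gamma distribution with shape k
    # and scale theta. The theoretical moments are
    #   mean = k * theta,   variance = k * theta**2,
    # so equating them to the sample moments and solving gives
    #   theta_hat = variance / mean,   k_hat = mean**2 / variance.
    mean = np.mean(sample)
    variance = np.var(sample)
    theta_hat = variance / mean
    k_hat = mean ** 2 / variance
    return k_hat, theta_hat

# Example: recover known parameters from a simulated sample.
rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.5, scale=1.3, size=10_000)
print(gamma_moment_estimates(sample))  # approximately (2.5, 1.3)

For this model the system of moment equations has an explicit solution; for other families it may have to be solved numerically.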