The normal distribution, also called the Gaussian distribution (after Carl Friedrich Gauss, on account of his rigorous application of the distribution to astronomical data (Havil, 2003)), is a continuous probability distribution of great importance in many fields. It is a family of distributions of the same general form, differing in their location and scale parameters: the mean ("average") and standard deviation ("variability"), respectively. The standard normal distribution is the normal distribution with a mean of zero and a variance of one. It is often called the bell curve because the graph of its probability density resembles a bell.
The importance of the normal distribution as a model of quantitative phenomena in the natural and behavioral sciences is due to the central limit theorem. Many psychological measurements and physical phenomena (like photon counts and noise) can be approximated well by the normal distribution. While the mechanisms underlying these phenomena are often unknown, the use of the normal model can be theoretically justified by assuming that many small, independent effects are additively contributing to each observation.
The normal distribution also arises in many areas of statistics. For example, the sampling distribution of the sample mean is approximately normal, even if the distribution of the population from which the sample is taken is not normal. In addition, the normal distribution maximizes information entropy among all distributions with known mean and variance, which makes it the natural choice of underlying distribution for data summarized in terms of sample mean and variance. The normal distribution is the most widely used family of distributions in statistics and many statistical tests are based on the assumption of normality. In probability theory, normal distributions arise as the limiting distributions of several continuous and discrete families of distributions.
The normal distribution was first introduced by Abraham de Moivre in an article in 1733 (reprinted in the second edition of his The Doctrine of Chances, 1738) in the context of approximating certain binomial distributions for large n. His result was extended by Laplace in his book Analytical Theory of Probabilities (1812), and is now called the theorem of de Moivre-Laplace.
Laplace used the normal distribution in the analysis of errors of experiments. The important method of least squares was introduced by Legendre in 1805. Gauss, who claimed to have used the method since 1794, justified it rigorously in 1809 by assuming a normal distribution of the errors.
The name "bell curve" goes back to Jouffret who first used the term "bell surface" in 1872 for a bivariate normal with independent components. The name "normal distribution" was coined independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875. This terminology unfortunately encourages the fallacy that many or all other probability distributions are not "normal". (See the discussion of "occurrence" below.)
That the distribution is called the Gaussian distribution is an instance of Stigler's law of eponymy: "No scientific discovery is named after its original discoverer."
There are various ways to characterize a probability distribution. The most visual is the probability density function (PDF); the PDF of the normal distribution is plotted at the beginning of this article. Equivalent ways are the cumulative distribution function, the moments, the cumulants, the characteristic function, the moment-generating function, the cumulant-generating function, and Maxwell's theorem. See probability distribution for a discussion.
To indicate that a random variable X is normally distributed with mean μ and variance σ², we write

    X ~ N(μ, σ²).
The probability density function of the normal distribution is a Gaussian function,

    f(x; μ, σ) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)) = (1/σ) φ((x − μ)/σ),

where σ is the standard deviation, μ is the expected value, and

    φ(x) = (1/√(2π)) exp(−x²/2)

is the density function of the "standard" normal distribution, i.e., the normal distribution with μ = 0 and σ = 1.
As a Gaussian function with the denominator of the exponent equal to two, the standard normal density function φ is an eigenfunction of the Fourier transform.
Some notable qualities of the probability density function:

- It is symmetric about its mean μ, which is also its median and mode.
- Its maximum value, attained at x = μ, is 1/(σ√(2π)).
- Its inflection points occur one standard deviation away from the mean, at x = μ − σ and x = μ + σ.
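As a quick numerical illustration of the density and its symmetry, here is a minimal Python sketch; the function name normal_pdf and the particular values of μ and σ are ours, chosen only for the example.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) evaluated at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

mu, sigma = 2.0, 3.0
# Symmetry about the mean: f(mu + d) equals f(mu - d).
print(normal_pdf(mu + 1.7, mu, sigma), normal_pdf(mu - 1.7, mu, sigma))
# The maximum (mode) is at x = mu, with value 1 / (sigma * sqrt(2 * pi)).
print(normal_pdf(mu, mu, sigma), 1.0 / (sigma * math.sqrt(2.0 * math.pi)))
```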
The cumulative distribution function (cdf) of a probability distribution, evaluated at a number (lower-case) x, is the probability of the event that a random variable (capital) X with that distribution is less than or equal to x. The cumulative distribution function of the normal distribution is expressed in terms of the density function as follows:

    F(x; μ, σ) = ∫_{−∞}^{x} f(u; μ, σ) du = Φ((x − μ)/σ),

where the standard normal cdf Φ is just the general cdf evaluated with μ = 0 and σ = 1:

    Φ(x) = F(x; 0, 1) = (1/√(2π)) ∫_{−∞}^{x} exp(−u²/2) du.

The standard normal cdf can be expressed in terms of a special function called the error function, as

    Φ(x) = (1/2)(1 + erf(x/√2)),

and the cdf itself can hence be expressed as

    F(x; μ, σ) = (1/2)(1 + erf((x − μ)/(σ√2))).

The inverse standard normal cumulative distribution function, or quantile function, can be expressed in terms of the inverse error function:

    Φ⁻¹(p) = √2 erf⁻¹(2p − 1),

and the inverse cumulative distribution function can hence be expressed as

    F⁻¹(p; μ, σ) = μ + σΦ⁻¹(p) = μ + σ√2 erf⁻¹(2p − 1).
This quantile function is sometimes called the probit function. There is no elementary primitive for the probit function. This is not to say merely that none is known, but rather that the non-existence of such a function has been proved.
Values of Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions.
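For example, a short Python sketch comparing the erf identity above with a crude numerical integration of the density; the step count and the lower cutoff of −8 are arbitrary choices for the illustration.

```python
import math

def phi_via_erf(x):
    """Standard normal cdf via the identity Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_via_integration(x, steps=100000):
    """Trapezoidal integration of the standard normal density from -8 to x."""
    a = -8.0  # the tail mass below -8 is negligible at this precision
    h = (x - a) / steps
    density = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    total = 0.5 * (density(a) + density(x))
    for i in range(1, steps):
        total += density(a + i * h)
    return total * h

for x in (-1.0, 0.0, 1.96):
    print(x, phi_via_erf(x), phi_via_integration(x))
```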
The moment generating function is defined as the expected value of exp(tX). For a normal distribution, the moment generating function is

    M_X(t) = E[exp(tX)] = exp(μt + σ²t²/2),

as can be seen by completing the square in the exponent.
The cumulant generating function is the logarithm of the moment generating function: g(t) = μt + σ²t²/2. Since this is a quadratic polynomial in t, only the first two cumulants are nonzero.
The characteristic function is defined as the expected value of exp(itX), where i is the imaginary unit. For a normal distribution, the characteristic function is

    χ_X(t) = E[exp(itX)] = exp(iμt − σ²t²/2).

The characteristic function is obtained by replacing t with it in the moment-generating function.
Some of the properties of the normal distribution:

1. If X ~ N(μ, σ²) and a and b are real numbers, then aX + b ~ N(aμ + b, a²σ²).
2. If X ~ N(μ_X, σ_X²) and Y ~ N(μ_Y, σ_Y²) are independent normal random variables, then their sum X + Y ~ N(μ_X + μ_Y, σ_X² + σ_Y²) and their difference X − Y ~ N(μ_X − μ_Y, σ_X² + σ_Y²) are also normally distributed.
3. The ratio of two independent standard normal random variables follows a Cauchy distribution.
4. If X_1, …, X_n are independent standard normal random variables, then X_1² + … + X_n² has a chi-square distribution with n degrees of freedom.
As a consequence of Property 1, it is possible to relate all normal random variables to the standard normal.
If X ~ N(μ, σ²), then

    Z = (X − μ)/σ

is a standard normal random variable: Z ~ N(0, 1). An important consequence is that the cdf of a general normal distribution is therefore

    Pr(X ≤ x) = Φ((x − μ)/σ) = (1/2)(1 + erf((x − μ)/(σ√2))).

Conversely, if Z is a standard normal random variable, Z ~ N(0, 1), then

    X = σZ + μ

is a normal random variable with mean μ and variance σ².
The standard normal distribution has been tabulated (usually in the form of values of the cumulative distribution function Φ), and the other normal distributions are simple transformations, as described above, of the standard one. Therefore, one can use tabulated values of the cdf of the standard normal distribution to find values of the cdf of a general normal distribution.
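A minimal sketch of this in Python, using the erf form of Φ in place of a printed table; the numbers 100, 15, 85 and 130 are arbitrary values chosen for the example.

```python
import math

def standard_normal_cdf(z):
    """Phi(z) for the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_cdf(x, mu, sigma):
    """Cdf of N(mu, sigma^2), obtained by standardizing: Pr(X <= x) = Phi((x - mu) / sigma)."""
    return standard_normal_cdf((x - mu) / sigma)

# Probability that X ~ N(100, 15^2) falls between 85 and 130.
mu, sigma = 100.0, 15.0
print(normal_cdf(130.0, mu, sigma) - normal_cdf(85.0, mu, sigma))
```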
Some of the first few moments of the normal distribution are:
Number | Raw moment | Central moment | Cumulant |
---|---|---|---|
0 | 1 | 1 | |
1 | μ | 0 | μ |
2 | μ² + σ² | σ² | σ² |
3 | μ³ + 3μσ² | 0 | 0 |
4 | μ⁴ + 6μ²σ² + 3σ⁴ | 3σ⁴ | 0 |
All cumulants of the normal distribution beyond the second are zero.
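The raw moments in the table can be checked numerically; below is a small Python sketch that integrates x^k times the density with a trapezoidal rule. The integration range of ±12σ and the step count are arbitrary choices for the illustration.

```python
import math

def raw_moment(k, mu, sigma, steps=200000):
    """Approximate E[X^k] for X ~ N(mu, sigma^2) by trapezoidal integration over mu +/- 12 sigma."""
    lo, hi = mu - 12.0 * sigma, mu + 12.0 * sigma
    h = (hi - lo) / steps
    def integrand(x):
        z = (x - mu) / sigma
        return (x ** k) * math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    total = 0.5 * (integrand(lo) + integrand(hi))
    for i in range(1, steps):
        total += integrand(lo + i * h)
    return total * h

mu, sigma = 1.5, 2.0
print(raw_moment(1, mu, sigma), mu)                                     # first raw moment
print(raw_moment(2, mu, sigma), mu**2 + sigma**2)                       # second raw moment
print(raw_moment(3, mu, sigma), mu**3 + 3*mu*sigma**2)                  # third raw moment
print(raw_moment(4, mu, sigma), mu**4 + 6*mu**2*sigma**2 + 3*sigma**4)  # fourth raw moment
```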
For computer simulations, it is often useful to generate values that have a normal distribution. There are several methods and the most basic is to invert the standard normal cdf. More efficient methods are also known, one such method being the Box-Muller transform. An even faster algorithm is the ziggurat algorithm.
The Box-Muller algorithm says that, if you have two numbers a and b uniformly distributed on (0, 1] (e.g. the output from a random number generator), then a standard normally distributed random variable is c, where

    c = √(−2 ln a) cos(2πb).
This is a consequence of the fact that the chi-square distribution with two degrees of freedom (see property 4 above) is an easily-generated exponential random variable.
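A minimal Python sketch of the Box-Muller transform; the sample size and the way the uniforms are drawn are our choices for the illustration.

```python
import math
import random

def box_muller():
    """Return two independent standard normal deviates from two uniforms on (0, 1]."""
    a = 1.0 - random.random()   # random.random() lies in [0, 1); shift so log(a) is finite
    b = random.random()
    r = math.sqrt(-2.0 * math.log(a))
    return r * math.cos(2.0 * math.pi * b), r * math.sin(2.0 * math.pi * b)

samples = [z for _ in range(100000) for z in box_muller()]
mean = sum(samples) / len(samples)
var = sum((z - mean) ** 2 for z in samples) / len(samples)
print(mean, var)   # should be close to 0 and 1
```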
The normal distribution has the very important property that under certain conditions, the distribution of a sum of a large number of independent variables is approximately normal.
This is the central limit theorem.
The practical importance of the central limit theorem is that the normal distribution can be used as an approximation to some other distributions.
For example, a binomial distribution with parameters n and p is approximately normal for large n (the approximating normal distribution has mean μ = np and variance σ² = np(1 − p)), and a Poisson distribution with parameter λ is approximately normal for large λ (the approximating normal distribution has mean μ = λ and variance σ² = λ).
Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.
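To illustrate the binomial case, the following Python sketch compares an exact binomial probability with the normal approximation, using a continuity correction; the particular n, p and k are arbitrary.

```python
import math

def binom_cdf(k, n, p):
    """Exact Pr(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    """Cdf of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

n, p, k = 100, 0.3, 35
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
print(binom_cdf(k, n, p))               # exact binomial probability
print(normal_cdf(k + 0.5, mu, sigma))   # normal approximation with continuity correction
```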
The normal distributions are infinitely divisible probability distributions.
The normal distributions are strictly stable probability distributions.
About 68% of values drawn from a standard normal distribution are within one standard deviation of the mean; about 95% of the values are within two standard deviations, and about 99.7% lie within three standard deviations. This is known as the "68-95-99.7 rule" or the "empirical rule".
To be more precise, the area under the curve between μ − nσ and μ + nσ is

    Φ(n) − Φ(−n) = erf(n/√2),

where erf(x) is the error function. To six decimal places, the values for the 1, 2 and 3 sigma points are 0.682689, 0.954499 and 0.997300, respectively.
The normal distribution is a two-parameter exponential family with natural parameters μ/σ² and −1/(2σ²), and natural statistics x and x².
Consider the complex Gaussian random variable

    Z = X + iY,

where X and Y are real, independent, zero-mean Gaussian variables with equal variances σ_r². The pdf of the joint variables is then

    f(x, y) = (1/(2πσ_r²)) exp(−(x² + y²)/(2σ_r²)).

Because σ_Z² = 2σ_r², the resulting PDF for the complex Gaussian variable Z is

    f(z) = (1/(πσ_Z²)) exp(−|z|²/σ_Z²).
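A quick Monte Carlo check of the variance relation σ_Z² = 2σ_r², as a Python sketch; the value of σ_r and the sample size are arbitrary.

```python
import random

sigma_r = 1.5
n = 200000
# Draw Z = X + iY with X, Y independent zero-mean Gaussians of standard deviation sigma_r.
zs = [complex(random.gauss(0.0, sigma_r), random.gauss(0.0, sigma_r)) for _ in range(n)]
var_z = sum(abs(z) ** 2 for z in zs) / n   # E|Z|^2 for a zero-mean complex Gaussian
print(var_z, 2 * sigma_r ** 2)             # both close to 4.5
```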
Many scores are derived from the normal distribution, including percentile ranks ("percentiles"), normal curve equivalents, stanines, z-scores, and T-scores. Additionally, a number of behavioral statistical procedures are based on the assumption that scores are normally distributed; for example, t-tests and ANOVAs (see below). Bell curve grading assigns relative grades based on a normal distribution of scores.
Normality tests check a given set of data for similarity to the normal distribution. The null hypothesis is that the data are normally distributed, so a sufficiently small P-value indicates non-normal data.
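A minimal sketch of such a test in Python, assuming NumPy and SciPy are available; it uses the Kolmogorov-Smirnov test with parameters estimated from the sample, which makes the resulting P-value only approximate (a Lilliefors-type correction would be needed for an exact test).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=500)   # simulated data for the example

# Kolmogorov-Smirnov test against a normal distribution fitted to the sample.
# Null hypothesis: the data are normal; a small p-value suggests non-normality.
statistic, p_value = stats.kstest(data, 'norm', args=(data.mean(), data.std(ddof=1)))
print(statistic, p_value)
```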
Suppose

    X_1, …, X_n

are independent and each is normally distributed with expectation μ and variance σ². In the language of statisticians, the observed values of these random variables make up a "sample from a normally distributed population." It is desired to estimate the "population mean" μ and the "population standard deviation" σ, based on the observed values of this sample. The joint probability density function of these random variables is

    f(x_1, …, x_n) = ∏_{i=1}^{n} (1/(σ√(2π))) exp(−(x_i − μ)²/(2σ²)) ∝ σ^(−n) exp(−∑_{i=1}^{n} (x_i − μ)²/(2σ²)).
(Note: Here the proportionality symbol means proportional as a function of μ and σ, not proportional as a function of x_1, …, x_n. That may be considered one of the differences between the statistician's point of view and the probabilist's point of view. The reason this is important will appear below.)
As a function of μ and σ this is the likelihood function

    L(μ, σ) ∝ σ^(−n) exp(−∑_{i=1}^{n} (x_i − μ)²/(2σ²)).
In the method of maximum likelihood, the values of μ and σ that maximize the likelihood function are taken to be estimates of the population parameters μ and σ.
Usually in maximizing a function of two variables one might consider partial derivatives. But here we will exploit the fact that the value of μ that maximizes the likelihood function with σ fixed does not depend on σ. Therefore, we can find that value of μ, then substitute it for μ in the likelihood function, and finally find the value of σ that maximizes the resulting expression.
It is evident that the likelihood function is a decreasing function of the sum

    ∑_{i=1}^{n} (x_i − μ)².

So we want the value of μ that minimizes this sum. Let

    x̄ = (x_1 + ⋯ + x_n)/n

be the "sample mean". Observe that

    ∑_{i=1}^{n} (x_i − μ)² = ∑_{i=1}^{n} (x_i − x̄)² + n(x̄ − μ)².

Only the last term depends on μ, and it is minimized by

    μ̂ = x̄.

That is the maximum-likelihood estimate of μ. When we substitute that estimate for μ in the likelihood function, we get

    L(x̄, σ) ∝ σ^(−n) exp(−∑_{i=1}^{n} (x_i − x̄)²/(2σ²)).
It is conventional to denote the "loglikelihood function", i.e., the logarithm of the likelihood function, by a lower-case ℓ, and we have

    ℓ(x̄, σ) = C − n log σ − (1/(2σ²)) ∑_{i=1}^{n} (x_i − x̄)²

(with C a constant not depending on σ), and then

    dℓ/dσ = −n/σ + (1/σ³) ∑_{i=1}^{n} (x_i − x̄)² = (n/σ³) ((1/n) ∑_{i=1}^{n} (x_i − x̄)² − σ²).

This derivative is positive, zero, or negative according as σ² is between 0 and

    (1/n) ∑_{i=1}^{n} (x_i − x̄)²,

or equal to that quantity, or greater than that quantity.
Consequently this average of squares of residuals is the maximum-likelihood estimate of σ², and its square root is the maximum-likelihood estimate of σ. This estimator is biased, but has a smaller mean squared error than the usual unbiased estimator, which is n/(n − 1) times this estimator.
The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is subtle. It involves the spectral theorem and the reason it can be better to view a scalar as the trace of a 1×1 matrix than as a mere scalar. See estimation of covariance matrices.
The maximum likelihood estimator of the population mean from a sample is an unbiased estimator of the mean, as is the variance when the mean of the population is known a priori. However, if we are faced with a sample and have no knowledge of the mean or the variance of the population from which it is drawn, the unbiased estimator of the variance is

    s² = (1/(n − 1)) ∑_{i=1}^{n} (X_i − X̄)².
This "sample variance" follows a Gamma distribution if all X are independent identically distributed (iid):
Approximately normal distributions occur in many situations, as a result of the central limit theorem. When there is reason to suspect the presence of a large number of small effects acting additively and independently, it is reasonable to assume that observations will be normal. There are statistical methods to empirically test that assumption, for example the Kolmogorov-Smirnov test.
Effects can also act as multiplicative (rather than additive) modifications. In that case, the assumption of normality is not justified, and it is the logarithm of the variable of interest that is normally distributed. The distribution of the directly observed variable is then called log-normal.
Finally, if there is a single external influence which has a large effect on the variable under consideration, the assumption of normality is not justified either. This is true even if, when the external variable is held constant, the resulting marginal distributions are indeed normal. The full distribution will be a superposition of normal variables, which is not in general normal. This is related to the theory of errors (see below).
To summarize, approximate normality is sometimes assumed in the situations discussed more fully below.
Of relevance to biology and economics is the fact that complex systems tend to display power laws rather than normality.
Light intensity from a single source varies with time, and thermal fluctuations can be observed if the light is analyzed at sufficiently high time resolution. The intensity is usually assumed to be normally distributed. Quantum mechanics interprets measurements of light intensity as photon counting, and the natural assumption in this setting is the Poisson distribution. When light intensity is integrated over times longer than the coherence time and the photon count is large, the Poisson-to-normal limit is appropriate.
Normality is the central assumption of the mathematical theory of errors. Similarly, in statistical model-fitting, an indicator of goodness of fit is that the residuals (as the errors are called in that setting) be independent and normally distributed. The assumption is that any deviation from normality needs to be explained. In that sense, both in model-fitting and in the theory of errors, normality is the only observation that need not be explained, being expected. However, if the original data are not normally distributed (for instance if they follow a Cauchy distribution), then the residuals will also not be normally distributed. This fact is usually ignored in practice.
Repeated measurements of the same quantity are expected to yield results which are clustered around a particular value. If all major sources of errors have been taken into account, it is assumed that the remaining error must be the result of a large number of very small additive effects, and hence normal. Deviations from normality are interpreted as indications of systematic errors which have not been taken into account. Whether this assumption is valid is debatable.
The size of full-grown animals is approximately lognormal. The evidence, and an explanation based on models of growth, was first published in the 1932 book Problems of Relative Growth by Julian Huxley.
Differences in size due to sexual dimorphism, or other polymorphisms like the worker/soldier/queen division in social insects, further make the distribution of sizes deviate from lognormality.
The assumption that linear size of biological specimens is normal (rather than lognormal) leads to a non-normal distribution of weight (since weight or volume is roughly proportional to the 2nd or 3rd power of length, and Gaussian distributions are only preserved by linear transformations), and conversely assuming that weight is normal leads to non-normal lengths. This is a problem, because there is no a priori reason why one of length, or body mass, and not the other, should be normally distributed. Lognormal distributions, on the other hand, are preserved by powers so the "problem" goes away if lognormality is assumed.
On the other hand, there are some biological measures where normality is assumed, such as blood pressure of adult humans. This is supposed to be normally distributed, but only after separating males and females into different populations (each of which is normally distributed).
Because of the exponential nature of inflation, financial indicators such as stock values or commodity prices make good examples of multiplicative behavior. As such, periodic changes in them (for example, yearly changes) should not be expected to be normal, but perhaps lognormal. This was the theory proposed in 1900 by Louis Bachelier. However, Benoît Mandelbrot, the popularizer of fractals, showed that even the assumption of lognormality is flawed: the changes in logarithm over short periods (such as a day) are approximated well by distributions that do not have a finite variance, and therefore the central limit theorem does not apply. Rather, the sum of many such changes gives log-Lévy distributions.
Sometimes, the difficulty and number of questions on an IQ test are selected in order to yield normally distributed results. Alternatively, the raw test scores are converted to IQ values by fitting them to the normal distribution. In either case, it is the deliberate result of test construction or score interpretation that leads to IQ scores being normally distributed for the majority of the population. However, the question of whether intelligence itself is normally distributed is more involved, because intelligence is a latent variable and its distribution cannot be observed directly. The Bell Curve is a controversial book on the topic of the heritability of intelligence. Despite its title, the book does not primarily address whether IQ is normally distributed.
The normal distribution is widely used in scientific and statistical computing. Therefore, it has been implemented in various ways.
The GNU Scientific Library calculates values of the standard normal CDF using piecewise approximations by rational functions. Another approximation method uses third-degree polynomials on intervals [1].
Generation of deviates from the unit normal is normally done using the Box-Muller method of choosing an angle uniformly and a radius whose square is exponentially distributed, and then transforming to (normally distributed) x and y coordinates. If log, cos or sin are expensive, a simple alternative is to sum 12 uniform deviates on [−1/2, 1/2]. This is equivalent to a twelfth-order polynomial approximation to the normal distribution and is quite usable in many applications.
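A minimal Python sketch of this sum-of-12-uniforms approximation; the sample size is an arbitrary choice.

```python
import random

def approx_standard_normal():
    """Approximate standard normal deviate: sum of 12 uniforms on [-1/2, 1/2] (mean 0, variance 1)."""
    return sum(random.uniform(-0.5, 0.5) for _ in range(12))

samples = [approx_standard_normal() for _ in range(100000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)   # close to 0 and 1; note that the tails are truncated at +/- 6
```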
A method that is much faster than the Box-Muller transform but which is still exact is the so-called ziggurat algorithm developed by George Marsaglia. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in the roughly 3% of cases where the combination of those two falls outside the "core of the ziggurat" does a kind of rejection sampling using logarithms, exponentials and more uniform random numbers have to be employed.
In Microsoft Excel the function NORMSINV() calculates the inverse cdf (quantile function) of the standard normal distribution. Critics of Excel have reported that some of its statistical algorithms are flawed.[2]
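For comparison, the same quantile computation can be sketched in Python (3.8+) with the standard library's statistics.NormalDist; the probability 0.975 below is just an example value.

```python
from statistics import NormalDist

std_normal = NormalDist(mu=0.0, sigma=1.0)
print(std_normal.inv_cdf(0.975))   # about 1.959964, the 97.5% quantile (NORMSINV-style lookup)
print(std_normal.cdf(1.959964))    # about 0.975, the forward cdf
```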