The joint distribution of the elements of the sample covariance matrix of observations from a multivariate normal distribution. Let the results of observations $ X _ {1} \dots X _ {n} $ be independent and have a $ p $-dimensional normal distribution $ N( \mu , \Sigma ) $
with mean vector $ \mu $
and non-degenerate covariance matrix $ \Sigma $.
Then the joint density of the elements of the matrix $ A= \sum _ {i= 1} ^ {n} ( X _ {i} - \overline{X} ) ( X _ {i} - \overline{X} ) ^ \prime $, where $ \overline{X} = n ^ {-1} \sum _ {i= 1} ^ {n} X _ {i} $ is the sample mean vector,
is given by the formula
$$ w( n, \Sigma ) = \frac{| A | ^ {( n- p- 2)/2 } e ^ {- \mathop{\rm tr} ( A \Sigma ^ {- 1} )/2 } }{2 ^ {( n- 1) p/2 } \pi ^ {p( p- 1)/4 } | \Sigma | ^ {( n- 1)/2 } \prod _ { i= 1} ^ { p } \Gamma \left ( \frac{n- i}{2} \right ) } $$
( $ \mathop{\rm tr} M $ denotes the trace of a matrix $ M $), if the matrix $ A $ is positive definite, and $ w( n, \Sigma )= 0 $ otherwise. The Wishart distribution with $ n $ degrees of freedom and with matrix $ \Sigma $ is defined as the $ p( p+ 1)/2 $-dimensional distribution $ W( n, \Sigma ) $ with density $ w( n, \Sigma ) $. The sample covariance matrix $ S= A/( n- 1) $, which is an unbiased estimator of the matrix $ \Sigma $, has a Wishart distribution.
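As a concrete check of this formula one may evaluate the log-density directly and compare it with a standard implementation. The sketch below assumes NumPy and SciPy; note that the degrees-of-freedom parameter of scipy.stats.wishart corresponds to $ n- 1 $ in the notation used here, and that the factor $ \pi ^ {p( p- 1)/4 } \prod _ {i= 1} ^ {p} \Gamma ( ( n- i)/2 ) $ in the denominator is the multivariate gamma function $ \Gamma _ {p} ( ( n- 1)/2 ) $. The values of $ n $, $ p $ and $ \Sigma $ are purely illustrative.

```python
# Numerical check of the density formula above (sketch; assumes NumPy and SciPy).
# scipy.stats.wishart is parametrized by the degrees of freedom directly, which
# corresponds to n - 1 in the notation of this article (n observations).
import numpy as np
from scipy import stats
from scipy.special import multigammaln  # log of the multivariate gamma function

def log_w(A, n, Sigma):
    """Log of w(n, Sigma) evaluated at a matrix A."""
    p = Sigma.shape[0]
    sign_A, logdet_A = np.linalg.slogdet(A)
    _, logdet_S = np.linalg.slogdet(Sigma)
    if sign_A <= 0:
        return -np.inf  # the density is zero unless A is positive definite
    quad = np.trace(A @ np.linalg.inv(Sigma))
    # numerator: |A|^{(n-p-2)/2} exp(-tr(A Sigma^{-1})/2)
    log_num = 0.5 * (n - p - 2) * logdet_A - 0.5 * quad
    # denominator: 2^{(n-1)p/2} |Sigma|^{(n-1)/2} Gamma_p((n-1)/2)
    log_den = (0.5 * (n - 1) * p * np.log(2.0)
               + 0.5 * (n - 1) * logdet_S
               + multigammaln(0.5 * (n - 1), p))
    return log_num - log_den

rng = np.random.default_rng(0)
p, n = 3, 10  # illustrative dimensions
Sigma = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 1.5]])
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Xc = X - X.mean(axis=0)
A = Xc.T @ Xc

print(log_w(A, n, Sigma))                              # formula above
print(stats.wishart.logpdf(A, df=n - 1, scale=Sigma))  # should agree
```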
The Wishart distribution is a basic distribution in multivariate statistical analysis; it is the $ p $-dimensional generalization (in the sense above) of the $ 1 $-dimensional "chi-squared" distribution.
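For $ p= 1 $ and $ \Sigma = \sigma ^ {2} $, the matrix $ A $ is a scalar distributed as $ \sigma ^ {2} $ times a "chi-squared" variable with $ n- 1 $ degrees of freedom. A minimal numerical illustration of this reduction, again assuming SciPy, is:

```python
# Sketch of the p = 1 case (assumes SciPy): with Sigma = sigma^2 the matrix A is a
# scalar distributed as sigma^2 times a chi-squared variable with n - 1 degrees of
# freedom.
import numpy as np
from scipy import stats

n, sigma2 = 8, 2.5          # illustrative values
a = np.linspace(0.5, 30.0, 5)

wishart_pdf = stats.wishart.pdf(a, df=n - 1, scale=sigma2)
# density of sigma^2 * chi^2_{n-1}: chi2.pdf(a / sigma^2, n - 1) / sigma^2
chi2_pdf = stats.chi2.pdf(a / sigma2, df=n - 1) / sigma2

print(np.allclose(wishart_pdf, chi2_pdf))  # True
```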
If the independent random vectors $ X $ and $ Y $ have Wishart distributions $ W( n _ {1} , \Sigma ) $ and $ W ( n _ {2} , \Sigma ) $, respectively, then the vector $ X + Y $ has the Wishart distribution $ W( n _ {1} + n _ {2} , \Sigma ) $.
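This additivity property can be illustrated by simulation. The sketch below, assuming SciPy and stating degrees of freedom in the standard parametrization used by scipy.stats.wishart ($ n- 1 $ in the notation above), compares traces of sums of independent Wishart samples with traces of samples drawn directly from the Wishart distribution with the combined degrees of freedom; the scale matrix and sample size are illustrative.

```python
# Monte Carlo sketch of the additivity property (assumes SciPy); df denotes the
# standard degrees-of-freedom parameter of scipy.stats.wishart. Traces of X + Y
# are compared with traces of direct samples with df1 + df2 degrees of freedom
# via a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])
df1, df2, m = 5, 7, 2000

X = stats.wishart.rvs(df=df1, scale=Sigma, size=m, random_state=rng)
Y = stats.wishart.rvs(df=df2, scale=Sigma, size=m, random_state=rng)
Z = stats.wishart.rvs(df=df1 + df2, scale=Sigma, size=m, random_state=rng)

# Compare a one-dimensional summary (the trace) of X + Y against Z.
t_sum = np.trace(X + Y, axis1=1, axis2=2)
t_direct = np.trace(Z, axis1=1, axis2=2)
print(stats.ks_2samp(t_sum, t_direct).pvalue)  # typically large: no evidence of a difference
```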
The Wishart distribution was first used by J. Wishart [1].
[1] J. Wishart, Biometrika, 20A (1928) pp. 32–52
[2] T.W. Anderson, "An introduction to multivariate statistical analysis", Wiley (1958)
[a1] A.M. Kshirsagar, "Multivariate analysis", M. Dekker (1972)
[a2] R.J. Muirhead, "Aspects of multivariate statistical theory", Wiley (1982)