Notation | [math]\displaystyle{ {\rm B}_{p}(a,b) }[/math]
---|---
Parameters | [math]\displaystyle{ a,b }[/math]
Support | [math]\displaystyle{ p\times p }[/math] matrices with both [math]\displaystyle{ U }[/math] and [math]\displaystyle{ I_p-U }[/math] positive definite
PDF | [math]\displaystyle{ \left\{\beta_p\left(a,b\right)\right\}^{-1} \det\left(U\right)^{a-(p+1)/2}\det\left(I_p-U\right)^{b-(p+1)/2} }[/math]
CF | [math]\displaystyle{ {}_1F_1\left(a;a+b;iZ\right) }[/math]

In statistics, the matrix variate beta distribution is a generalization of the beta distribution. If [math]\displaystyle{ U }[/math] is a [math]\displaystyle{ p\times p }[/math] positive definite matrix with a matrix variate beta distribution, and [math]\displaystyle{ a,b\gt (p-1)/2 }[/math] are real parameters, we write [math]\displaystyle{ U\sim B_p\left(a,b\right) }[/math] (sometimes [math]\displaystyle{ B_p^I\left(a,b\right) }[/math]). The probability density function for [math]\displaystyle{ U }[/math] is:

[math]\displaystyle{ \left\{\beta_p\left(a,b\right)\right\}^{-1} \det\left(U\right)^{a-(p+1)/2}\det\left(I_p-U\right)^{b-(p+1)/2}. }[/math]
Here [math]\displaystyle{ \beta_p\left(a,b\right) }[/math] is the multivariate beta function:

[math]\displaystyle{ \beta_p\left(a,b\right)=\frac{\Gamma_p\left(a\right)\Gamma_p\left(b\right)}{\Gamma_p\left(a+b\right)}, }[/math]
where [math]\displaystyle{ \Gamma_p\left(a\right) }[/math] is the multivariate gamma function given by

[math]\displaystyle{ \Gamma_p\left(a\right)=\pi^{p(p-1)/4}\prod_{i=1}^{p}\Gamma\left(a-(i-1)/2\right). }[/math]
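For numerical work, the density can be evaluated directly from these definitions. The following is a minimal Python sketch (not part of the original article; the helper names `logbeta_p` and `matrix_beta_logpdf` are ad hoc), using `scipy.special.multigammaln` for the logarithm of the multivariate gamma function:

```python
import numpy as np
from scipy.special import multigammaln  # log of the multivariate gamma function

def logbeta_p(a, b, p):
    """Log of the multivariate beta function beta_p(a, b)."""
    return multigammaln(a, p) + multigammaln(b, p) - multigammaln(a + b, p)

def matrix_beta_logpdf(U, a, b):
    """Log-density of B_p(a, b) at a p x p matrix U (U and I_p - U positive definite)."""
    p = U.shape[0]
    sign_u, logdet_u = np.linalg.slogdet(U)
    sign_iu, logdet_iu = np.linalg.slogdet(np.eye(p) - U)
    if sign_u <= 0 or sign_iu <= 0:
        return -np.inf  # determinant not positive: certainly outside the support
    return (-logbeta_p(a, b, p)
            + (a - (p + 1) / 2) * logdet_u
            + (b - (p + 1) / 2) * logdet_iu)

# Example: p = 2, a = b = 3, evaluated at U = 0.5 * I_2 (inside the support).
print(matrix_beta_logpdf(0.5 * np.eye(2), 3.0, 3.0))
```

For [math]\displaystyle{ p=1 }[/math] this reduces to the log-density of the ordinary scalar beta distribution.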
If [math]\displaystyle{ U\sim B_p(a,b) }[/math] then the density of [math]\displaystyle{ X=U^{-1} }[/math] is given by

[math]\displaystyle{ \left\{\beta_p\left(a,b\right)\right\}^{-1}\det\left(X\right)^{-(a+b)}\det\left(X-I_p\right)^{b-(p+1)/2}, }[/math]
provided that [math]\displaystyle{ X\gt I_p }[/math] and [math]\displaystyle{ a,b\gt (p-1)/2 }[/math].
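This follows by the change of variables [math]\displaystyle{ U=X^{-1} }[/math]. As a sketch of the standard argument, the Jacobian of the transformation is [math]\displaystyle{ \det\left(X\right)^{-(p+1)} }[/math] and [math]\displaystyle{ \det\left(I_p-X^{-1}\right)=\det\left(X\right)^{-1}\det\left(X-I_p\right) }[/math], so

[math]\displaystyle{ f_X(X)=\left\{\beta_p\left(a,b\right)\right\}^{-1}\det\left(X\right)^{-\left(a-(p+1)/2\right)}\det\left(I_p-X^{-1}\right)^{b-(p+1)/2}\det\left(X\right)^{-(p+1)}=\left\{\beta_p\left(a,b\right)\right\}^{-1}\det\left(X\right)^{-(a+b)}\det\left(X-I_p\right)^{b-(p+1)/2}. }[/math]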
If [math]\displaystyle{ U\sim B_p(a,b) }[/math] and [math]\displaystyle{ H }[/math] is a constant [math]\displaystyle{ p\times p }[/math] orthogonal matrix, then [math]\displaystyle{ HUH^T\sim B_p(a,b). }[/math]
Also, if [math]\displaystyle{ H }[/math] is a random orthogonal [math]\displaystyle{ p\times p }[/math] matrix which is independent of [math]\displaystyle{ U }[/math], then [math]\displaystyle{ HUH^T\sim B_p(a,b) }[/math], distributed independently of [math]\displaystyle{ H }[/math].
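This invariance can be read off from the density: the map [math]\displaystyle{ U\mapsto HUH^T }[/math] has unit Jacobian for orthogonal [math]\displaystyle{ H }[/math], and

[math]\displaystyle{ \det\left(HUH^T\right)=\det\left(U\right),\qquad \det\left(I_p-HUH^T\right)=\det\left(H\left(I_p-U\right)H^T\right)=\det\left(I_p-U\right), }[/math]

so the density (and the support) is unchanged by the transformation.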
If [math]\displaystyle{ A }[/math] is a constant [math]\displaystyle{ q\times p }[/math] matrix of rank [math]\displaystyle{ q }[/math], with [math]\displaystyle{ q\leq p }[/math], then [math]\displaystyle{ AUA^T }[/math] has a generalized matrix variate beta distribution, specifically [math]\displaystyle{ AUA^T\sim GB_q\left(a,b;AA^T,0\right) }[/math].
If [math]\displaystyle{ U\sim B_p\left(a,b\right) }[/math] and we partition [math]\displaystyle{ U }[/math] as

[math]\displaystyle{ U=\begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix}, }[/math]
where [math]\displaystyle{ U_{11} }[/math] is [math]\displaystyle{ p_1\times p_1 }[/math] and [math]\displaystyle{ U_{22} }[/math] is [math]\displaystyle{ p_2\times p_2 }[/math], then defining the Schur complement [math]\displaystyle{ U_{22\cdot 1} }[/math] as [math]\displaystyle{ U_{22}-U_{21}{U_{11}}^{-1}U_{12} }[/math] gives the following results:

- [math]\displaystyle{ U_{11} }[/math] is independent of [math]\displaystyle{ U_{22\cdot 1} }[/math]
- [math]\displaystyle{ U_{11}\sim B_{p_1}\left(a,b\right) }[/math]
- [math]\displaystyle{ U_{22\cdot 1}\sim B_{p_2}\left(a-p_1/2,b\right) }[/math]
Mitra proves the following theorem which illustrates a useful property of the matrix variate beta distribution. Suppose [math]\displaystyle{ S_1,S_2 }[/math] are independent Wishart [math]\displaystyle{ p\times p }[/math] matrices, [math]\displaystyle{ S_1\sim W_p(n_1,\Sigma) }[/math], [math]\displaystyle{ S_2\sim W_p(n_2,\Sigma) }[/math]. Assume that [math]\displaystyle{ \Sigma }[/math] is positive definite and that [math]\displaystyle{ n_1+n_2\geq p }[/math]. If

[math]\displaystyle{ U=S^{-1/2}S_1\left(S^{-1/2}\right)^T, }[/math]
where [math]\displaystyle{ S=S_1+S_2 }[/math] and [math]\displaystyle{ S^{1/2} }[/math] denotes the symmetric positive definite square root of [math]\displaystyle{ S }[/math], then [math]\displaystyle{ U }[/math] has a matrix variate beta distribution [math]\displaystyle{ B_p(n_1/2,n_2/2) }[/math]. In particular, the distribution of [math]\displaystyle{ U }[/math] does not depend on [math]\displaystyle{ \Sigma }[/math].
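Mitra's construction also suggests a direct way to simulate from [math]\displaystyle{ B_p(n_1/2,n_2/2) }[/math]. Below is a minimal Python sketch of that construction (not part of the original article; `sample_matrix_beta` and `sym_inv_sqrt` are ad hoc names), drawing the Wishart matrices with `scipy.stats.wishart`:

```python
import numpy as np
from scipy.stats import wishart

def sym_inv_sqrt(S):
    """Symmetric positive definite inverse square root of S."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def sample_matrix_beta(n1, n2, Sigma, rng=None):
    """Draw U = S^{-1/2} S1 S^{-1/2} with S = S1 + S2 (Mitra's construction)."""
    S1 = wishart.rvs(df=n1, scale=Sigma, random_state=rng)
    S2 = wishart.rvs(df=n2, scale=Sigma, random_state=rng)
    R = sym_inv_sqrt(S1 + S2)
    return R @ S1 @ R.T

# Example: p = 2, n1 = 6, n2 = 8; Sigma is an arbitrary positive definite scale matrix.
rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
U = sample_matrix_beta(6, 8, Sigma, rng)
print(np.linalg.eigvalsh(U))              # eigenvalues of U, should lie in (0, 1)
print(np.linalg.eigvalsh(np.eye(2) - U))  # eigenvalues of I - U, should be positive
```

Because the distribution of [math]\displaystyle{ U }[/math] does not depend on [math]\displaystyle{ \Sigma }[/math], any positive definite scale matrix can be used here; the printed eigenvalues simply confirm that the draw has both [math]\displaystyle{ U }[/math] and [math]\displaystyle{ I_p-U }[/math] positive definite.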
Original source: https://en.wikipedia.org/wiki/Matrix_variate_beta_distribution.