In linear algebra and statistics, the pseudo-determinant[1] is the product of all non-zero eigenvalues of a square matrix. It coincides with the regular determinant when the matrix is non-singular.
The pseudo-determinant of a square n-by-n matrix A may be defined as:

[math]\displaystyle{ |\mathbf{A}|_+ = \lim_{\alpha\to 0} \frac{|\mathbf{A} + \alpha \mathbf{I}|}{\alpha^{\,n-\operatorname{rank}(\mathbf{A})}}, }[/math]

where |A| denotes the usual determinant, I denotes the identity matrix and rank(A) denotes the rank of A.[2]
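As a minimal numerical sketch of this definition (the helper names pdet_limit and pdet_eigen, the finite α, and the tolerance are illustrative assumptions, not part of the definition), the limit can be approximated with a small α and compared against the product of the eigenvalues treated as non-zero:

```python
import numpy as np

def pdet_limit(A, alpha=1e-9):
    """Approximate |A|_+ via |A + alpha*I| / alpha**(n - rank(A)) with a small alpha."""
    n = A.shape[0]
    r = np.linalg.matrix_rank(A)
    return np.linalg.det(A + alpha * np.eye(n)) / alpha ** (n - r)

def pdet_eigen(A, tol=1e-12):
    """Product of the eigenvalues whose magnitude exceeds tol (1 if there are none)."""
    eig = np.linalg.eigvals(A)
    nonzero = eig[np.abs(eig) > tol]
    return nonzero.prod() if nonzero.size else 1.0

A = np.array([[2.0, 0.0],
              [0.0, 0.0]])      # singular: det(A) = 0, but rank(A) = 1
print(pdet_limit(A))            # ~2.0
print(pdet_eigen(A))            # 2.0
```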
The Vahlen matrix of a conformal transformation, the Möbius transformation (i.e. [math]\displaystyle{ (ax + b)(cx + d)^{-1} }[/math] for [math]\displaystyle{ a, b, c, d \in \mathcal{G}(p, q) }[/math]), is defined as [math]\displaystyle{ [f] = \begin{bmatrix}a & b \\c & d \end{bmatrix} }[/math]. By the pseudo-determinant of the Vahlen matrix for the conformal transformation, we mean

[math]\displaystyle{ \operatorname{pdet}[f] = ad^\dagger - bc^\dagger. }[/math]
If [math]\displaystyle{ \operatorname{pdet}[f] \gt 0 }[/math], the transformation is sense-preserving (a rotation), whereas if [math]\displaystyle{ \operatorname{pdet}[f] \lt 0 }[/math], the transformation is sense-reversing (a reflection).
If [math]\displaystyle{ A }[/math] is positive semi-definite, then the singular values and eigenvalues of [math]\displaystyle{ A }[/math] coincide. In this case, if the singular value decomposition (SVD) is available, then [math]\displaystyle{ |\mathbf{A}|_+ }[/math] may be computed as the product of the non-zero singular values. If all singular values are zero, then the pseudo-determinant is 1.
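A short sketch of this computation, assuming NumPy (the name pdet_svd and the rank tolerance are illustrative choices, not from the article):

```python
import numpy as np

def pdet_svd(A, tol=None):
    """Pseudo-determinant of a positive semi-definite matrix as the
    product of its singular values above a small tolerance."""
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        tol = max(A.shape) * np.finfo(s.dtype).eps * (s[0] if s.size else 0.0)
    nonzero = s[s > tol]
    return nonzero.prod() if nonzero.size else 1.0   # empty product -> 1

A = np.array([[4.0, 2.0],
              [2.0, 1.0]])      # positive semi-definite, rank 1, eigenvalues {5, 0}
print(pdet_svd(A))              # 5.0
```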
Supposing [math]\displaystyle{ \operatorname{rank}(A) = k }[/math], so that k is the number of non-zero singular values, we may write [math]\displaystyle{ A = PP^\dagger }[/math] where [math]\displaystyle{ P }[/math] is some n-by-k matrix and the dagger is the conjugate transpose. The singular values of [math]\displaystyle{ A }[/math] are the squares of the singular values of [math]\displaystyle{ P }[/math] and thus we have [math]\displaystyle{ |A|_+ = \left|P^\dagger P\right| }[/math], where [math]\displaystyle{ \left|P^\dagger P\right| }[/math] is the usual determinant in k dimensions. Further, if [math]\displaystyle{ P }[/math] is written as the block column [math]\displaystyle{ P = \left(\begin{smallmatrix} C \\ D \end{smallmatrix}\right) }[/math], then it holds, for any heights of the blocks [math]\displaystyle{ C }[/math] and [math]\displaystyle{ D }[/math], that [math]\displaystyle{ |A|_+ = \left|C^\dagger C + D^\dagger D\right| }[/math].
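These identities can be checked numerically; in the sketch below the random factor P, the 3-row/2-row block split, and the eigenvalue tolerance are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 5, 2
P = rng.standard_normal((n, k))        # n-by-k factor, so A = P P^dagger has rank k
A = P @ P.conj().T

eig = np.linalg.eigvalsh(A)
pdet_A = eig[eig > 1e-10].prod()       # product of the k non-zero eigenvalues of A

det_PtP = np.linalg.det(P.conj().T @ P)             # ordinary k-by-k determinant

C, D = P[:3], P[3:]                                  # any split of P into block rows
det_blocks = np.linalg.det(C.conj().T @ C + D.conj().T @ D)

print(pdet_A, det_PtP, det_blocks)     # the three values agree up to rounding
```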
If a statistical procedure ordinarily compares distributions in terms of the determinants of variance-covariance matrices then, in the case of singular matrices, this comparison can be undertaken by using a combination of the ranks of the matrices and their pseudo-determinants, with the matrix of higher rank being counted as "largest" and the pseudo-determinants only being used if the ranks are equal.[3] Thus pseudo-determinants are sometimes presented in the outputs of statistical programs in cases where covariance matrices are singular.[4]
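As an illustrative sketch of that comparison rule (the helper rank_and_pdet and its tolerance are hypothetical, not taken from any particular statistical package), singular covariance matrices can be compared lexicographically by rank first and pseudo-determinant second:

```python
import numpy as np

def rank_and_pdet(S, tol=1e-10):
    """Comparison key for (possibly singular) covariance matrices:
    rank first, pseudo-determinant second."""
    eig = np.linalg.eigvalsh(S)
    nonzero = eig[eig > tol]
    pdet = nonzero.prod() if nonzero.size else 1.0
    return nonzero.size, pdet

S1 = np.diag([3.0, 2.0, 0.0])   # rank 2, pseudo-determinant 6
S2 = np.diag([4.0, 0.0, 0.0])   # rank 1, pseudo-determinant 4
print(rank_and_pdet(S1) > rank_and_pdet(S2))   # True: S1 counts as "largest" by rank
```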
Original source: https://en.wikipedia.org/wiki/Pseudo-determinant.