In statistics, modes of variation[1] are a continuously indexed set of vectors or functions that are centered at a mean and are used to depict the variation in a population or sample. Typically, variation patterns in the data can be decomposed in descending order of eigenvalues, with the directions represented by the corresponding eigenvectors or eigenfunctions. Modes of variation provide a visualization of this decomposition and an efficient description of variation around the mean. Both in principal component analysis (PCA) and in functional principal component analysis (FPCA), modes of variation play an important role in visualizing and describing the variation in the data contributed by each eigencomponent.[2] In real-world applications, the eigencomponents and associated modes of variation help interpret complex data, especially in exploratory data analysis (EDA).
Modes of variation are a natural extension of PCA and FPCA.
If a random vector [math]\displaystyle{ \mathbf{X}=(X_1, X_2, \cdots, X_p)^T }[/math] has the mean vector [math]\displaystyle{ \boldsymbol{\mu} }[/math], and the covariance matrix [math]\displaystyle{ \mathbf{\Sigma}_{p\times p} }[/math] with eigenvalues [math]\displaystyle{ \lambda_1\geq \lambda_2\geq \cdots \geq \lambda_p\geq0 }[/math] and corresponding orthonormal eigenvectors [math]\displaystyle{ \mathbf{e}_1, \mathbf{e}_2, \cdots,\mathbf{e}_p }[/math], by eigendecomposition of a real symmetric matrix, the covariance matrix [math]\displaystyle{ \mathbf{\Sigma} }[/math] can be decomposed as
[math]\displaystyle{ \mathbf{\Sigma}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^T=\sum_{k=1}^p \lambda_k \mathbf{e}_k \mathbf{e}_k^T, }[/math]
where [math]\displaystyle{ \mathbf{Q} }[/math] is an orthogonal matrix whose columns are the eigenvectors of [math]\displaystyle{ \mathbf{\Sigma} }[/math], and [math]\displaystyle{ \mathbf{\Lambda} }[/math] is a diagonal matrix whose entries are the eigenvalues of [math]\displaystyle{ \mathbf{\Sigma} }[/math]. By the Karhunen–Loève expansion for random vectors, one can express the centered random vector in the eigenbasis
[math]\displaystyle{ \mathbf{X}-\boldsymbol{\mu}=\sum_{k=1}^p \xi_k \mathbf{e}_k, }[/math]
where [math]\displaystyle{ \xi_k=\mathbf{e}_k^T(\mathbf{X}-\boldsymbol{\mu}) }[/math] is the principal component[3] associated with the [math]\displaystyle{ k }[/math]-th eigenvector [math]\displaystyle{ \mathbf{e}_k }[/math], with the properties
[math]\displaystyle{ \operatorname{E}(\xi_k)=0,\ \operatorname{Var}(\xi_k)=\lambda_k,\ \text{and}\ \operatorname{Cov}(\xi_k, \xi_l)=0\ \text{for}\ k\neq l. }[/math]
Then the [math]\displaystyle{ k }[/math]-th mode of variation of [math]\displaystyle{ \mathbf{X} }[/math] is the set of vectors, indexed by [math]\displaystyle{ \alpha }[/math],
[math]\displaystyle{ \mathbf{m}_{k,\alpha}=\boldsymbol{\mu}+\alpha\sqrt{\lambda_k}\mathbf{e}_k,\ \alpha\in[-A,A], }[/math]
where [math]\displaystyle{ A }[/math] is typically selected as [math]\displaystyle{ 2\ \text{or}\ 3 }[/math].
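The definition above can be sketched numerically. The following is a minimal illustration, not part of the article: given a mean vector and covariance matrix, it eigendecomposes the covariance and evaluates the [math]\displaystyle{ k }[/math]-th mode [math]\displaystyle{ \boldsymbol{\mu}+\alpha\sqrt{\lambda_k}\mathbf{e}_k }[/math] over a range of [math]\displaystyle{ \alpha }[/math] values (the function name and example matrix are illustrative choices).

```python
import numpy as np

def mode_of_variation(mu, Sigma, k, alphas):
    """k-th mode of variation (k is 1-based, as in the text):
    mu + alpha * sqrt(lambda_k) * e_k for each alpha."""
    eigvals, eigvecs = np.linalg.eigh(Sigma)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]          # reorder to descending
    lam = eigvals[order][k - 1]
    e = eigvecs[:, order][:, k - 1]
    return np.array([mu + a * np.sqrt(lam) * e for a in alphas])

# Toy example: diagonal covariance with eigenvalues 4 and 1.
mu = np.zeros(2)
Sigma = np.array([[4.0, 0.0], [0.0, 1.0]])
modes = mode_of_variation(mu, Sigma, k=1, alphas=[-2, -1, 0, 1, 2])
```

With [math]\displaystyle{ \alpha=0 }[/math] the mode reduces to the mean vector, and the endpoints [math]\displaystyle{ \alpha=\pm 2 }[/math] move two standard deviations along the first eigenvector.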
For a square-integrable random function [math]\displaystyle{ X(t), t \in \mathcal{T}\subset R^p }[/math], where typically [math]\displaystyle{ p=1 }[/math] and [math]\displaystyle{ \mathcal{T} }[/math] is an interval, denote the mean function by [math]\displaystyle{ \mu(t) = \operatorname{E}(X(t)) }[/math], and the covariance function by
[math]\displaystyle{ G(s, t) = \operatorname{Cov}(X(s), X(t)) = \sum_{k=1}^\infty \lambda_k \varphi_k(s) \varphi_k(t), }[/math]
where [math]\displaystyle{ \lambda_1\geq \lambda_2\geq \cdots \geq 0 }[/math] are the eigenvalues and [math]\displaystyle{ \{\varphi_1, \varphi_2, \cdots\} }[/math] are the orthonormal eigenfunctions of the linear Hilbert–Schmidt operator
[math]\displaystyle{ G: L^2(\mathcal{T})\rightarrow L^2(\mathcal{T}),\ G(f)=\int_\mathcal{T} G(s, t) f(s) ds. }[/math]
By the Karhunen–Loève theorem, one can express the centered function in the eigenbasis,
[math]\displaystyle{ X(t)-\mu(t)=\sum_{k=1}^\infty \xi_k \varphi_k(t), }[/math]
where
[math]\displaystyle{ \xi_k=\int_\mathcal{T} (X(t)-\mu(t))\varphi_k(t) dt }[/math]
is the [math]\displaystyle{ k }[/math]-th principal component with the properties
[math]\displaystyle{ \operatorname{E}(\xi_k)=0,\ \operatorname{Var}(\xi_k)=\lambda_k,\ \text{and}\ \operatorname{Cov}(\xi_k, \xi_l)=0\ \text{for}\ k\neq l. }[/math]
Then the [math]\displaystyle{ k }[/math]-th mode of variation of [math]\displaystyle{ X(t) }[/math] is the set of functions, indexed by [math]\displaystyle{ \alpha }[/math],
[math]\displaystyle{ m_{k,\alpha}(t)=\mu(t)+\alpha\sqrt{\lambda_k}\varphi_k(t),\ t\in\mathcal{T},\ \alpha\in[-A,A], }[/math]
that are viewed simultaneously over the range of [math]\displaystyle{ \alpha }[/math], usually for [math]\displaystyle{ A=2\ \text{or}\ 3 }[/math].[2]
The formulation above is derived from population properties; in real-world applications these quantities are unknown and must be estimated, the key step being estimation of the mean and covariance.
Suppose the data [math]\displaystyle{ \mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n }[/math] represent [math]\displaystyle{ n }[/math] independent drawings from some [math]\displaystyle{ p }[/math]-dimensional population [math]\displaystyle{ \mathbf{X} }[/math] with mean vector [math]\displaystyle{ \boldsymbol{\mu} }[/math] and covariance matrix [math]\displaystyle{ \mathbf{\Sigma} }[/math]. These data yield the sample mean vector [math]\displaystyle{ \overline{\mathbf{x}} }[/math], and the sample covariance matrix [math]\displaystyle{ \mathbf{S} }[/math] with eigenvalue-eigenvector pairs [math]\displaystyle{ (\hat{\lambda}_1, \hat{\mathbf{e}}_1), (\hat{\lambda}_2, \hat{\mathbf{e}}_2), \cdots, (\hat{\lambda}_p, \hat{\mathbf{e}}_p) }[/math]. Then the [math]\displaystyle{ k }[/math]-th mode of variation of [math]\displaystyle{ \mathbf{X} }[/math] can be estimated by
[math]\displaystyle{ \hat{\mathbf{m}}_{k,\alpha}=\overline{\mathbf{x}}+\alpha\sqrt{\hat{\lambda}_k}\hat{\mathbf{e}}_k,\ \alpha\in[-A,A]. }[/math]
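As a sketch of this estimation step (a toy example, not from the article; the simulated data and variable names are illustrative), one can form the sample mean and sample covariance from a data matrix and plug their eigendecomposition into the mode formula:

```python
import numpy as np

# Simulated sample: 500 draws from a 3-d population with
# variances roughly 9, 1, and 0.09 along the axes (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.3])

x_bar = X.mean(axis=0)              # sample mean vector
S = np.cov(X, rowvar=False)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]   # descending eigenvalues
lam_hat = eigvals[order]
e_hat = eigvecs[:, order]

# Estimated first mode of variation over alpha in [-2, 2].
k = 1
alphas = np.array([-2, -1, 0, 1, 2])
mode_k = x_bar + np.outer(alphas, np.sqrt(lam_hat[k - 1]) * e_hat[:, k - 1])
```

Each row of `mode_k` is one member of the estimated mode, with the middle row ([math]\displaystyle{ \alpha=0 }[/math]) equal to the sample mean.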
Consider [math]\displaystyle{ n }[/math] realizations [math]\displaystyle{ X_1(t), X_2(t), \cdots, X_n(t) }[/math] of a square-integrable random function [math]\displaystyle{ X(t), t \in \mathcal{T} }[/math] with the mean function [math]\displaystyle{ \mu(t) = \operatorname{E}(X(t)) }[/math] and the covariance function [math]\displaystyle{ G(s, t) = \operatorname{Cov}(X(s), X(t)) }[/math]. Functional principal component analysis provides detailed methods for the estimation of [math]\displaystyle{ \mu(t) }[/math] and [math]\displaystyle{ G(s, t) }[/math], often involving pointwise estimation and interpolation. Substituting estimates for the unknown quantities, the [math]\displaystyle{ k }[/math]-th mode of variation of [math]\displaystyle{ X(t) }[/math] can be estimated by
[math]\displaystyle{ \hat{m}_{k,\alpha}(t)=\hat{\mu}(t)+\alpha\sqrt{\hat{\lambda}_k}\hat{\varphi}_k(t),\ t\in\mathcal{T},\ \alpha\in[-A,A]. }[/math]
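For densely observed curves, the functional case can be approximated on a grid: the pointwise sample covariance, weighted by the grid spacing, serves as a discretization of the covariance operator, and its eigenvectors (rescaled to unit [math]\displaystyle{ L^2 }[/math] norm) approximate the eigenfunctions. The sketch below illustrates this under those assumptions; the simulated sine-shaped curves are purely illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]

# Toy curves: constant mean plus a random-amplitude sine and noise.
rng = np.random.default_rng(1)
n = 200
amp = rng.normal(size=(n, 1))
X = 1.0 + amp * np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=(n, t.size))

mu_hat = X.mean(axis=0)                        # pointwise mean estimate
G_hat = np.cov(X, rowvar=False)                # pointwise covariance estimate
eigvals, eigvecs = np.linalg.eigh(G_hat * dt)  # quadrature-weighted operator
order = np.argsort(eigvals)[::-1]
lam_hat = eigvals[order]
phi_hat = eigvecs[:, order] / np.sqrt(dt)      # unit L2 norm on the grid

# Estimated first mode of variation evaluated on the grid.
k, alphas = 1, [-2, -1, 0, 1, 2]
modes = [mu_hat + a * np.sqrt(lam_hat[k - 1]) * phi_hat[:, k - 1]
         for a in alphas]
```

Plotting `mu_hat` together with the curves in `modes` reproduces the kind of display described below: the mean function flanked by curves at [math]\displaystyle{ \alpha=\pm1, \pm2 }[/math].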
Modes of variation are useful for visualizing and describing the variation patterns in the data sorted by the eigenvalues. In real-world applications, modes of variation associated with eigencomponents help interpret complex data, such as the evolution of functional traits[5] and other infinite-dimensional data.[6] To illustrate how modes of variation work in practice, two examples are shown in the graphs to the right, which display the first two modes of variation. The solid curve represents the sample mean function. The dashed, dot-dashed, and dotted curves correspond to modes of variation with [math]\displaystyle{ \alpha=\pm1, \pm2, }[/math] and [math]\displaystyle{ \pm3 }[/math], respectively.
The first graph displays the first two modes of variation of female mortality data from 41 countries in 2003.[4] The object of interest is the log hazard function between ages 0 and 100 years. The first mode of variation suggests that the variation of female mortality is smaller for ages around 0 or 100, and larger for ages around 25. An appropriate and intuitive interpretation is that mortality around 25 is driven by accidental death, while around 0 or 100, mortality is related to congenital disease or natural death.
Compared to the female mortality data, the modes of variation of the male mortality data show higher mortality after around age 20, possibly related to the fact that life expectancy for women is higher than that for men.
Original source: https://en.wikipedia.org/wiki/Modes of variation.