In multilinear algebra, the higher-order singular value decomposition (HOSVD) of a tensor is a specific orthogonal Tucker decomposition. It may be regarded as one type of generalization of the matrix singular value decomposition. It has applications in computer vision, computer graphics, machine learning, scientific computing, and signal processing. Some aspects can be traced as far back as F. L. Hitchcock in 1928,[1] but it was L. R. Tucker who developed the general Tucker decomposition for third-order tensors in the 1960s.[2][3][4] It was further advocated by L. De Lathauwer et al.[5] in their Multilinear SVD work, which employs the power method, and by Vasilescu and Terzopoulos, who developed the M-mode SVD, a parallel algorithm that employs the matrix SVD.
The term higher-order singular value decomposition (HOSVD) was coined by De Lathauwer, but the algorithm commonly referred to in the literature as the HOSVD and attributed to either Tucker or De Lathauwer was developed by Vasilescu and Terzopoulos.[6][7][8] Robust and L1-norm-based variants of the HOSVD have also been proposed.[9][10][11][12]
For the purpose of this article, the abstract tensor [math]\displaystyle{ \mathcal{A} }[/math] is assumed to be given in coordinates with respect to some basis as an M-way array, also denoted by [math]\displaystyle{ \mathcal{A}\in\mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M} }[/math], where M is the number of modes and the order of the tensor. [math]\displaystyle{ \mathbb{C} }[/math] denotes the complex numbers, which include both the real numbers [math]\displaystyle{ \mathbb{R} }[/math] and the pure imaginary numbers.
Let [math]\displaystyle{ {\bf U}_m \in \mathbb{C}^{I_m \times I_m} }[/math] be a unitary matrix containing a basis of the left singular vectors of the standard mode-m flattening [math]\displaystyle{ \mathcal{A}_{[m]} }[/math] of [math]\displaystyle{ \mathcal{A} }[/math] such that the jth column [math]\displaystyle{ \mathbf{u}_j }[/math] of [math]\displaystyle{ {\bf U}_m }[/math] corresponds to the jth largest singular value of [math]\displaystyle{ \mathcal{A}_{[m]} }[/math]. Observe that the mode/factor matrix [math]\displaystyle{ {\bf U}_m }[/math] does not depend on the specific definition of the mode-m flattening. By the properties of multilinear multiplication, we have [math]\displaystyle{ \begin{array}{rcl} \mathcal{A} &=& \mathcal{A}\times ({\bf I}, {\bf I}, \ldots, {\bf I}) \\ &=& \mathcal{A} \times ({\bf U}_1 {\bf U}_1^H, {\bf U}_2 {\bf U}_2^H, \ldots, {\bf U}_M {\bf U}_M^H) \\ &=& \left(\mathcal{A} \times ({\bf U}_1^H, {\bf U}_2^H, \ldots, {\bf U}_M^H) \right) \times ({\bf U}_1, {\bf U}_2, \ldots, {\bf U}_M), \end{array} }[/math]where [math]\displaystyle{ \cdot^H }[/math] denotes the conjugate transpose. The second equality holds because the [math]\displaystyle{ {\bf U}_m }[/math]'s are unitary matrices. Now define the core tensor [math]\displaystyle{ \mathcal{S} := \mathcal{A} \times ({\bf U}_1^H, {\bf U}_2^H, \ldots, {\bf U}_M^H). }[/math] Then the HOSVD[5] of [math]\displaystyle{ \mathcal{A} }[/math] is the decomposition [math]\displaystyle{ \mathcal{A} = \mathcal{S}\times ({\bf U}_1, {\bf U}_2, \ldots, {\bf U}_M). }[/math] The above construction shows that every tensor has a HOSVD.
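As an illustration of the construction above, the following is a minimal NumPy sketch, not a reference implementation; the helper names mode_flatten, mode_multiply and hosvd are chosen here for exposition only and do not come from any particular library, and the specific column ordering of the flattening is one admissible choice.

<syntaxhighlight lang="python">
import numpy as np

def mode_flatten(A, m):
    """Mode-m flattening: an I_m x (product of the remaining dimensions) matrix."""
    return np.moveaxis(A, m, 0).reshape(A.shape[m], -1)

def mode_multiply(A, U, m):
    """Multilinear (mode-m) product A x_m U."""
    B = np.tensordot(U, A, axes=([1], [m]))  # contracted mode ends up in front
    return np.moveaxis(B, 0, m)

def hosvd(A):
    """Full HOSVD: core tensor S and factor matrices U_1, ..., U_M."""
    factors = []
    for m in range(A.ndim):
        # Left singular vectors of the mode-m flattening, ordered by singular value.
        U, _, _ = np.linalg.svd(mode_flatten(A, m), full_matrices=True)
        factors.append(U)
    S = A
    for m, U in enumerate(factors):
        S = mode_multiply(S, U.conj().T, m)  # S = A x (U_1^H, ..., U_M^H)
    return S, factors

# The decomposition reconstructs A exactly: A = S x (U_1, ..., U_M).
A = np.random.rand(4, 5, 3)
S, factors = hosvd(A)
A_rec = S
for m, U in enumerate(factors):
    A_rec = mode_multiply(A_rec, U, m)
assert np.allclose(A, A_rec)
</syntaxhighlight>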
As in the case of the compact singular value decomposition of a matrix, it is also possible to consider a compact HOSVD, which is very useful in applications.
Assume that [math]\displaystyle{ {\bf U}_m \in \mathbb{C}^{I_m \times R_m} }[/math] is a matrix with orthonormal columns containing a basis of the left singular vectors corresponding to the nonzero singular values of the standard mode-m flattening [math]\displaystyle{ \mathcal{A}_{[m]} }[/math] of [math]\displaystyle{ \mathcal{A} }[/math]. Let the columns of [math]\displaystyle{ {\bf U}_m }[/math] be sorted such that the [math]\displaystyle{ r_m }[/math]th column [math]\displaystyle{ {\bf u}_{r_m} }[/math] of [math]\displaystyle{ {\bf U}_m }[/math] corresponds to the [math]\displaystyle{ r_m }[/math]th largest nonzero singular value of [math]\displaystyle{ \mathcal{A}_{[m]} }[/math]. Since the columns of [math]\displaystyle{ {\bf U}_m }[/math] form a basis for the image of [math]\displaystyle{ \mathcal{A}_{[m]} }[/math], we have [math]\displaystyle{ \mathcal{A}_{[m]} = {\bf U}_m {\bf U}_m^H \mathcal{A}_{[m]} = \bigl( \mathcal{A} \times_m ({\bf U}_m {\bf U}_m^H) \bigr)_{[m]}, }[/math]where the first equality is due to the properties of orthogonal projections (in the Hermitian inner product) and the last equality is due to the properties of multilinear multiplication. As flattenings are bijective maps and the above formula is valid for all [math]\displaystyle{ m=1,2,\ldots,M }[/math], we find as before that [math]\displaystyle{ \begin{array}{rcl} \mathcal{A} &=& \mathcal{A} \times ({\bf U}_1 {\bf U}_1^H, {\bf U}_2 {\bf U}_2^H, \ldots, {\bf U}_M {\bf U}_M^H)\\ &=& \left(\mathcal{A} \times ({\bf U}_1^H, {\bf U}_2^H, \ldots, {\bf U}_M^H)\right) \times ({\bf U}_1, {\bf U}_2, \ldots, {\bf U}_M) \\ &=& \mathcal{S} \times ({\bf U}_1, {\bf U}_2, \ldots, {\bf U}_M), \end{array} }[/math]where the core tensor [math]\displaystyle{ \mathcal{S} }[/math] is now of size [math]\displaystyle{ R_1 \times R_2 \times \cdots \times R_M }[/math].
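Under the same assumed helpers as in the sketch above (mode_flatten, mode_multiply), a compact HOSVD keeps only the left singular vectors associated with numerically nonzero singular values, so that [math]\displaystyle{ {\bf U}_m }[/math] is [math]\displaystyle{ I_m \times R_m }[/math] and the core tensor is [math]\displaystyle{ R_1 \times \cdots \times R_M }[/math]; the tolerance tol is an assumed numerical threshold.

<syntaxhighlight lang="python">
def compact_hosvd(A, tol=1e-12):
    """Compact HOSVD: U_m has R_m columns and the core tensor is R_1 x ... x R_M."""
    factors = []
    for m in range(A.ndim):
        U, s, _ = np.linalg.svd(mode_flatten(A, m), full_matrices=False)
        R_m = int(np.sum(s > tol * s[0]))  # numerical rank of the mode-m flattening
        factors.append(U[:, :R_m])
    S = A
    for m, U in enumerate(factors):
        S = mode_multiply(S, U.conj().T, m)
    return S, factors
</syntaxhighlight>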
The multilinear rank[1] of [math]\displaystyle{ \mathcal{A} }[/math] is denoted by rank-[math]\displaystyle{ (R_1, R_2, \ldots, R_M) }[/math]. The multilinear rank is a tuple in [math]\displaystyle{ \mathbb{N}^M }[/math] where [math]\displaystyle{ R_m := \mathrm{rank}( \mathcal{A}_{[m]} ) }[/math]. Not all tuples in [math]\displaystyle{ \mathbb{N}^M }[/math] are multilinear ranks.[13] The multilinear ranks are bounded by [math]\displaystyle{ 1 \le R_m \le I_m }[/math] and satisfy the constraint [math]\displaystyle{ R_m \le \prod_{i \ne m} R_i }[/math].[13]
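Numerically, the multilinear rank can be read off from the ranks of the flattenings; a small sketch reusing the mode_flatten helper assumed above:

<syntaxhighlight lang="python">
def multilinear_rank(A):
    """Tuple (R_1, ..., R_M) of matrix ranks of the mode-m flattenings."""
    return tuple(int(np.linalg.matrix_rank(mode_flatten(A, m))) for m in range(A.ndim))
</syntaxhighlight>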
The compact HOSVD is a rank-revealing decomposition in the sense that the dimensions of its core tensor correspond with the components of the multilinear rank of the tensor.
The following geometric interpretation is valid for both the full and compact HOSVD. Let [math]\displaystyle{ (R_1, R_2, \ldots, R_M) }[/math] be the multilinear rank of the tensor [math]\displaystyle{ \mathcal{A} }[/math]. Since [math]\displaystyle{ \mathcal{S} \in {\mathbb C}^{R_1 \times R_2 \times \cdots \times R_M} }[/math] is a multidimensional array, we can expand it as follows: [math]\displaystyle{ \mathcal{S} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_M=1}^{R_M} s_{r_1,r_2,\ldots,r_M} \mathbf{e}_{r_1} \otimes \mathbf{e}_{r_2} \otimes \cdots \otimes \mathbf{e}_{r_M}, }[/math]where [math]\displaystyle{ \mathbf{e}_{r_m} }[/math] is the [math]\displaystyle{ r_m }[/math]th standard basis vector of [math]\displaystyle{ {\mathbb C}^{R_m} }[/math]. By definition of the multilinear multiplication, it holds that [math]\displaystyle{ \mathcal{A} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_M=1}^{R_M} s_{r_1,r_2,\ldots,r_M} \mathbf{u}_{r_1} \otimes \mathbf{u}_{r_2} \otimes \cdots \otimes \mathbf{u}_{r_M}, }[/math]where the [math]\displaystyle{ \mathbf{u}_{r_m} }[/math] are the columns of [math]\displaystyle{ {\bf U}_m \in {\mathbb C}^{I_m \times R_m} }[/math]. It is easy to verify that [math]\displaystyle{ B = \{ \mathbf{u}_{r_1} \otimes \mathbf{u}_{r_2} \otimes \cdots \otimes \mathbf{u}_{r_M} \}_{r_1,r_2,\ldots,r_M} }[/math] is an orthonormal set of tensors. This means that the HOSVD can be interpreted as a way to express the tensor [math]\displaystyle{ \mathcal{A} }[/math] with respect to a specifically chosen orthonormal basis [math]\displaystyle{ B }[/math], with the coefficients given by the multidimensional array [math]\displaystyle{ \mathcal{S} }[/math].
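A quick numerical check of this orthonormality, reusing the compact_hosvd sketch above (the Frobenius inner product of two such rank-one tensors factors into a product of the inner products of the corresponding factor columns):

<syntaxhighlight lang="python">
S, factors = compact_hosvd(np.random.rand(4, 5, 3))
U1, U2, U3 = factors
# Two elements of the basis B, built as outer products of factor columns.
t_a = np.einsum('i,j,k->ijk', U1[:, 0], U2[:, 0], U3[:, 0])
t_b = np.einsum('i,j,k->ijk', U1[:, 1], U2[:, 0], U3[:, 0])
assert np.isclose(np.vdot(t_a, t_a), 1.0)  # unit Frobenius norm
assert np.isclose(np.vdot(t_a, t_b), 0.0)  # mutually orthogonal
</syntaxhighlight>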
Let [math]\displaystyle{ \mathcal{A} \in {\mathbb C}^{I_1 \times I_2 \times \cdots \times I_M} }[/math] be a tensor with multilinear rank-[math]\displaystyle{ (R_1, R_2, \ldots, R_M) }[/math], where [math]\displaystyle{ \mathbb C }[/math] contains the real numbers [math]\displaystyle{ \mathbb{R} }[/math] as a subset.
The strategy for computing the Multilinear SVD and the M-mode SVD was introduced in the 1960s by L. R. Tucker,[3] further advocated by L. De Lathauwer et al.,[5] and by Vasilescu and Terzopoulos.[8][6] The term HOSVD was coined by Lieven De Lathauwer, but the algorithm typically referred to in the literature as HOSVD was introduced by Vasilescu and Terzopoulos[6][8] under the name M-mode SVD. It is a parallel computation that employs the matrix SVD to compute the orthonormal mode matrices.
A strategy that is significantly faster when some or all [math]\displaystyle{ R_m \ll I_m }[/math] consists of interlacing the computation of the core tensor and the factor matrices: after each factor matrix [math]\displaystyle{ {\bf U}_m }[/math] is computed from the flattening of the current, partially compressed tensor, the tensor is immediately compressed in mode m by multiplying with [math]\displaystyle{ {\bf U}_m^H }[/math] before the next mode is processed, so that later SVDs act on smaller matrices.[14][15][16] A sketch of this strategy is given below.
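The following is a minimal sketch of this interlaced strategy, reusing the helpers assumed earlier (mode_flatten, mode_multiply); the optional ranks argument anticipates the truncated variants discussed below and is an illustrative addition rather than part of the algorithm as published.

<syntaxhighlight lang="python">
def interlaced_hosvd(A, ranks=None, tol=1e-12):
    """Interlaced computation: each mode is compressed immediately after its SVD,
    so later SVDs operate on smaller flattenings."""
    S = A
    factors = []
    for m in range(A.ndim):
        U, s, _ = np.linalg.svd(mode_flatten(S, m), full_matrices=False)
        R_m = ranks[m] if ranks is not None else int(np.sum(s > tol * s[0]))
        U = U[:, :R_m]
        factors.append(U)
        S = mode_multiply(S, U.conj().T, m)  # compress mode m before moving on
    return S, factors
</syntaxhighlight>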
The HOSVD can be computed in-place via the Fused In-place Sequentially Truncated Higher Order Singular Value Decomposition (FIST-HOSVD)[16] algorithm, which overwrites the original tensor with the HOSVD core tensor, significantly reducing the memory required to compute the HOSVD.
In applications, such as those mentioned below, a common problem consists of approximating a given tensor [math]\displaystyle{ \mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M} }[/math] by one with a reduced multilinear rank. Formally, if the multilinear rank of [math]\displaystyle{ \mathcal{A} }[/math] is denoted by [math]\displaystyle{ \mathrm{rank-}(R_1,R_2,\ldots,R_M) }[/math], then computing the optimal [math]\displaystyle{ \mathcal{\bar A} }[/math] that approximates [math]\displaystyle{ \mathcal{A} }[/math] for a given reduced [math]\displaystyle{ \mathrm{rank-}(\bar R_1,\bar R_2,\ldots,\bar R_M) }[/math] is a nonlinear, non-convex [math]\displaystyle{ \ell_2 }[/math]-optimization problem [math]\displaystyle{ \min_{\mathcal{\bar A}\in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}} \frac{1}{2} \| \mathcal{A} - \mathcal{\bar A} \|_F^2 \quad\text{s.t.}\quad \mathrm{rank}(\mathcal{\bar A}_{[m]}) \le \bar R_m, \; m = 1, 2, \ldots, M, }[/math]where [math]\displaystyle{ (\bar R_1, \bar R_2, \ldots, \bar R_M) \in \mathbb{N}^M }[/math] is the reduced multilinear rank with [math]\displaystyle{ 1 \le \bar R_m \lt R_m \le I_m }[/math], and the norm [math]\displaystyle{ \|\cdot\|_F }[/math] is the Frobenius norm.
A simple idea for trying to solve this optimization problem is to truncate the (compact) SVD in step 2 of either the classic or the interlaced computation. A classically truncated HOSVD is obtained by replacing step 2 in the classic computation by a rank-[math]\displaystyle{ \bar R_m }[/math] truncated SVD of the mode-m flattening [math]\displaystyle{ \mathcal{A}_{[m]} }[/math], retaining only the [math]\displaystyle{ \bar R_m }[/math] dominant left singular vectors,
while a sequentially truncated HOSVD (or successively truncated HOSVD) is obtained by replacing step 2 in the interlaced computation by a rank-[math]\displaystyle{ \bar R_m }[/math] truncated SVD of the flattening of the partially compressed tensor.
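As an illustration of the truncated variants, the interlaced_hosvd sketch above can be called with a prescribed reduced multilinear rank (the ranks below are arbitrary example values); the result is a cheap approximation, generally not the optimizer of the non-convex problem stated earlier.

<syntaxhighlight lang="python">
A = np.random.rand(10, 12, 8)
S, factors = interlaced_hosvd(A, ranks=(4, 4, 4))  # sequentially truncated HOSVD
A_approx = S
for m, U in enumerate(factors):
    A_approx = mode_multiply(A_approx, U, m)
print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))  # relative Frobenius error
</syntaxhighlight>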
The HOSVD is most commonly applied to the extraction of relevant information from multi-way arrays.
Starting in the early 2000s, Vasilescu addressed causal questions by reframing the data analysis, recognition and synthesis problems as multilinear tensor problems. The power of the tensor framework was showcased by decomposing and representing an image in terms of its causal factors of data formation, in the context of Human Motion Signatures for gait recognition,[18] face recognition—TensorFaces[19][20] and computer graphics—TensorTextures.[21]
The HOSVD has been successfully applied to signal processing and big data, e.g., in genomic signal processing.[22][23][24] These applications also inspired a higher-order GSVD (HO GSVD)[25] and a tensor GSVD.[26]
A combination of HOSVD and SVD also has been applied for real-time event detection from complex data streams (multivariate data with space and time dimensions) in disease surveillance.[27]
It is also used in tensor product model transformation-based controller design.[28][29]
The concept of HOSVD was carried over to functions by Baranyi and Yam via the TP model transformation.[28][29] This extension led to the definition of the HOSVD-based canonical form of tensor product functions and Linear Parameter Varying system models[30] and to convex hull manipulation based control optimization theory, see TP model transformation in control theories.
HOSVD was proposed to be applied to multi-view data analysis[31] and was successfully applied to in silico drug discovery from gene expression.[32]
L1-Tucker is the L1-norm-based, robust variant of Tucker decomposition.[10][11] L1-HOSVD is the analogue of HOSVD for solving L1-Tucker.[10][12]