In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix into a rotation, followed by a rescaling, followed by another rotation. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any m × n matrix. It is related to the polar decomposition.
Specifically, the singular value decomposition of an m × n complex matrix M is a factorization of the form M = UΣV*, where U is an m × m complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, V is an n × n complex unitary matrix, and V* is the conjugate transpose of V. Such a decomposition always exists for any complex matrix. If M is real, then U and V can be guaranteed to be real orthogonal matrices; in such contexts, the SVD is often denoted UΣVᵀ.
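The factorization above can be checked numerically. A minimal sketch, assuming NumPy is available, computing the full SVD of a real 3 × 2 matrix and verifying both the reconstruction M = UΣVᵀ and the orthogonality of U and V:

```python
import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 2.0]])

# Full SVD: U is 3x3, s holds the singular values, Vh is V^T (2x2).
U, s, Vh = np.linalg.svd(M, full_matrices=True)

# Build the 3x2 rectangular diagonal matrix Sigma from the singular values.
Sigma = np.zeros(M.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

assert np.allclose(U @ Sigma @ Vh, M)      # factorization reproduces M
assert np.allclose(U.T @ U, np.eye(3))     # U is orthogonal
assert np.allclose(Vh @ Vh.T, np.eye(2))   # V is orthogonal
```

Because M here is real, `np.linalg.svd` returns real orthogonal factors, matching the UΣVᵀ form.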
The diagonal entries σᵢ = Σᵢᵢ of Σ are uniquely determined by M and are known as the singular values of M. The number of non-zero singular values is equal to the rank of M. The columns of U and the columns of V are called left-singular vectors and right-singular vectors of M, respectively. They form two sets of orthonormal bases u₁, …, uₘ and v₁, …, vₙ, and if they are sorted so that the singular values σᵢ with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as
M = ∑ᵢ₌₁ʳ σᵢ uᵢ vᵢ*,
where r ≤ min{m, n} is the rank of M.
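The rank-one sum form above can be sketched directly (assuming NumPy): M is recovered as the sum of σᵢ · uᵢvᵢ* over only the r non-zero singular values.

```python
import numpy as np

M = np.array([[2.0, 4.0],
              [1.0, 2.0]])  # rank 1: the second row is half the first

U, s, Vh = np.linalg.svd(M)
r = int(np.sum(s > 1e-12))  # numerical rank = number of non-zero singular values
assert r == 1

# Sum of r rank-one terms sigma_i * outer(u_i, v_i) reconstructs M exactly.
reconstruction = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(r))
assert np.allclose(reconstruction, M)
```

Truncating this sum after the k largest singular values (k < r) yields the best rank-k approximation of M in the spectral and Frobenius norms.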
The SVD is not unique. However, it is always possible to choose the decomposition such that the singular values Σᵢᵢ are in descending order. In this case, Σ (but not U and V) is uniquely determined by M.
The term sometimes refers to the compact SVD, a similar decomposition M = UΣV* in which Σ is square diagonal of size r × r, where r ≤ min{m, n} is the rank of M, and has only the non-zero singular values. In this variant, U is an m × r semi-unitary matrix and V is an n × r semi-unitary matrix, such that U*U = V*V = Iᵣ.
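A sketch of the compact (thin) variant, assuming NumPy: with `full_matrices=False`, `np.linalg.svd` returns semi-unitary factors of shapes m × k and k × n with k = min(m, n); when M has full rank r = k this is exactly the compact SVD.

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 3.0],
              [0.0, 0.0]])  # 4x2, rank 2

# Thin SVD: U is 4x2, s has 2 entries, Vh (= V*) is 2x2.
U, s, Vh = np.linalg.svd(M, full_matrices=False)
r = len(s)

assert np.allclose(U.conj().T @ U, np.eye(r))    # U* U = I_r (semi-unitary)
assert np.allclose(Vh @ Vh.conj().T, np.eye(r))  # V* V = I_r
assert np.allclose(U @ np.diag(s) @ Vh, M)       # square diagonal Sigma suffices
```

Note that U U* is not the identity here (U is 4 × 2), which is what distinguishes a semi-unitary factor from the unitary U of the full SVD.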
Mathematical applications of the SVD include computing the pseudoinverse, matrix approximation, and determining the rank, range, and null space of a matrix. The SVD is also extremely useful in many areas of science, engineering, and statistics, such as signal processing, least squares fitting of data, and process control.
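One of those applications, the Moore–Penrose pseudoinverse, follows directly from the decomposition: M⁺ = V Σ⁺ U*, where Σ⁺ inverts the non-zero singular values and leaves the zeros in place. A minimal sketch, assuming NumPy:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])  # 3x2, not square, so no ordinary inverse

U, s, Vh = np.linalg.svd(M, full_matrices=False)

# Invert only the non-zero singular values (zeros stay zero).
s_inv = np.where(s > 1e-12, 1.0 / s, 0.0)
M_pinv = Vh.conj().T @ np.diag(s_inv) @ U.conj().T  # M+ = V Sigma+ U*

# Cross-check against NumPy's built-in pseudoinverse.
assert np.allclose(M_pinv, np.linalg.pinv(M))
```

This is also how least-squares solutions are obtained: x = M⁺b minimizes ‖Mx − b‖ even when M is rank-deficient.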