# Compute the <math> N \times m </math> matrix <math> M W </math>
# Compute the [[Singular_value_decomposition#Thin_SVD|thin, or economy-sized, SVD]] <math> M W = \mathbf {U}{{\Sigma }}\mathbf {V}_h</math>
# Compute the matrices of the Ritz left <math>U = \mathbf {U}</math> and right <math>V_h = \mathbf {V}_h W^*</math> singular vectors
# Output approximations <math>U, \Sigma, V_h</math> to the singular values and the corresponding left and right singular vectors of <math>M</math>
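A minimal NumPy sketch of these three steps (the function name <code>ritz_svd</code> and the variable names are illustrative, not part of any standard library):
<syntaxhighlight lang="python">
import numpy as np

def ritz_svd(M, W):
    """Approximate singular triplets of M restricted to the column space of W.

    M is N-by-n, W is n-by-m with orthonormal columns, m <= n.
    """
    MW = M @ W                                                   # step 1: the N-by-m product
    U, sigma, Vh_small = np.linalg.svd(MW, full_matrices=False)  # step 2: thin SVD of MW
    Vh = Vh_small @ W.conj().T                                   # step 3: rows are Ritz right singular vectors
    return U, sigma, Vh                                          # output the approximations
</syntaxhighlight>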
==== Example ====
The matrix
: <math>M = \begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 2 & 0 & 0\\
0 & 0 & 3 & 0\\
0 & 0 & 0 & 4\\
0 & 0 & 0 & 0
\end{bmatrix}</math>
has the normal matrix
: <math>A = M^* M = \begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 4 & 0 & 0\\
0 & 0 & 9 & 0\\
0 & 0 & 0 & 16\\
\end{bmatrix}</math>,
singular values <math>1, 2, 3, 4</math>, and the corresponding [[Singular_value_decomposition#Thin_SVD|thin SVD]]
:<math>M =
\begin{bmatrix}
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
4 & 0 & 0 & 0\\
0 & 3 & 0 & 0\\
0 & 0 & 2 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0
\end{bmatrix}</math>
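As a quick numerical check of this data (a sketch, not from the article; NumPy and the variable names are assumed), one can verify that the displayed factors reconstruct <math>M</math> and that the eigenvalues of the normal matrix <math>A = M^* M</math> are the squared singular values of <math>M</math>:
<syntaxhighlight lang="python">
import numpy as np

M = np.vstack([np.diag([1., 2., 3., 4.]), np.zeros((1, 4))])  # the 5-by-4 example matrix
A = M.conj().T @ M                                            # normal matrix diag(1, 4, 9, 16)
U, s, Vh = np.linalg.svd(M, full_matrices=False)              # thin SVD of M
print(s)                                                      # [4. 3. 2. 1.]
print(np.allclose(U * s @ Vh, M))                             # True: the factors reconstruct M
print(np.allclose(np.linalg.eigvalsh(A), np.sort(s) ** 2))    # True: eigenvalues of A are the squares
</syntaxhighlight>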
Let us take
:<math>W = \begin{bmatrix}
\sqrt{2}/2 & \sqrt{2}/2\\
\sqrt{2}/2 & -\sqrt{2}/2\\
0 & 0\\
0 & 0
\end{bmatrix}.</math>
Following step 1 of the algorithm, we compute
:<math>MW = \begin{bmatrix}
\sqrt{2}/2 & \sqrt{2}/2\\
\sqrt{2} & -\sqrt{2}\\
0 & 0\\
0 & 0\\
0 & 0
\end{bmatrix},</math>
and on step 2 its [[Singular_value_decomposition#Thin_SVD|thin SVD]] <math> M W = \mathbf {U}{{\Sigma }}\mathbf {V}_h</math>
with
:<math> \mathbf {U} = \begin{bmatrix}
0 & -1\\
1 & 0\\
0 & 0\\
0 & 0\\
0 & 0
\end{bmatrix},
\quad
\Sigma = \begin{bmatrix}
2 & 0\\
0 & 1
\end{bmatrix},
\quad
\mathbf {V}_h = \begin{bmatrix}
\sqrt{2}/2 & -\sqrt{2}/2\\
-\sqrt{2}/2 & -\sqrt{2}/2
\end{bmatrix}.
</math>
Thus we already obtain the singular values 2 and 1 of <math>M</math> from <math>\Sigma</math> and, from <math>\mathbf {U}</math>, the corresponding left singular vectors <math>[0, 1, 0, 0, 0]^*</math> and <math>[-1, 0, 0, 0, 0]^*</math>, which span the column space of the matrix <math>M W</math>; the approximations are exact here because the column space of <math>W</math> contains the corresponding exact right singular vectors of <math>M</math>.
Finally, step 3 computes the matrix <math>V_h = \mathbf {V}_h W^*</math>
: <math>
V_h = \begin{bmatrix}
\sqrt{2}/2 & -\sqrt{2}/2\\
-\sqrt{2}/2 & -\sqrt{2}/2
\end{bmatrix}
\,
\begin{bmatrix}
\sqrt{2}/2 & \sqrt{2}/2 & 0 & 0\\
\sqrt{2}/2 & -\sqrt{2}/2 & 0 & 0
\end{bmatrix} =
\begin{bmatrix}
0 & 1 & 0 & 0\\
-1 & 0 & 0 & 0
\end{bmatrix}
</math>
recovering from its rows the right singular vectors <math>[0, 1, 0, 0]^*</math> and <math>[-1, 0, 0, 0]^*</math> of <math>M</math>.
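The worked example above can be reproduced numerically with a short NumPy sketch (the variable names are illustrative); up to the usual sign ambiguity of the SVD, it returns the same Ritz values and vectors:
<syntaxhighlight lang="python">
import numpy as np

M = np.vstack([np.diag([1., 2., 3., 4.]), np.zeros((1, 4))])
W = np.array([[ np.sqrt(2) / 2,  np.sqrt(2) / 2],
              [ np.sqrt(2) / 2, -np.sqrt(2) / 2],
              [ 0.0,             0.0           ],
              [ 0.0,             0.0           ]])
MW = M @ W                                             # step 1
U, sigma, Vh = np.linalg.svd(MW, full_matrices=False)  # step 2: thin SVD of MW
V_ritz = Vh @ W.conj().T                               # step 3: rows approximate right singular vectors
print(sigma)                                           # [2. 1.]
print(np.allclose(M @ V_ritz.conj().T, U * sigma))     # True: the Ritz triplets are exact here
</syntaxhighlight>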
== Derivation from calculus of variations ==