If the above matrix has full column rank, the [[Moore–Penrose inverse]]s of it and of its transpose are
:<math>\begin{align}
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^+ &=
\left(
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}
\right)^{-1} \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}, \\
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}^+ &=
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix} \left(
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}
\right)^{-1}.
\end{align}</math>
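These formulas can be checked numerically. The following NumPy sketch uses arbitrary random test blocks (illustrative data, not part of the derivation) and compares against the library pseudoinverse:

```python
# Numerical check of the full-column-rank pseudoinverse formulas
# (random test matrices; full column rank holds almost surely).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))   # m x n block
B = rng.standard_normal((8, 2))   # m x p block
M = np.hstack([A, B])             # [A B], assumed full column rank

# Left pseudoinverse: (M^T M)^{-1} M^T
M_pinv = np.linalg.inv(M.T @ M) @ M.T
assert np.allclose(M_pinv, np.linalg.pinv(M))

# Pseudoinverse of the transpose: M (M^T M)^{-1}
Mt_pinv = M @ np.linalg.inv(M.T @ M)
assert np.allclose(Mt_pinv, np.linalg.pinv(M.T))
```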
To reduce the computational cost to inversions of ''n''- and ''p''-square matrices, and to introduce parallelism, one can treat the blocks separately and derive <ref name=Baksalary>{{cite journal|author=[[Jerzy Baksalary|J.K. Baksalary]] and O.M. Baksalary|title=Particular formulae for the Moore–Penrose inverse of a columnwise partitioned matrix|journal=Linear Algebra Appl.|volume=421|date=2007|pages=16–23|doi=10.1016/j.laa.2006.03.031}}</ref>
:<math>\begin{align}
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^+ &=
\begin{bmatrix}
\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1} \mathbf A^\textsf{T} \mathbf P_B^\perp \\
\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1} \mathbf B^\textsf{T} \mathbf P_A^\perp
\end{bmatrix} =
\begin{bmatrix}
\left(\mathbf P_B^\perp \mathbf A\right)^+ \\
\left(\mathbf P_A^\perp \mathbf B\right)^+
\end{bmatrix}, \\
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}^+ &=
\begin{bmatrix}
\mathbf P_B^\perp \mathbf A\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1},\quad
\mathbf P_A^\perp \mathbf B\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}
\end{bmatrix} =
\begin{bmatrix}
\left(\mathbf A^\textsf{T} \mathbf P_B^\perp\right)^+,\quad
\left(\mathbf B^\textsf{T} \mathbf P_A^\perp\right)^+
\end{bmatrix},
\end{align}</math>
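The blockwise formula for <math>\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^+</math> can be verified numerically; the following NumPy sketch uses random full-column-rank test blocks (illustrative data only):

```python
# Check that [A B]^+ stacks the pseudoinverses of the projected blocks
# (Baksalary formula), using random full-column-rank test matrices.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))
B = rng.standard_normal((10, 3))

def proj_perp(X):
    """Projector onto the orthogonal complement of the column space of X."""
    return np.eye(X.shape[0]) - X @ np.linalg.inv(X.T @ X) @ X.T

PAp, PBp = proj_perp(A), proj_perp(B)

top = np.linalg.inv(A.T @ PBp @ A) @ A.T @ PBp   # = (P_B^perp A)^+
bot = np.linalg.inv(B.T @ PAp @ B) @ B.T @ PAp   # = (P_A^perp B)^+
assert np.allclose(np.vstack([top, bot]), np.linalg.pinv(np.hstack([A, B])))
```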
where the [[orthogonal projection]] matrices are defined by
::<math>\begin{align}
\mathbf P_A^\perp &= \mathbf I - \mathbf A \left(\mathbf A^\textsf{T} \mathbf A\right)^{-1} \mathbf A^\textsf{T}, \\
\mathbf P_B^\perp &= \mathbf I - \mathbf B \left(\mathbf B^\textsf{T} \mathbf B\right)^{-1} \mathbf B^\textsf{T}.
\end{align}</math>
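These projectors are symmetric, idempotent, and annihilate the columns of the corresponding block, which is what the derivation above relies on. A minimal NumPy sketch (random test matrix, for illustration):

```python
# Defining properties of the orthogonal projector P_A^perp.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2))
P = np.eye(6) - A @ np.linalg.inv(A.T @ A) @ A.T  # P_A^perp

assert np.allclose(P, P.T)      # symmetric
assert np.allclose(P @ P, P)    # idempotent
assert np.allclose(P @ A, 0)    # annihilates the columns of A
```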
The above formulas are not necessarily valid if <math>\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}</math> does not have full column rank: for example, if <math>\mathbf A \neq 0</math>, then
:<math>
\begin{bmatrix}\mathbf A & \mathbf A\end{bmatrix}^+ =
\frac{1}{2}\begin{bmatrix}
\mathbf A^+ \\
\mathbf A^+
\end{bmatrix} \neq
\begin{bmatrix}
\left(\mathbf P_A^\perp \mathbf A\right)^+ \\
\left(\mathbf P_A^\perp \mathbf A\right)^+
\end{bmatrix} =
0.
</math>
:<math>
\mathbf x =
\begin{bmatrix}
\mathbf A & \mathbf B
\end{bmatrix}^+ \mathbf d =
\begin{bmatrix}
\left(\mathbf P_B^\perp \mathbf A\right)^+ \\
\left(\mathbf P_A^\perp \mathbf B\right)^+
\end{bmatrix}\mathbf d.
</math>
Therefore, we have a decomposed solution:
:<math>
\mathbf x_1 = \left(\mathbf P_B^\perp \mathbf A\right)^+ \mathbf d, \quad
\mathbf x_2 = \left(\mathbf P_A^\perp \mathbf B\right)^+ \mathbf d.
</math>
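The decomposed solution can be checked against a joint least-squares solve. The NumPy sketch below uses random overdetermined test data (illustrative, not from the article):

```python
# Solve A x1 + B x2 ~ d blockwise via the projected subproblems,
# then compare with the joint least-squares solution for [A B].
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((12, 4))
B = rng.standard_normal((12, 3))
d = rng.standard_normal(12)

def proj_perp(X):
    return np.eye(X.shape[0]) - X @ np.linalg.inv(X.T @ X) @ X.T

x1 = np.linalg.pinv(proj_perp(B) @ A) @ d   # (P_B^perp A)^+ d
x2 = np.linalg.pinv(proj_perp(A) @ B) @ d   # (P_A^perp B)^+ d

x_ref, *_ = np.linalg.lstsq(np.hstack([A, B]), d, rcond=None)
assert np.allclose(np.concatenate([x1, x2]), x_ref)
```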
:<math>
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}\mathbf x =
\begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix},
</math>
whose minimum-norm least-squares solution is
:<math>\mathbf x =
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}^+
\begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix}.
</math>
:<math>
\mathbf x = \begin{bmatrix}
\left(\mathbf A^\textsf{T}\mathbf P_B^\perp\right)^+,\quad
\left(\mathbf B^\textsf{T}\mathbf P_A^\perp\right)^+
\end{bmatrix} \begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix} =
\left(\mathbf A^\textsf{T}\mathbf P_B^\perp\right)^+ \mathbf e +
\left(\mathbf B^\textsf{T}\mathbf P_A^\perp\right)^+ \mathbf f.
</math>
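This decomposed minimum-norm solution can also be verified numerically; the NumPy sketch below uses random test data (illustrative only) and relies on the stacked system having full row rank:

```python
# Minimum-norm solution of [A^T; B^T] x = [e; f] assembled from
# the two projected pseudoinverses.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((12, 4))
B = rng.standard_normal((12, 3))
e = rng.standard_normal(4)
f = rng.standard_normal(3)

def proj_perp(X):
    return np.eye(X.shape[0]) - X @ np.linalg.inv(X.T @ X) @ X.T

x = (np.linalg.pinv(A.T @ proj_perp(B)) @ e
     + np.linalg.pinv(B.T @ proj_perp(A)) @ f)

x_ref = np.linalg.pinv(np.vstack([A.T, B.T])) @ np.concatenate([e, f])
assert np.allclose(x, x_ref)
```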
== Comments on matrix inversion ==
Instead of <math>\left(\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T} \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}\right)^{-1}</math>, we need to calculate, directly or indirectly,{{citation needed|date=December 2010}}{{original research?|date=December 2010}}
:<math>
\left(\mathbf A^\textsf{T} \mathbf A\right)^{-1},\quad
\left(\mathbf B^\textsf{T} \mathbf B\right)^{-1},\quad
\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1},\quad
\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}.
</math>
For a small, dense system, we can use the [[singular value decomposition]], [[QR decomposition]], or [[Cholesky decomposition]] to replace the explicit matrix inversions with numerical routines. For a large system, we may employ [[iterative methods]] such as Krylov subspace methods.
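As an illustration of replacing an explicit inverse with a factorization, the NumPy sketch below (random test data, not from the article) computes a least-squares solution through a Cholesky factorization of the Gram matrix instead of forming <math>\left(\mathbf A^\textsf{T}\mathbf A\right)^{-1}</math>:

```python
# Cholesky-based normal-equations solve: two triangular solves replace
# the explicit inverse in x = (A^T A)^{-1} A^T d.
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((100, 5))
d = rng.standard_normal(100)

L = np.linalg.cholesky(A.T @ A)     # A^T A = L L^T (SPD for full column rank)
y = np.linalg.solve(L, A.T @ d)     # forward solve
x = np.linalg.solve(L.T, y)         # backward solve

x_ref, *_ = np.linalg.lstsq(A, d, rcond=None)
assert np.allclose(x, x_ref)
```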
Considering [[parallel algorithms]], we can compute <math>\left(\mathbf A^\textsf{T} \mathbf A\right)^{-1}</math> and <math>\left(\mathbf B^\textsf{T} \mathbf B\right)^{-1}</math> in parallel; then <math>\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1}</math> and <math>\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}</math> can likewise be computed in parallel.
== See also ==