{{Short description|Formula for the pseudoinverse of a partitioned matrix}}
{{refimprove|date=December 2010}}
In [[mathematics]], a '''block matrix pseudoinverse''' is a formula for the [[pseudoinverse]] of a [[partitioned matrix]]. It is useful for decomposing or approximating many parameter-updating algorithms in [[signal processing]] that are based on the [[least squares]] method.

Consider the column-wise partitioned matrix:
:<math>
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}, \quad
\mathbf A \in \mathbb{R}^{m\times n}, \quad
\mathbf B \in \mathbb{R}^{m\times p}, \quad
m \geq n + p.
</math>
 
If the above matrix has full column rank, the [[Moore–Penrose inverse]]s of it and of its transpose are
:<math>\begin{align}
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^{+} &=
\left(
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}
\right)^{-1} \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}, \\
 
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}^{+} &=
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix} \left(
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}
\right)^{-1}.
\end{align}</math>
This computation of the pseudoinverse requires inverting an (''n''&nbsp;+&nbsp;''p'')-square matrix and does not take advantage of the block form.
 
To reduce computational costs to ''n''- and ''p''-square matrix inversions and to introduce parallelism, one can treat the blocks separately and derive<ref name=Baksalary>{{cite journal|author=[[Jerzy Baksalary|J.K. Baksalary]] and O.M. Baksalary|title=Particular formulae for the Moore–Penrose inverse of a columnwise partitioned matrix|journal=Linear Algebra Appl.|volume=421|date=2007|pages=16–23|doi=10.1016/j.laa.2006.03.031}}</ref>
:<math>\begin{align}
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^{+} &=
\begin{bmatrix}
\mathbf P_B^\perp \mathbf A\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1} \\
\mathbf P_A^\perp \mathbf B\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}
\end{bmatrix} =
\begin{bmatrix}
\left(\mathbf P_B^{\perp}\mathbf A\right)^{+} \\
\left(\mathbf P_A^{\perp}\mathbf B\right)^{+}
\end{bmatrix}, \\
 
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}^{+} &=
\begin{bmatrix}
\mathbf P_B^\perp \mathbf A\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1},\quad
\mathbf P_A^\perp \mathbf B\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}
\end{bmatrix} =
\begin{bmatrix}
\left(\mathbf A^\textsf{T} \mathbf P_B^{\perp}\right)^{+} &
\left(\mathbf B^\textsf{T} \mathbf P_A^{\perp}\right)^{+}
\end{bmatrix},
\end{align}</math>
where [[orthogonal projection]] matrices are defined by
:<math>\begin{align}
\mathbf P_A^\perp &= \mathbf I - \mathbf A \left(\mathbf A^\textsf{T} \mathbf A\right)^{-1} \mathbf A^\textsf{T}, \\
\mathbf P_B^\perp &= \mathbf I - \mathbf B \left(\mathbf B^\textsf{T} \mathbf B\right)^{-1} \mathbf B^\textsf{T}.
\end{align}</math>
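
The identity can be checked numerically. The following is a minimal NumPy sketch (the sizes <code>m, n, p</code> and the random test matrices are illustrative only):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 8, 3, 2                      # m >= n + p, so [A B] has full column rank
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))

# Orthogonal projectors onto the orthogonal complements of range(A), range(B).
P_A_perp = np.eye(m) - A @ np.linalg.solve(A.T @ A, A.T)
P_B_perp = np.eye(m) - B @ np.linalg.solve(B.T @ B, B.T)

# Pseudoinverse of the whole block matrix ...
lhs = np.linalg.pinv(np.hstack([A, B]))

# ... equals the stacked pseudoinverses of the projected blocks.
rhs = np.vstack([np.linalg.pinv(P_B_perp @ A),
                 np.linalg.pinv(P_A_perp @ B)])

print(np.allclose(lhs, rhs))           # True (up to rounding)
</syntaxhighlight>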
 
The above formulas are not necessarily valid if <math>\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}</math> does not have full column rank – for example, if <math>\mathbf A \neq 0</math>, then
:<math>
\begin{bmatrix}\mathbf A & \mathbf A\end{bmatrix}^{+} =
\frac{1}{2}\begin{bmatrix}
\mathbf A^{+} \\
\mathbf A^{+}
\end{bmatrix} \neq
\begin{bmatrix}
\left(\mathbf P_A^{\perp}\mathbf A\right)^{+} \\
\left(\mathbf P_A^{\perp}\mathbf A\right)^{+}
\end{bmatrix} =
0
</math>

== Application to least squares problems ==

=== Column-wise partitioning in over-determined least squares ===

Suppose a solution
:<math>
\mathbf x = \begin{bmatrix}
\mathbf x_1 \\
\mathbf x_2
\end{bmatrix}
</math>
solves the over-determined system
:<math>
\begin{bmatrix}
\mathbf A, & \mathbf B
\end{bmatrix}
\begin{bmatrix}
\mathbf x_1 \\
\mathbf x_2
\end{bmatrix} =
\mathbf d, \quad \mathbf d \in \mathbb{R}^{m\times 1}.
</math>

Using the block matrix pseudoinverse, we have
:<math>
\mathbf x =
\begin{bmatrix}
\mathbf A, & \mathbf B
\end{bmatrix}^{+}\,\mathbf d =
\begin{bmatrix}
\left(\mathbf P_B^{\perp} \mathbf A\right)^{+} \\
\left(\mathbf P_A^{\perp} \mathbf B\right)^{+}
\end{bmatrix}\mathbf d.
</math>
Therefore, we have a decomposed solution:
:<math>
\mathbf x_1 = \left(\mathbf P_B^{\perp} \mathbf A\right)^{+}\,\mathbf d,\quad
\mathbf x_2 = \left(\mathbf P_A^{\perp} \mathbf B\right)^{+}\,\mathbf d.
</math>
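
A minimal NumPy sketch (illustrative random data) checks the decomposed solution against a direct least-squares solve of the full system:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 10, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
d = rng.standard_normal(m)

P_A_perp = np.eye(m) - A @ np.linalg.solve(A.T @ A, A.T)
P_B_perp = np.eye(m) - B @ np.linalg.solve(B.T @ B, B.T)

# Each block of the least-squares solution, computed independently.
x1 = np.linalg.pinv(P_B_perp @ A) @ d
x2 = np.linalg.pinv(P_A_perp @ B) @ d

# Reference: least-squares solution of the full system.
x_ref, *_ = np.linalg.lstsq(np.hstack([A, B]), d, rcond=None)

print(np.allclose(np.concatenate([x1, x2]), x_ref))   # True
</syntaxhighlight>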
 
=== Row-wise partitioning in under-determined least squares ===

Suppose a solution <math>\mathbf x</math> solves an under-determined system:
:<math>
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}\mathbf x =
\begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix}, \quad
\mathbf e \in \mathbb{R}^{n\times 1}, \quad
\mathbf f \in \mathbb{R}^{p\times 1}.
</math>

The minimum-norm solution is given by
:<math>\mathbf x =
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}^{+}\,
\begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix}.
</math>

Using the block matrix pseudoinverse, we have
:<math>
\mathbf x = \begin{bmatrix}
\left(\mathbf A^\textsf{T}\mathbf P_B^{\perp}\right)^{+} &
\left(\mathbf B^\textsf{T}\mathbf P_A^{\perp}\right)^{+}
\end{bmatrix} \begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix} =
\left(\mathbf A^\textsf{T}\mathbf P_B^{\perp}\right)^{+}\,\mathbf e +
\left(\mathbf B^\textsf{T}\mathbf P_A^{\perp}\right)^{+}\,\mathbf f.
</math>
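
As before, a minimal NumPy sketch (illustrative random data) compares the block-wise minimum-norm solution with the pseudoinverse of the stacked matrix:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 10, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
e = rng.standard_normal(n)
f = rng.standard_normal(p)

P_A_perp = np.eye(m) - A @ np.linalg.solve(A.T @ A, A.T)
P_B_perp = np.eye(m) - B @ np.linalg.solve(B.T @ B, B.T)

# Minimum-norm solution assembled from the two blocks of data.
x = np.linalg.pinv(A.T @ P_B_perp) @ e + np.linalg.pinv(B.T @ P_A_perp) @ f

# Reference: pseudoinverse of the stacked under-determined system.
x_ref = np.linalg.pinv(np.vstack([A.T, B.T])) @ np.concatenate([e, f])

print(np.allclose(x, x_ref))   # True
</syntaxhighlight>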
 
== Comments on matrix inversion ==
 
Instead of <math>\left(\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T} \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}\right)^{-1}</math>, we need to calculate, directly or indirectly,{{citation needed|date=December 2010}}{{original research?|date=December 2010}}
 
:<math>
\left(\mathbf A^\textsf{T} \mathbf A\right)^{-1},\quad
\left(\mathbf B^\textsf{T} \mathbf B\right)^{-1},\quad
\left(\mathbf A^\textsf{T} \mathbf P_B^{\perp} \mathbf A\right)^{-1},\quad
\left(\mathbf B^\textsf{T} \mathbf P_A^{\perp} \mathbf B\right)^{-1}.
</math>
 
For a small, dense system, we can use [[singular value decomposition]], [[QR decomposition]], or [[Cholesky decomposition]] to replace the explicit matrix inversions with numerically stable routines, as in the sketch below. For a large system, we may employ [[iterative methods]] such as [[Krylov subspace]] methods.
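
For example, the following NumPy sketch computes <math>\mathbf x_1 = \left(\mathbf P_B^{\perp} \mathbf A\right)^{+} \mathbf d</math> using Cholesky factorizations of the Gram matrices, without forming any explicit inverse or the <math>m \times m</math> projector (sizes are illustrative; in practice a dedicated triangular solver such as <code>scipy.linalg.solve_triangular</code> would replace the generic solves):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 10, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
d = rng.standard_normal(m)

# P_B_perp @ A = A - B (B^T B)^{-1} (B^T A), applied through a
# Cholesky factorization of B^T B rather than an explicit inverse.
L = np.linalg.cholesky(B.T @ B)                  # B^T B = L L^T
PBperp_A = A - B @ np.linalg.solve(L.T, np.linalg.solve(L, B.T @ A))

# x1 = (P_B_perp A)^+ d via the normal equations, again with Cholesky.
G = PBperp_A.T @ PBperp_A                        # equals A^T P_B_perp A
Lg = np.linalg.cholesky(G)
x1 = np.linalg.solve(Lg.T, np.linalg.solve(Lg, PBperp_A.T @ d))

print(np.allclose(x1, np.linalg.pinv(PBperp_A) @ d))   # True
</syntaxhighlight>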
 
Considering [[parallel algorithms]], we can compute <math>\left(\mathbf A^\textsf{T} \mathbf A\right)^{-1}</math> and <math>\left(\mathbf B^\textsf{T} \mathbf B\right)^{-1}</math> in parallel, and then complete the computation of <math>\left(\mathbf A^\textsf{T} \mathbf P_B^{\perp} \mathbf A\right)^{-1}</math> and <math>\left(\mathbf B^\textsf{T} \mathbf P_A^{\perp} \mathbf B\right)^{-1}</math>, also in parallel.
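
A minimal sketch of this two-stage scheme with Python threads (sizes and the explicit <code>numpy.linalg.inv</code> calls are illustrative; NumPy's BLAS-backed routines release the GIL, so the independent inversions can genuinely overlap):

<syntaxhighlight lang="python">
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(4)
m, n, p = 400, 50, 40
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))

def gram_inv(M):
    # Inverse of the Gram matrix M^T M (explicit inverse for clarity only).
    return np.linalg.inv(M.T @ M)

with ThreadPoolExecutor(max_workers=2) as pool:
    # Stage 1: the two small Gram inverses are mutually independent.
    fa, fb = pool.submit(gram_inv, A), pool.submit(gram_inv, B)
    AtA_inv, BtB_inv = fa.result(), fb.result()

    # Stage 2: the two projected Gram inverses are also independent.
    P_A_perp = np.eye(m) - A @ AtA_inv @ A.T
    P_B_perp = np.eye(m) - B @ BtB_inv @ B.T
    f1 = pool.submit(np.linalg.inv, A.T @ P_B_perp @ A)
    f2 = pool.submit(np.linalg.inv, B.T @ P_A_perp @ B)
    inv_APA, inv_BPB = f1.result(), f2.result()
</syntaxhighlight>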
 
== See also ==
* {{section link|Invertible matrix|Blockwise inversion}}
 
== References ==
{{reflist}}

== External links ==
* [https://web.archive.org/web/20060414125709/http://www.csit.fsu.edu/~burkardt/papers/linear_glossary.html Linear Algebra Glossary] by [http://www.csit.fsu.edu/~burkardt/ John Burkardt]
* [http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf The Matrix Cookbook] by [http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=3274/ Kaare Brandt Petersen]
* [http://see.stanford.edu/materials/lsoeldsee263/08-min-norm.pdf Lecture 8: Least-norm solutions of undetermined equations] by [https://stanford.edu/~boyd/ Stephen P. Boyd]
 
{{Numerical linear algebra}}