{{Short description|Formula for the pseudoinverse of a partitioned matrix}}
{{refimprove|date=December 2010}}
In [[mathematics]], a '''block matrix pseudoinverse''' is a formula for the [[pseudoinverse]] of a [[partitioned matrix]]. This is useful for decomposing, or approximating, many parameter-updating algorithms in [[signal processing]] that are based on the [[least squares]] method.
Consider a column-wise partitioned matrix:
:<math>
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix},\
\mathbf A \in \reals^{m \times n},\
\mathbf B \in \reals^{m \times p},\
m \geq n + p.
</math>
If the above matrix has full column rank, the [[Moore–Penrose inverse]]s of it and of its transpose are
:<math>\begin{align}
  \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^+ &=
  \left(
    \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}
    \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}
  \right)^{-1}
  \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}, \\
  \begin{bmatrix}
    \mathbf A^\textsf{T} \\
    \mathbf B^\textsf{T}
  \end{bmatrix}^+ &=
  \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}
  \left(
    \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T}
    \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}
  \right)^{-1}.
\end{align}</math>
This computation of the pseudoinverse requires (''n'' + ''p'')-square matrix inversion and does not take advantage of the block form.
To reduce computational costs to ''n''- and ''p''-square matrix inversions and to introduce parallelism, treating the blocks separately, one derives <ref name=Baksalary>{{cite journal|author=[[Jerzy Baksalary|J.K. Baksalary]] and O.M. Baksalary|title=Particular formulae for the Moore–Penrose inverse of a columnwise partitioned matrix|journal=Linear Algebra Appl.|volume=421|date=2007|pages=16–23|doi=10.1016/j.laa.2006.03.031|doi-access=}}</ref>
:<math>\begin{align}
  \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^+ &=
  \begin{bmatrix}
    \left(\mathbf P_B^\perp \mathbf A\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1}\right)^\textsf{T} \\
    \left(\mathbf P_A^\perp \mathbf B\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}\right)^\textsf{T}
  \end{bmatrix} =
  \begin{bmatrix}
    \left(\mathbf P_B^\perp \mathbf A\right)^+ \\
    \left(\mathbf P_A^\perp \mathbf B\right)^+
  \end{bmatrix}, \\
  \begin{bmatrix}
    \mathbf A^\textsf{T} \\
    \mathbf B^\textsf{T}
  \end{bmatrix}^+ &=
  \begin{bmatrix}
    \mathbf P_B^\perp \mathbf A\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1} &
    \mathbf P_A^\perp \mathbf B\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}
  \end{bmatrix} =
  \begin{bmatrix}
    \left(\mathbf A^\textsf{T} \mathbf P_B^\perp\right)^+ &
    \left(\mathbf B^\textsf{T} \mathbf P_A^\perp\right)^+
  \end{bmatrix},
\end{align}</math>
where [[orthogonal projection]] matrices are defined by
::<math>\begin{align}
\mathbf P_A^\perp &= \mathbf I - \mathbf A \left(\mathbf A^\textsf{T} \mathbf A\right)^{-1} \mathbf A^\textsf{T}, \\
\mathbf P_B^\perp &= \mathbf I - \mathbf B \left(\mathbf B^\textsf{T} \mathbf B\right)^{-1} \mathbf B^\textsf{T}.
\end{align}</math>
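The block formula can be checked numerically against a direct pseudoinverse; the following NumPy sketch (the dimensions and random data are illustrative assumptions, not from the article) stacks the two block pseudoinverses:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 12, 4, 3          # m >= n + p, so [A B] generically has full column rank
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))

# Orthogonal projectors onto the complements of range(A) and range(B)
P_A_perp = np.eye(m) - A @ np.linalg.inv(A.T @ A) @ A.T
P_B_perp = np.eye(m) - B @ np.linalg.inv(B.T @ B) @ B.T

# Block formula: [A B]^+ = [ (P_B_perp A)^+ ; (P_A_perp B)^+ ] stacked row-wise
block_pinv = np.vstack([np.linalg.pinv(P_B_perp @ A),
                        np.linalg.pinv(P_A_perp @ B)])

direct_pinv = np.linalg.pinv(np.hstack([A, B]))
print(np.allclose(block_pinv, direct_pinv))  # True
```

The two n-by-m and p-by-m sub-pseudoinverses depend only on their own projected block, which is what permits computing them independently.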
The above formulas are not necessarily valid if <math>\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}</math> does not have full rank – for example, if <math>\mathbf A \neq 0</math>, then
:<math>
\begin{bmatrix}\mathbf A & \mathbf A\end{bmatrix}^+ =
\frac{1}{2}\begin{bmatrix}
  \mathbf A^+ \\
  \mathbf A^+
\end{bmatrix} \neq
\begin{bmatrix}
  \left(\mathbf P_A^\perp \mathbf A\right)^+ \\
  \left(\mathbf P_A^\perp \mathbf A\right)^+
\end{bmatrix} = 0.
</math>
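The rank-deficient failure mode can also be seen numerically; a small NumPy sketch (sizes assumed for illustration) shows that <math>\begin{bmatrix}\mathbf A & \mathbf A\end{bmatrix}^+</math> is nonzero while the block formula collapses, because <math>\mathbf P_A^\perp \mathbf A = 0</math>:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 8, 3
A = rng.standard_normal((m, n))

# [A A] is rank deficient: its columns repeat
AA = np.hstack([A, A])
half_pinv = 0.5 * np.vstack([np.linalg.pinv(A), np.linalg.pinv(A)])
print(np.allclose(np.linalg.pinv(AA), half_pinv))  # True

# The block formula degenerates: P_A_perp A = 0, and the pseudoinverse of 0 is 0
P_A_perp = np.eye(m) - A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P_A_perp @ A, 0))  # True
```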
=== Column-wise partitioning in over-determined least squares ===
Suppose a solution <math>\mathbf x = \begin{bmatrix}
\mathbf x_1 \\
\mathbf x_2
\end{bmatrix}</math> solves the over-determined system
:<math>
\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}
\begin{bmatrix}
\mathbf x_1 \\
\mathbf x_2
\end{bmatrix} =
\mathbf d,\quad
\mathbf d \in \reals^{m \times 1}.
</math>
Using the block matrix pseudoinverse, we have
:<math>\mathbf x =
\begin{bmatrix}
\mathbf A & \mathbf B
\end{bmatrix}^+\,\mathbf d =
\begin{bmatrix}
\left(\mathbf P_B^\perp \mathbf A\right)^+ \\
\left(\mathbf P_A^\perp \mathbf B\right)^+
\end{bmatrix}\mathbf d.
</math>
Therefore, we have a decomposed solution:
:<math>
\mathbf x_1 = \left(\mathbf P_B^\perp \mathbf A\right)^+\,\mathbf d, \quad
\mathbf x_2 = \left(\mathbf P_A^\perp \mathbf B\right)^+\,\mathbf d.
</math>
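The decomposition can be compared with an ordinary least-squares solve of the full system; a NumPy sketch (dimensions and data assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 10, 3, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
d = rng.standard_normal(m)

P_A_perp = np.eye(m) - A @ np.linalg.inv(A.T @ A) @ A.T
P_B_perp = np.eye(m) - B @ np.linalg.inv(B.T @ B) @ B.T

# Decomposed solution: x1 and x2 are computed independently from d
x1 = np.linalg.pinv(P_B_perp @ A) @ d
x2 = np.linalg.pinv(P_A_perp @ B) @ d

# Reference: least squares on the full partitioned system [A B] x = d
x_ref, *_ = np.linalg.lstsq(np.hstack([A, B]), d, rcond=None)
print(np.allclose(np.concatenate([x1, x2]), x_ref))  # True
```

Because each sub-solution involves only an n- or p-square inversion, the two lines computing `x1` and `x2` could run on separate processors.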
=== Row-wise partitioning in under-determined least squares ===
Suppose a solution <math>\mathbf x</math> solves the under-determined system
:<math>
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}\mathbf x =
\begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix},\
\mathbf e \in \reals^{n \times 1},\
\mathbf f \in \reals^{p \times 1}.
</math>
The minimum-norm solution is given by
:<math>\mathbf x =
\begin{bmatrix}
\mathbf A^\textsf{T} \\
\mathbf B^\textsf{T}
\end{bmatrix}^+
\begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix}.
</math>
Using the block matrix pseudoinverse, we have
:<math>
\mathbf x = \begin{bmatrix}
\left(\mathbf A^\textsf{T}\mathbf P_B^\perp\right)^+ &
\left(\mathbf B^\textsf{T}\mathbf P_A^\perp\right)^+
\end{bmatrix} \begin{bmatrix}
\mathbf e \\
\mathbf f
\end{bmatrix} =
\left(\mathbf A^\textsf{T}\mathbf P_B^\perp\right)^+ \mathbf e +
\left(\mathbf B^\textsf{T}\mathbf P_A^\perp\right)^+ \mathbf f.
</math>
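This minimum-norm decomposition can likewise be verified against the pseudoinverse of the stacked system; a NumPy sketch under assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 12, 4, 3          # the stacked matrix [A^T; B^T] is (n+p) x m
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
e = rng.standard_normal(n)
f = rng.standard_normal(p)

P_A_perp = np.eye(m) - A @ np.linalg.inv(A.T @ A) @ A.T
P_B_perp = np.eye(m) - B @ np.linalg.inv(B.T @ B) @ B.T

# Decomposed minimum-norm solution: independent contributions from e and f
x = np.linalg.pinv(A.T @ P_B_perp) @ e + np.linalg.pinv(B.T @ P_A_perp) @ f

# Reference: pseudoinverse of the stacked under-determined system
x_ref = np.linalg.pinv(np.vstack([A.T, B.T])) @ np.concatenate([e, f])
print(np.allclose(x, x_ref))  # True
```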
== Comments on matrix inversion ==
Instead of <math>\left(\begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}^\textsf{T} \begin{bmatrix}\mathbf A & \mathbf B\end{bmatrix}\right)^{-1}</math>, we need to calculate directly or indirectly{{citation needed|date=December 2010}}{{original research?|date=December 2010}}
:<math>
\left(\mathbf A^\textsf{T} \mathbf A\right)^{-1},\quad
\left(\mathbf B^\textsf{T} \mathbf B\right)^{-1},\quad
\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1},\quad
\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}.
</math>
In a dense and small system, we can use [[singular value decomposition]], [[QR decomposition]], or [[Cholesky decomposition]] to replace the matrix inversions with numerical routines. In a large system, we may employ [[iterative methods]] such as Krylov subspace methods.
Considering [[parallel algorithms]], we can compute <math>\left(\mathbf A^\textsf{T} \mathbf A\right)^{-1}</math> and <math>\left(\mathbf B^\textsf{T} \mathbf B\right)^{-1}</math> in parallel, and then compute <math>\left(\mathbf A^\textsf{T} \mathbf P_B^\perp \mathbf A\right)^{-1}</math> and <math>\left(\mathbf B^\textsf{T} \mathbf P_A^\perp \mathbf B\right)^{-1}</math> in parallel as well.
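As a sketch of replacing explicit inverses with numerical routines, as described above, the projectors can be applied through factorization-backed solves (LU via `np.linalg.solve` here; a Cholesky routine would serve equally, since the Gram matrices are symmetric positive definite). The helper name and data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 10, 3, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
d = rng.standard_normal(m)

# Apply P_M_perp v = v - M (M^T M)^{-1} M^T v without forming the inverse
def apply_proj_perp(M, v):
    return v - M @ np.linalg.solve(M.T @ M, M.T @ v)

# x1 = (A^T P_B_perp A)^{-1} A^T P_B_perp d, again via a solve, not an inverse
PBd = apply_proj_perp(B, d)   # P_B_perp d
PBA = apply_proj_perp(B, A)   # P_B_perp A (column-wise)
x1 = np.linalg.solve(A.T @ PBA, A.T @ PBd)

# Check against the explicit projector formula from the text
P_B_perp = np.eye(m) - B @ np.linalg.inv(B.T @ B) @ B.T
x1_ref = np.linalg.inv(A.T @ P_B_perp @ A) @ A.T @ P_B_perp @ d
print(np.allclose(x1, x1_ref))  # True
```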
== See also ==
*
== References ==
{{Reflist}}

== External links ==
* [https://web.archive.org/web/20060414125709/http://www.csit.fsu.edu/~burkardt/papers/linear_glossary.html Linear Algebra Glossary] by [http://www.csit.fsu.edu/~burkardt/ John Burkardt]
* [http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf The Matrix Cookbook] by [http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=3274/ Kaare Brandt Petersen]
* [http://see.stanford.edu/materials/lsoeldsee263/08-min-norm.pdf Lecture 8: Least-norm solutions of undetermined equations]
{{Numerical linear algebra}}