'''Direct linear transformation''' ('''DLT''') is an algorithm which solves a set of variables from a set of similarity relations:
: <math> \mathbf{x}_{k} \sim \mathbf{A} \, \mathbf{y}_{k} </math> for <math> \, k = 1, \ldots, N </math>
where <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> are known vectors, <math> \, \sim </math> denotes equality up to an unknown scalar multiplication, and <math> \mathbf{A} </math> is a matrix (or linear transformation) which contains the unknowns to be solved.
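For instance, <math> \begin{pmatrix} 2 \\ 4 \end{pmatrix} \sim \begin{pmatrix} 1 \\ 2 \end{pmatrix} </math>, since the left-hand side equals the right-hand side scaled by the non-zero factor 2.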
An ordinary linear equation
: <math> \mathbf{x}_{k} = \mathbf{A} \, \mathbf{y}_{k} </math> for <math> \, k = 1, \ldots, N </math>
can be solved, for example, by rewriting it as a matrix equation <math> \mathbf{X} = \mathbf{A} \, \mathbf{Y} </math> where the matrices <math> \mathbf{X} </math> and <math> \mathbf{Y} </math> contain the vectors <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> in their respective columns. If a unique solution exists, it is given by
: <math> \mathbf{A} = \mathbf{X} \, \mathbf{Y}^{T} \, (\mathbf{Y} \, \mathbf{Y}^{T})^{-1} .</math>
Solutions can also be described in the case that the equations are overdetermined or underdetermined.
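As an illustration of this standard case, the closed-form expression above could be evaluated numerically along the following lines. This is a minimal sketch, not part of the article: it assumes NumPy and exact data, and the variable names simply mirror the notation above.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A_true = rng.standard_normal((2, 3))      # the unknown 2 x 3 matrix
Y = rng.standard_normal((3, 10))          # the vectors y_k stacked as columns (q x N)
X = A_true @ Y                            # the vectors x_k = A y_k stacked as columns (p x N)

# A = X Y^T (Y Y^T)^{-1}, valid when Y Y^T is invertible (N >= q and Y has full row rank).
A_est = X @ Y.T @ np.linalg.inv(Y @ Y.T)
print(np.allclose(A_est, A_true))         # True
</syntaxhighlight>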
For the similarity relations, the unknown scalar multiplication must first be made explicit. With <math> \mathbf{x}_{k} \in \mathbb{R}^{2} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{3} </math>, write each relation as
: <math> \alpha_{k} \, \mathbf{x}_{k} = \mathbf{A} \, \mathbf{y}_{k} </math> for <math> \, k = 1, \ldots, N </math>
where <math> \, \alpha_{k} </math> is an unknown non-zero scalar, introduce the anti-symmetric matrix
: <math> \mathbf{H} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, </math>
and multiply both sides of the equation from the left by <math> \mathbf{x}_{k}^{T} \, \mathbf{H} </math>:
: <math> \alpha_{k} \, \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{x}_{k} = \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{A} \, \mathbf{y}_{k} </math> for <math> \, k = 1, \ldots, N. </math>
Since <math> \mathbf{H} </math> is anti-symmetric, <math> \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{x}_{k} = 0 </math> and the unknown scalars drop out:
: <math> 0 = \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{A} \, \mathbf{y}_{k} </math> for <math> \, k = 1, \ldots, N .</math>
In order to solve for <math> \mathbf{A} </math> from this set of equations, consider the elements of the vectors <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> and of the matrix <math> \mathbf{A} </math>:
: <math> \mathbf{x}_{k} = \begin{pmatrix} x_{1k} \\ x_{2k} \end{pmatrix} </math>, <math> \mathbf{y}_{k} = \begin{pmatrix} y_{1k} \\ y_{2k} \\ y_{3k} \end{pmatrix} </math>, <math> \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} </math>
and the above homogeneous equation becomes
: <math> 0 = a_{11} \, x_{2k} \, y_{1k} - a_{21} \, x_{1k} \, y_{1k} + a_{12} \, x_{2k} \, y_{2k} - a_{22} \, x_{1k} \, y_{2k} + a_{13} \, x_{2k} \, y_{3k} - a_{23} \, x_{1k} \, y_{3k} </math> for <math> \, k = 1, \ldots, N. </math>
This can also be written
: <math> 0 = \mathbf{b}_{k}^{T} \, \mathbf{a} </math> for <math> \, k = 1, \ldots, N </math>
where <math> \mathbf{b}_{k} </math> and <math> \mathbf{a} </math> both are 6-dimensional vectors defined as
: <math> \mathbf{b}_{k} = \begin{pmatrix} x_{2k} \, y_{1k} \\ -x_{1k} \, y_{1k} \\ x_{2k} \, y_{2k} \\ -x_{1k} \, y_{2k} \\ x_{2k} \, y_{3k} \\ -x_{1k} \, y_{3k} \end{pmatrix} </math> and <math> \mathbf{a} = \begin{pmatrix} a_{11} \\ a_{21} \\ a_{12} \\ a_{22} \\ a_{13} \\ a_{23} \end{pmatrix}. </math>
This set of homogeneous equations can also be written in matrix form
: <math> \mathbf{0} = \mathbf{B} \, \mathbf{a} </math>
where <math> \mathbf{B} </math> is an <math> N \times 6 </math> matrix which holds the vectors <math> \mathbf{b}_{k}^{T} </math> in its rows. This means that <math> \mathbf{a} </math> is a [[null vector]] of <math> \mathbf{B} </math> and it can be determined, for example, by a [[singular value decomposition]] of <math> \mathbf{B} </math>; <math> \mathbf{a} </math> is a right singular vector of <math> \mathbf{B} </math> corresponding to a singular value that equals zero. Once <math> \mathbf{a} </math> has been determined, the elements of <math> \mathbf{A} </math> can be found by a simple rearrangement from a 6-dimensional vector to a <math> 2 \times 3 </math> matrix. Notice that the scaling of <math> \mathbf{a} </math> or <math> \mathbf{A} </math> is not important (except that it must be non-zero) since the defining equations already allow for unknown scaling.
In practice, the vectors <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> may contain noise, which means that the similarity equations are only approximately valid. As a consequence, there may not be a vector <math> \mathbf{a} </math> which solves the homogeneous equation <math> \mathbf{0} = \mathbf{B} \, \mathbf{a} </math> exactly. In these cases, a [[total least squares]] solution can be used by choosing <math> \mathbf{a} </math> as a right singular vector corresponding to the smallest singular value of <math> \mathbf{B}. </math>
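As a concrete illustration, the whole construction for this example could be coded as follows. This is a minimal sketch, not part of the article: it assumes NumPy, the function name <code>dlt_2x3</code> is purely illustrative, and the indexing is zero-based while the formulas above are one-based.
<syntaxhighlight lang="python">
import numpy as np

def dlt_2x3(xs, ys):
    """Estimate A (2 x 3, up to scale) from x_k ~ A y_k.

    xs: (N, 2) array of the vectors x_k; ys: (N, 3) array of the vectors y_k.
    """
    # Each correspondence contributes one row b_k^T of B (so B is N x 6).
    B = np.empty((len(xs), 6))
    for k, (x, y) in enumerate(zip(xs, ys)):
        B[k] = [x[1] * y[0], -x[0] * y[0],
                x[1] * y[1], -x[0] * y[1],
                x[1] * y[2], -x[0] * y[2]]
    # Total least squares solution of B a = 0: the right singular vector
    # corresponding to the smallest singular value of B.
    _, _, Vt = np.linalg.svd(B)
    a = Vt[-1]
    # Rearrange a = (a11, a21, a12, a22, a13, a23) into the 2 x 3 matrix A.
    return a.reshape(3, 2).T
</syntaxhighlight>
Because the defining relations only determine <math> \mathbf{A} </math> up to scale, a test of such a sketch would compare the estimate and the true matrix only after normalizing both, allowing for an overall sign flip.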
== More general cases ==
The above example has <math> \mathbf{x}_{k} \in \mathbb{R}^{2} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{3} </math>, but the general strategy for rewriting the similarity relations into homogeneous linear equations can be generalized to arbitrary dimensions for both <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k}. </math>
If <math> \mathbf{x}_{k} \in \mathbb{R}^{2} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{q} </math> the previous expressions can still lead to an equation
: <math> 0 = \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{A} \, \mathbf{y}_{k} </math> for <math> \, k = 1, \ldots, N </math>
where <math> \mathbf{A} </math> now is <math> 2 \times q. </math>
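Concretely, the only change compared to the <math> 2 \times 3 </math> example is that each row of <math> \mathbf{B} </math> becomes a <math> 2q </math>-dimensional vector. Assuming the same interleaved ordering of the unknowns as in the example above (this ordering is a convention, not fixed by the article), the row contributed by one correspondence could be formed as in this sketch:
<syntaxhighlight lang="python">
import numpy as np

def dlt_row_p2(x, y):
    """Row b_k^T for x in R^2 and y in R^q.

    Interleaves x_2 * y_j and -x_1 * y_j, matching the ordering
    (a11, a21, a12, a22, ...) of the unknowns (zero-based indexing here).
    """
    return np.column_stack((x[1] * y, -x[0] * y)).ravel()
</syntaxhighlight>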
In the most general case <math> \mathbf{x}_{k} \in \mathbb{R}^{p} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{q} </math>. The main difference compared to the previous cases is that the matrix <math> \mathbf{H} </math> now is <math> p \times p </math> and anti-symmetric. For <math> p > 2 </math> the space of such matrices is no longer one-dimensional; it is of dimension
: <math> M = \frac{p\,(p-1)}{2}. </math>
This means that each value of ''k'' provides ''M'' homogeneous equations of the type
: <math> 0 = \mathbf{x}_{k}^{T} \, \mathbf{H}_{m} \, \mathbf{A} \, \mathbf{y}_{k} </math> for <math> \, m = 1, \ldots, M </math>
where the matrices <math> \mathbf{H}_{m} </math> form a basis of the space of <math> p \times p </math> anti-symmetric matrices.
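Such a basis is easy to enumerate: one anti-symmetric matrix per index pair <math> i < j </math>. The following sketch (NumPy assumed; the function name is illustrative) generates one possible choice, whose signs and ordering may differ from the particular basis used below:
<syntaxhighlight lang="python">
import numpy as np

def antisymmetric_basis(p):
    """Return a basis H_1, ..., H_M of the p x p anti-symmetric matrices,
    with M = p * (p - 1) / 2 (one matrix per index pair i < j)."""
    basis = []
    for i in range(p):
        for j in range(i + 1, p):
            H = np.zeros((p, p))
            H[i, j], H[j, i] = 1.0, -1.0
            basis.append(H)
    return basis
</syntaxhighlight>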
In the case that ''p'' = 3, the following three matrices <math> \mathbf{H}_{m} </math> can be chosen:
: <math> \mathbf{H}_{1} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} </math>, <math> \mathbf{H}_{2} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} </math>, <math> \mathbf{H}_{3} = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} .</math>
In this particular case, the homogeneous linear equations can be written as
: <math> \mathbf{0} = [\mathbf{x}_{k}]_{\times} \, \mathbf{A} \, \mathbf{y}_{k} </math> for <math> \, k = 1, \ldots, N </math>
where <math> [\mathbf{x}_{k}]_{\times} </math> is the matrix representation of the vector [[cross product]] with <math> \mathbf{x}_{k}. </math>
Each value of ''k'' provides three homogeneous linear equations in the unknown elements of <math> \mathbf{A} </math>. However, since <math> [\mathbf{x}_{k}]_{\times} </math> has rank 2, at most two of the equations are linearly independent. In practice, therefore, it is common to use only two of the three matrices <math> \mathbf{H}_{m} </math>, for example, for ''m'' = 1, 2. However, which equations are linearly dependent varies with <math> \mathbf{x}_{k} </math>, which means that in unlucky cases it would have been better to choose, for example, ''m'' = 2, 3. As a consequence, if the number of equations is not a concern, it may be better to use all three equations when the matrix <math> \mathbf{B} </math> is constructed.
The linear dependence between the resulting homogeneous linear equations is a general concern for the case ''p > 2'' and has to be dealt with either by reducing the set of anti-symmetric matrices <math> \mathbf{H}_{m} </math> or by allowing <math> \mathbf{B} </math> to become larger than necessary for determining <math> \mathbf{a}. </math>
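A sketch of the ''p'' = 3 case, using all three equations per correspondence as suggested above, could look as follows. It assumes NumPy, the function names are illustrative, and it uses the standard cross-product matrix, whose rows agree up to sign with <math> \mathbf{x}_{k}^{T} \, \mathbf{H}_{m} </math> for the basis given above, so the null space is unchanged; the rearrangement at the end undoes a column-major vectorization of <math> \mathbf{A}. </math>
<syntaxhighlight lang="python">
import numpy as np

def cross_matrix(x):
    """Matrix representation [x]_x of the cross product with a 3-vector x."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def dlt_p3(xs, ys):
    """Estimate A (3 x q, up to scale) from x_k ~ A y_k with x_k in R^3.

    xs: (N, 3) array of the vectors x_k; ys: (N, q) array of the vectors y_k.
    """
    q = ys.shape[1]
    rows = []
    for x, y in zip(xs, ys):
        # [x]_x A y = 0 is equivalent to (y^T kron [x]_x) vec(A) = 0,
        # where vec stacks the columns of A; all three rows are kept.
        rows.append(np.kron(y, cross_matrix(x)))
    B = np.vstack(rows)                    # shape (3 N) x (3 q)
    _, _, Vt = np.linalg.svd(B)
    a = Vt[-1]                             # smallest singular value of B
    return a.reshape((3, q), order="F")    # undo the column-major vec
</syntaxhighlight>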
== References ==