Direct linear transformation

'''Direct linear transformation''' ('''DLT''') is an algorithm which solves a set of variables from a set of similarity relations

: <math> \mathbf{x}_{k} \propto \mathbf{A} \, \mathbf{y}_{k} </math> &nbsp; for <math> \, k = 1, \ldots, N, </math>

where <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> are known vectors, <math> \, \propto </math> denotes equality up to an unknown scalar multiplication, and <math> \mathbf{A} </math> is a matrix (or linear transformation) which contains the unknowns to be solved.
 
This type of relation appears frequently in [[projective geometry]]. Practical examples include the relation between 3D points in a scene and their projection onto the image plane of a [[Pinhole camera model|pinhole camera]],<ref>{{cite journal | last=Abdel-Aziz | first=Y.I. | last2=Karara | first2=H.M. | title=Direct Linear Transformation from Comparator Coordinates into Object Space Coordinates in Close-Range Photogrammetry | journal=Photogrammetric Engineering & Remote Sensing | publisher=American Society for Photogrammetry and Remote Sensing | volume=81 | issue=2 | date=2015-02-01 | issn=0099-1112 | doi=10.14358/pers.81.2.103 | pages=103–107| doi-access=free }}</ref> and [[Homography (computer vision)|homographies]].
 
== Introduction ==
 
An ordinary [[system of linear equations]]
 
: <math> \mathbf{x}_{k} = \mathbf{A} \, \mathbf{y}_{k} </math> &nbsp; for <math> \, k = 1, \ldots, N </math>

can be solved, for example, by rewriting it as a matrix equation <math> \mathbf{X} = \mathbf{A} \, \mathbf{Y}, </math> where the matrices <math> \mathbf{X} </math> and <math> \mathbf{Y} </math> contain the vectors <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> in their respective columns, and solving for <math> \mathbf{A} </math> by standard methods such as [[least squares]].
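
For illustration, the standard case can be solved directly with ordinary least squares. The following is a minimal sketch in Python/NumPy, using the <math> 2 \times 3 </math> shapes of the example below; all variable names are chosen for illustration only.

<syntaxhighlight lang="python">
import numpy as np

# Synthetic, noise-free data: N known vector pairs (x_k, y_k) stacked as columns.
rng = np.random.default_rng(0)
N = 10
A_true = rng.standard_normal((2, 3))   # ground-truth transformation
Y = rng.standard_normal((3, N))        # columns are the vectors y_k
X = A_true @ Y                         # columns are the vectors x_k

# In the standard case x_k = A y_k, the unknown A follows from least squares:
# solve Y^T A^T = X^T, i.e. minimise ||A Y - X|| over A.
At, *_ = np.linalg.lstsq(Y.T, X.T, rcond=None)
A_est = At.T

print(np.allclose(A_est, A_true))      # True for noise-free data
</syntaxhighlight>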
 
What makes the direct linear transformation problem distinct from the above standard case is the fact that the left and right sides of the defining equation can differ by an unknown multiplicative factor which is dependent on ''k''. As a consequence, <math> \mathbf{A} </math> cannot be computed as in the standard case. Instead, the similarity relations are rewritten as proper linear homogeneous equations which then can be solved by a standard method. The combination of rewriting the similarity equations as homogeneous linear equations and solving them by standard methods is referred to as a '''direct linear transformation algorithm''' or '''DLT algorithm'''. DLT is attributed to Ivan Sutherland.
<ref name="Sutherland">{{Citation | last=Sutherland| first=Ivan E. | title=Three-dimensional data input by tablet | journal=Proceedings of the IEEE | volume=62 | number=4 |date=April 1974 | pages=453-461453–461 | doi=10.1109/PROC.1974.9449 }}</ref>
 
== Example ==
 
Suppose that <math> k \in \{1, ..., N\} </math>. Let <math> \mathbf{x}_{k} = (x_{1k}, x_{2k}) \in \mathbb{R}^{2} </math> and <math> \mathbf{y}_{k} = (y_{1k}, y_{2k}, y_{3k}) \in \mathbb{R}^{3} </math> be two sets of known vectors, and we want to find the <math> 2 \times 3 </math> matrix <math> \mathbf{A} </math> such that
 
: <math> \alpha_{k} \, \mathbf{x}_{k} = \mathbf{A} \, \mathbf{y}_{k} </math> &nbsp; for <math> \, k = 1, \ldots, N </math>
 
where <math> \alpha_{k} \neq 0 </math> is the unknown scalar factor related to equation ''k''.
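
For concreteness, a small synthetic instance of this problem can be generated as follows (a sketch only; the names <code>A_true</code>, <code>alpha</code>, <code>X</code> and <code>Y</code> are illustrative and are reused in the later sketches).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N = 8
A_true = rng.standard_normal((2, 3))        # the matrix to be recovered
Y = rng.standard_normal((3, N))             # known vectors y_k as columns
alpha = rng.uniform(0.5, 2.0, size=N)       # unknown, k-dependent scale factors
X = (A_true @ Y) / alpha                    # known vectors x_k, so alpha_k x_k = A y_k
</syntaxhighlight>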
In order to eliminate the unknown scalars, define the <math> 2 \times 2 </math> anti-symmetric matrix

: <math> \mathbf{H} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} </math>

and multiply both sides of the equation with <math> \mathbf{x}_{k}^{T} \, \mathbf{H} </math> from the left:
 
:<math> \begin{align}
(\mathbf{x}_{k}^{T} \, \mathbf{H}) \, \alpha_{k} \, \mathbf{x}_{k} &= (\mathbf{x}_{k}^{T} \, \mathbf{H}) \, \mathbf{A} \, \mathbf{y}_{k} \\
\alpha_{k} \, \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{x}_{k} &= \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{A} \, \mathbf{y}_{k}
\end{align} </math> &nbsp; for <math> \, k = 1, \ldots, N. </math>
 
Since <math> \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{x}_{k} = 0 </math> (as <math> \mathbf{H} </math> is anti-symmetric), the following homogeneous equations, which no longer contain the unknown scalars, are obtained:
 
: <math> 0 = \mathbf{x}_{k}^{T} \, \mathbf{H} \, \mathbf{A} \, \mathbf{y}_{k} </math> &nbsp; for <math> \, k = 1, \ldots, N. </math>
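
The elimination step can be checked numerically. The short sketch below (Python/NumPy, with arbitrarily chosen values) confirms that <math> \mathbf{x}^{T} \mathbf{H} \mathbf{x} = 0 </math> for any 2-vector <math> \mathbf{x} </math>, and hence that the scalar <math> \alpha_{k} </math> drops out of the equation.

<syntaxhighlight lang="python">
import numpy as np

H = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # 2x2 anti-symmetric matrix

# x^T H x = 0 for every 2-vector x, so the unknown scalar alpha_k is annihilated.
x = np.array([3.0, -2.0])
print(x @ H @ x)                     # 0.0

# Consequently x_k^T H A y_k = 0 whenever alpha_k x_k = A y_k:
A = np.array([[1.0, 2.0, 0.5],
              [0.0, -1.0, 3.0]])
y = np.array([1.0, -1.0, 2.0])
x = (A @ y) / 1.7                    # alpha_k = 1.7, say
print(x @ H @ A @ y)                 # 0.0 up to floating-point error
</syntaxhighlight>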
 
In order to solve for <math> \mathbf{A} </math> from this set of equations, write out the elements of the vectors <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> and of the matrix <math> \mathbf{A} </math>; the homogeneous equation then becomes
: <math> 0 = a_{11} \, x_{2k} \, y_{1k} - a_{21} \, x_{1k} \, y_{1k} + a_{12} \, x_{2k} \, y_{2k} - a_{22} \, x_{1k} \, y_{2k} + a_{13} \, x_{2k} \, y_{3k} - a_{23} \, x_{1k} \, y_{3k} </math> &nbsp; for <math> \, k = 1, \ldots, N. </math>
 
This can also be written in matrix form:
 
:<math> 0 = \mathbf{b}_{k}^{T} \, \mathbf{a} </math> &nbsp; for <math> \, k = 1, \ldots, N </math>

where

: <math> \mathbf{b}_{k} = \begin{pmatrix} x_{2k} \, y_{1k} \\ -x_{1k} \, y_{1k} \\ x_{2k} \, y_{2k} \\ -x_{1k} \, y_{2k} \\ x_{2k} \, y_{3k} \\ -x_{1k} \, y_{3k} \end{pmatrix} </math> &nbsp; and &nbsp; <math> \mathbf{a} = \begin{pmatrix} a_{11} \\ a_{21} \\ a_{12} \\ a_{22} \\ a_{13} \\ a_{23} \end{pmatrix}. </math>
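
A row <math> \mathbf{b}_{k}^{T} </math> can be assembled directly from the data. A small helper in the same Python/NumPy sketch style (the function name <code>b_row</code> is illustrative); the ordering of the six entries follows the definition above.

<syntaxhighlight lang="python">
import numpy as np

def b_row(x, y):
    """Row b_k^T built from a 2-vector x = (x1, x2) and a 3-vector y = (y1, y2, y3)."""
    x1, x2 = x
    y1, y2, y3 = y
    return np.array([x2 * y1, -x1 * y1,
                     x2 * y2, -x1 * y2,
                     x2 * y3, -x1 * y3])

print(b_row((1.0, 2.0), (3.0, 4.0, 5.0)))   # [ 6. -3.  8. -4. 10. -5.]
</syntaxhighlight>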
 
So far, each value of ''k'' provides one equation in the six unknowns. The resulting set of homogeneous equations can be written in matrix form as
 
:<math> \mathbf{0} = \mathbf{B} \, \mathbf{a} </math>
 
where <math> \mathbf{B} </math> is an <math> N \times 6 </math> matrix which holds the known vectors <math> \mathbf{b}_{k} </math> in its rows. The unknown <math> \mathbf{a} </math> lies in the [[null space]] of <math> \mathbf{B} </math> and can be determined, for example, by a [[singular value decomposition]] of <math> \mathbf{B} </math>; <math> \mathbf{a} </math> is a right singular vector of <math> \mathbf{B} </math> corresponding to a singular value that equals zero. Once <math> \mathbf{a} </math> has been determined, the elements of the matrix <math> \mathbf{A} </math> are obtained by rearranging the 6-dimensional vector <math> \mathbf{a} </math> into a <math> 2 \times 3 </math> matrix. Notice that the scaling of <math> \mathbf{a} </math> or <math> \mathbf{A} </math> is not important (except that it must be non-zero), since the defining equations already allow for an unknown scaling.
 
In practice the vectors <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> may contain noise which means that the similarity equations are only approximately valid. As a consequence, there may not be a vector <math> \mathbf{a} </math> which solves the homogeneous equation <math> \mathbf{0} = \mathbf{B} \, \mathbf{a} </math> exactly. In these cases, a [[total least squares]] solution can be used by choosing <math> \mathbf{a} </math> as a right singular vector corresponding to the smallest singular value of <math> \mathbf{B}. </math>
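
Putting the pieces together, the following is a sketch of the complete DLT estimate for the <math> 2 \times 3 </math> example, using the singular value decomposition as described above (NumPy assumed; the function name <code>dlt_2x3</code> is illustrative). The same code handles noise-free and noisy data, since the right singular vector of the smallest singular value is the total least squares solution.

<syntaxhighlight lang="python">
import numpy as np

def dlt_2x3(X, Y):
    """Estimate A (2x3, up to scale) from columns x_k of X (2xN) and y_k of Y (3xN),
    assuming alpha_k x_k = A y_k for unknown non-zero scalars alpha_k."""
    N = X.shape[1]
    B = np.empty((N, 6))
    for k in range(N):
        x1, x2 = X[:, k]
        y1, y2, y3 = Y[:, k]
        B[k] = [x2 * y1, -x1 * y1, x2 * y2, -x1 * y2, x2 * y3, -x1 * y3]
    # Right singular vector of the smallest singular value: an exact null-space
    # vector for noise-free data, the total least squares solution otherwise.
    _, _, Vt = np.linalg.svd(B)
    a = Vt[-1]                                  # a = (a11, a21, a12, a22, a13, a23)
    return np.array([[a[0], a[2], a[4]],
                     [a[1], a[3], a[5]]])       # defined only up to a non-zero scale
</syntaxhighlight>

Applied to the synthetic data generated in the earlier sketch, <code>dlt_2x3(X, Y)</code> recovers <code>A_true</code> up to an overall scale factor, which can be checked by comparing, for example, <code>A_est / A_est[0, 0]</code> with <code>A_true / A_true[0, 0]</code> (assuming <math> a_{11} \neq 0 </math>).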

== More general cases ==

The example above can be generalized. Suppose that <math> \mathbf{x}_{k} \in \mathbb{R}^{2} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{q} </math> with

: <math> \alpha_{k} \, \mathbf{x}_{k} = \mathbf{A} \, \mathbf{y}_{k} </math> &nbsp; for <math> \, k = 1, \ldots, N, </math>

where <math> \mathbf{A} </math> now is <math> 2 \times q. </math> Each ''k'' provides one equation in the <math> 2q </math> unknown elements of <math> \mathbf{A} </math> and together these equations can be written <math> \mathbf{B} \, \mathbf{a} = \mathbf{0} </math> for the known <math> N \times 2q </math> matrix <math> \mathbf{B} </math> and unknown <math> 2q </math>-dimensional vector <math> \mathbf{a}. </math> This vector can be found in a similar way as before.
 
In the most general case, <math> \mathbf{x}_{k} \in \mathbb{R}^{p} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{q} </math>. The main difference compared to the previous case is that the matrix <math> \mathbf{H} </math> is now <math> p \times p </math> and anti-symmetric. When <math> p > 2 </math> the space of such matrices is no longer one-dimensional; it is of dimension
 
: <math> M = \frac{p\,(p-1)}{2}. </math>
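
As a sketch of how the general case can be set up, a basis of anti-symmetric <math> p \times p </math> matrices can be enumerated as follows, giving <math> M </math> homogeneous equations per correspondence (Python/NumPy; the helper name <code>antisymmetric_basis</code> is illustrative).

<syntaxhighlight lang="python">
import numpy as np

def antisymmetric_basis(p):
    """Return the M = p*(p-1)/2 elementary anti-symmetric p x p matrices."""
    basis = []
    for i in range(p):
        for j in range(i + 1, p):
            H = np.zeros((p, p))
            H[i, j], H[j, i] = 1.0, -1.0
            basis.append(H)
    return basis

# Each basis matrix H_m yields one equation x_k^T H_m A y_k = 0 per correspondence,
# so each correspondence contributes M rows to the matrix B.
print(len(antisymmetric_basis(4)))   # 6 == 4*3/2
</syntaxhighlight>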
 
== References ==
{{Reflist}}
*{{cite book | author=Richard Hartley and Andrew Zisserman | title=Multiple View Geometry in Computer Vision | edition=2nd | publisher=Cambridge University Press | year=2004}}
 
== External links ==
* [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.186.4411 Homography Estimation] by Elan Dubrofsky (§2.1 sketches the "Basic DLT Algorithm")
 
* [http://www.mathworks.com/matlabcentral/fileexchange/65030-direct-linear-transformation--dlt--solver A DLT Solver based on MATLAB] by Hsiang-Jen (Johnny) Chien
 
[[Category:Geometry in computer vision]]