Direct linear transformation: Difference between revisions

In practice the vectors <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math> may contain noise, which means that the similarity equations are only approximately valid. As a consequence, there may not be a vector <math> \mathbf{a} </math> which solves the homogeneous equation <math> \mathbf{0} = \mathbf{B} \, \mathbf{a} </math> exactly. In these cases, a total least squares solution can be used by choosing <math> \mathbf{a} </math> as a right singular vector corresponding to the smallest singular value of <math> \mathbf{B} </math>.
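The total least squares step can be sketched in NumPy; the matrix <math> \mathbf{B} </math> below is a hypothetical example, since in practice it would be assembled from the measured point correspondences:

```python
import numpy as np

# Total least squares solution of B a = 0: choose a as the right singular
# vector of B belonging to its smallest singular value. B here is an
# arbitrary example matrix (nearly rank-deficient), standing in for a
# matrix built from noisy correspondences.
B = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.1],
              [1.0, 0.0, 1.0]])

# np.linalg.svd orders singular values in decreasing order, so the last
# row of Vh is the unit vector a minimizing the residual norm ||B a||.
_, s, Vh = np.linalg.svd(B)
a = Vh[-1]
print(np.linalg.norm(B @ a))  # residual equals the smallest singular value
```

The residual norm achieved by this choice of <math> \mathbf{a} </math> is exactly the smallest singular value of <math> \mathbf{B} </math>, which is what makes the SVD the natural tool here.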
 
== More general cases ==
 
The above example has <math> \mathbf{x}_{k} \in \mathbb{R}^{2} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{3} </math>, but the general strategy for rewriting the similarity relations into homogeneous linear equations can be generalized to arbitrary dimensions for both <math> \mathbf{x}_{k} </math> and <math> \mathbf{y}_{k} </math>.
 
If <math> \mathbf{x}_{k} \in \mathbb{R}^{2} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{q} </math> the previous expressions can still lead to an equation
where <math> \mathbf{A} </math> is now <math> 2 \times q </math>. Each ''k'' provides one equation in the <math> 2q </math> unknown elements of <math> \mathbf{A} </math>, and together these equations can be written <math> \mathbf{B} \, \mathbf{a} = \mathbf{0} </math> for a known <math> N \times 2q </math> matrix <math> \mathbf{B} </math> and an unknown ''2q''-dimensional vector <math> \mathbf{a} </math>. This vector can be found in the same way as before.
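A minimal sketch of this case, assuming noise-free correspondences and an arbitrary example dimension ''q''=3 (the ground-truth matrix, the random generator seed, and the number of points are all hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 3
A_true = rng.standard_normal((2, q))     # hypothetical ground truth
H = np.array([[0.0, -1.0], [1.0, 0.0]])  # the 2x2 anti-symmetric matrix

# Each correspondence contributes one equation x_k^T H A y_k = 0.
# Writing a = vec(A) row-wise, x^T H A y = (H^T x)^T A y, so the
# coefficient row of B for point k is kron(H^T x_k, y_k).
ys = rng.standard_normal((10, q))
xs = ys @ A_true.T                       # noise-free x_k proportional to A y_k
B = np.array([np.kron(H.T @ x, y) for x, y in zip(xs, ys)])  # 10 x 2q

# a is the right singular vector for the smallest singular value of B;
# reshaped, it recovers A up to an unknown scale factor.
a = np.linalg.svd(B)[2][-1]
A_est = a.reshape(2, q)
```

Since the similarity relation only determines <math> \mathbf{A} </math> up to a scalar factor, the recovered <math> \mathbf{A} </math> agrees with the ground truth only up to scale.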
 
In the most general case <math> \mathbf{x}_{k} \in \mathbb{R}^{p} </math> and <math> \mathbf{y}_{k} \in \mathbb{R}^{q} </math>. The main difference compared to the previous cases is that the matrix <math> \mathbf{H} </math> is now <math> p \times p </math> and anti-symmetric. When <math> p > 2 </math> the space of such matrices is no longer one-dimensional; instead, it has dimension
 
: <math> M = \frac{p\,(p-1)}{2} </math>
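This count follows because the strictly upper-triangular entries of an anti-symmetric matrix can be chosen freely, while the diagonal is zero and the lower triangle is determined by the upper one:

```python
# Dimension of the space of p x p anti-symmetric matrices: one free
# entry per unordered pair of distinct indices, i.e. p choose 2.
def antisym_dim(p):
    return p * (p - 1) // 2

print([antisym_dim(p) for p in (2, 3, 4)])  # [1, 3, 6]
```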
=== Example ''p''=3 ===
 
In the case that ''p''=3 the following three matrices <math> \mathbf{H}_{m} </math> can be chosen:
 
: <math> \mathbf{H}_{1} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} </math>, &nbsp; <math> \mathbf{H}_{2} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} </math>, &nbsp; <math> \mathbf{H}_{3} = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} </math>
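These three matrices form a basis for the 3 × 3 anti-symmetric matrices: the linear combination with coefficients <math> x_{1}, x_{2}, x_{3} </math> is the cross-product matrix <math> [\mathbf{x}]_{\times} </math>, so that it maps any <math> \mathbf{y} </math> to <math> \mathbf{x} \times \mathbf{y} </math>. A quick check (the vectors below are arbitrary example values):

```python
import numpy as np

H1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
H2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
H3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])

# x1*H1 + x2*H2 + x3*H3 is the cross-product matrix [x]_x.
x = np.array([2.0, -1.0, 3.0])
Hx = x[0] * H1 + x[1] * H2 + x[2] * H3
y = np.array([1.0, 4.0, -2.0])
print(np.allclose(Hx @ y, np.cross(x, y)))  # True
```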
Each value of ''k'' provides three homogeneous linear equations in the unknown elements of <math> \mathbf{A} </math>. However, since <math> [\mathbf{x}_{k}]_{\times} </math> has rank 2, at most two of these equations are linearly independent. In practice, therefore, it is common to use only two of the three matrices <math> \mathbf{H}_{m} </math>, for example ''m''=1, 2. However, which equations are linearly dependent varies with <math> \mathbf{x}_{k} </math>, which means that in unlucky cases it would have been better to choose, for example, ''m''=2, 3. As a consequence, if the number of equations is not a concern, it may be better to use all three equations when the matrix <math> \mathbf{B} </math> is constructed.
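The rank-2 property can be verified directly: for any nonzero <math> \mathbf{x} </math>, the null space of <math> [\mathbf{x}]_{\times} </math> is spanned by <math> \mathbf{x} </math> itself, since <math> \mathbf{x} \times \mathbf{x} = \mathbf{0} </math> (the vector below is an arbitrary example):

```python
import numpy as np

# [x]_x for nonzero x is anti-symmetric with rank 2; its one-dimensional
# null space is spanned by x, so only two of the three equations obtained
# per point are linearly independent.
x = np.array([2.0, -1.0, 3.0])
Xx = np.array([[0.0, -x[2], x[1]],
               [x[2], 0.0, -x[0]],
               [-x[1], x[0], 0.0]])
print(np.linalg.matrix_rank(Xx))  # 2
print(np.allclose(Xx @ x, 0))     # True: x spans the null space
```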
 
The linear dependence between the resulting homogeneous linear equations is a general concern for the case ''p'' > 2 and has to be dealt with either by reducing the set of anti-symmetric matrices <math> \mathbf{H}_{m} </math> or by allowing <math> \mathbf{B} </math> to become larger than necessary for determining <math> \mathbf{a} </math>.
 
== References ==