It's important to note that each landmark <math>i \in \lbrace 1, \ldots, k \rbrace </math> should represent the same anatomical ___location. For example, landmark #3, <math>(x_3, y_3)</math>, might represent the tip of the ring finger across all training images.
Now the shape outlines are reduced to sequences of <math>k</math> landmarks, so that a given training shape is defined as the vector <math>\mathbf{X} \in \mathbb{R}^{2k}</math>. Assuming the scatter of the training shapes is [[Gaussian distribution|Gaussian]] in this space, PCA is used to compute the normalized [[eigenvectors]] and [[eigenvalues]] of the [[covariance matrix]] across all training shapes. The matrix of the top <math>d</math> eigenvectors is given as <math>\mathbf{P} \in \mathbb{R}^{2k \times d}</math>, and each eigenvector describes a principal mode of variation along the set.
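A minimal sketch of this step in Python with NumPy, assuming the training shapes have already been landmarked, aligned, and flattened into length-<math>2k</math> vectors; the file name and the number of retained modes <math>d</math> are illustrative choices, not part of the method itself:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical training data: each row is one training shape, i.e. its k
# landmark coordinates flattened into a vector of length 2k.
shapes = np.load("training_shapes.npy")        # assumed file, array of shape (n_shapes, 2k)

mean_shape = shapes.mean(axis=0)               # mean shape, length 2k
cov = np.cov(shapes, rowvar=False)             # (2k, 2k) covariance matrix

# Eigendecomposition of the covariance matrix (eigh, since cov is symmetric).
eigvals, eigvecs = np.linalg.eigh(cov)

# Sort modes by decreasing eigenvalue and keep the top d of them.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
d = 5                                          # illustrative number of retained modes
P = eigvecs[:, :d]                             # columns are the principal modes of variation
</syntaxhighlight>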
Finally, a [[linear combination]] of the eigenvectors is used to generate a new shape <math>\mathbf{X}'</math>, mathematically defined as:
:<math>\mathbf{X}' = \overline{\mathbf{X}} + \mathbf{P} \mathbf{b}</math>
where <math>\overline{\mathbf{X}}</math> is defined as the mean shape across all training images, and <math>\mathbf{b}</math> is a vector of scaling values, one per principal component.
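Continuing the sketch above, a new shape can be generated directly from this formula; the particular values chosen for <math>\mathbf{b}</math> below are illustrative only:

<syntaxhighlight lang="python">
# Synthesise a new shape: X' = mean_shape + P b, where b holds one scaling
# value per retained principal component.  Here b is drawn at random and
# scaled by the standard deviation of each mode purely for illustration.
rng = np.random.default_rng(0)
b = rng.standard_normal(d) * np.sqrt(eigvals[:d])
new_shape = mean_shape + P @ b                 # length-2k vector
landmarks = new_shape.reshape(-1, 2)           # back to k (x, y) landmark points
</syntaxhighlight>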
PDMs can be extended to an arbitrary number of dimensions, but are typically used in 2D image and 3D volume applications (where each landmark point lies in <math>\mathbb{R}^2</math> or <math>\mathbb{R}^3</math>).