The '''point distribution model''' is a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set of shapes.
==Background==
The point distribution model concept was developed by Cootes,<ref>{{citation
|author = T. F. Cootes
|title = Statistical models of appearance for computer vision
|url = http://www.face-rec.org/algorithms/AAM/app_models.pdf
}}</ref> Taylor ''et al.''<ref name=taylor>{{citation
|author = T. F. Cootes and C. J. Taylor and D. H. Cooper and J. Graham
|title = Active shape models - their training and application
|journal = Computer Vision and Image Understanding
|volume = 61
|issue = 1
|pages = 38–59
|year = 1995
}}</ref> and became a standard in [[computer vision]] for the [[statistical shape analysis|statistical study of shape]]<ref>{{citation
|title = Shape discrimination in the Hippocampus using an MDL Model
|year = 2003
|conference = IPMI
|url = http://www2.wiau.man.ac.uk/caws/Conferences/10/proceedings/8/papers/133/rhhd_ipmi03%2Epdf
|author = Rhodri H. Davies and Carole J. Twining and P. Daniel Allen and Tim F. Cootes and Chris J. Taylor
|access-date = 2007-07-27
|archive-url = https://web.archive.org/web/20081008194350/http://www2.wiau.man.ac.uk/caws/Conferences/10/proceedings/8/papers/133/rhhd_ipmi03%2Epdf
|archive-date = 2008-10-08
|url-status = dead
}}</ref> and for [[image segmentation|segmentation]] of [[medical imaging|medical images]],<ref name=taylor/> where shape priors help the interpretation of noisy and low-contrast [[pixel]]s/[[voxel]]s. The latter point leads to [[active shape model]]s (ASM) and [[active appearance model]]s (AAM).
Point distribution models rely on [[landmark point]]s. A landmark is a point annotated by an anatomist at a given locus on every shape instance across the training set population. For instance, the same landmark will designate the tip of the [[index finger]] in a training set of 2D hand outlines.
==Details==
First, a set of training images is manually landmarked with enough corresponding landmarks to sufficiently approximate the geometry of the original shapes. These landmarks are aligned using [[generalized Procrustes analysis]], which minimizes the sum of squared distances between corresponding points.
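A minimal sketch of this alignment step, assuming [[NumPy]] and using illustrative function names (<code>align</code>, <code>generalized_procrustes</code>) rather than any particular library's API:

<syntaxhighlight lang="python">
import numpy as np

def align(shape, ref):
    """Similarity-align one (k, 2) landmark array to a reference (ordinary Procrustes)."""
    a = shape - shape.mean(axis=0)          # remove translation
    b = ref - ref.mean(axis=0)
    a /= np.linalg.norm(a)                  # remove scale
    # Optimal rotation from the SVD of the cross-covariance matrix
    u, _, vt = np.linalg.svd(a.T @ b)
    q = u @ vt
    if np.linalg.det(q) < 0:                # forbid reflections
        u[:, -1] *= -1
        q = u @ vt
    return a @ q

def generalized_procrustes(shapes, n_iter=10):
    """Iteratively align every shape to the evolving mean shape."""
    shapes = [np.asarray(s, dtype=float) for s in shapes]
    ref = shapes[0]
    for _ in range(n_iter):
        shapes = [align(s, ref) for s in shapes]
        ref = np.mean(shapes, axis=0)
    return shapes
</syntaxhighlight>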
The <math>k</math> aligned landmarks in two dimensions are given as
:<math>\mathbf{X} = (x_1, y_1, \ldots, x_k, y_k)</math>.
Each landmark <math>i \in \lbrace 1, \ldots, k \rbrace</math> should represent the same anatomical ___location. For example, landmark #3, <math>(x_3, y_3)</math>, might represent the tip of the ring finger across all training images.
Now the shape outlines are reduced to sequences of <math>k</math> landmarks, so that a given training shape is defined as the vector <math>\mathbf{X} \in \mathbb{R}^{2k}</math>. Assuming the scatter of the training shapes is [[gaussian distribution|Gaussian]] in this space, [[principal component analysis]] (PCA) is used to compute the normalized [[eigenvectors]] and [[eigenvalues]] of the [[covariance matrix]] across all training shapes. The matrix of the top <math>d</math> eigenvectors is given as <math>\mathbf{P} \in \mathbb{R}^{2k \times d}</math>, and each eigenvector describes a principal mode of variation along the set.
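A sketch of this step under the same assumptions (NumPy; the function name <code>shape_pca</code> is illustrative), where <code>X</code> stacks the aligned training shapes as rows:

<syntaxhighlight lang="python">
import numpy as np

def shape_pca(X, d):
    """PCA of an (n, 2k) matrix of aligned training shapes (one flattened shape per row).

    Returns the mean shape, the (2k, d) matrix P of the top-d eigenvectors,
    and the d corresponding eigenvalues."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)              # (2k, 2k) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # symmetric matrix, ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:d]      # keep the d largest modes
    return mean, eigvecs[:, order], eigvals[order]
</syntaxhighlight>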
Finally, a [[linear combination]] of the eigenvectors is used to define a new shape <math>\mathbf{X}'</math>, mathematically defined as:
:<math>\mathbf{X}' = \overline{\mathbf{X}} + \mathbf{P} \mathbf{b}</math>
where <math>\overline{\mathbf{X}}</math> is the mean of the aligned training shapes and <math>\mathbf{b} \in \mathbb{R}^{d}</math> is a vector of weights, one per retained eigenvector. By varying <math>\mathbf{b}</math>, an arbitrary number of new shapes can be generated. To keep these shapes similar to those in the training set, each element <math>b_i</math> is typically restricted to <math>\pm 3</math> standard deviations of its mode, i.e. <math>|b_i| \le 3\sqrt{\lambda_i}</math>, where <math>\lambda_i</math> is the eigenvalue of the <math>i</math>-th eigenvector.
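Under the same assumptions as the sketches above (NumPy; illustrative names), generating a constrained shape might look like:

<syntaxhighlight lang="python">
import numpy as np

def synthesize(mean, P, eigvals, b):
    """Generate a shape X' = mean + P b, clamping b to ±3 standard deviations per mode."""
    limit = 3.0 * np.sqrt(eigvals)             # std dev of mode i is sqrt(eigenvalue i)
    b = np.clip(b, -limit, limit)
    return mean + P @ b
</syntaxhighlight>

Setting <code>b</code> to the zero vector reproduces the mean shape; varying a single component sweeps one mode of variation.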
==Discussion==
An eigenvector, interpreted in [[Euclidean space]], can be seen as a sequence of <math>k</math> Euclidean vectors, each associated with its corresponding landmark and together designating a compound move of the whole shape. Global nonlinear variation is usually handled well, provided it is kept to a reasonable level. Typically, a twisting [[nematode]] worm is used as an example in the teaching of [[kernel principal component analysis|kernel PCA]]-based methods.
Due to the properties of PCA, the eigenvectors are mutually [[orthogonal]], form a basis of the training set cloud in the shape space, and cross at zero in this space, which represents the mean shape. Also, PCA is a traditional way of fitting a closed ellipsoid to a Gaussian cloud of points (whatever their dimension): this suggests the concept of bounded variation.
The idea behind PDM is that the eigenvectors can be linearly combined to create an infinite number of new shape instances that "look like" those in the training set. The coefficients are bounded in proportion to the corresponding eigenvalues, so that a generated shape stays inside the allowable shape ___domain spanned by the training set.
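As an illustration of this bounded variation, a sketch (same assumptions as above) of constraining an arbitrary aligned shape to the allowable shape ___domain; since <math>\mathbf{P}</math> has orthonormal columns, projecting onto the model is a simple matrix-vector product:

<syntaxhighlight lang="python">
import numpy as np

def constrain(X, mean, P, eigvals):
    """Project an aligned shape onto the model and clamp it to the allowable shape ___domain."""
    b = P.T @ (X - mean)                       # shape parameters: a matrix-vector product
    limit = 3.0 * np.sqrt(eigvals)
    b = np.clip(b, -limit, limit)              # keep each mode within ±3 standard deviations
    return mean + P @ b                        # nearest plausible shape under the model
</syntaxhighlight>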
==See also==
* [[Procrustes analysis]]
==References==
{{reflist}}
==External links==
* [https://web.archive.org/web/20080509041813/http://www.isbe.man.ac.uk/~bim/Models/index.html Flexible Models for Computer Vision], Tim Cootes, Manchester University.
* [http://www.icaen.uiowa.edu/~dip/LECTURE/Understanding3.html A practical introduction to PDM and ASMs].
[[Category:Computer vision]]