The '''point distribution model''' is a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set of shapes.
==Background==
The point distribution model concept has been developed by Cootes,<ref>{{citation
 |author = T. F. Cootes
 |title = Statistical models of appearance for computer vision
}}</ref> Taylor ''et al.''<ref name=taylor>{{citation
 |author1 = T. F. Cootes
 |author2 = C. J. Taylor
 |title = Active shape models—their training and application
 |journal = Computer Vision and Image Understanding
 |pages = 38–59
 |year = 1995
}}</ref> and became a standard in [[computer vision]] for the [[statistical shape analysis|statistical study of shape]]<ref>{{citation
|title = Shape discrimination in the Hippocampus using an MDL Model
|year = 2003
|conference = IPMI
|url = http://www2.wiau.man.ac.uk/caws/Conferences/10/proceedings/8/papers/133/rhhd_ipmi03%2Epdf
|author = Rhodri H. Davies and Carole J. Twining and P. Daniel Allen and Tim F. Cootes and Chris J. Taylor
|access-date = 2007-07-27
|archive-url = https://web.archive.org/web/20081008194350/http://www2.wiau.man.ac.uk/caws/Conferences/10/proceedings/8/papers/133/rhhd_ipmi03%2Epdf
|archive-date = 2008-10-08
|url-status = dead
}}</ref> and for [[image segmentation|segmentation]] of [[medical imaging|medical images]],<ref name=taylor/> where shape priors help the interpretation of noisy, low-contrast [[pixel]]s/[[voxel]]s. The latter application leads to [[active shape model]]s (ASM) and [[active appearance model]]s (AAM).
==Details==
First, a set of training images is manually landmarked with enough corresponding landmarks to sufficiently approximate the geometry of the original shapes. These landmarks are aligned using [[generalized Procrustes analysis]], which minimizes the least-squared error between corresponding points.
The <math>k</math> aligned landmarks in two dimensions are given as
:<math>\mathbf{X} = (x_1, y_1, \ldots, x_k, y_k)</math>.
It is important to note that each landmark <math>i \in \lbrace 1, \ldots, k \rbrace </math> should represent the same anatomical ___location in every image. For example, landmark #3, <math>(x_3, y_3)</math>, might represent the tip of the ring finger across all training images.
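The construction can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the training shapes are synthetic, the landmark count and variable names are hypothetical, and Procrustes alignment is assumed to have been done already. The mean shape and the principal modes of variation are obtained from the aligned shape vectors via PCA (computed here with an SVD), and a new plausible shape is generated from the mean plus a bounded combination of the leading modes.

```python
import numpy as np

# Hypothetical training data: n shapes, each with k landmarks in 2-D,
# assumed already aligned by generalized Procrustes analysis.
rng = np.random.default_rng(0)
n, k = 40, 5
base = rng.standard_normal(2 * k)                        # a reference shape vector
shapes = base + 0.05 * rng.standard_normal((n, 2 * k))   # small variations around it

# Mean shape and PCA of the aligned shape vectors.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# SVD of the centered data gives the eigenvectors of the covariance
# matrix without forming it explicitly.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenvalues = s**2 / (n - 1)    # variance captured by each mode
modes = vt                      # rows are unit-length eigenvectors

# Generate a new plausible shape: mean plus a combination of the first
# t modes, with each coefficient clipped to +/- 3 standard deviations.
t = 3
limit = 3 * np.sqrt(eigenvalues[:t])
b = np.clip(rng.standard_normal(t) * np.sqrt(eigenvalues[:t]), -limit, limit)
new_shape = mean_shape + modes[:t].T @ b
print(new_shape.shape)   # a (2k,)-vector: k new landmark pairs
```

The clipping of the coefficients <code>b</code> is what keeps the generated shape "similar" to the training set, as discussed below.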
==Discussion==
An eigenvector, interpreted in [[euclidean space]], can be seen as a sequence of <math>k</math> euclidean vectors, each associated with a corresponding landmark, that together designate a compound move of the whole shape. Global nonlinear variation is usually well handled, provided it is kept to a reasonable level. Typically, a twisting [[nematode]] worm is used as an example in the teaching of [[kernel principal component analysis|kernel PCA]]-based methods.
Due to the properties of PCA, the eigenvectors are mutually [[orthogonal]], form a basis of the training-set cloud in the shape space, and cross at zero in this space, which represents the mean shape. Also, PCA is a traditional way of fitting a closed ellipsoid to a Gaussian cloud of points (whatever their dimension): this suggests the concept of bounded variation.
The idea behind PDMs is that eigenvectors can be linearly combined to create an infinity of new shape instances that will "look like" the ones in the training set. The coefficients are bounded according to the corresponding eigenvalues, so as to ensure that the generated shape stays within the hyper-ellipsoidal allowed ___domain, the ''allowable shape ___domain'' (ASD).
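In the notation commonly used for PDMs (assumed here, not taken from the surviving text), a new shape instance <math>\mathbf{x}</math> is generated as
:<math>\mathbf{x} = \overline{\mathbf{x}} + \mathbf{P}\mathbf{b},</math>
where <math>\overline{\mathbf{x}}</math> is the mean shape, the columns of <math>\mathbf{P}</math> are the first <math>t</math> eigenvectors, and <math>\mathbf{b}</math> is a vector of shape coefficients. Each coefficient is typically restricted to
:<math>-3\sqrt{\lambda_i} \le b_i \le 3\sqrt{\lambda_i},</math>
so that the generated shape stays within about three standard deviations of the mean along each mode of variation.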
==See also==▼
* [[Procrustes analysis]]▼
==References==
{{reflist}}
==External links==
* [https://web.archive.org/web/20080509041813/http://www.isbe.man.ac.uk/~bim/Models/index.html Flexible Models for Computer Vision], Tim Cootes, Manchester University.
* [http://www.icaen.uiowa.edu/~dip/LECTURE/Understanding3.html A practical introduction to PDM and ASMs].