The Point Distribution Model is a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set of shapes. It has been developed by Cootes, Taylor and colleagues and has become a standard tool in computer vision for the statistical study of shape.
Point Distribution Models rely on [[Landmark point]]s. A landmark is an annotated point placed by an anatomist at a given locus on every shape instance across the training set population. For instance, the same landmark designates the tip of the index finger in every outline of a training set of 2D hands. [[Principal Component Analysis]] (PCA), for instance, is a relevant tool for studying correlations of movement between groups of landmarks across the training set population. Typically, it might detect that all the landmarks located along the same finger move together across training examples that show different finger spacing in a collection of flat-posed hands.
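The following is a minimal sketch, assuming NumPy and illustrative (not prescribed) shape and landmark counts, of how such a training set of landmarked 2D shapes can be flattened into 2n-dimensional vectors so that correlated movement between landmarks can be measured:

<syntaxhighlight lang="python">
# A minimal sketch (NumPy assumed): the same n landmarks are annotated on every
# training shape and flattened into 2n-dimensional vectors, so that correlated
# movement between landmarks can be measured across the training set.
# Shape count, landmark count, and the random placeholder data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_landmarks = 50, 12

# One row per training shape: (x1, y1, x2, y2, ..., xn, yn).
training_set = rng.normal(size=(n_shapes, 2 * n_landmarks))

# Correlation matrix of landmark coordinates across the population; landmarks
# that move together (e.g. points along the same finger) show high correlation.
correlation = np.corrcoef(training_set, rowvar=False)
</syntaxhighlight>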
The implementation of the procedure is roughly the following:
* '''4:''' PCA computes normalized eigenvectors and eigenvalues of the training-set covariance matrix. Each eigenvector describes a principal mode of variation along the set, and the corresponding eigenvalue indicates the importance of this mode in the scattering of the shape space. Since correlation was found between landmarks, the total variation of the space concentrates on the very first eigenvectors, whose eigenvalues show a very fast descent. If no correlation is found, this suggests either that the training set shows no variation or that the landmarks are not properly posed (see the sketch below).
An eigenvector, interpreted in Euclidean space, can be seen as a sequence of n Euclidean vectors, each associated with a corresponding landmark and together designating a compound move for the whole shape. Global nonlinear variation is usually well handled provided it is kept to a reasonable level; typically, a twisting nematode worm is the kind of example that opens the road to Kernel PCA-based methods.
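A minimal sketch of this eigen-analysis, assuming NumPy and placeholder data in place of a real aligned training set, might look as follows:

<syntaxhighlight lang="python">
# A minimal sketch (NumPy assumed) of step 4: computing the normalized
# eigenvectors and eigenvalues of the training-set covariance matrix.
# Shape count, landmark count, and the placeholder data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_landmarks = 50, 12
training_set = rng.normal(size=(n_shapes, 2 * n_landmarks))  # placeholder aligned shapes

mean_shape = training_set.mean(axis=0)
deviations = training_set - mean_shape

# Covariance of the 2n landmark coordinates across the training set.
covariance = np.cov(deviations, rowvar=False)

# Symmetric eigendecomposition; eigh returns eigenvalues in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(covariance)
order = np.argsort(eigenvalues)[::-1]                 # largest variance first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# With correlated landmarks, most variance concentrates in the first modes.
explained = np.cumsum(eigenvalues) / eigenvalues.sum()
n_modes = int(np.searchsorted(explained, 0.95) + 1)   # e.g. keep 95% of variation
</syntaxhighlight>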
The key idea of PDM is that the eigenvectors can be linearly combined to create an infinity of new shape instances that will 'look like' the ones in the training set. The coefficients are bounded according to the values of the corresponding eigenvalues, so as to ensure that the generated 2n- or 3n-dimensional point remains within the hyper-ellipsoidal allowed ___domain, the Allowable Shape Domain (ASD) [1].
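A minimal sketch of this generation step, again assuming NumPy, placeholder training data, and the commonly used bound of plus or minus three standard deviations per mode for the Allowable Shape Domain, might look as follows:

<syntaxhighlight lang="python">
# A minimal sketch (NumPy assumed) of generating a new shape instance as the
# mean shape plus a bounded linear combination of the leading eigenvectors.
# The +/- 3*sqrt(eigenvalue) bound is a common convention for the Allowable
# Shape Domain; names, counts, and data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_shapes, n_landmarks = 50, 12
training_set = rng.normal(size=(n_shapes, 2 * n_landmarks))  # placeholder aligned shapes

mean_shape = training_set.mean(axis=0)
covariance = np.cov(training_set - mean_shape, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(covariance)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

n_modes = 5                                       # keep the first few modes
P = eigenvectors[:, :n_modes]                     # 2n x t matrix of modes
limits = 3.0 * np.sqrt(eigenvalues[:n_modes])     # per-mode coefficient bounds

b = rng.uniform(-limits, limits)                  # shape parameters inside the ASD
new_shape = mean_shape + P @ b                    # a plausible new shape instance
landmarks = new_shape.reshape(n_landmarks, 2)     # back to (x, y) landmark pairs
</syntaxhighlight>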
==Some Interesting Articles==
* [2] Rhodri H. Davies, Carole J. Twining, P. Daniel Allen, Tim F. Cootes and Chris J. Taylor, "Shape discrimination in the Hippocampus using an MDL Model".