In the example (Figure 3), individual 1 is characterized by small values for the variables of both group 1 and group 2 (its two partial points have negative coordinates and lie close to one another). By contrast, individual 5 is characterized more by high values for the variables of group 2 than for those of group 1 (its group 2 partial point lies further from the origin than its group 1 partial point). This reading of the graph can be checked directly in the data.
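The mechanics behind partial points can be sketched in a few lines of Python (a minimal illustration on made-up data, not the article's test example; names such as <code>g1</code> and <code>g2</code> are ours). Each group is divided by the square root of the first eigenvalue of its separate PCA, a global PCA is run on the concatenated table, and the partial point of an individual for a group is its projection using only that group's columns, multiplied by the number of groups:

```python
import numpy as np

rng = np.random.default_rng(0)
n, J = 6, 2                                 # individuals, groups

def standardize(M):
    return (M - M.mean(0)) / M.std(0)

g1 = standardize(rng.normal(size=(n, 2)))   # group 1: two variables
g2 = standardize(rng.normal(size=(n, 2)))   # group 2: two variables

def lam1(M):
    # first eigenvalue of the group's separate (standardized) PCA
    return np.linalg.svd(M, compute_uv=False)[0] ** 2 / n

# Global MFA step: weighted PCA of the concatenated table
Z = np.hstack([g1 / np.sqrt(lam1(g1)), g2 / np.sqrt(lam1(g2))])
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
F = U * S                                   # global factor scores

# Partial points: keep only one group's columns, project on the
# global axes, and dilate by the number of groups J
Z1 = np.zeros_like(Z); Z1[:, :2] = Z[:, :2]
Z2 = np.zeros_like(Z); Z2[:, 2:] = Z[:, 2:]
F1, F2 = J * Z1 @ Vt.T, J * Z2 @ Vt.T

# Each global point is the barycentre of its partial points
assert np.allclose((F1 + F2) / J, F)
```

The final assertion is the property used when reading these plots: a mean individual point always sits at the centre of gravity of its partial points.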
6. ''Representations of groups of variables'' as such. In these graphs, each group of variables is represented by a single point. Two groups of variables are close to one another when they define the same structure on the individuals. Extreme case: two groups of variables that define homothetic clouds of individuals <math>N_i^j</math> coincide. The coordinate of group <math>j</math> along axis <math>s</math> is equal to the contribution of group <math>j</math> to the inertia of the MFA dimension of rank <math>s</math>. This contribution can be interpreted as an indicator of the relationship between group <math>j</math> and axis <math>s</math>, hence the name [[relationship square]] given to this type of representation. This representation also exists in other factorial methods (MCA and FAMD in particular), in which case the groups of variables are each reduced to a single variable.
[[File:AFM fig4.jpg|center|thumb|Figure 4. MFA. Test data. Representation of groups of variables.]]
In the example (Figure 4), this representation shows that the first axis is related to both groups of variables, while the second axis is related to the first group only. This agrees with the representation of the variables (Figure 2). In practice, this representation is especially valuable when the groups are numerous and include many variables.
''Other reading grid''. The two groups of variables have the size effect (first axis) in common and differ along axis 2, since this axis is specific to group 1 (it opposes variables A and B).
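The rule "coordinate of a group = its contribution to the axis's inertia" can be illustrated numerically. In this hedged Python sketch (made-up data, our own variable names), the coordinate of group <math>j</math> on axis <math>s</math> is computed as the sum of the squared correlations of the group's variables with the global factor, divided by the group's first separate eigenvalue; the group coordinates on an axis then sum to the inertia of that axis, and each lies between 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def standardize(M):
    return (M - M.mean(0)) / M.std(0)

g1 = standardize(rng.normal(size=(n, 2)))
base = rng.normal(size=(n, 1))
g2 = standardize(np.hstack([base, base]))   # two identical variables

def lam1(M):
    # first eigenvalue of the group's separate PCA
    return np.linalg.svd(M, compute_uv=False)[0] ** 2 / n

groups = [g1, g2]
Z = np.hstack([g / np.sqrt(lam1(g)) for g in groups])
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
F = U * S                                   # global factor scores
eig = S ** 2 / n                            # inertia of each MFA axis

def group_coord(g, s):
    # coordinate of the group on axis s = its contribution to the
    # inertia of axis s: sum of r(x_k, F_s)^2 over the group's
    # variables, divided by the group's first eigenvalue
    r2 = [np.corrcoef(g[:, k], F[:, s])[0, 1] ** 2
          for k in range(g.shape[1])]
    return sum(r2) / lam1(g)

coords = [[group_coord(g, s) for s in (0, 1)] for g in groups]
assert np.isclose(coords[0][0] + coords[1][0], eig[0])
```

Because each coordinate is bounded by 1, groups plot inside the unit square of the relationship square; two groups defining the same structure on the individuals receive nearly identical coordinate pairs.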
7. ''Representations of factors of separate analyses'' of the different groups. These factors are represented as supplementary quantitative variables (correlation circle).
[[File:AFM fig5.jpg|center|thumb|Figure 5. MFA. Test data. Representation of the principal components of separate PCA of each group.]]
In the example (Figure 5), the first axis of the MFA is relatively strongly correlated (r = .80) with the first principal component of group 2. This group, consisting of two identical variables, has only one principal component (coinciding with the variable itself). Group 1 consists of two orthogonal variables: every direction of the subspace spanned by these two variables has the same inertia (equal to 1). There is therefore uncertainty in the choice of the principal components, and no reason to be interested in one of them in particular. However, the two components provided by the program are well represented: the plane of the MFA is close to the plane spanned by the two variables of group 1.
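Treating a separate factor as a supplementary quantitative variable amounts to computing its correlation with each MFA factor: those correlations are its coordinates on the correlation circle. A minimal Python sketch, again on hypothetical data rather than the article's test example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def standardize(M):
    return (M - M.mean(0)) / M.std(0)

g1 = standardize(rng.normal(size=(n, 2)))
g2 = standardize(rng.normal(size=(n, 2)))

def svd_pca(M):
    # standardized PCA via SVD: factor scores and eigenvalues
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U * S, S ** 2 / n

# Separate PCA of each group
scores1, eig1 = svd_pca(g1)
scores2, eig2 = svd_pca(g2)

# Global MFA: each group weighted by its first separate eigenvalue
Z = np.hstack([g1 / np.sqrt(eig1[0]), g2 / np.sqrt(eig2[0])])
F, _ = svd_pca(Z)

# Coordinates of group 2's first separate component on the
# correlation circle of MFA axes 1-2 = correlations with F_1, F_2
coord = [np.corrcoef(scores2[:, 0], F[:, s])[0, 1] for s in (0, 1)]
```

Since the MFA factors are uncorrelated, the squared coordinates sum to at most 1, which is why separate components are drawn inside the correlation circle.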
==Conclusion==
The numerical example illustrates the output of the MFA. Besides balancing the groups of variables, and besides the usual graphics of PCA (or of MCA in the case of qualitative variables), the MFA provides results specific to the group structure of the set of variables, in particular:
* A superimposed representation of partial individuals for a detailed analysis of the data;
* A representation of the groups of variables providing a synthetic image that is all the more valuable as the data include many groups;
* A representation of factors from separate analyses.
The small size and simplicity of the example make it easy to validate the rules of interpretation. But the method is all the more valuable when the data set is large and complex.
Other methods suitable for this type of data are available. Procrustes analysis is compared to the MFA in {{Harvsp | text = Pagès 2014 | id = Pagès2014}}.
==History ==
The MFA was developed by [[Brigitte Escofier-Cordier|Brigitte Escofier]] and Jérôme Pagès in the 1980s. It is at the heart of two books written by these authors:
{{Harvsp | text = Escofier & Pagès 2008 | id = Escofier&Pagès}} and {{Harvsp | text = Pagès 2014 | id = Pagès2014}}. The MFA and its extensions (hierarchical MFA, MFA on contingency tables, etc.) are a research topic of the applied mathematics laboratory of Agrocampus ([http://math.agrocampus-ouest.fr LMA]).
==Software ==
MFA is available in two R packages ([http://factominer.free.fr FactoMineR] and [http://pbil.univ-lyon1.fr/ADE-4 ADE4]) and in several commercial packages (SPAD, Uniwin, XLSTAT, etc.). There is also a [http://www.ensai.fr/userfiles/AFMULT%20and%20PLOTAFM%20aout%202010.pdf SAS function]. The graphs in this article come from the R package FactoMineR.
== References ==
<!--- See http://en.wikipedia.org/wiki/Wikipedia:Footnotes on how to create references using <ref></ref> tags, these references will then appear here automatically -->
{{Reflist}}
* {{ book | author=Jérôme Pagès | title=Multiple Factor Analysis by Example Using R | publisher=Chapman & Hall/CRC The R Series | ___location=London | year=2014 | pages=272 | id=Pagès2014}}
* {{ book | author1=Brigitte Escofier | author2=Jérôme Pagès | title=Analyses factorielles simples et multiples ; objectifs, méthodes et interprétation | publisher=Dunod | ___location=Paris | year=2008 | pages=318 | isbn=978-2-10-051932-3 | id=Escofier&Pagès}}
* {{ book | author1=François Husson | author2=Sébastien Lê | author3=Jérôme Pagès | title=Exploratory Multivariate Analysis by Example Using R | publisher=Chapman & Hall/CRC The R Series | ___location=London | year=2009 | pages=224 | isbn=978-2-7535-0938-2 | id=HussonLêPagès}}
== External links ==