Partial least squares regression: Difference between revisions

==Algorithms==
 
A number of variants of PLS exist for estimating the factor and loading matrices {{mvar|T, U, P}} and {{mvar|Q}}. Most of them construct estimates of the linear regression between {{mvar|X}} and {{mvar|Y}} as <math>Y = X \tilde{B} + \tilde{B}_0</math>. Some PLS algorithms are only appropriate for the case where {{mvar|Y}} is a column vector, while others deal with the general case of a matrix {{mvar|Y}}. Algorithms also differ in whether they estimate the factor matrix {{mvar|T}} as an orthogonal (that is, [[orthonormal matrix|orthonormal]]) matrix or not.<ref>
{{cite journal |last1=Lindgren |first1=F |last2=Geladi |first2=P |last3=Wold |first3=S |title=The kernel algorithm for PLS |journal=J. Chemometrics |volume=7 |pages=45–59 |year=1993 |doi=10.1002/cem.1180070104 }}</ref><ref>{{cite journal |last1=de Jong |first1=S. |last2=ter Braak |first2=C.J.F. |title=Comments on the PLS kernel algorithm |journal=J. Chemometrics |volume=8 |issue=2 |pages=169–174 |year=1994 |doi=10.1002/cem.1180080208 }}</ref><ref>{{cite journal |last1=Dayal |first1=B.S. |last2=MacGregor |first2=J.F. |title=Improved PLS algorithms |journal=J. Chemometrics |volume=11 |issue=1 |pages=73–85 |year=1997 |doi=10.1002/(SICI)1099-128X(199701)11:1<73::AID-CEM435>3.0.CO;2-# }}</ref><ref>{{cite journal |last=de Jong |first=S. |title=SIMPLS: an alternative approach to partial least squares regression |journal=Chemometrics and Intelligent Laboratory Systems |volume=18 |pages=251–263 |year=1993 |doi=10.1016/0169-7439(93)85002-X |issue=3 }}</ref><ref>{{cite journal |last1=Rannar |first1=S. |last2=Lindgren |first2=F. |last3=Geladi |first3=P. |last4=Wold |first4=S. |title=A PLS Kernel Algorithm for Data Sets with Many Variables and Fewer Objects. Part 1: Theory and Algorithm |journal=J. Chemometrics |volume=8 |issue=2 |pages=111–125 |year=1994 |doi=10.1002/cem.1180080204 }}</ref><ref>{{cite journal |last=Abdi |first=H. |title=Partial least squares regression and projection on latent structure regression (PLS-Regression) |journal=Wiley Interdisciplinary Reviews: Computational Statistics |volume=2 |pages=97–106 |year=2010 |doi=10.1002/wics.51 }}</ref>
The final prediction will be the same for all these varieties of PLS, but the components will differ.
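As an illustration of the common affine prediction form above, the NumPy sketch below shows how an intercept <math>\tilde{B}_0</math> arises from centering the training data. The least-squares fit here is only a placeholder for whatever estimate of <math>\tilde{B}</math> a particular PLS variant produces; the variable names are assumptions for the example.

```python
import numpy as np

# Synthetic data with an exact linear relation, for demonstration only.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 4))
Y = X @ rng.standard_normal((4, 2))

# Center predictors and responses; PLS variants typically fit on centered data.
x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)
Xc, Yc = X - x_mean, Y - y_mean

# Placeholder for a PLS estimate of B_tilde on the centered data
# (here an ordinary least-squares solve, purely as a stand-in).
B_tilde = np.linalg.lstsq(Xc, Yc, rcond=None)[0]

# The intercept is recovered from the training means,
# giving the affine prediction Y = X B_tilde + B0.
B0 = y_mean - x_mean @ B_tilde
Y_hat = X @ B_tilde + B0
```

Because the synthetic responses are exactly linear in the predictors, `Y_hat` reproduces `Y` here; with real data the two would differ by the regression residual.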
===PLS1===
 
PLS1 is a widely used algorithm appropriate for the vector {{mvar|Y}} case. It estimates {{mvar|T}} as an orthonormal matrix. Pseudocode is given below (capital letters denote matrices; lower-case letters denote vectors if they are superscripted and scalars if they are subscripted):
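Before the pseudocode, a minimal NumPy sketch of a NIPALS-style PLS1 may help fix ideas. It is an illustrative implementation under stated assumptions, not a transcription of the pseudocode below: the function name and interface are hypothetical, and it assumes {{mvar|X}} and {{mvar|y}} are already centered with the number of components at most the rank of {{mvar|X}}.

```python
import numpy as np

def pls1(X, y, l):
    """Hypothetical NIPALS-style PLS1 sketch for a single response vector y.

    Returns regression coefficients B (so predictions are X @ B) and the
    score matrix T, whose columns come out orthonormal. Assumes X and y
    are centered and l <= rank(X).
    """
    Xk = np.asarray(X, dtype=float).copy()
    y = np.asarray(y, dtype=float)
    n, p = Xk.shape
    W = np.zeros((p, l))   # weight vectors
    P = np.zeros((p, l))   # X loadings
    q = np.zeros(l)        # y loadings
    T = np.zeros((n, l))   # scores
    for k in range(l):
        w = Xk.T @ y
        w /= np.linalg.norm(w)        # unit-norm weight vector
        t = Xk @ w
        t /= np.linalg.norm(t)        # unit-norm score, so T is orthonormal
        P[:, k] = Xk.T @ t            # loading for the current (deflated) X
        q[k] = y @ t
        Xk -= np.outer(t, P[:, k])    # deflate X by the rank-one fit
        W[:, k], T[:, k] = w, t
    # Map back to the original X: B = W (P^T W)^{-1} q
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, T
```

The deflation step removes from {{mvar|X}} the part explained by the current score, which is what makes successive scores mutually orthogonal; normalizing each score then yields an orthonormal {{mvar|T}}.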
 
1 {{nowrap|'''function''' PLS1({{mvar|X, y, l}})}}