[[Feature extraction]] and dimension reduction can be combined in one step using [[Principal Component Analysis|principal component analysis]] (PCA), [[linear discriminant analysis]] (LDA), or [[Canonical correlation|canonical correlation analysis]] (CCA) techniques as a pre-processing step, followed by clustering by ''k''-NN on [[Feature (machine learning)|feature vectors]] in reduced-dimension space. This process is also called low-dimensional [[embedding]].<ref>{{citation |last1=Shaw |first1=Blake |last2=Jebara |first2=Tony |title=Structure preserving embedding |work=Proceedings of the 26th Annual International Conference on Machine Learning |year=2009 |pages=1–8 | publication-date=June 2009 |url=http://www.cs.columbia.edu/~jebara/papers/spe-icml09.pdf |doi=10.1145/1553374.1553494 |isbn=9781605585161 |s2cid=8522279 }}</ref>
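The PCA-then-''k''-NN pipeline described above can be sketched as follows. This is a minimal illustration, not taken from any cited source: the toy data, the SVD-based PCA helper, and the `knn_predict` function are all hypothetical, and a real application would use a tested library implementation.

```python
import numpy as np

# Hypothetical toy data: 6 points in 3-D, two well-separated classes.
X = np.array([[ 1.0,  0.9, 0.1],
              [ 0.9,  1.1, 0.0],
              [ 1.1,  1.0, 0.2],
              [-1.0, -0.9, 0.1],
              [-1.1, -1.0, 0.0],
              [-0.9, -1.1, 0.2]])
y = np.array([0, 0, 0, 1, 1, 1])

def pca_fit_transform(X, k):
    """PCA via SVD: project centred data onto the top-k principal components."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T          # columns are principal directions
    return (X - mu) @ W, mu, W

# Step 1: dimension reduction (3-D -> 1-D embedding).
Z, mu, W = pca_fit_transform(X, k=1)

def knn_predict(Z_train, y_train, z, k=3):
    """Plain k-NN majority vote in the reduced-dimension space."""
    d = np.linalg.norm(Z_train - z, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# Step 2: k-NN on the embedded feature vectors.
query = (np.array([1.0, 1.0, 0.1]) - mu) @ W
print(knn_predict(Z, y, query))  # -> 0 (the query lies in the class-0 cluster)
```

The same two-step structure applies with LDA or CCA in place of PCA; only the projection matrix `W` changes.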
For very-high-dimensional datasets (e.g. when performing a similarity search on live video streams, DNA data or high-dimensional [[time series]]) running a fast '''approximate''' ''k''-NN search using [[Locality Sensitive Hashing|locality sensitive hashing]], "random projections",<ref>
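The "random projections" idea can be sketched as follows, assuming a Gaussian projection matrix in the style of the Johnson–Lindenstrauss lemma; the dimensions, scaling, and query construction here are illustrative choices, not a prescribed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical very-high-dimensional data: 100 points in 1000-D.
X = rng.normal(size=(100, 1000))

# Gaussian random projection scaled by 1/sqrt(k): approximately
# preserves pairwise distances when mapping 1000-D down to k = 50.
k = 50
R = rng.normal(size=(1000, k)) / np.sqrt(k)
Z = X @ R

# Query: a slightly perturbed copy of point 0, so its true
# nearest neighbour is known to be index 0.
q = X[0] + 0.01 * rng.normal(size=1000)
zq = q @ R

# Approximate search runs in the 50-D projected space; the exact
# search in the original 1000-D space is shown for comparison.
approx_nn = int(np.argmin(np.linalg.norm(Z - zq, axis=1)))
exact_nn = int(np.argmin(np.linalg.norm(X - q, axis=1)))
print(approx_nn, exact_nn)  # -> 0 0
```

Locality-sensitive hashing goes further by bucketing the projected points so that only candidates in the query's bucket are compared, rather than scanning all projected vectors as above.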
== Decision boundary ==