{{Main|Sparse dictionary learning}}
Sparse dictionary learning is a feature learning method in which each training example is represented as a linear combination of [[basis function]]s (the dictionary), with the combination coefficients assumed to be sparse. The method is [[strongly NP-hard]] and difficult to solve approximately.<ref>{{cite journal |first=A. M. |last=Tillmann |title=On the Computational Intractability of Exact and Approximate Dictionary Learning |journal=IEEE Signal Processing Letters |volume=22 |issue=1 |year=2015 |pages=45–49 |doi=10.1109/LSP.2014.2345761|bibcode=2015ISPL...22...45T |arxiv=1405.6664 |s2cid=13342762 }}</ref> A popular [[heuristic]] method for sparse dictionary learning is the [[k-SVD|''k''-SVD]] algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen example belongs. Assuming a dictionary has already been built for each class, a new example is associated with the class whose dictionary yields the best sparse representation of it. Sparse dictionary learning has also been applied in [[image de-noising]]. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.<ref>[[Michal Aharon|Aharon, M]], M Elad, and A Bruckstein. 2006. "[http://sites.fas.harvard.edu/~cs278/papers/ksvd.pdf K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation] {{Webarchive|url=https://web.archive.org/web/20181123142158/http://sites.fas.harvard.edu/~cs278/papers/ksvd.pdf |date=2018-11-23 }}." Signal Processing, IEEE Transactions on 54 (11): 4311–4322</ref>
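As an illustration (not part of the cited sources), the sparse-coding step that underlies methods such as ''k''-SVD can be sketched with orthogonal matching pursuit: given a fixed overcomplete dictionary, a signal is greedily approximated by a small number of atoms. The dictionary, signal, and sparsity level below are synthetic assumptions chosen for the example.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily represent x with at most k
    columns (atoms) of the dictionary D."""
    residual = x.copy()
    support = []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of x on the atoms selected so far
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code = np.zeros(D.shape[1])
    code[support] = coeffs
    return code

rng = np.random.default_rng(0)
# overcomplete dictionary: 20-dimensional signals, 50 unit-norm atoms
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)

# synthesize a signal that truly uses only 3 atoms
true_code = np.zeros(50)
true_code[[3, 17, 41]] = [1.5, -2.0, 0.7]
x = D @ true_code

code = omp(D, x, k=3)
print("nonzero coefficients:", np.count_nonzero(code))
print("reconstruction error:", np.linalg.norm(x - D @ code))
```

Full dictionary learning alternates this sparse-coding step with a dictionary update (in ''k''-SVD, a rank-one SVD update per atom); the sketch above fixes the dictionary and shows only the coding half.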
==== Anomaly detection ====