Prior knowledge for pattern recognition

== Prior Knowledge ==
 
Prior knowledge<ref>B. Schölkopf and A. Smola, "[https://books.google.com/books?id=y8ORL3DWt4sC&printsec=frontcover#v=onepage&q=%22prior%20knowledge+knowledge%22&f=false Learning with Kernels]", MIT Press 2002.</ref> refers to all information about the problem that is available in addition to the training data. In this most general form, however, determining a [[Model (abstract)|model]] from a finite set of samples without prior knowledge is an [[ill-posed]] problem, in the sense that a unique model may not exist. Many classifiers incorporate the general smoothness assumption: a test pattern that is similar to one of the training samples tends to be assigned to the same class.
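The smoothness assumption can be illustrated with a minimal nearest-neighbour sketch (the data and function names below are hypothetical, chosen only for illustration): a test pattern is assigned the class of the most similar training sample under Euclidean distance.

```python
import math

def nearest_neighbor_classify(train, test_point):
    """Assign test_point the class of its nearest training sample.

    train: list of (feature_vector, class_label) pairs.
    This embodies the smoothness assumption: similar inputs
    are presumed to share the same class.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda sample: dist(sample[0], test_point))[1]

# Hypothetical two-class training set
train = [((0.0, 0.0), "A"), ((1.0, 1.0), "B")]
print(nearest_neighbor_classify(train, (0.1, -0.2)))  # nearest to (0, 0)
```

Here the test point lies closest to the training sample at the origin, so it inherits that sample's class.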
 
The importance of prior knowledge in machine learning is suggested by its role in search and optimization. Loosely, the [[No free lunch in search and optimization|no free lunch theorem]] states that, averaged over all possible problems, all search algorithms perform equally well. It thus implies that gains in performance on a particular class of problems must come from an algorithm specialized to that class, i.e. one that incorporates some prior knowledge about the problem.