Instance-based learning

It is called instance-based because it constructs hypotheses directly from the training instances themselves.<ref name='aima733'>[[Stuart J. Russell|Stuart Russell]] and [[Peter Norvig]] (2003). ''[[Artificial Intelligence: A Modern Approach]]'', second edition, p. 733. Prentice Hall. ISBN 0-13-080302-2</ref>
This means that the hypothesis complexity can grow with the data:<ref name='aima733'/> in the worst case, a hypothesis is a list of ''n'' training items, and the computational complexity of [[Classification (machine learning)|classifying]] a single new instance is [[Big O notation|''O'']](''n''). One advantage that instance-based learning has over other methods of machine learning is its ability to [[online machine learning|adapt its model to previously unseen data]]: instance-based learners may simply store a new instance or throw an old instance away.
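As a minimal sketch (the class name and data are illustrative, not from any particular library), the two properties above can be made concrete: the stored training set ''is'' the hypothesis, a single classification scans all ''n'' stored instances, and online adaptation amounts to storing or discarding an instance.

```python
import math

class InstanceStore:
    """Minimal instance-based learner: the hypothesis is the stored data itself."""

    def __init__(self):
        self.instances = []  # the hypothesis; grows with the training data

    def add(self, x, label):
        # Online adaptation: learning from new data is just storing one more instance.
        self.instances.append((x, label))

    def forget(self, x, label):
        # Old instances can simply be thrown away.
        self.instances.remove((x, label))

    def classify(self, x):
        # 1-nearest-neighbour decision: one pass over all n stored
        # instances, hence O(n) per query.
        nearest = min(self.instances, key=lambda item: math.dist(x, item[0]))
        return nearest[1]

store = InstanceStore()
store.add((0.0, 0.0), "a")
store.add((5.0, 5.0), "b")
print(store.classify((1.0, 1.0)))  # → a
```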
 
Examples of instance-based learning algorithms are the [[k-nearest neighbor algorithm]], [[kernel method|kernel machines]] and [[Radial basis function network|RBF networks]].<ref>{{cite book |author=Tom Mitchell |title=Machine Learning |year=1997 |publisher=McGraw-Hill}}</ref>{{rp|ch. 8}} These store (a subset of) their training set; when predicting a value/class for a new instance, they compute distances or similarities between this instance and the training instances to make a decision.
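For the simplest of these, ''k''-nearest neighbors, the distance-then-decide step can be sketched as follows (the function name and toy data are illustrative assumptions; Euclidean distance and majority vote are one common choice among many):

```python
import math
from collections import Counter

def knn_predict(training, query, k=3):
    """Predict a class for `query` by majority vote among the k
    training instances closest to it in Euclidean distance."""
    # Compute a distance from the query to every stored instance,
    # then keep the k nearest.
    neighbours = sorted(training, key=lambda item: math.dist(query, item[0]))[:k]
    # Decide by majority vote over the neighbours' labels.
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

training = [((0, 0), "red"),  ((0, 1), "red"),  ((1, 0), "red"),
            ((5, 5), "blue"), ((5, 6), "blue"), ((6, 5), "blue")]
print(knn_predict(training, (1, 1)))      # → red
print(knn_predict(training, (5.5, 5.5)))  # → blue
```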
 
==See also==
*[[Analogical modeling]]