It is called instance-based because it constructs hypotheses directly from the training instances themselves.<ref name='aima733'>[[Stuart J. Russell|Stuart Russell]] and [[Peter Norvig]] (2003). ''[[Artificial Intelligence: A Modern Approach]]'', second edition, p. 733. Prentice Hall. ISBN 0-13-080302-2</ref>
This means that the hypothesis complexity can grow with the data:<ref name='aima733'/> in the worst case, a hypothesis is a list of ''n'' training items and the computational complexity of [[Classification (machine learning)|classifying]] a single new instance is [[Big O notation|''O'']](''n''). One advantage that instance-based learning has over other methods of machine learning is its ability to adapt its model to previously unseen data. Where other methods generally require the entire set of training data to be re-examined when one instance is changed, instance-based learners may simply store a new instance or throw an old instance away.
A simple example of an instance-based learning algorithm is the [[k-nearest neighbor algorithm]]. Daelemans and Van den Bosch describe variations of this algorithm for use in [[natural language processing]] (NLP), claiming that memory-based learning is both more psychologically realistic than other machine-learning schemes and more effective in practice.<ref>Walter Daelemans and Antal van den Bosch (2005). ''Memory-Based Language Processing''. Cambridge University Press.</ref>
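A minimal sketch of the k-nearest neighbor idea, assuming Euclidean distance and majority voting (the function name and data layout here are illustrative, not from any particular library). Note that "training" is just storing the instances, classification scans all ''n'' of them (the ''O''(''n'') cost mentioned above), and updating the model is a plain list append or removal:

```python
from collections import Counter
import math

def knn_classify(training, query, k=3):
    """Classify `query` by majority vote among its k nearest stored instances.

    `training` is a list of (point, label) pairs, where each point is a
    tuple of numbers. This is an illustrative sketch, not an optimized
    implementation (real systems often use spatial indexes such as k-d trees).
    """
    # Scan every stored instance: worst-case O(n) per classification.
    neighbors = sorted(training, key=lambda item: math.dist(item[0], query))[:k]
    # Majority vote among the labels of the k nearest neighbors.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# "Learning" a new instance is just storing it; no retraining pass is needed.
data = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
        ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_classify(data, (0.2, 0.1), k=3))  # → a
```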