Instance-based learning

Instance-based learning is so named because it constructs hypotheses directly from the training instances themselves.<ref name='aima733'>Stuart Russell and Peter Norvig (2003). ''[[Artificial Intelligence: A Modern Approach]]'', second edition, p. 733. Prentice Hall. ISBN 0-13-080302-2</ref>
This means that the hypothesis complexity can grow with the data:<ref name='aima733'/> in the worst case, a hypothesis is a list of ''n'' training items, and [[Classification (machine learning)|classification]] takes [[Big O notation|''O'']](''n'') time. One advantage that instance-based learning has over other machine-learning methods is its ability to adapt the model to previously unseen data: whereas other methods generally require the entire training set to be re-examined when a single instance changes, an instance-based learner can simply store the new instance or discard an old one.
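The adaptation property described above can be illustrated with a minimal sketch (not taken from the cited sources; all names are illustrative): the "model" is simply the list of stored instances, so updating it means appending or removing one item, while classification scans all ''n'' stored instances.

```python
class InstanceStore:
    """Toy instance-based (1-nearest-neighbour) learner."""

    def __init__(self):
        self.instances = []          # the hypothesis IS this list

    def add(self, features, label):
        # Adapting the model to a new instance is a constant-time append;
        # no retraining over the rest of the data is needed.
        self.instances.append((features, label))

    def forget(self, features, label):
        # Discarding an old instance also leaves the rest untouched.
        self.instances.remove((features, label))

    def classify(self, query):
        # Worst-case O(n): scan every stored instance for the nearest one.
        def sq_dist(x):
            return sum((a - b) ** 2 for a, b in zip(x, query))
        _, label = min(self.instances, key=lambda inst: sq_dist(inst[0]))
        return label


store = InstanceStore()
store.add((0.0, 0.0), "A")
store.add((5.0, 5.0), "B")
print(store.classify((1.0, 1.0)))  # "A": the nearest stored instance wins
```

Removing the `(0.0, 0.0)` instance with `forget` immediately changes the prediction for the same query to `"B"`, without any retraining step.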
 
A simple example of an instance-based learning algorithm is the [[k-nearest neighbor algorithm]]. Daelemans and Van den Bosch describe variations of this algorithm for use in [[natural language processing]] (NLP), claiming that memory-based learning is both more psychologically realistic than other machine-learning schemes and practically effective.<ref>Walter Daelemans and Antal van den Bosch (2005). ''Memory-Based Language Processing''. Cambridge University Press.</ref>
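The k-nearest neighbor rule mentioned above can be sketched as follows (a hedged illustration under simple assumptions, not the algorithm of Daelemans and Van den Bosch; function and variable names are hypothetical): the class of a query point is decided by majority vote among the ''k'' stored training instances closest to it.

```python
from collections import Counter
import math


def knn_classify(training, query, k=3):
    """Classify `query` by majority vote of its k nearest neighbours.

    training: list of (feature_vector, label) pairs.
    """
    # Sort stored instances by Euclidean distance to the query point.
    by_dist = sorted(training, key=lambda inst: math.dist(inst[0], query))
    # Majority vote among the labels of the k nearest instances.
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]


train = [((0, 0), "x"), ((0, 1), "x"), ((1, 0), "x"),
         ((5, 5), "y"), ((5, 6), "y")]
print(knn_classify(train, (1, 1), k=3))  # "x": all three nearest are "x"
```

With ''k'' = 1 this reduces to the nearest-neighbour rule; larger ''k'' smooths the decision boundary at the cost of blurring small classes.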