Instance-based learning

This means that the hypothesis complexity can grow with the data:<ref name='aima733'/> in the worst case, a hypothesis is a list of ''n'' training items and the computational complexity of [[Classification (machine learning)|classifying]] a single new instance is [[Big O notation|''O'']](''n''). One advantage that instance-based learning has over other methods of machine learning is its ability to adapt its model to previously unseen data. Instance-based learners may simply store a new instance or throw an old instance away.
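
A minimal sketch of this idea in Python (the class and method names, and the use of Euclidean distance, are illustrative assumptions rather than any standard library interface): the "model" is simply the stored training set, classifying a query requires one pass over all ''n'' stored instances, and the learner adapts by storing new instances or discarding old ones.

<syntaxhighlight lang="python">
import math

class NearestNeighborLearner:
    """Minimal instance-based learner: the hypothesis is the stored training set."""

    def __init__(self):
        self.instances = []  # list of (feature_vector, label) pairs

    def add_instance(self, x, y):
        # Adapting to previously unseen data is just storing another instance.
        self.instances.append((x, y))

    def forget_instance(self, index):
        # Discarding an old instance shrinks the hypothesis again.
        del self.instances[index]

    def classify(self, query):
        # O(n): one distance computation per stored training instance.
        nearest = min(self.instances, key=lambda inst: math.dist(inst[0], query))
        return nearest[1]

learner = NearestNeighborLearner()
learner.add_instance((0.0, 0.0), "a")
learner.add_instance((1.0, 1.0), "b")
print(learner.classify((0.2, 0.1)))  # -> "a"
</syntaxhighlight>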
 
Examples of instance-based learning algorithms are the [[k-nearest neighbors algorithm|''k''-nearest neighbors algorithm]], [[kernel method|kernel machines]] and [[Radial basis function network|RBF networks]].<ref>{{cite book |author=Tom Mitchell |title=Machine Learning |year=1997 |publisher=McGraw-Hill}}</ref>{{rp|ch. 8}} These store (a subset of) their training set; when predicting a value/class for a new instance, they compute distances or similarities between this instance and the training instances to make a decision.
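
As an illustration of this distance-based prediction step, the following sketch weights every stored training label by a Gaussian (RBF-style) similarity to the query; the function names and the bandwidth parameter are assumptions made for the example, not taken from any particular library.

<syntaxhighlight lang="python">
import math
from collections import defaultdict

def rbf_similarity(x, z, bandwidth=1.0):
    """Gaussian (RBF) similarity between a query x and a stored instance z."""
    return math.exp(-math.dist(x, z) ** 2 / (2 * bandwidth ** 2))

def predict_class(training_set, query, bandwidth=1.0):
    """Predict the class whose stored instances are most similar to the query."""
    scores = defaultdict(float)
    for features, label in training_set:          # one pass over the stored instances
        scores[label] += rbf_similarity(query, features, bandwidth)
    return max(scores, key=scores.get)

training_set = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b")]
print(predict_class(training_set, (0.9, 0.8)))    # -> "b"
</syntaxhighlight>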
 
To reduce the memory cost of storing all training instances, as well as the risk of [[overfitting]] to noise in the training set, ''instance reduction'' algorithms have been proposed.<ref>{{cite journal |title=Reduction techniques for instance-based learning algorithms |author1=D. Randall Wilson |author2=Tony R. Martinez |journal=[[Machine Learning (journal)|Machine Learning]] |publisher=Kluwer |year=2000}}</ref>
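
A minimal sketch of one such reduction heuristic, in the spirit of Hart's condensed nearest neighbour rule (one of the baselines discussed in the instance-reduction literature): an instance is kept only if the instances retained so far would misclassify it under 1-nearest-neighbour classification. The function names are illustrative assumptions.

<syntaxhighlight lang="python">
import math

def condense(training_set):
    """Condensed-nearest-neighbour-style reduction: keep an instance only if the
    currently kept subset would misclassify it with a 1-nearest-neighbour rule."""
    kept = [training_set[0]]
    changed = True
    while changed:                       # repeat until a full pass adds nothing
        changed = False
        for features, label in training_set:
            nearest = min(kept, key=lambda inst: math.dist(inst[0], features))
            if nearest[1] != label:      # current subset gets this instance wrong
                kept.append((features, label))
                changed = True
    return kept

data = [((0.0, 0.0), "a"), ((0.1, 0.1), "a"), ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(len(condense(data)), "of", len(data), "instances kept")
</syntaxhighlight>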