{{short description|Non-parametric classification method}}{{About|a clustering algorithm|algorithms for finding nearest neighbors|Nearest neighbor search}}{{Distinguish|k-means clustering}}
{{DISPLAYTITLE:''k''-nearest neighbors algorithm}}
In [[statistics]], the '''''k''-nearest neighbors algorithm''' ('''''k''-NN''') is a [[Non-parametric statistics|non-parametric]] [[supervised learning]] method first developed by [[Evelyn Fix]] and [[Joseph Lawson Hodges Jr.|Joseph Hodges]] in 1951,<ref>{{Cite report | last1=Fix | first1=Evelyn | last2= Hodges | first2=Joseph L. | title=Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties | issue=Report Number 4, Project Number 21-49-004 | year=1951 | url=https://apps.dtic.mil/dtic/tr/fulltext/u2/a800276.pdf | archive-url=https://web.archive.org/web/20200926212807/https://apps.dtic.mil/dtic/tr/fulltext/u2/a800276.pdf | url-status=live | archive-date=September 26, 2020 | publisher=USAF School of Aviation Medicine, Randolph Field, Texas}}</ref> and later expanded by [[Thomas M. Cover|Thomas Cover]].<ref name=":1" /> It is used for [[statistical classification|classification]] and [[regression analysis|regression]].
:* In ''k-NN classification'', the output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its ''k'' nearest neighbors (''k'' is a positive [[integer]], typically small). If ''k'' = 1, then the object is simply assigned to the class of that single nearest neighbor. A minimal code sketch of this voting rule is given below.
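The following is a minimal sketch of the plurality vote described above, not a reference implementation; the function names (<code>knn_classify</code>, <code>euclidean</code>) and the example data are illustrative assumptions, and the brute-force distance computation is chosen for clarity rather than efficiency.

<syntaxhighlight lang="python">
# Sketch of k-NN classification by plurality vote over the k nearest neighbors.
# Assumes a small in-memory training set; names and data are illustrative.
from collections import Counter
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(query, training_points, training_labels, k=3):
    """Assign `query` the class most common among its k nearest neighbors."""
    # Sort training examples by distance to the query and keep the k closest.
    neighbors = sorted(
        zip(training_points, training_labels),
        key=lambda pair: euclidean(query, pair[0]),
    )[:k]
    # Plurality vote over the neighbors' labels; with k = 1 this reduces to
    # the class of the single nearest neighbor.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Example: two classes in a 2-D feature space.
points = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (5.2, 4.9)]
labels = ["A", "A", "B", "B"]
print(knn_classify((1.1, 0.9), points, labels, k=3))  # -> "A"
</syntaxhighlight>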