{{DISPLAYTITLE:''k''-nearest neighbors algorithm}}
In [[statistics]], the '''''k''-nearest neighbors algorithm''' ('''''k''-NN''') is a [[Non-parametric statistics|non-parametric]] [[supervised learning]] method first developed by [[Evelyn Fix]] and [[Joseph Lawson Hodges Jr.|Joseph Hodges]] in 1951,<ref>{{Cite report | last1=Fix | first1=Evelyn | last2= Hodges | first2=Joseph L. | title=Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties | issue=Report Number 4, Project Number 21-49-004 | year=1951 | url=https://apps.dtic.mil/dtic/tr/fulltext/u2/a800276.pdf | archive-url=https://web.archive.org/web/20200926212807/https://apps.dtic.mil/dtic/tr/fulltext/u2/a800276.pdf | url-status=live | archive-date=September 26, 2020 | publisher=USAF School of Aviation Medicine, Randolph Field, Texas}}</ref> and later expanded by [[Thomas M. Cover|Thomas Cover]].<ref name=":1" /> It is used for [[statistical classification|classification]] and [[regression analysis|regression]]. In both cases, the input consists of the ''k'' closest training examples in a [[data set]]. The output depends on whether ''k''-NN is used for classification or regression: