Computational learning theory

 
==Overview==
Theoretical results in machine learning often focus on a type of inductive learning known as [[supervised learning]]. In supervised learning, an algorithm is provided with [[Labeled data|labeled]] samples. For instance, the samples might be descriptions of mushrooms, with labels indicating whether or not they are edible. The algorithm uses these labeled samples to produce a classifier, which assigns labels to new samples, including samples it has not previously encountered. The goal of the supervised learning algorithm is to optimize some measure of performance, such as minimizing the number of errors made on new samples.
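
A minimal sketch of this setting is given below. The "mushroom" features, their numeric encoding, and the use of the scikit-learn library are assumptions made purely for illustration and are not part of the theory described here.

<syntaxhighlight lang="python">
# Illustration of supervised learning as described above: a learner receives
# labeled samples, outputs a classifier, and the classifier is judged by its
# error on samples it has not seen. The "mushroom" data below are made up.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each sample is a hypothetical feature vector:
# [cap_diameter_cm, odour_code, gill_spacing_code]
X = [
    [5.0, 0, 1], [7.2, 1, 0], [3.1, 0, 1], [9.5, 1, 0],
    [4.4, 0, 0], [6.8, 1, 1], [2.9, 0, 1], [8.1, 1, 0],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # label: 1 = edible, 0 = not edible

# Hold out some samples that the learner never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The learning algorithm produces a classifier from the labeled samples.
classifier = DecisionTreeClassifier().fit(X_train, y_train)

# Performance is measured by the error on the previously unseen samples.
print("error on new samples:",
      1 - accuracy_score(y_test, classifier.predict(X_test)))
</syntaxhighlight>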
 
In addition to performance bounds, computational learning theory studies the time complexity and feasibility of learning.{{citation needed|date=October 2017}} In computational learning theory, a computation is considered feasible if it can be done in [[polynomial time]].
There are several different approaches to computational learning theory, based on different assumptions about the [[inference]] principles used to generalise from limited data. These include different definitions of [[probability]] (see [[frequency probability]], [[Bayesian probability]]) and different assumptions on how samples are generated.{{citation needed|date=October 2017}} The different approaches include:
 
* Exact learning, proposed by [[Dana Angluin]];<ref>{{cite thesis | type=Ph.D. thesis | author=Dana Angluin | title=An Application of the Theory of Computational Complexity to the Study of Inductive Inference | institution=University of California at Berkeley | year=1976 }}</ref><ref>{{cite journal | url=http://www.sciencedirect.com/science/article/pii/S0019995878906836 | author=D. Angluin | title=On the Complexity of Minimum Inference of Regular Sets | journal=Information and Control | volume=39 | number=3 | pages=337&ndash;350 | year=1978 }}</ref>
* [[Probably approximately correct learning]] (PAC learning), proposed by [[Leslie Valiant]] (a sample-complexity bound illustrating this framework is sketched after this list);<ref>{{cite journal |last1=Valiant |first1=Leslie |title=A Theory of the Learnable |journal=Communications of the ACM |date=1984 |volume=27 |issue=11 |pages=1134–1142 |doi=10.1145/1968.1972 |s2cid=12837541 |url=https://www.montefiore.ulg.ac.be/~geurts/Cours/AML/Readings/Valiant.pdf |ref=ValTotL |access-date=2022-11-24 |archive-date=2019-05-17 |archive-url=https://web.archive.org/web/20190517235548/http://www.montefiore.ulg.ac.be/~geurts/Cours/AML/Readings/Valiant.pdf |url-status=dead }}</ref>
* [[VC theory]], proposed by [[Vladimir Vapnik]] and [[Alexey Chervonenkis]];<ref>{{cite journal |last1=Vapnik |first1=V. |last2=Chervonenkis |first2=A. |title=On the uniform convergence of relative frequencies of events to their probabilities |journal=Theory of Probability and Its Applications |date=1971 |volume=16 |issue=2 |pages=264–280 |doi=10.1137/1116025 |url=https://courses.engr.illinois.edu/ece544na/fa2014/vapnik71.pdf |ref=VCdim}}</ref>
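
As a concrete illustration of the PAC framework listed above (a standard textbook bound, not drawn from the cited references): for a finite hypothesis class <math>H</math>, and assuming the target concept lies in <math>H</math>, a learner that returns any hypothesis consistent with the training samples achieves error at most <math>\varepsilon</math> with probability at least <math>1-\delta</math>, provided the number of independent samples satisfies

<math display="block">m \ge \frac{1}{\varepsilon}\left(\ln |H| + \ln \frac{1}{\delta}\right).</math>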