==Overview==
Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning, in which an algorithm is given labelled examples and must induce a classifier that generalises to unseen examples.
In addition to performance bounds, computational learning theory studies the time complexity and feasibility of learning.{{citation needed|date=October 2017}} In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results:
* Positive results{{spaced ndash}}Showing that a certain class of functions is learnable in polynomial time.
* Negative results{{spaced ndash}}Showing that certain classes cannot be learned in polynomial time.<ref>{{Cite book |last1=Kearns |first1=Michael |title=An Introduction to Computational Learning Theory |last2=Vazirani |first2=Umesh |date=August 15, 1994 |publisher=MIT Press |isbn=978-0262111935}}</ref>
Negative results often rely on commonly believed, but as yet unproven, assumptions,{{citation needed|date=October 2017}} such as the intractability of [[NP-complete]] problems or the existence of [[one-way function]]s.
There are several different approaches to computational learning theory based on making different assumptions about the [[inference]] principles used to generalise from limited data. This includes different definitions of [[probability]] (see [[frequency probability]], [[Bayesian probability]]) and different assumptions on the generation of samples.{{citation needed|date=October 2017}} The different approaches include:
* Exact learning, proposed by [[Dana Angluin]];<ref>{{cite thesis | type=Ph.D. thesis | author=Dana Angluin | title=An Application of the Theory of Computational Complexity to the Study of Inductive Inference | institution=University of California at Berkeley | year=1976 }}</ref><ref>{{cite journal | url=http://www.sciencedirect.com/science/article/pii/S0019995878906836 | author=D. Angluin | title=On the Complexity of Minimum Inference of Regular Sets | journal=Information and Control | volume=39 | number=3 | pages=337–350 | year=1978 }}</ref>
* [[Probably approximately correct learning]] (PAC learning), proposed by [[Leslie Valiant]];<ref>{{cite journal |last1=Valiant |first1=Leslie |title=A Theory of the Learnable |journal=Communications of the ACM |date=1984 |volume=27 |issue=11 |pages=1134–1142 |doi=10.1145/1968.1972 |s2cid=12837541 |url=https://www.montefiore.ulg.ac.be/~geurts/Cours/AML/Readings/Valiant.pdf |ref=ValTotL |access-date=2022-11-24 |archive-date=2019-05-17 |archive-url=https://web.archive.org/web/20190517235548/http://www.montefiore.ulg.ac.be/~geurts/Cours/AML/Readings/Valiant.pdf |url-status=dead }}</ref>
* [[VC theory]], proposed by [[Vladimir Vapnik]] and [[Alexey Chervonenkis]];<ref>{{cite journal |last1=Vapnik |first1=V. |last2=Chervonenkis |first2=A. |title=On the uniform convergence of relative frequencies of events to their probabilities |journal=Theory of Probability and Its Applications |date=1971 |volume=16 |issue=2 |pages=264–280 |doi=10.1137/1116025 |url=https://courses.engr.illinois.edu/ece544na/fa2014/vapnik71.pdf |ref=VCdim}}</ref>
* [[Solomonoff's theory of inductive inference|Inductive inference]] as developed by [[Ray Solomonoff]];<ref>{{cite journal |last1=Solomonoff |first1=Ray |title=A Formal Theory of Inductive Inference Part 1 |journal=Information and Control |date=March 1964 |volume=7 |issue=1 |pages=1–22 |doi=10.1016/S0019-9958(64)90223-2}}</ref>
* [[Algorithmic learning theory]], from the work of [[E. Mark Gold]];<ref>{{Cite journal | last1 = Gold | first1 = E. Mark | year = 1967 | title = Language identification in the limit | journal = Information and Control | volume = 10 | issue = 5 | pages = 447–474 | doi = 10.1016/S0019-9958(67)91165-5 | url=http://web.mit.edu/~6.863/www/spring2009/readings/gold67limit.pdf | doi-access = free }}</ref>
* [[Online machine learning]], from the work of Nick Littlestone.{{citation needed|date=October 2017}}
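
Several of these frameworks, most notably PAC learning, give quantitative sample-complexity guarantees. As an illustrative, textbook-style sketch (the symbols <math>H</math>, <math>m</math>, <math>\varepsilon</math> and <math>\delta</math> are introduced here only for this example), consider the realizable PAC setting with a finite hypothesis class <math>H</math>: a learner that outputs any hypothesis consistent with

:<math>m \ge \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)</math>

independently drawn labelled examples will, with probability at least <math>1-\delta</math>, have true error at most <math>\varepsilon</math>. The exact constants differ between presentations, but the polynomial dependence on <math>1/\varepsilon</math>, <math>1/\delta</math> and <math>\ln|H|</math> is what the positive results mentioned above make precise.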
==Further reading==
A description of some of these publications is given at [[List of important publications in machine learning|important publications in machine learning]].
===Surveys===
* Angluin, D. 1992. Computational learning theory: Survey and selected bibliography. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing (May 1992), pages 351–369. http://portal.acm.org/citation.cfm?id=129712.129746
===Negative results===
* M. Kearns and [[Leslie Valiant]]. 1989. Cryptographic limitations on learning boolean formulae and finite automata. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing, pages 433–444, New York. ACM. http://citeseer.ist.psu.edu/kearns89cryptographic.html{{dl|date=August 2024}}
===Error tolerance===