{{Short description|Theory of machine learning}}
{{see also|Statistical learning theory}}
{{more citations needed|date=November 2018}}
In [[computer science]], '''computational learning theory''' (or just '''learning theory''') is a subfield of [[artificial intelligence]] devoted to studying the design and analysis of [[machine learning]] algorithms.<ref name="ACL">{{Cite web | url=http://www.learningtheory.org/ | title=ACL - Association for Computational Learning}}</ref>
==Overview==
Theoretical results in machine learning often focus on a type of inductive learning known as [[supervised learning]]. In supervised learning, an algorithm is provided with [[Labeled data|labeled]] samples. For instance, the samples might be descriptions of mushrooms, with labels indicating whether they are edible or not. The algorithm uses these labeled samples to create a classifier. This classifier assigns labels to new samples, including those it has not previously encountered. The goal of the supervised learning algorithm is to optimize performance metrics, such as minimizing errors on new samples.
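The following is a minimal sketch of this setting; the feature encoding, the toy data, and the choice of a decision-tree learner are illustrative assumptions (any [[scikit-learn]]-style classifier would serve equally well):

<syntaxhighlight lang="python">
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoded mushroom samples:
# [cap diameter in cm, has ring (0/1), pungent odor (0/1)]
samples = [[5.0, 1, 0], [9.5, 0, 1], [3.5, 1, 0], [8.0, 0, 1]]
labels = ["edible", "poisonous", "edible", "poisonous"]  # the supervision

# Fit a classifier to the labeled samples ...
classifier = DecisionTreeClassifier(random_state=0).fit(samples, labels)

# ... then use it to label a sample it has not previously encountered.
print(classifier.predict([[4.2, 1, 0]]))  # e.g. ['edible']
</syntaxhighlight>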
In addition to performance bounds, computational learning theory studies the time complexity and feasibility of learning.{{citation needed|date=October 2017}} In computational learning theory, a computation is considered feasible if it can be done in [[polynomial time]]. There are two kinds of time complexity results:
* Positive results{{spaced ndash}}Showing that a certain class of functions is learnable in polynomial time (an illustrative sample bound is given below).
* Negative results{{spaced ndash}}Showing that certain classes cannot be learned in polynomial time.<ref>{{Cite book |last1=Kearns |first1=Michael |title=An Introduction to Computational Learning Theory |last2=Vazirani |first2=Umesh |date=August 15, 1994 |publisher=MIT Press |isbn=978-0262111935}}</ref>
Negative results often rely on commonly believed, but as yet unproven, assumptions,{{citation needed|date=October 2017}} such as:
* Computational complexity – [[P versus NP problem|P ≠ NP (the P versus NP problem)]];
* [[cryptography|Cryptographic]] – [[One-way function]]s exist.
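As an illustration of the form a positive result can take, consider the standard sample-complexity bound for a finite hypothesis class <math>H</math> in the PAC framework described below: if the target concept belongs to <math>H</math> and the learner outputs any hypothesis in <math>H</math> that is consistent with

<math display="block">m \ge \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)</math>

labeled samples, then with probability at least <math>1 - \delta</math> the output hypothesis has error at most <math>\varepsilon</math>. Since this bound is only logarithmic in <math>|H|</math>, such a class is learnable in polynomial time whenever a consistent hypothesis can be found in polynomial time.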
There are several different approaches to computational learning theory, which are often mathematically incompatible. The incompatibility arises from the use of different [[inference]] principles (principles that specify how to generalize from limited data) and from differing definitions of [[probability]] (see [[frequency probability]], [[Bayesian probability]]). The different approaches include:
* Exact learning, proposed by [[Dana Angluin]];<ref>{{cite thesis | type=Ph.D. thesis | author=Dana Angluin | title=An Application of the Theory of Computational Complexity to the Study of Inductive Inference | institution=University of California at Berkeley | year=1976 }}</ref><ref>{{cite journal | url=http://www.sciencedirect.com/science/article/pii/S0019995878906836 | author=D. Angluin | title=On the Complexity of Minimum Inference of Regular Sets | journal=Information and Control | volume=39 | number=3 | pages=337–350 | year=1978 }}</ref>
* [[Probably approximately correct learning]] (PAC learning), proposed by [[Leslie Valiant]];<ref>{{cite journal |last1=Valiant |first1=Leslie |title=A Theory of the Learnable |journal=Communications of the ACM |date=1984 |volume=27 |issue=11 |pages=1134–1142 |doi=10.1145/1968.1972 |s2cid=12837541 |url=https://www.montefiore.ulg.ac.be/~geurts/Cours/AML/Readings/Valiant.pdf |ref=ValTotL |access-date=2022-11-24 |archive-date=2019-05-17 |archive-url=https://web.archive.org/web/20190517235548/http://www.montefiore.ulg.ac.be/~geurts/Cours/AML/Readings/Valiant.pdf |url-status=dead }}</ref>
* [[VC theory]], proposed by [[Vladimir Vapnik]] and [[Alexey Chervonenkis]];<ref>{{cite journal |last1=Vapnik |first1=V. |last2=Chervonenkis |first2=A. |title=On the uniform convergence of relative frequencies of events to their probabilities |journal=Theory of Probability and Its Applications |date=1971 |volume=16 |issue=2 |pages=264–280 |doi=10.1137/1116025 |url=https://courses.engr.illinois.edu/ece544na/fa2014/vapnik71.pdf |ref=VCdim}}</ref>
* [[Solomonoff's theory of inductive inference|Inductive inference]] as developed by [[Ray Solomonoff]];<ref>{{cite journal |last1=Solomonoff |first1=Ray |title=A Formal Theory of Inductive Inference Part 1 |journal=Information and Control |date=March 1964 |volume=7 |issue=1 |pages=1–22 |doi=10.1016/S0019-9958(64)90223-2|doi-access=free }}</ref><ref>{{cite journal |last1=Solomonoff |first1=Ray |title=A Formal Theory of Inductive Inference Part 2 |journal=Information and Control |date=1964 |volume=7 |issue=2 |pages=224–254 |doi=10.1016/S0019-9958(64)90131-7}}</ref>
* [[Algorithmic learning theory]], from the work of [[E. Mark Gold]];<ref>{{Cite journal | last1 = Gold | first1 = E. Mark | year = 1967 | title = Language identification in the limit | journal = Information and Control | volume = 10 | issue = 5 | pages = 447–474 | doi = 10.1016/S0019-9958(67)91165-5 | url=http://web.mit.edu/~6.863/www/spring2009/readings/gold67limit.pdf | doi-access = free }}</ref>
* [[Online machine learning]], from the work of Nick Littlestone.{{citation needed|date=October 2017}}

Computational learning theory has led to practical algorithms. For example, PAC theory inspired [[boosting]], VC theory led to [[support vector machine]]s, and Bayesian inference led to [[belief networks]] (by [[Judea Pearl]]), as illustrated in the sketch below.
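The following is a minimal sketch, assuming the [[scikit-learn]] library, of both families of algorithms applied to synthetic data; the dataset and parameters are illustrative only:

<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier  # boosting, inspired by PAC theory
from sklearn.svm import SVC                      # support vector machine, from VC theory
from sklearn.model_selection import train_test_split

# Synthetic binary classification data, for illustration only.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (AdaBoostClassifier(random_state=0), SVC(kernel="rbf")):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))
</syntaxhighlight>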
==See also==
* [[Error tolerance (PAC learning)]]
* [[Grammar induction]]
* [[Information theory]]
* [[Occam learning]]
* [[Stability (learning theory)]]
==References==
{{Reflist}}
==Further reading==
A description of some of these publications is given at [[list of important publications in computer science#Machine learning|important publications in machine learning]].
===Surveys===
* Angluin, D. 1992. Computational learning theory: Survey and selected bibliography. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing (May 1992).
* D. Haussler. Probably approximately correct learning. In AAAI-90 Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, MA. American Association for Artificial Intelligence, 1990.
===Feature selection===
* A. Dhagat and L. Hellerstein. PAC learning with irrelevant attributes. In Proceedings of the IEEE Symposium on Foundations of Computer Science, 1994.
===Optimal O notation learning===
* Oded Goldreich, Dana Ron. On universal learning algorithms.
===Negative results===
* M. Kearns and L. Valiant. 1989. Cryptographic limitations on learning boolean formulae and finite automata. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing, New York. ACM.
===Error tolerance===
* Michael Kearns and Ming Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4), August 1993.
* Kearns, M. (1993). Efficient noise-tolerant learning from statistical queries. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing.
===Equivalence===
* D. Haussler, M. Kearns, N. Littlestone and [[Manfred K. Warmuth|M. Warmuth]], Equivalence of models for polynomial learnability, Proc. 1st ACM Workshop on Computational Learning Theory, (1988) 42–55.
* {{Cite journal | last1 = Pitt | first1 = L. | last2 = Warmuth | first2 = M. K. | title = Prediction-preserving reducibility | journal = Journal of Computer and System Sciences | volume = 41 | issue = 3 | year = 1990 }}
==External links==
* [http://www.learningtheory.org Computational learning theory web site]
* [http://research.microsoft.com/adapt/MSBNx/msbnx/Basics_of_Bayesian_Inference.htm Basics of Bayesian inference]
{{Differentiable computing}}
[[Category:Machine learning]]
[[Category:Computational fields of study]]