'''Machine learning''' ('''ML''') is a [[field of study]] in [[artificial intelligence]] concerned with the development and study of [[Computational statistics|statistical algorithms]] that can learn from [[data]] and [[generalise]] to unseen data, and thus perform [[Task (computing)|tasks]] without explicit [[Machine code|instructions]].{{Refn|The definition "without being explicitly programmed" is often attributed to [[Arthur Samuel (computer scientist)|Arthur Samuel]], who coined the term "machine learning" in 1959, but the phrase is not found verbatim in this publication, and may be a [[paraphrase]] that appeared later. Confer "Paraphrasing Arthur Samuel (1959), the question is: How can computers learn to solve problems without being explicitly programmed?" in {{Cite conference |chapter=Automated Design of Both the Topology and Sizing of Analog Electrical Circuits Using Genetic Programming |conference=Artificial Intelligence in Design '96 |last1=Koza |first1=John R. |last2=Bennett |first2=Forrest H. |last3=Andre |first3=David |last4=Keane |first4=Martin A. |title=Artificial Intelligence in Design '96 |date=1996 |publisher=Springer Netherlands |___location=Dordrecht, Netherlands |pages=151–170 |language=en |doi=10.1007/978-94-009-0279-4_9 |isbn=978-94-010-6610-5 }}}} Within machine learning, advances in the subfield of [[deep learning]] have allowed [[Neural network (machine learning)|neural networks]], a class of statistical algorithms, to surpass many previous machine learning approaches in performance.<ref name="ibm">{{Cite web |title=What is Machine Learning? |url=https://www.ibm.com/topics/machine-learning |access-date=27 June 2023 |website=IBM |date=22 September 2021 |language=en-us |archive-date=27 December 2023 |archive-url=https://web.archive.org/web/20231227153910/https://www.ibm.com/topics/machine-learning |url-status=live }}</ref>
ML finds application in many fields, including [[natural language processing]], [[computer vision]], [[speech recognition]], [[email filtering]], [[agriculture]], and [[medicine]]. The application of ML to business problems is known as [[predictive analytics]].
[[Statistics]] and [[mathematical optimisation]] (mathematical programming) methods comprise the foundations of machine learning. [[Data mining]] is a related field of study, focusing on [[exploratory data analysis]] (EDA) via [[unsupervised learning]].{{refn|Machine learning and pattern recognition "can be viewed as two facets of the same field".<ref name="bishop2006" />{{rp|vii}}}}<ref name="Friedman-1998">{{cite journal |last=Friedman |first=Jerome H. |author-link = Jerome H. Friedman|title=Data Mining and Statistics: What's the connection? |journal=Computing Science and Statistics |volume=29 |issue=1 |year=1998 |pages=3–9}}</ref>
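As a minimal illustration of the "learn from data and generalise to unseen data" pattern described above, the sketch below fits a simple regression model to a handful of training examples and then predicts on inputs it was never trained on. The library choice (scikit-learn) and the toy data are assumptions for illustration only, not something prescribed by the sources cited here.

<syntaxhighlight lang="python">
# Minimal supervised-learning sketch: fit a model to example data,
# then predict on inputs it has never seen.
from sklearn.linear_model import LinearRegression

# Training data: inputs X and the outputs y the model should learn to reproduce
X_train = [[1.0], [2.0], [3.0], [4.0]]
y_train = [2.1, 4.2, 6.1, 8.3]

model = LinearRegression()
model.fit(X_train, y_train)      # "learn from data"

X_unseen = [[5.0], [6.0]]
print(model.predict(X_unseen))   # "generalise to unseen data"
</syntaxhighlight>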
=== Belief functions ===
{{Main|Dempster–Shafer theory}}
The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as [[probability]], [[Possibility theory|possibility]] and [[Imprecise probability|imprecise probability theories]]. These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as a [[Probability mass function|pmf]]-based Bayesian approach would combine probabilities.<ref>{{Cite journal |last1=Verbert |first1=K. |last2=Babuška |first2=R. |last3=De Schutter |first3=B. |date=2017-04-01 |title=Bayesian and Dempster–Shafer reasoning for knowledge-based fault diagnosis–A comparative study |url=https://www.sciencedirect.com/science/article/abs/pii/S0952197617300118 |journal=Engineering Applications of Artificial Intelligence |volume=60 |pages=136–150 |doi=10.1016/j.engappai.2017.01.011 |issn=0952-1976}}</ref> However, belief functions have many caveats compared to Bayesian approaches when it comes to incorporating ignorance and [[uncertainty quantification]]. Belief function approaches implemented within the machine learning ___domain typically fuse various [[ensemble methods]] to better handle the learner's [[decision boundary]], low sample sizes, and ambiguous classes that standard machine learning approaches tend to have difficulty resolving.<ref name="YoosefzadehNajafabadi-2021">{{cite journal |last1=Yoosefzadeh-Najafabadi |first1=Mohsen |last2=Hugh |first2=Earl |last3=Tulpan |first3=Dan |last4=Sulik |first4=John |last5=Eskandari |first5=Milad |year=2021 |title=Application of Machine Learning Algorithms in Plant Breeding: Predicting Yield From Hyperspectral Reflectance in Soybean? |journal=Front. Plant Sci. |volume=11 |pages=624273 |bibcode=2021FrPS...1124273Y |doi=10.3389/fpls.2020.624273 |pmc=7835636 |pmid=33510761 |doi-access=free}}</ref><ref name="Kohavi" /> However, the computational complexity of these algorithms depends on the number of propositions (classes) and can lead to much higher computation times than other machine learning approaches.
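The sketch below illustrates Dempster's rule of combination mentioned above: masses on non-conflicting intersections are multiplied and then normalised by the non-conflicting total. The two-class fault-diagnosis frame and the specific mass values are illustrative assumptions, not taken from the cited studies.

<syntaxhighlight lang="python">
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to masses)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to contradictory (empty) intersections
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("Total conflict: the sources cannot be combined")
    # Normalise by the non-conflicting mass 1 - K
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Hypothetical example: two sensors assign belief mass over the frame {faulty, ok};
# mass on the whole frame expresses ignorance rather than support for either class.
frame = frozenset({"faulty", "ok"})
m_sensor1 = {frozenset({"faulty"}): 0.7, frame: 0.3}
m_sensor2 = {frozenset({"faulty"}): 0.6, frozenset({"ok"}): 0.1, frame: 0.3}

print(dempster_combine(m_sensor1, m_sensor2))
# {'faulty'} receives roughly 0.87 of the combined mass after normalisation
</syntaxhighlight>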
=== Rule-based models ===