Machine learning: Difference between revisions

Line 23:
Although the earliest machine learning model was introduced in the 1950s, when [[Arthur Samuel (computer scientist)|Arthur Samuel]] invented a [[Computer program|program]] that calculated each side's chance of winning at checkers, the history of machine learning is rooted in decades of effort to study human cognitive processes.<ref name="WhatIs">{{Cite web |title=History and Evolution of Machine Learning: A Timeline |url=https://www.techtarget.com/whatis/A-Timeline-of-Machine-Learning-History |access-date=8 December 2023 |website=WhatIs |language=en |archive-date=8 December 2023 |archive-url=https://web.archive.org/web/20231208220935/https://www.techtarget.com/whatis/A-Timeline-of-Machine-Learning-History |url-status=live }}</ref> In 1949, [[Canadians|Canadian]] psychologist [[Donald O. Hebb|Donald Hebb]] published the book ''[[Organization of Behavior|The Organization of Behavior]]'', in which he introduced a [[Hebbian theory|theoretical neural structure]] formed by certain interactions among [[nerve cells]].<ref>{{Cite journal |last=Milner |first=Peter M. |date=1993 |title=The Mind and Donald O. Hebb |url=https://www.jstor.org/stable/24941344 |journal=Scientific American |volume=268 |issue=1 |pages=124–129 |doi=10.1038/scientificamerican0193-124 |jstor=24941344 |pmid=8418480 |bibcode=1993SciAm.268a.124M |issn=0036-8733 |access-date=9 December 2023 |archive-date=20 December 2023 |archive-url=https://web.archive.org/web/20231220163326/https://www.jstor.org/stable/24941344 |url-status=live |url-access=subscription }}</ref> Hebb's model of interacting [[neuron]]s laid the groundwork for how machine learning algorithms operate on nodes, or [[artificial neuron]]s, which computers use to communicate data.<ref name="WhatIs" /> Other researchers who studied human [[cognitive systems engineering|cognitive systems]] also contributed to modern machine learning, including logician [[Walter Pitts]] and [[Warren Sturgis McCulloch|Warren McCulloch]], who proposed early mathematical models of neural networks to devise [[algorithm]]s that mirror human thought processes.<ref name="WhatIs" />
 
By the early 1960s, an experimental "learning machine" with [[punched tape]] memory, called Cybertron, had been developed by [[Raytheon Company]] to analyse [[sonar]] signals, [[Electrocardiography|electrocardiograms]], and speech patterns using rudimentary [[reinforcement learning]]. It was repetitively "trained" by a human operator/teacher to recognise patterns and was equipped with a "[[goof]]" button to cause it to reevaluate incorrect decisions.<ref>"Science: The Goof Button", [[Time (magazine)|Time]], 18 August 1961.</ref> A representative book on machine learning research during the 1960s was Nilsson's ''Learning Machines'', dealing mostly with machine learning for pattern classification.<ref>Nilsson N. Learning Machines, McGraw Hill, 1965.</ref> Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973.<ref>Duda, R., Hart P. Pattern Recognition and Scene Analysis, Wiley Interscience, 1973</ref> In 1981, a report described the use of teaching strategies so that an [[artificial neural network]] learned to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.<ref>S. Bozinovski "Teaching space: A representation concept for adaptive pattern classification" COINS Technical Report No. 81-28, Computer and Information Science Department, University of Massachusetts at Amherst, MA, 1981. https://web.cs.umass.edu/publication/docs/1981/UM-CS-1981-028.pdf {{Webarchive|url=https://web.archive.org/web/20210225070218/https://web.cs.umass.edu/publication/docs/1981/UM-CS-1981-028.pdf |date=25 February 2021 }}</ref>
 
Line 137:
The self-learning algorithm updates a memory matrix ''W'' = ||''w''(''a'',''s'')|| such that in each iteration it executes the following machine learning routine (a minimal illustrative sketch is given after the list):
# in situation ''s'' perform action ''a''
# receive a consequence situation ''s''{{'}}
# compute emotion of being in the consequence situation ''v(s')''
# update crossbar memory ''w'(a,s) = w(a,s) + v(s')''
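The sketch below, in Python, follows the four steps above under assumed toy dynamics; the environment transition <code>step</code> and the emotion function <code>emotion</code> are illustrative placeholders, not part of the source description.

<syntaxhighlight lang="python">
# Minimal sketch of the crossbar self-learning routine described above.
# The transition function step() and the emotion function emotion() are
# illustrative assumptions; only the update rule follows the listed steps.
import numpy as np

n_actions, n_situations = 4, 5
W = np.zeros((n_actions, n_situations))   # crossbar memory w(a, s)

def emotion(s_next):
    # v(s'): assumed "emotion" of being in situation s'
    # (+1 for a hypothetical desirable situation, -1 otherwise)
    return 1.0 if s_next == n_situations - 1 else -1.0

def step(s, a):
    # hypothetical consequence situation s' of performing action a in s
    return (s + a) % n_situations

def self_learning_iteration(s):
    a = int(np.argmax(W[:, s]))   # 1. in situation s perform action a
    s_next = step(s, a)           # 2. receive consequence situation s'
    v = emotion(s_next)           # 3. compute emotion v(s')
    W[a, s] += v                  # 4. update crossbar memory w'(a,s) = w(a,s) + v(s')
    return s_next

s = 0
for _ in range(10):
    s = self_learning_iteration(s)
</syntaxhighlight>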
Line 253:
=== Genetic algorithms ===
{{Main|Genetic algorithm}}
A genetic algorithm (GA) is a [[search algorithm]] and [[heuristic (computer science)|heuristic]] technique that mimics the process of [[natural selection]], using methods such as [[Mutation (genetic algorithm)|mutation]] and [[Crossover (genetic algorithm)|crossover]] to generate new [[Chromosome (genetic algorithm)|genotype]]s in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.<ref>{{cite journal |last1=Goldberg |first1=David E. |first2=John H. |last2=Holland |title=Genetic algorithms and machine learning |journal=[[Machine Learning (journal)|Machine Learning]] |volume=3 |issue=2 |year=1988 |pages=95–99 |doi=10.1007/bf00113892 |s2cid=35506513 |url=https://deepblue.lib.umich.edu/bitstream/2027.42/46947/1/10994_2005_Article_422926.pdf |doi-access=free |access-date=3 September 2019 |archive-date=16 May 2011 |archive-url=https://web.archive.org/web/20110516025803/http://deepblue.lib.umich.edu/bitstream/2027.42/46947/1/10994_2005_Article_422926.pdf |url-status=live }}</ref><ref>{{Cite journal |title=Machine Learning, Neural and Statistical Classification |journal=Ellis Horwood Series in Artificial Intelligence |first1=D. |last1=Michie |first2=D. J. |last2=Spiegelhalter |first3=C. C. |last3=Taylor |year=1994 |bibcode=1994mlns.book.....M }}</ref> Conversely, machine learning techniques have been used to improve the performance of genetic and [[evolutionary algorithm]]s.<ref>{{cite journal |last1=Zhang |first1=Jun |last2=Zhan |first2=Zhi-hui |last3=Lin |first3=Ying |last4=Chen |first4=Ni |last5=Gong |first5=Yue-jiao |last6=Zhong |first6=Jing-hui |last7=Chung |first7=Henry S.H. |last8=Li |first8=Yun |last9=Shi |first9=Yu-hui |title=Evolutionary Computation Meets Machine Learning: A Survey |journal=Computational Intelligence Magazine |year=2011 |volume=6 |issue=4 |pages=68–75 |doi=10.1109/mci.2011.942584|s2cid=6760276 }}</ref>
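The following Python sketch illustrates the mutation–crossover loop described above on a toy bit-string problem; the fitness function, truncation selection, and all parameters are illustrative assumptions rather than a canonical implementation.

<syntaxhighlight lang="python">
# Sketch of a genetic algorithm: crossover and mutation generate new
# genotypes, and fitter genotypes are kept as parents. The toy task
# (maximise the number of 1-bits) is an assumption for illustration.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genotype):
    return sum(genotype)

def crossover(parent_a, parent_b):
    # single-point crossover producing one child genotype
    point = random.randint(1, GENOME_LEN - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genotype):
    # flip each bit independently with a small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in genotype]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # keep the fitter half as parents (truncation selection)
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))   # approaches GENOME_LEN as the population converges
</syntaxhighlight>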
 
=== Belief functions ===