{{short description|Paradigm of rule-based machine learning methods}}
[[File:Function approximation with LCS rules.jpg|thumb|2D visualization of LCS rules learning to approximate a 3D function. Each blue ellipse represents an individual rule covering part of the solution space. (Adapted from images taken from XCSF<ref name=":9">{{Cite journal|last1=Stalph|first1=Patrick O.|last2=Butz|first2=Martin V.|date=2010-02-01|title=JavaXCSF: The XCSF Learning Classifier System in Java|journal=SIGEVOlution|volume=4|issue=3|pages=16–19|doi=10.1145/1731888.1731890|s2cid=16861908|issn=1931-8499}}</ref> with permission from Martin Butz)]]
'''Learning classifier systems''', or '''LCS''', are a paradigm of [[rule-based machine learning]] methods that combine a discovery component (typically a [[genetic algorithm]]) with a learning component (performing either [[supervised learning]], [[reinforcement learning]], or [[unsupervised learning]]).<ref name=":1">{{Cite journal|last1=Urbanowicz|first1=Ryan J.|last2=Moore|first2=Jason H.|date=2009-09-22|title=Learning Classifier Systems: A Complete Introduction, Review, and Roadmap|journal=Journal of Artificial Evolution and Applications|language=en|volume=2009|pages=1–25|doi=10.1155/2009/736398|issn=1687-6229|doi-access=free}}</ref> Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in a [[piecewise]] manner in order to make predictions (e.g. [[behavior modeling]],<ref>{{Cite journal|last=Dorigo|first=Marco|title=Alecsys and the AutonoMouse: Learning to control a real robot by distributed classifier systems|journal=Machine Learning|language=en|volume=19|issue=3|pages=209–240|doi=10.1007/BF00996270|issn=0885-6125|year=1995|doi-access=free}}</ref> [[Statistical classification|classification]],<ref>{{Cite journal|last1=Bernadó-Mansilla|first1=Ester|last2=Garrell-Guiu|first2=Josep M.|date=2003-09-01|title=Accuracy-Based Learning Classifier Systems: Models, Analysis and Applications to Classification Tasks|journal=Evolutionary Computation|volume=11|issue=3|pages=209–238|doi=10.1162/106365603322365289|pmid=14558911|s2cid=9086149|issn=1063-6560}}</ref><ref name=":0">{{Cite journal|last1=Urbanowicz|first1=Ryan J.|last2=Moore|first2=Jason H.|date=2015-04-03|title=ExSTraCS 2.0: description and evaluation of a scalable learning classifier system|journal=Evolutionary Intelligence|language=en|volume=8|issue=2–3|pages=89–116|doi=10.1007/s12065-015-0128-8|issn=1864-5909|pmc=4583133|pmid=26417393}}</ref> [[data mining]],<ref name=":0" /><ref>{{Cite book|title=Advances in Learning Classifier 
Systems|url=https://archive.org/details/advanceslearning00lanz|url-access=limited|last1=Bernadó|first1=Ester|last2=Llorà|first2=Xavier|last3=Garrell|first3=Josep M.|date=2001-07-07|publisher=Springer Berlin Heidelberg|isbn=9783540437932|editor-last=Lanzi|editor-first=Pier Luca|series=Lecture Notes in Computer Science|pages=[https://archive.org/details/advanceslearning00lanz/page/n120 115]–132|language=en|doi=10.1007/3-540-48104-4_8|editor-last2=Stolzmann|editor-first2=Wolfgang|editor-last3=Wilson|editor-first3=Stewart W.}}</ref><ref>{{Cite book|title=Learning Classifier Systems|url=https://archive.org/details/learningclassifi00kova_690|url-access=limited|last1=Bacardit|first1=Jaume|last2=Butz|first2=Martin V.|date=2007-01-01|publisher=Springer Berlin Heidelberg|isbn=9783540712305|editor-last=Kovacs|editor-first=Tim|series=Lecture Notes in Computer Science|pages=[https://archive.org/details/learningclassifi00kova_690/page/n291 282]–290|language=en|doi=10.1007/978-3-540-71231-2_19|editor-last2=Llorà|editor-first2=Xavier|editor-last3=Takadama|editor-first3=Keiki|editor-last4=Lanzi|editor-first4=Pier Luca|editor-last5=Stolzmann|editor-first5=Wolfgang|editor-last6=Wilson|editor-first6=Stewart W.|citeseerx = 10.1.1.553.4679}}</ref> [[Regression analysis|regression]],<ref>{{Cite book|last1=Urbanowicz|first1=Ryan|last2=Ramanand|first2=Niranjan|last3=Moore|first3=Jason
The founding concepts behind learning classifier systems came from attempts to model [[complex adaptive system]]s, using rule-based agents to form an artificial cognitive system (i.e. [[artificial intelligence]]).
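As a minimal illustration (a sketch, not taken from any particular LCS implementation), a Michigan-style classifier rule over binary inputs can be written with the customary ternary alphabet {0, 1, #}, where '#' is a "don't care" wildcard; the function and rule names below are hypothetical:

```python
# Hypothetical sketch of a single LCS rule over binary input strings.
# '#' is the customary "don't care" symbol: the rule ignores that attribute,
# so each rule covers only part of the input space (piecewise knowledge).

def matches(condition, state):
    """Return True if every specified bit of the condition equals the state."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

# Example rule: "if bit 0 is 1 and bit 2 is 0, predict class 1."
rule = {'condition': '1#0#', 'action': 1}

print(matches(rule['condition'], '1101'))  # True: bits 1 and 3 are ignored
print(matches(rule['condition'], '0100'))  # False: bit 0 must be 1
```

A full LCS maintains a population of many such rules; the discovery component proposes new conditions while the learning component updates each rule's fitness from feedback.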
=== In the wake of XCS ===
XCS inspired the development of a whole new generation of LCS algorithms and applications. In 1995, Congdon was the first to apply LCS to real-world [[Epidemiology|epidemiological]] investigations of disease,<ref name=":8" /> followed closely by Holmes, who developed the '''BOOLE++''',<ref>{{Cite journal|last=Holmes|first=John H.|date=1996-01-01|title=A Genetics-Based Machine Learning Approach to Knowledge Discovery in Clinical Data|journal=Proceedings of the AMIA Annual Fall Symposium|pages=883|issn=1091-8280|pmc=2233061}}</ref> '''EpiCS''',<ref>Holmes, John H. "[https://web.archive.org/web/20180820234915/https://pdfs.semanticscholar.org/71e4/eb6c630dee4b762e74b2970f6dc638a351ab.pdf Discovering Risk of Disease with a Learning Classifier System]." In ''ICGA'', pp. 426-433. 1997.</ref> and later '''EpiXCS'''<ref>Holmes, John H., and Jennifer A. Sager. "[https://link.springer.com/10.1007%2F11527770_60 Rule discovery in epidemiologic surveillance data using EpiXCS: an evolutionary computation approach]." In ''Conference on Artificial Intelligence in Medicine in Europe'', pp. 444-452. Springer Berlin Heidelberg, 2005.</ref> systems for [[Epidemiology|epidemiological]] classification. These early works inspired later interest in applying LCS algorithms to complex and large-scale [[data mining]] tasks epitomized by [[bioinformatics]] applications. In 1998, Stolzmann introduced '''anticipatory classifier systems (ACS)''', which include rules in the form of 'condition-action-effect' rather than the classic 'condition-action' representation.<ref name=":7" /> ACS was designed to predict the perceptual consequences of an action in all possible situations in an environment. In other words, the system evolves a model that specifies not only what to do in a given situation, but also predicts what will happen after a specific action is executed.
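The 'condition-action-effect' structure can be sketched as follows (a simplified illustration, not the actual ACS update machinery; the rule fields and the `anticipate` helper are hypothetical):

```python
# Hypothetical sketch of an anticipatory (ACS-style) rule: besides a condition
# and an action, each rule carries an expected effect that predicts how the
# perceived state will change once the action is executed.

def anticipate(rule, state):
    """Predict the next state: effect symbols overwrite changed attributes;
    '#' in the effect means the attribute is expected to stay unchanged."""
    return ''.join(s if e == '#' else e
                   for e, s in zip(rule['effect'], state))

# Example rule: "when bit 0 is 0, this action sets it to 1; the rest persist."
rule = {'condition': '0##', 'action': 'toggle', 'effect': '1##'}

print(anticipate(rule, '011'))  # predicted next state: '111'
```

In a full ACS, rules whose predicted effect disagrees with the actually observed next state are penalized or specialized, so the population converges toward an internal model of the environment's dynamics.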
This family of LCS algorithms is best suited to multi-step problems, planning, speeding up learning, or disambiguating perceptual aliasing (i.e. where the same observation is obtained in distinct states but requires different actions). Butz later pursued this anticipatory family of LCS, developing a number of improvements to the original method.<ref>Butz, Martin V. "[https://web.archive.org/web/20180820234943/https://pdfs.semanticscholar.org/3572/7a56fcce7a73ccc43e5bfa19389780e6d436.pdf Biasing exploration in an anticipatory learning classifier system]." In ''International Workshop on Learning Classifier Systems'', pp. 3-22. Springer Berlin Heidelberg, 2001.</ref> In 2002, Wilson introduced '''XCSF''', adding a computed action in order to perform function approximation.<ref>{{Cite journal|last=Wilson|first=Stewart W.|title=Classifiers that approximate functions|journal=Natural Computing|language=en|volume=1|issue=2–3|pages=211–234|doi=10.1023/A:1016535925043|issn=1567-7818|year=2002|s2cid=23032802}}</ref> In 2003, Bernado-Mansilla introduced a '''sUpervised Classifier System (UCS)''', which specialized the XCS algorithm to the task of [[supervised learning]], single-step problems, and forming a best action set. UCS replaced the [[reinforcement learning]] strategy with a simple, accuracy-based rule fitness, and also dropped the explore/exploit learning phases characteristic of many reinforcement learners. Bull introduced a simple accuracy-based LCS '''(YCS)'''<ref>Bull, Larry. "[https://web.archive.org/web/20180820234941/https://pdfs.semanticscholar.org/120c/8f5057995c36ee60ec320c2263b20af05444.pdf A simple accuracy-based learning classifier system]." ''Learning Classifier Systems Group Technical Report UWELCSG03-005, University of the West of England, Bristol, UK'' (2003).</ref> and a simple strength-based LCS '''Minimal Classifier System (MCS)'''<ref>Bull, Larry.
"[https://link.springer.com/chapter/10.1007/978-3-540-30217-9_104 A simple payoff-based learning classifier system]." In ''International Conference on Parallel Problem Solving from Nature'', pp. 1032-1041. Springer Berlin Heidelberg, 2004.</ref> in order to develop a better theoretical understanding of the LCS framework. Bacardit introduced '''GAssist'''<ref>Peñarroya, Jaume Bacardit. "Pittsburgh genetic-based machine learning in the data mining era: representations, generalization, and run-time." PhD diss., Universitat Ramon Llull, 2004.</ref> and '''BioHEL''',<ref>{{Cite journal|last1=Bacardit|first1=Jaume|last2=Burke|first2=Edmund K.|last3=Krasnogor|first3=Natalio|date=2008-12-12|title=Improving the scalability of rule-based evolutionary learning|journal=Memetic Computing|language=en|volume=1|issue=1|pages=55–67|doi=10.1007/s12293-008-0005-4|s2cid=775199|issn=1865-9284}}</ref> Pittsburgh-style LCSs designed for [[data mining]] and [[scalability]] to large datasets in [[bioinformatics]] applications. In 2008, Drugowitsch published the book "Design and Analysis of Learning Classifier Systems", which includes a theoretical examination of LCS algorithms.<ref>{{Cite book|title=Design and Analysis of Learning Classifier Systems|volume = 139|doi=10.1007/978-3-540-79866-8|series = Studies in Computational Intelligence|year = 2008|isbn = 978-3-540-79865-1|last1 = Drugowitsch|first1 = Jan}}</ref> Butz introduced the first visualization of online rule learning within a [[Graphical user interface|GUI]] for XCSF<ref name=":9" /> (see the image at the top of this page). Urbanowicz extended the UCS framework and introduced '''ExSTraCS''', explicitly designed for [[supervised learning]] in noisy problem domains (e.g. epidemiology and bioinformatics).<ref>Urbanowicz, Ryan J., Gediminas Bertasius, and Jason H. Moore. 
"[http://www.seas.upenn.edu/~gberta/uploads/3/1/4/8/31486883/urbanowicz_2014_exstracs_algorithm.pdf An extended michigan-style learning classifier system for flexible supervised learning, classification, and data mining]." In ''International Conference on Parallel Problem Solving from Nature'', pp. 211-221. Springer International Publishing, 2014.</ref> ExSTraCS integrated (1) expert knowledge to drive covering and the genetic algorithm towards important features in the data,<ref>Urbanowicz, Ryan J., Delaney Granizo-Mackenzie, and Jason H. Moore. "[https://web.archive.org/web/20180820234834/https://pdfs.semanticscholar.org/b407/8f8bb6aa9e39e84b0b20874662a6ed8b7df1.pdf Using expert knowledge to guide covering and mutation in a michigan style learning classifier system to detect epistasis and heterogeneity]." In ''International Conference on Parallel Problem Solving from Nature'', pp. 266-275. Springer Berlin Heidelberg, 2012.</ref> (2) a form of long-term memory referred to as attribute tracking,<ref>{{Cite book|last1=Urbanowicz|first1=Ryan|last2=Granizo-Mackenzie|first2=Ambrose|last3=Moore|first3=Jason
== Variants ==