'''Symbolic regression''' ('''SR''') is a type of [[regression analysis]] that searches the space of mathematical expressions to find the model that best fits a given dataset, both in terms of accuracy and simplicity.
 
No particular model is provided as a starting point for symbolic regression. Instead, initial expressions are formed by randomly combining mathematical building blocks such as [[Operation (mathematics)|mathematical operators]], [[analytic function]]s, [[Constant (mathematics)|constants]], and [[state variable]]s. Usually, a subset of these primitives is specified by the person operating the system, but that is not a requirement of the technique. The symbolic regression problem for mathematical functions has been tackled with a variety of methods, including recombining equations, most commonly using [[genetic programming]],<ref name="schmidt2009distilling"/> as well as more recent methods utilizing [[Bayesian statistics#Outline of Bayesian methods|Bayesian methods]]<ref name="bayesian"/> and [[Artificial neural network|neural networks]].<ref name="aifeynman"/> Another non-classical alternative method to SR is called Universal Functions Originator (UFO), which has a different mechanism, search-space, and building strategy.<ref name="ufo"/> Further methods such as Exact Learning attempt to transform the fitting problem into a [[Method of moments (statistics)|moments problem]] in a natural function space, usually built around generalizations of the [[Meijer G-function|Meijer-G function]].<ref name="exactlearning"/>
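For illustration, consider how such initial expressions might be formed in practice. The sketch below is hypothetical Python, not taken from any particular SR system: it randomly combines a small, user-chosen set of primitives (operators, analytic functions, constants, and a variable) into expression trees of the kind that genetic programming then recombines and mutates.

<syntaxhighlight lang="python">
import math
import random

# Illustrative primitive sets; a real SR system lets the user choose these.
BINARY_OPS = {"+": lambda a, b: a + b,
              "-": lambda a, b: a - b,
              "*": lambda a, b: a * b}
UNARY_OPS = {"sin": math.sin, "cos": math.cos}
TERMINALS = ["x", 1.0, 2.0]  # a state variable and some constants

def random_expression(depth=3):
    """Build a random expression tree by combining primitives."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    if random.random() < 0.7:
        op = random.choice(list(BINARY_OPS))
        return (op, random_expression(depth - 1), random_expression(depth - 1))
    op = random.choice(list(UNARY_OPS))
    return (op, random_expression(depth - 1))

def evaluate(expr, x):
    """Recursively evaluate an expression tree at the point x."""
    if expr == "x":
        return x
    if isinstance(expr, float):
        return expr
    if len(expr) == 3:  # binary node: (op, left, right)
        op, left, right = expr
        return BINARY_OPS[op](evaluate(left, x), evaluate(right, x))
    op, child = expr    # unary node: (op, child)
    return UNARY_OPS[op](evaluate(child, x))
</syntaxhighlight>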
 
By not requiring ''a priori'' specification of a model, symbolic regression is not affected by human bias or unknown gaps in [[___domain knowledge]]. It attempts to uncover the intrinsic relationships of the dataset by letting the patterns in the data itself reveal the appropriate models, rather than imposing a model structure that is deemed mathematically tractable from a human perspective. The [[fitness function]] that drives the evolution of the models takes into account not only [[Residual (numerical analysis)|error metrics]] (to ensure the models accurately predict the data), but also special complexity measures,<ref name="complexity"/> thus ensuring that the resulting models reveal the data's underlying structure in a way that is understandable from a human perspective. This facilitates reasoning and improves the odds of gaining insight into the data-generating system, as well as improving generalisability and extrapolation behaviour by preventing [[overfitting]]. Accuracy and simplicity may be left as two separate objectives of the regression, in which case the optimum solutions form a [[Pareto front]], or they may be combined into a single objective by means of a model selection principle such as [[minimum description length]].
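The two ways of handling the accuracy/simplicity trade-off can be sketched concretely, reusing the expression trees from the sketch above. Both the node-count complexity measure and the penalty weight <code>alpha</code> below are illustrative assumptions, not a standard choice:

<syntaxhighlight lang="python">
def mse(expr, data):
    """Mean squared error of an expression tree over (x, y) pairs."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data) / len(data)

def complexity(expr):
    """A simple complexity measure: the number of nodes in the tree."""
    if not isinstance(expr, tuple):
        return 1
    return 1 + sum(complexity(child) for child in expr[1:])

def scalarised_fitness(expr, data, alpha=0.01):
    """Single combined objective: error plus a complexity penalty."""
    return mse(expr, data) + alpha * complexity(expr)

def dominates(a, b, data):
    """True if a Pareto-dominates b on the (error, complexity) objectives."""
    ea, eb = mse(a, data), mse(b, data)
    ca, cb = complexity(a), complexity(b)
    return ea <= eb and ca <= cb and (ea < eb or ca < cb)
</syntaxhighlight>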
 
It has been proven that symbolic regression is an [[NP-hardness|NP-hard]] problem.<ref>{{cite journal |last1=Virgolin |first1=Marco |last2=Pissis |first2=Solon P. |journal=Transactions on Machine Learning Research |date=2022 |title=Symbolic Regression is NP-hard |arxiv=2207.01018 |url=https://openreview.net/forum?id=LTiaPxqe2e }}</ref> Nevertheless, if the sought-for equation is not too complex it is possible to solve the symbolic regression problem exactly by generating every possible function (built from some predefined set of operators) and evaluating them on the dataset in question.<ref>{{cite journal |last1=Bartlett |first1=Deaglan |last2=Desmond |first2=Harry |last3=Ferreira |first3=Pedro |title=Exhaustive Symbolic Regression |journal=IEEE Transactions on Evolutionary Computation |year=2023 |volume=28 |issue=4 |page=1 |doi=10.1109/TEVC.2023.3280250 |arxiv=2211.11461 |s2cid=253735380 }}</ref>
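As a toy illustration of the exhaustive strategy (not the algorithm of the cited paper), one can enumerate every expression tree up to a fixed depth over the predefined primitives from the sketches above and keep the best fit; the number of candidate trees grows combinatorially with depth, which is why this only works for simple equations:

<syntaxhighlight lang="python">
def all_expressions(depth):
    """Yield every expression tree of at most the given depth."""
    yield from TERMINALS
    if depth == 0:
        return
    subtrees = list(all_expressions(depth - 1))
    for op in BINARY_OPS:
        for left in subtrees:
            for right in subtrees:
                yield (op, left, right)
    for op in UNARY_OPS:
        for child in subtrees:
            yield (op, child)

def exhaustive_search(data, max_depth=2):
    """Return the enumerated expression with the lowest error on the data."""
    return min(all_expressions(max_depth), key=lambda e: mse(e, data))
</syntaxhighlight>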
 
== Difference from classical regression ==
 
== Benchmarking ==
 
=== SRBench ===
In 2021, [https://cavalab.org/srbench SRBench]<ref>{{cite journal |last1=La Cava |first1=William |last2=Orzechowski |first2=Patryk |last3=Burlacu |first3=Bogdan |last4=de Franca |first4=Fabricio |last5=Virgolin |first5=Marco |last6=Jin |first6=Ying |last7=Kommenda |first7=Michael |last8=Moore |first8=Jason |title=Contemporary Symbolic Regression Methods and their Relative Performance |journal=Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks |date=2021 |volume=1 |arxiv=2107.14351 |url=https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/c0c7c76d30bd3dcaefc96f40275bdc0a-Abstract-round1.html}}</ref> was proposed as a large benchmark for symbolic regression.
At its inception, SRBench featured 14 symbolic regression methods, 7 other ML methods, and 252 datasets from [https://github.com/EpistasisLab/pmlb PMLB].
The benchmark intends to be a living project: it encourages the submission of improvements, new datasets, and new methods, to keep track of the state of the art in SR.
 
=== SRBench Competition 2022 ===
In 2022, SRBench announced the competition Interpretable Symbolic Regression for Data Science, which was held at the [[Genetic and Evolutionary Computation Conference|GECCO conference]] in Boston, MA. The competition pitted nine leading symbolic regression algorithms against each other on a novel set of data problems and considered different evaluation criteria.<ref name="srbench-2022">{{cite web |url=https://cavalab.org/srbench/competition-2022/ |title=SRBench 2022}}</ref> The competition was organized in two tracks, a synthetic track and a real-world data track.<ref name="srbench-2022"/>
 
==== Synthetic Track ====
In the synthetic track, methods were compared according to five properties: re-discovery of exact expressions; feature selection; resistance to local optima; extrapolation; and sensitivity to noise. The ranking of the methods was:
# [[QLattice]]
# [https://github.com/MilesCranmer/PySR PySR (Python Symbolic Regression)]
# [https://github.com/brendenpetersen/deep-symbolic-optimization uDSR] (Deep Symbolic Optimization)
 
==== Real-world Track ====
In the real-world track, methods were trained to build interpretable predictive models for 14-day forecast counts of COVID-19 cases, hospitalizations, and deaths in New York State. These models were reviewed by a subject expert, assigned trust ratings, and evaluated for accuracy and simplicity. The ranking of the methods was:
 
# [https://github.com/brendenpetersen/deep-symbolic-optimization uDSR] (Deep Symbolic Optimization)
# [[QLattice]]
# [https://github.com/alcides/GeneticEngine/ geneticengine] (Genetic Engine)
 
== Non-standard methods ==
Most symbolic regression algorithms prevent [[combinatorial explosion]] by implementing evolutionary algorithms that iteratively improve the best-fit expression over many generations. Recently, researchers have proposed algorithms utilizing other tactics in [[Artificial intelligence|AI]].
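Reusing the helper functions sketched earlier in this article, the basic evolutionary loop can be caricatured as mutation plus selection: repeatedly perturb the current best expression and keep the child whenever it improves the combined fitness. This is a deliberately minimal sketch, not any specific published algorithm:

<syntaxhighlight lang="python">
def mutate(expr, depth=2):
    """Replace a randomly chosen subtree with a freshly generated one."""
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expression(depth)
    parts = list(expr)
    i = random.randrange(1, len(parts))  # pick one child to descend into
    parts[i] = mutate(parts[i], depth)
    return tuple(parts)

def evolve(data, generations=200):
    """Iteratively improve the best-fit expression over many generations."""
    best = random_expression()
    best_fit = scalarised_fitness(best, data)
    for _ in range(generations):
        child = mutate(best)
        fit = scalarised_fitness(child, data)
        if fit < best_fit:  # selection: keep the child only if it improves
            best, best_fit = child, fit
    return best
</syntaxhighlight>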
 
Silviu-Marian Udrescu and [[Max Tegmark]] developed the "AI Feynman" algorithm,<ref>{{Cite journal |last1=Udrescu |first1=Silviu-Marian |last2=Tegmark |first2=Max |date=2020-04-17 |title=AI Feynman: A physics-inspired method for symbolic regression |journal=Science Advances |language=en |volume=6 |issue=16 |pages=eaay2631 |doi=10.1126/sciadv.aay2631 |issn=2375-2548 |pmc=7159912 |pmid=32426452 |arxiv=1905.11481 |bibcode=2020SciA....6.2631U }}</ref><ref>{{cite arXiv |last1=Udrescu |first1=Silviu-Marian |last2=Tan |first2=Andrew |last3=Feng |first3=Jiahai |last4=Neto |first4=Orisvaldo |last5=Wu |first5=Tailin |last6=Tegmark |first6=Max |date=2020-12-16 |title=AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity |class=cs.LG |eprint=2006.10782 }}</ref> which attempts symbolic regression by training a neural network to represent the mystery function, then running tests against the neural network to attempt to break up the problem into smaller parts. For example, if <math>f(x_1, ..., x_i, x_{i+1}, ..., x_n) = g(x_1,..., x_i) + h(x_{i+1},..., x_n)</math>, tests against the neural network can recognize the separation and proceed to solve for <math>g</math> and <math>h</math> separately, with different variables as inputs. This is an example of [[Divide-and-conquer algorithm|divide and conquer]], which reduces the size of the problem to be more manageable. AI Feynman also transforms the inputs and outputs of the mystery function in order to produce a new function which can be solved with other techniques, and performs [[dimensional analysis]] to reduce the number of independent variables involved. The algorithm was able to "discover" 100 equations from ''[[The Feynman Lectures on Physics]]'', while a leading software using evolutionary algorithms, [[Eureqa]], solved only 71. AI Feynman, in contrast to classic symbolic regression methods, requires a very large dataset in order to first train the neural network, and is naturally biased towards equations that are common in elementary physics.
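The separability test can be illustrated numerically. If <math>f(x_1, x_2) = g(x_1) + h(x_2)</math>, then the mixed difference <math>f(a,b) - f(a,d) - f(c,b) + f(c,d)</math> vanishes for all probe points, since the <math>g</math> and <math>h</math> contributions cancel. The toy check below is in that spirit only; the probe points and tolerance are arbitrary assumptions, and AI Feynman's actual tests are more elaborate:

<syntaxhighlight lang="python">
import math

def looks_additively_separable(f, points, tol=1e-6):
    """Heuristic test for f(x1, x2) = g(x1) + h(x2): the mixed difference
    f(a,b) - f(a,d) - f(c,b) + f(c,d) must vanish at every probe pair."""
    for (a, b), (c, d) in zip(points, points[1:]):
        if abs(f(a, b) - f(a, d) - f(c, b) + f(c, d)) > tol:
            return False
    return True

# Example: sin(x1) + x2**2 passes the test, while x1 * x2 does not.
probes = [(0.1, 0.7), (1.3, -0.2), (-0.5, 2.0), (0.9, 0.4)]
print(looks_additively_separable(lambda a, b: math.sin(a) + b**2, probes))  # True
print(looks_additively_separable(lambda a, b: a * b, probes))               # False
</syntaxhighlight>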
 
Some researchers have pointed out that conventional symbolic regression techniques may struggle to generalize in systems with complex causal dependencies or non-explicit governing equations.<ref>{{cite journal |last1=Zenil |first1=Hector |last2=Kiani |first2=Narsis A. |last3=Zea |first3=Allan A. |last4=Tegnér |first4=Jesper |title=Causal deconvolution by algorithmic generative models |journal=Nature Machine Intelligence |volume=1 |issue=1 |year=2019 |pages=58–66 |doi=10.1038/s42256-018-0005-0 }}</ref> A more general approach, based on Algorithmic Information Theory (AIT), provides a conceptual framework for extracting generative rules from complex dynamical systems.<ref>{{cite journal | last=Zenil | first=Hector | title=Algorithmic Information Dynamics | journal=Scholarpedia | date=25 July 2020 | volume=15 | issue=7 | doi=10.4249/scholarpedia.53143 | doi-access=free | bibcode=2020SchpJ..1553143Z | hdl=10754/666314 | hdl-access=free }}</ref> This framework, called Algorithmic Information Dynamics (AID), applies perturbation analysis to quantify the algorithmic complexity of system components and reconstruct phase spaces and causal mechanisms, including for discrete systems such as cellular automata. Unlike traditional symbolic regression, AID enables the inference of generative rules without requiring explicit kinetic equations, offering insights into the causal structure and reprogrammability of complex systems.<ref>{{cite book | last1=Zenil | first1=Hector | last2=Kiani | first2=Narsis A. | last3=Tegner | first3=Jesper | title=Algorithmic Information Dynamics: A Computational Approach to Causality with Applications to Living Systems | publisher=Cambridge University Press | year=2023 | doi=10.1017/9781108596619 | isbn=978-1-108-59661-9 | url=https://doi.org/10.1017/9781108596619}}</ref>
 
== Software ==
 
=== End-user software ===
* [[QLattice]] is a quantum-inspired simulation and machine learning technology that searches through an infinite list of potential mathematical models to solve a problem.<ref>{{Cite web|url=https://docs.abzu.ai|title=Feyn is a Python module for running the QLattice|date=June 22, 2022}}</ref><ref name="srfeyn" />
* [https://github.com/hengzhe-zhang/EvolutionaryForest Evolutionary Forest] is a Genetic Programming-based automated feature construction algorithm for symbolic regression.<ref>{{Cite journal |last1=Zhang |first1=Hengzhe |last2=Zhou |first2=Aimin |last3=Zhang |first3=Hu |date=August 2022 |title=An Evolutionary Forest for Regression |journal=IEEE Transactions on Evolutionary Computation |volume=26 |issue=4 |pages=735–749 |doi=10.1109/TEVC.2021.3136667 |bibcode=2022ITEC...26..735Z |issn=1089-778X}}</ref><ref>{{Cite journal |last1=Zhang |first1=Hengzhe |last2=Zhou |first2=Aimin |last3=Chen |first3=Qi |last4=Xue |first4=Bing |last5=Zhang |first5=Mengjie |date=2023 |title=SR-Forest: A Genetic Programming based Heterogeneous Ensemble Learning Method |journal=IEEE Transactions on Evolutionary Computation |volume=28 |issue=5 |pages=1484–1498 |doi=10.1109/TEVC.2023.3243172 |issn=1089-778X}}</ref>
* [https://github.com/brendenpetersen/deep-symbolic-optimization uDSR] is a deep learning framework for symbolic optimization tasks<ref>{{Cite web|url=https://github.com/brendenpetersen/deep-symbolic-optimization|title=Deep symbolic optimization|website=[[GitHub]] |date=June 22, 2022}}</ref>
* [https://github.com/darioizzo/dcgp/ dCGP], differentiable Cartesian Genetic Programming in Python (free, open source)<ref>{{Cite web|url=https://darioizzo.github.io/dcgp/|title=Differentiable Cartesian Genetic Programming, v1.6 Documentation|date=June 10, 2022}}</ref><ref>{{Cite journal|title=Differentiable genetic programming|first1=Dario|last1=Izzo|first2=Francesco|last2=Biscani|first3=Alessio|last3=Mereta|journal=Proceedings of the European Conference on Genetic Programming|year=2016 |arxiv=1611.04766 }}</ref>
* [[HeuristicLab]], a software environment for heuristic and evolutionary algorithms, including symbolic regression (free, open source)
* [[Gene expression programming#Software|GeneXProTools]], an implementation of the [[gene expression programming]] technique for various problems including symbolic regression (commercial)
* [[Multi expression programming#MEPX|Multi Expression Programming X]], an implementation of [[Multi expression programming]] for symbolic regression and classification (free, open source)
* [[Eureqa]], evolutionary symbolic regression software and [[software library]] (commercial)
* [https://turingbotsoftware.com/ TuringBot], symbolic regression software based on simulated annealing (commercial)
* [https://github.com/MilesCranmer/PySR PySR],<ref>{{cite web |title=High-Performance Symbolic Regression in Python |website=[[GitHub]] |date=18 August 2022 |url=https://github.com/MilesCranmer/PySR}}</ref> symbolic regression environment written in [[Python (programming language)|Python]] and [[Julia (programming language)|Julia]], using regularized evolution, [[simulated annealing]], and [[gradient]]-free optimization (free, open source; a usage sketch follows this list)<ref>{{Cite web|url=https://www.quantamagazine.org/machine-scientists-distill-the-laws-of-physics-from-raw-data-20220510/|title='Machine Scientists' Distill the Laws of Physics From Raw Data|date=May 10, 2022|website=[[Quanta Magazine]]}}</ref>
* [https://github.com/marcovirgolin/GP-GOMEA GP-GOMEA], fast ([[C++]] back-end) [[genetic programming|evolutionary]] symbolic regression with [[Python (programming language)|Python]] [[scikit-learn]]-compatible interface, achieved one of the best trade-offs between accuracy and simplicity of discovered models on [https://cavalab.org/srbench/ SRBench] in 2021 (free, open source)
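A brief usage sketch of PySR's documented scikit-learn-style interface is shown below; the operator choices, iteration budget, and synthetic dataset are arbitrary, and the PySR repository documents the authoritative API:

<syntaxhighlight lang="python">
import numpy as np
from pysr import PySRRegressor

# Synthetic data whose ground truth is 2.5*cos(x0) + x1**2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2

model = PySRRegressor(
    niterations=40,               # search budget
    binary_operators=["+", "*"],  # allowed binary primitives
    unary_operators=["cos"],      # allowed unary primitives
)
model.fit(X, y)
print(model.sympy())  # best expression on the accuracy/simplicity Pareto front
</syntaxhighlight>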
 
== See also ==
* [[Closed-form expression#Conversion from numerical forms|Closed-form expression § Conversion from numerical forms]]
* [[Genetic programming]]<ref name="aifeynman"/>
* [[Gene expression programming]]
* [[Kolmogorov complexity]]
* [[Regression analysis]]
* [[Reverse mathematics]]
* [[Discovery system (AI research)]]<ref name="aifeynman"/>
 
== References ==
| pmid = 32426452
| pmc = 7159912
| arxiv = 1905.11481
| bibcode = 2020SciA....6.2631U
}}</ref><ref name="ufo">{{cite journal
| volume = 94
| year = 2020
| article-number = 106417
| issn = 1568-4946
| url = https://www.sciencedirect.com/science/article/pii/S1568494620303574
| doi = 10.1016/j.asoc.2020.106417
| s2cid = 219743405
| url-access= subscription
}}</ref><ref name="complexity">{{cite journal
| title = Order of nonlinearity as a complexity measure for models generated by symbolic regression via pareto genetic programming
| author1 = Ekaterina J. Vladislavleva
| url = http://symbolicregression.com/sites/SRDocuments/NonlinearityPreprint.pdf
| doi=10.1109/tevc.2008.926486
| bibcode = 2009ITEC...13..333V
| s2cid = 12072764
}}</ref><ref name="exactlearning">{{cite web
| title = A Natural Representation of Functions for Exact Learning
| type = Preprint
| author = Benedict W. J. Irwin
| year = 2021
}}</ref>
}}
 
== Further reading ==
* {{cite conference