Nevertheless, this characteristic of symbolic regression also has advantages: because the [[evolutionary algorithm]] requires diversity to explore the search space effectively, the result is likely to be a selection of high-scoring models (and their corresponding sets of parameters). Examining this collection can provide better insight into the underlying process and allows the user to identify an approximation that better fits their needs in terms of accuracy and simplicity.
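For instance, the sketch below (a minimal illustration using the [[PySR]] library, one of the methods discussed below; the data, operator choices, and column names are illustrative assumptions rather than a prescribed workflow) shows how such a collection of candidate expressions might be inspected and ranked by accuracy and complexity:

<syntaxhighlight lang="python">
# Minimal sketch: inspect the collection of candidate expressions
# produced by an evolutionary symbolic regression run.
# Assumes the third-party PySR library; data and settings are illustrative.
import numpy as np
from pysr import PySRRegressor

# Synthetic data generated from a known expression (for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.cos(X[:, 0]) + 0.5 * X[:, 1] ** 2

model = PySRRegressor(
    niterations=40,                    # evolutionary search budget
    binary_operators=["+", "-", "*"],  # building blocks for expressions
    unary_operators=["cos"],
)
model.fit(X, y)

# The search returns many high-scoring expressions rather than a single one.
# Each row pairs an expression with its complexity and loss, letting the user
# choose the accuracy/simplicity trade-off that suits their needs.
print(model.equations_[["complexity", "loss", "equation"]])
</syntaxhighlight>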
== Benchmarking ==
In 2022 the results of a large benchmarking competition known as [[SRBench]] were announced at the [[Genetic and Evolutionary Computation Conference|GECCO conference]] in Boston, MA. The competition pitted nine leading symbolic regression algorithms against each other on a large set of data problems and evaluation criteria.<ref name="srbench2022" /> The competition was organised in two tracks: a synthetic track and a real-world data track.
=== Synthetic Track ===
In the synthetic track, methods were compared according to five properties: re-discovery of exact expressions; feature selection; resistance to local optima; extrapolation; and sensitivity to noise. Rankings of the methods were:
# [[QLattice]]
# [[PySR]]
# [[Deep symbolic optimization|uDSR]]
=== Real-world Track ===
In the real-world track, methods were trained to build interpretable predictive models for 14-day forecast counts of COVID-19 cases, hospitalizations, and deaths in New York State. These models were reviewed by a subject-matter expert, assigned trust ratings, and evaluated for accuracy and simplicity. The ranking of the methods was:
# [[Deep symbolic optimization|uDSR]]
# [[QLattice]]
# [[geneticengine]]
== Non-Standard Methods ==
<ref name="srbench2022">{{cite web
|title = SRBench Competition 2022: Interpretable Symbolic Regression for Data Science
|author1 = Michael Kommenda
|author2 = William La Cava
|author3 = Maimuna Majumder
|author4 = Fabricio Olivetti de França
|author5 = Marco Virgolin
|url = https://cavalab.org/srbench/competition-2022/
}}</ref>
== Further reading ==