The earliest idea of Bayesian optimization<ref>{{Cite book |last=Garnett |first=Roman |title=Bayesian Optimization |date=2023 |publisher=Cambridge University Press |isbn=978-1-108-42578-0 |edition=First}}</ref> traces back to 1964 and a paper by the American applied mathematician Harold J. Kushner,<ref>{{Cite web|url=https://vivo.brown.edu/display/hkushner|title=Kushner, Harold|website=vivo.brown.edu}}</ref> [https://asmedigitalcollection.asme.org/fluidsengineering/article/86/1/97/392213/A-New-Method-of-Locating-the-Maximum-Point-of-an “A New Method of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise”]. Although the paper did not propose Bayesian optimization directly, its method for locating the maximum of an arbitrary multipeak curve under noise provided an important theoretical foundation for the later development of the field.
By the 1980s, the framework now used for Bayesian optimization had been explicitly established. In 1978, the Lithuanian scientist Jonas Mockus,<ref>{{Cite web |title=Jonas Mockus |url=https://en.ktu.edu/people/jonas-mockus/ |access-date=2025-03-06 |website=Kaunas University of Technology |language=en}}</ref> in his paper “The Application of Bayesian Methods for Seeking the Extremum”, discussed how to use Bayesian methods to find the extremum of a function under various conditions of uncertainty. In this paper, Mockus first proposed the [https://schneppat.com/expected-improvement_ei.html expected improvement (EI)] principle, which remains one of the core sampling strategies of Bayesian optimization. The criterion balances exploration of uncertain regions against exploitation of promising ones by choosing each new sample point to maximize the expected improvement over the best value observed so far. Because of the usefulness and lasting influence of this principle, Mockus is widely regarded as the founder of Bayesian optimization.
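Stated in the notation of later treatments rather than Mockus's original paper, for a maximization problem with Gaussian process posterior mean <math>\mu(x)</math>, posterior standard deviation <math>\sigma(x)</math>, and incumbent best observation <math>f(x^+)</math>, the expected improvement has the well-known closed form

<math display="block">\operatorname{EI}(x) = \bigl(\mu(x) - f(x^+)\bigr)\,\Phi(Z) + \sigma(x)\,\varphi(Z), \qquad Z = \frac{\mu(x) - f(x^+)}{\sigma(x)},</math>

where <math>\Phi</math> and <math>\varphi</math> denote the standard normal cumulative distribution function and density, and <math>\operatorname{EI}(x)</math> is taken to be zero where <math>\sigma(x) = 0</math>.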
==== From theory to practice ====
In the 1990s, Bayesian optimization began a gradual transition from pure theory to real-world applications. In 1998, Donald R. Jones<ref>{{Cite web |title=Donald R. Jones |url=https://scholar.google.com/citations?user=CZhZ4MYAAAAJ&hl=en |access-date=2025-02-25 |website=scholar.google.com}}</ref> and his coworkers published a paper titled “Efficient Global Optimization of Expensive Black-Box Functions”, which introduced the Efficient Global Optimization (EGO) algorithm: a Gaussian-process surrogate model combined with the expected-improvement criterion for optimizing expensive black-box functions.
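The EGO recipe can be summarized in a few lines of code. The following is a minimal illustrative sketch, not code from the original paper: it fits a Gaussian process surrogate with scikit-learn, selects each new sample point by maximizing expected improvement over a candidate grid, and evaluates a toy one-dimensional objective. The objective function, search interval, and evaluation budget are all hypothetical choices.

<syntaxhighlight lang="python">
# Minimal EGO-style Bayesian optimization loop (illustrative sketch).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical expensive black-box function (1-D toy example).
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(5, 1))             # initial design points
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
grid = np.linspace(-2.0, 2.0, 1000).reshape(-1, 1)  # candidate points

for _ in range(20):
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    # Expected improvement (maximization form); guard against sigma = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (mu - best) / sigma
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    ei[sigma == 0.0] = 0.0
    x_next = grid[np.argmax(ei)].reshape(1, 1)       # point with highest EI
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmax(y)], "best value:", y.max())
</syntaxhighlight>

In practice, the argmax over a fixed grid is replaced by a continuous optimizer of the acquisition function, since grid search becomes infeasible beyond a few dimensions.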
In the 21st century, with the rise of artificial intelligence and robotics, Bayesian optimization has been widely used in machine learning and deep learning and has become an important tool for [[Hyperparameter optimization|hyperparameter tuning]].<ref>T. T. Joy, S. Rana, S. Gupta and S. Venkatesh, "Hyperparameter tuning for big data using Bayesian optimisation," ''2016 23rd International Conference on Pattern Recognition (ICPR)'', Cancun, Mexico, 2016, pp. 2574-2579, doi: 10.1109/ICPR.2016.7900023.</ref> Companies such as Google, Facebook and OpenAI have added Bayesian optimization to their deep learning frameworks to improve search efficiency. However, Bayesian optimization still faces many challenges: for example, because it typically relies on a Gaussian process surrogate, its cost grows quickly with the number of observations and it scales poorly to high-dimensional search spaces.
==Strategy==