Hyperparameter optimization

 
# Create an initial population of random solutions (i.e., randomly generate tuples of hyperparameters, typically 100+)
# Evaluate the hyperparameter tuples and acquire their [[fitness function|fitness]] (e.g., 10-fold [[Cross-validation (statistics)|cross-validation]] accuracy of the machine learning algorithm with those hyperparameters)
# Rank the hyperparameter tuples by their relative fitness
# Replace the worst-performing hyperparameter tuples with new ones generated via [[crossover (genetic algorithm)|crossover]] and [[mutation (genetic algorithm)|mutation]]
# Repeat steps 2–4 until satisfactory performance is reached or performance no longer improves (a minimal code sketch of this loop follows below).
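
A minimal Python sketch of this evolutionary loop is shown below. The fitness function here is a toy stand-in for cross-validation accuracy, and the search space, population size, generation count, and mutation rate are all illustrative assumptions rather than recommended settings:

<syntaxhighlight lang="python">
import random

# Illustrative search space: a learning rate and a number of trees.
SPACE = {"lr": (1e-4, 1e-1), "n_trees": (10, 500)}

def random_tuple():
    # Step 1: one randomly generated hyperparameter tuple.
    return {"lr": random.uniform(*SPACE["lr"]),
            "n_trees": random.randint(*SPACE["n_trees"])}

def fitness(h):
    # Toy surrogate for cross-validation accuracy; in practice this
    # would train and evaluate the learner with hyperparameters h.
    return -(h["lr"] - 0.01) ** 2 - ((h["n_trees"] - 300) / 300) ** 2

def crossover(a, b):
    # Each hyperparameter is inherited from one of the two parents.
    return {k: random.choice((a[k], b[k])) for k in SPACE}

def mutate(h, rate=0.2):
    # With some probability, resample each hyperparameter at random.
    h = dict(h)
    if random.random() < rate:
        h["lr"] = random.uniform(*SPACE["lr"])
    if random.random() < rate:
        h["n_trees"] = random.randint(*SPACE["n_trees"])
    return h

population = [random_tuple() for _ in range(100)]            # step 1
for generation in range(50):                                 # repeat steps 2-4
    ranked = sorted(population, key=fitness, reverse=True)   # steps 2-3
    survivors = ranked[: len(ranked) // 2]
    children = [mutate(crossover(random.choice(survivors),   # step 4
                                 random.choice(survivors)))
                for _ in range(len(ranked) - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(best, fitness(best))
</syntaxhighlight>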
 
Evolutionary optimization has been used in hyperparameter optimization for statistical machine learning algorithms,<ref name="bergstra11" /> [[automated machine learning]], typical neural network<ref name="kousiouris1">{{cite journal |vauthors=Kousiouris G, Cuccinotta T, Varvarigou T | year = 2011 | title= The effects of scheduling, workload type and consolidation scenarios on virtual machine performance and their prediction through optimized artificial neural networks | url= https://www.sciencedirect.com/science/article/abs/pii/S0164121211000951 | journal = Journal of Systems and Software | volume = 84 | issue = 8 | pages = 1270–1291| doi = 10.1016/j.jss.2011.04.013 | hdl = 11382/361472 | hdl-access = free }}</ref> and [[Deep learning#Deep neural networks|deep neural network]] architecture search,<ref name="miikkulainen1">{{cite arXiv | vauthors = Miikkulainen R, Liang J, Meyerson E, Rawal A, Fink D, Francon O, Raju B, Shahrzad H, Navruzyan A, Duffy N, Hodjat B | year = 2017 | title = Evolving Deep Neural Networks |eprint=1703.00548| class = cs.NE }}</ref><ref name="jaderberg1">{{cite arXiv | vauthors = Jaderberg M, Dalibard V, Osindero S, Czarnecki WM, Donahue J, Razavi A, Vinyals O, Green T, Dunning I, Simonyan K, Fernando C, Kavukcuoglu K | year = 2017 | title = Population Based Training of Neural Networks |eprint=1711.09846| class = cs.LG }}</ref> as well as training of the weights in deep neural networks.<ref name="such1">{{cite arXiv | vauthors = Such FP, Madhavan V, Conti E, Lehman J, Stanley KO, Clune J | year = 2017 | title = Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning |eprint=1712.06567| class = cs.NE }}</ref>