=== Gradient-based optimization ===

For specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters using [[gradient descent]]. These techniques were first applied to neural networks.<ref>{{cite book |last1=Larsen|first1=Jan|last2=Hansen|first2=Lars Kai|last3=Svarer|first3=Claus|last4=Ohlsson|first4=M|title=Neural Networks for Signal Processing VI. Proceedings of the 1996 IEEE Signal Processing Society Workshop |chapter=Design and regularization of neural networks: The optimal use of a validation set |date=1996|pages=62–71|doi=10.1109/NNSP.1996.548336|isbn=0-7803-3550-3|citeseerx=10.1.1.415.3266|s2cid=238874|chapter-url=http://orbit.dtu.dk/files/4545571/Svarer.pdf}}</ref> Since then, these methods have been extended to other models such as [[support vector machine]]s<ref>{{cite journal |author1=Olivier Chapelle |author2=Vladimir Vapnik |author3=Olivier Bousquet |author4=Sayan Mukherjee |title=Choosing multiple parameters for support vector machines |journal=Machine Learning |year=2002 |volume=46 |pages=131–159 |url=http://www.chapelle.cc/olivier/pub/mlj02.pdf | doi = 10.1023/a:1012450327387 |doi-access=free }}</ref> or logistic regression.<ref>{{cite journal |author1=Chuong B. Do|author2=Chuan-Sheng Foo|author3=Andrew Y Ng|journal=Advances in Neural Information Processing Systems |volume=20|title=Efficient multiple hyperparameter learning for log-linear models|year=2008|url=http://papers.nips.cc/paper/3286-efficient-multiple-hyperparameter-learning-for-log-linear-models.pdf}}</ref>
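
For example, when the training problem has a closed-form solution, the validation loss can be differentiated directly with respect to a hyperparameter and the hyperparameter updated by gradient descent. The following is a minimal sketch in Python using the JAX automatic-differentiation library, with ridge regression, synthetic data, and step sizes chosen purely for illustration (none of the names or values come from the cited works):

<syntaxhighlight lang="python">
import jax
import jax.numpy as jnp

def ridge_fit(lam, X, y):
    # Closed-form ridge regression weights for regularization strength lam.
    d = X.shape[1]
    return jnp.linalg.solve(X.T @ X + lam * jnp.eye(d), X.T @ y)

def val_loss(log_lam, X_tr, y_tr, X_val, y_val):
    # Validation mean squared error as a function of the (log) hyperparameter.
    w = ridge_fit(jnp.exp(log_lam), X_tr, y_tr)
    return jnp.mean((X_val @ w - y_val) ** 2)

# Hypergradient: derivative of the validation loss w.r.t. the hyperparameter.
hypergrad = jax.grad(val_loss)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
w_true = jnp.ones(5)
X_tr = jax.random.normal(k1, (100, 5))
y_tr = X_tr @ w_true + 0.1 * jax.random.normal(k2, (100,))
X_val = jax.random.normal(k3, (50, 5))
y_val = X_val @ w_true

log_lam = jnp.array(0.0)
for _ in range(100):
    # Gradient descent on the hyperparameter itself.
    log_lam -= 0.1 * hypergrad(log_lam, X_tr, y_tr, X_val, y_val)
</syntaxhighlight>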
 
A different approach to obtaining a gradient with respect to hyperparameters consists in differentiating the steps of an iterative optimization algorithm using [[automatic differentiation]].<ref>{{cite journal|last1=Domke|first1=Justin|title=Generic Methods for Optimization-Based Modeling|journal=Aistats|date=2012|volume=22|url=http://www.jmlr.org/proceedings/papers/v22/domke12/domke12.pdf|access-date=2017-12-09|archive-date=2014-01-24|archive-url=https://web.archive.org/web/20140124182520/http://jmlr.org/proceedings/papers/v22/domke12/domke12.pdf|url-status=dead}}</ref><ref name=abs1502.03492>{{cite arXiv |last1=Maclaurin|first1=Dougal|last2=Duvenaud|first2=David|last3=Adams|first3=Ryan P.|eprint=1502.03492|title=Gradient-based Hyperparameter Optimization through Reversible Learning|class=stat.ML|date=2015}}</ref><ref>{{cite journal |last1=Franceschi |first1=Luca |last2=Donini |first2=Michele |last3=Frasconi |first3=Paolo |last4=Pontil |first4=Massimiliano |title=Forward and Reverse Gradient-Based Hyperparameter Optimization |journal=Proceedings of the 34th International Conference on Machine Learning |date=2017 |arxiv=1703.01785 |bibcode=2017arXiv170301785F |url=http://proceedings.mlr.press/v70/franceschi17a/franceschi17a-supp.pdf}}</ref><ref>Shaban, A., Cheng, C. A., Hatch, N., & Boots, B. (2019, April). [https://arxiv.org/pdf/1810.10667.pdf Truncated back-propagation for bilevel optimization]. In ''The 22nd International Conference on Artificial Intelligence and Statistics'' (pp. 1723–1732). PMLR.</ref> More recent work in this direction uses the [[implicit function theorem]] to calculate hypergradients and proposes a stable approximation of the inverse Hessian. The method scales to millions of hyperparameters and requires constant memory.<ref>{{cite arXiv | eprint=1911.02590 | last1=Lorraine | first1=Jonathan | last2=Vicol | first2=Paul | last3=Duvenaud | first3=David | title=Optimizing Millions of Hyperparameters by Implicit Differentiation | date=2019 | class=cs.LG }}</ref>
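
The unrolled-differentiation idea can be sketched as follows: the inner optimization (here, plain gradient descent on a training loss) is written out step by step, and reverse-mode automatic differentiation back-propagates through all of those steps to obtain the gradient of a validation loss with respect to a hyperparameter such as the learning rate. This is only an illustrative JAX sketch; the tiny synthetic problem and variable names are not taken from the cited works:

<syntaxhighlight lang="python">
import jax
import jax.numpy as jnp

def inner_train(log_lr, w0, X, y, steps=50):
    # Unroll `steps` iterations of gradient descent on the training loss;
    # the learning rate exp(log_lr) is the hyperparameter being tuned.
    lr = jnp.exp(log_lr)
    train_grad = jax.grad(lambda w: jnp.mean((X @ w - y) ** 2))
    for _ in range(steps):
        w0 = w0 - lr * train_grad(w0)
    return w0

def outer_objective(log_lr, w0, X_tr, y_tr, X_val, y_val):
    # Validation loss of the weights produced by the unrolled inner loop.
    w = inner_train(log_lr, w0, X_tr, y_tr)
    return jnp.mean((X_val @ w - y_val) ** 2)

# Reverse-mode automatic differentiation through every unrolled step
# yields the hypergradient with respect to log_lr.
hypergrad = jax.grad(outer_objective)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
w_true = jnp.ones(3)
X_tr = jax.random.normal(k1, (80, 3)); y_tr = X_tr @ w_true
X_val = jax.random.normal(k2, (20, 3)); y_val = X_val @ w_true

g = hypergrad(jnp.log(0.1), jnp.zeros(3), X_tr, y_tr, X_val, y_val)
</syntaxhighlight>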
 
In a different approach,<ref>{{cite arXiv | eprint=1802.09419 | last1=Lorraine | first1=Jonathan | last2=Duvenaud | first2=David | title=Stochastic Hyperparameter Optimization through Hypernetworks | date=2018 | class=cs.LG }}</ref> a hypernetwork is trained to approximate the best-response function. One advantage of this method is that it can also handle discrete hyperparameters. Self-tuning networks<ref>{{cite arXiv | eprint=1903.03088 | last1=MacKay | first1=Matthew | last2=Vicol | first2=Paul | last3=Lorraine | first3=Jon | last4=Duvenaud | first4=David | last5=Grosse | first5=Roger | title=Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions | date=2019 | class=cs.LG }}</ref> offer a memory-efficient version of this approach by choosing a compact representation for the hypernetwork. More recently, Δ-STN<ref>{{cite arXiv | eprint=2010.13514 | last1=Bae | first1=Juhan | last2=Grosse | first2=Roger | title=Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians | date=2020 | class=cs.LG }}</ref> has improved this method further by a slight reparameterization of the hypernetwork which speeds up training. Δ-STN also yields a better approximation of the best-response Jacobian by linearizing the network in the weights, hence removing unnecessary nonlinear effects of large changes in the weights.
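
A highly simplified sketch of the hypernetwork idea (not the exact formulation of the cited papers) is shown below, again in JAX: a linear hypernetwork maps a regularization hyperparameter to the weights of a linear model, and training alternates between fitting the hypernetwork on the training objective around the current hyperparameter value and updating the hyperparameter on the validation loss through the hypernetwork. All architectural and numerical choices here are illustrative assumptions:

<syntaxhighlight lang="python">
import jax
import jax.numpy as jnp

def hypernet(phi, lam):
    # Linear hypernetwork: maps the hyperparameter lam to model weights.
    A, b = phi
    return A * lam + b

def train_loss(phi, lam, X, y):
    # Regularized training objective for the weights predicted by the hypernetwork.
    w = hypernet(phi, lam)
    return jnp.mean((X @ w - y) ** 2) + jnp.exp(lam) * jnp.sum(w ** 2)

def val_loss(lam, phi, X, y):
    # Unregularized validation loss, seen as a function of the hyperparameter.
    return jnp.mean((X @ hypernet(phi, lam) - y) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, key = jax.random.split(key, 3)
w_true = jnp.ones(4)
X_tr = jax.random.normal(k1, (80, 4)); y_tr = X_tr @ w_true
X_val = jax.random.normal(k2, (20, 4)); y_val = X_val @ w_true

phi = (jnp.zeros(4), jnp.zeros(4))
lam = jnp.array(0.0)
d_phi, d_lam = jax.grad(train_loss), jax.grad(val_loss)

for _ in range(200):
    # Inner step: move the hypernetwork towards the best response
    # in a neighbourhood of the current hyperparameter value.
    key, sub = jax.random.split(key)
    lam_s = lam + 0.1 * jax.random.normal(sub)
    g_A, g_b = d_phi(phi, lam_s, X_tr, y_tr)
    phi = (phi[0] - 0.05 * g_A, phi[1] - 0.05 * g_b)
    # Outer step: update the hyperparameter on the validation loss,
    # differentiating through the hypernetwork's approximate best response.
    lam = lam - 0.05 * d_lam(lam, phi, X_val, y_val)
</syntaxhighlight>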
 
Apart from hypernetwork approaches, gradient-based methods can also be used to optimize discrete hyperparameters by adopting a continuous relaxation of the parameters.<ref>{{cite arXiv | eprint=1806.09055 | last1=Liu | first1=Hanxiao | last2=Simonyan | first2=Karen | last3=Yang | first3=Yiming | title=DARTS: Differentiable Architecture Search | date=2018 | class=cs.LG }}</ref> Such methods have been extensively used for the optimization of architecture hyperparameters in [[neural architecture search]].
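
The continuous-relaxation idea can be illustrated with a DARTS-style mixed operation, in which a discrete choice among candidate operations is replaced by a softmax-weighted combination whose mixing weights can be optimized by gradient descent. The JAX sketch below uses arbitrarily chosen candidate operations and is illustrative rather than an implementation of the cited method:

<syntaxhighlight lang="python">
import jax
import jax.numpy as jnp

def mixed_op(alpha, x):
    # Continuous relaxation of a discrete choice among candidate operations:
    # their outputs are blended with softmax(alpha) weights.
    candidates = [jnp.tanh(x), jax.nn.relu(x), x]
    weights = jax.nn.softmax(alpha)
    return sum(w * c for w, c in zip(weights, candidates))

# alpha is a continuous architecture parameter; it can be trained with
# gradient descent, and the final discrete choice is argmax(alpha).
alpha = jnp.zeros(3)
x = jnp.linspace(-1.0, 1.0, 5)
grad_alpha = jax.grad(lambda a: jnp.sum(mixed_op(a, x) ** 2))(alpha)
</syntaxhighlight>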
 
=== Evolutionary optimization ===