Hyperparameter optimization: Difference between revisions

Rhiever (talk | contribs)
}}</ref> an efficient implementation of Bayesian optimization in C/C++ with support for Python, MATLAB, and Octave.
* [https://github.com/yelp/MOE MOE] is a Python/C++/CUDA library implementing Bayesian global optimization using Gaussian processes.
* [http://www.cs.ubc.ca/labs/beta/Projects/autoweka/ Auto-WEKA]<ref name="autoweka">{{cite journal | vauthors = Kotthoff L, Thornton C, Hoos HH, Hutter F, Leyton-Brown K | year = 2017 | title = Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA | url = http://jmlr.org/papers/v18/16-261.html | journal = Journal of Machine Learning Research | pages = 1-5 }}</ref> is a Bayesian hyperparameter optimization layer on top of [[Weka (machine learning)|WEKA]].
* [https://github.com/automl/auto-sklearn Auto-sklearn]<ref name="autosklearn">{{cite journal | vauthors = Feurer M, Klein A, Eggensperger K, Springenberg J, Blum M, Hutter F | year = 2015 | title = Efficient and Robust Automated Machine Learning | url = https://papers.nips.cc/paper/5872-efficient-and-robust-automated-machine-learning | journal = Advances in Neural Information Processing Systems 28 (NIPS 2015) | pages = 2962–2970 }}</ref> is a Bayesian hyperparameter optimization layer on top of [[scikit-learn]].
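These tools differ in interface, but the underlying Gaussian-process loop is the same: fit a probabilistic surrogate to the hyperparameter values and scores observed so far, maximize an acquisition function over candidate values to pick the next point to evaluate, and repeat. A minimal pure-Python sketch of that loop follows; the quadratic <code>objective</code>, the 1-D grid of candidates, and all constants are invented for illustration and are not taken from any of the libraries above.

```python
import math

def objective(x):
    # Stand-in for an expensive validation score to maximize; in real use
    # this would train and evaluate a model with hyperparameter x.
    return -(x - 2.0) ** 2

def kernel(a, b, length=1.0):
    # Squared-exponential (RBF) covariance between two scalar inputs.
    return math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting; fine for small systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq):
    # Posterior mean and standard deviation of a zero-mean GP at query xq.
    n = len(xs)
    K = [[kernel(xs[i], xs[j]) + (1e-6 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)                       # K^-1 y
    ks = [kernel(x, xq) for x in xs]
    mean = sum(ks[i] * alpha[i] for i in range(n))
    v = solve(K, ks)                           # K^-1 k*
    var = kernel(xq, xq) - sum(ks[i] * v[i] for i in range(n))
    return mean, math.sqrt(max(var, 0.0))

def bayes_opt(n_iter=12, kappa=2.0):
    grid = [i * 0.05 for i in range(101)]      # candidate values in [0, 5]
    xs = [0.0, 5.0]                            # initial design points
    ys = [objective(x) for x in xs]
    for _ in range(n_iter):
        # Upper-confidence-bound acquisition: trade off the surrogate's
        # predicted mean (exploitation) against its uncertainty (exploration).
        def ucb(x):
            m, s = gp_posterior(xs, ys, x)
            return m + kappa * s
        x_next = max(grid, key=ucb)
        xs.append(x_next)
        ys.append(objective(x_next))
    best_y, best_x = max(zip(ys, xs))
    return best_x, best_y

best_x, best_y = bayes_opt()
print(best_x, best_y)
```

The sketch uses an upper-confidence-bound acquisition for brevity; acquisition functions such as expected improvement are more common in the libraries listed above.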
 
===Gradient-based===