Hyperparameter optimization: Difference between revisions

Removed unsubstantiated fake attributes of hyperparameter optimization
Rewrote introductory paragraph using a reference
Line 1:
In [[machine learning]], '''hyperparameter optimization''' or tuning is the problem of choosing a set of optimal [[Hyperparameter (machine learning)|hyperparameters]] for a learning algorithm.
 
The same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns. These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined [[loss function]] on given independent data.<ref name=abs1502.02127>{{cite article |url=https://arxiv.org/abs/1502.02127 |title=Claesen, Marc, and Bart De Moor. "Hyperparameter Search in Machine Learning." arXiv preprint arXiv:1502.02127 (2015).}}</ref> The objective function takes a tuple of hyperparameters and returns the associated loss.<ref name=abs1502.02127/> [[Cross-validation (statistics)|Cross-validation]] is often used to estimate this generalization performance.<ref name="bergstra" />
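
The objective-function view above can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's API: the <code>objective</code> function and the candidate hyperparameter grids are hypothetical stand-ins for training a model and measuring its validation loss (e.g. via cross-validation).

```python
import itertools

# Toy objective: maps a hyperparameter tuple (learning_rate, regularization)
# to a loss. In practice this would train a model and estimate its
# generalization error; here an analytic stand-in keeps the sketch runnable.
def objective(learning_rate, regularization):
    return (learning_rate - 0.1) ** 2 + (regularization - 0.01) ** 2

# The search space: candidate values for each hyperparameter.
learning_rates = [0.001, 0.01, 0.1, 1.0]
regularizations = [0.0, 0.01, 0.1]

# Exhaustive (grid) search: evaluate every tuple, keep the lowest-loss one.
best_tuple = min(itertools.product(learning_rates, regularizations),
                 key=lambda hp: objective(*hp))
print(best_tuple)  # → (0.1, 0.01)
```

Any of the algorithms discussed below can replace the exhaustive enumeration here; they differ only in how they choose which tuples to evaluate.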
 
== Algorithms ==