Hyperparameter optimization
The same kind of machine learning model can require different constraints, weights, or learning rates to generalize different data patterns. These measures are called hyperparameters and have to be tuned so that the model can optimally solve the machine learning problem. Usually a metric is chosen to measure the algorithm's performance on an independent data set, and the hyperparameters that maximize this metric are adopted. Often [[Cross-validation (statistics)|cross-validation]] is used to estimate this generalization performance.<ref name="bergstra" />
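The procedure above can be sketched in a few lines: evaluate each candidate hyperparameter value with k-fold cross-validation and keep the value with the best validation score. The following is a minimal, self-contained illustration; the one-dimensional ridge-regression model, the data, and the grid of regularization strengths are all illustrative assumptions, not part of the article.

```python
def fit_ridge_1d(xs, ys, lam):
    """Closed-form slope of 1-D ridge regression: w = sum(x*y) / (sum(x*x) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def mse(w, xs, ys):
    """Mean squared error of the linear model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def cv_score(xs, ys, lam, k=5):
    """Average validation MSE over k folds (lower is better)."""
    n = len(xs)
    scores = []
    for fold in range(k):
        val_idx = set(range(fold, n, k))  # every k-th point held out
        tr_x = [x for i, x in enumerate(xs) if i not in val_idx]
        tr_y = [y for i, y in enumerate(ys) if i not in val_idx]
        va_x = [x for i, x in enumerate(xs) if i in val_idx]
        va_y = [y for i, y in enumerate(ys) if i in val_idx]
        w = fit_ridge_1d(tr_x, tr_y, lam)
        scores.append(mse(w, va_x, va_y))
    return sum(scores) / k

# Illustrative noisy data around y = 2x.
xs = [i / 10 for i in range(30)]
ys = [2 * x + (0.1 if i % 2 else -0.1) for i, x in enumerate(xs)]

# Grid of candidate regularization strengths (the hyperparameter).
grid = [0.0, 0.01, 0.1, 1.0, 10.0]
best_lam = min(grid, key=lambda lam: cv_score(xs, ys, lam))
print(best_lam)
```

Here the validation MSE plays the role of the chosen metric, and the grid search is the simplest of the search algorithms discussed below; libraries typically automate this loop over richer search spaces.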
 
Hyperparameter optimization contrasts with the learning problem itself, which is also often cast as an optimization problem but optimizes a [[loss function]] on the training set alone. In effect, learning algorithms learn parameters that model or reconstruct their inputs well, while hyperparameter optimization ensures that the model does not, for example, [[overfitting|overfit]] its data, as by tuning the strength of regularization.
 
== Algorithms ==