*Constrain the space of probability distributions <math>\rho</math>, e.g. via a parametric approach, or
*Constrain the space of hypotheses <math>\mathcal H</math> to be small, as in distribution-free approaches.
The latter approach leads to concepts such as [[VC dimension]] and [[Rademacher complexity]], which control the complexity of the space <math>\mathcal H</math>. A smaller hypothesis space introduces more bias into the inference process, meaning that <math>\mathcal R^*_\mathcal{H}</math> may be greater than the best possible expected risk in a larger space. However, by restricting the complexity of the hypothesis space, it becomes possible for an algorithm to produce functions whose expected risks converge uniformly to <math>\mathcal R^*_\mathcal{H}</math>. This trade-off leads to the concept of [[regularization (mathematics)|regularization]].<ref name = "Rosasco" />
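The bias introduced by a smaller hypothesis space can be illustrated numerically. The following sketch (an illustration constructed for this passage, not taken from the cited reference) compares nested hypothesis spaces <math>\mathcal H_1 \subset \mathcal H_3 \subset \mathcal H_9</math> of polynomials of bounded degree, fit by empirical risk minimization to noisy samples of a sine target; the target function, sample sizes, and noise level are all arbitrary choices for the demonstration:

```python
import numpy as np

# Illustrative sketch: nested polynomial hypothesis spaces
# H_d = {polynomials of degree <= d}, fit by least squares (ERM).
rng = np.random.default_rng(0)

def make_data(n):
    # Noisy samples of an assumed target f(x) = sin(x) on [-3, 3].
    x = rng.uniform(-3.0, 3.0, n)
    return x, np.sin(x) + rng.normal(0.0, 0.3, n)

x_train, y_train = make_data(12)    # small sample, so overfitting is visible
x_test, y_test = make_data(2000)    # large held-out set as a proxy for expected risk

def empirical_risk(model, x, y):
    # Mean squared error of the fitted model on the given sample.
    return float(np.mean((model(x) - y) ** 2))

train_risk, test_risk = {}, {}
for d in (1, 3, 9):
    # Empirical risk minimizer over H_d via polynomial least squares.
    model = np.poly1d(np.polyfit(x_train, y_train, d))
    train_risk[d] = empirical_risk(model, x_train, y_train)
    test_risk[d] = empirical_risk(model, x_test, y_test)

# Because H_1 ⊂ H_3 ⊂ H_9, the training risk can only decrease as d grows,
# while the held-out risk typically rises again once H_d is rich enough
# to fit the noise -- the bias/complexity trade-off in the text above.
```

Restricting the degree plays the same role as any other complexity constraint on <math>\mathcal H</math>: it raises the best risk attainable in the space but keeps the learned function's risk close to that optimum.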