Sample complexity

An algorithm <math>\mathcal{A}</math> is '''consistent''' if, for every <math>\epsilon > 0</math> and <math>\delta \in (0, 1)</math>, there exists a positive integer <math>N</math> such that, for all <math>n \geq N</math>,
<math>
\Pr_{\rho^n}[\mathcal{E}(h_n) - \mathcal{E}^*_\mathcal{H} \geq \epsilon] < \delta.
</math>
The sample complexity of <math>\mathcal{A}</math> is then the minimum <math>N</math> for which this holds, as a function of <math>\rho</math>, <math>\epsilon</math>, and <math>\delta</math>. We write the sample complexity as <math>N(\rho, \epsilon, \delta)</math> to emphasize that this value of <math>N</math> depends on <math>\rho</math>, <math>\epsilon</math>, and <math>\delta</math>. If <math>\mathcal{A}</math> is not consistent, then we set <math>N(\rho,\epsilon,\delta)=\infty</math>. If there exists an algorithm for which <math>N(\rho,\epsilon,\delta)</math> is finite, then we say that the hypothesis space <math>\mathcal{H}</math> is '''learnable'''.
 
In words, the sample complexity <math>N(\rho,\epsilon,\delta)</math> defines the rate of consistency of the algorithm: given a desired accuracy <math>\epsilon</math> and confidence <math>\delta</math>, one needs to sample <math>N(\rho,\epsilon,\delta)</math> data points to guarantee that the risk of the output function is within <math>\epsilon</math> of the best possible, with probability at least <math>1 - \delta</math>.<ref name="Rosasco">{{citation |last = Rosasco |first = Lorenzo |title = Consistency, Learnability, and Regularization |series = Lecture Notes for MIT Course 9.520 |year = 2014}}</ref>
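
As a concrete illustration (a standard textbook bound, stated here under the assumption that <math>\mathcal{H}</math> is finite and the loss takes values in <math>[0, 1]</math>), combining [[Hoeffding's inequality]] with a union bound shows that [[empirical risk minimization]] is consistent with
<math>
N(\rho, \epsilon, \delta) \leq \left\lceil \frac{2}{\epsilon^2} \ln \frac{2|\mathcal{H}|}{\delta} \right\rceil
</math>
for every distribution <math>\rho</math>: with <math>n</math> at least this large, every <math>h \in \mathcal{H}</math> has empirical risk within <math>\epsilon/2</math> of its true risk with probability at least <math>1 - \delta</math>, so the empirical minimizer has true risk within <math>\epsilon</math> of <math>\mathcal{E}^*_\mathcal{H}</math>. In particular, every finite hypothesis space is learnable.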