The '''sample complexity''' of <math>\mathcal{A}</math> is then the minimum <math>N</math> for which this holds, as a function of <math>\rho, \epsilon</math>, and <math>\delta</math>. We write the sample complexity as <math>N(\rho, \epsilon, \delta)</math> to emphasize that this value of <math>N</math> depends on <math>\rho, \epsilon</math>, and <math>\delta</math>. If <math>\mathcal{A}</math> is '''not consistent''', then we set <math>N(\rho,\epsilon,\delta)=\infty</math>. If there exists an algorithm for which <math>N(\rho,\epsilon,\delta)</math> is finite, then we say that the hypothesis space <math> \mathcal H</math> is '''learnable'''.
In other words, the sample complexity <math>N(\rho,\epsilon,\delta)</math> defines the rate of consistency of the algorithm: given a desired accuracy <math>\epsilon</math> and confidence <math>\delta</math>, one needs to sample <math>N(\rho,\epsilon,\delta)</math> data points to guarantee that the risk of the output function is within <math>\epsilon</math> of the best possible, with probability at least <math>1-\delta</math>.
In [[Probably approximately correct learning|probably approximately correct (PAC) learning]], one is concerned with whether the sample complexity is ''polynomial'', that is, whether <math>N(\rho,\epsilon,\delta)</math> is bounded by a polynomial in <math>1/\epsilon</math> and <math>1/\delta</math>. If <math>N(\rho,\epsilon,\delta)</math> is polynomial for some learning algorithm, then one says that the hypothesis space <math>\mathcal H</math> is '''PAC-learnable'''. This is a stronger notion than learnability, which only requires <math>N(\rho,\epsilon,\delta)</math> to be finite.
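For example, consider the textbook realizable case with a finite hypothesis space (an illustrative sketch, not tied to any particular algorithm above): assume some hypothesis in <math>\mathcal H</math> attains zero risk, and let <math>\mathcal{A}</math> return any hypothesis consistent with the <math>N</math> samples. A union bound over the at most <math>|\mathcal H|</math> hypotheses with risk greater than <math>\epsilon</math> shows that

<math>N(\epsilon,\delta) \le \left\lceil \frac{1}{\epsilon}\left(\ln|\mathcal H| + \ln\frac{1}{\delta}\right)\right\rceil</math>

samples suffice, for every distribution <math>\rho</math>. This is polynomial in <math>1/\epsilon</math> and <math>1/\delta</math>, so in this setting every finite hypothesis space is PAC-learnable.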