The No Free Lunch theorem, discussed below, proves that, in general, the strong sample complexity is infinite, i.e. that there is no algorithm that can learn the globally-optimal target function using a finite number of training samples.
However, if we are only interested in a particular class of target functions (e.g., only linear functions), then the sample complexity is finite, and it depends linearly on the [[VC dimension]] of the class of target functions.<ref name=":0" />
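For illustration, in the realizable [[Probably approximately correct learning|PAC]] setting a standard upper bound (constants vary between sources) states that a hypothesis class <math>\mathcal{H}</math> with finite VC dimension <math>d</math> can be learned to accuracy <math>\varepsilon</math> with confidence <math>1 - \delta</math> using

<math display="block">
N(\varepsilon, \delta) = O\!\left( \frac{1}{\varepsilon} \left( d \log \frac{1}{\varepsilon} + \log \frac{1}{\delta} \right) \right)
</math>

training samples, which is finite for any fixed <math>\varepsilon, \delta > 0</math> whenever <math>d</math> is finite.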
==Definition==