The [[No free lunch theorem]], discussed below, proves that, in general, the strong sample complexity is infinite, i.e., that there is no algorithm that can learn the globally optimal target function using a finite number of training samples.
However, if we are only interested in a particular class of target functions (e.g., only linear functions), then the sample complexity is finite, and it depends linearly on the [[VC dimension]] of the class of target functions.<ref name=":0" />
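For illustration, in the realizable [[Probably approximately correct learning|PAC learning]] setting with a hypothesis class of VC dimension <math>d</math>, a standard upper bound (given here in an illustrative form; exact constants depend on the analysis) on the number of training samples needed is
:<math>n = O\!\left(\frac{1}{\varepsilon}\left(d \log\frac{1}{\varepsilon} + \log\frac{1}{\delta}\right)\right),</math>
where <math>\varepsilon</math> is the accuracy parameter and <math>\delta</math> is the confidence parameter.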
==Definition==