* The strong variant takes the worst-case sample complexity over all input-output distributions.
 
The No Free Lunch theorem, discussed below, proves that, in general, the strong sample complexity is infinite, i.e., there is no algorithm that can learn the globally-optimal target function using a finite number of training samples.
 
However, if we are only interested in a particular class of target functions (e.g., only linear functions), then the sample complexity is finite, and it depends linearly on the [[VC dimension]] of the class of target functions.
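
As a minimal sketch of this dependence in standard PAC-learning notation (the symbols <math>\epsilon</math> for accuracy, <math>\delta</math> for confidence, and <math>d</math> for the VC dimension are assumptions of this sketch and are not defined in this section), the sample complexity in the realizable setting takes the form

:<math>N(\epsilon,\delta) = O\!\left(\frac{1}{\epsilon}\left(d\log\frac{1}{\epsilon} + \log\frac{1}{\delta}\right)\right),</math>

while in the agnostic setting it takes the form <math>O\!\left(\frac{1}{\epsilon^{2}}\left(d + \log\frac{1}{\delta}\right)\right)</math>; in both cases the dependence on <math>d</math> is linear.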