Sample complexity

* The strong variant takes the worst-case sample complexity over all input-output distributions.
 
The [[No free lunch theorem]], discussed below, proves that, in general, the strong sample complexity is infinite, i.e., there is no algorithm that can learn the globally optimal target function using a finite number of training samples.
 
However, if we are only interested in a particular class of target functions (e.g., only linear functions), then the sample complexity is finite, and it depends linearly on the [[VC dimension]] of the class of target functions.<ref name=":0" />
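As an illustration of this dependence, a standard realizable-case PAC bound gives a sample complexity on the order of <math>\frac{1}{\varepsilon}\left(d \ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right)</math>, where <math>d</math> is the VC dimension, <math>\varepsilon</math> the accuracy, and <math>\delta</math> the failure probability. The following sketch computes this bound numerically; the function name and the leading constant <code>c</code> are illustrative assumptions, not tight values from the literature.

```python
import math

def pac_sample_bound(vc_dim, epsilon, delta, c=1.0):
    """Rough realizable-case PAC sample-complexity estimate:
    m = (c / epsilon) * (d * ln(1/epsilon) + ln(1/delta)).
    The constant c is illustrative; tight constants vary by analysis.
    """
    return math.ceil(
        (c / epsilon) * (vc_dim * math.log(1 / epsilon) + math.log(1 / delta))
    )

# Example: linear classifiers in the plane have VC dimension 3.
m = pac_sample_bound(vc_dim=3, epsilon=0.1, delta=0.05)
```

Note that the bound grows linearly in the VC dimension: doubling <code>vc_dim</code> roughly doubles the required number of samples for fixed <math>\varepsilon</math> and <math>\delta</math>.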