'''Average-case complexity''' is a subfield of [[computational complexity theory]] that studies the complexity of [[algorithms]] over inputs drawn randomly from a particular [[probability distribution]]. It is frequently contrasted with [[worst-case complexity]], which considers the maximal complexity of the algorithm over all possible inputs.
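In symbols (a sketch only; the notation <math>t_A</math> and <math>\mu_n</math> is chosen here for illustration, not fixed by the formal definitions): if <math>t_A(x)</math> denotes the running time of an algorithm <math>A</math> on input <math>x</math>, and <math>\mu_n</math> is a probability distribution over the inputs of length <math>n</math>, then the worst-case cost at length <math>n</math> is <math>\max_{|x| = n} t_A(x)</math>, while the average-case cost is the expectation <math>\operatorname{E}_{x \sim \mu_n}[t_A(x)]</math>.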
 
There are three primary motivations for studying average-case complexity.<ref name="gol07">O. Goldreich and S. Vadhan, Special issue on worst-case versus average-case complexity, Comput. Complex. 16, 325-330, 2007.</ref> First, although some problems may be intractable in the worst case, the inputs which elicit this behavior may rarely occur in practice, so the average-case complexity may be a more accurate measure of an algorithm's performance. Second, average-case complexity analysis provides tools and techniques to generate hard instances of problems which can be utilized in areas such as [[cryptography]] and [[derandomization]]. Third, average-case complexity allows discriminating the most efficient algorithm in practice among algorithms of equivalent worst-case complexity (for instance [[Quicksort#Formal analysis|Quicksort]]), as the sketch below illustrates.
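As an illustration of the third point, the following is a minimal sketch (in Python; the function name and the test at the end are illustrative, not taken from the cited sources) of how the gap between Quicksort's worst-case and average-case behavior arises:

<syntaxhighlight lang="python">
import random

def quicksort(a):
    # Uses the first element as the pivot. An already-sorted input
    # then produces maximally unbalanced splits, giving Theta(n^2)
    # running time, while an input drawn uniformly at random is
    # sorted in Theta(n log n) expected time.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

# Average-case behavior: a uniformly random list is a typical input.
xs = [random.randrange(10**6) for _ in range(500)]
assert quicksort(xs) == sorted(xs)
</syntaxhighlight>

Choosing the pivot uniformly at random instead makes the expected running time <math>\Theta(n \log n)</math> on every input, one standard way in which average-case analysis informs algorithm design.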
 
==History and Background==