Average-case complexity

In [[computational complexity theory]], the '''average-case complexity''' of an [[algorithm]] is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with [[worst-case complexity]], which considers the maximal complexity of the algorithm over all possible inputs.
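The distinction can be illustrated with linear search: a minimal sketch (the function name `linear_search_cost` is chosen here for illustration), assuming each target position in the array is equally likely, so the average cost is the mean number of comparisons over all positions.

```python
def linear_search_cost(arr, target):
    """Return the number of comparisons linear search makes to find target."""
    comparisons = 0
    for x in arr:
        comparisons += 1
        if x == target:
            break
    return comparisons

n = 8
arr = list(range(n))

# Cost for each of the n equally likely target positions:
costs = [linear_search_cost(arr, t) for t in arr]

average_case = sum(costs) / n   # (n + 1) / 2 = 4.5 comparisons on average
worst_case = max(costs)         # n = 8 comparisons when target is last
```

Here the worst-case cost grows as ''n'', while the average over the assumed uniform input distribution is (''n'' + 1)/2, about half as many comparisons.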
 
There are three primary motivations for studying average-case complexity.<ref name="gol07">O. Goldreich and S. Vadhan, Special issue on worst-case versus average-case complexity, Comput. Complex. 16, 325–330, 2007.</ref> First, although some problems may be intractable in the worst case, the inputs which elicit this behavior may rarely occur in practice, so the average-case complexity may be a more accurate measure of an algorithm's performance. Second, average-case complexity analysis provides tools and techniques to generate hard instances of problems which can be utilized in areas such as [[cryptography]] and [[derandomization]]. Third, average-case complexity allows discriminating the most efficient algorithm in practice among algorithms of equivalent best-case complexity (for instance [[Quicksort#Formal analysis|Quicksort]]).