In [[computational complexity theory]], the '''average-case complexity''' of an [[algorithm]] is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with [[worst-case complexity]], which considers the maximal complexity of the algorithm over all possible inputs.
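One way to make this precise (using, purely for illustration, a family of probability distributions <math>\{D_n\}</math> over inputs of length <math>n</math> and writing <math>t_A(x)</math> for the running time of an algorithm <math>A</math> on input <math>x</math>) is to define the average-case running time as the expectation
: <math>t_A^{\text{avg}}(n) = \operatorname{E}_{x \sim D_n}\!\left[t_A(x)\right] = \sum_{x : |x| = n} \Pr_{D_n}[x] \cdot t_A(x),</math>
whereas the corresponding worst-case running time is <math>\max_{x : |x| = n} t_A(x)</math>.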
There are three primary motivations for studying average-case complexity.<ref name="gol07">{{Cite journal |last1=Goldreich |first1=Oded |last2=Vadhan |first2=Salil |date=December 2007 |title=Special Issue On Worst-case Versus Average-case Complexity Editors' Foreword |url=https://link.springer.com/10.1007/s00037-007-0232-y |journal=Computational Complexity |language=en |volume=16 |issue=4 |pages=325–330 |doi=10.1007/s00037-007-0232-y |issn=1016-3328|doi-access=free }}</ref> First, although some problems may be intractable in the worst case, the inputs that elicit this behavior may rarely occur in practice, so the average-case complexity may be a more accurate measure of an algorithm's performance. Second, average-case complexity analysis provides tools and techniques for generating hard instances of problems, which can be used in areas such as [[cryptography]] and [[derandomization]]. Third, average-case complexity makes it possible to identify the algorithm that is most efficient in practice among algorithms of equivalent best-case complexity (for instance, [[Quicksort#Formal analysis|Quicksort]]).
Average-case analysis requires a notion of an "average" input to an algorithm, which leads to the problem of devising a [[probability distribution]] over inputs. Alternatively, a [[randomized algorithm]] can be used. The analysis of such algorithms leads to the related notion of an '''expected complexity'''.<ref name="clrs"/>{{rp|28}}
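For a randomized algorithm the expectation is taken over the algorithm's internal random choices rather than over the inputs: writing <math>r</math> for the algorithm's random bits, the expected running time on a fixed input <math>x</math> can be expressed as <math>\operatorname{E}_r[t_A(x; r)]</math>. For example, [[quicksort]] with randomly chosen pivots has expected running time <math>O(n \log n)</math> on every input of length <math>n</math>, whereas a deterministic variant with a fixed pivot rule achieves <math>O(n \log n)</math> only on average over uniformly random input orderings and takes <math>\Theta(n^2)</math> time in the worst case.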