Average-case complexity

==History and background==
 
The average-case performance of algorithms has been studied since modern notions of computational efficiency were developed in the 1950s. Much of this initial work focused on problems for which worst-case polynomial time algorithms were already known.<ref name="bog06">{{Cite journal |last1=Bogdanov |first1=Andrej |last2=Trevisan |first2=Luca |date=2006 |title=Average-Case Complexity |url=http://www.nowpublishers.com/article/Details/TCS-004 |journal=Foundations and Trends in Theoretical Computer Science |language=en |volume=2 |issue=1 |pages=1–106 |doi=10.1561/0400000004 |issn=1551-305X|url-access=subscription }}</ref> In 1973, [[Donald Knuth]]<ref name="knu73">{{cite book
| last = Knuth | first = Donald | title = [[The Art of Computer Programming]] | volume = 3 | publisher = Addison-Wesley | date = 1973
}}</ref> published Volume 3 of ''[[The Art of Computer Programming]]'', which extensively surveys the average-case performance of algorithms for problems solvable in worst-case polynomial time, such as sorting and median-finding.

An efficient algorithm for [[NP-complete|{{math|'''NP'''}}-complete]] problems is generally characterized as one that runs in polynomial time on all inputs; this amounts to requiring efficient worst-case complexity. However, an algorithm that is inefficient on a "small" number of inputs may still be efficient on "most" inputs that occur in practice. It is therefore desirable to study the properties of algorithms whose average-case complexity may differ from their worst-case complexity, and to find methods of relating the two.
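
A classic illustration of this gap (an informal sketch, not drawn from the sources cited in this article) is quicksort with a fixed pivot rule: it performs {{math|Θ(''n''<sup>2</sup>)}} comparisons on the comparatively rare already-sorted inputs, but only {{math|O(''n'' log ''n'')}} comparisons in expectation over a uniformly random permutation of {{mvar|n}} distinct keys.

<syntaxhighlight lang="python">
# Illustrative sketch only: deterministic quicksort that always uses the first
# element as the pivot.  On already-sorted input it performs Theta(n^2)
# comparisons (the worst case), yet over a uniformly random permutation of
# n distinct keys the expected number of comparisons is Theta(n log n),
# so the algorithm is efficient for "most" inputs.
def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    larger = [x for x in a[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
</syntaxhighlight>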
 
The fundamental notions of average-case complexity were developed by [[Leonid Levin]] in 1986 when he published a one-page paper<ref name="levin86">{{Cite journal |last=Levin |first=Leonid A. |date=February 1986 |title=Average Case Complete Problems |url=http://epubs.siam.org/doi/10.1137/0215020 |journal=SIAM Journal on Computing |language=en |volume=15 |issue=1 |pages=285–286 |doi=10.1137/0215020 |issn=0097-5397|url-access=subscription }}</ref> defining average-case complexity and completeness while giving an example of a complete problem for {{math|'''distNP'''}}, the average-case analogue of [[NP (complexity)|{{math|'''NP'''}}]].
 
==Definitions==
In his original paper, Levin gave an example of a distributional tiling problem that is average-case {{math|'''NP'''}}-complete.<ref name="levin86"/> A survey of known {{math|'''distNP'''}}-complete problems is available online.<ref name="wangsurvey"/>
 
One area of active research involves finding new {{math|'''distNP'''}}-complete problems. However, this can be difficult due to a result of Gurevich showing that any distributional problem with a flat distribution cannot be {{math|'''distNP'''}}-complete unless [[EXP|{{math|'''EXP'''}}]] = [[NEXP|{{math|'''NEXP'''}}]].<ref name="gur87">{{Cite book |last=Gurevich |first=Yuri |chapter=Complete and incomplete randomized NP problems |date=October 1987 |title=28th Annual Symposium on Foundations of Computer Science (SFCS 1987) |chapter-url=https://ieeexplore.ieee.org/document/4568261 |pages=111–117 |doi=10.1109/SFCS.1987.14|isbn=0-8186-0807-2 }}</ref> (A flat distribution {{mvar|μ}} is one for which there exists an {{math|''ε'' &gt; 0}} such that for any {{mvar|x}}, {{math|''μ''(''x'') ≤ 2<sup>−{{abs|''x''}}<sup>''ε''</sup></sup>}}.) A result by Livne shows that all natural {{math|'''NP'''}}-complete problems have {{math|'''distNP'''}}-complete versions.<ref name="livne06">{{Cite journal |last=Livne |first=Noam |date=December 2010 |title=All Natural NP-Complete Problems Have Average-Case Complete Versions |url=http://link.springer.com/10.1007/s00037-010-0298-9 |journal=Computational Complexity |language=en |volume=19 |issue=4 |pages=477–499 |doi=10.1007/s00037-010-0298-9 |issn=1016-3328|url-access=subscription }}</ref> However, the goal of finding a natural distributional problem that is {{math|'''distNP'''}}-complete has not yet been achieved.<ref name="gol97">{{Citation |last=Goldreich |first=Oded |title=Notes on Levin's Theory of Average-Case Complexity |date=2011 |work=Studies in Complexity and Cryptography. Miscellanea on the Interplay between Randomness and Computation |series=Lecture Notes in Computer Science |volume=6650 |pages=233–247 |editor-last=Goldreich |editor-first=Oded |url=http://link.springer.com/10.1007/978-3-642-22670-0_21 |access-date=2025-05-21 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |doi=10.1007/978-3-642-22670-0_21 |isbn=978-3-642-22669-4|url-access=subscription }}</ref>
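
As a simple illustration of the flatness condition (a constructed example, not one taken from the sources above), consider the distribution that assigns each nonempty binary string {{mvar|x}} probability {{math|''μ''(''x'') {{=}} 2<sup>−2{{abs|''x''}}</sup>}}. The total weight over the strings of each length {{mvar|n}} is {{math|2<sup>−''n''</sup>}}, so the weights sum to 1, and the distribution is flat, for instance with {{math|''ε'' {{=}} 1/2}}:
<math display="block">\mu(x) = 2^{-2|x|} \le 2^{-|x|} \le 2^{-|x|^{1/2}} \qquad \text{for all } |x| \ge 1.</math>
By contrast, a distribution that places weight proportional to {{math|1/{{abs|''x''}}<sup>2</sup>}} on a single string of each length is not flat, since for every {{math|''ε'' &gt; 0}} the bound {{math|2<sup>−{{abs|''x''}}<sup>''ε''</sup></sup>}} eventually falls below {{math|1/{{abs|''x''}}<sup>2</sup>}}.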
 
==Applications==
In 1992, Ben-David et al. showed that if all languages in {{math|'''distNP'''}} have good-on-average decision algorithms, they also have good-on-average search algorithms. Further, they showed that this conclusion holds under a weaker assumption: if every language in {{math|'''NP'''}} is easy on average for decision algorithms with respect to the uniform distribution, then it is also easy on average for search algorithms with respect to the uniform distribution.<ref name="bd92">{{Cite book |last1=Ben-David |first1=S. |last2=Chor |first2=B. |last3=Goldreich |first3=O. |chapter=On the theory of average case complexity |date=1989 |title=Proceedings of the twenty-first annual ACM symposium on Theory of computing - STOC '89 |chapter-url=http://portal.acm.org/citation.cfm?doid=73007.73027 |language=en |publisher=ACM Press |pages=204–216 |doi=10.1145/73007.73027 |isbn=978-0-89791-307-2}}</ref> Thus, cryptographic one-way functions can exist only if there are {{math|'''distNP'''}} problems over the uniform distribution that are hard on average for decision algorithms.
 
In 1993, Feigenbaum and Fortnow showed that non-adaptive random reductions cannot be used to prove that the existence of a good-on-average algorithm for a {{math|'''distNP'''}}-complete problem under the uniform distribution implies the existence of worst-case efficient algorithms for all problems in {{math|'''NP'''}}.<ref name="ff93">{{Cite journal |last1=Feigenbaum |first1=Joan |last2=Fortnow |first2=Lance |date=October 1993 |title=Random-Self-Reducibility of Complete Sets |url=http://epubs.siam.org/doi/10.1137/0222061 |journal=SIAM Journal on Computing |language=en |volume=22 |issue=5 |pages=994–1005 |doi=10.1137/0222061 |issn=0097-5397|url-access=subscription }}</ref> In 2003, Bogdanov and Trevisan generalized this result to arbitrary non-adaptive reductions.<ref name="bog03">{{Cite journal |last1=Bogdanov |first1=Andrej |last2=Trevisan |first2=Luca |date=January 2006 |title=On Worst-Case to Average-Case Reductions for NP Problems |url=https://epubs.siam.org/doi/10.1137/S0097539705446974 |journal=SIAM Journal on Computing |language=en |volume=36 |issue=4 |pages=1119–1159 |doi=10.1137/S0097539705446974 |issn=0097-5397|url-access=subscription }}</ref> These results indicate that worst-case and average-case complexity are unlikely to be linked through non-adaptive reductions.<ref name="bog06"/>
 
==See also==