===Efficiency===
The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such as [[Fourier–Motzkin elimination]]. However, in 1972, [[Victor Klee|Klee]] and Minty<ref name="KleeMinty">{{cite book|title=Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin)|editor-first=Oved|editor-last=Shisha|publisher=Academic Press|location=New York-London|year=1972|mr=332165|last1=Klee|first1=Victor|authorlink1=Victor Klee|last2=Minty|first2=George J.|authorlink2=George J. Minty|chapter=How good is the simplex algorithm?|pages=159–175|ref=harv}}</ref> gave an example showing that the worst-case complexity of the simplex method as formulated by Dantzig is [[exponential time|exponential]]. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly. It is an open question whether there is a variation with [[polynomial time|polynomial]], or even sub-exponential, worst-case complexity.<ref name="PapSte">[[Christos H. Papadimitriou]] and Kenneth Steiglitz, ''Combinatorial Optimization: Algorithms and Complexity'', Corrected republication with a new preface, Dover. (computer science)</ref><ref name="Schrijver">[[Alexander Schrijver]], ''Theory of Linear and Integer Programming''. John Wiley & Sons, 1998, ISBN 0-471-98232-6 (mathematical)</ref>
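To illustrate the exponential worst case, one commonly cited form of the Klee–Minty construction is the following linear program in ''n'' variables (the constants and scaling vary between presentations, so this should be read as a representative sketch rather than the exact program of the original paper):
<math display="block">
\begin{align}
\text{maximize} \quad & 2^{n-1}x_1 + 2^{n-2}x_2 + \cdots + 2x_{n-1} + x_n \\
\text{subject to} \quad & x_1 \le 5, \\
& 4x_1 + x_2 \le 25, \\
& 8x_1 + 4x_2 + x_3 \le 125, \\
& \qquad \vdots \\
& 2^{n}x_1 + 2^{n-1}x_2 + \cdots + 4x_{n-1} + x_n \le 5^n, \\
& x_1, \ldots, x_n \ge 0.
\end{align}
</math>
Starting from the origin and using Dantzig's original pivoting rule, the simplex method visits all <math>2^n</math> vertices of this deformed cube before reaching the optimum, so the number of pivots grows exponentially with ''n''.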
Analyzing and quantifying the observation that the simplex algorithm is efficient in practice, despite its exponential worst-case complexity, has led to the development of other measures of complexity. The simplex algorithm has polynomial-time [[Best, worst and average case|average-case complexity]] under various [[probability distribution]]s, with the precise average-case performance depending on the choice of a probability distribution for the [[random matrix|random matrices]].<ref name="Schrijver"/><ref name="Borgwardt">The simplex algorithm takes on average ''D'' steps for a cube. {{harvtxt|Borgwardt|1987}}: {{cite book|last=Borgwardt|first=Karl-Heinz|title=The simplex method: A probabilistic analysis|series=Algorithms and Combinatorics (Study and Research Texts)|volume=1|publisher=Springer-Verlag|location=Berlin|year=1987|pages=xii+268|isbn=3-540-17096-0|mr=868467|ref=harv}}</ref> Another approach to studying "[[porous set|typical phenomena]]" uses [[Baire category theory]] from [[general topology]] to show that (topologically) "most" matrices can be solved by the simplex algorithm in a polynomial number of steps. Yet another way to analyze the performance of the simplex algorithm studies the behavior of worst-case scenarios under small perturbations: are worst-case scenarios stable under small changes (in the sense of [[structural stability]]), or do they become tractable? Formally, this approach adds a [[normal distribution|Gaussian]] [[random vector]] to the problem data and measures the expected running time ("[[smoothed complexity]]").<ref>{{Cite book | last1=Spielman | first1=Daniel | last2=Teng | first2=Shang-Hua | author2-link=Shanghua Teng | title=Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing | publisher=ACM | isbn=978-1-58113-349-3 | doi=10.1145/380752.380813 | year=2001 | chapter=Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time| pages=296–305 | arxiv=cs/0111050}}</ref>
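In rough terms (the notation here is a simplified sketch of the definition used by Spielman and Teng, not their exact statement), the smoothed complexity of an algorithm with running time <math>T</math> on inputs of size ''n'' is
<math display="block">
C(n,\sigma) \;=\; \max_{\|\bar A\| \le 1} \; \operatorname{E}_{G}\bigl[\, T(\bar A + \sigma G) \,\bigr],
</math>
where the maximum ranges over suitably normalized inputs <math>\bar A</math> of size ''n'' and <math>G</math> is a matrix of independent standard Gaussian entries. A bound that is polynomial in ''n'' and <math>1/\sigma</math> means that the exponential worst-case instances are fragile: a small random perturbation almost always yields an instance that the simplex method solves quickly.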