The name of the algorithm is derived from the concept of a [[simplex]] and was suggested by [[Theodore Motzkin|T. S. Motzkin]].<ref name="Murty22" >{{harvtxt|Murty|1983|loc=Comment 2.2}}</ref> Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial ''[[cone (geometry)|cone]]s'', and these become proper simplices with an additional constraint.<ref name="Murty39">{{harvtxt|Murty|1983|loc=Note 3.9}}</ref><ref name="StoneTovey">{{cite journal|last1=Stone|first1=Richard E.|last2=Tovey|first2=Craig A.|title=The simplex and projective scaling algorithms as iteratively reweighted least squares methods|journal=SIAM Review|volume=33|year=1991|issue=2|pages=220–237
|mr=1124362|jstor=2031142|doi=10.1137/1033049}}</ref><ref>{{cite journal|last1=Stone|first1=Richard E.|last2=Tovey|first2=Craig A.|title=Erratum: The simplex and projective scaling algorithms as iteratively reweighted least squares methods|journal=SIAM Review|volume=33|year=1991|issue=3|pages=461|mr=1124362|doi=10.1137/1033100|jstor=2031443}}</ref><ref name="Strang">{{cite journal|last=Strang|first=Gilbert|authorlink=Gilbert Strang|title=Karmarkar's algorithm and its place in applied mathematics|journal=[[The Mathematical Intelligencer]]|date=1 June 1987|issn=0343-6993|pages=4–10|volume=9|doi=10.1007/BF03025891|mr=883185|issue=2|s2cid=123541868}}</ref> The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a [[polytope]]. The shape of this polytope is defined by the [[System of linear inequalities|constraints]] applied to the objective function.
== Overview ==
== History ==
George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. In 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as a system of linear inequalities inspired by the work of [[Wassily Leontief]]; however, at that time he did not include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the "best" feasible solution, military-specified "ground rules" must be used that describe how goals can be achieved, as opposed to specifying a goal itself. Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized.<ref>{{Cite journal|url = http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA112060|title = Reminiscences about the origins of linear programming|date = April 1982|journal = Operations Research Letters|doi = 10.1016/0167-6377(82)90043-8|pmid = |access-date = |volume = 1|issue = 2 |pages=43–48|last1 = Dantzig|first1 = George B.}}</ref> Development of the simplex method was evolutionary and happened over a period of about a year.<ref>{{Cite journal|url = http://www.phpsimplex.com/en/Dantzig_interview.htm|title = An Interview with George B. Dantzig: The Father of Linear Programming|last = Albers and Reid|date = 1986|journal = College Mathematics Journal|volume = 17|issue = 4|doi = 10.1080/07468342.1986.11972971|pmid = |access-date = |pages = 292–314}}</ref>
After Dantzig included an objective function as part of his formulation in mid-1947, the problem was mathematically more tractable. Dantzig realized that one of the unsolved problems that [[George Dantzig#Mathematical statistics|he had mistaken]] for homework in his professor [[Jerzy Neyman]]'s class (and actually solved later) was applicable to finding an algorithm for linear programs. This problem involved finding the existence of [[Lagrange multipliers on Banach spaces|Lagrange multipliers]] for general linear programs over a continuum of variables, each bounded between zero and one, and satisfying linear constraints expressed in the form of [[Lebesgue integral]]s. Dantzig later published his "homework" as a thesis to earn his doctorate. The column geometry used in this thesis gave Dantzig insight that made him believe that the simplex method would be very efficient.<ref>{{Cite book|url = http://www.dtic.mil/dtic/tr/fulltext/u2/a182708.pdf|
==Standard form==
If the values of all basic variables are strictly positive, then a pivot must result in an improvement in the objective value. When this is always the case, no set of basic variables occurs twice and the simplex algorithm must terminate after a finite number of steps. Basic feasible solutions where at least one of the ''basic'' variables is zero are called ''degenerate'' and may result in pivots for which there is no improvement in the objective value. In this case there is no actual change in the solution but only a change in the set of basic variables. When several such pivots occur in succession, there is no improvement; in large industrial applications, degeneracy is common and such "''stalling''" is notable.
Worse than stalling is the possibility that the same set of basic variables occurs twice, in which case the deterministic pivoting rules of the simplex algorithm will produce an infinite loop, or "cycle". While degeneracy is the rule in practice and stalling is common, cycling is rare. A discussion of an example of practical cycling occurs in [[Manfred W. Padberg|Padberg]].<ref name="Padberg"/> [[Bland's rule]] prevents cycling and thus guarantees that the simplex algorithm always terminates.<ref name="Padberg"/><ref name="Bland">
{{cite journal|title=New finite pivoting rules for the simplex method|first=Robert G.|last=Bland|journal=Mathematics of Operations Research|volume=2|issue=2|date=May 1977|pages=103–107|doi=10.1287/moor.2.2.103|jstor=3689647|mr=459599|s2cid=18493293|url=https://semanticscholar.org/paper/874b988e359f63c8068226c53ef0a9bcd54e5e4d}}</ref><ref>{{harvtxt|Murty|1983|p=79}}</ref> Another pivoting algorithm, the [[criss-cross algorithm]], never cycles on linear programs.<ref>There are abstract optimization problems, called [[oriented matroid]] programs, on which Bland's rule cycles (incorrectly) while the [[criss-cross algorithm]] terminates correctly.</ref>
History-based pivot rules such as [[Zadeh's rule]] and [[Cunningham's rule]] also try to circumvent the issue of stalling and cycling by keeping track of how often particular variables are being used and then favoring variables that have been used least often.
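As a concrete illustration of one such anti-cycling rule, the following minimal Python sketch shows how [[Bland's rule]] could select the entering and leaving variables in a tableau-based implementation of a minimization problem. The tableau layout, variable names, and function signature are assumptions made for this example, not a standard library interface.

<syntaxhighlight lang="python">
def blands_rule_pivot(tableau, basis):
    """Pick a pivot with Bland's rule (illustrative sketch, not a full solver).

    Conventions assumed for this example:
      * `tableau` is a list of lists for a minimization problem in tableau form;
        the last row holds the reduced costs and the last column the right-hand sides.
      * `basis[i]` is the index of the basic variable associated with row i.
    Returns (pivot_row, pivot_col), or None if no reduced cost is negative
    (i.e. the current basic feasible solution is optimal).
    """
    m = len(tableau) - 1       # number of constraint rows
    n = len(tableau[0]) - 1    # number of columns, excluding the right-hand side

    # Entering variable: the *lowest-indexed* column with a negative reduced cost.
    pivot_col = next((j for j in range(n) if tableau[m][j] < 0), None)
    if pivot_col is None:
        return None

    # Leaving variable: minimum-ratio test, breaking ties by the
    # *lowest-indexed* basic variable -- this tie-breaking is what prevents cycling.
    best = None  # (ratio, row)
    for i in range(m):
        if tableau[i][pivot_col] > 0:
            ratio = tableau[i][-1] / tableau[i][pivot_col]
            if (best is None or ratio < best[0]
                    or (ratio == best[0] and basis[i] < basis[best[1]])):
                best = (ratio, i)
    if best is None:
        raise ValueError("column is unbounded; the linear program has no finite optimum")
    return best[1], pivot_col
</syntaxhighlight>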
===Efficiency===
The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such as [[Fourier–Motzkin elimination]]. However, in 1972, [[Victor Klee|Klee]] and Minty<ref name="KleeMinty">{{cite book|title=Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin)|editor-first=Oved|editor-last=Shisha|publisher=Academic Press|___location=New York-London|year=1972|mr=332165|last1=Klee|first1=Victor|authorlink1=Victor Klee|last2=Minty|first2= George J.|authorlink2=George J. Minty|chapter=How good is the simplex algorithm?|pages=159–175}}</ref> gave an example, the [[Klee–Minty cube]], showing that the worst-case complexity of the simplex method as formulated by Dantzig is [[exponential time]]. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly. It is an open question whether there is a variation with [[polynomial time]], although sub-exponential pivot rules are known.<ref>{{Citation
| last1 = Hansen
| first1 = Thomas Dueholm
| title = An Improved Version of the Random-Facet Pivoting Rule for the Simplex Algorithm
| year = 2015
| last2 = Zwick
| first2 = Uri
| citeseerx = 10.1.1.697.2526
| isbn = 9781450335362
| s2cid = 1980659
}}
</ref>
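For concreteness, the following minimal Python sketch builds one common textbook statement of the Klee–Minty cube, maximizing <math>\textstyle\sum_{j=1}^{n} 2^{n-j} x_j</math> subject to <math>\textstyle 2\sum_{j=1}^{i-1} 2^{i-j} x_j + x_i \le 5^i</math> for <math>i = 1, \dots, n</math> and <math>x \ge 0</math>. The exact coefficients vary between presentations, and the solver invoked here does not use Dantzig's original pivot rule, so it will not actually traverse all <math>2^n</math> vertices; the snippet only constructs and solves the instance.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def klee_minty(n):
    """Build one common form of the Klee-Minty cube with n variables.

    maximize   sum_j 2**(n - j) * x_j
    subject to 2 * sum_{j < i} 2**(i - j) * x_j + x_i <= 5**i   (i = 1, ..., n)
               x >= 0
    Returned as (c, A_ub, b_ub) for scipy.optimize.linprog, which minimizes,
    so the objective vector is negated.
    """
    c = -np.array([2.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        A[i - 1, i - 1] = 1.0
        for j in range(1, i):
            A[i - 1, j - 1] = 2.0 ** (i - j + 1)   # 2 * 2**(i - j)
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    return c, A, b

c, A, b = klee_minty(4)
result = linprog(c, A_ub=A, b_ub=b, method="highs")  # default bounds give x >= 0
print(result.x)  # optimal vertex: (0, 0, 0, 625), i.e. (0, ..., 0, 5**n)
</syntaxhighlight>

For this formulation the optimum is the vertex <math>(0, \dots, 0, 5^n)</math>, which Dantzig's pivot rule reaches only after visiting every other vertex of the distorted cube.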
In 2018, it was proved that a particular variant of the simplex method is [[NP-mighty]], i.e., it can be used to solve, with polynomial overhead, any problem in NP implicitly during the algorithm's execution. Moreover, deciding whether a given variable ever enters the basis during the algorithm's execution on a given input, and determining the number of iterations needed for solving a given problem, are both [[NP-hardness|NP-hard]] problems.<ref>{{Cite journal|
Analyzing and quantifying the observation that the simplex algorithm is efficient in practice despite its exponential worst-case complexity has led to the development of other measures of complexity. The simplex algorithm has polynomial-time [[Best, worst and average case|average-case complexity]] under various [[probability distribution]]s, with the precise average-case performance of the simplex algorithm depending on the choice of a probability distribution for the [[random matrix|random matrices]].<ref name="Schrijver" >[[Alexander Schrijver]], ''Theory of Linear and Integer Programming''. John Wiley & sons, 1998, {{isbn|0-471-98232-6}} (mathematical)</ref><ref name="Borgwardt">The simplex algorithm takes on average ''D'' steps for a cube. {{harvtxt|Borgwardt|1987}}: {{cite book|last=Borgwardt|first=Karl-Heinz|title=The simplex method: A probabilistic analysis|series=Algorithms and Combinatorics (Study and Research Texts)|volume=1|publisher=Springer-Verlag|___location=Berlin|year=1987|pages=xii+268|isbn=978-3-540-17096-9|mr=868467}}</ref> Another approach to studying "[[porous set|typical phenomena]]" uses [[Baire category theory]] from [[general topology]] to show that (topologically) "most" matrices can be solved by the simplex algorithm in a polynomial number of steps.{{Citation needed|date=June 2019}} Another method to analyze the performance of the simplex algorithm studies the behavior of worst-case scenarios under small perturbation: are worst-case scenarios stable under a small change (in the sense of [[structural stability]]), or do they become tractable? This area of research, called [[smoothed analysis]], was introduced specifically to study the simplex method. Indeed, the running time of the simplex method on input with noise is polynomial in the number of variables and the magnitude of the perturbations.<ref>{{Cite book | last1=Spielman | first1=Daniel | last2=Teng | first2=Shang-Hua | author2-link=Shanghua Teng | title=Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing | publisher=ACM | isbn=978-1-58113-349-3 | doi=10.1145/380752.380813 | year=2001 | chapter=Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time| pages=296–305 | arxiv=cs/0111050| s2cid=1471 }}</ref>
==Other algorithms==
Other algorithms for solving linear-programming problems are described in the [[linear programming|linear-programming]] article. Another basis-exchange pivoting algorithm is the [[criss-cross algorithm]].<ref>{{cite journal|last1=Terlaky|first1=Tamás|last2=Zhang|first2=Shu Zhong|title=Pivot rules for linear programming: A Survey on recent theoretical developments|issue=1|journal=Annals of Operations Research|volume=46–47|year=1993|pages=203–233|doi=10.1007/BF02096264|mr=1260019|citeseerx = 10.1.1.36.7658 |s2cid=6058077|issn=0254-5330}}</ref><ref>{{cite journal|first1=Komei|last1=Fukuda|first2=Tamás|last2=Terlaky|title=Criss-cross methods: A fresh view on pivot algorithms |journal=Mathematical Programming, Series B|volume=79|number=1–3|pages=369–395|editor1=Thomas M. Liebling |editor2=Dominique de Werra|publisher=North-Holland Publishing |___location=Amsterdam|year=1997|doi=10.1007/BF02614325|mr=1464775}}</ref> There are polynomial-time algorithms for linear programming that use interior point methods: these include [[Khachiyan]]'s [[ellipsoidal algorithm]], [[Karmarkar]]'s [[Karmarkar's algorithm|projective algorithm]], and [[interior point method|path-following algorithm]]s.<ref name="Vanderbei"/>
==Linear-fractional programming==