Simplex algorithm: Difference between revisions

External links: add Archive.org link to simplex-m
 
(14 intermediate revisions by 11 users not shown)
Line 2:
{{about|the linear programming algorithm|the non-linear optimization heuristic|Nelder–Mead method}}
<!-- {{Context|date=March 2012}} -->
In [[optimization (mathematics)|mathematical optimization]], [[George Dantzig|Dantzig]]'s '''simplex algorithm''' (or '''simplex method''') is a popular [[algorithm]] for [[linear programming]].<ref name="Murty">{{cite book |last=Murty |first=Katta G. |author-link=Katta G. Murty |year=2000 |title=Linear programming |publisher=John Wiley & Sons |url=http://www.computer.org/csdl/mags/cs/2000/01/c1022.html}}</ref>{{failed verification|date=June 2025|reason=Could not locate reference.}}
 
The name of the algorithm is derived from the concept of a [[simplex]] and was suggested by [[Theodore Motzkin|T. S. Motzkin]].<ref name="Murty22" >{{harvtxt|Murty|1983|loc=Comment 2.2}}</ref> Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial ''[[cone (geometry)|cone]]s'', and these become proper simplices with an additional constraint.<ref name="Murty39">{{harvtxt|Murty|1983|loc=Note 3.9}}</ref><ref name="StoneTovey">{{cite journal|last1=Stone|first1=Richard E.|last2=Tovey|first2=Craig A.|title=The simplex and projective scaling algorithms as iteratively reweighted least squares methods|journal=SIAM Review|volume=33|year=1991|issue=2|pages=220–237}}</ref>
Line 8:
 
==History==
[[George Dantzig]] worked on planning methods for the US Army Air Force during World War II using a [[Mechanical_calculator#1900s_to_1970s|desk calculator]]. In 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities inspired by the work of [[Wassily Leontief]]; however, at that time he did not include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the "best" feasible solution, military-specified "ground rules" must be used that describe how goals can be achieved as opposed to specifying a goal itself. Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized.<ref>{{Cite journal|url = https://apps.dtic.mil/sti/pdfs/ADA112060.pdf|archive-url = https://web.archive.org/web/20150520183722/http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA112060|url-status = live|archive-date = May 20, 2015|title = Reminiscences about the origins of linear programming|date = April 1982|journal = Operations Research Letters|doi = 10.1016/0167-6377(82)90043-8|volume = 1|issue = 2 |pages=43–48|last1 = Dantzig|first1 = George B.}}</ref> Development of the simplex method was evolutionary and happened over a period of about a year.<ref>{{Cite journal|url = http://www.phpsimplex.com/en/Dantzig_interview.htm|title = An Interview with George B. Dantzig: The Father of Linear Programming|last = Albers and Reid|date = 1986|journal = College Mathematics Journal|volume = 17|issue = 4|doi = 10.1080/07468342.1986.11972971|pages = 292–314}}</ref>
 
After Dantzig included an objective function as part of his formulation during mid-1947, the problem was mathematically more tractable. Dantzig realized that one of the unsolved problems that [[George Dantzig#Education|he had mistaken]] as homework in his professor [[Jerzy Neyman]]'s class (and actually later solved), was applicable to finding an algorithm for linear programs. This problem involved finding the existence of [[Lagrange multipliers]] for general linear programs over a continuum of variables, each bounded between zero and one, and satisfying linear constraints expressed in the form of [[Lebesgue integral]]s. Dantzig later published his "homework" as a thesis to earn his doctorate. The column geometry used in this thesis gave Dantzig insight that made him believe that the Simplex method would be very efficient.<ref>{{Cite encyclopedia|url = http://apps.dtic.mil/dtic/tr/fulltext/u2/a182708.pdf|archive-url = https://web.archive.org/web/20150529003047/http://www.dtic.mil/dtic/tr/fulltext/u2/a182708.pdf|url-status = live|archive-date = May 29, 2015|title = Origins of the simplex method|last = Dantzig|first = George|date = May 1987|encyclopedia = A History of Scientific Computing|editor-last=Nash|editor-first=Stephen G.|publisher=Association for Computing Machinery|pages = 141–151|doi = 10.1145/87252.88081|isbn = 978-0-201-50814-7}}</ref>
Line 76:
:<math>
\begin{bmatrix}
1 & -\mathbf{c}^T & 0 \\
\mathbf{0} & \mathbf{A} & \mathbf{b}
\end{bmatrix}
</math>
 
The first row defines the objective function and the remaining rows specify the constraints. The zero in the first column represents the zero vector of the same dimension as the vector <math>\mathbf{b}</math> (different authors use different conventions as to the exact layout). If the columns of <math>\mathbf{A}</math> can be rearranged so that it contains the [[identity matrix]] of order <math>p</math> (the number of rows in <math>\mathbf{A}</math>) then the tableau is said to be in ''canonical form''.<ref>{{harvtxt|Murty|1983|loc=section 2.3.2}}</ref> The variables corresponding to the columns of the identity matrix are called ''basic variables'' while the remaining variables are called ''nonbasic'' or ''free variables''. If the values of the nonbasic variables are set to 0, then the values of the basic variables are easily obtained as entries in <math>\mathbf{b}</math>, and this solution is a basic feasible solution. The algebraic interpretation is that each row represents a linear equation in which exactly one column has coefficient <math>1</math>, the <math>p-1</math> other identity columns have coefficient <math>0</math>, and the remaining (nonbasic) columns may have arbitrary coefficients. Setting the nonbasic variables to zero therefore forces the basic variable with the <math>1</math> in each row to equal the entry of <math>\mathbf{b}</math> in that row.
 
Conversely, given a basic feasible solution, the columns corresponding to the nonzero variables can be expanded to a nonsingular matrix. If the corresponding tableau is multiplied by the inverse of this matrix then the result is a tableau in canonical form.<ref>{{harvtxt|Murty|1983|loc=section 3.12}}</ref>
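The reduction to canonical form and the reading-off of a basic feasible solution can be sketched in a few lines of Python. This is an illustrative sketch, not part of the article: the tableau, the helper names, and the choice of basis columns are all hypothetical, and exact rational arithmetic is used so the identity columns come out exactly.

```python
from fractions import Fraction

def to_canonical(rows, basis):
    """Row-reduce so the chosen basis columns form an identity matrix.

    rows:  constraint rows [a_1, ..., a_n, b] (objective row omitted).
    basis: one column index per row; the corresponding columns are assumed
           to form a nonsingular matrix with nonzero pivots in this order.
    """
    T = [[Fraction(v) for v in row] for row in rows]
    for i, col in enumerate(basis):
        pivot = T[i][col]
        T[i] = [v / pivot for v in T[i]]          # scale pivot row to 1
        for r in range(len(T)):
            if r != i and T[r][col] != 0:          # eliminate column elsewhere
                factor = T[r][col]
                T[r] = [v - factor * p for v, p in zip(T[r], T[i])]
    return T

def basic_solution(T, basis, n_vars):
    """Set nonbasic variables to 0 and read basic values from the b column."""
    x = [Fraction(0)] * n_vars
    for i, col in enumerate(basis):
        x[col] = T[i][-1]
    return x

# Two hypothetical constraints in five variables; columns 3 and 4 are slacks.
rows = [[3, 2, 1, 1, 0, 10],
        [2, 5, 3, 0, 1, 15]]

T = to_canonical(rows, basis=[3, 4])   # already canonical: identity in cols 3, 4
print(basic_solution(T, [3, 4], 5))    # slacks 10 and 15 are basic, rest 0

T2 = to_canonical(rows, basis=[0, 2])  # re-express with columns 0 and 2 basic
print(basic_solution(T2, [0, 2], 5))   # a different basic feasible solution
```

Multiplying the tableau by the inverse of the basis matrix, as the text describes, is exactly what the Gaussian elimination in `to_canonical` performs row by row.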
Line 154:
:<math>
\begin{bmatrix}
1 & -\frac{2}{3} & -\frac{11}{3} & 0 & 0 & -\frac{4}{3} & -20 \\
0 & \frac{7}{3} & \frac{1}{3} & 0 & 1 & -\frac{1}{3} & 5 \\
0 & \frac{2}{3} & \frac{5}{3} & 1 & 0 & \frac{1}{3} & 5
\end{bmatrix}
</math>
Line 164:
 
For the next step, there are no positive entries in the objective row and in fact
<math display="block">Z = -20 + \frac{2}{3}x + \frac{11}{3}y + \frac{4}{3}t</math>
so the minimum value of ''Z'' is&nbsp;&minus;20.
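This optimality test is mechanical: with the sign convention used here, no positive entry may remain among the variable columns of the objective row, and the last entry of that row is then the optimal value. A minimal sketch (not from the article), using the final objective row of this example:

```python
from fractions import Fraction as F

# Objective row of the final tableau: [1, coefficients of x, y, z, s, t, Z-value].
objective_row = [F(1), F(-2, 3), F(-11, 3), F(0), F(0), F(-4, 3), F(-20)]

# With this sign convention, a positive entry in a variable column would
# mean Z can still be decreased by bringing that variable into the basis.
reduced = objective_row[1:-1]
optimal = all(entry <= 0 for entry in reduced)
print(optimal)            # True
print(objective_row[-1])  # -20
```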
 
Line 175:
Consider the linear program
:Minimize
::<math>Z = -2x - 3y - 4z\,</math>
 
:Subject to
Line 184:
\end{align}</math>
 
It differs from the previous example by having equality instead of inequality constraints. The previous solution <math>x=y=0,\, z=5</math> violates the first constraint.
This new problem is represented by the (non-canonical) tableau
:<math>
\begin{bmatrix}
Line 244 ⟶ 245:
</math>
 
This is, fortuitously, already optimal and the optimum value for the original linear program is&nbsp;&minus;130/7. This value is "worse" than &minus;20, which is to be expected for a problem that is more constrained.
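Because an optimum of a linear program is attained at a basic feasible solution, the value &minus;130/7 can be checked by brute-force enumeration of all candidate bases. This sketch is not part of the article; the constraints are restated here on the assumption that they are the previous example's inequalities turned into equalities:

```python
from fractions import Fraction
from itertools import combinations

# Assumed problem: minimize Z = -2x - 3y - 4z
# subject to 3x + 2y + z = 10,  2x + 5y + 3z = 15,  x, y, z >= 0.
A = [[Fraction(3), Fraction(2), Fraction(1)],
     [Fraction(2), Fraction(5), Fraction(3)]]
b = [Fraction(10), Fraction(15)]
c = [Fraction(-2), Fraction(-3), Fraction(-4)]

def solve_2x2(cols):
    """Solve the 2x2 system on the chosen basic columns by Cramer's rule."""
    i, j = cols
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if det == 0:
        return None                       # singular: not a valid basis
    xi = (b[0] * A[1][j] - b[1] * A[0][j]) / det
    xj = (A[0][i] * b[1] - A[1][i] * b[0]) / det
    return xi, xj

best = None
for cols in combinations(range(3), 2):    # every candidate basis of 2 columns
    sol = solve_2x2(cols)
    if sol is None or min(sol) < 0:       # skip singular or infeasible bases
        continue
    x = [Fraction(0)] * 3                 # nonbasic variable stays at 0
    x[cols[0]], x[cols[1]] = sol
    z = sum(ci * xi for ci, xi in zip(c, x))
    if best is None or z < best[0]:
        best = (z, x)

print(best)   # minimum Z = -130/7 at (x, y, z) = (15/7, 0, 25/7)
```

Enumeration is exponential in general and only practical for toy problems; the simplex method visits the same basic feasible solutions selectively.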
 
==Advanced topics==
Line 292 ⟶ 293:
</ref>
 
In 2014, it was proved{{citation-needed|date=January 2024}} that a particular variant of the simplex method is [[NP-mighty]], i.e., it can be used to solve, with polynomial overhead, any problem in NP implicitly during the algorithm's execution. Moreover, deciding whether a given variable ever enters the basis during the algorithm's execution on a given input, and determining the number of iterations needed for solving a given problem, are both [[NP-hardness|NP-hard]] problems.<ref>{{Cite journal|last1=Disser|first1=Yann|last2=Skutella|first2=Martin|date=2018-11-01|title=The Simplex Algorithm Is NP-Mighty|journal=ACM Trans. Algorithms|volume=15|issue=1|pages=5:1–5:19|doi=10.1145/3280847|issn=1549-6325|arxiv=1311.5935|s2cid=54445546}}</ref> At about the same time it was shown that there exists an artificial pivot rule for which computing its output is [[PSPACE-complete]].<ref>{{Citation | last1 = Adler | first1 = Ilan|author1-link=Ilan Adler | last2 = Christos | first2 = Papadimitriou | author2-link = Christos Papadimitriou | last3 = Rubinstein | first3 = Aviad | title = Integer Programming and Combinatorial Optimization | chapter = On Simplex Pivoting Rules and Complexity Theory | volume = 17 | pages = 13–24 | year = 2014 | arxiv = 1404.3320 | doi = 10.1007/978-3-319-07557-0_2| series = Lecture Notes in Computer Science | isbn = 978-3-319-07556-3 | s2cid = 891022 }}</ref> In 2015, this was strengthened to show that computing the output of Dantzig's pivot rule is [[PSPACE-complete]].<ref>{{Citation | last1 = Fearnly | first1 = John | last2 = Savani | first2 = Rahul | title = Proceedings of the forty-seventh annual ACM symposium on Theory of Computing | chapter = The Complexity of the Simplex Method | pages = 201–208 | year = 2015 | arxiv = 1404.0605 | doi = 10.1145/2746539.2746558| isbn = 9781450335362 | s2cid = 2116116 }}</ref>
 
===Efficiency in practice===
Line 300 ⟶ 301:
 
==Other algorithms==
Other algorithms for solving linear-programming problems are described in the [[linear programming|linear-programming]] article. Another basis-exchange pivoting algorithm is the [[criss-cross algorithm]].<ref>{{cite journal|last1=Terlaky|first1=Tamás|last2=Zhang|first2=Shu Zhong|title=Pivot rules for linear programming: A Survey on recent theoretical developments|issue=1|journal=Annals of Operations Research|volume=46–47|year=1993|pages=203–233|doi=10.1007/BF02096264|mr=1260019|citeseerx = 10.1.1.36.7658 |s2cid=6058077|issn=0254-5330}}</ref><ref>{{cite journal|first1=Komei|last1=Fukuda|author1-link=Komei Fukuda|first2=Tamás|last2=Terlaky|author2-link=Tamás Terlaky|title=Criss-cross methods: A fresh view on pivot algorithms |journal=Mathematical Programming, Series B|volume=79|number=1–3|pages=369–395|editor1=Thomas M. Liebling |editor2=Dominique de Werra|publisher=North-Holland Publishing |___location=Amsterdam|year=1997|doi=10.1007/BF02614325|mr=1464775|s2cid=2794181 |url=http://infoscience.epfl.ch/record/77270 }}</ref> There are polynomial-time algorithms for linear programming that use interior point methods: these include [[Khachiyan]]'s [[ellipsoidal algorithm]], [[Karmarkar]]'s [[Karmarkar's algorithm|projective algorithm]], and [[interior point method|path-following algorithm]]s.<ref name="Vanderbei"/> The [[Big_M_method|Big-M method]] is an alternative strategy for solving a linear program, using a single-phase simplex.
 
==Linear-fractional programming==
Line 311 ⟶ 312:
==See also==
{{div col}}
* [[Bland's rule|Pivoting rule of Bland]], which avoids cycling
* [[Criss-cross algorithm]]
* [[Cutting-plane method]]
Line 318 ⟶ 320:
* [[Karmarkar's algorithm]]
* [[Nelder–Mead method|Nelder–Mead simplicial heuristic]]
* [[Loss function]]s, a type of objective function
{{colend}}
 
Line 341 ⟶ 343:
* [http://math.uww.edu/~mcfarlat/s-prob.htm Example of Simplex Procedure for a Standard Linear Programming Problem] by Thomas McFarland of the University of Wisconsin-Whitewater.
* [http://www.phpsimplex.com/simplex/simplex.htm?l=en PHPSimplex: online tool to solve Linear Programming Problems] by Daniel Izquierdo and Juan José Ruiz of the University of Málaga (UMA, Spain)
* [https://web.archive.org/web/20180513100217/http://simplex-m.com/ simplex-m] Online Simplex Solver
 
{{Optimization algorithms|convex}}