{{short description|Extremes of a linear function over a convex polygonal region occur at the region's corners}}
In [[mathematical optimization]], the '''fundamental theorem of [[linear programming]]''' states, in a weak formulation, that the [[maxima]] and [[minima]] of a [[linear function]] over a [[convex polygon]]al region occur at the region's corners. Further, if an extreme value occurs at two corners, then it must also occur everywhere on the [[line segment]] between them.
 
==Statement==
Consider the optimization problem
 
:<math>\min c^T x \text{ subject to } x \in P</math>
 
where <math>P = \{x \in \mathbb{R}^n : Ax \leq b\}</math>. If <math>P</math> is a bounded polyhedron (and thus a polytope) and <math>x^\ast</math> is an optimal solution to the problem, then <math>x^\ast</math> is either an extreme point (vertex) of <math>P</math>, or lies on a face <math>F \subset P</math> of optimal solutions.
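
As a simple illustration, take <math>n = 2</math> and <math>c = (-1, -1)^T</math>, and let the constraints <math>Ax \leq b</math> encode <math>0 \leq x_1 \leq 1</math> and <math>0 \leq x_2 \leq 1</math>, so that <math>P</math> is the unit square. The objective <math>c^T x = -x_1 - x_2</math> attains its minimum value <math>-2</math> over <math>P</math> at the vertex <math>(1, 1)</math>, as the theorem asserts.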
 
==Proof==
Suppose, for the sake of contradiction, that <math>x^\ast \in \mathrm{int}(P)</math>, and assume <math>c \neq 0</math> (otherwise the objective is constant and every point of <math>P</math> is optimal). Then there exists some <math>\epsilon > 0</math> such that the ball of radius <math>\epsilon</math> centered at <math>x^\ast</math> is contained in <math>P</math>, that is, <math>B_{\epsilon}(x^\ast) \subset P</math>. Therefore,
 
:<math>x^\ast - \frac{\epsilon}{2} \frac{c}{||c||} \in P</math> and
 
:<math>c^T\left( x^\ast - \frac{\epsilon}{2} \frac{c}{||c||}\right) = c^T x^\ast - \frac{\epsilon}{2} \frac{c^T c}{||c||} = c^T x^\ast - \frac{\epsilon}{2} ||c|| < c^T x^\ast.</math>
 
Hence <math>x^\ast</math> is not an optimal solution, a contradiction. Therefore, <math>x^\ast</math> must lie on the boundary of <math>P</math>. If <math>x^\ast</math> is not a vertex itself, it must be a convex combination of vertices of <math>P</math>, say <math>x_1, ..., x_t</math>. Then <math>x^\ast = \sum_{i=1}^t \lambda_i x_i</math> with <math>\lambda_i > 0</math> (vertices with zero coefficient may be discarded) and <math>\sum_{i=1}^t \lambda_i = 1</math>. Observe that
:<math>0=c^{T}\left(\left(\sum_{i=1}^{t}\lambda_{i}x_{i}\right)-x^{\ast}\right)=c^{T}\left(\sum_{i=1}^{t}\lambda_{i}(x_{i}-x^{\ast})\right)=\sum_{i=1}^{t}\lambda_{i}(c^{T}x_{i}-c^{T}x^{\ast}).</math>
 
Since <math>x^{\ast}</math> is an optimal solution, all terms in the sum are nonnegative. Since the sum is equal to zero, each individual term must be equal to zero. Hence, <math>c^{T}x^{\ast}=c^{T}x_{i}</math> for each <math>x_i</math>, so every <math>x_i</math> is also optimal, and therefore all points on the face whose vertices are <math>x_1, ..., x_t</math> are optimal solutions.
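
In the unit-square example above, replacing the objective with <math>c = (-1, 0)^T</math> illustrates this second case: the minimum value <math>-1</math> of <math>c^T x = -x_1</math> is attained at both vertices <math>(1, 0)</math> and <math>(1, 1)</math>, and hence at every point of the edge joining them.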
 
==References==
 
* {{cite book |last=Bertsekas |first=Dimitri P. |title=Nonlinear Programming |year=1995 |edition=1st |publisher=Athena Scientific |___location=Belmont, Massachusetts |page=Proposition B.21(c) |isbn=1-886529-14-0}}
* {{cite web |title=The Fundamental Theorem of Linear Programming |url=http://demonstrations.wolfram.com/TheFundamentalTheoremOfLinearProgramming/ |website=WOLFRAM Demonstrations Project |access-date=25 September 2024}}

{{Fundamental theorems}}
 
{{math-stub}}
{{comp-sci-stub}}
 
[[Category:Linear programming]]
[[Category:Theorems in mathematical analysis]]