== Potential-reduction methods ==
Potential-reduction methods are elaborated in.<ref name=":0" />{{Rp|___location=Sec.5}} Here, the problem is presented in the ''conic form'':<blockquote>'''minimize ''c''<sup>T</sup>''x'' s.t. ''x'' in ''{b+L} ∩ K''''', </blockquote>where ''b'' is a vector in R<sup>''n''</sup>, ''L'' is a [[linear subspace]] in R<sup>''n''</sup> (so ''b''+''L'' is an [[affine plane]]), and ''K'' is a closed pointed [[convex cone]] with a nonempty interior. Every convex program can be converted to the conic form. To use the potential-reduction method (specifically, the extension of [[Karmarkar's algorithm]] to convex programming), we need the following assumptions:<ref name=":0" />{{Rp|___location=Sec.6}}
* A. The feasible set ''{b+L} ∩ K'' is bounded, and intersects the interior of the cone ''K''.
* B. We are given in advance a strictly-feasible solution ''x̂'', that is, a feasible solution in the interior of ''K''.
* C. We know in advance the optimal objective value, ''c''*, of the problem.
* D. We are given an ''M''-logarithmically-homogeneous [[self-concordant barrier]] ''F'' for the cone ''K''.
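As a concrete illustration (not taken from the source), the standard logarithmic barrier for the nonnegative orthant ''K'' = R<sup>''n''</sup><sub>+</sub> satisfies assumption D with ''M'' = ''n''. The Python sketch below checks its logarithmic homogeneity, ''F''(''tx'') = ''F''(''x'') − ''M'' ln ''t'', numerically:

```python
import math

def F(x):
    # Standard logarithmic barrier for the nonnegative orthant K = R^n_+;
    # it is an n-logarithmically-homogeneous self-concordant barrier.
    return -sum(math.log(xi) for xi in x)

x = [0.5, 2.0, 3.0]
t = 4.0
M = len(x)  # barrier parameter: M = n for this cone

lhs = F([t * xi for xi in x])        # F(t*x)
rhs = F(x) - M * math.log(t)         # F(x) - M*ln(t)
print(abs(lhs - rhs) < 1e-9)         # → True (logarithmic homogeneity holds)
```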
 
Assumptions A, B and D are needed in most interior-point methods. Assumption C is specific to Karmarkar's approach; it can be alleviated by using a "sliding objective value". It is possible to further reduce the program to the ''Karmarkar format'':<blockquote>'''minimize ''s''<sup>T</sup>''x'' s.t. ''x'' in ''M ∩ K'' and ''e''<sup>T</sup>''x'' = 1''' </blockquote>where ''M'' is a [[linear subspace]] of R<sup>''n''</sup>, and the optimal objective value is 0.
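A minimal sketch of the conic form, with hypothetical data: taking ''K'' to be the nonnegative orthant and writing the affine plane ''b''+''L'' as the solution set of ''Ax'' = ''d'' (so ''L'' is the null space of ''A''), feasibility of a candidate point can be checked directly:

```python
# Hypothetical tiny instance: minimize c^T x over (b+L) ∩ K,
# with K = R^3_+ and the affine plane given by A x = d.
A = [[1.0, 1.0, 1.0]]
d = [1.0]
c = [2.0, 1.0, 3.0]

def feasible(x, tol=1e-9):
    # x lies on the affine plane b+L iff A x = d.
    in_affine = all(abs(sum(a * xi for a, xi in zip(row, x)) - di) <= tol
                    for row, di in zip(A, d))
    # x lies in the cone K iff all coordinates are nonnegative.
    in_cone = all(xi >= -tol for xi in x)
    return in_affine and in_cone

x = [0.2, 0.5, 0.3]
print(feasible(x), sum(ci * xi for ci, xi in zip(c, x)))  # feasibility and objective value
```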
 
The method is based on the following [[potential function]]:<blockquote>''v''(''x'') = ''F''(''x'') + ''M'' ln (''s''<sup>T</sup>''x'')</blockquote>where ''F'' is the ''M''-self-concordant barrier for the feasible cone. It is possible to prove that, when ''x'' is strictly feasible and ''v''(''x'') is very small (i.e., very negative), ''x'' is approximately-optimal. The idea of the potential-reduction method is to modify ''x'' such that the potential at each iteration drops by at least a fixed constant ''X'' (specifically, ''X'' = 1/3 − ln(4/3)). This implies that, after ''i'' iterations, the difference between the objective value and the optimal objective value is at most ''V'' · exp(−''i X'' / ''M''), where ''V'' is a data-dependent constant. Therefore, the number of Newton steps required for an ''ε''-approximate solution is at most <math>O(1) \cdot M \cdot \ln\left(\frac{V}{\varepsilon} + 1\right)+1 </math>.
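The geometric decay of the gap can be illustrated numerically. In the sketch below, the values of ''M'', ''V'' and ''ε'' are hypothetical, and ''X'' = 1/3 − ln(4/3) is the guaranteed per-iteration potential drop from the analysis above:

```python
import math

# Hypothetical numbers for illustration: M (barrier parameter),
# V (data-dependent constant), eps (target accuracy).
M, V, eps = 10.0, 100.0, 1e-6
X = 1 / 3 - math.log(4 / 3)  # guaranteed potential drop per iteration

# Smallest i with V * exp(-i*X/M) <= eps, i.e. i >= (M/X) * ln(V/eps),
# matching the O(M * ln(V/eps + 1)) Newton-step bound up to constants.
i = math.ceil(M / X * math.log(V / eps))
print(i, V * math.exp(-i * X / M) <= eps)
```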
 
Note that the analogous iteration bound for path-following methods has <math>\sqrt{M}</math> in place of ''M'', which is better in theory. In practice, however, Karmarkar's method allows taking much larger steps towards the goal, so it may converge much faster than the theoretical guarantee.
 
==Primal-dual methods==