== Input ==
Our goal is to solve a [[convex optimization]] problem of the form:<blockquote>'''minimize ''f''(''x'') s.t. ''x'' in ''G'','''</blockquote>where ''f'' is a [[convex function]], and ''G'' is a [[Convex set|convex subset]] of the Euclidean space ''R''<sup>''n''</sup>.
We assume that we have a "subgradient oracle": a routine that can compute a [[subgradient]] of ''f'' at any given point (if ''f'' is differentiable, then the only subgradient is the [[gradient]] <math>\nabla f</math>; but we do not assume that ''f'' is differentiable).
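As an illustration, the following sketch shows what a subgradient oracle might look like for the (non-differentiable) convex function ''f''(''x'') = ‖''x''‖<sub>1</sub>; the function name and the choice of subgradient at the kink are illustrative, not part of the method itself.

```python
import numpy as np

def subgradient_l1(x):
    """A subgradient oracle for f(x) = sum_i |x_i| (the L1 norm).

    Any vector g with g_i = sign(x_i) where x_i != 0, and
    g_i in [-1, 1] where x_i == 0, is a valid subgradient.
    Here we pick g_i = 0 at the kink (np.sign returns 0 at 0).
    """
    return np.sign(x)

g = subgradient_l1(np.array([3.0, -2.0, 0.0]))
```

Where ''f'' is differentiable, the oracle simply returns the gradient; the point of the abstraction is that the method never needs more than one subgradient per query point.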
== Convergence ==
It can be proved that <blockquote><math>Volume(G_{t+1})\leq \left[1-\left(\frac{n}{n+1}\right)^n\right]\cdot Volume(G_t)</math> .</blockquote>Therefore,<blockquote><math>f(x_t) - \min_G f \leq \left[1-\left(\frac{n}{n+1}\right)^n\right]^{t/n} [\max_G f - \min_G f] </math>.</blockquote>In other words, the method has [[linear convergence]] of the residual objective value, with convergence rate <math>\left[1-\left(\frac{n}{n+1}\right)^n\right]^{1/n} \leq (1-1/e)^{1/n} </math>. To get an ε-approximation to the objective value, the number of required steps is at most <math>2.13 n \ln(1/\epsilon) + 1 </math>.<ref name=":0" />{{Rp|___location=Sec.8.2.2}}
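The bounds above can be checked numerically. The sketch below (helper names are illustrative) computes the per-step volume reduction factor, verifies the stated convergence-rate inequality, and evaluates the iteration bound <math>2.13 n \ln(1/\epsilon) + 1</math> for a few dimensions:

```python
import math

def volume_shrink_factor(n):
    # Per-step volume bound: Vol(G_{t+1}) <= [1 - (n/(n+1))^n] * Vol(G_t)
    return 1.0 - (n / (n + 1)) ** n

def max_steps(n, eps):
    # Upper bound on iterations for an eps-approximation: 2.13 n ln(1/eps) + 1
    return math.ceil(2.13 * n * math.log(1.0 / eps) + 1)

for n in (2, 10, 100):
    # Per-step contraction rate of the residual objective value
    rate = volume_shrink_factor(n) ** (1.0 / n)
    # The claimed bound: rate <= (1 - 1/e)^(1/n), since (n/(n+1))^n >= 1/e
    assert rate <= (1.0 - 1.0 / math.e) ** (1.0 / n)
```

Note that as ''n'' grows, <math>(n/(n+1))^n</math> decreases toward <math>1/e</math>, so the shrink factor approaches <math>1 - 1/e \approx 0.63</math>; the number of steps needed thus grows only linearly in ''n'' for fixed ε.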
== Computational complexity ==
The main problem with the method is that, in each step, we have to compute the center-of-gravity of a polytope. All the methods known so far for this problem require a number of arithmetic operations that is exponential in the dimension ''n''.<ref name=":0" />{{Rp|___location=Sec.8.2.2}} Therefore, the method is not useful in practice when there are 5 or more dimensions.
== References ==