Proximal gradient method
 
[[Convex optimization]] is a subfield of [[optimization]] whose problems can be solved reliably and efficiently. Many [[signal processing]] problems can be formulated as convex optimization problems of the form
 
:<math> \begin{align}
\operatorname{minimize}_{x \in \mathbb{R}^N} \quad f_1(x) + f_2(x) + \cdots + f_n(x)
\end{align}</math>
 
where <math>f_1, f_2, \ldots, f_n</math> are [[convex functions]] from <math>\mathbb{R}^N</math> to <math>\mathbb{R}</math>. When some of these functions are non-differentiable, conventional smooth optimization techniques such as the [[Gradient descent|steepest descent method]] and the [[conjugate gradient method]] cannot be used. There is, however, a specific class of [[algorithms]] that can solve the above optimization problem. These methods proceed by splitting, in that the functions <math>f_1, \ldots, f_n</math> are used individually so as to yield an easily implementable algorithm. They are called proximal because each non-[[Smooth function|smooth]] function among <math>f_1, \ldots, f_n</math> is involved via its proximity operator. The iterative shrinkage-thresholding algorithm, [[Landweber iteration|projected Landweber]], projected gradient, [[alternating projection]]s, the [[Alternating direction method of multipliers#Alternating direction method of multipliers|alternating-direction method of multipliers]], and the alternating split [[Bregman method|Bregman]] method are special instances of proximal algorithms. Details of proximal methods are discussed by Combettes and Pesquet.<ref>{{cite arXiv |last1=Combettes |first1=Patrick L. |last2=Pesquet |first2=Jean-Christophe |title=Proximal Splitting Methods in Signal Processing |year=2009 |eprint=0912.3522 |class=math.OC}}</ref>
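As an illustration of the splitting idea, the following is a minimal sketch (in Python; the problem data, step size, and function names are chosen purely for illustration) of the iterative shrinkage-thresholding algorithm applied to a problem of the form <math>f_1(x) + f_2(x)</math> with smooth <math>f_1(x) = \tfrac{1}{2}\|Ax - b\|^2</math> and non-smooth <math>f_2(x) = \lambda\|x\|_1</math>:

<syntaxhighlight lang="python">
import numpy as np

def soft_threshold(x, t):
    """Proximity operator of t*||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, step, iters=500):
    """Proximal gradient iteration for 0.5*||Ax - b||^2 + lam*||x||_1.

    The smooth term is handled by a gradient step; the non-smooth
    term is handled separately through its proximity operator.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient step on f1
        x = soft_threshold(x - step * grad, step * lam)  # prox step on f2
    return x

# Illustrative data: a small random least-squares problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, with L the Lipschitz constant of the gradient of f1
x_hat = ista(A, b, lam=0.1, step=step)
</syntaxhighlight>

Each iteration uses <math>f_1</math> only through its gradient and <math>f_2</math> only through its proximity operator, which is the splitting structure described above.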
 
== Notations and terminology ==
Let <math>\mathbb{R}^N</math>, the <math>N</math>-dimensional [[Euclidean space]], be the domain of the function
<math> f: \mathbb{R}^N \rightarrow [-\infty,+\infty]</math>. Suppose <math>C</math> is a non-empty
convex subset of <math>\mathbb{R}^N</math>. Then the indicator function of <math>C</math> is defined as
:<math> i_C : x \mapsto \begin{cases} 0 & \text{if } x \in C \\ +\infty & \text{if } x \notin C \end{cases}</math>
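As a concrete sketch (the membership test and the example set are illustrative assumptions, not from the article), the indicator function can be evaluated numerically:

<syntaxhighlight lang="python">
import numpy as np

def indicator(contains, x):
    """Indicator function i_C: 0 if x lies in C, +infinity otherwise."""
    return 0.0 if contains(x) else np.inf

# Example: C is the closed unit ball {x : ||x|| <= 1}.
in_unit_ball = lambda x: np.linalg.norm(x) <= 1.0
print(indicator(in_unit_ball, np.array([0.6, 0.3])))  # 0.0 (inside C)
print(indicator(in_unit_ball, np.array([2.0, 0.0])))  # inf (outside C)
</syntaxhighlight>

Adding <math>i_C</math> to an objective thus enforces the constraint <math>x \in C</math> without changing the objective's value on <math>C</math>.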
Let <math>f_i</math> be the indicator function of a non-empty closed convex set <math>C_i</math> modeling a constraint.
This reduces to the convex feasibility problem, which requires us to find a solution lying in the intersection
of all the convex sets <math>C_i</math>. In the POCS (projection onto convex sets) method, each set <math>C_i</math> is incorporated via its [[projection operator]]
<math>P_{C_i}</math>, so in each [[iteration]] <math>x</math> is updated as
 
:<math> x_{k+1} = P_{C_1} P_{C_2} \cdots P_{C_n} x_k </math>
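The following sketch (with two hypothetical sets chosen because their projections have closed forms) shows this update for the intersection of a Euclidean ball and a half-space:

<syntaxhighlight lang="python">
import numpy as np

def project_ball(x, radius=1.0):
    """Projection onto C1 = {x : ||x|| <= radius}."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def project_halfspace(x, a, c):
    """Projection onto C2 = {x : a.x <= c}."""
    v = a @ x
    return x if v <= c else x - ((v - c) / (a @ a)) * a

# POCS: apply each projection in turn; the iterate approaches C1 ∩ C2.
x = np.array([3.0, 3.0])
a, c = np.array([1.0, 1.0]), 0.5
for _ in range(100):
    x = project_ball(project_halfspace(x, a, c))
print(x)  # approximately a point of the intersection
</syntaxhighlight>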
 
However, beyond such problems, [[projection operator]]s are not appropriate, and more general operators
are required to tackle them. Among the various generalizations of the notion of a convex projection
operator that exist, proximity operators are best suited for this purpose.
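For context, the proximity operator of a convex function <math>f</math> at <math>x</math> is defined as

:<math> \operatorname{prox}_f(x) = \arg\min_{y \in \mathbb{R}^N} \left( f(y) + \frac{1}{2} \|x - y\|^2 \right), </math>

and taking <math>f = i_C</math> recovers the projection operator, <math>\operatorname{prox}_{i_C} = P_C</math>; this is the sense in which proximity operators generalize projections.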