[[Convex optimization]] is a subfield of optimization whose problems can be solved reliably and exactly. Many signal processing problems can be formulated as convex optimization problems of the form
<math>
\begin{align}
\min_{x \in \mathbb{R}^N} \quad f_1(x) + f_2(x) + \cdots + f_n(x)
\end{align}</math>
where <math>f_1, f_2, \ldots, f_n</math> are [[convex function]]s <math>f_i: \mathbb{R}^N \rightarrow \mathbb{R}</math>. Some of these functions may be non-differentiable, which rules out conventional smooth optimization techniques such as the [[gradient descent|steepest descent method]] and the [[conjugate gradient method]]. Proximal methods can be used instead. These methods proceed by splitting,
in that the functions <math>f_1, \ldots, f_n</math> are used individually so as to yield an easily implementable algorithm.
They are called proximal because each non-smooth function among <math>f_1, \ldots, f_n</math> is involved via its proximity
operator. The iterative shrinkage-thresholding algorithm, projected [[Landweber iteration|Landweber]], projected
gradient, [[projections onto convex sets|alternating projections]], [[augmented Lagrangian method|alternating-direction method of multipliers]] and alternating
split Bregman are special instances of proximal algorithms. Details of proximal methods are discussed in
{{cite book |last1=Combettes |first1=Patrick L. |last2=Pesquet |first2=Jean-Christophe |chapter=Proximal Splitting Methods in Signal Processing |title=Fixed-Point Algorithms for Inverse Problems in Science and Engineering |publisher=Springer |year=2011}}.
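For instance, for the scaled <math>\ell_1</math> norm <math>f(x) = \lambda \parallel x \parallel_1</math>, the proximity operator has the closed form of elementwise soft thresholding. A minimal numerical sketch (the function name is illustrative, not from any standard library):

<syntaxhighlight lang="python">
import numpy as np

def soft_threshold(x, t):
    """Proximity operator of f(y) = t * ||y||_1, i.e. the minimizer of
    t*||y||_1 + 0.5*||x - y||_2^2, applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

print(soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0))  # [ 2. -0.  0.2]
</syntaxhighlight>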
== Notations and Terminology ==
Let <math>\mathbb{R}^N</math> be the <math>N</math>-dimensional [[Euclidean space]], the ___domain of the function
<math> f: \mathbb{R}^N \rightarrow \left]-\infty,+\infty\right]</math>. Let <math>C</math> be a non-empty
convex subset of <math>\mathbb{R}^N</math>. The indicator function of <math>C</math> is defined as
<math>
i_C(x) =
\begin{cases}
0 & \text{if } x \in C \\
+\infty & \text{if } x \notin C
\end{cases}
</math>
The distance from <math>x \in \mathbb{R}^N</math> to <math>C</math> is defined as
<math>
D_C(x) = \min_{y \in C} \parallel x - y \parallel_2.
</math>
If <math>C</math> is closed and convex, the projection of <math>x \in \mathbb{R}^N</math> onto <math>C</math> is the unique point
<math>P_Cx \in C</math> such that <math>D_C(x) = \parallel x - P_Cx \parallel_2 </math>.
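For example, if <math>C</math> is a closed Euclidean ball, the projection simply rescales points lying outside the ball. A minimal sketch (the ball is an illustrative choice of <math>C</math>):

<syntaxhighlight lang="python">
import numpy as np

def project_ball(x, radius=1.0):
    """Projection P_C onto C = {y : ||y||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

x = np.array([3.0, 4.0])          # ||x||_2 = 5
p = project_ball(x)               # [0.6, 0.8]
print(np.linalg.norm(x - p))      # 4.0, which equals D_C(x)
</syntaxhighlight>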
The [[subderivative|subdifferential]] of <math>f</math> at <math>x</math> is given by
<math>
\partial f(x) = \{ u \in \mathbb{R}^N \mid \forall y \in \mathbb{R}^N, (y-x)^{\mathrm{T}} u + f(x) \le f(y) \}.
</math>
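For example, for the absolute value function <math>f(x) = |x|</math> on <math>\mathbb{R}</math>,
<math>
\partial f(x) =
\begin{cases}
\{-1\} & \text{if } x < 0 \\
{[-1,1]} & \text{if } x = 0 \\
\{1\} & \text{if } x > 0,
\end{cases}
</math>
so wherever <math>f</math> is differentiable the subdifferential reduces to the singleton <math>\{\nabla f(x)\}</math>.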
== Proximal gradient methods ==
One of the most widely used convex optimization algorithms is POCS ([[projections onto convex sets|projection onto convex sets]]).
This algorithm is employed to recover or synthesize a signal satisfying several convex constraints simultaneously.
Let <math>f_i</math> be the indicator function of a non-empty closed convex set <math>C_i</math> modeling a constraint.
The problem then reduces to the convex feasibility problem: find a point lying in the intersection
of all the convex sets <math>C_i</math>. In the POCS method each set <math>C_i</math> is incorporated via its projection operator
<math>P_{C_i}</math>, so in each iteration <math>x</math> is updated as
<math>
x_{k+1} = P_{C_1} P_{C_2} \cdots P_{C_n} x_k.
</math>
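A minimal sketch of this iteration for two sets, a halfspace <math>C_1</math> and a ball <math>C_2</math> (the sets and function names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto C1 = {y : a.y <= b}."""
    slack = a @ x - b
    return x if slack <= 0 else x - (slack / (a @ a)) * a

def project_ball(x, radius=2.0):
    """Projection onto C2 = {y : ||y||_2 <= radius}."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

# POCS iteration: x_{k+1} = P_{C1} P_{C2} x_k
a, b = np.array([1.0, 1.0]), 1.0
x = np.array([5.0, 5.0])
for _ in range(50):
    x = project_halfspace(project_ball(x), a, b)
print(x)  # a point of the intersection C1 ∩ C2, here [0.5, 0.5]
</syntaxhighlight>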
However, beyond such feasibility problems projection operators are no longer appropriate, and more general operators
are required. Among the various generalizations of the notion of a convex projection
operator, proximity operators are the best suited.
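Indeed, the proximity operator of the indicator function <math>i_C</math> is exactly the projection onto <math>C</math>,
<math>
\operatorname{prox}_{i_C}(x) = \arg\min_{y \in \mathbb{R}^N} \left( i_C(y) + \tfrac{1}{2} \parallel x - y \parallel_2^2 \right) = P_C x,
</math>
so proximity operators extend projections from indicator functions to general convex functions.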
== Examples ==
Special instances of proximal gradient methods are
*[[Landweber iteration|Projected Landweber]]
*Projected gradient
*[[Projections onto convex sets|Alternating projections]]
*[[Augmented Lagrangian method|Alternating-direction method of multipliers]]
*Fast Iterative Shrinkage Thresholding Algorithm (FISTA), sketched below<ref>{{cite journal |last1=Beck |first1=A. |last2=Teboulle |first2=M. |title=A fast iterative shrinkage-thresholding algorithm for linear inverse problems |journal=SIAM Journal on Imaging Sciences |volume=2 |issue=1 |pages=183–202 |year=2009}}</ref>
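A minimal sketch of FISTA for the lasso problem <math>\min_x \tfrac{1}{2} \parallel Ax - b \parallel_2^2 + \lambda \parallel x \parallel_1</math>, combining a gradient step on the smooth term, the soft-thresholding prox of the <math>\ell_1</math> term, and a momentum step (problem data and step size are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        # Gradient step on the smooth part, then prox of the l1 part.
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        # Momentum extrapolation.
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
print(fista(A, b, lam=0.1))  # sparse estimate close to x_true
</syntaxhighlight>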
== See also ==
* [[Proximal operator]]
* [[Proximal gradient methods for learning]]
== References ==
*{{cite book
 | last1 = Combettes
 | first1 = Patrick L.
 | last2 = Pesquet
 | first2 = Jean-Christophe
 | chapter = Proximal Splitting Methods in Signal Processing
 | title = Fixed-Point Algorithms for Inverse Problems in Science and Engineering
 | publisher = Springer
 | year = 2011
 | pages = 185–212
}}
== External links ==
* Stephen Boyd and Lieven Vandenberghe, [http://www.stanford.edu/~boyd/cvxbook/ ''Convex Optimization''] (book)
* [http://www.stanford.edu/class/ee364a/ EE364a: Convex Optimization I] and [http://www.stanford.edu/class/ee364b/ EE364b: Convex Optimization II], Stanford course homepages
* [http://www.eecs.berkeley.edu/~elghaoui/Teaching/EE227A/lecture18.pdf EE227A: Lieven Vandenberghe Notes] Lecture 18