Proximal gradient method

{{more footnotes|date=November 2013}}
 
'''Proximal gradient methods''' are a generalized form of projection used to solve non-differentiable [[convex optimization]] problems. Many interesting problems can be formulated as convex optimization problems of form
 
:<math>
\operatorname{min}\limits_{x \in \mathbb{R}^N} \sum_{i=1}^n f_i(x)
</math>
where <math>f_i,\ i = 1, \dots, n</math> are [[convex functions]] defined from <math>f: \mathbb{R}^N \rightarrow \mathbb{R} </math>
 
and some of the functions are non-differentiable. This rules out conventional smooth optimization techniques such as the [[Gradient descent|steepest descent method]] and the [[conjugate gradient method]], but there is a specific class of [[algorithms]] that can solve the above optimization problem. These methods proceed by splitting,
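As a minimal sketch of how such a splitting method operates, consider the [[lasso (statistics)|lasso]] problem <math>\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1</math>, an instance of the formulation above with a smooth term <math>f_1</math> and a non-differentiable term <math>f_2</math>. The sketch below (the function names, step size choice, and iteration count are illustrative assumptions, not part of the article) takes a gradient step on the smooth part and then applies the proximal operator of the <math>\ell_1</math> term, which is elementwise soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, step, n_iter=500):
    # Minimize f1(x) + f2(x) with f1(x) = 0.5 * ||Ax - b||^2 (smooth)
    # and f2(x) = lam * ||x||_1 (non-differentiable).
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # proximal step on the l1 part
    return x

# Illustrative example: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, with L the Lipschitz constant of the gradient
x_hat = proximal_gradient_lasso(A, b, lam=0.1, step=step)
```

With a small regularization weight, the iterates approach the sparse vector `x_true`; the gradient step alone could not handle the non-differentiable <math>\ell_1</math> term, which is exactly what the proximal operator accounts for.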