Proximal gradient method

where <math>f_1, f_2, \ldots, f_n</math> are [[convex functions]] defined from <math>f_i: \mathbb{R}^N \rightarrow \mathbb{R}</math>.
When some of the functions are non-differentiable, conventional smooth optimization techniques such as the
[[Gradient descent|steepest descent method]] and the [[conjugate gradient method]] cannot be applied. There is, however, a specific class of [[algorithms]] that can solve the above optimization problem. These methods proceed by splitting,
in that the functions <math>f_1, \ldots, f_n</math> are used individually so as to yield an easily [[implementable]] algorithm.
They are called [[proximal]] because each non-[[smooth function]] among <math>f_1, \ldots, f_n</math> is involved via its proximity operator.
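The proximity operator of a convex function <math>f</math> is defined by

:<math>\operatorname{prox}_{f}(x) = \arg\min_{y \in \mathbb{R}^N} \left( f(y) + \tfrac{1}{2}\|x - y\|_2^2 \right).</math>

As a minimal illustrative sketch, the following Python code applies the simplest instance of this splitting idea, forward-backward iteration, to an <math>\ell_1</math>-regularised least-squares problem; the names <code>soft_threshold</code>, <code>proximal_gradient</code>, <code>A</code>, <code>b</code>, <code>lam</code> and <code>step</code> are hypothetical choices for this example, not standard library routines.

<syntaxhighlight lang="python">
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1 (soft-thresholding), applied elementwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, n_iter=500):
    # Minimise (1/2)||Ax - b||^2 + lam*||x||_1 by alternating a gradient
    # step on the smooth term with the proximity operator of the L1 term.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # proximal (backward) step
    return x

# Small synthetic example: recover a sparse vector from few measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
</syntaxhighlight>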