'''Proximal gradient''' (also known as forward backward splitting) '''methods for learning''' is an area of research in [[optimization]] and [[statistical learning theory]] which studies algorithms for a general class of [[Convex_function#Definition|convex]] [[Regularization_(mathematics)|regularization]] problems where the regularization penalty may not be [[Differentiable function|differentiable]]. One such example is <math>\ell_1</math> regularization (also known as [[Lasso (statistics)|Lasso]]) of the form
:<math>\min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n (y_i- \langle w,x_i\rangle)^2+ \lambda \|w\|_1, \quad \text{ where } x_i\in \mathbb{R}^d\text{ and } y_i\in\mathbb{R}.</math>
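For this problem the forward-backward iteration alternates a gradient step on the smooth least-squares term with a proximal step on the <math>\ell_1</math> penalty, which reduces to coordinate-wise soft thresholding. A minimal sketch of that iteration (ISTA) is given below; the function names, step-size choice, and iteration count are illustrative assumptions, not a standard implementation.

<syntaxhighlight lang="python">
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (coordinate-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Forward-backward (proximal gradient) iteration for
    min_w (1/n) * ||X w - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    # Step size 1/L, where L = (2/n) * sigma_max(X)^2 is the Lipschitz
    # constant of the gradient of the least-squares term.
    step = 1.0 / ((2.0 / n) * np.linalg.norm(X, 2) ** 2)
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = (2.0 / n) * X.T @ (X @ w - y)             # forward (gradient) step on the smooth part
        w = soft_threshold(w - step * grad, step * lam)  # backward (proximal) step on lam*||.||_1
    return w
</syntaxhighlight>

The proximal step has this closed form only because the <math>\ell_1</math> penalty is separable across coordinates; for other regularizers the same iteration applies with the corresponding proximity operator.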