where <math>x_i\in \mathbb{R}^d\text{ and } y_i\in\mathbb{R}.</math> The <math>\ell_1</math> regularization problem is sometimes referred to as ''lasso'' ([[Lasso (statistics)|least absolute shrinkage and selection operator]]).<ref name=tibshirani /> Such <math>\ell_1</math> regularization problems are interesting because they induce ''sparse'' solutions, that is, solutions <math>w</math> to the minimization problem with relatively few nonzero components. Lasso can be seen as a convex relaxation of the non-convex problem
:<math>\min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n (y_i- \langle w,x_i\rangle)^2+ \lambda \|w\|_0, </math>
where <math>\|w\|_0</math> denotes the <math>\ell_0</math> "norm", which is the number of nonzero entries of the vector <math>w</math>. Sparse solutions are of particular interest in learning theory for interpretability of results: a sparse solution can identify a small number of important factors.<ref name=tibshirani>{{cite journal|last=Tibshirani|first=R.|title=Regression shrinkage and selection via the lasso|journal=J. R. Stat. Soc. Ser. B|year=1996|volume=58|series=1|issue=1|pages=267–288|doi=10.1111/j.2517-6161.1996.tb02080.x }}</ref>
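The sparsity-inducing behavior can be observed numerically. The following is a minimal sketch of solving the lasso problem above by proximal gradient descent (ISTA), where the proximity operator of <math>\lambda\|\cdot\|_1</math> is componentwise soft-thresholding; the function names, synthetic data, and parameter choices here are illustrative assumptions, not part of any standard library.

```python
import numpy as np

def soft_threshold(v, t):
    # Componentwise proximity operator of t*||.||_1 (soft-thresholding):
    # shrinks each entry toward zero by t, mapping small entries to exactly 0.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, iters=500):
    # Proximal gradient (ISTA) for  min_w (1/n)||Xw - y||^2 + lam*||w||_1.
    n, d = X.shape
    # Step size 1/L, where L = (2/n)||X||_2^2 is the Lipschitz constant
    # of the gradient of the smooth least-squares term.
    step = n / (2.0 * np.linalg.norm(X, 2) ** 2)
    w = np.zeros(d)
    for _ in range(iters):
        grad = (2.0 / n) * X.T @ (X @ w - y)       # gradient step on the smooth part
        w = soft_threshold(w - step * grad, step * lam)  # proximal step on the l1 part
    return w

# Synthetic sparse recovery: only 3 of 20 true coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[[0, 5, 11]] = [3.0, -2.0, 1.5]
y = X @ w_true
w_hat = ista_lasso(X, y, lam=0.5)
```

Because soft-thresholding maps small entries to exactly zero, the recovered <math>w</math> is genuinely sparse (exact zeros off the support), illustrating why the <math>\ell_1</math> penalty identifies a small number of important factors.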
=== Solving for L<sub>1</sub> proximity operator ===
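For <math>R(w)=\lambda\|w\|_1</math> the proximity operator decouples across coordinates and admits a closed form, the componentwise soft-thresholding operator:
:<math>\left(\operatorname{prox}_{\lambda\|\cdot\|_1}(v)\right)_i = \operatorname{sgn}(v_i)\max\{|v_i|-\lambda,\,0\}.</math>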