==Theoretical properties==
The properties of gradient descent depend on the properties of the objective function and on the variant of gradient descent used (for example, whether a [[line search]] step is used). The assumptions made affect the convergence rate and the other properties that can be proven for gradient descent.<ref name=":1">{{cite arXiv|last=Bubeck |first=Sébastien |title=Convex Optimization: Algorithms and Complexity |date=2015 |class=math.OC |eprint=1405.4980 }}</ref> For example, if the objective is assumed to be [[Strongly convex function|strongly convex]] and [[Lipschitz continuity|Lipschitz smooth]], then gradient descent converges linearly with a fixed step size.<ref name="auto"/> Under looser assumptions, the convergence guarantees are weaker or a more sophisticated step size selection is required.<ref name=":1" />
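As a minimal sketch of this behavior (the quadratic objective, matrix, and starting point below are illustrative assumptions, not taken from a particular source), the following Python snippet runs gradient descent with the fixed step size 1/''L'' on a strongly convex quadratic; the distance to the minimizer shrinks by a constant factor each iteration, which is the linear convergence described above:
<syntaxhighlight lang="python">
import numpy as np

# A minimal sketch: gradient descent with a fixed step size on a
# strongly convex, Lipschitz-smooth quadratic f(x) = (1/2) x^T A x.
# The matrix A and the starting point are illustrative assumptions.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])  # symmetric positive definite

def grad_f(x):
    return A @ x  # gradient of (1/2) x^T A x

eigs = np.linalg.eigvalsh(A)
mu, L = eigs[0], eigs[-1]  # strong-convexity and smoothness constants
step = 1.0 / L             # fixed step size admissible for an L-smooth objective

x = np.array([4.0, -3.0])  # arbitrary starting point; the minimizer is 0
for k in range(15):
    x = x - step * grad_f(x)
    # Linear convergence: ||x_k|| <= (1 - mu/L)**k * ||x_0||
    print(f"iter {k:2d}  ||x|| = {np.linalg.norm(x):.3e}")
</syntaxhighlight>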
==Examples==
* [[Yang–Mills flow]]
* [[Yang–Mills–Higgs flow]]
* [[Seiberg–Witten flow]]
==See also==