Consensus based optimization
* '''Scaled noise:''' For each <math>t\geq 0</math> and <math>i=1,\dots,N</math>, we denote by <math>B^i_t</math> independent standard Brownian motions. The function <math>D:{\cal{X}}\to\R^s</math> incorporates the drift of the <math>i</math>th particle and determines the noise model. The most common choices are:
** ''Isotropic noise'', <math>D(\cdot) = \|\cdot \|</math>: In this case <math>s=1</math> and every component of the noise vector is scaled equally. This was used in the original version of the algorithm<ref name=":0" />.
** ''Anisotropic noise<ref>{{cite arXiv |last1=Carrillo |first1=José A. |last2=Jin |first2=Shi |last3=Li |first3=Lei |last4=Zhu |first4=Yuhua |title=A consensus-based global optimization method for high dimensional machine learning problems |date=2020-03-04 |arxiv=1909.09249}}</ref>'', <math>D(\cdot) = |\cdot|</math>: In the special case where <math>{\cal{X}}\subset \R^d</math>, we have <math>s=d</math> and <math>D</math> applies the absolute value function component-wise. Each component of the noise vector is then scaled depending on the corresponding entry of the drift vector.
* '''Hyperparameters:''' The parameter <math>\sigma \geq 0</math> scales the influence of the noise term. The parameter <math>\alpha \geq 0</math> determines the separation effect of the particles<ref name=":0" />:
** In the limit <math>\alpha\to 0</math>, every particle is assigned the same weight and the consensus point reduces to the arithmetic mean of the particles.
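The weighted consensus point and the two noise scalings above can be sketched in a few lines of NumPy. This is an illustrative sketch, not part of the article: the toy objective <code>f</code>, the particle array, and all parameter values are assumed for demonstration only.

```python
import numpy as np

def consensus_point(X, f, alpha):
    """Weighted mean of particles X (N x d) with weights exp(-alpha * f(x))."""
    fx = np.apply_along_axis(f, 1, X)
    w = np.exp(-alpha * (fx - fx.min()))  # shift by the minimum for numerical stability
    return (w[:, None] * X).sum(axis=0) / w.sum()

def noise_increment(X, v, sigma, dt, rng, anisotropic=False):
    """One Euler-Maruyama noise increment sigma * D(X - v) dB for each particle."""
    drift = X - v                                  # drift vector of each particle
    dB = rng.standard_normal(X.shape) * np.sqrt(dt)
    if anisotropic:
        # anisotropic: D = |.| applied component-wise, so each coordinate
        # is scaled by the matching entry of the drift vector
        return sigma * np.abs(drift) * dB
    # isotropic: D = ||.||, every component scaled by the same norm
    return sigma * np.linalg.norm(drift, axis=1, keepdims=True) * dB

# Hypothetical demonstration setup
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))                   # 50 particles in R^2
f = lambda x: np.sum(x**2)                         # toy objective function
v = consensus_point(X, f, alpha=10.0)
dX = noise_increment(X, v, sigma=1.0, dt=0.01, rng=rng)
```

Note how the limiting behaviour of <math>\alpha</math> appears directly: with <code>alpha=0</code> all weights equal one and <code>consensus_point</code> returns the plain particle mean, while large <code>alpha</code> concentrates the weights on the particles with the smallest objective values.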