Particle swarm optimization: Difference between revisions

== Variations and practicalities ==
There are a number of practical considerations in using PSO; for instance, one might wish to clamp the positions or velocities to a certain range, or give each particle a finite lifespan after which it re-spawns at a random position. The considerable adaptability of PSO to variations and hybrids is seen as a strength over other robust evolutionary optimization mechanisms, such as [[genetic algorithms]]. For example, one common, reasonable modification is to add a probabilistic bit-flipping local search heuristic to the loop. On its own, a stochastic hill-climber risks getting stuck at local maxima, but the stochastic exploration and communication of the swarm overcomes this. Thus, PSO can be seen as a basic search "workbench" that can be adapted as needed for the problem at hand.
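The clamping idea above can be sketched as follows. This is a minimal illustration of one update step, not a full PSO implementation; the bounds and the velocity limit `v_max` are hypothetical example values.

```python
import numpy as np

# Hypothetical search-space bounds and per-dimension velocity limit.
LOWER, UPPER = -5.0, 5.0
V_MAX = 1.0

def clamped_step(x, v):
    """Clamp the velocity to [-V_MAX, V_MAX], take the step,
    then clamp the resulting position back into the search bounds."""
    v = np.clip(v, -V_MAX, V_MAX)
    x = np.clip(x + v, LOWER, UPPER)
    return x, v

rng = np.random.default_rng(0)
x = rng.uniform(LOWER, UPPER, size=3)
v = rng.normal(scale=3.0, size=3)  # deliberately large, so clamping kicks in
x, v = clamped_step(x, v)
```

After the step, every velocity component lies in `[-V_MAX, V_MAX]` and every position component lies within the bounds, regardless of how large the raw velocity was.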
 
Note that the research literature has uncovered many heuristics and variants determined to improve convergence speed and robustness, such as clever choices of <math>\omega</math>, <math>c_i</math>, and <math>r_i</math>. There are also other variants of the algorithm, such as discretized versions for searching over subsets of <math>\mathbb{Z}^n</math> rather than <math>\mathbb{R}^n</math>, and coevolutionary versions of PSO have been tried with good reported results. Very frequently the value of <math>\omega</math> is taken to decrease over time; e.g., one might run the PSO for a fixed number of iterations and decrease <math>\omega</math> linearly from a starting value (0.9, say) to a final value (0.4, say) in order to favor exploitation over exploration in later stages of the search. The literature is full of such heuristics. In other words, on several common function optimization benchmarks the canonical PSO algorithm is not as strong as the various improvements that have been developed, and consulting the literature for parameter choices and variants suited to a particular problem is likely to be helpful.
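The linearly decreasing inertia weight mentioned above is a simple interpolation. A minimal sketch, using the example values 0.9 and 0.4 from the text (the function name and signature are illustrative, not standard):

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly interpolate omega from w_start down to w_end
    as the iteration counter t goes from 0 to t_max."""
    return w_start + (w_end - w_start) * (t / t_max)

# Over a 100-iteration run, omega starts at 0.9, passes 0.65 at the
# midpoint, and ends at 0.4, shifting the swarm from exploration
# toward exploitation.
```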