{{Short description|Iterative simulation method}}
[[File:ParticleSwarmArrowsAnimation.gif|thumb|A particle swarm searching for the [[global minimum]] of a function]]
{{Evolutionary algorithms}}
In [[computational science]], '''particle swarm optimization''' ('''PSO''')<ref name=bonyadi16survey/> is a computational method that [[Mathematical optimization|optimizes]] a problem by [[iterative method|iteratively]] trying to improve a [[candidate solution]] with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed [[Point particle|particle]]s, and moving these particles around in the [[Optimization (mathematics)#Concepts and notation|search-space]] according to simple [[formula|mathematical formulae]] over the particle's [[Position (vector)|position]] and [[velocity]]. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
 
PSO is originally attributed to [[James Kennedy (social psychologist)|Kennedy]], [[Russell C. Eberhart|Eberhart]] and [[Yuhui Shi|Shi]]<ref name=kennedy95particle/><ref name=shi98modified/> and was first intended for [[computer simulation|simulating]] [[social behaviour]],<ref name=kennedy97particle/> as a stylized representation of the movement of organisms in a bird [[Flocking (behavior)|flock]] or [[fish school]]. The algorithm was simplified, and it was observed to be performing optimization. The book by Kennedy and Eberhart<ref name=kennedy01swarm/> describes many philosophical aspects of PSO and [[swarm intelligence]]. An extensive survey of PSO applications has been made by [[Riccardo Poli|Poli]].<ref name=poli07analysis/><ref name=poli08analysis/> In 2017, a comprehensive review of theoretical and experimental work on PSO was published by Bonyadi and Michalewicz.<ref name=bonyadi16survey/>
The values '''b<sub>lo</sub>''' and '''b<sub>up</sub>''' represent the lower and upper boundaries of the search-space, respectively. The parameter w is the inertia weight. The parameters φ<sub>p</sub> and φ<sub>g</sub> are often called the cognitive coefficient and the social coefficient.
 
The termination criterion can be the number of iterations performed, or the discovery of a solution with an adequate objective function value.<ref name="bratton2007" /> The parameters w, φ<sub>p</sub>, and φ<sub>g</sub> are selected by the practitioner and control the behaviour and efficacy of the PSO method ([[#Parameter selection|below]]).
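A minimal sketch of this loop in Python may help make the update rule concrete. The swarm size, coefficient values, and the sphere objective in the usage example are illustrative choices only, not part of any standard definition:

<syntaxhighlight lang="python">
import random

def pso(f, b_lo, b_up, dim, n_particles=30, w=0.7, phi_p=1.5, phi_g=1.5, max_iter=100):
    # Initialise positions uniformly within [b_lo, b_up]; velocities start at zero.
    x = [[random.uniform(b_lo, b_up) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p = [xi[:] for xi in x]        # each particle's best known position
    g = min(p, key=f)[:]           # swarm's best known position
    for _ in range(max_iter):      # termination criterion: fixed number of iterations
        for i in range(n_particles):
            for d in range(dim):
                r_p, r_g = random.random(), random.random()
                # Inertia, plus cognitive pull toward p[i] and social pull toward g.
                v[i][d] = (w * v[i][d]
                           + phi_p * r_p * (p[i][d] - x[i][d])
                           + phi_g * r_g * (g[d] - x[i][d]))
                # Move the particle and clamp it to the search-space boundaries.
                x[i][d] = min(max(x[i][d] + v[i][d], b_lo), b_up)
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]
                if f(p[i]) < f(g):
                    g = p[i][:]
    return g

# Usage example: minimise the sphere function over [-5, 5]^2.
best = pso(lambda z: sum(c * c for c in z), b_lo=-5.0, b_up=5.0, dim=2)
</syntaxhighlight>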
 
== Parameter selection ==
 
==Neighbourhoods and topologies==
The topology of the swarm defines the subset of particles with which each particle can exchange information.<ref name=kennedy2002population/> The basic version of the algorithm uses the global topology as the swarm communication structure.<ref name=bratton2007/> This topology allows all particles to communicate with all the other particles, so the whole swarm shares the same best position '''g''' from a single particle. However, this approach might cause the swarm to become trapped in a local minimum,<ref>Mendes, R. (2004). [https://pdfs.semanticscholar.org/d224/80b09d1f0759fb20e0fb0bd2de205457c8bc.pdf Population Topologies and Their Influence in Particle Swarm Performance]{{Dead link|date=August 2025 |bot=InternetArchiveBot |fix-attempted=yes }} (PhD thesis). Universidade do Minho.</ref> so different topologies have been used to control the flow of information among particles. For instance, in local topologies, particles only share information with a subset of particles.<ref name=bratton2007/> This subset can be a geometrical one<ref>Suganthan, Ponnuthurai N. "[https://ieeexplore.ieee.org/abstract/document/785514/ Particle swarm optimiser with neighbourhood operator]." Evolutionary Computation, 1999. CEC 99. Proceedings of the 1999 Congress on. Vol. 3. IEEE, 1999.</ref> – for example "the ''m'' nearest particles" – or, more often, a social one, i.e. a set of particles that does not depend on any distance. In such cases, the PSO variant is said to be local best (vs global best for the basic PSO).
 
A commonly used swarm topology is the ring, in which each particle has just two neighbours, but there are many others.<ref name=bratton2007/> The topology is not necessarily static. In fact, since the topology is related to the diversity of communication of the particles,<ref name=oliveira2016communication/> some efforts have been made to create adaptive topologies (SPSO,<ref>SPSO [http://www.particleswarm.info Particle Swarm Central]</ref> APSO,<ref>Almasi, O. N. and Khooban, M. H. (2017). A parsimonious SVM model selection criterion for classification of real-world data sets via an adaptive population-based algorithm. Neural Computing and Applications, 1–9. [https://link.springer.com/article/10.1007/s00521-017-2930-y https://doi.org/10.1007/s00521-017-2930-y]</ref> stochastic star,<ref>Miranda, V., Keko, H. and Duque, Á. J. (2008). [https://repositorio.inesctec.pt/bitstream/123456789/1561/1/PS-05818.pdf Stochastic Star Communication Topology in Evolutionary Particle Swarms (EPSO)]. International Journal of Computational Intelligence Research (IJCIR), Volume 4, Number 2, pp. 105–116</ref> TRIBES,<ref>Clerc, M. (2006). Particle Swarm Optimization. ISTE (International Scientific and Technical Encyclopedia), 2006</ref> Cyber Swarm,<ref>Yin, P., Glover, F., Laguna, M., & Zhu, J. (2011). [http://leeds-faculty.colorado.edu/glover/fred%20pubs/428%20-%20A_complementary_cyber_swarm_algorithm_pub%20version%20w%20pen%20et%20al.pdf A Complementary Cyber Swarm Algorithm]. International Journal of Swarm Intelligence Research (IJSIR), 2(2), 22–41</ref> and C-PSO<ref name=elshamy07sis/>).
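As an illustration, the neighbourhood-best lookup for the ring topology can be sketched as follows (a hypothetical helper, assuming each particle's best known position is stored in a list <code>p</code> and that lower objective values are better):

<syntaxhighlight lang="python">
def ring_neighbourhood_best(p, f, i):
    """Best known position among particle i and its two ring neighbours."""
    n = len(p)
    # The ring wraps around, so the first and last particles are neighbours.
    neighbours = [p[(i - 1) % n], p[i], p[(i + 1) % n]]
    return min(neighbours, key=f)
</syntaxhighlight>

In a local-best PSO, this neighbourhood best replaces the global best '''g''' in each particle's velocity update.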
 
By using the ring topology, PSO can attain generation-level parallelism, significantly enhancing the evolutionary speed.<ref>{{cite journal |last1=Li |first1=Jian-Yu |title=Generation-Level Parallelism for Evolutionary Computation: A Pipeline-Based Parallel Particle Swarm Optimization |journal=IEEE Transactions on Cybernetics |date=2021 |volume=51 |issue=10 |pages=4848–4859 |doi=10.1109/TCYB.2020.3028070 |pmid=33147159 |bibcode=2021ITCyb..51.4848L }}</ref>
 
== Inner workings ==
Without the need for a trade-off between convergence ('exploitation') and divergence ('exploration'), an adaptive mechanism can be introduced. Adaptive particle swarm optimization (APSO)<ref name=zhan09adaptive/> features better search efficiency than standard PSO. APSO can perform global search over the entire search space with a higher convergence speed. It enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, thereby improving search effectiveness and efficiency at the same time. APSO can also act on the globally best particle to jump out of likely local optima. However, while APSO introduces new algorithm parameters, it does not add design or implementation complexity.
 
In addition, through a scale-adaptive fitness evaluation mechanism, PSO can efficiently address computationally expensive optimization problems.<ref>{{cite journal |last1=Wang |first1=Ye-Qun |last2=Li |first2=Jian-Yu |last3=Chen |first3=Chun-Hua |last4=Zhang |first4=Jun |last5=Zhan |first5=Zhi-Hui |title=Scale adaptive fitness evaluation-based particle swarm optimisation for hyperparameter and architecture optimisation in neural networks and deep learning |journal=CAAI Transactions on Intelligence Technology |date=September 2023 |volume=8 |issue=3 |pages=849–862 |doi=10.1049/cit2.12106 |doi-access=free }}</ref>
 
== Variants ==
 
A series of standard implementations have been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances."<ref name=bratton2007/> The latest is Standard PSO 2011 (SPSO-2011).<ref name=Zambrano-Bigiarini2013/>
 
In addition, some PSO variants have been developed to solve large-scale global optimization (LSGO) problems with more than 1000 dimensions. Representative variants include the competitive swarm optimizer (CSO) and the level-based learning swarm optimizer (LLSO).<ref name=llso/> PSO has also been extended to solve multi-agent consensus-based distributed optimization problems, for example through multi-agent swarm optimization with adaptive internal and external learning (MASOIE).<ref name=masoie/>
 
=== Hybridization ===
Initialization of velocities may require extra inputs. The Bare Bones PSO variant<ref>{{Cite book|last=Kennedy|first=James|title=Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No.03EX706) |chapter=Bare bones particle swarms |date=2003|pages=80–87|doi=10.1109/SIS.2003.1202251|isbn=0-7803-7914-4|s2cid=37185749}}</ref> was proposed in 2003 by James Kennedy, and does not need to use velocity at all.
 
In this variant of PSO one dispenses with the velocity of the particles and instead updates the positions of the particles using the following simple rule,
:<math>
\vec x_i = G\left(\frac{\vec p_i+\vec g}{2},||\vec p_i-\vec g||\right) \,,
</math>
where <math>G(\vec m,\sigma)</math> denotes a Gaussian distribution with mean <math>\vec m</math> and standard deviation <math>\sigma</math>. A further velocity-free simplification, accelerated particle swarm optimization, updates the positions with the rule
:<math>
\vec x_i \leftarrow (1-\beta)\vec x_i + \beta \vec g + \alpha L \vec u \,,
</math>
where <math>\vec u</math> is a random uniformly distributed unit vector, <math>L</math> is the typical length of the problem at hand, and <math>\beta\sim 0.1-0.7</math> and <math>\alpha\sim 0.1-0.5</math> are the parameters of the method. As a refinement of the method one can decrease <math>\alpha</math> with each iteration, <math>\alpha_n=\alpha_0\gamma^n</math>, where <math>n</math> is the iteration number and <math>0 < \gamma < 1</math> is the decrease control parameter.
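A sketch of both velocity-free update rules, assuming the Gaussian in the Bare Bones rule is sampled per coordinate and the unit vector <math>\vec u</math> is drawn by normalising a Gaussian sample (a common way to obtain a uniformly distributed direction):

<syntaxhighlight lang="python">
import math
import random

def bare_bones_update(p_i, g):
    # x_i = G((p_i + g)/2, ||p_i - g||), sampled per coordinate.
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(p_i, g)))
    return [random.gauss((a + b) / 2.0, dist) for a, b in zip(p_i, g)]

def accelerated_update(x_i, g, L, alpha, beta):
    # x_i <- (1 - beta) * x_i + beta * g + alpha * L * u
    u = [random.gauss(0.0, 1.0) for _ in x_i]
    norm = math.sqrt(sum(c * c for c in u)) or 1.0   # normalise to a unit vector
    return [(1 - beta) * xc + beta * gc + alpha * L * uc / norm
            for xc, gc, uc in zip(x_i, g, u)]

# Refinement: shrink alpha each iteration, alpha_n = alpha_0 * gamma**n.
alpha_0, gamma = 0.3, 0.95
alpha_schedule = [alpha_0 * gamma ** n for n in range(10)]
</syntaxhighlight>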
 
===Multi-objective optimization===
 
===Binary, discrete, and combinatorial===
As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous ___domain, to apply a classical PSO, and then to demap the result. Such a mapping can be very simple (for example by just using rounded values) or more sophisticated.<ref>Roy, R., Dehuri, S., & Cho, S. B. (2012). [http://sclab.yonsei.ac.kr/publications/Papers/IJ/A%20Novel%20Particle%20Swarm%20Optimization%20Algorithm%20for%20Multi-Objective%20Combinatorial%20Optimization%20Problem.pdf A Novel Particle Swarm Optimization Algorithm for Multi-Objective Combinatorial Optimization Problem] {{Webarchive|url=https://web.archive.org/web/20220120210030/http://sclab.yonsei.ac.kr/publications/Papers/IJ/A%20Novel%20Particle%20Swarm%20Optimization%20Algorithm%20for%20Multi-Objective%20Combinatorial%20Optimization%20Problem.pdf |date=2022-01-20 }}. 'International Journal of Applied Metaheuristic Computing (IJAMC)', 2(4), 41–57</ref>
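A minimal sketch of the simplest such mapping, rounding each coordinate before evaluation so that an unmodified continuous PSO can be applied (the wrapper and example objective are hypothetical):

<syntaxhighlight lang="python">
def rounded(f_discrete):
    # Wrap a discrete objective so a continuous PSO can optimise it:
    # each continuous position is demapped by rounding its coordinates.
    return lambda x: f_discrete([int(round(c)) for c in x])

# Example: a problem defined on integer points; pass `f` to a continuous PSO.
f = rounded(lambda z: (z[0] - 3) ** 2 + (z[1] + 2) ** 2)
</syntaxhighlight>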
 
However, it can be noted that the equations of movement make use of operators that perform four actions:
* computing the difference of two positions (the result is a velocity, more precisely a displacement),
* multiplying a velocity by a numeric coefficient,
* adding two velocities,
* applying a velocity to a position.
These operators can then be redefined so that PSO operates directly on a discrete space, as sketched below.
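An illustrative redefinition of the four operators for bit-string positions, with a velocity represented as the set of bit indices to flip (this representation is one possible choice, not a standard):

<syntaxhighlight lang="python">
import random

def position_difference(a, b):
    # Difference of two positions: the set of indices where they disagree.
    return {i for i, (x, y) in enumerate(zip(a, b)) if x != y}

def scale_velocity(c, v):
    # Multiply a velocity by a coefficient: keep each flip with probability c.
    return {i for i in v if random.random() < c}

def add_velocities(v1, v2):
    # Add two velocities: combine the two flip sets.
    return v1 | v2

def apply_velocity(x, v):
    # Apply a velocity to a position: flip the selected bits.
    return [1 - b if i in v else b for i, b in enumerate(x)]
</syntaxhighlight>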
== See also ==
* [[Fish School Search]]
* [[Dispersive flies optimisation]]
* [[Consensus based optimization]]
 
== References ==
{{reflist|refs=
<ref name="xinchao10perturbed">{{cite journal | title=A perturbed particle swarm algorithm for numerical optimization | last=Xinchao | first=Z. | journal=Applied Soft Computing | year=2010 | volume=10 | issue=1 | pages=119–124 | doi=10.1016/j.asoc.2009.06.010}}</ref>
 
<ref name="zhan09adaptive">{{cite journal | first1=Z-H. | last1=Zhan | title=Adaptive Particle Swarm Optimization | last2=Zhang | first2=J. | last3=Li | first3=Y | last4=Chung | first4=H.S-H. | journal=IEEE Transactions on Systems, Man, and Cybernetics | year=2009 | volume=39 | issue=6 | pages=1362–1381 | doi=10.1109/TSMCB.2009.2015956 | pmid=19362911 | bibcode=2009ITSMB..39.1362Z | s2cid=11191625 | url=http://eprints.gla.ac.uk/7645/1/7645.pdf }}</ref>
 
<ref name="zhan10OLPSO">{{cite journal | first1=Z-H. | last1=Zhan | title=Orthogonal Learning Particle Swarm Optimization | last2=Zhang | first2=J. | last3=Li | first3=Y | last4=Shi | first4=Y-H. | journal=IEEE Transactions on Evolutionary Computation | year=2011 | volume=15 | issue=6 | pages=832–847| doi=10.1109/TEVC.2010.2052054 | bibcode=2011ITEC...15..832Z | url=http://eprints.gla.ac.uk/44801/1/44801.pdf }}</ref>
 
<ref name="Bonyadi2014">{{cite journal | first1=Mohammad reza. | last1=Bonyadi | title=A locally convergent rotationally invariant particle swarm optimization algorithm | last2=Michalewicz | first2=Z. | journal=Swarm Intelligence | year=2014 | volume=8 | issue=3 | pages=159–198 | doi=10.1007/s11721-014-0095-1| s2cid=2261683 | url=https://espace.library.uq.edu.au/view/UQ:396054/ERAUQ396054.pdf }}</ref>
<ref name="trelea03particle">{{cite journal | title=The Particle Swarm Optimization Algorithm: convergence analysis and parameter selection | last=Trelea | first=I.C. | journal=Information Processing Letters | year=2003 | volume=85 | issue=6 | pages=317–325 | doi=10.1016/S0020-0190(02)00447-7}}</ref>
 
<ref name="clerc02explosion">{{cite journal | first1=M. | last1=Clerc | title=The particle swarm - explosion, stability, and convergence in a multidimensional complex space | last2=Kennedy | first2=J. | journal=IEEE Transactions on Evolutionary Computation | year=2002 | volume=6 | issue=1 | pages=58–73 | doi=10.1109/4235.985692| bibcode=2002ITEC....6...58C | citeseerx=10.1.1.460.6608 }}</ref>
 
<ref name="xzy02dpso">Xie, Xiao-Feng; Zhang, Wen-Jun; Yang, Zhi-Lian (2002). [http://www.wiomax.com/team/xie/paper/CEC02.pdf A dissipative particle swarm optimization]. ''Congress on Evolutionary Computation'' (CEC), Honolulu, HI, USA: 1456-1461.</ref>
 
<ref name="bratton08simplified">{{cite journal | first1=D. | last1=Bratton | url=http://downloads.hindawi.com/archive/2008/654184.pdf | title=A Simplified Recombinant PSO | last2=Blackwell | first2=T. | journal=Journal of Artificial Evolution and Applications | volume=2008 | pages=1–10 | year=2008| article-number=654184 | doi=10.1155/2008/654184 | doi-access=free }}</ref>
 
<ref name="kennedy01swarm">{{cite book | first1=J. | last1=Kennedy | title=Swarm Intelligence | publisher=Morgan Kaufmann | last2=Eberhart | first2=R.C. | year=2001 | isbn=978-1-55860-595-4}}</ref>
<ref name="poli07analysis">{{cite journal | url=http://cswww.essex.ac.uk/technical-reports/2007/tr-csm469.pdf | title=An analysis of publications on particle swarm optimisation applications | last=Poli | first=R. | journal=Technical Report CSM-469 | year=2007 | access-date=2010-05-03 | archive-url=https://web.archive.org/web/20110716231935/http://cswww.essex.ac.uk/technical-reports/2007/tr-csm469.pdf | archive-date=2011-07-16 | url-status=dead }}</ref>
 
<ref name="poli08analysis">{{cite journal | url=http://downloads.hindawi.com/archive/2008/685175.pdf | title=Analysis of the publications on the applications of particle swarm optimisation | last=Poli | first=R. | journal=Journal of Artificial Evolution and Applications | year=2008 | volume=2008 | pages=1–10 | article-number=685175 | doi=10.1155/2008/685175| doi-access=free }}</ref>
 
<ref name="evers09thesis">{{cite book | url=http://www.georgeevers.org/publications.htm | title=An Automatic Regrouping Mechanism to Deal with Stagnation in Particle Swarm Optimization | publisher=The University of Texas - Pan American, Department of Electrical Engineering | last=Evers | first=G. | year=2009 | format=Master's thesis | access-date=2010-05-05 | archive-date=2011-05-18 | archive-url=https://web.archive.org/web/20110518164430/http://www.georgeevers.org/publications.htm | url-status=dead }}</ref>
 
<ref name="tu04robust">{{cite journal | first1=Z. | last1=Tu | title=A robust stochastic genetic algorithm (StGA) for global numerical optimization | last2=Lu | first2=Y. | journal=IEEE Transactions on Evolutionary Computation | year=2004 | volume=8 | issue=5 | pages=456–470 | doi=10.1109/TEVC.2004.831258| bibcode=2004ITEC....8..456T | s2cid=22382958 }}</ref>
 
<ref name="tu04corrections">{{cite journal | first1=Z. | last1=Tu | title=Corrections to "A Robust Stochastic Genetic Algorithm (StGA) for Global Numerical Optimization''" | last2=Lu | first2=Y. | journal=IEEE Transactions on Evolutionary Computation | year=2008 | volume=12 | issue=6 | pages=781 | doi=10.1109/TEVC.2008.926734| bibcode=2008ITEC...12..781T | s2cid=2864886 }}</ref>
 
<ref name="meissner06optimized">{{cite journal | first1=M. | last1=Meissner | title=Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training | last2=Schmuker | first2=M. | last3=Schneider | first3=G. | journal=BMC Bioinformatics | pmc=1464136 | year=2006 | volume=7 | issue=1 | pages=125 | doi=10.1186/1471-2105-7-125 | pmid=16529661 | doi-access=free }}</ref>
<ref name="coellocoello02MOPSO">{{cite conference | first1=C. | last1=Coello Coello | url=http://portal.acm.org/citation.cfm?id=1252327 | title=MOPSO: A Proposal for Multiple Objective Particle Swarm Optimization | last2=Salazar Lechuga | first2=M. | book-title=Congress on Evolutionary Computation (CEC'2002) | year=2002 | pages=1051–1056}}</ref>
 
<ref name="Chen10SPSO">{{cite journal | first1=Wei-neng | last1=Chen | title=A novel set-based particle swarm optimization method for discrete optimization problem | last2=Zhang | first2=Jun | journal=IEEE Transactions on Evolutionary Computation | year=2010 | volume=14 | issue=2 | pages=278–300 | doi=10.1109/tevc.2009.2030331| bibcode=2010ITEC...14..278C | citeseerx=10.1.1.224.5378 | s2cid=17984726 }}</ref>
 
<ref name="elshamy07sis">{{cite conference | first1=W. | last1=Elshamy | url=http://people.cis.ksu.edu/~welshamy/pubs/ieee_sis07.pdf | title=Clubs-based Particle Swarm Optimization | last2=Rashad | first2=H. | last3=Bahgat | first3=A. | book-title=IEEE Swarm Intelligence Symposium 2007 (SIS2007) | year=2007 | ___location=Honolulu, HI | pages=289–296 | access-date=2012-04-27 | archive-url=https://web.archive.org/web/20131023025232/http://people.cis.ksu.edu/~welshamy/pubs/ieee_sis07.pdf | archive-date=2013-10-23 | url-status=dead }}</ref>
<ref name="taherkhani2016inertia">{{cite journal | first1=M. | last1=Taherkhani | title=A novel stability-based adaptive inertia weight for particle swarm optimization | last2=Safabakhsh | first2=R. | journal=Applied Soft Computing | year=2016 | volume=38 | pages=281–295 | doi=10.1016/j.asoc.2015.10.004}}</ref>
 
<ref name="bratton2007">{{cite book | first1=Daniel | last1=Bratton | url=http://www.cil.pku.edu.cn/resources/pso_paper/src/2007SPSO.pdf | last2=Kennedy | first2=James | title=2007 IEEE Swarm Intelligence Symposium | chapter=Defining a Standard for Particle Swarm Optimization | pages=120–127 | year=2007 | doi=10.1109/SIS.2007.368035 | isbn=978-1-4244-0708-8 | s2cid=6217309 | archive-date=2016-01-27 | access-date=2016-01-22 | archive-url=https://web.archive.org/web/20160127030145/http://www.cil.pku.edu.cn/resources/pso_paper/src/2007SPSO.pdf | url-status=dead }}</ref>
 
<ref name="Zambrano-Bigiarini2013">{{cite book | first1=M. | last1=Zambrano-Bigiarini | last2=Clerc | first2=M. | last3=Rojas | first3=R. | title=2013 IEEE Congress on Evolutionary Computation | chapter=Standard Particle Swarm Optimisation 2011 at CEC-2013: A baseline for future PSO improvements | publisher=Evolutionary Computation (CEC), 2013 IEEE Congress on | pages=2337–2344 | year=2013| doi=10.1109/CEC.2013.6557848 | isbn=978-1-4799-0454-9 | s2cid=206553432 }}</ref>
 
<ref name="kennedy2002population">{{cite book | first1=J. | last1=Kennedy | last2=Mendes | first2=R. | date=2002 | title=Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600) | chapter=Population structure and particle swarm performance | volume=2 | pages=1671–1676 vol.2 | year=2002 | doi=10.1109/CEC.2002.1004493| isbn=978-0-7803-7282-5 | citeseerx=10.1.1.114.7988 | s2cid=14364974 }}</ref>
 
<ref name="oliveira2016communication">{{cite book | first1=M. | last1=Oliveira | last2=Pinheiro | first2=D. | last3=Andrade | first3=B. | last4=Bastos-Filho | first4=C. | last5=Menezes | first5=R. | title=Swarm Intelligence | chapter=Communication Diversity in Particle Swarm Optimizers | volume=9882 | year=2016 | pages=77–88 | doi=10.1007/978-3-319-44427-7_7| series=Lecture Notes in Computer Science | isbn=978-3-319-44426-0 | s2cid=37588745 }}</ref>
 
<ref name="nobile2015">{{cite conference | first1=M.S | last1=Nobile | title=Proactive particles in swarm optimization: a self-tuning algorithm based on fuzzy logic | last2=Pasi | first2=G. | last3=Cazzaniga | first3=P. | last4=Besozzi | first4=D. | last5=Colombo | first5=R. | last6=Mauri | first6=G. | book-title=Proceedings of the 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2015), Istanbul (Turkey) | year=2015 | pages=1–8| doi=10.1109/FUZZ-IEEE.2015.7337957 }}</ref>
 
<ref name=llso>{{cite journal | first1=Q. | last1=Yang | last2=Chen | first2=W-N. | last3=Deng | first3=J-D. | last4=Li | first4=Y. | last5=Gu | first5=T. | last6=Zhang | first6=J. | title=A Level-based Learning Swarm Optimizer for Large Scale Optimization | journal=IEEE Transactions on Evolutionary Computation | year=2018 | volume=22 | issue=4 | pages=578–594 | doi=10.1109/TEVC.2017.2743016 | bibcode=2018ITEC...22..578Y }}</ref>
 
<ref name=masoie>{{cite journal | first1=T-Y. | last1=Chen | last2=Chen | first2=W-N. | last3=Wei | first3=F-F. | last4=Hu | first4=X-M. | last5=Zhang | first5=J. | title=Multi-Agent Swarm Optimization With Adaptive Internal and External Learning for Complex Consensus-Based Distributed Optimization | journal=IEEE Transactions on Evolutionary Computation | year=2024 | volume=29 | issue=4 | page=1 | doi=10.1109/TEVC.2024.3380436 }}</ref>
}}
 
== External links ==
*[http://www.particleswarm.info Particle Swarm Central] is a repository for information on PSO. Several source codes are freely available.
*[http://vimeo.com/17407010 A brief video] of particle swarms optimizing three benchmark functions.
*[http://www.mathworks.com/matlabcentral/fileexchange/11559-particle-swarm-optimization-simulation Simulation of PSO convergence in a two-dimensional space (Matlab).] {{Webarchive|url=https://web.archive.org/web/20240414152449/http://www.mathworks.com/matlabcentral/fileexchange/11559-particle-swarm-optimization-simulation |date=2024-04-14 }}
*[http://www.vocal.com/particle-swarm-optimization/ Applications] of PSO.
*{{cite journal|doi=10.1016/j.eswa.2008.10.086|title=Automatic calibration of a rainfall–runoff model using a fast and elitist multi-objective particle swarm algorithm|journal=Expert Systems with Applications|volume=36|issue=5|pages=9533–9538|year=2009|last1=Liu|first1=Yang}}
*[http://www.adaptivebox.net/research/bookmark/psocodes_link.html Links to PSO source code] {{Webarchive|url=https://web.archive.org/web/20210415022817/http://www.adaptivebox.net/research/bookmark/psocodes_link.html |date=2021-04-15 }}
 
{{Major subfields of optimization}}
 
{{DEFAULTSORT:Particle swarm optimization}}
[[Category:Nature-inspired metaheuristics]]
[[Category:Optimization algorithms and methods]]
[[Category:Multi-agent systems]]