'''Bicriteria optimization''' denotes the special case in which there are two objective functions.
There is a direct relationship between [[multitask optimization]] and multi-objective optimization.<ref>J. -Y. Li, Z. -H. Zhan, Y. Li and J. Zhang, "Multiple Tasks for Multiple Objectives: A New Multiobjective Optimization Method via Multitask Optimization," in IEEE Transactions on Evolutionary Computation, {{doi|10.1109/TEVC.2023.3294307}}</ref>
== Introduction ==
=== Evolutionary algorithms ===
[[Evolutionary algorithms]] are popular approaches to generating Pareto optimal solutions to a multi-objective optimization problem. Most evolutionary multi-objective optimization (EMO) algorithms apply Pareto-based ranking schemes. Evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm-II (NSGA-II),<ref name="doi10.1109/4235.996017">{{Cite journal | doi = 10.1109/4235.996017| title = A fast and elitist multiobjective genetic algorithm: NSGA-II| journal = IEEE Transactions on Evolutionary Computation| volume = 6| issue = 2| pages = 182| year = 2002| last1 = Deb | first1 = K.| last2 = Pratap | first2 = A.| last3 = Agarwal | first3 = S.| last4 = Meyarivan | first4 = T.| citeseerx = 10.1.1.17.7771| s2cid = 9914171}}</ref> its extended version NSGA-III,<ref>{{Cite journal |last1=Deb |first1=Kalyanmoy |last2=Jain |first2=Himanshu |date=2014 |title=An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints |url=https://ieeexplore.ieee.org/document/6600851 |journal=IEEE Transactions on Evolutionary Computation |volume=18 |issue=4 |pages=577–601 |doi=10.1109/TEVC.2013.2281535 |s2cid=206682597 |issn=1089-778X}}</ref><ref>{{Cite journal |last1=Jain |first1=Himanshu |last2=Deb |first2=Kalyanmoy |date=2014 |title=An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach |url=https://ieeexplore.ieee.org/document/6595567 |journal=IEEE Transactions on Evolutionary Computation |volume=18 |issue=4 |pages=602–622 |doi=10.1109/TEVC.2013.2281534 |s2cid=16426862 |issn=1089-778X}}</ref> the Strength Pareto Evolutionary Algorithm 2 (SPEA-2), and multi-objective [[differential evolution]] variants have become standard approaches, although some schemes based on [[particle swarm optimization]] and [[simulated annealing]] are also significant.
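The Pareto-based ranking underlying these algorithms can be sketched as follows. The block below is a minimal illustration of non-dominated sorting (assuming NumPy; the random objective values and function names are purely illustrative), not the full NSGA-II procedure:

<syntaxhighlight lang="python">
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(F):
    """Rank a population by Pareto dominance, as in NSGA-II-style EMO algorithms.

    F is an (n_points, n_objectives) array of objective values;
    returns a list of index arrays, one per front (front 0 = non-dominated set).
    """
    n = len(F)
    dominated_by = [set() for _ in range(n)]  # solutions that i dominates
    dom_count = np.zeros(n, dtype=int)        # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(F[i], F[j]):
                dominated_by[i].add(j)
            elif i != j and dominates(F[j], F[i]):
                dom_count[i] += 1
    fronts = [np.flatnonzero(dom_count == 0)]
    while True:
        next_front = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    next_front.append(j)
        if not next_front:
            break
        fronts.append(np.array(next_front))
    return fronts

# Example: rank ten random bi-objective points.
rng = np.random.default_rng(0)
F = rng.random((10, 2))
print(non_dominated_sort(F))
</syntaxhighlight>

Front 0 contains the non-dominated solutions; later fronts are obtained by repeatedly peeling off the current non-dominated set.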
Another paradigm for evolutionary multi-objective optimization is based on novelty and was recently improved upon.<ref name=vargas2015>Danilo Vasconcellos Vargas, Junichi Murata, Hirotaka Takano, Alexandre Claudio Botazzo Delbem (2015), "[https://arxiv.org/abs/1901.00266 General Subpopulation Framework and Taming the Conflict Inside Populations]", Evolutionary computation 23 (1), 1-36.</ref> This paradigm searches for novel solutions in objective space (i.e., novelty search<ref>Lehman, Joel, and Kenneth O. Stanley. "Abandoning objectives: Evolution through the search for novelty alone." Evolutionary computation 19.2 (2011): 189-223.</ref> in objective space) in addition to the search for non-dominated solutions. Novelty search acts like stepping stones, guiding the search to previously unexplored regions of the objective space. It is especially useful for overcoming bias and plateaus, as well as for guiding the search in many-objective optimization problems.
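A simple way to quantify novelty in objective space is the mean distance of a solution's objective vector to its k nearest neighbors. The following minimal sketch (assuming NumPy; it is not the specific formulation used in the cited works) illustrates the idea:

<syntaxhighlight lang="python">
import numpy as np

def novelty_scores(F, k=5):
    """Novelty of each point as the mean distance to its k nearest
    neighbors in objective space (higher = more novel)."""
    # Pairwise Euclidean distances between objective vectors.
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # ignore self-distance
    nearest = np.sort(D, axis=1)[:, :k]  # k smallest distances per point
    return nearest.mean(axis=1)

# Example: score a random bi-objective population.
rng = np.random.default_rng(1)
F = rng.random((20, 2))
print(novelty_scores(F, k=3))
</syntaxhighlight>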
=== Deep learning methods ===
[[Deep learning]] conditional methods are new approaches to generating several Pareto optimal solutions. The idea is to use the generalization capacity of deep neural networks to learn a model of the entire Pareto front from a limited number of example trade-offs along that front, a task called ''Pareto Front Learning''.<ref name=":0">{{Cite journal |last1=Navon |first1=Aviv |last2=Shamsian |first2=Aviv |last3=Chechik |first3=Gal |last4=Fetaya |first4=Ethan |date=2021-04-26 |title=Learning the Pareto Front with Hypernetworks |url=https://openreview.net/pdf?id=NjF772F4ZZR |journal=Proceedings of International Conference on Learning Representations (ICLR)|arxiv=2010.04104 }}</ref> Several approaches address this setup, including using hypernetworks<ref name=":0" /> and using Stein variational gradient descent.<ref>{{Cite journal |last1=Xingchao |first1=Liu |last2=Xin |first2=Tong |last3=Qiang |first3=Liu |date=2021-12-06 |title=Profiling Pareto Front With Multi-Objective Stein Variational Gradient Descent |url=https://proceedings.neurips.cc/paper/2021/hash/7bb16972da003e87724f048d76b7e0e1-Abstract.html |journal=Advances in Neural Information Processing Systems |language=en |volume=34}}</ref>
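As a rough illustration of Pareto Front Learning (a toy sketch, not the exact method of the cited papers), a single preference-conditioned network can be trained with a scalarized loss so that sweeping the preference weight afterwards traces an approximation of the Pareto front. The example assumes PyTorch and an illustrative bi-objective problem:

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Toy bi-objective problem: both objectives are functions of a 2-D decision vector x.
def f1(x): return ((x - 1.0) ** 2).sum(dim=-1)
def f2(x): return ((x + 1.0) ** 2).sum(dim=-1)

# Preference-conditioned model: maps a preference weight w in [0, 1]
# (for the trade-off w*f1 + (1-w)*f2) to a decision vector x(w).
model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    w = torch.rand(128, 1)   # sample random preferences each iteration
    x = model(w)             # candidate solutions for each preference
    loss = (w[:, 0] * f1(x) + (1 - w[:, 0]) * f2(x)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, sweeping w approximates the Pareto front of (f1, f2).
with torch.no_grad():
    w = torch.linspace(0, 1, 11).unsqueeze(1)
    x = model(w)
    print(torch.stack([f1(x), f2(x)], dim=1))
</syntaxhighlight>

This sketch uses a plain weighted-sum scalarization, which can only recover convex portions of the Pareto front; the cited approaches rely on hypernetworks and more elaborate scalarizations to cover the full front.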
Commonly known a posteriori methods are listed below:
* ε-constraint method<ref name="Mavrotas2009">{{cite journal|last1=Mavrotas|first1=George|title=Effective implementation of the ε-constraint method in Multi-Objective Mathematical Programming problems|journal=Applied Mathematics and Computation|volume=213|issue=2|year=2009|pages=455–465|issn=0096-3003|doi=10.1016/j.amc.2009.03.037}}</ref><ref name="CarvalhoRibeiro2020">{{cite journal|last1=Carvalho|first1=Iago A.|last2=Ribeiro|first2=Marco A.|title=An exact approach for the Minimum-Cost Bounded-Error Calibration Tree problem|journal=Annals of Operations Research|volume=287|issue=1|year=2020|pages=109–126|issn=0254-5330|doi=10.1007/s10479-019-03443-4|s2cid=209959109}}</ref>
* Pareto-Hypernetworks<ref name=":0" />
* Multi-objective Branch-and-Bound<ref name="MavrotasDiakoulaki2005">{{cite journal|last1=Mavrotas|first1=G.|last2=Diakoulaki|first2=D.|title=Multi-criteria branch and bound: A vector maximization algorithm for Mixed 0-1 Multiple Objective Linear Programming|journal=Applied Mathematics and Computation|volume=171|issue=1|year=2005|pages=53–71|issn=0096-3003|doi=10.1016/j.amc.2005.01.038}}</ref><ref name="VincentSeipp2013">{{cite journal|last1=Vincent|first1=Thomas|last2=Seipp|first2=Florian|last3=Ruzika|first3=Stefan|last4=Przybylski|first4=Anthony|last5=Gandibleux|first5=Xavier|title=Multiple objective branch and bound for mixed 0-1 linear programming: Corrections and improvements for the biobjective case|journal=Computers & Operations Research|volume=40|issue=1|year=2013|pages=498–509|issn=0305-0548|doi=10.1016/j.cor.2012.08.003}}</ref><ref name="PrzybylskiGandibleux2017">{{cite journal|last1=Przybylski|first1=Anthony|last2=Gandibleux|first2=Xavier|title=Multi-objective branch and bound|journal=European Journal of Operational Research|volume=260|issue=3|year=2017|pages=856–872|issn=0377-2217|doi=10.1016/j.ejor.2017.01.032}}</ref>
* Normal Boundary Intersection (NBI)<ref name="doi10.1137/S1052623496307510" />
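As an illustration, the ε-constraint method listed above keeps one objective and turns the others into constraints. A minimal sketch for a toy bi-objective problem (assuming SciPy; the objectives, the ε grid, and the starting point are illustrative) might look as follows:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem over x in R^2.
def f1(x): return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
def f2(x): return (x[0] + 1.0) ** 2 + (x[1] + 1.0) ** 2

pareto_points = []
for eps in np.linspace(0.5, 8.0, 16):
    # Minimize f1 subject to f2(x) <= eps (the epsilon constraint).
    res = minimize(
        f1,
        x0=np.zeros(2),
        method="SLSQP",
        constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}],
    )
    if res.success:
        pareto_points.append((f1(res.x), f2(res.x)))

print(pareto_points)
</syntaxhighlight>

Varying ε over a grid yields a set of (weakly) Pareto optimal trade-offs that approximates the Pareto front.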
# classification of objective functions.<ref name=Miettinen2008 />
On the other hand, a fourth type, the generation of a small sample of solutions, is included in:<ref name=Luque2011>{{cite journal | last1 = Luque | first1 = M. | last2 = Ruiz | first2 = F. | last3 = Miettinen | first3 = K. | title = Global formulation for interactive multiobjective optimization | doi = 10.1007/s00291-008-0154-3 | journal = OR Spectrum | volume = 33 | pages = 27–48 | year = 2008 | s2cid = 15050545 | url = http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-65364}}</ref><ref name=Ruiz2012>{{Cite journal | last1 = Ruiz | first1 = F. | last2 = Luque | first2 = M. | last3 = Miettinen | first3 = K. | title = Improving the computational efficiency in a global formulation (GLIDE) for interactive multiobjective optimization | doi = 10.1007/s10479-010-0831-x | journal = Annals of Operations Research | volume = 197 | pages = 47–70 | year = 2011 | s2cid = 14947919 | url = http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-63800}}</ref> An example of an interactive method utilizing trade-off information is the [[Zionts-Wallenius method]],<ref name=Zionts1976>{{cite journal | last1 = Zionts | first1 = S. | last2 = Wallenius | first2 = J. | doi = 10.1287/mnsc.22.6.652 | title = An Interactive Programming Method for Solving the Multiple Criteria Problem | journal = Management Science | volume = 22 | issue = 6 | pages = 652 | year = 1976 }}</ref> in which the decision maker is shown several objective trade-offs at each iteration and is expected to say whether they like, dislike, or are indifferent to each trade-off. In reference point-based methods (see, e.g., Nakayama et al.), the decision maker is asked at each iteration to specify a reference point consisting of desirable values for each objective, and a corresponding Pareto optimal solution is then computed and shown to them for analysis.
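In such reference point-based methods, the reference point is typically projected onto the Pareto front by minimizing an achievement scalarizing function. The sketch below (assuming SciPy, with an illustrative bi-objective problem, reference point, and weights) shows the basic construction:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Illustrative bi-objective problem over x in R^2.
def objectives(x):
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2
    f2 = x[0] ** 2 + (x[1] - 1.0) ** 2
    return np.array([f1, f2])

def achievement(x, z_ref, w, rho=1e-4):
    """Achievement scalarizing function for a reference point z_ref:
    max_i w_i*(f_i(x) - z_ref_i) plus a small augmentation term."""
    d = objectives(x) - z_ref
    return np.max(w * d) + rho * np.sum(d)

# Reference point supplied by the decision maker (aspiration levels for f1, f2).
z_ref = np.array([0.3, 0.6])
w = np.array([1.0, 1.0])  # scaling weights

res = minimize(achievement, x0=np.zeros(2), args=(z_ref, w), method="Nelder-Mead")
print(res.x, objectives(res.x))  # Pareto optimal solution corresponding to z_ref
</syntaxhighlight>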
== Hybrid methods ==
Different [[hybrid algorithm|hybrid]] methods exist, but here we consider hybridizing MCDM ([[multi-criteria decision-making]]) and EMO (evolutionary multi-objective optimization). A hybrid algorithm in multi-objective optimization combines algorithms and approaches from these two fields.
The roots of hybrid multi-objective optimization can be traced to the first Dagstuhl seminar organized in November 2004 (see [http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=04461 here]). There, leading researchers in EMO (Kalyanmoy Deb, Jürgen Branke, etc.) and MCDM (Kaisa Miettinen, Ralph E. Steuer, etc.) recognized the potential of combining ideas and approaches from the two fields into hybrid methods. Subsequently, many more Dagstuhl seminars have been arranged to foster collaboration. Hybrid multi-objective optimization has since become an important theme in several international conferences in the areas of EMO and MCDM.
== Visualization of the Pareto front ==
Visualization of the Pareto front is one of the a posteriori preference techniques of multi-objective optimization. The a posteriori preference techniques provide an important class of multi-objective optimization techniques.<ref name="Miettinen1999" /> Usually, the a posteriori preference techniques comprise four steps: (1) the computer approximates the Pareto front, i.e., the Pareto optimal set in the objective space; (2) the decision maker studies the Pareto front approximation; (3) the decision maker identifies the preferred point on the Pareto front; (4) the computer provides the Pareto optimal decision whose image in the objective space coincides with the point identified by the decision maker. From the point of view of the decision maker, the second step is the most complicated. There are two main approaches to informing the decision maker. First, a number of points of the Pareto front can be provided in the form of a list (an interesting discussion and references are given in<ref name="BensonSayin1997">{{cite journal|last1=Benson|first1=Harold P.|last2=Sayin|first2=Serpil|title=Towards finding global representations of the efficient set in multiple objective mathematical programming|journal=Naval Research Logistics|volume=44|issue=1|year=1997|pages=47–67|issn=0894-069X|doi=10.1002/(SICI)1520-6750(199702)44:1<47::AID-NAV3>3.0.CO;2-M|hdl=11693/25666|url=http://repository.bilkent.edu.tr/bitstream/11693/25666/1/Towards%20finding%20global%20representations%20of%20the%20efficient%20set%20in%20multiple%20objective%20mathematical%20programming.pdf}}</ref>) or using heatmaps.<ref name="Pryke, Mostaghim, Nazemi">{{cite book|last=Pryke|first=Andy|author2=Sanaz Mostaghim |author3=Alireza Nazemi |title=Evolutionary Multi-Criterion Optimization |chapter=Heatmap Visualization of Population Based Multi Objective Algorithms |volume=4403|year=2007|pages=361–375|doi=10.1007/978-3-540-70928-2_29|series=Lecture Notes in Computer Science|isbn=978-3-540-70927-5|s2cid=2502459 }}</ref>
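A minimal sketch of steps (1) and (2) for a bi-objective problem, filtering the non-dominated points out of a random sample and plotting them as a tradeoff curve, could look as follows (assuming NumPy and Matplotlib; the sampling-based approximation and the objectives are purely illustrative):

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

# Sample candidate solutions and evaluate two illustrative objectives.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(5000, 2))
F = np.column_stack([((X - 1.0) ** 2).sum(axis=1), ((X + 1.0) ** 2).sum(axis=1)])

def non_dominated(F):
    """Boolean mask of points not dominated by any other point (minimization)."""
    keep = np.ones(len(F), dtype=bool)
    for i, f in enumerate(F):
        if keep[i]:
            # Points strictly worse than f in at least one objective and no better in any.
            dominated = np.all(f <= F, axis=1) & np.any(f < F, axis=1)
            keep &= ~dominated
    return keep

P = F[non_dominated(F)]
P = P[np.argsort(P[:, 0])]  # sort by f1 for a clean tradeoff curve

plt.plot(P[:, 0], P[:, 1], "o-", markersize=3)
plt.xlabel("$f_1$")
plt.ylabel("$f_2$")
plt.title("Approximated Pareto front (tradeoff curve)")
plt.show()
</syntaxhighlight>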
=== Visualization in bi-objective problems: tradeoff curve ===