Test functions for optimization
{{Short description|Functions used to evaluate characteristics of optimization algorithms}}
In applied mathematics, '''test functions''', known as '''artificial landscapes''', are useful to evaluate characteristics of optimization algorithms, such as [[Rate of convergence|convergence rate]], precision, robustness and general performance.
 
Here some test functions are presented with the aim of giving an idea about the different situations that optimization algorithms have to face when coping with these kinds of problems. First, some objective functions for single-objective optimization cases are presented. Then, test functions for constrained optimization are given. Finally, test functions with their respective [[Pareto front|Pareto fronts]] for [[multi-objective optimization]] problems (MOP) are shown.
 
The artificial landscapes presented herein for single-objective optimization problems are taken from Bäck,<ref>{{cite book|last=Bäck|first=Thomas|title=Evolutionary algorithms in theory and practice : evolution strategies, evolutionary programming, genetic algorithms|year=1995|publisher=Oxford University Press|___location=Oxford|isbn=978-0-19-509971-3|page=328}}</ref> Haupt et al.<ref>{{cite book|last1=Haupt|first1=Randy L.|last2=Haupt|first2=Sue Ellen|title=Practical genetic algorithms with CD-ROM|year=2004|publisher=J. Wiley|___location=New York|isbn=978-0-471-45565-3|edition=2nd}}</ref> and from Rody Oldenhuis software.<ref>{{cite web|last=Oldenhuis|first=Rody|title=Many test functions for global optimizers|url=http://www.mathworks.com/matlabcentral/fileexchange/23147-many-testfunctions-for-global-optimizers|publisher=Mathworks|access-date=1 November 2012}}</ref> Given the number of problems (55 in total), just a few are presented here. The complete list of test functions is found on the Mathworks website.<ref>{{cite web|last=Ortiz|first=Gilberto A.|title=Evolution Strategies (ES)|url=http://www.mathworks.com/matlabcentral/fileexchange/35801-evolution-strategies-es|publisher=Mathworks|access-date=1 November 2012}}</ref>
 
The test functions used to evaluate the algorithms for MOP were taken from Deb,<ref name="Deb:2002">Deb, Kalyanmoy (2002) Multiobjective optimization using evolutionary algorithms (Repr. ed.). Chichester [u.a.]: Wiley. {{ISBN|0-471-87339-X}}.</ref> Binh et al.<ref name="Binh97">Binh T. and Korn U. (1997) [https://web.archive.org/web/20190801183649/https://pdfs.semanticscholar.org/cf68/41a6848ca2023342519b0e0e536b88bdea1d.pdf MOBES: A Multiobjective Evolution Strategy for Constrained Optimization Problems]. In: Proceedings of the Third International Conference on Genetic Algorithms. Czech Republic. pp. 176–182</ref> and Binh.<ref name="Binh99">Binh T. (1999) [https://www.researchgate.net/profile/Thanh_Binh_To/publication/2446107_A_Multiobjective_Evolutionary_Algorithm_The_Study_Cases/links/53eb422f0cf28f342f45251d.pdf A multiobjective evolutionary algorithm. The study cases.] Technical report. Institute for Automation and Communication. Barleben, Germany</ref> The software developed by Deb can be downloaded,<ref name="Deb_nsga">Deb K. (2011) Software for multi-objective NSGA-II code in C. Available at https://www.iitk.ac.in/kangal/codes.shtml. Revision 1.1.6</ref> which implements the NSGA-II procedure with GAs, or the program posted on the Internet,<ref>{{cite web|last=Ortiz|first=Gilberto A.|title=Multi-objective optimization using ES as Evolutionary Algorithm.|url=http://www.mathworks.com/matlabcentral/fileexchange/35824-multi-objective-optimization-using-evolution-strategies-es-as-evolutionary-algorithm-ea|publisher=Mathworks|access-date=1 November 2012}}</ref> which implements the NSGA-II procedure with ES.
 
Just a general form of the equation, a plot of the objective function, boundaries of the object variables and the coordinates of global minima are given herein.
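
As an informal illustration (a sketch for this article, not taken from the cited references), the following Python snippet shows how such a test function is typically used as a black box when benchmarking an optimizer: the algorithm, here a plain random search standing in for a real optimizer, is judged by how close it gets to the known global minimum within a fixed evaluation budget. The helper names (<code>rastrigin</code>, <code>random_search</code>) are illustrative only.

<syntaxhighlight lang="python">
import math
import random


def rastrigin(x, a=10.0):
    """Rastrigin function; global minimum f(0, ..., 0) = 0."""
    return a * len(x) + sum(xi * xi - a * math.cos(2 * math.pi * xi) for xi in x)


def random_search(f, n, bound, budget=10000, seed=0):
    """Baseline optimizer: sample uniform random points, keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, math.inf
    for _ in range(budget):
        x = [rng.uniform(-bound, bound) for _ in range(n)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f


x_best, f_best = random_search(rastrigin, n=2, bound=5.12)
print(f_best)  # close to, but above, the global minimum 0
</syntaxhighlight>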
 
==Test functions for single-objective optimization problems==
 
{| class="sortable wikitable" style="text-align:center"
! Name
! Plot
! Formula
! Global minimum
! Search ___domain
|-
| [[Rastrigin function]]
| [[File:Rastrigin contour plot.svg|200px|Rastrigin function for n=2]]
|<math>f(\mathbf{x}) = A n + \sum_{i=1}^n \left[x_i^2 - A\cos(2 \pi x_i)\right]</math>
<math>\text{where: } A=10</math>
|<math>f(0, \dots, 0) = 0</math>
|<math>-5.12\le x_{i} \le 5.12 </math>
|-
| [[Ackley's function]]
|| [[File:Ackley's contour.svg|200px|Ackley's function for n=2]]
|| <math>f(x,y) = -20\exp\left[-0.2\sqrt{0.5\left(x^{2}+y^{2}\right)}\right]</math>
<math>-\exp\left[0.5\left(\cos\left( 2\pi x\right) + \cos\left( 2\pi y \right)\right)\right] + e + 20</math>
||<math>f(0,0) = 0</math>
||<math>-5\le x,y \le 5</math>
|-
| Sphere function
|| [[File:Sphere contour.svg|200px|Sphere function for n=2]]
|| <math>f(\boldsymbol{x}) = \sum_{i=1}^{n} x_{i}^{2}</math>
|| <math>f(x_{1}, \dots, x_{n}) = f(0, \dots, 0) = 0</math>
|| <math>-\infty \le x_{i} \le \infty</math>, <math>1 \le i \le n</math>
|-
| [[Rosenbrock function]]
|| [[File:Rosenbrock's contour.svg|200px|Rosenbrock's function for n=2]]
|| <math>f(\boldsymbol{x}) = \sum_{i=1}^{n-1} \left[ 100 \left(x_{i+1} - x_{i}^{2}\right)^{2} + \left(1 - x_{i}\right)^{2}\right]</math>
|| <math>\text{Min} =
\begin{cases}
n=2 & \rightarrow \quad f(1,1) = 0, \\
n=3 & \rightarrow \quad f(1,1,1) = 0, \\
n>3 & \rightarrow \quad f\left(\underbrace{1,\dots,1}_{(n) \text{ times}}\right) = 0 \\
\end{cases}
</math>
|| <math>-\infty \le x_{i} \le \infty</math>, <math>1 \le i \le n</math>
|-
| [[Beale's function]]
|| [[File:Beale's contour.svg|200px|Beale's function]]
|| <math>f(x,y) = \left( 1.5 - x + xy \right)^{2} + \left( 2.25 - x + xy^{2}\right)^{2}</math>
<math>+ \left(2.625 - x+ xy^{3}\right)^{2}</math>
|| <math>f(3, 0.5) = 0</math>
|| <math>-4.5 \le x,y \le 4.5</math>
|-
| [[Goldstein–Price function]]
|| [[File:Goldstein-Price contour.svg|200px|Goldstein–Price function]]
|| <math>f(x,y) = \left[1+\left(x+y+1\right)^{2}\left(19-14x+3x^{2}-14y+6xy+3y^{2}\right)\right]</math>
<math>\left[30+\left(2x-3y\right)^{2}\left(18-32x+12x^{2}+48y-36xy+27y^{2}\right)\right]</math>
|| <math>f(0, -1) = 3</math>
|| <math>-2 \le x,y \le 2</math>
|-
| [[Booth's function]]
|| [[File:Booth's contour.svg|200px|Booth's function]]
||<math>f(x,y) = \left( x + 2y -7\right)^{2} + \left(2x +y - 5\right)^{2}</math>
||<math>f(1,3) = 0</math>
||<math>-10 \le x,y \le 10</math>
|-
| Bukin function N.6
|| [[File:Bukin function 6 contour.svg|200px|Bukin function N.6]]
|| <math>f(x,y) = 100\sqrt{\left|y - 0.01x^{2}\right|} + 0.01 \left|x+10 \right|.\quad</math>
|| <math>f(-10,1) = 0</math>
|| <math>-15\le x \le -5</math>, <math>-3\le y \le 3</math>
|-
| [[Matyas function]]
|| [[File:Matyas contour.svg|200px|Matyas function]]
|| <math>f(x,y) = 0.26 \left( x^{2} + y^{2}\right) - 0.48 xy</math>
|| <math>f(0,0) = 0</math>
|| <math>-10\le x,y \le 10</math>
|-
| Lévi function N.13
||[[File:Levi13 contour.svg|200px|Lévi function N.13]]
|| <math>f(x,y) = \sin^{2}\left( 3\pi x\right) + \left(x-1\right)^{2}\left(1+\sin^{2}\left( 3\pi y\right)\right)</math>
<math>+\left(y-1\right)^{2}\left(1+\sin^{2}\left( 2\pi y\right)\right)</math>
|| <math>f(1,1) = 0</math>
|| <math>-10\le x,y \le 10</math>
|-
| [[Griewank function]]
| [[File:Griewank 2D Contour.svg|200px|Griewank's function]]
| <math>f(\boldsymbol{x})= 1+ \frac {1}{4000} \sum _{i=1}^n x_i^2 -\prod _{i=1}^n P_i(x_i)</math>, where <math>P_i(x_i)=\cos \left( \frac {x_i}{\sqrt {i}} \right)</math>
|<math>f(0, \dots, 0) = 0</math>
|<math>-\infty \le x_{i} \le \infty</math>, <math>1 \le i \le n</math>
|-
| [[Himmelblau's function]]
|[[File:Himmelblau contour plot.svg|200px|Himmelblau's function]]
| <math>f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2.\quad</math>
| <math>\text{Min} =
\begin{cases}
f\left(3.0, 2.0\right) & = 0.0 \\
f\left(-2.805118, 3.131312\right) & = 0.0 \\
f\left(-3.779310, -3.283186\right) & = 0.0 \\
f\left(3.584428, -1.848126\right) & = 0.0 \\
\end{cases}
</math>
| <math>-5\le x,y \le 5</math>
|-
| Three-hump camel function
|| [[File:Three-hump-camel contour.svg|200px|Three-hump camel function]]
|| <math>f(x,y) = 2x^{2} - 1.05x^{4} + \frac{x^{6}}{6} + xy + y^{2}</math>
|| <math>f(0,0) = 0</math>
|| <math>-5\le x,y \le 5</math>
|-
| [[Easom function]]
|| [[File:Easom contour.svg|200px|Easom function]]
|| <math>f(x,y) = -\cos \left(x\right)\cos \left(y\right) \exp\left(-\left(\left(x-\pi\right)^{2} + \left(y-\pi\right)^{2}\right)\right)</math>
|| <math>f(\pi , \pi) = -1</math>
|| <math>-100\le x,y \le 100</math>
|-
| Cross-in-tray function
|| [[File:Cross-in-tray contour.svg|200px|Cross-in-tray function]]
|| <math>f(x,y) = -0.0001 \left[ \left| \sin \left(x\right) \sin \left(y\right) \exp \left( \left|100 - \frac{\sqrt{x^{2} + y^{2}}}{\pi} \right|\right)\right| + 1 \right]^{0.1}</math>
|| <math>\text{Min} =
\begin{cases}
f\left(1.34941, -1.34941\right) & = -2.06261 \\
f\left(1.34941, 1.34941\right) & = -2.06261 \\
f\left(-1.34941, 1.34941\right) & = -2.06261 \\
f\left(-1.34941, -1.34941\right) & = -2.06261
\end{cases}
</math>
|| <math>-10\le x,y \le 10</math>
|-
| [[Eggholder function]]<ref name="Whitley Rana Dzubera Mathias 1996 pp. 245–276">{{cite journal | last1=Whitley | first1=Darrell | last2=Rana | first2=Soraya | last3=Dzubera | first3=John | last4=Mathias | first4=Keith E. | title=Evaluating evolutionary algorithms | journal=Artificial Intelligence | publisher=Elsevier BV | volume=85 | issue=1–2 | year=1996 | issn=0004-3702 | doi=10.1016/0004-3702(95)00124-7 | pages=264| doi-access=free }}</ref><ref name="vanaret2015hybridation">Vanaret C. (2015) [https://www.researchgate.net/publication/337947149_Hybridization_of_interval_methods_and_evolutionary_algorithms_for_solving_difficult_optimization_problems Hybridization of interval methods and evolutionary algorithms for solving difficult optimization problems.] PhD thesis. Ecole Nationale de l'Aviation Civile. Institut National Polytechnique de Toulouse, France.</ref>
|| [[File:Eggholder contour.svg|200px|Eggholder function]]
|| <math>f(x,y) = - \left(y+47\right) \sin \left(\sqrt{\left|\frac{x}{2}+\left(y+47\right)\right|}\right) - x \sin \left(\sqrt{\left|x - \left(y + 47 \right)\right|}\right)</math>
|| <math>f(512, 404.2319) = -959.6407</math>
|| <math>-512\le x,y \le 512</math>
|-
| [[Hölder table function]]
|| [[File:Hoelder table contour.svg|200px|Hölder table function]]
|| <math>f(x,y) = - \left|\sin \left(x\right) \cos \left(y\right) \exp \left(\left|1 - \frac{\sqrt{x^{2} + y^{2}}}{\pi} \right|\right)\right|</math>
|| <math>\text{Min} =
\begin{cases}
f\left(8.05502, 9.66459\right) & = -19.2085 \\
f\left(-8.05502, 9.66459\right) & = -19.2085 \\
f\left(8.05502, -9.66459\right) & = -19.2085 \\
f\left(-8.05502, -9.66459\right) & = -19.2085
\end{cases}
</math>
|| <math>-10\le x,y \le 10</math>
|-
| [[McCormick function]]
|| [[File:McCormick contour.svg|200px|McCormick function]]
|| <math>f(x,y) = \sin \left(x+y\right) + \left(x-y\right)^{2} - 1.5x + 2.5y + 1</math>
|| <math>f(-0.54719,-1.54719) = -1.9133</math>
|| <math>-1.5\le x \le 4</math>, <math>-3\le y \le 4</math>
|-
| Schaffer function N. 2
|| [[File:Schaffer2 contour.svg|200px|Schaffer function N.2]]
|| <math>f(x,y) = 0.5 + \frac{\sin^{2}\left(x^{2} - y^{2}\right) - 0.5}{\left[1 + 0.001\left(x^{2} + y^{2}\right) \right]^{2}}</math>
|| <math>f(0, 0) = 0</math>
|| <math>-100\le x,y \le 100</math>
|-
| Schaffer function N. 4
|| [[File:Schaffer4 contour.svg|200px|Schaffer function N.4]]
|| <math>f(x,y) = 0.5 + \frac{\cos^{2}\left[\sin \left( \left|x^{2} - y^{2}\right|\right)\right] - 0.5}{\left[1 + 0.001\left(x^{2} + y^{2}\right) \right]^{2}}</math>
|| <math>\text{Min} =
\begin{cases}
f\left(0,1.25313\right) & = 0.292579 \\
f\left(0,-1.25313\right) & = 0.292579 \\
f\left(1.25313,0\right) & = 0.292579 \\
f\left(-1.25313,0\right) & = 0.292579
\end{cases}
</math>
|| <math>-100\le x,y \le 100</math>
|-
| [[Styblinski–Tang function]]
|| [[File:Styblinski-Tang contour.svg|200px|Styblinski–Tang function]]
|| <math>f(\boldsymbol{x}) = \frac{1}{2}\sum_{i=1}^{n} \left(x_{i}^{4} - 16x_{i}^{2} + 5x_{i}\right)</math>
|| <math>-39.16617n < f\left(\underbrace{-2.903534, \ldots, -2.903534}_{(n) \text{ times}} \right) < -39.16616n</math>
|| <math>-5\le x_{i} \le 5</math>, <math>1\le i \le n</math>.
|-
| [[Shekel function]]
|| [[Image:Shekel_2D.jpg|200px|A Shekel function in 2 dimensions and with 10 maxima]]
|| <math>f(\boldsymbol{x}) = \sum_{i = 1}^{m} \; \left( c_{i} + \sum\limits_{j = 1}^{n} (x_{j} - a_{ji})^2 \right)^{-1}</math>
|
|| <math>-\infty \le x_{i} \le \infty</math>, <math>1 \le i \le n</math>
|}
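
The tabulated definitions translate directly into code. As a rough sketch (assuming IEEE double precision and the formulas exactly as given above), the following Python definitions implement three of the rows and spot-check the listed global minima; the looser tolerance for the Eggholder function reflects the rounded coordinates quoted in the table.

<syntaxhighlight lang="python">
import math


def rosenbrock(x):
    """Rosenbrock function; global minimum f(1, ..., 1) = 0."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))


def himmelblau(x, y):
    """Himmelblau's function; one of its four global minima is f(3, 2) = 0."""
    return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2


def eggholder(x, y):
    """Eggholder function on the square -512 <= x, y <= 512."""
    return (-(y + 47.0) * math.sin(math.sqrt(abs(x / 2.0 + y + 47.0)))
            - x * math.sin(math.sqrt(abs(x - (y + 47.0)))))


assert rosenbrock([1.0, 1.0, 1.0]) == 0.0
assert himmelblau(3.0, 2.0) == 0.0
assert abs(eggholder(512.0, 404.2319) - (-959.6407)) < 1e-2
</syntaxhighlight>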
 
==Test functions for constrained optimization==
 
{| class="wikitable" style="text-align:center"
|-
! Name !! Plot !! Formula !! Global minimum !! Search ___domain
|-
| Rosenbrock function constrained to a disk<ref>{{Cite web|url=https://www.mathworks.com/help/optim/ug/example-nonlinear-constrained-minimization.html?requestedDomain=www.mathworks.com|title=Solve a Constrained Nonlinear Problem - MATLAB & Simulink|website=www.mathworks.com|access-date=2017-08-29}}</ref>
|| [[File:Rosenbrock circle constraint.svg|200px|Rosenbrock function constrained to a disk]]
|| <math>f(x,y) = (1-x)^2 + 100(y-x^2)^2</math>,
 
subjected to: <math> x^2 + y^2 \le 2 </math>
|| <math>f(1.0,1.0) = 0</math>
|| <math>-1.5\le x \le 1.5</math>, <math>-1.5\le y \le 1.5</math>
|-
| Mishra's Bird function - constrained<ref>{{Cite web|url=http://www.phoenix-int.com/software/benchmark_report/bird_constrained.php|title=Bird Problem (Constrained) {{!}} Phoenix Integration|access-date=2017-08-29|url-status=bot: unknown|archive-url=https://web.archive.org/web/20161229032528/http://www.phoenix-int.com/software/benchmark_report/bird_constrained.php|archive-date=2016-12-29}}</ref><ref>{{Cite journal|last=Mishra|first=Sudhanshu|date=2006|title=Some new test functions for global optimization and performance of repulsive particle swarm method|url=https://mpra.ub.uni-muenchen.de/2718/|journal=MPRA Paper}}</ref>
|| [[File:Mishra bird contour.svg|200px|Bird function (constrained)]]
|| <math>f(x,y) = \sin(y) e^{\left [(1-\cos x)^2\right]} + \cos(x) e^{\left [(1-\sin y)^2 \right]} + (x-y)^2</math>,
subjected to: <math> (x+5)^2 + (y+5)^2 < 25 </math>
|| <math>f(-3.1302468,-1.5821422) = -106.7645367</math>
|| <math>-10\le x \le 0</math>, <math>-6.5\le y \le 0</math>
|-
| Townsend function (modified)<ref>{{Cite web|url=http://www.chebfun.org/examples/opt/ConstrainedOptimization.html|title=Constrained optimization in Chebfun|last=Townsend|first=Alex|date=January 2014|website=chebfun.org|access-date=2017-08-29}}</ref>
|| [[File:Townsend contour.svg|200px|Heart constrained multimodal function]]
|| <math>f(x,y) = -[\cos((x-0.1)y)]^2 - x \sin(3x+y)</math>,
subjected to:<math>x^2+y^2 < \left[2\cos t - \frac 1 2 \cos 2t - \frac 1 4 \cos 3t - \frac 1 8 \cos 4t\right]^2 + [2\sin t]^2 </math>
where: {{Math|1=''t'' = Atan2(x,y)}}
|| <math>f(2.0052938,1.1944509) = -2.0239884</math>
|| <math>-2.25\le x \le 2.25</math>, <math>-2.5\le y \le 1.75</math>
 
|-
| '''Keane's bump function'''{{anchor|Keane's bump function}}<ref>{{cite journal |last1=Mishra |first1=Sudhanshu |title=Minimization of Keane’s Bump Function by the Repulsive Particle Swarm and the Differential Evolution Methods |date=5 May 2007 |url=https://econpapers.repec.org/paper/pramprapa/3098.htm |journal=MPRA Paper|publisher=University Library of Munich, Germany}}</ref>
|| [[File:Estimation of Distribution Algorithm animation.gif|200px|Keane's bump function]]
|| <math>f(\boldsymbol{x}) = -\left| \frac{\left[ \sum_{i=1}^m \cos^4 (x_i) - 2 \prod_{i=1}^m \cos^2 (x_i) \right]}{{\left( \sum_{i=1}^m ix^2_i \right)}^{0.5}} \right| </math>,
subjected to: <math> 0.75 - \prod_{i=1}^m x_i < 0 </math>, and
<math> \sum_{i=1}^m x_i - 7.5m < 0 </math>
|| <math>f(1.60025376, 0.468675907) = -0.364979746</math>
|| <math>0 < x_i < 10</math>
|}
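
A common way to attack these constrained problems is to fold the constraint into the objective. The sketch below is one simple choice among many (penalty functions, repair operators and feasibility rules are all used in practice): infeasible points of the Rosenbrock-on-a-disk problem above receive an infinite "death penalty", and the remaining feasible region is searched at random. It is a minimal illustration, not a recommended solver.

<syntaxhighlight lang="python">
import math
import random


def rosenbrock2d(x, y):
    """Unconstrained objective; the constrained minimum is f(1, 1) = 0."""
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2


def penalized(x, y):
    """Death penalty: points outside the disk x^2 + y^2 <= 2 score +inf."""
    if x ** 2 + y ** 2 > 2.0:
        return math.inf
    return rosenbrock2d(x, y)


rng = random.Random(1)
best = min(penalized(rng.uniform(-1.5, 1.5), rng.uniform(-1.5, 1.5))
           for _ in range(100000))
print(best)  # approaches the constrained minimum 0 at (1, 1)
</syntaxhighlight>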
 
==Test functions for multi-objective optimization==
 
{{explain|reason=What does it mean to minimize two objective functions?|date=September 2016}}
 
{| class="wikitable" style="text-align:center"
|-
! Name !! Plot !! Functions !! Constraints !! Search ___domain
|-
| [[Binh and Korn function]]:<ref name="Binh97"/>
|| [[File:Binh and Korn function.pdf|200px|Binh and Korn function]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x,y\right) & = 4x^{2} + 4y^{2} \\
f_{2}\left(x,y\right) & = \left(x - 5\right)^{2} + \left(y - 5\right)^{2} \\
\end{cases}
</math>
||<math>\text{s.t.} =
\begin{cases}
g_{1}\left(x,y\right) & = \left(x - 5\right)^{2} + y^{2} \leq 25 \\
g_{2}\left(x,y\right) & = \left(x - 8\right)^{2} + \left(y + 3\right)^{2} \geq 7.7 \\
\end{cases}
</math>
|| <math>0\le x \le 5</math>, <math>0\le y \le 3</math>
|-
| [[Chankong and Haimes function]]:<ref>{{cite book |last1=Chankong |first1=Vira |last2=Haimes |first2=Yacov Y. |title=Multiobjective decision making. Theory and methodology. |isbn=0-444-00710-5|year=1983 |publisher=North Holland }}</ref>
|| [[File:Chakong and Haimes function.pdf|200px|Chakong and Haimes function]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x,y\right) & = 2 + \left(x-2\right)^{2} + \left(y-1\right)^{2} \\
f_{2}\left(x,y\right) & = 9x - \left(y - 1\right)^{2} \\
\end{cases}
</math>
|| <math>\text{s.t.} =
\begin{cases}
g_{1}\left(x,y\right) & = x^{2} + y^{2} \leq 225 \\
g_{2}\left(x,y\right) & = x - 3y + 10 \leq 0 \\
\end{cases}
</math>
|| <math>-20\le x,y \le 20</math>
|-
| [[Fonseca–Fleming function]]:<ref name="FonzecaFleming:1995">{{cite journal |first1=C. M. |last1=Fonseca |first2=P. J. |last2=Fleming |title=An Overview of Evolutionary Algorithms in Multiobjective Optimization |journal=[[Evolutionary Computation (journal)|Evol Comput]] |volume=3 |issue=1 |pages=1–16 |year=1995 |doi=10.1162/evco.1995.3.1.1 |citeseerx=10.1.1.50.7779 |s2cid=8530790 }}</ref>
|| [[File:Fonseca and Fleming function.pdf|200px|Fonseca and Fleming function]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(\boldsymbol{x}\right) & = 1 - \exp \left[-\sum_{i=1}^{n} \left(x_{i} - \frac{1}{\sqrt{n}} \right)^{2} \right] \\
f_{2}\left(\boldsymbol{x}\right) & = 1 - \exp \left[-\sum_{i=1}^{n} \left(x_{i} + \frac{1}{\sqrt{n}} \right)^{2} \right] \\
\end{cases}
</math>
||
||<math>-4\le x_{i} \le 4</math>, <math>1\le i \le n</math>.
|-
| Test function 4:<ref name="Binh99"/>
|| [[File:Test function 4 - Binh.pdf|200px|Test function 4.<ref name="Binh99" />]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x,y\right) & = x^{2} - y \\
f_{2}\left(x,y\right) & = -0.5x - y - 1 \\
\end{cases}
</math>
|| <math>\text{s.t.} =
\begin{cases}
g_{1}\left(x,y\right) & = 6.5 - \frac{x}{6} - y \geq 0 \\
g_{2}\left(x,y\right) & = 7.5 - 0.5x - y \geq 0 \\
g_{3}\left(x,y\right) & = 30 - 5x - y \geq 0 \\
\end{cases}
</math>
|| <math>-7\le x,y \le 4</math>
|-
| [[Kursawe function]]:<ref name="Kursawe:1991">F. Kursawe, “[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.47.8050&rep=rep1&type=pdf A variant of evolution strategies for vector optimization],” in [[Parallel Problem Solving from Nature|PPSN]] I, Vol 496 Lect Notes in Comput Sc. Springer-Verlag, 1991, pp.&nbsp;193–197.</ref>
|| [[File:Kursawe function.pdf|200px|Kursawe function]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(\boldsymbol{x}\right) & = \sum_{i=1}^{2} \left[-10 \exp \left(-0.2 \sqrt{x_{i}^{2} + x_{i+1}^{2}} \right) \right] \\
& \\
f_{2}\left(\boldsymbol{x}\right) & = \sum_{i=1}^{3} \left[\left|x_{i}\right|^{0.8} + 5 \sin \left(x_{i}^{3} \right) \right] \\
\end{cases}
</math>
||
||<math>-5\le x_{i} \le 5</math>, <math>1\le i \le 3</math>.
|-
| Schaffer function N. 1:<ref name="Schaffer:1984">{{cite book |last=Schaffer |first=J. David |date=1984 |chapter=Multiple Objective Optimization with Vector Evaluated Genetic Algorithms |title=Proceedings of the First International Conference on Genetic Algorithms |editor1=G.J.E Grefensette |editor2=J.J. Lawrence Erlbraum |oclc=20004572 }}</ref>
|| [[File:Schaffer function 1.pdf|200px|Schaffer function N.1]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x\right) & = x^{2} \\
f_{2}\left(x\right) & = \left(x-2\right)^{2} \\
\end{cases}
</math>
||
|| <math>-A\le x \le A</math>. Values of <math>A</math> from <math>10</math> to <math>10^{5}</math> have been used successfully. Higher values of <math>A</math> increase the difficulty of the problem.
|-
| Schaffer function N. 2:
||
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x\right) & = \begin{cases}
-x, & \text{if } x \le 1 \\
x-2, & \text{if } 1 < x \le 3 \\
Line 259 ⟶ 333:
x-4, & \text{if } x > 4 \\
\end{cases} \\
f_{2}\left(x\right) & = \left(x-5\right)^{2} \\
\end{cases}
</math>
||
|| <math>-5\le x \le 10</math>
|-
| Poloni's two objective function:
|| [[File:Poloni's two objective function.pdf|200px|Poloni's two objective function]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x,y\right) & = \left[1 + \left(A_{1} - B_{1}\left(x,y\right) \right)^{2} + \left(A_{2} - B_{2}\left(x,y\right) \right)^{2} \right] \\
f_{2}\left(x,y\right) & = \left(x + 3\right)^{2} + \left(y + 1 \right)^{2} \\
\end{cases}
</math>
<math>\text{where} =
\begin{cases}
A_{1} & = 0.5 \sin \left(1\right) - 2 \cos \left(1\right) + \sin \left(2\right) - 1.5 \cos \left(2\right) \\
A_{2} & = 1.5 \sin \left(1\right) - \cos \left(1\right) + 2 \sin \left(2\right) - 0.5 \cos \left(2\right) \\
B_{1}\left(x,y\right) & = 0.5 \sin \left(x\right) - 2 \cos \left(x\right) + \sin \left(y\right) - 1.5 \cos \left(y\right) \\
B_{2}\left(x,y\right) & = 1.5 \sin \left(x\right) - \cos \left(x\right) + 2 \sin \left(y\right) - 0.5 \cos \left(y\right)
\end{cases}
</math>
||
||<math>-\pi\le x,y \le \pi</math>
|-
| Zitzler–Deb–Thiele's function N. 1:<ref name="Debetal2002testpr">{{cite book |last1=Deb |first1=Kalyan |last2=Thiele |first2=L. |last3=Laumanns |first3=Marco |last4=Zitzler |first4=Eckart |title=Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600) |chapter=Scalable multi-objective optimization test problems |date=2002 |volume=1 |pages=825–830 |doi=10.1109/CEC.2002.1007032|isbn=0-7803-7282-4 |s2cid=61001583 }}</ref>
|| [[File:Zitzler-Deb-Thiele's function 1.pdf|200px|Zitzler-Deb-Thiele's function N.1]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(\boldsymbol{x}\right) & = x_{1} \\
f_{2}\left(\boldsymbol{x}\right) & = g\left(\boldsymbol{x}\right) h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) \\
g\left(\boldsymbol{x}\right) & = 1 + \frac{9}{29} \sum_{i=2}^{30} x_{i} \\
h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) & = 1 - \sqrt{\frac{f_{1}\left(\boldsymbol{x}\right)}{g\left(\boldsymbol{x}\right)}} \\
\end{cases}
</math>
||
||<math>0\le x_{i} \le 1</math>, <math>1\le i \le 30</math>.
|-
| Zitzler–Deb–Thiele's function N. 2:<ref name="Debetal2002testpr" />
|| [[File:Zitzler-Deb-Thiele's function 2.pdf|200px|Zitzler-Deb-Thiele's function N.2]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(\boldsymbol{x}\right) & = x_{1} \\
f_{2}\left(\boldsymbol{x}\right) & = g\left(\boldsymbol{x}\right) h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) \\
g\left(\boldsymbol{x}\right) & = 1 + \frac{9}{29} \sum_{i=2}^{30} x_{i} \\
h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) & = 1 - \left(\frac{f_{1}\left(\boldsymbol{x}\right)}{g\left(\boldsymbol{x}\right)}\right)^{2} \\
\end{cases}
</math>
||
|| <math>0\le x_{i} \le 1</math>, <math>1\le i \le 30</math>.
|-
| Zitzler–Deb–Thiele's function N. 3:<ref name="Debetal2002testpr" />
|| [[File:Zitzler-Deb-Thiele's function 3.pdf|200px|Zitzler-Deb-Thiele's function N.3]]
||<math>\text{Minimize} =
\begin{cases}
f_{1}\left(\boldsymbol{x}\right) & = x_{1} \\
f_{2}\left(\boldsymbol{x}\right) & = g\left(\boldsymbol{x}\right) h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) \\
g\left(\boldsymbol{x}\right) & = 1 + \frac{9}{29} \sum_{i=2}^{30} x_{i} \\
h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) & = 1 - \sqrt{\frac{f_{1}\left(\boldsymbol{x}\right)}{g\left(\boldsymbol{x} \right)}} - \left(\frac{f_{1}\left(\boldsymbol{x}\right)}{g\left(\boldsymbol{x}\right)} \right) \sin \left(10 \pi f_{1} \left(\boldsymbol{x} \right) \right)
\end{cases}
</math>
||
||<math>0\le x_{i} \le 1</math>, <math>1\le i \le 30</math>.
|-
| Zitzler–Deb–Thiele's function N. 4:<ref name="Debetal2002testpr" />
|| [[File:Zitzler-Deb-Thiele's function 4.pdf|200px|Zitzler-Deb-Thiele's function N.4]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(\boldsymbol{x}\right) & = x_{1} \\
f_{2}\left(\boldsymbol{x}\right) & = g\left(\boldsymbol{x}\right) h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) \\
g\left(\boldsymbol{x}\right) & = 91 + \sum_{i=2}^{10} \left(x_{i}^{2} - 10 \cos \left(4 \pi x_{i}\right) \right) \\
h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) & = 1 - \sqrt{\frac{f_{1}\left(\boldsymbol{x}\right)}{g\left(\boldsymbol{x} \right)}}
\end{cases}
</math>
||
||<math>0\le x_{1} \le 1</math>, <math>-5\le x_{i} \le 5</math>, <math>2\le i \le 10</math>
|-
| Zitzler–Deb–Thiele's function N. 6:<ref name="Debetal2002testpr" />
|| [[File:Zitzler-Deb-Thiele's function 6.pdf|200px|Zitzler-Deb-Thiele's function N.6]]
||<math>\text{Minimize} =
\begin{cases}
f_{1}\left(\boldsymbol{x}\right) & = 1 - \exp \left(-4x_{1}\right)\sin^{6}\left(6 \pi x_{1} \right) \\
f_{2}\left(\boldsymbol{x}\right) & = g\left(\boldsymbol{x}\right) h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) \\
g\left(\boldsymbol{x}\right) & = 1 + 9 \left[\frac{\sum_{i=2}^{10} x_{i}}{9}\right]^{0.25} \\
h \left(f_{1}\left(\boldsymbol{x}\right),g\left(\boldsymbol{x}\right)\right) & = 1 - \left(\frac{f_{1}\left(\boldsymbol{x}\right)}{g\left(\boldsymbol{x} \right)}\right)^{2} \\
\end{cases}
</math>
||
||<math>0\le x_{i} \le 1</math>, <math>1\le i \le 10</math>.
|-
| Osyczka and Kundu function:<ref name="OsyczkaKundu1995">{{cite journal |last1=Osyczka |first1=A. |last2=Kundu |first2=S. |title=A new method to solve generalized multicriteria optimization problems using the simple genetic algorithm |journal=Structural Optimization |date=1 October 1995 |volume=10 |issue=2 |pages=94–99 |doi=10.1007/BF01743536 |s2cid=123433499 |issn=1615-1488}}</ref>
|| [[File:Osyczka and Kundu function.pdf|200px|Osyczka and Kundu function]]
||<math>\text{Minimize} =
\begin{cases}
f_{1}\left(\boldsymbol{x}\right) & = -25 \left(x_{1}-2\right)^{2} - \left(x_{2}-2\right)^{2} - \left(x_{3}-1\right)^{2}
- \left(x_{4}-4\right)^{2} - \left(x_{5}-1\right)^{2} \\
f_{2}\left(\boldsymbol{x}\right) & = \sum_{i=1}^{6} x_{i}^{2} \\
\end{cases}
</math>
||<math>\text{s.t.} =
\begin{cases}
g_{1}\left(\boldsymbol{x}\right) & = x_{1} + x_{2} - 2 \geq 0 \\
g_{2}\left(\boldsymbol{x}\right) & = 6 - x_{1} - x_{2} \geq 0 \\
g_{3}\left(\boldsymbol{x}\right) & = 2 - x_{2} + x_{1} \geq 0 \\
g_{4}\left(\boldsymbol{x}\right) & = 2 - x_{1} + 3x_{2} \geq 0 \\
g_{5}\left(\boldsymbol{x}\right) & = 4 - \left(x_{3}-3\right)^{2} - x_{4} \geq 0 \\
g_{6}\left(\boldsymbol{x}\right) & = \left(x_{5} - 3\right)^{2} + x_{6} - 4 \geq 0
\end{cases}
</math>
|| <math>0\le x_{1},x_{2},x_{6} \le 10</math>, <math>1\le x_{3},x_{5} \le 5</math>, <math>0\le x_{4} \le 6</math>.
|-
| CTP1 function (2 variables):<ref name="Deb:2002"/><ref name="Jimenezetal2002">{{cite book |last1=Jimenez |first1=F. |last2=Gomez-Skarmeta |first2=A. F. |last3=Sanchez |first3=G. |last4=Deb |first4=K. |title=Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600) |chapter=An evolutionary algorithm for constrained multi-objective optimization |date=May 2002 |volume=2 |pages=1133–1138 |doi=10.1109/CEC.2002.1004402|isbn=0-7803-7282-4 |s2cid=56563996 }}</ref>
|| [[File:CTP1 function (2 variables).pdf|200px|CTP1 function (2 variables).<ref name="Deb:2002" />]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x,y\right) & = x \\
f_{2}\left(x,y\right) & = \left(1 + y\right) \exp \left(-\frac{x}{1+y} \right)
\end{cases}
</math>
||<math>\text{s.t.} =
\begin{cases}
g_{1}\left(x,y\right) & = \frac{f_{2}\left(x,y\right)}{0.858 \exp \left(-0.541 f_{1}\left(x,y\right)\right)} \geq 1 \\
g_{2}\left(x,y\right) & = \frac{f_{2}\left(x,y\right)}{0.728 \exp \left(-0.295 f_{1}\left(x,y\right)\right)} \geq 1
\end{cases}
</math>
||<math>0\le x,y \le 1</math>
|-
| Constr-Ex problem:<ref name="Deb:2002"/>
|| [[File:Constr-Ex problem.pdf|200px|Constr-Ex problem.<ref name="Deb:2002" />]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x,y\right) & = x \\
f_{2}\left(x,y\right) & = \frac{1 + y}{x} \\
\end{cases}
</math>
|| <math>\text{s.t.} =
\begin{cases}
g_{1}\left(x,y\right) & = y + 9x \geq 6 \\
g_{2}\left(x,y\right) & = -y + 9x \geq 1 \\
\end{cases}
</math>
|| <math>0.1\le x \le 1</math>, <math>0\le y \le 5</math>
|-
| Viennet function:
|| [[File:Viennet function.pdf|200px|Viennet function]]
|| <math>\text{Minimize} =
\begin{cases}
f_{1}\left(x,y\right) = 0.5\left(x^{2} + y^{2}\right) + \sin\left(x^{2} + y^{2} \right) \\
f_{2}\left(x,y\right) = \frac{\left(3x - 2y + 4\right)^{2}}{8} + \frac{\left(x - y + 1\right)^{2}}{27} + 15 \\
f_{3}\left(x,y\right) = \frac{1}{x^{2} + y^{2} + 1} - 1.1 \exp \left(- \left(x^{2} + y^{2} \right) \right) \\
\end{cases}
</math>
||
||<math>-3\le x,y \le 3</math>.
|}
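
Minimizing several objectives at once means searching not for a single optimum but for the set of non-dominated trade-offs, the Pareto front: feasible solutions that cannot be improved in one objective without worsening another. As a minimal sketch (a naive random sampler, not one of the referenced evolutionary algorithms such as NSGA-II), the following Python code approximates the Pareto front of the Binh and Korn problem by filtering feasible samples for non-dominated objective vectors:

<syntaxhighlight lang="python">
import random


def binh_korn(x, y):
    """Return the two objective values and feasibility for Binh and Korn."""
    f1 = 4.0 * x ** 2 + 4.0 * y ** 2
    f2 = (x - 5.0) ** 2 + (y - 5.0) ** 2
    feasible = ((x - 5.0) ** 2 + y ** 2 <= 25.0
                and (x - 8.0) ** 2 + (y + 3.0) ** 2 >= 7.7)
    return (f1, f2), feasible


def dominates(a, b):
    """a dominates b if a is no worse in both objectives and not equal to b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b


rng = random.Random(0)
samples = []
for _ in range(2000):
    objs, ok = binh_korn(rng.uniform(0.0, 5.0), rng.uniform(0.0, 3.0))
    if ok:
        samples.append(objs)

# Keep only the non-dominated objective vectors: a sampled Pareto front.
front = [p for p in samples if not any(dominates(q, p) for q in samples)]
print(len(front))
</syntaxhighlight>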
 
== See also ==
{{commons category|Test functions (mathematical optimization)}}
* [[Himmelblau's function]]
* [[Rosenbrock function]]
* [[Rastrigin function]]
* [[Shekel function]]
 
==References==
 
<references/>
 
== External links ==
* [https://github.com/nathanrooy/landscapes landscapes]
 
{{DEFAULTSORT:Test functions for optimization}}
[[Category:Mathematical optimization]]
[[Category:Constraint programming]]
[[Category:Convex optimization]]
[[Category:Test functions for optimization| ]]
[[Category:Test items]]