'''Simulation-based optimization''' (also known as simply '''simulation optimization''') integrates [[optimization (mathematics)|optimization]] techniques into [[computer simulation|simulation]] modeling and analysis. Because of the complexity of the simulation, the [[objective function]] may become difficult and expensive to evaluate. Usually, the underlying simulation model is stochastic, so that the objective function must be estimated using statistical estimation techniques (called output analysis in simulation methodology).
Once a system is mathematically modeled, computer-based simulations provide information about its behavior. Parametric simulation methods can be used to improve the performance of a system. In this method, the input of each variable is varied while the other parameters remain constant, and the effect on the design objective is observed. This is a time-consuming method and improves the performance only partially. To obtain the optimal solution with minimum computation and time, the problem is solved iteratively, where in each iteration the solution moves closer to the optimum solution. Such methods are known as 'numerical optimization' or 'simulation-based optimization'.<ref>Nguyen, Anh-Tuan, Sigrid Reiter, and Philippe Rigo. "[https://orbi.uliege.be/bitstream/2268/155988/1/Nguyen%20AT.pdf A review on simulation-based optimization methods applied to building performance analysis]." ''Applied Energy'' 113 (2014): 1043–1058.</ref> When more than one objective is involved, the term 'simulation-based multi-objective optimization' is used.
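The contrast between a one-at-a-time parametric study and iterative simulation-based optimization can be sketched as follows. The `simulate` function below is a hypothetical stand-in for an expensive stochastic simulation, and the step size, replication counts, and iteration budget are illustrative assumptions:

```python
import random

random.seed(0)  # reproducible sketch

def simulate(thickness, area):
    """Hypothetical stochastic simulation of a design's cost;
    stands in for an expensive simulation run."""
    noise = random.gauss(0, 0.5)
    return (thickness - 3.0) ** 2 + (area - 10.0) ** 2 / 4 + 20 + noise

# Parametric (one-at-a-time) study: vary one input, hold the other fixed.
best_thickness = min(range(1, 7), key=lambda t: simulate(t, area=12))

# Iterative simulation optimization: update both inputs together,
# accepting a candidate only when its simulated cost is lower.
x = [1.0, 12.0]
step = 0.5
for _ in range(200):
    candidate = [x[0] + random.uniform(-step, step),
                 x[1] + random.uniform(-step, step)]
    # Average a few replications to damp the stochastic noise.
    if sum(simulate(*candidate) for _ in range(5)) < sum(simulate(*x) for _ in range(5)):
        x = candidate
```

The one-at-a-time study only finds a good thickness for one fixed area, while the iterative search moves both inputs toward the optimum with far fewer runs than exhausting every combination.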
In simulation experiments, the goal is to evaluate the effect of different values of input variables on a system. However, the interest is sometimes in finding the optimal values for the input variables in terms of the system outcomes. One way could be running simulation experiments for all possible input variables. However, this approach is not always practical, because it is intractable to run an experiment for each scenario. For example, there might be too many possible values for the input variables, or the simulation model might be too complicated and expensive to run for a large set of input variable values. In these cases, the goal is to iteratively find optimal values for the input variables rather than trying all possible values. This process is called simulation optimization.<ref>Carson, Yolanda, and Anu Maria. "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24.9192&rep=rep1&type=pdf Simulation optimization: methods and applications]." ''Proceedings of the 29th Winter Simulation Conference''. IEEE Computer Society, 1997.</ref>
== Simulation-based optimization methods ==
Simulation-based optimization methods can be categorized into the following groups:<ref name=Fu>{{cite book|editor-last=Fu|editor-first=Michael|title=Handbook of Simulation Optimization|publisher=Springer|year=2015|url=https://www.springer.com/us/book/9781493913831}}</ref><ref>Deng, G. (2007). ''Simulation-based optimization'' (Doctoral dissertation, University of Wisconsin–Madison).</ref>
* Response surface methodology (constructing a surrogate model to approximate the underlying function <math display="inline">f</math>)
* Heuristic methods (three most popular methods: [[genetic algorithm]]s, [[tabu search]], and [[simulated annealing]])
* [[Stochastic approximation]] (a category of [[Gradient descent|gradient-based]] approaches)
* Derivative-free optimization methods
* [[Dynamic programming]] and neuro-dynamic programming
Specific simulation-based optimization methods can be chosen according to Figure 1 based on the decision variable types.<ref>Jalali, Hamed, and Inneke Van Nieuwenhuyse. "[https://core.ac.uk/download/pdf/34623919.pdf Simulation optimization in inventory replenishment: a classification]." ''IIE Transactions'' 47.11 (2015): 1217–1235.</ref>
== Application ==
[[File:Slide1.jpg|thumb|Fig. 1. Classification of simulation based optimization according to variable types]]
Simulation-based optimization is an important subject in various areas such as chemical engineering, civil engineering, and petroleum engineering. An important application is optimizing the locations of oil wells in hydrocarbon reservoirs.<ref>{{cite journal | doi = 10.2118/173219-PA | title = Closed-loop field development under uncertainty using optimization with sample validation | journal=SPE Journal|volume=20 |issue=5 |pages=0908–0922}}</ref>
[[Optimization (computer science)|Optimization]] exists in two main branches of operations research:
''Optimization [[Parametric programming|parametric]] (static)'' – The objective is to find the values of the parameters, which are “static” for all states, with the goal of maximizing or minimizing a function. In this case, one can use [[mathematical programming]], such as [[linear programming]]. In this scenario, simulation helps when the parameters contain noise or the evaluation of the problem would demand excessive computer time, due to its complexity.<ref name=":0" />
''Optimization [[Optimal control|control]] (dynamic)'' – This is used largely in [[computer science]] and [[electrical engineering]]. The optimal control is per state and the results change in each of them. One can use mathematical programming, as well as dynamic programming. In this scenario, simulation can generate random samples and solve complex and large-scale problems.<ref name=":0">Abhijit Gosavi, [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.462.5587&rep=rep1&type=pdf Simulation‐Based Optimization: Parametric Optimization Techniques and Reinforcement Learning], Springer, 2nd Edition (2015)</ref>
Some important approaches in simulation-based optimization are discussed below.<ref>Spall, J.C. (2003). ''Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control''. Hoboken: Wiley.</ref>
=== Statistical ranking and selection methods (R/S) ===
Ranking and selection methods are designed for problems where the alternatives are fixed and known, and simulation is used to estimate the system performance.
In the simulation optimization setting, applicable methods include indifference zone approaches, optimal computing budget allocation, and knowledge gradient algorithms.
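A minimal two-stage sketch of the ranking-and-selection idea follows. The alternatives, their true means, the first-stage sample size, and the indifference-zone width `delta` are invented for illustration; production procedures such as optimal computing budget allocation distribute the simulation budget more carefully:

```python
import random
import statistics

random.seed(1)

def simulate(alternative):
    """Hypothetical stochastic performance of each fixed, known design
    (lower is better); the true means are hidden from the procedure."""
    true_means = {"A": 10.0, "B": 9.2, "C": 9.0}
    return random.gauss(true_means[alternative], 1.0)

n0, delta = 20, 0.5  # first-stage replications and indifference-zone width
samples = {a: [simulate(a) for _ in range(n0)] for a in "ABC"}

# Second stage: spend extra replications only on alternatives still in
# contention, i.e. within delta of the current best sample mean.
best_mean = min(statistics.mean(s) for s in samples.values())
for a, s in samples.items():
    if statistics.mean(s) <= best_mean + delta:
        s.extend(simulate(a) for _ in range(80))

best = min(samples, key=lambda a: statistics.mean(samples[a]))
```

Clearly inferior alternatives are screened out after the first stage, so most of the budget goes to separating the close contenders.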
==== [[Response surface methodology]] (RSM) ====
In [[response surface methodology]], the objective is to find the relationship between the input variables and the response variables. The process starts by trying to fit a linear regression model. If the P-value turns out to be low, then a higher-degree polynomial regression, which is usually quadratic, will be implemented. The process of finding a good relationship between input and response variables will be done for each simulation test. In simulation optimization, the response surface method can be used to find the best input variables that produce desired outcomes in terms of response variables.<ref>Rahimi Mazrae Shahi, M., Fallah Mehdipour, E. and Amiri, M. (2016), [https://onlinelibrary.wiley.com/doi/abs/10.1111/itor.12150 Optimization using simulation and response surface methodology with an application on subway train scheduling]. Intl. Trans. in Op. Res., 23: 797–811. {{doi|10.1111/itor.12150}}</ref>
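The second-order stage of this procedure can be sketched as follows: fit a quadratic surface to replicated simulation runs at a few design points, then take the stationary point of the fitted surface as the estimated best input. The simulation response, design points, and replication count are illustrative assumptions:

```python
import random
import statistics

random.seed(2)

def simulate(x):
    """Hypothetical noisy simulation response with an optimum near x = 2."""
    return (x - 2.0) ** 2 + 5.0 + random.gauss(0, 0.2)

# Replicated runs at five design points.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [statistics.mean(simulate(x) for _ in range(10)) for x in xs]

# Least-squares fit of the second-order model y = a*x^2 + b*x + c
# via the 3x3 normal equations, solved by Gaussian elimination.
A = [[sum(x ** (i + j) for x in xs) for j in (2, 1, 0)] for i in (2, 1, 0)]
rhs = [sum(y * x ** i for x, y in zip(xs, ys)) for i in (2, 1, 0)]
for col in range(3):
    for row in range(col + 1, 3):
        f = A[row][col] / A[col][col]
        A[row] = [r - f * c for r, c in zip(A[row], A[col])]
        rhs[row] -= f * rhs[col]
coef = [0.0, 0.0, 0.0]
for row in (2, 1, 0):
    coef[row] = (rhs[row] - sum(A[row][k] * coef[k]
                                for k in range(row + 1, 3))) / A[row][row]
a, b, c = coef

# Stationary point of the fitted surface estimates the best input.
x_star = -b / (2 * a)
```

In a full RSM study the quadratic stage would only be reached after a linear model indicates curvature, and the design points would then be re-centered around `x_star`.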
==== [[Heuristic (computer science)|Heuristic methods]] ====
[[Heuristic (computer science)|Heuristic methods]] trade accuracy for speed. Their goal is to find a good solution faster than traditional methods, when those are too slow or fail to solve the problem. Usually they find a local optimum instead of the global optimum; however, the values are considered close enough to the final solution. Examples of these kinds of methods include [[tabu search]] and [[genetic algorithm]]s.<ref name=":0" />
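A minimal sketch of one such heuristic, [[simulated annealing]], on a noisy objective with several local minima. The objective, cooling schedule, and proposal width are illustrative assumptions:

```python
import math
import random

random.seed(3)

def simulate(x):
    """Hypothetical noisy objective with several local minima."""
    return x * x + 10 * math.sin(x) + random.gauss(0, 0.1)

x, temp = 6.0, 5.0
for _ in range(2000):
    candidate = x + random.uniform(-1, 1)
    delta = simulate(candidate) - simulate(x)
    # Accept improvements always; accept worse moves with a probability
    # that shrinks as the temperature cools, allowing escapes from
    # local minima early in the search.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.995
```

The early high-temperature phase lets the search jump between basins; the late low-temperature phase behaves like a local descent, so the final point is a good but not guaranteed-optimal solution.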
Metamodels enable researchers to obtain reliable approximate model outputs without running expensive and time-consuming computer simulations. Therefore, the process of model optimization can take less computation time and cost.<ref>{{Cite journal|last=Yousefi|first=Milad|last2=Yousefi|first2=Moslem|last3=Ferreira|first3=Ricardo Poley Martins|last4=Kim|first4=Joong Hoon|last5=Fogliatto|first5=Flavio S.|title=Chaotic genetic algorithm and Adaboost ensemble metamodeling approach for optimum resource planning in emergency departments|journal=Artificial Intelligence in Medicine|volume=84|pages=23–33|doi=10.1016/j.artmed.2017.10.002|pmid=29054572|year=2018}}</ref>
==== [[Stochastic approximation]] ====
[[Stochastic approximation]] is used when the function cannot be computed directly, only estimated via noisy observations. In these scenarios, this method (or family of methods) looks for the extrema of the function. The objective function would be:<ref>Powell, W. (2011). ''Approximate Dynamic Programming: Solving the Curses of Dimensionality'' (2nd ed., Wiley Series in Probability and Statistics). Hoboken: Wiley.</ref>
:<math>\min_{x\in\theta} f(x) = \min_{x\in\theta} \operatorname{E}[F(x,y)]</math>
:<math>y</math> is a random variable that represents the noise.
▲==== [[Stochastic approximation]] ====
▲Stochastic approximation is used when the function cannot be computed directly, only estimated via noisy observations. In this scenarios, this method (or family of methods) looks for the extrema of these function. The objective function would be:
:<math>x</math> is the parameter that minimizes <math>f(x)</math>.
:<math>\theta</math> is the ___domain of the parameter <math>x</math>.
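This recursion can be sketched in a Kiefer–Wolfowitz style, estimating the gradient of <math>f</math> from finite differences of noisy observations. The objective, noise level, and gain sequences below are illustrative assumptions (the gains are chosen to satisfy the usual stochastic-approximation step-size conditions):

```python
import random

random.seed(4)

def F(x):
    """Noisy observation F(x, y) of f(x) = (x - 3)^2; the additive
    Gaussian term plays the role of the random variable y."""
    return (x - 3.0) ** 2 + random.gauss(0, 1.0)

x = 0.0
for n in range(1, 5001):
    a_n = 2.0 / n            # step size, decreasing so the iterates settle
    c_n = 1.0 / n ** 0.25    # finite-difference width, shrinking more slowly
    # Central finite difference of two noisy observations estimates f'(x).
    gradient_estimate = (F(x + c_n) - F(x - c_n)) / (2 * c_n)
    x -= a_n * gradient_estimate
```

Each iteration uses only two simulation runs, and the decreasing step sizes average out the noise, so the iterate drifts toward the minimizer <math>x = 3</math>.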
==== [[Derivative-free optimization|Derivative-free optimization methods]] ====
[[Derivative-free optimization]] is a subject of mathematical optimization. These methods are applied to optimization problems whose derivatives are unavailable or unreliable. Derivative-free methods establish a model based on sample function values or directly draw a sample set of function values without exploiting a detailed model. Since no derivatives are needed, these methods cannot be compared to derivative-based methods.<ref>Conn, A. R.; [[Katya Scheinberg|Scheinberg, K.]]; [[Luis Nunes Vicente|Vicente, L. N.]] (2009). [http://www.mat.uc.pt/~lnv/idfo/ ''Introduction to Derivative-Free Optimization'']. MPS-SIAM Book Series on Optimization. Philadelphia: SIAM. Retrieved 2014-01-18.</ref>
For unconstrained optimization problems, it has the form:
:<math>\min_{x\in\R^n} f(x)</math>
The limitations of derivative-free optimization:
1. Some methods cannot handle optimization problems with more than a few variables, and the results are usually not very accurate. However, there are numerous practical cases where derivative-free methods have been successful in non-trivial simulation optimization problems that include randomness manifesting as "noise" in the objective function.<ref name=Fu/><ref>Fu, M.C., Hill, S.D. Optimization of discrete event systems via simultaneous perturbation stochastic approximation. ''IIE Transactions'' 29, 233–243 (1997). https://doi.org/10.1023/A:1018523313043</ref>
2. When confronted with minimizing non-convex functions, these methods show their limitations.
3. Derivative-free optimization methods are relatively simple and easy, but, like most optimization methods, some care is required in practical implementation (e.g., in choosing the algorithm parameters).
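Despite these limitations, the basic mechanics are simple. Below is a minimal compass (pattern) search sketch on a noisy simulation output, averaging replications before each comparison; the objective, replication count, and mesh-refinement rule are illustrative assumptions:

```python
import random
import statistics

random.seed(5)

def simulate(x1, x2):
    """Hypothetical noisy simulation output; derivatives are unavailable."""
    return (x1 - 1.0) ** 2 + (x2 + 2.0) ** 2 + random.gauss(0, 0.05)

def averaged(x1, x2, reps=10):
    """Average several replications to damp the simulation noise."""
    return statistics.mean(simulate(x1, x2) for _ in range(reps))

x, step = [0.0, 0.0], 1.0
while step > 1e-2:
    base = averaged(*x)
    # Poll the four compass directions; move on the first improvement.
    moved = False
    for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
        trial = [x[0] + dx, x[1] + dy]
        if averaged(*trial) < base:
            x, moved = trial, True
            break
    if not moved:
        step /= 2  # no improvement found: refine the mesh
```

Only sampled function values are compared, never gradients, which is exactly what makes the approach usable on black-box simulations.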
==== Dynamic programming and neuro-dynamic programming ====
===== [[Dynamic programming]] =====
[[Dynamic programming]] deals with situations where decisions are made in stages. The key to this kind of problem is to trade off the present and future costs.<ref>Cooper, Leon; Cooper, Mary W. ''Introduction to Dynamic Programming''. New York: Pergamon Press, 1981.</ref>
A basic dynamic model has two features:
1) It has a discrete-time dynamic system.
2) The cost function is additive over time.
For discrete features, dynamic programming has the form:
:<math>x_{k+1} = f_k(x_{k},u_{k},w_{k}) , k=0,1,...,N-1</math>
:<math>k</math> represents the index of discrete time.
:<math>x_k</math> is the state at time <math>k</math>; it contains the past information and prepares it for future optimization.
:<math>u_k</math> is the control variable.
:<math>w_k</math> is the random parameter.
The cost function has the form:
:<math>g_N(X_N) + \sum_{k=0}^{N-1} g_k(x_k,u_k,W_k)</math>
:<math>g_N(X_N)</math> is the cost at the end of the process.
As the cost cannot be optimized meaningfully, the expected value is used:
:<math>E\{g_N(X_N) + \sum_{k=0}^{N-1} g_k(x_k,u_k,W_k)\}</math>
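The backward-induction recursion behind this formulation can be sketched on a small hypothetical inventory problem; the horizon, capacity, cost coefficients, and demand distribution are invented for illustration:

```python
N, MAX_STOCK = 3, 4      # horizon and stock capacity
DEMANDS = (0, 1, 2)      # equally likely values of the random parameter w_k

def stage_cost(x, u, w):
    # Ordering cost + holding cost on leftover stock + shortage penalty.
    return 1.0 * u + 0.5 * max(x + u - w, 0) + 3.0 * max(w - x - u, 0)

def next_state(x, u, w):
    # Discrete-time dynamics x_{k+1} = f_k(x_k, u_k, w_k).
    return min(max(x + u - w, 0), MAX_STOCK)

# Backward induction: J[k][x] is the optimal expected cost-to-go from
# state x at time k, with terminal cost g_N = 0.
J = [[0.0] * (MAX_STOCK + 1) for _ in range(N + 1)]
policy = [[0] * (MAX_STOCK + 1) for _ in range(N)]
for k in range(N - 1, -1, -1):
    for x in range(MAX_STOCK + 1):
        best_u, best_cost = 0, float("inf")
        for u in range(MAX_STOCK - x + 1):  # feasible controls u_k
            expected = sum(stage_cost(x, u, w) + J[k + 1][next_state(x, u, w)]
                           for w in DEMANDS) / len(DEMANDS)
            if expected < best_cost:
                best_u, best_cost = u, expected
        policy[k][x], J[k][x] = best_u, best_cost
```

The trade-off between present and future costs appears directly in the minimized expression: `stage_cost` is the present cost and `J[k + 1][...]` is the expected future cost.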
===== Neuro-dynamic programming =====
Neuro-dynamic programming is the same as dynamic programming except that the former has the concept of approximation architectures. It combines [[artificial intelligence]], simulation-based algorithms, and functional approach techniques. “Neuro” in this term originates from the artificial intelligence community: it means learning how to make improved decisions for the future via a built-in mechanism based on the current behavior. The most important part of neuro-dynamic programming is to build a trained neural network for the optimization problem.<ref>Van Roy, B., Bertsekas, D., Lee, Y., & [[John Tsitsiklis|Tsitsiklis, J.]] (1997). [https://web.stanford.edu/~bvr/pubs/retail.pdf Neuro-dynamic programming approach to retailer inventory management]. ''Proceedings of the IEEE Conference on Decision and Control,'' ''4'', 4052–4057.</ref>
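The approximation-architecture idea can be sketched with a deliberately simple architecture: a single quadratic feature fitted by least squares to simulated returns of a toy Markov chain. In practice a neural network would replace this feature; the chain and its costs are invented for illustration:

```python
import random

random.seed(6)

# Toy Markov chain: from state x, a random amount of "work" is completed
# each step, and each visited state x incurs cost x until x reaches 0.
def rollout_cost(x):
    total = 0
    while x > 0:
        total += x
        x = max(x - random.choice((0, 1, 2)), 0)
    return total

# Approximation architecture: J_hat(x) = theta * x^2, with the single
# parameter trained by least squares on simulated returns.
data = [(x, rollout_cost(x)) for x in range(11) for _ in range(200)]
theta = sum(y * x * x for x, y in data) / sum(x ** 4 for x, _ in data)

def J_hat(x):
    """Approximate cost-to-go learned from simulation."""
    return theta * x * x
```

Instead of tabulating the cost-to-go for every state, the simulation experience is compressed into one trained parameter, which is the essential mechanism that lets neuro-dynamic programming scale to large state spaces.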
=== Limitations ===
Simulation-based optimization has some limitations, such as the difficulty of creating a model that imitates the dynamic behavior of a system in a way that is considered good enough for its representation. Another problem is the complexity in determining the uncontrollable parameters of both the real-world system and the simulation. Moreover, only a statistical estimation of the real values can be obtained. It is not easy to determine the objective function, since it is a result of measurements, which can be harmful to the solutions.<ref>Prasetio, Y. (2005). ''[https://elibrary.ru/item.asp?id=9387151 Simulation-based optimization for complex stochastic systems]''. University of Washington.</ref><ref>Deng, G., & Ferris, Michael. (2007). ''Simulation-based Optimization''. ProQuest Dissertations and Theses.</ref>
=== Examples ===
[[File:Simulation-based optimization for building performance studies.png|thumb|Fig 2. Simulation-based optimization for building performance studies]]
The literature presents many uses of simulation-based optimization. Nguyen et al.<ref>Nguyen, Anh-Tuan, Sigrid Reiter, and Philippe Rigo. "A review on simulation-based optimization methods applied to building performance analysis." ''Applied Energy'' 113 (2014): 1043–1058.</ref> discuss, for example, the use of simulation-based optimization to support the design of high-performance buildings, such as green buildings. Figure 2 presents their method in simplified form.
Saif et al.<ref>Saif, A., Ravikumar Pandi, V., Zeineldin, H., & Kennedy, S. (2013). Optimal allocation of distributed energy resources through simulation-based optimization. ''Electric Power Systems Research'', ''104'', 1–8.</ref> present another possible use of simulation-based optimization: allocating energy resources in an imperfect power distribution system in an optimal way, considering ___location and capacity.
==References==
{{reflist}}