'''Simulation-based optimization''' (also known as simply '''simulation optimization''') integrates [[optimization (mathematics)|optimization]] techniques into [[computer simulation|simulation]] modeling and analysis. Because of the complexity of the simulation, the [[objective function]] may become difficult and expensive to evaluate. Usually, the underlying simulation model is stochastic, so that the objective function must be estimated using statistical estimation techniques (called output analysis in simulation methodology).
Once a system is mathematically modeled, computer-based simulations provide information about its behavior. In a simulation experiment, the goal is to evaluate the effect of different values of input variables on a system. Specific simulation-based optimization methods can be chosen according to Fig. 1, based on the decision variable types.
[[File:Slide1 1.jpg|thumb|Fig.1 Classification of simulation based optimization according to variable types]]
[[Optimization (computer science)|Optimization]] exists in two main branches of [[operations research]]:
''Optimization [[Parametric programming|parametric]] (static)'' – The objective is to find the values of the parameters, which are "static" for all states, with the goal of maximizing or minimizing a function. In this case, one can use mathematical programming, such as [[linear programming]]. Simulation helps when the parameters contain noise, or when evaluating the problem analytically would demand excessive computer time because of its complexity.
''Optimization [[Optimal control|control]] (dynamic)'' – This is used largely in [[computer science]] and [[electrical engineering]]. The optimal control is computed per state, and the result changes in each state. One can use mathematical programming as well as dynamic programming. In this scenario, simulation can generate random samples and solve complex, large-scale problems.
== Simulation-based optimization methods ==
The main classes of simulation-based optimization methods are discussed below:
=== Response surface methodology (RSM) ===
In [[response surface methodology]], the objective is to find the relationship between the input variables and the response variables. The process starts by trying to fit a linear regression model. If the ''p''-value turns out to be low, then a higher-degree polynomial regression, which is usually quadratic, will be implemented. The process of finding a good relationship between input and response variables is repeated for each simulation test. In simulation optimization, the response surface method can be used to find the input variables that produce desired outcomes in terms of the response variables.<ref>Rahimi Mazrae Shahi, M., Fallah Mehdipour, E. and Amiri, M. (2016), [https://onlinelibrary.wiley.com/doi/abs/10.1111/itor.12150 Optimization using simulation and response surface methodology with an application on subway train scheduling]. Intl. Trans. in Op. Res., 23: 797–811. doi:10.1111/itor.12150</ref>
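The following minimal sketch illustrates the RSM idea; the function <code>simulate</code> is a hypothetical stand-in for a stochastic simulation with one input variable. A quadratic response surface is fitted to noisy simulation outputs by least squares, and its stationary point is taken as the estimated best input.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    """Hypothetical stochastic simulation: noisy response to input x."""
    return (x - 2.0) ** 2 + rng.normal(scale=0.5)

# Run the simulation experiment over a design of input points.
design = np.linspace(0.0, 4.0, 9)
responses = np.array([simulate(x) for x in design])

# Fit a quadratic response surface y ~ b2*x^2 + b1*x + b0 by least squares.
b2, b1, b0 = np.polyfit(design, responses, deg=2)

# If the fitted surface is convex, its stationary point estimates the best input.
if b2 > 0:
    x_best = -b1 / (2.0 * b2)
    print("estimated best input:", x_best)
</syntaxhighlight>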
=== Statistical ranking and selection methods (R/S) ===
Ranking and selection methods are designed for problems where the alternatives are fixed and known, and simulation is used to estimate the system performance.
In the simulation optimization setting, applicable methods include indifference-zone approaches, optimal computing budget allocation, and knowledge gradient algorithms.<ref name=Fu>{{cite book|editor-last=Fu|editor-first=Michael|title=Handbook of Simulation Optimization|publisher=Springer|year=2015|url=https://www.springer.com/us/book/9781493913831}}</ref>
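As an illustration of the ranking-and-selection setting, the following simplified sketch (not a specific indifference-zone or OCBA procedure; the alternatives and <code>simulate</code> are hypothetical) spreads a fixed simulation budget equally over the alternatives and selects the one with the best estimated mean. Indifference-zone and budget-allocation procedures refine how this budget is distributed.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical alternatives: true (unknown) mean costs of each system design.
TRUE_MEANS = [5.0, 4.2, 4.5, 6.1]

def simulate(i):
    """One noisy simulation replication of alternative i (lower is better)."""
    return TRUE_MEANS[i] + rng.normal(scale=1.0)

n_reps = 50  # equal allocation of the simulation budget
estimates = [np.mean([simulate(i) for _ in range(n_reps)])
             for i in range(len(TRUE_MEANS))]

best = int(np.argmin(estimates))
print("selected alternative:", best, "estimated mean:", estimates[best])
</syntaxhighlight>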
=== Heuristic methods ===
[[Heuristic (computer science)|Heuristic methods]] trade accuracy for speed. Their goal is to find a good solution faster than traditional methods, when those are too slow or fail to solve the problem. Usually they find a local optimum instead of the global one; however, the result is considered close enough to the final solution. Examples of this kind of method include [[tabu search]] and [[genetic algorithm]]s.
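The following is a minimal genetic-algorithm sketch for a simulation objective; the fitness function <code>simulate</code> is a hypothetical stand-in. A population of candidate inputs is evolved by selection, crossover, and mutation, trading guaranteed optimality for speed.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

def simulate(x):
    """Hypothetical noisy simulation output to be minimized."""
    return np.sum((x - 1.5) ** 2) + rng.normal(scale=0.1)

pop = rng.uniform(-5, 5, size=(20, 2))          # initial population
for generation in range(50):
    fitness = np.array([simulate(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]     # selection: keep best half
    # Crossover: average randomly paired parents.
    idx = rng.integers(0, 10, size=(10, 2))
    children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2.0
    children += rng.normal(scale=0.2, size=children.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([simulate(ind) for ind in pop])]
print("best input found:", best)
</syntaxhighlight>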
Metamodels enable researchers to obtain reliable approximate model outputs without running expensive and time-consuming computer simulations. Therefore, the process of model optimization can take less computation time and cost.<ref>{{Cite journal|last=Yousefi|first=Milad|last2=Yousefi|first2=Moslem|last3=Ferreira|first3=Ricardo Poley Martins|last4=Kim|first4=Joong Hoon|last5=Fogliatto|first5=Flavio S.|title=Chaotic genetic algorithm and Adaboost ensemble metamodeling approach for optimum resource planning in emergency departments|journal=Artificial Intelligence in Medicine|volume=84|pages=23–33|doi=10.1016/j.artmed.2017.10.002|pmid=29054572|year=2018}}</ref>
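A sketch of metamodel-based optimization under the same assumptions (<code>simulate</code> is again a hypothetical expensive simulation): a cheap polynomial metamodel is trained on a small number of simulation runs, and the optimization is then carried out on the metamodel rather than on the simulation itself.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

def simulate(x):
    """Hypothetical expensive stochastic simulation (1 input, 1 output)."""
    return np.sin(x) + 0.1 * x ** 2 + rng.normal(scale=0.05)

# A small number of expensive simulation runs...
x_train = np.linspace(-3, 3, 15)
y_train = np.array([simulate(x) for x in x_train])

# ...is used to train a cheap polynomial metamodel.
metamodel = np.poly1d(np.polyfit(x_train, y_train, deg=4))

# Optimization is then performed on the metamodel, not the simulation.
x_grid = np.linspace(-3, 3, 1001)
x_best = x_grid[np.argmin(metamodel(x_grid))]
print("metamodel minimizer:", x_best)
</syntaxhighlight>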
=== Stochastic approximation ===
[[Stochastic approximation]] is used when the function cannot be computed directly, but only estimated via noisy observations.<ref>Spall, J.C. (2003). ''Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control''. Hoboken: Wiley.</ref> In this scenario, the method looks for the extrema of such a function. The objective function has the form:
:<math>\min_{x\in\theta} f(x) = \min_{x\in\theta} \operatorname{E}[F(x,y)]</math>
:<math>y</math> is a random variable that represents the noise.
:<math>x</math> is the parameter that minimizes <math>f\bigl(\text{x}\bigr)</math>.
:<math>\theta</math> is the ___domain of the parameter <math>x</math>.
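The following sketch shows a Kiefer–Wolfowitz-type stochastic approximation iteration for this formulation; the noisy oracle <code>F</code> is a hypothetical example. The gradient of <math>f</math> is estimated from finite differences of noisy evaluations, with gain and difference sequences that shrink over the iterations.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

def F(x):
    """Noisy observation F(x, y) of the unknown f(x); the noise y is implicit."""
    return (x - 1.0) ** 2 + rng.normal(scale=0.1)

x = 5.0
for k in range(1, 201):
    a_k = 1.0 / k            # gain sequence (step size)
    c_k = 1.0 / k ** 0.25    # finite-difference width
    grad = (F(x + c_k) - F(x - c_k)) / (2.0 * c_k)  # noisy gradient estimate
    x -= a_k * grad          # stochastic approximation update

print("estimated minimizer:", x)
</syntaxhighlight>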
=== Derivative-free optimization methods ===
[[Derivative-free optimization]] methods build a model from sampled function values, or sample function values directly, without using derivative information. For unconstrained optimization problems, the problem has the form:
:<math>\min_{x\in\R^n} f(x)</math>
The limitations of derivative-free optimization are listed below; a usage sketch follows the list.
1. Some methods cannot handle optimization problems with more than a few variables, and the results are often not very accurate. However, there are numerous practical cases where derivative-free methods have been successful in non-trivial simulation optimization problems that include randomness manifesting as "noise" in the objective function; see, for example, the following.<ref name=Fu/><ref>Fu, M.C., Hill, S.D. Optimization of discrete event systems via simultaneous perturbation stochastic approximation. ''IIE Transactions'' 29, 233–243 (1997). https://doi.org/10.1023/A:1018523313043</ref>
2. When confronted with minimizing non-convex functions, these methods show their limitations.
3. Derivative-free optimization methods can require many function evaluations, which is costly when each evaluation is an expensive simulation run.
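As a usage sketch, the Nelder–Mead method (a classical derivative-free method, available in SciPy) can be applied directly to a noisy simulation objective; <code>simulate</code> below is a hypothetical stand-in.
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def simulate(x):
    """Hypothetical noisy simulation objective; no derivatives available."""
    return float(np.sum((x - 0.5) ** 2)) + rng.normal(scale=0.01)

# Nelder-Mead uses only sampled function values, never derivatives.
result = minimize(simulate, x0=np.array([3.0, -2.0]), method="Nelder-Mead")
print("approximate minimizer:", result.x)
</syntaxhighlight>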
=== Dynamic programming and neuro-dynamic programming ===
==== Dynamic programming ====
[[Dynamic programming]] deals with situations where decisions are made in stages. The key to this kind of problem is to trade off present and future costs.
A basic dynamic model has two features:
1) It has a discrete-time dynamic system.
2) The cost function is additive over time.
For discrete-time problems, the system equation has the form:
:<math>x_{k+1} = f_k(x_{k},u_{k},w_{k}) , k=0,1,...,N-1</math>
:<math>k</math> represents the index of discrete time.
:<math>x_k</math> is the state at time <math>k</math>; it contains the past information and prepares it for future optimization.
:<math>u_k</math> is the control variable.
:<math>w_k</math> is the random parameter.
The total cost over the horizon has the form:
:<math>g_N(x_N) + \sum_{k=0}^{N-1} g_k(x_k,u_k,w_k)</math>
:<math>g_N(x_N)</math> is the cost at the end of the process.
Because <math>w_k</math> is random, the cost as written cannot be optimized meaningfully; its expected value is used instead:
:<math>E\left\{g_N(x_N) + \sum_{k=0}^{N-1} g_k(x_k,u_k,w_k)\right\}</math>
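For small discrete state, control, and noise spaces, this expected cost can be minimized exactly by backward recursion. The following sketch uses toy values (all of them illustrative assumptions) and computes the expected optimal cost-to-go from stage <math>N-1</math> back to stage 0.
<syntaxhighlight lang="python">
# Toy finite-horizon model (all values are illustrative assumptions).
N = 3
states = [0, 1, 2]
controls = [0, 1]
noises = [(-1, 0.5), (1, 0.5)]            # (w_k value, probability)

def f(x, u, w):                           # system equation x_{k+1} = f_k(x,u,w)
    return max(0, min(2, x + u + w))

def g(x, u, w):                           # stage cost g_k(x,u,w)
    return x ** 2 + u

def g_N(x):                               # terminal cost g_N(x_N)
    return x ** 2

# Backward recursion: J_k(x) = min_u E[ g_k(x,u,w) + J_{k+1}(f(x,u,w)) ].
J = {x: g_N(x) for x in states}
for k in reversed(range(N)):              # costs are stage-invariant in this toy
    J = {x: min(sum(p * (g(x, u, w) + J[f(x, u, w)]) for w, p in noises)
                for u in controls)
         for x in states}

print("expected optimal cost from each initial state:", J)
</syntaxhighlight>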
==== Neuro-dynamic programming ====
Neuro-dynamic programming is the same as dynamic programming except that the former has the concept of approximation architectures. It combines [[artificial intelligence]], simulation-based algorithms, and function approximation techniques. The term “neuro” originates from the artificial intelligence community; it refers to learning how to make improved future decisions via a built-in mechanism based on current behavior. The most important part of neuro-dynamic programming is building a trained neural network for the optimization problem.<ref>Van Roy, B., Bertsekas, D., Lee, Y., & [[John Tsitsiklis|Tsitsiklis, J.]] (1997). [https://web.stanford.edu/~bvr/pubs/retail.pdf Neuro-dynamic programming approach to retailer inventory management]. ''Proceedings of the IEEE Conference on Decision and Control,'' ''4'', 4052-4057.</ref>
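A rough sketch of this idea is shown below, with all model details assumed: the cost-to-go function is represented not by a table but by a trained approximation architecture (here a simple polynomial regression standing in for a neural network), refitted to targets generated from simulated transitions.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)

def step(x, u):
    """Hypothetical simulated transition: next state and stage cost."""
    x_next = np.clip(x + u + rng.normal(scale=0.3), -5, 5)
    return x_next, x ** 2 + 0.1 * u ** 2

controls = [-1.0, 0.0, 1.0]
coeffs = np.zeros(3)                      # quadratic approximation of J(x)

for sweep in range(20):                   # fitted value iteration sweeps
    xs = rng.uniform(-5, 5, size=200)
    targets = []
    for x in xs:
        # Bellman target: best expected one-step cost plus approximate J;
        # 0.9 is an assumed discount factor.
        q = [np.mean([c + 0.9 * np.polyval(coeffs, xn)
                      for xn, c in [step(x, u) for _ in range(10)]])
             for u in controls]
        targets.append(min(q))
    coeffs = np.polyfit(xs, targets, deg=2)   # retrain the architecture

print("approximate cost-to-go coefficients:", coeffs)
</syntaxhighlight>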
== Limitations ==
Simulation-based optimization integrates optimization techniques into simulation analysis, but it inherits the cost of simulation: because the underlying model is usually stochastic, the objective function must be estimated from repeated simulation runs, each of which may be computationally expensive, and the resulting estimates carry statistical noise. This limits both the number of candidate solutions that can be examined and the precision with which they can be compared.
==References==
{{Reflist}}