{{Short description|Problem of finding the best feasible solution}}
{{Broader|Mathematical optimization}}
 
In [[mathematics]], [[engineering]], [[computer science]] and [[economics]], an '''optimization problem''' is the [[Computational problem|problem]] of finding the ''best'' solution from all [[feasible solution]]s.
 
Optimization problems can be divided into two categories, depending on whether the [[Variable (mathematics)|variables]] are [[continuous variable|continuous]] or [[discrete variable|discrete]]:
* An optimization problem with discrete variables is known as a ''[[discrete optimization]]'' problem, in which an [[Mathematical object|object]] such as an [[integer]], [[permutation]] or [[Graph (discrete mathematics)|graph]] must be found from a [[countable set]].
* A problem with continuous variables is known as a ''[[continuous optimization]]'' problem, in which an optimal value of a [[continuous function]] must be found. Such problems include [[Constrained optimization|constrained problem]]s and multimodal problems.
 
== Search space ==
In the context of an optimization problem, the '''search space''' refers to the set of all possible points or solutions that satisfy the problem's constraints, targets, or goals.<ref>{{Cite web |title=Search Space |url=https://courses.cs.washington.edu/courses/cse473/06sp/GeneticAlgDemo/searchs.html |access-date=2025-05-10 |website=courses.cs.washington.edu}}</ref> These points represent the feasible solutions that can be evaluated to find the optimal solution according to the objective function. The search space is often defined by the ___domain of the function being optimized, encompassing all valid inputs that meet the problem's requirements.<ref>{{Cite web |date=2020-09-22 |title=Search Space - LessWrong |url=https://www.lesswrong.com/w/search-space |access-date=2025-05-10 |website=www.lesswrong.com |language=en}}</ref>
 
The search space can vary significantly in size and complexity depending on the problem. For example, in a continuous optimization problem, the search space might be a multidimensional real-valued ___domain defined by bounds or constraints. In a discrete optimization problem, such as combinatorial optimization, the search space could consist of a finite set of permutations, combinations, or configurations.
 
In some contexts, the term ''search space'' may also refer to the optimization of the ___domain itself, such as determining the most appropriate set of variables or parameters to define the problem. Understanding and effectively navigating the search space is crucial for designing efficient algorithms, as it directly influences the computational complexity and the likelihood of finding an optimal solution.
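For a small discrete problem, the search space can be enumerated directly. The sketch below (plain Python; the objective and constraint are invented for the example) lists every feasible point and evaluates the objective over them:

```python
from itertools import product

# Illustrative problem: minimize f(x, y) = (x - 2)^2 + (y - 1)^2
# over integer points in [0, 5] x [0, 5] subject to x + y <= 4.
def objective(x, y):
    return (x - 2) ** 2 + (y - 1) ** 2

def feasible(x, y):
    return x + y <= 4  # the constraint defining the search space

# The search space: all candidate points that satisfy the constraint.
search_space = [(x, y) for x, y in product(range(6), repeat=2) if feasible(x, y)]

best = min(search_space, key=lambda p: objective(*p))
print(len(search_space), best)  # 15 feasible points; the optimum here is (2, 1)
```

Exhaustive enumeration like this is only workable for tiny instances; it is the growth of the search space with problem size that makes efficient navigation strategies necessary.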
 
==Continuous optimization problem==
 
The ''[[Canonical form|standard form]]'' of a ([[Continuity (mathematics)|continuous]]) optimization problem is<ref>{{cite book|title=Convex Optimization|first1=Stephen P.|last1=Boyd|first2=Lieven|last2=Vandenberghe|page=129|year=2004|publisher=Cambridge University Press|isbn=978-0-521-83378-3|url=https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf#page=143|format=pdf}}</ref>
<math display=block>\begin{align}
&\underset{x}{\operatorname{minimize}}& & f(x) \\
&\operatorname{subject\;to}
& &g_i(x) \leq 0, \quad i = 1, \dots, m \\
&&&h_j(x) = 0, \quad j = 1, \dots, p
\end{align}</math>
where
* {{math|''f'' : [[Euclidean space|ℝ<sup>''n''</sup>]] → [[Real numbers|ℝ]]}} is the '''[[Loss function|objective function]]''' to be minimized over the {{mvar|n}}-variable vector {{mvar|x}},
* {{math|''g''<sub>''i''</sub>(''x'') ≤ 0}} are called '''inequality [[Constraint (mathematics)|constraints]]''',
* {{math|''h''<sub>''j''</sub>(''x'') {{=}} 0}} are called '''equality constraints''', and
* {{math|''m'' ≥ 0}} and {{math|''p'' ≥ 0}}.
 
If {{math|''m'' {{=}} ''p'' {{=}} 0}}, the problem is an unconstrained optimization problem. By convention, the standard form defines a '''minimization problem'''. A '''maximization problem''' can be treated by [[Additive inverse|negating]] the objective function.
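The standard form can be made concrete with a deliberately naive grid search (illustrative only; practical solvers use gradient-based or interior-point methods, and equality constraints need more care than a grid can provide). The objective and constraint below are invented for the example:

```python
import itertools

# Naive treatment of the standard form with inequality constraints only
# (p = 0): filter a grid down to the feasible set, then minimize f over it.
def minimize_standard_form(f, gs, grid):
    """Return the grid point minimizing f subject to g_i(x) <= 0 for all g_i."""
    feasible = [x for x in grid if all(g(x) <= 0 for g in gs)]
    return min(feasible, key=f)

# Example: minimize f(x) = x1^2 + x2^2 subject to 1 - x1 - x2 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g1 = lambda x: 1 - x[0] - x[1]
pts = [i / 10 for i in range(0, 21)]           # grid on [0, 2], step 0.1
grid = list(itertools.product(pts, repeat=2))
best = minimize_standard_form(f, [g1], grid)
print(best)  # (0.5, 0.5): the grid point on x1 + x2 = 1 closest to the origin
```

A maximization problem fits the same routine by passing the negated objective, `lambda x: -f(x)`, mirroring the convention described above.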
 
==Combinatorial optimization problem==
{{Main|Combinatorial optimization}}
 
Formally, a [[combinatorial optimization]] problem {{mvar|A}} is a quadruple{{Citation needed|date=January 2018}} {{math|(''I'', ''f'', ''m'', ''g'')}}, where
* {{mvar|I}} is a [[Set (mathematics)|set]] of instances;
* given an instance {{math|''x'' ∈ ''I''}}, {{math|''f''(''x'')}} is the set of feasible solutions;
* given an instance {{mvar|x}} and a feasible solution {{mvar|y}} of {{mvar|x}}, {{math|''m''(''x'', ''y'')}} denotes the [[Measure (mathematics)|measure]] of {{mvar|y}}, which is usually a [[Positive (mathematics)|positive]] [[Real number|real]];
* {{mvar|g}} is the goal function, and is either {{math|[[Minimum (mathematics)|min]]}} or {{math|[[Maximum (mathematics)|max]]}}.
 
The goal is then to find for some instance {{mvar|x}} an ''optimal solution'', that is, a feasible solution {{mvar|y}} with
<math display=block>m(x, y) = g \left\{ m(x, y') \mid y' \in f(x) \right\} .</math>
 
For each combinatorial optimization problem, there is a corresponding [[decision problem]] that asks whether there is a feasible solution for some particular measure {{math|''m''<sub>0</sub>}}. For example, if there is a [[Graph (discrete mathematics)|graph]] {{mvar|G}} which contains vertices {{mvar|u}} and {{mvar|v}}, an optimization problem might be "find a path from {{mvar|u}} to {{mvar|v}} that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from {{mvar|u}} to {{mvar|v}} that uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'.
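The quadruple and the fewest-edges example can be sketched directly in code. The graph below is a made-up instance; {{math|''f''}} enumerates simple paths, {{mvar|m}} counts edges, and the goal {{mvar|g}} is min:

```python
from collections import deque

# A hand-made directed graph for the "fewest edges from u to v" example.
graph = {"u": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["v"], "v": []}

def feasible_solutions(x):
    """f(x): all simple paths from start to goal in the instance x."""
    g, start, goal = x
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            paths.append(path)
            continue
        for nxt in g[path[-1]]:
            if nxt not in path:        # keep paths simple
                queue.append(path + [nxt])
    return paths

def measure(x, y):
    """m(x, y): the number of edges used by path y."""
    return len(y) - 1

x = (graph, "u", "v")
best = min(feasible_solutions(x), key=lambda y: measure(x, y))  # g = min

# The corresponding decision problem: is there a path using at most m0 edges?
m0 = 10
print(measure(x, best), any(measure(x, y) <= m0 for y in feasible_solutions(x)))
```

Here the optimization version returns a shortest path (3 edges on this instance), while the decision version only returns a yes/no answer.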
 
In the field of [[approximation algorithm]]s, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.<ref name=Ausiello03>{{citation
|display-authors=etal}}</ref>
 
=== NP optimization problem ===
 
An ''NP-optimization problem'' (NPO) is a combinatorial optimization problem with the following additional conditions.<ref name=Hromkovic02>{{citation
| last1 = Hromkovic | first1 = Juraj
| year = 2002
| edition = 2nd
| title = Algorithmics for Hard Problems
| series = Texts in Theoretical Computer Science
| publisher = Springer
| isbn = 978-3-540-44134-2
}}</ref> Note that the polynomials referred to below are functions of the size of the respective functions' inputs, not the size of some implicit set of input instances.
* the size of every feasible solution <math>y \in f(x)</math> is polynomially [[Bounded set|bounded]] in the size of the given instance <math>x</math>,
* the languages <math>\{\, x \mid x \in I \,\}</math> and <math>\{\, (x, y) \mid y \in f(x) \,\}</math> can be [[decidable language|recognized]] in [[polynomial time]], and
* <math>m</math> is [[polynomial time|polynomial-time computable]].
 
This implies that the corresponding decision problem is in [[NP (complexity)|NP]]. In computer science, interesting optimization problems usually have the above properties and are therefore NPO problems. A problem is additionally called a P-optimization (PO) problem, if there exists an algorithm which finds optimal solutions in polynomial time. Often, when dealing with the class NPO, one is interested in optimization problems for which the decision versions are NP-complete. Note that hardness relations are always with respect to some reduction. Due to the connection between approximation algorithms and computational optimization problems, reductions which preserve approximation in some respect are for this subject preferred over the usual [[Turing reduction|Turing]] and [[Karp reduction]]s. An example of such a reduction would be the [[L-reduction]]. For this reason, optimization problems with NP-complete decision versions are not necessarily called NPO-complete.<ref name=Kann92>{{citation
 | last1 = Kann | first1 = Viggo
 | year = 1992
 | title = On the Approximability of NP-complete Optimization Problems
 | publisher = Royal Institute of Technology, Sweden
 | isbn = 91-7170-082-X
}}</ref>
 
NPO is divided into the following subclasses according to their approximability:<ref name=Hromkovic02/>
* ''NPO(I)'': Equals [[FPTAS]]. Contains the [[Knapsack problem]].
* ''NPO(II)'': Equals [[Polynomial-time approximation scheme|PTAS]]. Contains the [[Makespan scheduling problem]].
* ''NPO(III)'': The class of NPO problems that have polynomial-time algorithms which compute solutions with a cost at most ''c'' times the optimal cost (for minimization problems) or a cost at least <math>1/c</math> of the optimal cost (for maximization problems). In [[Juraj Hromkovič|Hromkovič]]'s book, all NPO(II)-problems are excluded from this class unless P = NP. Without the exclusion, equals APX. Contains [[MAX-SAT]] and metric [[Travelling salesman problem|TSP]].
* ''NPO(IV)'': The class of NPO problems with polynomial-time algorithms approximating the optimal solution by a ratio that is polynomial in a logarithm of the size of the input. In Hromkovič's book, all NPO(III)-problems are excluded from this class unless P = NP. Contains the [[set cover]] problem.
* ''NPO(V)'': The class of NPO problems with polynomial-time algorithms approximating the optimal solution by a ratio bounded by some function of {{mvar|n}}. In Hromkovič's book, all NPO(IV)-problems are excluded from this class unless P = NP. Contains the [[Travelling salesman problem|TSP]] and [[Clique problem|Max Clique]] problems.
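As an illustration of the logarithmic-ratio regime that set cover sits in, the classical greedy heuristic repeatedly picks the subset covering the most still-uncovered elements; it runs in polynomial time and its approximation ratio is logarithmic in the universe size. The instance below is a toy example:

```python
# Greedy set cover: each round, choose the subset that covers the most
# uncovered elements; a polynomial-time algorithm with a ratio logarithmic
# in the universe size (the instance data here is invented for illustration).
def greedy_set_cover(universe, subsets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        chosen.append(best)
        uncovered -= set(best)
    return chosen

universe = range(1, 8)                      # elements 1..7
subsets = [[1, 2, 3], [3, 4, 5], [5, 6, 7], [1, 4, 7]]
cover = greedy_set_cover(universe, subsets)
print(len(cover))  # a cover using 3 of the 4 subsets
```

On this instance the greedy cover happens to be optimal; in general the greedy answer can exceed the optimum by a factor on the order of the logarithm of the number of elements.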
 
Another class of interest is NPOPB, NPO with polynomially bounded cost functions. Problems with this condition have many desirable properties.

==See also==
* {{annotated link|Counting problem (complexity)}}
* {{annotated link|Design Optimization}}
* {{annotated link|Ekeland's variational principle}}
* {{annotated link|Function problem}}
* {{annotated link|Glove problem}}
* {{annotated link|Operations research}}
* {{annotated link|Satisficing}} − the optimum need not be found, just a "good enough" solution.
* {{annotated link|Search problem}}
* {{annotated link|Semi-infinite programming}}
 
==References==
{{reflist}}
 
==External links==
 
* {{cite web|title=How Traffic Shaping Optimizes Network Bandwidth|work=IPC|date=12 July 2016|access-date=13 February 2017|url=https://www.ipctech.com/how-traffic-shaping-optimizes-network-bandwidth}}
 
{{Convex analysis and variational analysis}}
{{Authority control}}
 
[[Category:Computational problems]]