This is an SDP because the objective function and constraints are all linear functions of vector inner products. Solving the SDP gives a set of unit vectors in <math>\mathbf{R}^n</math>; since the vectors are not required to be collinear, the value of this relaxed program can only be higher than the value of the original quadratic integer program. Finally, a rounding procedure is needed to obtain a partition. Goemans and Williamson simply choose a uniformly random hyperplane through the origin and divide the vertices according to which side of the hyperplane the corresponding vectors lie on. Straightforward analysis shows that this procedure achieves an expected ''approximation ratio'' (performance guarantee) of 0.87856 − ε. (The expected value of the cut is the sum over edges of the probability that the edge is cut. For an edge <math>(i, j)</math>, this probability equals <math>\cos^{-1}\langle v_{i}, v_{j}\rangle/\pi</math>, the angle between the vectors at its endpoints divided by <math>\pi</math>. Comparing this probability to the edge's contribution <math>(1-\langle v_{i}, v_{j}\rangle)/2</math> to the SDP objective shows that, in expectation, the ratio is always at least 0.87856.) Assuming the [[unique games conjecture]], it can be shown that this approximation ratio is essentially optimal.
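The rounding step and the key inequality in the analysis can be sketched as follows. This is a minimal illustration only: the unit vectors are random stand-ins for an actual SDP solution, and the small edge list is made up.

```python
import numpy as np

def gw_round(vectors, edges, rng):
    # Goemans-Williamson rounding: pick a uniformly random hyperplane
    # through the origin and partition the vertices by side.
    r = rng.normal(size=vectors.shape[1])      # hyperplane normal
    side = np.sign(vectors @ r)
    return sum(1 for i, j in edges if side[i] != side[j])

rng = np.random.default_rng(0)

# Stand-in for an SDP solution: random unit vectors (in practice these
# would come from solving the relaxation).
V = rng.normal(size=(4, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)
edges = [(0, 1), (1, 2), (2, 0), (0, 3)]       # hypothetical graph
cut_size = gw_round(V, edges, rng)

# Analysis check: an edge is cut with probability arccos(<v_i, v_j>)/pi,
# which is at least ~0.87856 times the SDP term (1 - <v_i, v_j>)/2,
# uniformly over the inner product rho.
rho = np.linspace(-0.999, 0.999, 2001)
worst_ratio = np.min((np.arccos(rho) / np.pi) / ((1 - rho) / 2))
```

Sweeping `rho` over a grid reproduces the 0.87856 constant numerically as the worst-case ratio between the cut probability and the SDP contribution.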
Since the original paper of Goemans and Williamson, SDPs have been applied to develop numerous approximation algorithms. Subsequently, Prasad Raghavendra developed a general framework for constraint satisfaction problems based on the [[unique games conjecture]].<ref>
== Algorithms ==
=== Interior point methods ===
Most codes are based on [[interior point methods]] (CSDP, [[MOSEK]], SeDuMi, [https://www.math.cmu.edu/~reha/sdpt3.html SDPT3], DSDP, SDPA), which are robust and efficient for general linear SDP problems. They are limited by the fact that they are second-order methods and must store and factorize a large (and often dense) matrix. Theoretically, the state-of-the-art high-accuracy SDP algorithms<ref>{{Cite journal |
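To illustrate the barrier idea behind interior-point methods, here is a minimal one-variable sketch (not any of the solvers above): it treats the toy SDP "maximize <math>t</math> subject to <math>C - tI \succeq 0</math>", whose optimum is the smallest eigenvalue of <math>C</math>, by taking Newton steps on the log-det barrier objective <math>t + \mu \log\det(C - tI)</math> while driving <math>\mu</math> to zero.

```python
import numpy as np

def min_eig_barrier(C, mu_min=1e-8):
    # Barrier method for: maximize t  s.t.  C - t*I positive semidefinite.
    n = C.shape[0]
    # Gershgorin lower bound minus a margin gives a strictly feasible start.
    t = min(C[i, i] - sum(abs(C[i, j]) for j in range(n) if j != i)
            for i in range(n)) - 1.0
    mu = 1.0
    while mu > mu_min:
        for _ in range(100):
            S = C - t * np.eye(n)
            Sinv = np.linalg.inv(S)
            g = 1.0 - mu * np.trace(Sinv)          # gradient of the barrier
            if abs(g) < 1e-9:
                break
            h = -mu * np.trace(Sinv @ Sinv)        # second derivative (< 0)
            step = -g / h                          # Newton step
            # Damp the step so C - t*I stays positive definite.
            while np.min(np.linalg.eigvalsh(C - (t + step) * np.eye(n))) <= 0:
                step *= 0.5
            t += step
        mu *= 0.1                                  # shrink the barrier weight
    return t
```

As <math>\mu \to 0</math> the barrier optimum approaches the true optimum, here the smallest eigenvalue of <math>C</math>; real interior-point SDP codes follow the same central-path idea in matrix variables, which is where the large factorizations arise.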
=== First-order methods ===
=== Approximate methods ===
Algorithms that solve SDPs approximately have also been proposed. The main goal of such methods is to achieve lower complexity in applications where approximate solutions are sufficient. A prominent method that has been used for data detection in multiple-input multiple-output (MIMO) wireless systems is Triangular Approximate SEmidefinite Relaxation (TASER),<ref>{{Cite journal|
== Applications ==
Semidefinite programming has been applied to find approximate solutions to combinatorial optimization problems, such as the [[max cut]] problem, for which it yields an [[approximation ratio]] of 0.87856. SDPs are also used in geometry to determine tensegrity graphs, arise in control theory as [[Linear matrix inequality|LMIs]], and appear in inverse elliptic coefficient problems as convex, non-linear, semidefiniteness constraints.<ref>{{citation|last1=Harrach|first1=Bastian|date=2021|title=Solving an inverse elliptic coefficient problem by convex non-linear semidefinite programming|journal=Optimization Letters|language=en|doi=10.1007/s11590-021-01802-4|arxiv=2105.11440|s2cid=235166806}}</ref>
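As a small illustration of the control-theory connection, the Lyapunov LMI <math>A^T P + P A \prec 0</math>, <math>P \succ 0</math> certifies stability of <math>\dot{x} = Ax</math>. For a fixed right-hand side <math>Q \succ 0</math>, the equality version <math>A^T P + P A = -Q</math> is a linear system, so this particular semidefinite feasibility problem needs no general SDP solver. A minimal numpy sketch, with a made-up stable matrix as the example:

```python
import numpy as np

def lyapunov_certificate(A, Q):
    # Solve the Lyapunov equation A^T P + P A = -Q as a linear system,
    # using the row-major identity vec(A X B) = (A kron B^T) vec(X).
    n = A.shape[0]
    M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
    P = np.linalg.solve(M, (-Q).flatten()).reshape(n, n)
    return (P + P.T) / 2           # symmetrize against round-off

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # a stable matrix (example)
P = lyapunov_certificate(A, np.eye(2))
# A is stable if and only if the certificate P is positive definite:
stable = np.all(np.linalg.eigvalsh(P) > 0)
```

The general LMI, where one searches over several unknown matrices simultaneously or adds extra semidefinite constraints, no longer reduces to a linear solve and is handled by the SDP methods described above.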