Semidefinite programming: Difference between revisions

This is an SDP because the objective function and constraints are all linear functions of vector inner products. Solving the SDP gives a set of unit vectors in <math>\mathbf{R^n}</math>; since the vectors are not required to be collinear, the value of this relaxed program can only be higher than the value of the original quadratic integer program. Finally, a rounding procedure is needed to obtain a partition. Goemans and Williamson simply choose a uniformly random hyperplane through the origin and divide the vertices according to which side of the hyperplane the corresponding vectors lie on. Straightforward analysis shows that this procedure achieves an expected ''approximation ratio'' (performance guarantee) of 0.87856 − ε. (The expected value of the cut is the sum over edges of the probability that the edge is cut, which equals the angle <math>\cos^{-1}\langle v_{i}, v_{j}\rangle</math> between the vectors at the endpoints of the edge divided by <math>\pi</math>. Comparing this probability to <math>(1-\langle v_{i}, v_{j}\rangle)/{2}</math>, in expectation the ratio is always at least 0.87856.) Assuming the [[unique games conjecture]], it can be shown that this approximation ratio is essentially optimal.
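The rounding step can be sketched in a few lines of Python (an illustrative example, not taken from the cited work; the function names and the use of NumPy are assumptions). Given the unit vectors <math>v_1, \dots, v_n</math> returned by an SDP solver, a random Gaussian direction defines the hyperplane, and each vertex is assigned to a side according to the sign of the corresponding inner product:

<syntaxhighlight lang="python">
# Illustrative sketch of Goemans–Williamson random hyperplane rounding.
# Assumes the SDP relaxation has already been solved and its solution is
# given as the rows of V (an n-by-d array of unit vectors).
import numpy as np

def random_hyperplane_rounding(V, seed=None):
    """Partition vertices by the sign of <v_i, r> for a random direction r."""
    rng = np.random.default_rng(seed)
    r = rng.standard_normal(V.shape[1])   # normal vector of a random hyperplane
    return V @ r >= 0                     # True/False = side of the hyperplane

def cut_value(W, side):
    """Total weight of edges crossing the partition (W is a symmetric weight matrix)."""
    crossing = np.not_equal.outer(side, side)   # True where endpoints are on opposite sides
    return W[crossing].sum() / 2.0              # each edge counted twice in a symmetric W
</syntaxhighlight>

In practice the rounding is repeated with several random hyperplanes and the best resulting cut is kept, since each trial is far cheaper than solving the SDP itself.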
 
Since the original paper of Goemans and Williamson, SDPs have been applied to develop numerous approximation algorithms. Prasad Raghavendra has developed a general framework for constraint satisfaction problems based on the [[unique games conjecture]].<ref>{{Cite book |chapter-url=http://doi.acm.org/10.1145/1374376.1374414 |doi=10.1145/1374376.1374414 |chapter=Optimal algorithms and inapproximability results for every CSP? |title=Proceedings of the fortieth annual ACM symposium on Theory of computing |year=2008 |last1=Raghavendra |first1=Prasad |pages=245–254 |isbn=9781605580470 |s2cid=15075197}}</ref>
 
== Algorithms ==
 
=== Interior point methods ===
Most codes are based on [[interior point methods]] (CSDP, [[MOSEK]], SeDuMi, [https://www.math.cmu.edu/~reha/sdpt3.html SDPT3], DSDP, SDPA). These methods are robust and efficient for general linear SDP problems, but they are restricted by the fact that the algorithms are second-order methods and need to store and factorize a large (and often dense) matrix. Theoretically, the state-of-the-art high-accuracy SDP algorithms<ref>{{Cite journal |last1=Jiang |first1=Haotian |last2=Kathuria |first2=Tarun |last3=Lee |first3=Yin Tat |last4=Padmanabhan |first4=Swati |last5=Song |first5=Zhao |date=November 2020 |title=A Faster Interior Point Method for Semidefinite Programming |url=https://ieeexplore.ieee.org/document/9317892/ |journal=2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS) |___location=Durham, NC, USA |publisher=IEEE |pages=910–918 |doi=10.1109/FOCS46700.2020.00089 |arxiv=2009.10217 |isbn=978-1-7281-9621-3 |s2cid=221836388}}</ref><ref>{{Cite arXiv |last1=Huang |first1=Baihe |last2=Jiang |first2=Shunhua |last3=Song |first3=Zhao |last4=Tao |first4=Runzhou |last5=Zhang |first5=Ruizhe |date=2021-11-18 |title=Solving SDP Faster: A Robust IPM Framework and Efficient Implementation |class=math.OC |eprint=2101.08208}}</ref> are based on this approach.
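As a minimal sketch of how such solvers are typically invoked, the following Python example uses the CVXPY modeling package (an assumption for illustration; it is not one of the codes listed above) to pose a small standard-form SDP, minimizing <math>\langle C, X\rangle</math> subject to a linear constraint and <math>X \succeq 0</math>, and hands it to whichever conic solver is installed:

<syntaxhighlight lang="python">
# Minimal standard-form SDP sketch; the data C, the trace constraint, and the
# choice of CVXPY as the modeling layer are illustrative assumptions.
import numpy as np
import cvxpy as cp

n = 3
C = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,               # X is positive semidefinite
               cp.trace(X) == 1.0]   # single linear equality constraint
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()                      # dispatches to an installed conic solver

print(problem.value)                 # optimal objective value
print(X.value)                       # optimal PSD matrix
</syntaxhighlight>

The modeling layer only formulates the problem; the numerical work is done by the underlying solver, which for interior-point codes involves forming and factorizing the large, often dense, matrices mentioned above.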
 
=== First-order methods ===
 
=== Approximate methods ===
Algorithms that solve SDPs approximately have been proposed as well. The main goal of such methods is to achieve lower complexity in applications where approximate solutions are sufficient and low complexity is essential. A prominent method that has been used for data detection in multiple-input multiple-output (MIMO) wireless systems is Triangular Approximate SEmidefinite Relaxation (TASER),<ref>{{Cite journal |last1=Castañeda |first1=O. |last2=Goldstein |first2=T. |last3=Studer |first3=C. |date=December 2016 |title=Data Detection in Large Multi-Antenna Wireless Systems via Approximate Semidefinite Relaxation |url=https://ieeexplore.ieee.org/document/7733153/ |journal=IEEE Transactions on Circuits and Systems I: Regular Papers |volume=63 |issue=12 |pages=2334–2346 |doi=10.1109/TCSI.2016.2607198 |arxiv=1609.01797 |hdl=20.500.11850/448631 |issn=1558-0806 |doi-access=free}}</ref> which operates on the Cholesky decomposition factors of the semidefinite matrix instead of the semidefinite matrix itself. This method calculates approximate solutions for a max-cut-like problem that are often comparable to solutions from exact solvers, but in only 10–20 algorithm iterations. A generic sketch of the underlying factorization idea is given below.
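The general idea of working with a factor rather than the PSD matrix itself can be illustrated with the following Python sketch (this is ''not'' the TASER algorithm from the cited paper; the objective, step size, and rank are illustrative assumptions). Writing <math>X = VV^{\mathsf T}</math> makes positive semidefiniteness hold by construction, so a max-cut-like objective can be optimized directly over the factor <math>V</math> with unit-norm rows:

<syntaxhighlight lang="python">
# Generic factorized sketch (not TASER): maximize sum_{ij} W_ij (1 - <v_i, v_j>)/2
# over vectors v_i with ||v_i|| = 1, using projected gradient steps on the factor V.
import numpy as np

def factorized_maxcut(W, rank=8, steps=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    V = rng.standard_normal((n, rank))
    V /= np.linalg.norm(V, axis=1, keepdims=True)        # start with unit rows
    for _ in range(steps):
        V -= lr * (W @ V)                                # gradient step decreasing sum_ij W_ij <v_i, v_j>
        V /= np.linalg.norm(V, axis=1, keepdims=True)    # project each row back onto the unit sphere
    return V                                             # rows can then be rounded to a partition
</syntaxhighlight>

Because each iteration only requires a matrix–vector-style product with the factor, such schemes avoid forming or factorizing the full <math>n \times n</math> semidefinite matrix, which is the source of their low per-iteration cost.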
 
== Applications ==
Semidefinite programming has been applied to find approximate solutions to combinatorial optimization problems, such as the solution of the [[max cut]] problem with an [[approximation ratio]] of 0.87856. SDPs are also used in geometry to determine tensegrity graphs, and arise in control theory as [[Linear matrix inequality|LMIs]], and in inverse elliptic coefficient problems as convex, non-linear, semidefiniteness constraints.<ref>{{citation|last1=Harrach|first1=Bastian|date=2021|title=Solving an inverse elliptic coefficient problem by convex non-linear semidefinite programming|journal=Optimization Letters|language=en|doi=10.1007/s11590-021-01802-4|arxiv=2105.11440|s2cid=235166806}}</ref>