== Possible use cases ==
Even if they are never used in practice, galactic algorithms may still contribute to [[computer science]]:
* An algorithm, even if impractical, may demonstrate new techniques that are eventually used to create practical algorithms.
* Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical.
* An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or that proposed bounds are wrong, and so advance the theory of algorithms.
== Examples ==
=== Primality testing ===
The [[AKS primality test]] is galactic. It is the most theoretically sound of any known algorithm that can take an arbitrary number and tell if it is [[prime number|prime]]. In particular, it is ''provably polynomial-time'', ''deterministic'', and ''unconditionally correct''. All other known algorithms fall short on at least one of these criteria, but the shortcomings are minor and the calculations are much faster, so they are used instead. [[Elliptic curve primality proving|ECPP]] in practice runs much faster than AKS, but it has never been proven to be polynomial time. The [[Miller–Rabin primality test|Miller–Rabin]] test is also much faster than AKS, but produces only a probabilistic result. However, the probability of error can be driven down to arbitrarily small values (say <math>< 10^{-100}</math>), good enough for practical purposes. There is also a [[Miller–Rabin primality test#Deterministic variants|deterministic version]] of the Miller–Rabin test, which runs in polynomial time over all inputs, but its correctness depends on the [[generalized Riemann hypothesis]] (which is widely believed, but not proven). The existence of these (much) faster alternatives means AKS is not used in practice.
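The probabilistic approach can be illustrated with a minimal sketch of the Miller–Rabin test; the <code>rounds</code> parameter trades running time against an error probability of at most 4<sup>−rounds</sup> for composite inputs:

```python
import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin: for composite n, each random witness fails to
    # expose compositeness with probability at most 1/4.
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True                   # probably prime
```

This is a sketch for illustration; production libraries combine trial division, stronger witness choices, and (for provable results) methods such as ECPP.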
=== Matrix multiplication ===
The first improvement over brute-force [[matrix multiplication]] (which needs <math>O(n^3)</math> multiplications) was the [[Strassen algorithm]]: a recursive algorithm that needs <math>O(n^{2.807})</math> multiplications. This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated [[group theory]], are the [[Coppersmith–Winograd algorithm]] and its slightly better successors, needing <math>O(n^{2.373})</math> multiplications. These are galactic – "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical."<ref>{{citation
| last = Le Gall | first = F.
| arxiv = 1204.1111
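The seven-multiplication recursion at the heart of Strassen's algorithm can be sketched as follows; this is a minimal illustration for square matrices whose size is a power of two, whereas tuned implementations fall back to the cubic method below a cutoff size:

```python
def strassen(A, B):
    # Multiply two n x n matrices (n a power of two) with Strassen's
    # seven recursive products instead of the naive eight.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split into four h x h quadrants
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

Because each level replaces eight multiplications with seven, the operation count is O(n^log2 7) ≈ O(n^2.807); the galactic Coppersmith–Winograd family improves the exponent further but with enormous hidden constants.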
=== Sub-graphs ===
The problem of [[decision problem|deciding]] whether a graph <math>G</math> contains <math>H</math> as a [[graph minor|minor]] is [[NP-complete]] in general, but where <math>H</math> is fixed, it can be solved in polynomial time. The running time for testing whether <math>H</math> is a minor of <math>G</math> in this case is <math>O(n^2)</math>,<ref name="kkr12">{{cite journal |last1=Kawarabayashi | first1=Ken-ichi |last2=Kobayashi | first2=Yusuke |last3=Reed | first3=Bruce | authorlink3=Bruce Reed (mathematician) |year=2012 |title=The disjoint paths problem in quadratic time |journal=[[Journal of Combinatorial Theory]] | series=Series B |volume=102 |issue=2 |pages=424–435|doi=10.1016/j.jctb.2011.07.004 |doi-access=free }}</ref> where <math>n</math> is the number of vertices in <math>G</math> and the [[big O notation]] hides a constant that depends superexponentially on <math>H</math>. The constant is greater than <math>2 \uparrow \uparrow (2 \uparrow \uparrow (2 \uparrow \uparrow (h/2) ) ) </math> in [[Knuth's up-arrow notation]], where <math>h</math> is the number of vertices in <math>H</math>.<ref>{{cite journal |author=Johnson, David S. |title=The NP-completeness column: An ongoing guide (edition 19) |journal= Journal of Algorithms |volume=8 |issue=2 |year=1987 |pages=285–303 |citeseerx=10.1.1.114.3864 |doi=10.1016/0196-6774(87)90043-5 }}</ref> Even the case of <math>h = 4</math> cannot be reasonably computed as the constant is greater than 2 [[pentated]] by 4, or 2 [[tetrated]] by 65536, that is, <math>2 \uparrow \uparrow \uparrow 4 = {}^{65536}2 = \underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{65536}</math>, a power tower of 65536 twos.
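Knuth's up-arrow notation can be made concrete with a short recursive definition; only tiny inputs terminate, which itself illustrates how fast these constants grow:

```python
def up_arrow(a, n, b):
    # Knuth's up-arrow a "n arrows" b: one arrow is exponentiation,
    # and each additional arrow iterates the previous operation.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

up_arrow(2, 2, 4)   # 2 tetrated by 4: 2^(2^(2^2)) = 65536
# up_arrow(2, 3, 4), i.e. 2 pentated by 4, is 2 tetrated by 65536 and
# is utterly beyond any physically possible computation.
```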
=== Cryptographic breaks ===
=== Hutter search ===
A single algorithm, "Hutter search", can solve any well-defined problem in an asymptotically optimal time, barring some caveats{{Such as|date=June 2025}}. It works by searching through all possible algorithms (by runtime), while simultaneously searching through all possible [[formal proof|proofs]] (by length of proof), looking for a proof of correctness for each algorithm. Since the proof of correctness is of finite size, it "only" adds a constant and does not affect the asymptotic runtime. However, this constant is so big that the algorithm is entirely impractical.<ref>{{cite arXiv|last=Hutter|first=Marcus|date=2002-06-14|title=The Fastest and Shortest Algorithm for All Well-Defined Problems|eprint=cs/0206022}}</ref><ref>{{Cite journal|last=Gagliolo|first=Matteo|date=2007-11-20|title=Universal search|journal=Scholarpedia|language=en|volume=2|issue=11|pages=2575|doi=10.4249/scholarpedia.2575|issn=1941-6016|bibcode=2007SchpJ...2.2575G|doi-access=free}}</ref> For example, if the shortest proof of correctness of a given algorithm is 1000 bits long, the search will examine at least 2<sup>999</sup> other potential proofs first.
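The "simultaneous" search over algorithms and proofs relies on dovetailing: interleaving infinitely many candidates so that each one is reached after finitely many steps. The toy generator below shows only this enumeration order, not the actual proof checking:

```python
from itertools import count

def dovetail():
    # Walk the grid of (algorithm index, proof length) pairs along
    # anti-diagonals, so every pair is eventually visited even though
    # both dimensions are infinite.
    for total in count(0):
        for alg in range(total + 1):
            yield (alg, total - alg)

gen = dovetail()
pairs = [next(gen) for _ in range(6)]
# pairs == [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

The same scheduling trick is what lets Hutter search pay "only" a constant factor for the proof search, albeit an astronomically large one.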
Hutter search is related to [[Solomonoff's theory of inductive inference|Solomonoff induction]], which is a formalization of [[Bayesian inference]]. All [[computable]] theories (as implemented by programs) which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Again, the search over all possible explanations makes this procedure galactic.
=== Optimization ===
[[Simulated annealing]], when used with a logarithmic cooling schedule, has been proven to find the [[global optimum]] of any optimization problem. However, such a cooling schedule results in entirely impractical runtimes, and is never used.<ref>{{cite journal |last1=Liang |first1=Faming |first2=Yichen |last2=Cheng |first3=Guang |last3=Lin |title=Simulated stochastic approximation annealing for global optimization with a square-root cooling schedule |journal=Journal of the American Statistical Association |volume=109 |issue=506 |year=2014 |pages=847–863|doi=10.1080/01621459.2013.872993 |s2cid=123410795 }}</ref> Nevertheless, knowing this ideal algorithm exists has led to practical variants that are able to find very good (though not provably optimal) solutions to complex optimization problems.<ref>{{cite journal |author= Ingber, Lester |title=Simulated annealing: Practice versus theory |journal=Mathematical and Computer Modelling |volume=18 |issue=11 |year=1993 |pages=29–57|doi=10.1016/0895-7177(93)90204-C |doi-access=free |citeseerx=10.1.1.15.1046 }}</ref>
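A minimal sketch of simulated annealing with the logarithmic schedule, applied to a toy one-dimensional problem; real applications replace this schedule with much faster cooling and give up the convergence guarantee:

```python
import math
import random

def anneal(energy, neighbor, x0, steps, c=1.0):
    # Logarithmic cooling T_k = c / log(k + 2): the schedule with the
    # theoretical global-optimality guarantee, which cools so slowly
    # that practical codes never use it.
    x, e = x0, energy(x0)
    best, best_e = x, e
    for k in range(steps):
        T = c / math.log(k + 2)
        y = neighbor(x)
        ey = energy(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if ey <= e or random.random() < math.exp((e - ey) / T):
            x, e = y, ey
            if e < best_e:
                best, best_e = x, e
    return best, best_e

# Toy problem: minimize (x - 3)^2 over the integers, starting from 0.
random.seed(0)
best, best_e = anneal(lambda x: (x - 3) ** 2,
                      lambda x: x + random.choice([-1, 1]),
                      x0=0, steps=5000)
```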
=== Minimum spanning trees ===
=== Hash tables ===
Researchers have found an algorithm that achieves the provably best-possible<ref name="SuccinctDictionaries">{{cite conference |
| last1 = Bender
| first1 = Michael
}}</ref> balance between the space a hash table uses and the time its operations take. But it remains purely theoretical: "Despite the new hash table’s unprecedented efficiency, no one is likely to try building it anytime soon. It’s just too complicated to construct."<ref name="OptimalBalance">{{cite web | last=Nadis | first=Steve | title=Scientists Find Optimal Balance of Data Storage and Time | website=Quanta Magazine | date=8 February 2024 | url=https://www.quantamagazine.org/scientists-find-optimal-balance-of-data-storage-and-time-20240208/ | access-date=12 February 2025}}</ref> and "in practice, constants really matter. In the real world, a factor of 10 is a game ender."<ref name="OptimalBalance"/>
=== Connectivity in undirected graphs ===
Connectivity in undirected graphs (also known as USTCON, for Undirected Source-Target CONnectivity) is the problem of deciding whether a path exists between two given vertices of an undirected graph.
| last = Reingold | first = Omer | author-link = Omer Reingold
| doi = 10.1145/1391289.1391291
| title = Undirected connectivity in log-space
| volume = 55
However, despite the asymptotically better space requirement, [[SL (complexity)#Important results|this algorithm is galactic]]. The constant hidden by the <math>O(\log N)</math> is so big that in any practical case it uses far more memory than the well-known <math>O(N)</math> algorithms, and it is exceedingly slow. So, despite being a theoretical landmark (more than 1000 citations as of 2025), it is never used in practice.
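For contrast, the linear-space approach that is actually used in practice is a plain graph traversal; a minimal sketch:

```python
from collections import deque

def connected(adj, s, t):
    # Standard breadth-first search: simple and fast, but the `seen`
    # set needs memory linear in the number of vertices -- the O(N)
    # space that Reingold's log-space algorithm avoids in theory.
    seen = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

# Tiny example graph as an adjacency list: 0-1-2 connected, 3 isolated.
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
# connected(adj, 0, 2) is True; connected(adj, 0, 3) is False
```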
=== Low-density parity-check codes ===
'''[[Low-density parity-check code]]s''', also known as '''LDPC''' or '''Gallager''' codes, are an example of an algorithm that was galactic when first developed, but became practical as computation improved. They were originally conceived by [[Robert G. Gallager]] in his doctoral dissertation<ref>{{cite thesis |last=Gallager |first=Robert G. |date= 1960 |title=Low density parity check codes |url=https://dspace.mit.edu/bitstream/handle/1721.1/11804/32786367-MIT.pdf |degree=Ph.D |publisher=Massachusetts Institute of Technology }}</ref> at the [[Massachusetts Institute of Technology]] in 1960.<ref>{{Cite news |last=Hardesty |first=L. |date=January 21, 2010 |title=Explained: Gallager codes |url=http://web.mit.edu/newsoffice/2010/gallager-codes-0121.html |access-date=August 7, 2013 |journal=MIT News}}</ref><ref name="G1962">{{cite journal |last=Gallager |first=R.G. |date=January 1962 |title=Low density parity check codes |journal=IRE Trans. Inf. Theory |volume=8 |issue=1 |pages=21–28 |doi=10.1109/TIT.1962.1057683 |s2cid=260490814 |hdl=1721.1/11804/32786367-MIT}}</ref> Although their performance was far better than that of other codes of the era, decoding them was computationally infeasible on the hardware then available, and they were largely neglected.
Renewed interest in LDPC codes came after the invention of the patented [[turbo code]]s; LDPC codes were rediscovered in 1996 by David MacKay and Radford Neal,<ref>{{cite journal
|last1=MacKay |first1=David JC |author1-link=David J. C. MacKay|last2=Neal |first2=Radford M |author2-link=Radford M. Neal
|journal=Electronics Letters
|volume=32
|number=18
|pages=1645–1646
|year=1996
|publisher=IET
|doi=10.1049/el:19961141 |bibcode=1996ElL....32.1645M |url=https://docs.switzernet.com/people/emin-gabrielyan/060708-thesis-ref/papers/MacKay96.pdf}}</ref> and became popular as a patent-free alternative.<ref name="Closing">{{cite journal |author=Erico Guizzo |date=Mar 1, 2004 |title=CLOSING IN ON THE PERFECT CODE |url=https://spectrum.ieee.org/closing-in-on-the-perfect-code |url-status=dead |journal=IEEE Spectrum |archive-url=https://web.archive.org/web/20210902170851/https://spectrum.ieee.org/closing-in-on-the-perfect-code |archive-date=September 2, 2021}} "Another advantage, perhaps the biggest of all, is that the LDPC patents have expired, so companies can use them without having to pay for intellectual-property rights."</ref> Even though the turbo code patents have now expired, LDPC codes also have some technical advantages, and are used in many applications today.
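The parity-check principle behind these codes can be illustrated with a toy example; the small matrix <code>H</code> below is made up for illustration and is far too tiny and dense to be a real LDPC matrix:

```python
# Each row of H is one parity check; a received word r is a valid
# codeword exactly when every check is satisfied, i.e. H r = 0 over GF(2).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(r):
    # One bit per check: all zeros means "no error detected".
    return [sum(h * x for h, x in zip(row, r)) % 2 for row in H]

codeword = [1, 0, 1, 1, 1, 0]   # satisfies all three checks
# syndrome(codeword) == [0, 0, 0]; flipping any bit makes it nonzero.
```

Real LDPC codes use very large, very sparse H matrices, and the iterative belief-propagation decoding of such matrices is what Gallager's 1960s hardware could not afford.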
== References ==