A galactic algorithm is one that runs faster than any other algorithm for problems that are sufficiently large, but where "sufficiently large" is so big that the algorithm is never used in practice. An example is an algorithm that is the fastest known way to multiply two numbers, provided the numbers have at least 10^214857091104455251940635045059417341952 digits.[1] Since this requires (vastly) more digits than there are atoms in the universe, the algorithm is never used in practice. Galactic algorithms were so named by Richard Lipton and Ken Regan.[2]
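To make the crossover idea concrete, here is a toy model in Python. The cost functions and constants are invented for illustration; they are not the costs of any real multiplication algorithm. It finds the input size at which an asymptotically faster algorithm with an enormous constant first beats a simple one:

```python
import math

# Toy cost models: the constants below are invented for illustration only.
def cost_simple(n):
    # Simple algorithm: tiny constant, worse asymptotics.
    return 1.0 * n**2

def cost_fancy(n):
    # "Galactic" algorithm: better asymptotics, enormous constant.
    return 1e30 * n * math.log2(n)

def crossover(lo=2, hi=10**60):
    # Binary search for the first n where the fancy algorithm is cheaper.
    # The predicate is monotone: false below the crossover, true above it.
    while lo < hi:
        mid = (lo + hi) // 2
        if cost_fancy(mid) < cost_simple(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Prints a number around 10^32: the "better" algorithm only wins on
# inputs far beyond anything that occurs in practice.
print(crossover())
```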
Despite the fact that they will never be used, galactic algorithms may still contribute to computer science:
- An algorithm, even if impractical, may demonstrate new techniques that eventually lead to practical algorithms.
- Computer sizes may catch up to the crossover point, so the previously impractical algorithm is used.
- An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or alternatively show that conjectured bounds are wrong. As Lipton says: "This alone could be important and often is a great reason for finding such algorithms. For an example, if there were tomorrow a discovery that showed there is a factoring algorithm with a huge but provably polynomial time bound that would change our beliefs about factoring. The algorithm might never be used, but would certainly shape the future research into factoring."
Examples
There are several well-known algorithms with world-beating asymptotic behavior, but only on impractically large problems:
- Matrix multiplication: The first improvement over brute-force multiplication, which is O(N^3), is the Strassen algorithm, a recursive algorithm that is O(N^2.807) (see the sketch after this list). This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, delivering O(N^2.373). These are galactic: "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical."[3]
- The problem of deciding whether a graph G contains H as a minor is NP-complete in general, but where H is fixed, it can be solved in polynomial time. The running time for testing whether H is a minor of G in this case is O(n^2),[4] where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2 ↑↑ (2 ↑↑ (2 ↑↑ (h/2))) in Knuth's up-arrow notation, where h is the number of vertices in H.[5] (See the up-arrow sketch below.)
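The recursive idea behind Strassen's algorithm can be shown in a short sketch. The following is a minimal illustration, assuming square matrices whose size is a power of two; a practical implementation would pad inputs and fall back to the naive method below some cutoff, since the recursion's overhead dominates for small matrices:

```python
# Minimal sketch of Strassen's recursive matrix multiplication.
# Assumes n x n matrices with n a power of two.

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four h x h quadrants.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Seven recursive products instead of the naive eight: this is
    # where the O(N^2.807) = O(N^log2(7)) exponent comes from.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Combine the products into the quadrants of the result.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```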
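To convey how fast the hidden constant in the graph-minor bound grows, here is a minimal sketch of Knuth's up-arrow notation (the function name up_arrow is ours for illustration). Evaluating the actual bound is of course hopeless: 2 ↑↑ 5 = 2^65536 already has almost 20,000 digits, and the bound nests three levels of ↑↑:

```python
# Sketch of Knuth's up-arrow notation by its recursive definition:
# a ↑^1 b = a**b, and a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)).

def up_arrow(a, n, b):
    """Compute a ↑^n b (a followed by n up-arrows, then b)."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 2, 4))  # 2 ↑↑ 4 = 2^(2^(2^2)) = 65536
```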
References
- ^ Harvey, David; van der Hoeven, Joris. "Integer multiplication in time O(n log n)".
- ^ Lipton, Richard J.; Regan, Kenneth W. (2013). "David Johnson: Galactic Algorithms". People, Problems, and Proofs. Heidelberg: Springer Berlin. pp. 109–112.
- ^ Le Gall, F. (2012), "Faster algorithms for rectangular matrix multiplication", Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2012), pp. 514–523, arXiv:1204.1111, doi:10.1109/FOCS.2012.80.
- ^ Kawarabayashi, Kobayashi & Reed (2012).
- ^ Johnson, David S. (1987). "The NP-completeness column: An ongoing guide (edition 19)". Journal of Algorithms. 8 (2): 285–303.