To factorize a small integer {{mvar|n}} using mental or pen-and-paper arithmetic, the simplest method is [[trial division]]: checking if the number is divisible by prime numbers {{math|2}}, {{math|3}}, {{math|5}}, and so on, up to the [[square root]] of {{mvar|n}}. For larger numbers, especially when using a computer, various more sophisticated factorization algorithms are more efficient. A prime factorization algorithm typically involves [[primality test|testing whether each factor is prime]] each time a factor is found.
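The trial-division procedure just described can be sketched in a few lines of Python (a minimal illustration; the function name is our own):

```python
def trial_division(n):
    """Factor n by dividing out candidate divisors 2, 3, 4, ... up to sqrt(n).

    Each divisor removed is prime by construction: any smaller prime
    factor of it would already have been divided out of n.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains after the loop is itself prime
        factors.append(n)
    return factors
```

For example, `trial_division(84)` yields the prime factorization `[2, 2, 3, 7]`; a prime input such as 97 is returned unchanged as `[97]`.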
When the numbers are sufficiently large, no efficient non-[[quantum computer|quantum]] integer factorization [[algorithm]] is known. However, it has not been proven that such an algorithm does not exist. The presumed [[Computational hardness assumption|difficulty]] of this problem is important for the algorithms used in [[cryptography]] such as [[RSA (cryptosystem)|RSA public-key encryption]] and the [[Digital Signature Algorithm|RSA digital signature]].<ref>{{Citation |last=Lenstra |first=Arjen K. |title=Integer Factoring |date=2011}}</ref>
Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are [[semiprime]]s, the product of two prime numbers. When they are both large, for instance more than two thousand [[bit]]s long, randomly chosen, and about the same size (but not too close together, which would permit efficient factorization by [[Fermat's factorization method]]), even the fastest prime factorization algorithms on the fastest classical computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any classical computer increases drastically.
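To see why factors that are too close together are dangerous, Fermat's method searches for a representation {{math|1=''n'' = ''a''<sup>2</sup> − ''b''<sup>2</sup> = (''a'' − ''b'')(''a'' + ''b'')}}, which succeeds almost immediately when the two factors are both near {{math|{{sqrt|''n''}}}}. A minimal sketch for odd {{mvar|n}} (function name is illustrative):

```python
import math

def fermat_factor(n):
    """Fermat's method: find a with a^2 - n a perfect square b^2,
    giving n = (a - b)(a + b). Fast when n is odd and its two
    factors are close to sqrt(n); slow otherwise.
    """
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1
```

For instance, `fermat_factor(10403)` recovers the close pair `(101, 103)` after a single step, whereas widely separated factors would require many iterations.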
Many cryptographic protocols are based on the presumed difficulty of factoring large composite integers or a related problem, such as the [[RSA problem]]. An algorithm that efficiently factors an arbitrary integer would render [[RSA (cryptosystem)|RSA]]-based public-key cryptography insecure.
== Prime decomposition ==
Among the {{math|''b''}}-bit numbers, the most difficult to factor in practice using existing algorithms are those [[semiprimes]] whose factors are of similar size. For this reason, these are the integers used in cryptographic applications.
In 2019, the 795-bit (240-digit) number [[RSA numbers#RSA-240|RSA-240]] was factored; a 768-bit (232-digit) RSA modulus had previously been factored in 2009 by a thirteen-researcher team.<ref>{{Citation
| last1 = Kleinjung | first1 = Thorsten
| last2 = Aoki | first2 = Kazumaro
| last3 = Franke | first3 = Jens
| last4 = Lenstra | first4 = Arjen K.
| last5 = Thomé | first5 = Emmanuel
| last6 = Bos | first6 = Joppe W.
| last7 = Gaudry | first7 = Pierrick
| last8 = Kruppa | first8 = Alexander
| last9 = Montgomery | first9 = Peter L.
| last10 = Osvik | first10 = Dag Arne
| last11 = te Riele | first11 = Herman J. J.
| last12 = Timofeev | first12 = Andrey
| last13 = Zimmermann | first13 = Paul
| editor-last = Rabin | editor-first = Tal
| contribution = Factorization of a 768-Bit RSA Modulus
| contribution-url = https://eprint.iacr.org/2010/006.pdf
| doi = 10.1007/978-3-642-14623-7_18
| pages = 333–350
| publisher = Springer
| series = Lecture Notes in Computer Science
| title = Advances in Cryptology - CRYPTO 2010, 30th Annual Cryptology Conference, Santa Barbara, CA, USA, August 15-19, 2010. Proceedings
| volume = 6223
| year = 2010| isbn = 978-3-642-14622-0
}}</ref>
The largest such semiprime yet factored was [[RSA numbers#RSA-250|RSA-250]], an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel [[Skylake (microarchitecture)#Xeon Gold (quad processor)|Xeon Gold]] 6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the [[general number field sieve]] run on hundreds of machines.
=== Time complexity ===
No [[algorithm]] has been published that can factor all integers in [[polynomial time]], that is, that can factor a {{math|''b''}}-bit number {{math|''n''}} in time {{math|[[Big O notation|O]](''b''<sup>''k''</sup>)}} for some constant {{math|''k''}}. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist.<ref>{{citation |last=Krantz |first=Steven G. |author-link=Steven G. Krantz |doi=10.1007/978-0-387-48744-1 |isbn=978-0-387-48908-7 |___location=New York |mr=2789493 |page=203 |publisher=Springer |title=The Proof is in the Pudding: The Changing Nature of Mathematical Proof |url=https://books.google.com/books?id=mMZBtxVZiQoC&pg=PA203 |year=2011}}</ref><ref>{{citation |last1=Arora |first1=Sanjeev |author1-link=Sanjeev Arora |last2=Barak |first2=Boaz |doi=10.1017/CBO9780511804090 |isbn=978-0-521-42426-4 |___location=Cambridge |mr=2500087 |page=230 |publisher=Cambridge University Press |title=Computational complexity |url=https://books.google.com/books?id=nGvI7cOuOOQC&pg=PA230 |year=2009|s2cid=215746906 }}</ref>
In particular, the best published asymptotic running time is for the [[general number field sieve]] (GNFS) algorithm, which, for a {{math|''b''}}-bit number {{mvar|n}}, is
: <math>\exp\left( \left(\left(\tfrac83\right)^\frac23 + o(1)\right)\left(\log n\right)^\frac13\left(\log \log n\right)^\frac23\right).</math>
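As a rough illustration, this heuristic cost can be evaluated numerically, ignoring the {{math|''o''(1)}} term (a back-of-the-envelope sketch, not a calibrated security estimate):

```python
import math

def gnfs_security_bits(b):
    """Base-2 logarithm of the heuristic GNFS cost
    exp(((8/3)^(2/3)) * (log n)^(1/3) * (log log n)^(2/3))
    for a b-bit number n, with the o(1) term dropped.
    """
    log_n = b * math.log(2)          # natural log of an n of b bits
    c = (8 / 3) ** (2 / 3)           # the GNFS constant, about 1.923
    return (c * log_n ** (1 / 3)
              * math.log(log_n) ** (2 / 3)
              / math.log(2))
```

This gives roughly 87 "bits of work" for a 1024-bit modulus and roughly 117 for a 2048-bit one, in line with commonly quoted security estimates for RSA keys of those sizes, and it shows concretely how sub-exponential growth still outpaces any polynomial in {{math|''b''}}.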
For current computers, GNFS is the best published algorithm for large {{math|''n''}} (more than about 400 bits). For a [[Quantum computing|quantum computer]], however, [[Peter Shor]] discovered an algorithm in 1994 that solves it in polynomial time. [[Shor's algorithm]] takes only {{math|O(''b''<sup>3</sup>)}} time and {{math|O(''b'')}} space on {{math|''b''}}-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by using [[Nuclear magnetic resonance|NMR]] techniques on molecules that provide seven qubits.<ref>
{{cite journal | doi = 10.1038/414883a | title = Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance | journal = [[Nature (journal)|Nature]] | volume = 414 | pages = 883–887 | year = 2001 | last = Vandersypen | first = Lieven M. K. | display-authors = etal | pmid = 11780055
| bibcode = 2001Natur.414..883V
| s2cid = 4400832
}}</ref>
{{Math theorem |For natural numbers <math>n</math> and <math>k</math>, does {{math|''n''}} have a factor smaller than {{math|''k''}}, other than 1? |name=Decision problem |note=Integer factorization }}
It is known to be in both [[NP (complexity)|NP]] and [[co-NP]], meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorization {{math|1=''n'' = ''d''({{sfrac|''n''|''d''}})}} with {{math|''d'' ≤ ''k''}}. An answer of "no" can be certified by exhibiting the factorization of {{math|''n''}} into distinct primes, all larger than {{math|''k''}}; one can verify their primality using the [[AKS primality test]], and then multiply them to obtain {{math|''n''}}. The [[fundamental theorem of arithmetic]] guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both [[UP (complexity)|UP]] and co-UP.<ref>
{{cite web | author = Lance Fortnow | title = Computational Complexity Blog: Complexity Class of the Week: Factoring | date = 2002-09-13 | url = http://weblog.fortnow.com/2002/09/complexity-class-of-week-factoring.html }}</ref> It is known to be in [[BQP]] because of Shor's algorithm. The problem is suspected to be outside all three of the complexity classes P, NP-complete,<ref>{{citation |last1=Goldreich |first1=Oded |author1-link=Oded Goldreich |last2=Wigderson |first2=Avi |author2-link=Avi Wigderson |editor1-last=Gowers |editor1-first=Timothy |editor1-link=Timothy Gowers |editor2-last=Barrow-Green |editor2-first=June |editor2-link=June Barrow-Green|editor3-last=Leader |editor3-first=Imre |editor3-link=Imre Leader |contribution=IV.20 Computational Complexity |isbn=978-0-691-11880-2 |___location=Princeton, New Jersey |mr=2467561 |pages=575–604 |publisher=Princeton University Press |title=The Princeton Companion to Mathematics |year=2008}}. See in particular [https://books.google.com/books?id=ZOfUsvemJDMC&pg=PA583 p. 583].</ref> and [[co-NP-complete]].
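Both certificates can be checked mechanically. The sketch below uses plain trial division as the primality check, standing in for the polynomial-time AKS test (adequate for small numbers, though trial division itself is not polynomial-time); function names are our own:

```python
def is_prime(p):
    """Primality by trial division; a stand-in for the polynomial-time
    AKS test, suitable only for small numbers in this sketch."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def verify_yes(n, k, d):
    """'Yes' certificate: a nontrivial divisor d of n with d <= k."""
    return 1 < d <= k and n % d == 0

def verify_no(n, k, primes):
    """'No' certificate: the complete prime factorization of n,
    with every prime factor strictly larger than k."""
    prod = 1
    for p in primes:
        if p <= k or not is_prime(p):
            return False
        prod *= p
    return prod == n
```

For example, with {{math|1=''n'' = 91}} and {{math|1=''k'' = 10}}, the divisor 7 certifies "yes"; with {{math|1=''n'' = 187}} and {{math|1=''k'' = 10}}, the factorization {{math|11 × 17}} certifies "no", since both primes exceed 10.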
A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms.
An important subclass of special-purpose factoring algorithms comprises the ''Category 1'' or ''First Category'' algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors.<ref name="Bressoud and Wagon">
{{cite book | author = [[David Bressoud]] and [[Stan Wagon]] | year = 2000 | title = A Course in Computational Number Theory | isbn = 978-1-930190-10-8 | pages = [https://archive.org/details/courseincomputat0000bres/page/168 168–69] | url-access = registration | url = https://archive.org/details/courseincomputat0000bres/page/168 }}</ref> For example, naive [[trial division]] is a Category 1 algorithm.
* [[Trial division]]
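A classic algorithm with this Category 1 behaviour is [[Pollard's rho algorithm]], whose expected running time grows roughly with the square root of the smallest prime factor. A minimal sketch using Floyd cycle-finding and the fixed polynomial {{math|''x''<sup>2</sup> + ''c''}} (it can fail for some inputs; practical implementations retry with a different constant {{mvar|c}}):

```python
import math

def pollard_rho(n, c=1):
    """Pollard's rho: returns a nontrivial factor of composite n,
    or None on failure. Expected work is about p^(1/2), where p is
    the smallest prime factor of n -- 'Category 1' behaviour.
    """
    x, y, d = 2, 2, 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    return d if d != n else None
```

For instance, `pollard_rho(8051)` finds the factor 97 of {{math|1=8051 = 83 × 97}} in three iterations.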
== External links ==
* Richard P. Brent, "Recent Progress and Prospects for Integer Factorisation Algorithms", ''Computing and Combinatorics'', 2000, pp. 3–22. [http://citeseer.ist.psu.edu/327036.html download]
* [[Manindra Agrawal]], Neeraj Kayal, Nitin Saxena, "PRIMES is in P", ''[[Annals of Mathematics]]'' 160(2): 781–793 (2004). [http://www.cse.iitk.ac.in/users/manindra/algebra/primality_v6.pdf August 2005 version PDF]