{{Short description|Decomposition of a number into a product}}
{{Redirect|Prime decomposition|the prime decomposition theorem for 3-manifolds|Prime decomposition of 3-manifolds}}
{{unsolved|computer science|Can integer factorization be solved in polynomial time on a classical computer?}}
In [[number theory]], '''integer factorization''' is the decomposition of a [[positive integer]] into a [[Multiplication|product]] of smaller positive integers. If these factors are further restricted to [[prime number]]s, the process is called '''prime factorization'''; by the [[fundamental theorem of arithmetic]], the result is unique up to the order of the factors.
To factorize a small integer {{mvar|n}} using mental or pen-and-paper arithmetic, the simplest method is [[trial division]]: checking if the number is divisible by prime numbers {{math|2}}, {{math|3}}, {{math|5}}, and so on, up to the [[square root]] of {{mvar|n}}. For larger numbers, especially when using a computer, various more sophisticated factorization algorithms are more efficient. A prime factorization algorithm typically involves [[primality test|testing whether each factor is prime]] each time a factor is found.
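For illustration, trial division can be sketched in a few lines of Python (a minimal version for small integers, not an optimized implementation):

```python
def trial_division(n):
    """Factor n by trial division: divide out 2, 3, 4, ... up to sqrt(n).

    Each factor removed is necessarily prime, because every smaller
    candidate divisor has already been divided out.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # the remaining cofactor is a prime larger than sqrt of the original n
        factors.append(n)
    return factors
```

For example, `trial_division(5040)` returns `[2, 2, 2, 2, 3, 3, 5, 7]`, the prime factorization 2<sup>4</sup> · 3<sup>2</sup> · 5 · 7 in increasing order.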
When the numbers are sufficiently large, no efficient non-[[quantum computer|quantum]] integer factorization [[algorithm]] is known. However, it has not been proven that no efficient algorithm exists.
Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are [[semiprime]]s, the product of two prime numbers. When they are both large, for instance more than two thousand [[bit]]s long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by [[Fermat's factorization method]]), even the fastest prime factorization algorithms on the fastest classical computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any classical computer increases drastically.
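The danger of choosing factors that are too close together can be seen in a short Python sketch of [[Fermat's factorization method]] (illustrative only; it assumes {{mvar|n}} is odd and composite):

```python
import math

def fermat_factor(n):
    """Fermat's method: find a with a*a - n a perfect square b*b,
    so that n = (a - b) * (a + b).

    This terminates quickly when n = p*q with p and q close together,
    because the search starts at a ~ sqrt(n) and the true a is (p+q)/2.
    """
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b  # the two factors of n
        a += 1
```

For instance, `fermat_factor(10403)` finds the nearby factors `(101, 103)` after a single step, which is why cryptographic moduli must not have primes this close together.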
Many cryptographic protocols are based on the presumed difficulty of factoring large composite integers or a related problem, such as the [[RSA problem]]. An algorithm that efficiently factors an arbitrary integer would render [[RSA (cryptosystem)|RSA]]-based [[public-key cryptography]] insecure.
== Prime decomposition ==
Among the {{math|''b''}}-bit numbers, the most difficult to factor in practice using existing algorithms are those [[semiprimes]] whose factors are of similar size. For this reason, these are the integers used in cryptographic applications.
In 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, [[Nadia Heninger]], Emmanuel Thomé and Paul Zimmermann factored a 240-digit (795-bit) number ([[RSA numbers#RSA-240|RSA-240]]) utilizing approximately 900 core-years of computing power, improving on the previous record, the factorization in 2009 of a 768-bit RSA modulus.<ref>{{cite book
| last1 = Kleinjung | first1 = Thorsten
| chapter-url = http://eprint.iacr.org/2010/006.pdf
| last2 = Aoki | first2 = Kazumaro
| chapter = Factorization of a 768-bit RSA modulus
| last3 = Franke | first3 = Jens
| last4 = Lenstra | first4 = Arjen K.
| last5 = Thomé | first5 = Emmanuel
| last6 = Bos | first6 = Joppe W.
| last7 = Gaudry | first7 = Pierrick
| last8 = Kruppa | first8 = Alexander
| last9 = Montgomery | first9 = Peter L.
| last10 = Osvik | first10 = Dag Arne
| last11 = te Riele | first11 = Herman J. J.
| last12 = Timofeev | first12 = Andrey
| last13 = Zimmermann | first13 = Paul
| editor-last = Rabin | editor-first = Tal
| doi = 10.1007/978-3-642-14623-7_18
| pages = 333–350
| publisher = Springer
| series = Lecture Notes in Computer Science
| title = Advances in Cryptology - CRYPTO 2010, 30th Annual Cryptology Conference, Santa Barbara, CA, USA, August 15-19, 2010. Proceedings
| volume = 6223
| year = 2010| isbn = 978-3-642-14622-0
}}</ref>
The largest such semiprime yet factored was [[RSA numbers#RSA-250|RSA-250]], an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel [[Skylake (microarchitecture)#Xeon Gold (quad processor)|Xeon Gold]] 6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the [[general number field sieve]] run on hundreds of machines.
=== Time complexity ===
No [[algorithm]] has been published that can factor all integers in [[polynomial time]], that is, that can factor a {{math|''b''}}-bit number {{math|''n''}} in time {{math|[[Big O notation|O]](''b''<sup>''k''</sup>)}} for some constant {{math|''k''}}. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist.
There are published algorithms that are faster than {{math|O((1 + ''ε'')<sup>''b''</sup>)}} for all positive {{math|''ε''}}, that is, [[Time complexity#Sub-exponential time|sub-exponential]]. {{As of|2022}}, the algorithm with best theoretical asymptotic running time is the [[general number field sieve]] (GNFS), first published in 1993,<ref>{{cite book |last1=Buhler |first1=J. P. |last2=Lenstra |first2=H. W. Jr. |last3=Pomerance |first3=Carl |chapter=Factoring integers with the number field sieve |title=The development of the number field sieve |date=1993 |publisher=Springer |isbn=978-3-540-57013-4 |pages=50–94 |doi=10.1007/BFb0091539 |hdl=1887/2149 |series=Lecture Notes in Mathematics |volume=1554 |url=https://doi.org/10.1007/BFb0091539 |access-date=12 March 2021 |language=English}}</ref> running on a {{math|''b''}}-bit number {{math|''n''}} in time:
: <math>\exp\left( \left(\left(\tfrac83\right)^\frac23 + o(1)\right)\left(\log n\right)^\frac13\left(\log \log n\right)^\frac23\right).</math>
For current computers, GNFS is the best published algorithm for large {{math|''n''}} (more than about 400 bits). For a [[Quantum computing|quantum computer]], however, [[Peter Shor]] discovered an algorithm in 1994 that solves it in polynomial time, known as [[Shor's algorithm]]. When a quantum computer with seven [[qubit]]s was built in 2001, it was used to run Shor's algorithm, factoring 15 into 3 × 5 by [[nuclear magnetic resonance]] techniques.<ref>
{{cite journal
 | last1 = Vandersypen | first1 = Lieven M. K.
 | last2 = Steffen | first2 = Matthias
 | last3 = Breyta | first3 = Gregory
 | last4 = Yannoni | first4 = Costantino S.
 | last5 = Sherwood | first5 = Mark H.
 | last6 = Chuang | first6 = Isaac L.
 | title = Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance
 | journal = Nature
 | volume = 414
 | pages = 883–887
 | year = 2001
 | doi = 10.1038/414883a
 | arxiv = quant-ph/0112176
}}</ref>
In order to talk about [[complexity class|complexity classes]] such as P, NP, and co-NP, the problem has to be stated as a [[decision problem]].

{{Math theorem |Given natural numbers <math>n</math> and <math>k</math>, does {{math|''n''}} have a factor smaller than {{math|''k''}} besides 1? |name=Decision problem |note=Integer factorization }}

It is not known exactly which [[complexity class]]es contain the decision version of the integer factorization problem. It is known to be in both [[NP (complexity)|NP]] and [[co-NP]], meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorization {{math|1=''n'' = ''d''({{sfrac|''n''|''d''}})}} with {{math|''d'' ≤ ''k''}}. An answer of "no" can be certified by exhibiting the factorization of {{math|''n''}} into distinct primes, all larger than {{math|''k''}}; one can verify their primality using the [[AKS primality test]], and then multiply them to obtain {{math|''n''}}. The [[fundamental theorem of arithmetic]] guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both [[UP (complexity)|UP]] and co-UP.<ref>
{{cite web
| author = Lance Fortnow
| date = 2002-09-13
| url = https://blog.computationalcomplexity.org/2002/09/complexity-class-of-week-factoring.html
| title = Computational Complexity Blog: Complexity Class of the Week: Factoring
}}</ref> It is known to be in [[BQP]] because of Shor's algorithm.
The problem is suspected to be outside all three of the complexity classes P, NP-complete,<ref>{{citation |last1=Goldreich |first1=Oded |author1-link=Oded Goldreich |last2=Wigderson |first2=Avi |author2-link=Avi Wigderson |editor1-last=Gowers |editor1-first=Timothy |editor1-link=Timothy Gowers |editor2-last=Barrow-Green |editor2-first=June |editor2-link=June Barrow-Green|editor3-last=Leader |editor3-first=Imre |editor3-link=Imre Leader |contribution=IV.20 Computational Complexity |isbn=978-0-691-11880-2 |___location=Princeton, New Jersey |mr=2467561 |pages=575–604 |publisher=Princeton University Press |title=The Princeton Companion to Mathematics |year=2008}}. See in particular [https://books.google.com/books?id=ZOfUsvemJDMC&pg=PA583 p. 583].</ref> and [[co-NP-complete]].
It is therefore a candidate for the [[NP-intermediate]] complexity class.
In contrast, the decision problem "Is {{math|''n''}} a composite number?" (or equivalently: "Is {{math|''n''}} a prime number?") appears to be much easier than the problem of specifying factors of {{math|''n''}}. The composite/prime problem can be solved in polynomial time (in the number {{math|''b''}} of digits of {{math|''n''}}) with the [[AKS primality test]]. In addition, there are several [[randomized algorithm|probabilistic algorithm]]s that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of [[primality test]]ing is a crucial part of the [[RSA (algorithm)|RSA]] algorithm, as it is necessary to find large prime numbers to start with.
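As an illustration of such a probabilistic test, the standard Miller–Rabin algorithm (a randomized test, unlike the deterministic AKS test) can be sketched in Python:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.

    A composite n passes a single random round with probability at
    most 1/4, so the overall error probability is at most 4**(-rounds).
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # quick check against small primes
        if n % p == 0:
            return n == p
    # write n - 1 = 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness that n is composite
    return True
```

Note that the test reports compositeness with certainty but primality only with overwhelming probability, which is exactly the trade-off described above.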
in [[Big O notation|little-o]] and [[L-notation]].
Some examples of those algorithms are the [[elliptic curve method]] and the [[quadratic sieve]].
Another such algorithm is the '''class group relations method''' proposed by Schnorr,<ref name=1982-schnorr>{{cite journal | last=Schnorr|first=Claus P.|year=1982|title=Refined analysis and improvements on some factoring algorithms|journal=Journal of Algorithms|volume=3|pages=101–127 | doi=10.1016/0196-6774(82)90012-8 | issue=2 | mr=0657269|url=http://www.dtic.mil/get-tr-doc/pdf?AD=ADA096348|archive-url=https://web.archive.org/web/20170924140543/http://www.dtic.mil/get-tr-doc/pdf?AD=ADA096348|url-status=dead|archive-date=September 24, 2017}}</ref> Seysen,<ref name=1987-seysen>{{cite journal| last=Seysen|first=Martin|year=1987|title=A probabilistic factorization algorithm with quadratic forms of negative discriminant|journal=Mathematics of Computation|volume=48|pages=757–780| doi=10.1090/S0025-5718-1987-0878705-X| issue=178 | mr=0878705|doi-access=free}}</ref> and Lenstra,<ref name=1988-lenstra >{{cite journal|last=Lenstra|first=Arjen K|year=1988|title=Fast and rigorous factorization under the generalized Riemann hypothesis|journal=Indagationes Mathematicae|volume=50|issue=4|pages=443–454|doi=10.1016/S1385-7258(88)80022-2|url=https://infoscience.epfl.ch/record/164491/files/nscan9.PDF }}</ref> which they proved only assuming the unproved [[generalized Riemann hypothesis]].
== Rigorous running time ==
=== Schnorr–Seysen–Lenstra algorithm ===
Given an integer {{mvar|n}} to be factored, where {{mvar|n}} is an odd positive integer greater than a certain constant, the discriminant {{math|Δ}} is chosen as a negative multiple of {{mvar|n}}, {{math|1=Δ = −''dn''}}, where {{mvar|d}} is some positive multiplier. The algorithm expects that for some {{mvar|d}} there exist enough [[smooth number|smooth]] forms in the class group {{math|''G''<sub>Δ</sub>}}.
Denote by {{math|''P''<sub>Δ</sub>}} the set of all primes {{mvar|q}} with [[Kronecker symbol]] <math>\left(\tfrac{\Delta}{q}\right)=1</math>. By constructing a set of [[generating set of a group|generators]] of {{math|''G''<sub>Δ</sub>}} and prime forms {{math|''f''<sub>''q''</sub>}} of {{math|''G''<sub>Δ</sub>}} with {{mvar|q}} in {{math|''P''<sub>Δ</sub>}}, a sequence of relations between the set of generators and the prime forms can be produced.
The size of {{mvar|q}} can be bounded by {{math|''c''<sub>0</sub>(log {{abs|Δ}})<sup>2</sup>}} for some constant {{math|''c''<sub>0</sub>}}.
The relation that will be used is a relation between the product of powers that is equal to the [[group (mathematics)|neutral element]] of {{math|''G''<sub>Δ</sub>}}. These relations will be used to construct a so-called ambiguous form of {{math|''G''<sub>Δ</sub>}}, which is an element of {{math|''G''<sub>Δ</sub>}} of order dividing 2. By calculating the corresponding factorization of {{math|Δ}} and by taking a [[Greatest common divisor|gcd]], this ambiguous form provides the complete prime factorization of {{mvar|n}}. This algorithm has these main steps:
Let {{mvar|n}} be the number to be factored.
{{ordered list
| Let {{math|Δ}} be a negative integer with {{math|1=Δ = −''dn''}}, where {{mvar|d}} is a multiplier and {{math|Δ}} is the negative discriminant of some quadratic form.
| Take the {{mvar|t}} first primes {{math|1=''p''<sub>1</sub> = 2, ''p''<sub>2</sub> = 3, ''p''<sub>3</sub> = 5, ..., ''p''<sub>''t''</sub>}}, for some {{math|''t'' ∈ '''N'''}}.
| Let {{math|''f''<sub>''q''</sub>}} be a random prime form of {{math|''G''<sub>Δ</sub>}} with {{math|1=({{sfrac|Δ|''q''}}) = 1}}.
| Find a generating set {{mvar|X}} of {{math|''G''<sub>Δ</sub>}}.
| Collect a sequence of relations between the set {{mvar|X}} and the prime forms {{math|''f''<sub>''q''</sub>}} with {{math|''q'' ∈ ''P''<sub>Δ</sub>}}, satisfying:
: <math>\left(\prod_{x \in X} x^{r(x)}\right) \cdot \left(\prod_{q \in P_\Delta} f_q^{t(q)}\right) = 1.</math>
| Construct an ambiguous form {{math|(''a'', ''b'', ''c'')}} that is an element {{math|''f'' ∈ ''G''<sub>Δ</sub>}} of order dividing 2 to obtain a coprime factorization of the largest odd divisor of {{math|Δ}} in which {{math|1=Δ = −4''ac''}} or {{math|1=Δ = ''a''(''a'' − 4''c'')}} or {{math|1=Δ = (''b'' − 2''a'')(''b'' + 2''a'')}}.
| If the ambiguous form provides a factorization of {{mvar|n}}, then stop; otherwise, find another ambiguous form until the factorization of {{mvar|n}} is found. In order to prevent useless ambiguous forms from being generated, build up the [[Sylow theorems|2-Sylow group]] of {{math|''G''<sub>Δ</sub>}}.
}}
To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm, such as trial division and the [[Adleman–Pomerance–Rumely primality test|Jacobi sum test]].
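The final gcd step, which turns an ambiguous form into a divisor of {{mvar|n}}, can be illustrated with a hypothetical Python helper (a sketch only; producing ambiguous forms in the first place requires the class-group computations described above):

```python
from math import gcd

def factor_from_ambiguous_form(n, a, b, c):
    """Extract a nontrivial factor of n from an ambiguous form (a, b, c).

    An ambiguous form satisfies b = 0, a = b, or a = c; in each case
    its discriminant D = b*b - 4*a*c splits into two integer factors
    (D = -4ac, D = a(a - 4c), or D = (b - 2a)(b + 2a)).  Since D is a
    multiple of n, a gcd of n with one of those factors may reveal a
    nontrivial divisor of n.
    """
    if b == 0:            # D = -4ac
        candidates = (a, c)
    elif a == b:          # D = a(a - 4c)
        candidates = (a, a - 4 * c)
    elif a == c:          # D = (b - 2a)(b + 2a)
        candidates = (b - 2 * a, b + 2 * a)
    else:
        raise ValueError("form is not ambiguous")
    for m in candidates:
        g = gcd(n, abs(m))
        if 1 < g < n:
            return g      # a nontrivial factor of n
    return None           # this ambiguous form was useless
```

For example, the form {{math|(3, 0, 7)}} has discriminant {{math|1=−84 = −4·21}}, and the helper recovers the factor 3 of {{math|1=''n'' = 21}} from it.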
* [[Multiplicative partition]]
* [[p-adic valuation|{{mvar|p}}-adic valuation]]
== Notes ==
== External links ==
* [https://sourceforge.net/projects/msieve/ msieve] – SIQS and NFS – has helped complete some of the largest public factorizations known
* Richard P. Brent, "Recent Progress and Prospects for Integer Factorisation Algorithms", ''Computing and Combinatorics'', 2000, pp. 3–22. [http://citeseer.ist.psu.edu/327036.html download]
* [[Manindra Agrawal]], Neeraj Kayal, Nitin Saxena, "PRIMES is in P." Annals of Mathematics 160(2): 781–793 (2004). [http://www.cse.iitk.ac.in/users/manindra/algebra/primality_v6.pdf August 2005 version PDF]