In [[computer science]], '''lattice problems''' are a class of [[Mathematical optimization|optimization]] problems related to mathematical objects called [[Lattice (group)|lattices]]. The conjectured intractability of such problems is central to the construction of secure [[Lattice-based cryptography|lattice-based cryptosystems]]: lattice problems are an example of [[NP-hardness|NP-hard]] problems which have been shown to be [[Computational hardness assumption|average-case hard]], providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against [[quantum computers]]. For applications in such [[cryptosystems]], lattices over [[vector space]]s (often <math>\mathbb{Q}^n</math>) or [[free module]]s (often <math>\mathbb{Z}^n</math>) are generally considered.
For all the problems below, assume that we are given (in addition to other more specific inputs) a [[Basis (linear algebra)|basis]] for the vector space ''V'' and a [[Norm (mathematics)|norm]] ''N''. The norm usually considered is the Euclidean norm [[Norm (mathematics)#Euclidean norm|''L''<sup>2</sup>]]. However, other norms (such as [[Norm (mathematics)#p-norm|''L''<sup>p</sup>]]) are also considered and show up in a variety of results.<ref>{{cite journal |author-link=Subhash Khot |first=Subhash |last=Khot |title=Hardness of approximating the shortest vector problem in lattices |journal=J. ACM |volume=52 |issue=5 |year=2005 |pages=789–808 |doi=10.1145/1089023.1089027 |s2cid=13438130 }}</ref> Let <math>\lambda(L)</math> denote the length of the shortest non-zero vector in the lattice ''L'', that is, <math>\lambda(L) = \min_{v \in L \setminus \{\mathbf{0}\}} N(v)</math>.
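As a concrete (and purely illustrative) example, <math>\lambda(L)</math> can be found for a small lattice by exhaustive search: enumerate the integer combinations of the basis vectors with coefficients in a bounded box and keep the shortest non-zero result. The sketch below assumes the basis is given as the rows of an integer matrix, and that a coefficient bound of 10 suffices, which holds for this toy basis but not in general; the search is exponential in the dimension, so this is not a practical algorithm.

<syntaxhighlight lang="python">
import itertools
import numpy as np

def lambda1_bruteforce(B, coeff_bound=10):
    """Shortest non-zero vector of the lattice spanned by the rows of B,
    searching the (2*coeff_bound + 1)^n coefficient vectors in the box
    [-coeff_bound, coeff_bound]^n. Exponential in the dimension, and the
    shortest vector's coefficients need not be small in general, so this
    is only an illustration of the definition of lambda(L)."""
    n = B.shape[0]
    best = None
    for x in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if not any(x):
            continue  # skip the zero vector, excluded from the minimum
        v = np.array(x) @ B
        if best is None or v @ v < best @ best:
            best = v
    return best

# Toy lattice: the rows (5, 5) and (9, 8) also generate (1, 2) and (2, -1),
# so lambda(L) = sqrt(5) even though both basis vectors are much longer.
B = np.array([[5, 5], [9, 8]])
v = lambda1_bruteforce(B)
print(v, np.linalg.norm(v))  # e.g. (1, 2) or (2, -1), length ~2.236
</syntaxhighlight>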
The exact version of the problem is only known to be [[NP-hard]] under randomized reductions.<ref name="ajtai" /><ref name="Ajtai 1998 10–19">{{cite book |first=Miklós |last=Ajtai |chapter=The shortest vector problem in ''L<sub>2</sub>'' is ''NP''-hard for randomized reductions |title=Proceedings of the thirtieth annual ACM symposium on Theory of computing |___location=Dallas, Texas, United States |publisher=ACM |year=1998 |pages=10–19 |doi=10.1145/276698.276705 |isbn=9780897919623 |s2cid=4503998 |chapter-url=http://portal.acm.org/citation.cfm?id=276705 }}</ref>
By contrast, the corresponding problem with respect to the [[uniform norm]] is known to be [[NP-hard]].
=== Algorithms for the Euclidean norm ===
To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring superexponential time (<math>2^{\omega(n)}</math>) and <math>\operatorname{poly}(n)</math> memory, and algorithms requiring both exponential time and space (<math>2^{\Theta(n)}</math>) in the lattice dimension. The former class of algorithms most notably includes lattice enumeration<ref>{{Cite book|last=Kannan|first=Ravi|title=Proceedings of the fifteenth annual ACM symposium on Theory of computing - STOC '83|date=1983|chapter=Improved Algorithms for Integer Programming and Related Lattice Problems|journal=Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing|series=STOC '83|___location=New York, NY, USA|publisher=ACM|pages=193–206|doi=10.1145/800061.808749|isbn=978-0897910996|s2cid=18181112}}</ref><ref name=":0" /><ref>{{Cite book|last1=Gama|first1=Nicolas|last2=Nguyen|first2=Phong Q.|last3=Regev|first3=Oded|date=2010-05-30|title=Lattice Enumeration Using Extreme Pruning|journal=Advances in Cryptology – EUROCRYPT 2010|volume=6110|language=en|publisher=Springer, Berlin, Heidelberg|pages=257–278|doi=10.1007/978-3-642-13190-5_13|series=Lecture Notes in Computer Science|isbn=978-3-642-13189-9}}</ref> and random sampling reduction,<ref>{{Cite book|last=Schnorr|first=Claus Peter|date=2003-02-27|chapter=Lattice Reduction by Random Sampling and Birthday Methods|journal=Stacs 2003|volume=2607|language=en|publisher=Springer, Berlin, Heidelberg|pages=145–156|doi=10.1007/3-540-36494-3_14|isbn=978-3540364948|series=Lecture Notes in Computer Science|citeseerx=10.1.1.137.4293}}</ref><ref>{{Cite book|last1=Aono|first1=Yoshinori|last2=Nguyen|first2=Phong Q.|date=2017-04-30|title=Random Sampling Revisited: Lattice Enumeration with Discrete Pruning|journal=Advances in Cryptology – EUROCRYPT 2017|volume=10211|language=en|publisher=Springer, Cham|pages=65–102|doi=10.1007/978-3-319-56614-6_3|series=Lecture Notes in Computer Science|isbn=978-3-319-56613-9|url=https://hal.inria.fr/hal-01502051/file/Eprint-155.pdf}}</ref> while the latter includes lattice sieving,<ref name=":1" /><ref>{{Cite book|last1=Micciancio|first1=Daniele|last2=Voulgaris|first2=Panagiotis|date=2010|title=Faster Exponential Time Algorithms for the Shortest Vector Problem|url=http://dl.acm.org/citation.cfm?id=1873601.1873720|journal=Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms|series=SODA '10|___location=Philadelphia, PA, USA|publisher=Society for Industrial and Applied Mathematics|pages=1468–1480}}</ref> computing the Voronoi cell of the lattice, and discrete Gaussian sampling.
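As a rough illustration of the sieving idea, the following sketch maintains a growing database of lattice vectors and shortens each new random sample against it. It is a heavily simplified, heuristic caricature of a Gauss sieve: the coefficient bound, the iteration count, and the omission of re-reducing stored vectors against new samples are all simplifications, and real sieves sample from a discrete Gaussian and use far more careful data structures.

<syntaxhighlight lang="python">
import numpy as np

def sample_lattice_vector(B, rng, coeff_bound=5):
    # Random non-zero integer combination of the basis rows.
    # (Real sieves sample from a discrete Gaussian instead.)
    while True:
        x = rng.integers(-coeff_bound, coeff_bound + 1, size=B.shape[0])
        if x.any():
            return x @ B

def toy_gauss_sieve(B, iterations=3000, seed=1):
    """Heuristic sketch of a Gauss-sieve-style algorithm: keep a database
    of lattice vectors and pairwise-reduce each new sample against it.
    The database grows over time, reflecting the exponential-space
    character of sieving algorithms."""
    rng = np.random.default_rng(seed)
    db = []
    for _ in range(iterations):
        v = sample_lattice_vector(B, rng)
        reduced = True
        while reduced:  # shorten v using database vectors; squared norms
            reduced = False  # strictly decrease, so this terminates
            for u in db:
                for w in (v - u, v + u):
                    if w.any() and w @ w < v @ v:
                        v, reduced = w, True
        if v.any():
            db.append(v)
    return min(db, key=lambda u: u @ u)  # shortest vector found

B = np.array([[5, 5], [9, 8]])  # same toy lattice as above
print(toy_gauss_sieve(B))       # likely a vector of length sqrt(5)
</syntaxhighlight>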
To solve the γ-approximation version SVP<sub>γ</sub> for <math>\gamma > 1</math> under the Euclidean norm, the best known approaches are based on [[lattice basis reduction]]. For large <math>\gamma = 2^{\Omega(n)}</math>, the [[Lenstra–Lenstra–Lovász lattice basis reduction algorithm|Lenstra–Lenstra–Lovász (LLL) algorithm]] can find a solution in time polynomial in the lattice dimension. For smaller values of <math>\gamma</math>, the Block Korkine–Zolotarev (BKZ) algorithm<ref>{{Cite journal|last=Schnorr|first=C. P.|date=1987-01-01|title=A hierarchy of polynomial time lattice basis reduction algorithms|journal=Theoretical Computer Science|volume=53|issue=2|pages=201–224|doi=10.1016/0304-3975(87)90064-8|doi-access=free}}</ref><ref>{{Cite journal|last1=Schnorr|first1=C. P.|last2=Euchner|first2=M.|date=1994-08-01|title=Lattice basis reduction: Improved practical algorithms and solving subset sum problems|journal=Mathematical Programming|language=en|volume=66|issue=1–3|pages=181–199|doi=10.1007/bf01581144|s2cid=15386054|issn=0025-5610|url=http://publikationen.ub.uni-frankfurt.de/files/4259/schnorr.pdf}}</ref><ref>{{Cite book|last1=Chen|first1=Yuanmi|last2=Nguyen|first2=Phong Q.|date=2011-12-04|title=BKZ 2.0: Better Lattice Security Estimates|journal=Advances in Cryptology – ASIACRYPT 2011|volume=7073|language=en|publisher=Springer, Berlin, Heidelberg|pages=1–20|doi=10.1007/978-3-642-25385-0_1|series=Lecture Notes in Computer Science|isbn=978-3-642-25384-3}}</ref> is commonly used, where the input to the algorithm (the blocksize <math>\beta</math>) determines the time complexity and output quality: for large approximation factors <math>\gamma</math>, a small block size <math>\beta</math> suffices, and the algorithm terminates quickly. For small <math>\gamma</math>, larger <math>\beta</math> are needed to find sufficiently short lattice vectors, and the algorithm takes longer to find a solution. The BKZ algorithm internally uses an exact SVP algorithm as a subroutine (running in lattices of dimension at most <math>\beta</math>), and its overall complexity is closely related to the costs of these SVP calls in dimension <math>\beta</math>.
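In practice, such reductions are usually run through dedicated libraries rather than implemented from scratch. The sketch below assumes the interface of the fpylll Python library (the function names, the q-ary random basis generator, and the parameter choices reflect recent versions of that library and are otherwise illustrative): LLL provides a fast, coarse reduction, and BKZ with a moderate block size then finds shorter vectors at a higher cost.

<syntaxhighlight lang="python">
# Sketch assuming the fpylll library (https://github.com/fplll/fpylll).
from fpylll import IntegerMatrix, LLL, BKZ

# Random 60-dimensional q-ary lattice basis (parameters are illustrative).
A = IntegerMatrix.random(60, "qary", k=30, bits=20)

LLL.reduction(A)                             # fast, coarse approximation
BKZ.reduction(A, BKZ.Param(block_size=20))   # slower, shorter vectors

# The first row of the reduced basis is a short lattice vector; its quality
# improves (and the runtime grows) as block_size increases.
print(A[0])
</syntaxhighlight>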
[[Average case]] hardness of problems forms a basis for proofs of security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems to base cryptographic schemes on. Moreover, the worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against [[quantum computers]].
The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. [[Lattice reduction]] algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly [[orthogonal]] vectors. The [[Lenstra–Lenstra–Lovász lattice basis reduction algorithm]] (LLL) was an early efficient algorithm for this problem which could output an almost reduced lattice basis in polynomial time.<ref>{{cite journal |first1=A. K. |last1=Lenstra |first2=H. W., Jr. |last2=Lenstra |first3=L. |last3=Lovász |title=Factoring polynomials with rational coefficients |journal=[[Mathematische Annalen|Math. Ann.]] |volume=261 |year=1982 |issue=4 |pages=515–534 |doi=10.1007/BF01457454 |s2cid=5701340 |url=https://www.math.leidenuniv.nl/~hwl/PUBLICATIONS/1982f/art.pdf |archive-url=https://web.archive.org/web/20110717132024/https://www.math.leidenuniv.nl/~hwl/PUBLICATIONS/1982f/art.pdf |archive-date=2011-07-17}}</ref> This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice. However, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of [[Miklós Ajtai|Ajtai]].<ref name="ajtai">{{cite book |first=M. |last=Ajtai |chapter=Generating hard instances of lattice problems |title=Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing |___location=Philadelphia, Pennsylvania, United States |publisher=ACM |year=1996 |pages=99–108}}</ref>
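To make the reduction idea concrete, the following is a minimal, unoptimized sketch of LLL with the standard parameter <math>\delta = 3/4</math>, using exact rational arithmetic. Production implementations use floating-point arithmetic and update the Gram–Schmidt data incrementally rather than recomputing it after every change, as is done here for clarity.

<syntaxhighlight lang="python">
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B (no normalization).
    Returns the orthogonal vectors B* and the projection coefficients mu."""
    n = len(B)
    Bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        b = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
            b = [bi - mu[i][j] * bj for bi, bj in zip(b, Bstar[j])]
        Bstar.append(b)
    return Bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction (Lenstra-Lenstra-Lovasz, 1982). Returns a
    delta-reduced basis whose first vector approximates the shortest
    lattice vector within an exponential factor."""
    B = [[Fraction(x) for x in row] for row in basis]
    n = len(B)
    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
                Bstar, mu = gram_schmidt(B)  # recomputed for clarity
        # Lovasz condition: is b*_k long enough relative to b*_{k-1}?
        if dot(Bstar[k], Bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]  # swap and step back
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Classic example: the reduced basis consists of short, nearly orthogonal
# vectors such as (0, 1, 0) and (1, 0, 1).
print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
</syntaxhighlight>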
In his seminal papers, Ajtai showed that the SVP problem is NP-hard under randomized reductions and discovered some connections between the worst-case complexity and [[average-case complexity]] of some lattice problems.<ref name="ajtai" /><ref name="Ajtai 1998 10–19"/> Building on these results, Ajtai and [[Cynthia Dwork|Dwork]] created a [[public-key cryptosystem]] whose security could be proven using only the worst-case hardness of a certain version of SVP,<ref>{{cite book |first1=Miklós |last1=Ajtai |first2=Cynthia |last2=Dwork |chapter=A public-key cryptosystem with worst-case/average-case equivalence |title=Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing |___location=El Paso, Texas, United States |publisher=ACM |year=1997 |pages=284–293}}</ref> thus making it the first result to have used worst-case hardness to create secure systems.
==See also==