{{Short description|Algorithmic technique using hashing}}
In [[computer science]], '''locality-sensitive hashing''' ('''LSH''') is a [[fuzzy hashing]] technique that hashes similar input items into the same "buckets" with high probability.<ref name="MOMD">{{cite web|url=http://infolab.stanford.edu/~ullman/mmds.html|title=Mining of Massive Datasets, Ch. 3.|last1=Rajaraman|first1=A.|last2=Ullman|first2=J.|author2-link=Jeffrey Ullman|year=2010}}</ref> The number of buckets is much smaller than the universe of possible input items.<ref name="MOMD" /> Since similar items end up in the same buckets, this technique can be used for [[Cluster analysis|data clustering]] and [[nearest neighbor search]]. It differs from [[Hash function|conventional hashing techniques]] in that [[hash collision]]s are maximized, not minimized. Alternatively, the technique can be seen as a way to [[dimension reduction|reduce the dimensionality]] of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.
 
Hashing-based approximate [[nearest-neighbor search]] algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).<ref>{{cite conference |last1=Zhao |first1=Kang |last2=Lu |first2=Hongtao |last3=Mei |first3=Jincheng |title=Locality Preserving Hashing |conference=AAAI Conference on Artificial Intelligence | volume=28 | year=2014 |url=https://ojs.aaai.org/index.php/AAAI/article/view/9133/8992 |pages=2874–2880}}</ref><ref>{{cite book |last1=Tsai |first1=Yi-Hsuan |last2=Yang |first2=Ming-Hsuan |title=2014 IEEE International Conference on Image Processing (ICIP) |chapter=Locality preserving hashing |date=October 2014 |pages=2988–2992 |doi=10.1109/ICIP.2014.7025604 |isbn=978-1-4799-5751-4 |s2cid=8024458 |issn=1522-4880}}</ref>
 
Locality-preserving hashing was initially devised as a way to facilitate [[Pipeline (computing)|data pipelining]] in implementations of [[Parallel RAM|massively parallel]] algorithms that use [[Routing#Path_selection|randomized routing]] and [[universal hashing]] to reduce memory [[Resource contention|contention]] and [[network congestion]].<ref name=Chin1991>{{cite thesis |last=Chin |first=Andrew |date=1991 |title=Complexity Issues in General Purpose Parallel Computing |pages=87–95 |type=DPhil |publisher=University of Oxford |url=https://perma.cc/E47H-WCVP}}</ref><ref name=Chin1994>{{cite journal |last1=Chin |first1=Andrew |date=1994 |title=Locality-Preserving Hash Functions for General Purpose Parallel Computation |url=http://unclaw.com/chin/scholarship/hashfunctions.pdf |journal=Algorithmica |volume=12 |issue=2–3 |pages=170–181 |doi=10.1007/BF01185209|s2cid=18108051 }}</ref>

==Definitions==

A finite family <math> \mathcal F</math> of functions <math>h\colon M \to S</math> is defined to be an ''LSH family''<ref name=MOMD /><ref name=GIM1999>{{cite journal
| author1 = Gionis, A.
| author2-link = Piotr Indyk | author2 = Indyk, P. | author3-link = Rajeev Motwani | author3 = Motwani, R.
| year = 1999
| title = Similarity Search in High Dimensions via Hashing
| url = http://people.csail.mit.edu/indyk/vldb99.ps
| journal = Proceedings of the 25th Very Large Database (VLDB) Conference
}}</ref><ref name=IndykMotwani98>{{cite conference
| author1 = [[Piotr Indyk|Indyk, Piotr]]
| author2 = [[Rajeev Motwani|Motwani, Rajeev]]
| year = 1998
| contribution = Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality
| contribution-url = http://people.csail.mit.edu/indyk/nndraft.ps
| title = Proceedings of 30th Symposium on Theory of Computing
| title-link = Symposium on Theory of Computing
}}</ref>
for
* a [[metric space]] <math>\mathcal M =(M, d)</math>,
* a threshold <math>r>0</math>,
* an approximation factor <math>c>1</math>,
* and probabilities <math>p_1 > p_2</math>

if it satisfies the following condition. For any two points <math>a, b \in M</math> and a hash function <math>h</math> chosen uniformly at random from <math>\mathcal F</math>:
* If <math>d(a,b) \le r</math>, then <math>h(a)=h(b)</math> (i.e., {{mvar|a}} and {{mvar|b}} collide) with probability at least <math>p_1</math>,
* If <math>d(a,b) \ge cr</math>, then <math>h(a)=h(b)</math> with probability at most <math>p_2</math>.

Such a family <math>\mathcal F</math> is called <math>(r,cr,p_1,p_2)</math>-sensitive.

=== LSH with respect to a similarity measure ===
Alternatively<ref name=Charikar2002>{{cite conference
| last = Charikar|first= Moses S.|author-link=Moses Charikar
| year = 2002
| contribution = Similarity Estimation Techniques from Rounding Algorithms
| title = Proceedings of the 34th Annual ACM Symposium on Theory of Computing 2002
| pages = 380–388
| url = http://portal.acm.org/citation.cfm?id=509965
| accessdate = 2007-12-21
| doi = 10.1145/509907.509965
|isbn= 1-58113-495-9| citeseerx = 10.1.1.147.4064}}</ref> it is possible to define an LSH family on a universe of items {{mvar|U}} endowed with a [[similarity]] function <math>\phi \colon U \times U \to [0,1]</math>. In this setting, a LSH scheme is a family of [[Hash function|hash function]]s {{mvar|H}} coupled with a [[probability distribution]] {{mvar|D}} over {{mvar|H}} such that a function <math>h \in H</math> chosen according to {{mvar|D}} satisfies the property that <math>Pr [h(a) = h(b)] = \phi(a,b)</math> for each <math>a,b \in U</math>.
 
===Amplification===
 
Given a <math>(d_1, d_2, p_1, p_2)</math>-sensitive family <math>\mathcal F</math>, we can construct new families <math>\mathcal G</math> by either the AND-construction or OR-construction of <math>\mathcal F</math>.<ref name=MOMD />
 
To create an AND-construction, we define a new family <math>\mathcal G</math> of hash functions {{mvar|g}}, where each function {{mvar|g}} is constructed from {{mvar|k}} random functions <math>h_1, \ldots, h_k</math> from <math>\mathcal F</math>. We then say that for a hash function <math>g \in \mathcal G</math>, <math>g(x) = g(y)</math> if and only if all <math>h_i(x) = h_i(y)</math> for <math>i = 1, 2, \ldots, k</math>. Since the members of <math>\mathcal F</math> are independently chosen for any <math>g \in \mathcal G</math>, <math>\mathcal G</math> is a <math>(d_1, d_2, p_{1}^k, p_{2}^k)</math>-sensitive family.
 
To create an OR-construction, we define a new family <math>\mathcal G</math> of hash functions {{mvar|g}}, where each function {{mvar|g}} is constructed from {{mvar|k}} random functions <math>h_1, \ldots, h_k</math> from <math>\mathcal F</math>. We then say that for a hash function <math>g \in \mathcal G</math>, <math>g(x) = g(y)</math> if and only if <math>h_i(x) = h_i(y)</math> for one or more values of {{mvar|i}}. Since the members of <math>\mathcal F</math> are independently chosen for any <math>g \in \mathcal G</math>, <math>\mathcal G</math> is a <math>(d_1, d_2, 1- (1 - p_1)^k, 1 - (1 - p_2)^k)</math>-sensitive family.
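
A minimal illustrative sketch of the two constructions follows (the representation of <math>\mathcal F</math> as a list of Python callables and the helper names are expository choices, not from the cited sources):

<syntaxhighlight lang="python">
import random

def and_construction(base_family, k, rng=random):
    """g(x) is the tuple of k sampled h_i(x); g(x) == g(y) iff all h_i agree.
    A base collision probability p becomes p**k."""
    hs = [rng.choice(base_family) for _ in range(k)]
    return lambda x: tuple(h(x) for h in hs)

def or_construction(base_family, k, rng=random):
    """x and y 'collide' iff at least one of the k sampled h_i agrees.
    A base collision probability p becomes 1 - (1 - p)**k."""
    hs = [rng.choice(base_family) for _ in range(k)]
    return lambda x, y: any(h(x) == h(y) for h in hs)

# For example, with p1 = 0.8, p2 = 0.4 and k = 4, the AND-construction gives
# collision probabilities (0.4096, 0.0256), the OR-construction (0.9984, 0.8704).
</syntaxhighlight>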
 
==Applications==
 
LSH has been applied to several problem domains, including:
 
*Near-duplicate detection<ref>
{{citation
| author = Das, Abhinandan S.
| title = Proceedings of the 16th international conference on World Wide Web
| chapter = Google news personalization: scalable online collaborative filtering
| pages = 271–280
| year = 2007|doi=10.1145/1242572.1242610|display-authors=etal| isbn = 9781595936547
| s2cid = 207163129
}}.</ref>
*[[Hierarchical clustering]]<ref>
{{citation
| author = Koga, Hisashi |author2=Tetsuo Ishibashi |author3=Toshinori Watanabe
| title = Fast agglomerative hierarchical clustering algorithm using Locality-Sensitive Hashing
| journal = Knowledge and Information Systems |volume=12 |issue=1 |pages=25–53
| year = 2007 | doi=10.1007/s10115-006-0027-5|s2cid=4613827 }}.</ref><ref>
{{citation
| author = Cochez, Michael |author2=Mou, Hao
|title=Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data
|chapter=Twister Tries
| pages=505–517
| year = 2015 | doi=10.1145/2723372.2751521| isbn = 9781450327589
|s2cid=14414777
| url = https://jyx.jyu.fi/bitstream/123456789/46537/1/cochezmousigmod15finalcameraready.pdf
}}.</ref>
*[[Genome-wide association study]]<ref>
{{citation
| author = Brinza, Dumitru
| title = RAPID detection of gene–gene interactions in genome-wide association studies
| journal = Bioinformatics |volume=26 |issue=22 |year=2010 |pages=2856–2862 | doi=10.1093/bioinformatics/btq529| pmid = 20871107
|display-authors=etal|pmc=3493125}}
</ref>
*Image similarity identification
**[[VisualRank]]
*[[Gene expression]] similarity identification{{citation needed|date=October 2013}}
*[[Audio similarity]] identification
*[[Nearest neighbor search]]
*[[Audio fingerprint]]<ref>
{{citation
| title = dejavu - Audio fingerprinting and recognition in Python
| url = https://github.com/worldveil/dejavu| date = 2018-12-19}}
</ref>
*[[Digital video fingerprinting]]<ref>
{{citation
| title = A Simple Introduction to Locality Sensitive Hashing (LSH)
| url =https://www.iunera.com/kraken/fabric/locality-sensitive-hashing-lsh/#7-applications-of-lsh| date = 2025-03-27}}
</ref>
*[[Shared memory]] organization in [[parallel computing]]<ref name=Chin1991 /><ref name=Chin1994 />
*Physical data organization in database management systems<ref>
{{citation
|author1=Aluç, Güneş |author2=Özsu, M. Tamer |author3=Daudjee, Khuzaima
| title = Building self-clustering RDF databases using Tunable-LSH
| journal = The VLDB Journal | year = 2018 |volume=28 |issue=2 |pages=173–195 | doi = 10.1007/s00778-018-0530-9 |s2cid=53695535 }}
</ref>
*Training fully connected neural networks<ref>{{Cite arXiv|last1=Chen|first1=Beidi|last2=Medini|first2=Tharun|last3=Farwell|first3=James|last4=Gobriel|first4=Sameh|last5=Tai|first5=Charlie|last6=Shrivastava|first6=Anshumali|date=2020-02-29|title=SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems|class=cs.DC|eprint=1903.03129}}</ref><ref>{{ citation
| last1=Chen | first1=Beidi | last2 = Liu | first2 = Zichang | last3 = Peng | first3 = Binghui | last4 = Xu | first4 = Zhaozhuo | last5 = Li | first5 = Jonathan Lingjie | last6 = Dao | first6 = Tri | last7 = Song | first7 = Zhao | last8 = Shrivastava | first8 = Anshumali | last9 = Re | first9 = Christopher| title = MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training | journal = International Conference on Learning Representation | year = 2021 | url = https://openreview.net/forum?id=wWK7yXkULyh }}
</ref>
*Computer security<ref name="TLSH">
{{cite conference
| author1 = Oliver, Jonathan| author2 = Cheng, Chun | author3 = Chen, Yanggui
| title = 2013 Fourth Cybercrime and Trustworthy Computing Workshop | chapter = TLSH -- A Locality Sensitive Hash | year = 2013
| pages = 7–13 | url = https://www.academia.edu/7833902
| doi = 10.1109/CTC.2013.9
| isbn = 978-1-4799-3076-0
}}
</ref>
*[[Machine Learning]]<ref name="NL">
{{citation
| author1 = Fanaee-T, Hadi
| title = Natural Learning
| year = 2024
| arxiv = 2404.05903
}}
</ref>
 
==Methods==
 
===Bit sampling for Hamming distance===
 
One of the easiest ways to construct an LSH family is by bit sampling.<ref name=IndykMotwani98 /> This approach works for the [[Hamming distance]] over {{mvar|d}}-dimensional vectors <math>\{0,1\}^d</math>. Here, the family <math>\mathcal F</math> of hash functions is simply the family of all the projections of points on one of the <math>d</math> coordinates, i.e., <math>{\mathcal F}=\{h\colon \{0,1\}^d\to \{0,1\}\mid h(x)=x_i \text{ for some } i\in \{1, \ldots, d\}\}</math>, where <math>x_i</math> is the <math>i</math>th coordinate of <math>x</math>. A random function <math>h</math> from <math>{\mathcal F}</math> simply selects a random bit from the input point. This family has the following parameters: <math>P_1=1-R/d</math>, <math>P_2=1-cR/d</math>.
That is, any two vectors <math>x,y</math> with Hamming distance at most <math>R</math> collide under a random <math>h</math> with probability at least <math>P_1</math>.
Any <math>x,y</math> with Hamming distance at least <math>cR</math> collide with probability at most <math>P_2</math>.
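
A minimal sketch of this family (binary vectors represented as Python sequences of 0s and 1s; the helper name is for exposition only):

<syntaxhighlight lang="python">
import random

def sample_bit_hash(d, rng=random):
    """Draw h uniformly from F: project a point onto one random coordinate."""
    i = rng.randrange(d)
    return lambda x: x[i]

x = (0, 1, 1, 0, 1, 0, 0, 1)
y = (0, 1, 0, 0, 1, 0, 1, 1)   # Hamming distance 2, d = 8
h = sample_bit_hash(len(x))
collide = h(x) == h(y)          # true with probability 1 - 2/8 = 0.75 over the choice of h
</syntaxhighlight>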
 
===Min-wise independent permutations===
 
{{main|MinHash}}
 
Suppose {{mvar|U}} is composed of subsets of some ground set of enumerable items {{mvar|S}} and the similarity function of interest is the [[Jaccard index]] {{mvar|J}}. If {{mvar|&pi;}} is a permutation on the indices of {{mvar|S}}, for <math>A \subseteq S</math> let <math>h(A) = \min_{a \in A} \{ \pi(a) \}</math>. Each possible choice of {{mvar|&pi;}} defines a single hash function {{mvar|h}} mapping input sets to elements of {{mvar|S}}.
 
Define the function family {{mvar|H}} to be the set of all such functions and let {{mvar|D}} be the [[discrete uniform distribution|uniform distribution]]. Given two sets <math>A,B \subseteq S</math> the event that <math>h(A) = h(B)</math> corresponds exactly to the event that the minimizer of {{mvar|&pi;}} over <math>A \cup B</math> lies inside <math>A \cap B</math>. As {{mvar|h}} was chosen uniformly at random, <math>Pr[h(A) = h(B)] = J(A,B)\,</math> and <math>(H,D)\,</math> define an LSH scheme for the Jaccard index.
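
The following sketch illustrates the scheme (an explicit random permutation of a small ground set is used purely for exposition; practical MinHash implementations approximate {{mvar|&pi;}} with random hash functions):

<syntaxhighlight lang="python">
import random

def make_minhash(ground_set, rng=random):
    """One min-wise hash function: h(A) is the element of A minimizing pi."""
    items = list(ground_set)
    rng.shuffle(items)                        # a uniformly random permutation pi of S
    rank = {item: r for r, item in enumerate(items)}
    return lambda A: min(A, key=rank.get)     # argmin of pi over A

S = {"a", "b", "c", "d", "e"}
A, B = {"a", "b", "c"}, {"b", "c", "d"}
h = make_minhash(S)
# Over the random choice of pi, Pr[h(A) == h(B)] = J(A, B) = |A ∩ B| / |A ∪ B| = 2/4.
</syntaxhighlight>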
 
Because the [[symmetric group]] on {{mvar|n}} elements has size {{mvar|n}}!, choosing a truly [[random permutation]] from the full symmetric group is infeasible for even moderately sized {{mvar|n}}. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent" - a permutation family for which each element of the ___domain has equal probability of being the minimum under a randomly chosen {{mvar|&pi;}}. It has been established that a min-wise independent family of permutations is at least of size <math>\operatorname{lcm}\{\,1, 2, \ldots, n\,\} \ge e^{n-o(n)}</math>,<ref name=Broder1998>{{cite conference
| last1 = Broder | first1 = A.Z. | author1-link = Andrei Broder
| last2 = Charikar | first2 = M. | author2-link = Moses Charikar
| last3 = Frieze | first3 = A.M. | author3-link = Alan M. Frieze
| last4 = Mitzenmacher | first4 = M. | author4-link = Michael Mitzenmacher
| year = 1998
| contribution = Min-wise independent permutations
| title = Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing
| pages = 327–336
| contribution-url = http://www.cs.princeton.edu/~moses/papers/minwise.ps
| access-date = 2007-11-14
| doi = 10.1145/276698.276781
| citeseerx = 10.1.1.409.9220}}</ref> and that this bound is tight.<ref>
{{cite journal
| title=An optimal construction of exactly min-wise independent permutations
| author1=Takei, Y. | author2 = Itoh, T. | author3 = Shinozaki, T.
| journal=Technical Report COMP98-62, IEICE, 1998
}}
</ref>
 
Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence are introduced: restricted min-wise independent permutations families, and approximate min-wise independent families.
Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most {{mvar|k}}.<ref name=Matousek2002>{{cite journal
| author = [[Jiří Matoušek (mathematician)|Matoušek]], J.
| author2 = Stojakovic, M.
| year = 2002
| title = On Restricted Min-Wise Independence of Permutations
| journal = Preprint
| url = http://citeseer.ist.psu.edu/689217.html
| access-date = 2007-11-14
}}</ref>
Approximate min-wise independence differs from the property by at most a fixed {{mvar|&epsilon;}}.<ref name=Saks2000>{{cite journal
| last1 = Saks | first1 = M. | author1-link = Michael Saks (mathematician)
| last2 = Srinivasan | first2 = A. |last3= Zhou |first3= S.
|last4= Zuckerman | first4 = D.
| year = 2000
| title = Low discrepancy sets yield approximate min-wise independent permutation families
| journal = Information Processing Letters
| volume = 73
| issue = 1–2
| pages = 29–32
| url = http://citeseer.ist.psu.edu/saks99low.html
| access-date = 2007-11-14
| doi = 10.1016/S0020-0190(99)00163-5
| citeseerx = 10.1.1.20.8264 }}</ref>
 
===Open source methods===
 
====Nilsimsa Hash====
 
{{main|Nilsimsa Hash}}
 
'''Nilsimsa''' is a locality-sensitive hashing algorithm used in [[Anti-spam techniques|anti-spam]] efforts.<ref>{{cite web|author=Damiani |display-authors=et al|title=An Open Digest-based Technique for Spam Detection|year=2004|url=http://spdp.di.unimi.it/papers/pdcs04.pdf|access-date=2013-09-01}}</ref> The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that the Nilsimsa satisfies three requirements:
 
# The digest identifying each message should not vary significantly for changes that can be produced automatically.
# The encoding must be robust against intentional attacks.
# The encoding should support an extremely low risk of false positives.
 
Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash.<ref>{{cite journal|author=Oliver|display-authors=etal|title=TLSH - A Locality Sensitive Hash|url=https://www.academia.edu/7833902|journal=4th Cybercrime and Trustworthy Computing Workshop|date=2013|accessdate=2015-06-04}}</ref>
 
====TLSH====
 
'''TLSH''' is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications.<ref name="TLSH"/> The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar.
 
An implementation of TLSH is available as [[open-source software]].<ref>{{cite web|url=https://github.com/trendmicro/tlsh |title=TLSH |website=[[GitHub]] |access-date=2014-04-10}}</ref>
 
===Random projection===
{{main|Random projection}}
[[File:Cosine-distance.png| thumb | <math>\frac{\theta(u,v)}{\pi}</math> is approximately proportional to <math>1-\cos(\theta(u,v))</math> on the interval [0, <math>\pi</math>]]]
 
The random projection method of LSH due to [[Moses Charikar]]<ref name=Charikar2002 /> called [[SimHash]] (also sometimes called arccos<ref name=Andoni2008>{{cite journal
| author1 = Alexandr Andoni
| author2-link = Piotr Indyk
| author2 = Indyk, P.
| year = 2008
| title = Near-Optimal Hashing Algorithms for Approximate Nearest Neighbor in High Dimensions
| journal = Communications of the ACM
| volume = 51
| number = 1
| pages = 117–122
| doi=10.1145/1327452.1327494
| citeseerx = 10.1.1.226.6905
| s2cid = 6468963
}}</ref>) uses an approximation of the [[cosine distance]] between vectors. The technique was used to approximate the NP-complete [[maximum cut|max-cut]] problem.<ref name=Charikar2002 />
 
The basic idea of this technique is to choose a random [[hyperplane]] (defined by a normal unit vector {{mvar|r}}) at the outset and use the hyperplane to hash input vectors.
 
Given an input vector {{mvar|v}} and a hyperplane defined by {{mvar|r}}, we let <math>h(v) = \sgn(v \cdot r)</math>. That is, <math>h(v) = \pm 1</math> depending on which side of the hyperplane {{mvar|v}} lies. This way, each possible choice of a random hyperplane {{mvar|r}} can be interpreted as a hash function <math>h(v)</math>.
 
For two vectors {{mvar|u,v}} with angle <math>\theta(u,v)</math> between them, it can be shown that
 
:<math>Pr[h(u) = h(v)] = 1 - \frac{\theta(u,v)}{\pi}.</math>
 
Since the ratio between <math>\frac{\theta(u,v)}{\pi}</math> and <math>1-\cos(\theta(u,v))</math> is at least 0.439 when <math>\theta(u, v) \in [0, \pi]</math>,<ref name=Charikar2002 /><ref name="Goemans Williamson 1995 pp. 1115–1145">{{cite journal | last1=Goemans | first1=Michel X. | last2=Williamson | first2=David P. | title=Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming | journal=Journal of the ACM | publisher=Association for Computing Machinery (ACM) | volume=42 | issue=6 | year=1995 | issn=0004-5411 | doi=10.1145/227683.227684 | pages=1115–1145| s2cid=15794408 | doi-access=free }}</ref> the probability of two vectors being on different sides of the random hyperplane is approximately proportional to the [[Cosine similarity#Cosine Distance|cosine distance]] between them.
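
A minimal sketch of one such hash function (a Gaussian random vector picks the hyperplane direction, a standard simplification since only the sign of the projection matters; names are for exposition only):

<syntaxhighlight lang="python">
import random

def make_hyperplane_hash(d, rng=random):
    """One SimHash bit: the side of a random hyperplane through the origin."""
    r = [rng.gauss(0.0, 1.0) for _ in range(d)]   # random direction vector
    return lambda v: 1 if sum(ri * vi for ri, vi in zip(r, v)) >= 0 else -1

u = [1.0, 2.0, 0.5]
v = [0.9, 1.8, 0.7]
h = make_hyperplane_hash(len(u))
# Over random hyperplanes, Pr[h(u) == h(v)] = 1 - theta(u, v) / pi.
</syntaxhighlight>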
 
===Stable distributions===
 
The hash function
<ref name=DIIM04>{{cite journal
| author1 = Datar, M.
| author2 = Immorlica, N. |author2-link= Nicole Immorlica | author3-link = Piotr Indyk | author3 = Indyk, P. | author4 = Mirrokni, V.S.
| year=2004
| title = Locality-Sensitive Hashing Scheme Based on p-Stable Distributions
}}</ref> <math>h_{\mathbf{a},b} (\boldsymbol{\upsilon}) :
\mathcal{R}^d
\to \mathcal{N} </math> maps a {{mvar|d}}-dimensional vector
<math>\boldsymbol{\upsilon}</math> onto the set of integers. Each hash function
in the family is indexed by a choice of random <math>\mathbf{a}</math> and
<math>b</math> where <math>\mathbf{a}</math> is a {{mvar|d}}-dimensional
vector with
entries chosen independently from a [[Levy skew alpha-stable distribution|stable distribution]] and
<math>b</math> is
a real number chosen uniformly from the range [0,r]. For a fixed
<math>r</math>, the hash function <math>h_{\mathbf{a},b}</math> is given by <math>h_{\mathbf{a},b} (\boldsymbol{\upsilon}) = \left \lfloor \frac{\mathbf{a}\cdot \boldsymbol{\upsilon}+b}{r} \right \rfloor </math>.
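
A minimal sketch of one hash function from this family for Euclidean distance (using the 2-stable normal distribution; the parameter names follow the formula above, everything else is an expository choice):

<syntaxhighlight lang="python">
import math
import random

def make_stable_hash(d, r, rng=random):
    """h_{a,b}(v) = floor((a . v + b) / r) with a ~ N(0,1)^d and b ~ U[0, r]."""
    a = [rng.gauss(0.0, 1.0) for _ in range(d)]
    b = rng.uniform(0.0, r)
    return lambda v: math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / r)

h = make_stable_hash(d=3, r=4.0)
bucket = h([0.5, -1.2, 3.3])   # an integer bucket index
</syntaxhighlight>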
 
Other construction methods for hash functions have been proposed to better fit the data.
<ref name=PJA10>{{cite journal
| author1 = Pauleve, L. | author2 = Jegou, H. | author3 = Amsaleg, L.
| year=2010
| title = Locality sensitive hashing: A comparison of hash function types and querying mechanisms
| url = http://hal.inria.fr/inria-00567191/en/
| journal = Pattern Recognition Letters
| volume = 31 | issue = 11 | pages = 1348–1358 | doi = 10.1016/j.patrec.2010.04.004 | bibcode = 2010PaReL..31.1348P | s2cid = 2666044 }}</ref>
In particular k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee.
 
===Semantic hashing===
Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have higher [[semantic similarity]].<ref>{{Cite journal|last1=Salakhutdinov|first1=Ruslan|last2=Hinton|first2=Geoffrey|date=2008|title=Semantic hashing|journal=International Journal of Approximate Reasoning|language=en|volume=50|issue=7|pages=969–978|doi=10.1016/j.ijar.2008.11.006|doi-access=free}}</ref> The hashcodes are found via training of an [[artificial neural network]] or [[graphical model]].{{citation needed|date=September 2021}}
 
==Algorithm for nearest neighbor search==
 
One of the main applications of LSH is to provide a method for efficient approximate [[nearest neighbor search]] algorithms. Consider an LSH family <math>\mathcal F</math>. The algorithm has two main parameters: the width parameter {{mvar|k}} and the number of hash tables {{mvar|L}}.
 
In the first step, we define a new family <math>\mathcal G</math> of hash functions {{mvar|g}}, where each function {{mvar|g}} is obtained by concatenating {{mvar|k}} functions <math>h_1, \ldots, h_k</math> from <math>\mathcal F</math>, i.e., <math>g(p) = [h_1(p), \ldots, h_k(p)]</math>. In other words, a random hash function {{mvar|g}} is obtained by concatenating {{mvar|k}} randomly chosen hash functions from <math>\mathcal F</math>. The algorithm then constructs {{mvar|L}} hash tables, each corresponding to a different randomly chosen hash function {{mvar|g}}.
 
In the preprocessing step we hash all {{mvar|n}} {{mvar|d}}-dimensional points from the data set {{mvar|S}} into each of the {{mvar|L}} hash tables. Given that the resulting hash tables have only {{mvar|n}} non-zero entries, one can reduce the amount of memory used per each hash table to <math>O(n)</math> using standard [[hash functions]].
 
Given a query point {{mvar|q}}, the algorithm iterates over the {{mvar|L}} hash functions {{mvar|g}}. For each {{mvar|g}} considered, it retrieves the data points that are hashed into the same bucket as {{mvar|q}}. The process is stopped as soon as a point within distance {{mvar|cR}} from {{mvar|q}} is found.
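
The following sketch puts the preprocessing and query steps together (it can reuse the bit-sampling family sketched above, passed in as <code>sample_hash</code>; the table layout and function names are expository choices, not a reference implementation):

<syntaxhighlight lang="python">
import random
from collections import defaultdict

def build_index(points, d, k, L, sample_hash, rng=random):
    """Preprocessing: L tables, each keyed by g(p) = (h_1(p), ..., h_k(p))."""
    tables = []
    for _ in range(L):
        g = [sample_hash(d, rng) for _ in range(k)]
        table = defaultdict(list)
        for p in points:
            table[tuple(h(p) for h in g)].append(p)
        tables.append((g, table))
    return tables

def query(tables, q, dist, c_R):
    """Scan the buckets of q in each table; stop at the first point within c*R."""
    for g, table in tables:
        for p in table[tuple(h(q) for h in g)]:
            if dist(p, q) <= c_R:
                return p
    return None
</syntaxhighlight>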
 
Given the parameters {{mvar|k}} and {{mvar|L}}, the algorithm has the following performance guarantees:
* preprocessing time: <math>O(nLkt)</math>, where {{mvar|t}} is the time to evaluate a function <math>h \in \mathcal F</math> on an input point {{mvar|p}};
* space: <math>O(nL)</math>, plus the space for storing data points;
* query time: <math>O(L(kt+dnP_2^k))</math>;
* the algorithm succeeds in finding a point within distance {{mvar|cR}} from {{mvar|q}} (if there exists a point within distance {{mvar|R}}) with probability at least <math>1 - ( 1 - P_1^k ) ^ L</math>;
 
For a fixed approximation ratio <math>c=1+\epsilon</math> and probabilities <math>P_1</math> and <math>P_2</math>, one can set <math>k = \left\lceil\tfrac{\log n}{\log 1/P_2}\right\rceil</math> and <math>L = \lceil P_1^{-k}\rceil = O(n^{\rho}P_1^{-1})</math>, where <math>\rho={\tfrac{\log P_1}{\log P_2}}</math>. Then one obtains the following performance guarantees:
* preprocessing time: <math>O(n^{1+\rho}P_1^{-1}kt)</math>;
* space: <math>O(n^{1+\rho}P_1^{-1})</math>, plus the space for storing data points;
* query time: <math>O(n^{\rho}P_1^{-1}(kt+d))</math>.
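
As a worked illustration of the parameter-setting rule above (the numbers are arbitrary, chosen only for the example):

<syntaxhighlight lang="python">
import math

n, P1, P2 = 1_000_000, 0.8, 0.4
rho = math.log(P1) / math.log(P2)                # ~0.244
k = math.ceil(math.log(n) / math.log(1 / P2))    # ceil(15.08) = 16
L = math.ceil(P1 ** -k)                          # ceil(35.53) = 36
</syntaxhighlight>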
 
===Finding nearest neighbor without fixed dimensionality===
 
To generalize the above algorithm without radius {{mvar|R}} being fixed, we can take the algorithm and do a sort of binary search over {{mvar|R}}. It has been shown<ref>{{cite journal |last1=Har-Peled |first1=Sariel |last2=Indyk |first2=Piotr |last3=Motwani |first3=Rajeev |title=Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality |journal=Theory of Computing |date=2012 |volume=8 |issue=Special Issue in Honor of Rajeev Motwani |pages=321-350 |doi=10.4086/toc.2012.v008a014 |url=https://theoryofcomputing.org/articles/v008a014/v008a014.pdf |access-date=23 May 2025}}</ref> that there is a data structure for the approximate nearest neighbor with the following performance guarantees:
* space: <math>O(n^{1+\rho}P_1^{-1}d\log^2 n)</math>;
* query time: <math>O(n^{\rho}P_1^{-1}(kt+d)\log n)</math>;
* the algorithm succeeds in finding the nearest neighbor with probability at least <math>1 - (( 1 - P_1^k ) ^ L\log n)</math>;
 
===Improvements===
 
When {{mvar|t}} is large, it is possible to reduce the hashing time from <math>O(n^{\rho})</math>.
This was shown by<ref>Dahlgaard, Søren, Mathias Bæk Tejs Knudsen, and Mikkel Thorup. [https://arxiv.org/pdf/1704.04370 "Fast similarity sketching."] 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2017.</ref> and<ref>Christiani, Tobias. [https://arxiv.org/pdf/1708.07586 "Fast locality-sensitive hashing frameworks for approximate near neighbor search."] International Conference on Similarity Search and Applications. Springer, Cham, 2019.</ref> which gave
* query time: <math>O(t\log^2(1/P_2)/P_1 + n^{\rho}(d + 1/P_1))</math>;
* space: <math>O(n^{1+\rho}/P_1 + \log^2(1/P_2)/P_1)</math>;
 
It is also sometimes the case that the factor <math>1/P_1</math> can be very large.
This happens for example with [[Jaccard similarity]] data, where even the most similar neighbor often has a quite low Jaccard similarity with the query.
In<ref>Ahle, Thomas Dybdahl. "On the Problem of <math>p_1^{-1}</math> in Locality-Sensitive Hashing." International Conference on Similarity Search and Applications. Springer, Cham, 2020.</ref> it was shown how to reduce the query time to <math>O(n^\rho/P_1^{1-\rho})</math> (not including hashing costs) and similarly the space usage.
 
==See also==
 
* {{Annotated link |Bloom filter}}
* {{Annotated link |Curse of dimensionality}}
* {{Annotated link |Feature hashing}}
* {{Annotated link |Fourier-related transforms}}
* {{Annotated link |Geohash}}
* {{Annotated link |Multilinear subspace learning}}
* {{Annotated link |Principal component analysis}}
*[[Random indexing]]<ref>Gorman, James, and James R. Curran. [https://aclanthology.org/P06-1046.pdf "Scaling distributional similarity to large corpora."] Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2006.</ref>
* {{Annotated link |Rolling hash}}
* {{Annotated link |Singular value decomposition}}
* {{Annotated link |Sparse distributed memory}}
* {{Annotated link |Wavelet compression}}
* {{Annotated link |Locality of reference}}
 
==References==
 
{{reflist}}
 
==Further reading==
 
*Samet, H. (2006) ''Foundations of Multidimensional and Metric Data Structures''. Morgan Kaufmann. {{ISBN|0-12-369446-9}}
*{{Cite conference| last1 = Indyk | first1 = Piotr | author1-link= Piotr Indyk| last2 = Motwani | first2 = Rajeev | author2-link = Rajeev Motwani| last3 = Raghavan | first3 = Prabhakar| last4 = Vempala | first4 = Santosh | author4-link = Santosh Vempala| title = Locality-preserving hashing in multidimensional spaces | doi = 10.1145/258533.258656 | book-title = Proceedings of the twenty-ninth annual ACM symposium on Theory of computing| conference = [[Symposium on Theory of Computing|STOC '97]]| pages = 618–625| year = 1997 | isbn = 978-0-89791-888-6| citeseerx = 10.1.1.50.4927| s2cid = 15693787 }}
*{{Cite journal | last1 = Chin | first1 = Andrew| title = Locality-preserving hash functions for general purpose parallel computation | url = http://www.unclaw.com/chin/scholarship/hashfunctions.pdf| doi = 10.1007/BF01185209 | journal = [[Algorithmica]]| volume = 12 | issue = 2–3 | pages = 170–181 | year = 1994 | s2cid = 18108051}}
 
==External links==
* [http://web.mit.edu/andoni/www/LSH/index.html Alex Andoni's LSH homepage]
* [https://lshkit.sourceforge.net/ LSHKIT: A C++ Locality Sensitive Hashing Library]
* [https://github.com/simonemainardi/LSHash A Python Locality Sensitive Hashing library that optionally supports persistence via redis]
* [https://web.archive.org/web/20101203074412/http://www.vision.caltech.edu/malaa/software/research/image-search/ Caltech Large Scale Image Search Toolbox]: a Matlab toolbox implementing several LSH hash functions, in addition to Kd-Trees, Hierarchical K-Means, and Inverted File search algorithms.
* [https://github.com/salviati/slash Slash: A C++ LSH library, implementing Spherical LSH by Terasawa, K., Tanaka, Y]
* [https://github.com/RSIA-LIESMARS-WHU/LSHBOX LSHBOX: An Open Source C++ Toolbox of Locality-Sensitive Hashing for Large Scale Image Retrieval, Also Support Python and MATLAB.]
* [https://github.com/DBWangGroupUNSW/SRS SRS: A C++ Implementation of An In-memory, Space-efficient Approximate Nearest Neighbor Query Processing Algorithm based on p-stable Random Projection]
* [https://github.com/trendmicro/tlsh TLSH open source on Github]
* [https://github.com/idealista/tlsh-js JavaScript port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as node.js module]
* [https://github.com/idealista/tlsh Java port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as maven package]
 
{{DEFAULTSORT:Locality Sensitive Hashing}}
[[Category:Search algorithms]]
[[Category:Classification algorithms]]
[[Category:Dimension reduction]]
[[Category:Hashing]]
[[Category:Probabilistic data structures]]