Locality-sensitive hashing


Locality Sensitive Hashing (LSH) is a method of performing probabilistic dimension reduction of high-dimensional data. The basic idea is to hash the input items so that similar items are mapped to the same buckets with high probability (the number of buckets being much smaller than the universe of possible input items).

Definition

An LSH family $\mathcal{F}$ is defined for a metric space $\mathcal{M} = (M, d)$, a threshold $R > 0$ and an approximation factor $c > 1$. The family $\mathcal{F}$[1][2] is a family of functions $h : M \to S$ satisfying the following conditions for any two points $p, q \in M$ and a function $h$ chosen uniformly at random from $\mathcal{F}$:

  • if $d(p, q) \le R$, then $h(p) = h(q)$ (i.e., $p$ and $q$ collide) with probability at least $P_1$,
  • if $d(p, q) \ge cR$, then $h(p) = h(q)$ with probability at most $P_2$.

A family is interesting when $P_1 > P_2$. Such a family $\mathcal{F}$ is called $(R, cR, P_1, P_2)$-sensitive.

An alternative definition[3] is given with respect to a universe of items $U$ that have a similarity function $\phi : U \times U \to [0, 1]$. An LSH scheme is a family of hash functions $H$ coupled with a probability distribution $D$ over the functions such that a function $h \in H$ chosen according to $D$ satisfies the property that $\Pr_{h \in H}[h(a) = h(b)] = \phi(a, b)$ for any $a, b \in U$.

Applications

LSH has been applied to several problem domains, including near-duplicate detection, image similarity identification, audio similarity identification, and nearest neighbor search.

Methods

Bit Sampling For Hamming Distance

One of the easiest ways to construct an LSH family is by bit sampling[2]. This approach works for the Hamming distance over $d$-dimensional vectors $\{0,1\}^d$. Here, the family $\mathcal{F}$ of hash functions is simply the family of all the projections of points onto one of the $d$ coordinates, i.e., $\mathcal{F} = \{ h : \{0,1\}^d \to \{0,1\} \mid h(x) = x_i \text{ for some } i \in \{1, \ldots, d\} \}$, where $x_i$ is the $i$-th coordinate of $x$. A random function $h$ from $\mathcal{F}$ simply selects a random bit from the input point. This family has the following parameters: $P_1 = 1 - R/d$, $P_2 = 1 - cR/d$.
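A minimal sketch of bit sampling in Python (the function name and the small example vectors are illustrative, not part of the cited constructions):

```python
import random

def sample_bit_hash(d):
    """Return an LSH function h(x) = x_i for a coordinate i chosen uniformly at random."""
    i = random.randrange(d)
    return lambda x: x[i]

# Two vectors at Hamming distance 1 collide under a random h with probability 1 - 1/d.
d = 8
p = [0, 1, 1, 0, 1, 0, 0, 1]
q = [0, 1, 1, 0, 1, 0, 1, 1]           # differs from p in one coordinate
h = sample_bit_hash(d)
print(h(p) == h(q))                    # True with probability 7/8
```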

Min-Wise Independent Permutations

Suppose $U$ is composed of subsets of some ground set of enumerable items $S$ and the similarity function of interest is the Jaccard index $J$. If $\pi$ is a permutation on the indices of $S$, then for $A \subseteq S$ let $h(A) = \min_{a \in A} \pi(a)$. Each possible choice of $\pi$ defines a single hash function $h$ mapping input sets to integers.

Define the function family $H$ to be the set of all such functions and let $D$ be the uniform distribution. Given two sets $A, B \subseteq S$, the event that $h(A) = h(B)$ corresponds exactly to the event that the minimizer of $\pi$ over $A \cup B$ lies inside $A \cap B$. As $h$ was chosen uniformly at random, $\Pr[h(A) = h(B)] = J(A, B)$ and $(H, D)$ define an LSH scheme for the Jaccard index.
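The construction can be sketched directly in Python; the ground set, the sets A and B, and the trial count below are illustrative only:

```python
import random

def minhash_function(ground_set):
    """Return h(A) = min over a in A of pi(a), for a uniformly random permutation pi."""
    items = list(ground_set)
    random.shuffle(items)                        # pi as a random ordering of the ground set
    rank = {item: r for r, item in enumerate(items)}
    return lambda A: min(rank[a] for a in A)

# Over the random choice of permutation, Pr[h(A) = h(B)] equals the Jaccard index J(A, B).
ground = range(100)
A, B = {1, 2, 3, 4}, {2, 3, 4, 5}                # J(A, B) = 3/5
trials, matches = 10_000, 0
for _ in range(trials):
    h = minhash_function(ground)
    matches += (h(A) == h(B))
print(matches / trials)                          # close to 0.6
```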

Because the symmetric group on $n$ elements has size $n!$, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized $n$. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent": a permutation family for which each element of the ___domain has equal probability of being the minimum under a randomly chosen $\pi$. It has been established that a min-wise independent family of permutations must have size at least $\operatorname{lcm}(1, 2, \ldots, n) \ge e^{n - o(n)}$[4] and that this bound is tight[5].

Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most $k$[6]. Approximate min-wise independence differs from the exact property by at most a fixed $\epsilon$[7].

Random Projection

The random projection method of LSH is designed to approximate the cosine distance between vectors. The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector $r$) at the outset and use the hyperplane to hash input vectors.

Given an input vector $v$ and a hyperplane defined by $r$, we let $h(v) = \operatorname{sgn}(v \cdot r)$. That is, $h(v) = \pm 1$ depending on which side of the hyperplane $v$ lies.

Each possible choice of $r$ defines a single function. Let $H$ be the set of all such functions and let $D$ be the uniform distribution once again. It is not difficult to prove that, for two vectors $u, v$, $\Pr[h(u) = h(v)] = 1 - \frac{\theta(u, v)}{\pi}$, where $\theta(u, v)$ is the angle between $u$ and $v$. The quantity $1 - \frac{\theta(u, v)}{\pi}$ is closely related to $\cos(\theta(u, v))$.

In this instance hashing produces only a single bit. The bits of two vectors match with probability $1 - \frac{\theta(u, v)}{\pi}$, which grows monotonically with the cosine of the angle between them.
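A small sketch of the random hyperplane hash (using NumPy; the example vectors and trial count are illustrative):

```python
import numpy as np

rng = np.random.default_rng()

def random_hyperplane_hash(dim):
    """Return h(v) = sign(v . r) for a hyperplane with random unit normal r."""
    r = rng.standard_normal(dim)
    r /= np.linalg.norm(r)                       # unit normal vector of the hyperplane
    return lambda v: 1 if np.dot(v, r) >= 0 else -1

# Empirically, Pr[h(u) = h(v)] approaches 1 - theta(u, v)/pi.
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])                         # angle pi/4 with u
trials, agree = 10_000, 0
for _ in range(trials):
    h = random_hyperplane_hash(2)
    agree += (h(u) == h(v))
print(agree / trials)                            # roughly 1 - 0.25 = 0.75
```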

Stable Distributions

The hash function[8] $h_{\mathbf{a}, b} : \mathbb{R}^d \to \mathbb{Z}$ maps a $d$-dimensional vector $v$ onto the set of integers. Each hash function in the family is indexed by a choice of random $\mathbf{a}$ and $b$, where $\mathbf{a}$ is a $d$-dimensional vector with entries chosen independently from a stable distribution and $b$ is a real number chosen uniformly from the range $[0, r]$. For a fixed $\mathbf{a}, b$ the hash function $h_{\mathbf{a}, b}$ is given by $h_{\mathbf{a}, b}(v) = \left\lfloor \frac{\mathbf{a} \cdot v + b}{r} \right\rfloor$.
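A minimal sketch of one such hash function, using the Gaussian (2-stable) case, which corresponds to Euclidean distance; the dimension and bucket width r below are illustrative:

```python
import numpy as np

rng = np.random.default_rng()

def pstable_hash(d, r):
    """Return h(v) = floor((a . v + b) / r), with a drawn from a 2-stable (Gaussian) distribution."""
    a = rng.standard_normal(d)        # d independent standard normal entries
    b = rng.uniform(0, r)             # offset chosen uniformly from [0, r]
    return lambda v: int(np.floor((np.dot(a, v) + b) / r))

h = pstable_hash(d=3, r=4.0)
print(h(np.array([1.0, 2.0, 3.0])), h(np.array([1.1, 2.0, 3.1])))   # nearby points usually collide
```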

LSH Algorithm for Nearest Neighbor Search

One of the main applications of LSH is to provide efficient nearest neighbor search algorithms. Consider an LSH family $\mathcal{F}$. The algorithm has two main parameters: the width parameter $k$ and the number of hash tables $L$.

In the first step, we define a new family $\mathcal{G}$ of hash functions $g$, where each function $g$ is obtained by concatenating $k$ functions $h_1, \ldots, h_k$ from $\mathcal{F}$, i.e., $g(p) = [h_1(p), \ldots, h_k(p)]$. In other words, a random hash function $g$ is obtained by concatenating $k$ randomly chosen hash functions from $\mathcal{F}$. The algorithm then constructs $L$ hash tables, each corresponding to a different randomly chosen hash function $g$.

In the preprocessing step we hash all $n$ points from the data set $S$ into each of the $L$ hash tables. Given that the resulting hash tables have only $n$ non-zero entries, one can reduce the amount of memory used per hash table to $O(n)$ using standard hash functions.

Given a query point $q$, the algorithm iterates over the $L$ hash functions $g$. For each $g$ considered, it retrieves the data points that are hashed into the same bucket as $q$. The process is stopped as soon as a point within distance $cR$ from $q$ is found.
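A compact sketch of the whole pipeline, instantiated with the bit-sampling family from above; all names, parameters, and the random data are illustrative:

```python
import random
from collections import defaultdict

def build_index(points, lsh_family, k, L):
    """Build L hash tables keyed by g(p) = (h_1(p), ..., h_k(p)), with each h_i drawn from lsh_family."""
    gs = [[lsh_family() for _ in range(k)] for _ in range(L)]
    tables = [defaultdict(list) for _ in range(L)]
    for p in points:
        for g, table in zip(gs, tables):
            table[tuple(h(p) for h in g)].append(p)
    return gs, tables

def query(q, gs, tables, dist, cR):
    """Probe q's bucket in each of the L tables; stop at the first point within distance cR of q."""
    for g, table in zip(gs, tables):
        for p in table[tuple(h(q) for h in g)]:
            if dist(p, q) <= cR:
                return p
    return None

# Illustrative use with 16-bit vectors, the bit-sampling family, and Hamming distance.
d, k, L = 16, 4, 8

def bit_sampling():
    i = random.randrange(d)
    return lambda x: x[i]

points = [tuple(random.randint(0, 1) for _ in range(d)) for _ in range(200)]
hamming = lambda p, q: sum(a != b for a, b in zip(p, q))
gs, tables = build_index(points, bit_sampling, k, L)
print(query(points[0], gs, tables, hamming, cR=2))
```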

Given the parameters $k$ and $L$, the algorithm has the following performance guarantees:

  • preprocessing time: $O(nLkt)$, where $t$ is the time to evaluate a function $h \in \mathcal{F}$ on an input point $p$;
  • space: $O(nL)$, plus the space for storing data points;
  • query time: $O(L(kt + dnP_2^k))$;
  • the algorithm succeeds in finding a point within distance $cR$ from $q$ (if it exists) with probability at least $1 - (1 - P_1^k)^L$.

For a fixed approximation ratio $c = 1 + \epsilon$ and probabilities $P_1$ and $P_2$, one can set $k = \frac{\log n}{\log 1/P_2}$ and $L = n^{\rho}$, where $\rho = \frac{\log P_1}{\log P_2}$. Then one obtains the following performance guarantees (a short numeric example follows the list below):

  • preprocessing time: $O(n^{1+\rho} k t)$;
  • space: $O(n^{1+\rho})$, plus the space for storing data points;
  • query time: $O(n^{\rho}(kt + d))$;
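As a numeric illustration of this parameter setting (the values of n, P1, and P2 below are made up; for instance, the bit-sampling family with d = 100, R = 5, c = 2 gives P1 = 0.95 and P2 = 0.90):

```python
import math

n, P1, P2 = 10_000, 0.95, 0.90
k = math.ceil(math.log(n) / math.log(1 / P2))     # k = log n / log(1/P2)
rho = math.log(P1) / math.log(P2)                 # rho = log P1 / log P2
L = math.ceil(n ** rho)                           # L = n^rho
print(k, round(rho, 2), L)                        # roughly 88, 0.49, 89
```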


References

  1. ^ Gionis, A.; et al. (1999). "Similarity Search in High Dimensions via Hashing". Proceedings of the 25th Very Large Database (VLDB) Conference.
  2. ^ a b Indyk, Piotr; et al. (1998). "Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality". Proceedings of the 30th Symposium on Theory of Computing.
  3. ^ Charikar, Moses S. (2002). "Similarity Estimation Techniques from Rounding Algorithms". Proceedings of the 34th Annual ACM Symposium on Theory of Computing. doi:10.1145/509907.509965. Retrieved 2007-12-21.
  4. ^ Broder, A. Z.; et al. (1998). "Min-wise independent permutations". Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing: 327–336. doi:10.1145/276698.276781. Retrieved 2007-11-14.
  5. ^ "An optimal construction of exactly min-wise independent permutations". Technical Report COMP98-62, IEICE, 1998.
  6. ^ Matoušek, J.; et al. (2002). "On Restricted Min-Wise Independence of Permutations". Preprint. Retrieved 2007-11-14.
  7. ^ Saks, M.; et al. (2000). "Low discrepancy sets yield approximate min-wise independent permutation families". Information Processing Letters. 73 (1–2): 29–32. doi:10.1016/S0020-0190(99)00163-5. Retrieved 2007-11-14.
  8. ^ Datar, M.; et al. (2004). "Locality-Sensitive Hashing Scheme Based on p-Stable Distributions". Proceedings of the Symposium on Computational Geometry.