Range query (computer science)

{{tone|date=December 2017}}
 
In [[computer science]], the '''range query''' problem consists of preprocessing some input data into a [[data structure]] so that several queries regarding a given interval of elements within an [[array data structure|array]] can be answered efficiently. For example, a common task, known as the [[range minimum query]] problem, is finding the smallest value inside a given range within a list of numbers.
 
==Definition==
Given a [[function (programming)|function]] <math>f</math> that accepts an array, a range query <math>f_q(l, r)</math> on an array <math>a=[a_1,..,a_n]</math> takes two indices <math>l</math> and <math>r</math> and returns the result of <math>f</math> when applied to the subarray <math>[a_l, \ldots, a_r]</math>. For example, for a function <math>\operatorname{sum}</math> that returns the sum of all values in an array, the range query <math>\operatorname{sum}_q(l, r)</math> returns the sum of all values in the range <math>[l, r]</math>.{{Citation needed|date=October 2023}}
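For illustration, the following Python sketch (not part of the formal definition; the function and variable names are chosen here for exposition only) answers a query by applying <math>f</math> directly to the requested subarray. Each such query takes time proportional to the length of the range, which is what the solutions below improve upon through pre-processing.

<syntaxhighlight lang="python">
def range_query(f, a, l, r):
    """Naive range query: apply f to the subarray a[l..r] (1-indexed, inclusive)."""
    return f(a[l - 1:r])

a = [4, 5, 6, 7, 4]
print(range_query(sum, a, 2, 4))  # 5 + 6 + 7 = 18
print(range_query(min, a, 1, 3))  # min(4, 5, 6) = 4
</syntaxhighlight>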
 
==Solutions==
A range query <math>q_f(A,i,j)</math> on an array <math>A=[a_1,a_2,\ldots,a_n]</math> of ''n'' elements of some set {{mvar|S}}, denoted <math>A[1,n]</math>, takes two indices <math>1\leq i\leq j\leq n</math> and a function {{mvar|f}} defined over arrays of elements of {{mvar|S}}, and outputs <math>f(A[i,j])= f(a_i,\ldots,a_j)</math>.
 
===Prefix sum array===
{{Main|Prefix sum}}
 
Range sum queries may be answered in [[constant time]] and [[space complexity|linear space]] by pre-computing an array {{mvar|p}} of the same length as the input such that for every index {{mvar|i}}, the element {{mvar|p<sub>i</sub>}} is the sum of the first {{mvar|i}} elements of {{mvar|a}}. Any query may then be computed as follows: <math display="block">\operatorname{sum}_q(l, r) = p_r - p_{l-1}.</math>
 
This strategy may be extended to any other [[binary operation]] <math>f</math> whose [[inverse function]] <math>f^{-1}</math> is well-defined and easily computable.<ref name="morin">{{cite journal|first1=Danny|last1=Krizanc|first2=Pat|last2=Morin|author2-link=Pat Morin|first3=Michiel H. M.|last3=Smid|title=Range Mode and Range Median Queries on Lists and Trees|journal=ISAAC|year=2003|pages=517–526|url=http://cg.scs.carleton.ca/~morin/publications/|arxiv=cs/0307034|bibcode=2003cs........7034K }}</ref> It can also be extended to higher dimensions with a similar pre-processing.<ref name=menhe>{{cite journal|first1=Meng|last1=He|first2=J. Ian|last2=Munro|first3=Patrick K.|last3=Nicholson|title=Dynamic Range Selection in Linear Space|journal=ISAAC|year=2011|pages=160–169|arxiv=1106.5076 }}</ref> For example, if {{mvar|p<sub>i,j</sub>}} contains the sum of the elements in the first {{mvar|i}} rows and {{mvar|j}} columns of {{mvar|a}}, then <math display="block">\operatorname{sum}_q(l, r, t, b) = p_{r,b} - p_{l-1,b} - p_{r, t-1} + p_{l-1,t-1}.</math>
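The one-dimensional pre-computation can be sketched in Python as follows (illustrative only; queries are 1-indexed as in the formulas above, and the two-dimensional case follows the same pattern with one inclusion–exclusion term per corner):

<syntaxhighlight lang="python">
def build_prefix_sums(a):
    """p[i] is the sum of the first i elements of a, with p[0] = 0."""
    p = [0] * (len(a) + 1)
    for i, value in enumerate(a, start=1):
        p[i] = p[i - 1] + value
    return p

def range_sum(p, l, r):
    """Sum of a[l..r] (1-indexed, inclusive) in constant time: p[r] - p[l-1]."""
    return p[r] - p[l - 1]

a = [3, 1, 4, 1, 5, 9, 2, 6]
p = build_prefix_sums(a)
print(range_sum(p, 3, 6))  # 4 + 1 + 5 + 9 = 19
</syntaxhighlight>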
 
===Dynamic range queries===
A more difficult subset of the problem consists of executing range queries on dynamic data; that is, data that may change between queries. In order to efficiently update array values, more sophisticated data structures like the [[segment tree]] or [[Fenwick tree]] are necessary.{{Citation needed|date=October 2023}}
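As one possible illustration (a sketch, not a full treatment), the following Python Fenwick tree supports point updates and range-sum queries, each in <math>O(\log n)</math> time; the class and method names are chosen for this example.

<syntaxhighlight lang="python">
class FenwickTree:
    """Point updates and prefix/range sums in O(log n) time each."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internally

    def add(self, i, delta):
        """Add delta to element i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):
        """Sum of elements 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s

    def range_sum(self, l, r):
        """Sum of elements l..r (1-indexed, inclusive)."""
        return self.prefix_sum(r) - self.prefix_sum(l - 1)

ft = FenwickTree(5)
for i, v in enumerate([4, 5, 6, 7, 4], start=1):
    ft.add(i, v)
ft.add(3, -6)              # dynamic update: change a[3] from 6 to 0
print(ft.range_sum(2, 4))  # 5 + 0 + 7 = 12
</syntaxhighlight>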
 
==Examples==
===Semigroup operators===
{{main|Range minimum query}}
 
When the function of interest in a range query is a [[semigroup]] operator, the notion of <math>f^{-1}</math> is not always defined, so the strategy in the previous section does not work. [[Andrew Yao]] showed<ref name="yao">{{cite book |doi=10.1145/800070.802185 |chapter=Space-time tradeoff for answering range queries (Extended Abstract) |title=Proceedings of the fourteenth annual ACM symposium on Theory of computing - STOC '82 |date=1982 |last1=Yao |first1=Andrew C. |pages=128–136 |isbn=0-89791-070-2 }}</ref> that there exists an efficient solution for range queries that involve semigroup operators. He proved that for any constant {{mvar|c}}, a pre-processing of time and space <math>\Theta(c\cdot n)</math> allows range queries on lists where {{mvar|f}} is a semigroup operator to be answered in <math>\Theta(\alpha_c(n))</math> time, where <math>\alpha_c</math> is a certain functional inverse of the [[Ackermann function]].
 
Some semigroup operators admit slightly better solutions, for instance <math>f\in \{\max,\min\}</math>. Assume <math>f = \min</math>; then <math>\min(A[1..n])</math> returns the index of the [[minimum]] element of <math>A[1..n]</math>, and <math display="inline">\min_{i,j}(A)</math> denotes the corresponding range minimum query. Several data structures allow answering a range minimum query in <math>O(1)</math> time using a pre-processing of time and space <math>O(n)</math>. One such solution is based on the equivalence between this problem and the [[lowest common ancestor]] problem.
 
The [[Cartesian tree]] <math>T_A</math> of an array <math>A[1,n]</math> has as root <math>a_i = \min\{a_1,a_2,\ldots,a_n\}</math> and as left and right subtrees the Cartesian tree of <math>A[1,i-1]</math> and the Cartesian tree of <math>A[i+1,n]</math> respectively. A range minimum query <math display="inline">\min_{i,j}(A)</math> is the [[lowest common ancestor]] in <math>T_A</math> of <math>a_i</math> and <math>a_j</math>. Because the lowest common ancestor can be solved in [[constant time]] using a pre-processing of time and space <math>O(n)</math>, range minimum query can as well. The solution when <math>f = \max</math> is analogous. Cartesian trees can be constructed in [[linear time]].
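The standard stack-based construction can be sketched in Python as follows (illustrative; it returns child-index arrays rather than an explicit tree object, and the constant-time lowest-common-ancestor machinery mentioned above is omitted):

<syntaxhighlight lang="python">
def cartesian_tree(a):
    """Build the (min-)Cartesian tree of a in linear time.
    Returns left/right child index arrays (-1 means no child) and the root index."""
    n = len(a)
    left = [-1] * n
    right = [-1] * n
    stack = []  # indices whose values increase from bottom to top
    for i in range(n):
        last = -1
        # Pop indices with larger values; the last popped becomes i's left child.
        while stack and a[stack[-1]] > a[i]:
            last = stack.pop()
        left[i] = last
        if stack:
            right[stack[-1]] = i
        stack.append(i)
    root = stack[0]
    return left, right, root

left, right, root = cartesian_tree([4, 5, 6, 7, 4])
print(root)  # 0: the first occurrence of the minimum value 4 becomes the root
</syntaxhighlight>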
 
===Mode===
{{Main|Range mode query}}
 
The ''[[Mode (statistics)|mode]]'' of an array ''A'' is the element that appears most often in it. For instance, the mode of <math>A=[4,5,6,7,4]</math> is {{val|4}}. In case of a tie, any of the most frequent elements might be picked as the mode. A range mode query consists in pre-processing <math>A[1,n]</math> such that the mode of any range of <math>A[1,n]</math> can be found efficiently. Several data structures have been devised to solve this problem; some of the results are summarized in the following table.<ref name=morin />
 
{| class="wikitable"
|}
 
Jørgensen et al. proved a lower bound in the [[cell-probe model]] of <math>\Omega\left(\tfrac{\log n}{\log (S w/n)}\right)</math> for any data structure that uses {{mvar|S}} memory cells of {{mvar|w}} bits.<ref name=jorgensen>{{cite book |doi=10.1007/978-3-642-14165-2_51 |chapter=Cell Probe Lower Bounds and Approximations for Range Mode |title=Automata, Languages and Programming |series=Lecture Notes in Computer Science |date=2010 |last1=Greve |first1=Mark |last2=Jørgensen |first2=Allan Grønlund |last3=Larsen |first3=Kasper Dalgaard |last4=Truelsen |first4=Jakob |volume=6198 |pages=605–616 |isbn=978-3-642-14164-5 }}</ref>
 
===Median===
 
This particular case is of special interest since finding the [[median]] has several applications.<ref name=heriel>{{cite book |arxiv=0807.0222 |doi=10.1007/978-3-540-87744-8_42 |chapter=Range Medians |title=Algorithms - ESA 2008 |series=Lecture Notes in Computer Science |date=2008 |last1=Har-Peled |first1=Sariel |author1-link=Sariel Har-Peled |last2=Muthukrishnan |first2=S. |author2-link=S. Muthukrishnan (computer scientist) |volume=5193 |pages=503–514 |isbn=978-3-540-87743-1 }}</ref> On the other hand, the median problem, a special case of the [[selection problem]], is solvable in ''O''(''n''), using the [[median of medians]] algorithm.<ref name=tarjanmedian>{{Cite journal |last1=Blum |first1=M. |authorlink1=Manuel Blum |last2=Floyd |first2=R. W. |authorlink2=Robert W. Floyd |last3=Pratt |first3=V. R. |authorlink3=Vaughan Pratt |last4=Rivest |first4=R. L. |authorlink4=Ron Rivest |last5=Tarjan |first5=R. E. |authorlink5=Robert Tarjan |title=Time bounds for selection |doi=10.1016/S0022-0000(73)80033-9 |journal=Journal of Computer and System Sciences |volume=7 |issue=4 |pages=448–461 |date=August 1973 |url=https://people.csail.mit.edu/rivest/pubs/BFPRT73.pdf |doi-access=free }}</ref> However, its generalization through range median queries is recent.<ref name=ethpaper /> A range median query <math>\operatorname{median}(A,i,j)</math>, where ''A'', ''i'' and ''j'' have the usual meanings, returns the median element of <math>A[i,j]</math>. Equivalently, <math>\operatorname{median}(A,i,j)</math> should return the element of <math>A[i,j]</math> of rank <math>\frac{j-i}{2}</math>. Range median queries cannot be solved by any of the methods discussed above, including Yao's approach for semigroup operators.<ref name="morin kranakis" />
 
Two variants of this problem have been studied: the [[offline algorithm|offline]] version, where all the ''k'' queries of interest are given in a batch, and a version where all the pre-processing is done up front. The offline version can be solved with <math>O(n\log k + k \log n)</math> time and <math>O(n\log k)</math> space.
 
The following pseudocode of the [[Quickselect|quickselect algorithm]] shows how to find the element of rank {{mvar|r}} in <math>A[i,j]</math>, an unsorted subarray of distinct elements; to find the range median we set <math>r=\frac{j-i}{2}</math>.<ref name="ethpaper">{{cite book |arxiv=0901.1761 |last1=Gfeller |first1=Beat |last2=Sanders |first2=Peter |author2-link=Peter Sanders (computer scientist) |title=Automata, Languages and Programming |chapter=Towards Optimal Range Medians |series=Lecture Notes in Computer Science |date=2009 |volume=5555 |pages=475–486 |doi=10.1007/978-3-642-02927-1_40 |isbn=978-3-642-02926-4 }}</ref>
 
rangeMedian(A, i, j, r) {
     if A.length() == 1 then return A[1]
     if A.low is undefined then
         m = median(A)
         A.low  = [e in A | e <= m]
         A.high = [e in A | e > m ]
     calculate t   // the number of elements of A[i, j] that belong to A.low
     if r <= t then
         return rangeMedian(A.low, i, j, r)
     else
         return rangeMedian(A.high, i, j, r-t)
 }

The procedure partitions {{code|A}} around its median {{code|m}} into an array {{code|A.low}}, containing the elements of {{code|A}} that are at most {{code|m}}, and an array {{code|A.high}}, containing the rest; both partitions are stored so that subsequent queries can reuse them. If the number of elements of <math>A[i,j]</math> that
end up in {{code|A.low}} is {{code|t}} and this number is bigger than {{code|r}} then we should keep looking for the element of rank {{code|r}} in {{code|A.low}}; otherwise we should look for the element of rank <math>(r-t)</math> in {{code|A.high}}. To find {{mvar|t}}, it is enough to find the maximum index <math>m\leq i-1</math> such that <math>a_m</math> is in {{code|A.low}} and the maximum index <math>l\leq j</math> such that <math>a_l</math>
is in {{code|A.high}}. Then <math>t=l-m</math>. The total cost for any query, without considering the partitioning part, is <math>\log n</math> since at most <math>\log n</math> recursion calls are done and only a constant number of operations are performed in each of them (to get the value of {{mvar|t}} [[fractional cascading]] should be used).
If a linear algorithm to find the medians is used, the total cost of pre-processing for {{mvar|k}} range median queries is <math> n\log k</math>. The algorithm can also be modified to solve the [[online algorithm|online]] version of the problem.<ref name=ethpaper />
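For comparison, the following self-contained Python sketch answers a single range median query by running plain quickselect on a copy of the queried range; it does not share partitions across queries as described above, so it achieves only expected linear time per query rather than the bounds of this section (the names and the choice of the lower median are illustrative).

<syntaxhighlight lang="python">
import random

def quickselect(values, r):
    """Return the element of rank r (0-indexed) in an unsorted list of distinct values."""
    pivot = random.choice(values)
    low = [v for v in values if v < pivot]
    high = [v for v in values if v > pivot]
    if r < len(low):
        return quickselect(low, r)
    if r == len(low):
        return pivot
    return quickselect(high, r - len(low) - 1)

def range_median(a, i, j):
    """Lower median of a[i..j] (1-indexed, inclusive)."""
    sub = a[i - 1:j]
    return quickselect(sub, (len(sub) - 1) // 2)

print(range_median([7, 1, 5, 3, 9, 4], 2, 5))  # median of [1, 5, 3, 9] -> 3
</syntaxhighlight>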
 
===Majority===
Finding frequent elements in a given set of items is one of the most important tasks in data mining. Finding frequent elements might be difficult when most items have similar frequencies; therefore, a threshold of significance is often used for detecting such items. One of the most famous algorithms for finding the majority of an array was proposed by Boyer and Moore,<ref>{{cite book |last1=Boyer |first1=Robert S. |last2=Moore |first2=J. Strother |date=1991 |chapter=MJRTY—A Fast Majority Vote Algorithm |title=Automated Reasoning |series=Automated Reasoning Series |volume=1 |pages=105–117 |place=Dordrecht |publisher=Springer Netherlands |doi=10.1007/978-94-011-3488-0_5 |isbn=978-94-010-5542-0 }}</ref> which is also known as the [[Boyer–Moore majority vote algorithm]]. Boyer and Moore proposed an algorithm to find the majority element of a string (if it has one) in <math>O(n)</math> time and using <math>O(1)</math> space. In the context of Boyer and Moore's work, and generally speaking, a majority element in a set of items (for example a string or an array) is one whose number of instances is more than half of the size of that set. A few years later, Misra and Gries<ref>{{Cite journal |last1=Misra |first1=J. |last2=Gries |first2=David |date=November 1982 |title=Finding repeated elements |journal=Science of Computer Programming |volume=2 |issue=2 |pages=143–152 |doi=10.1016/0167-6423(82)90012-0 |issn=0167-6423 |doi-access=free |hdl=1813/6345 |hdl-access=free}}</ref> proposed a more general version of Boyer and Moore's algorithm that uses <math>O \left ( n \log \left ( \frac{1}{\tau} \right ) \right )</math> comparisons to find all items in an array whose relative frequencies are greater than some threshold <math>0<\tau<1</math>. A range <math>\tau</math>-majority query is one that, given a subrange of a data structure (for example an array) of size <math>|R|</math>, returns the set of all distinct items that appear more than (or, in some publications, at least) <math>\tau |R|</math> times in that given range. In different structures that support range <math>\tau</math>-majority queries, <math>\tau</math> can be either static (specified during pre-processing) or dynamic (specified at query time). Many of these approaches are based on the fact that, regardless of the size of the range, for a given <math>\tau</math> there can be at most <math>O(1/\tau)</math> distinct ''candidates'' with relative frequencies at least <math>\tau</math>. By verifying each of these candidates in constant time, <math>O(1/\tau)</math> query time is achieved. A range <math>\tau</math>-majority query is decomposable<ref name=":1">{{Cite book |author=Karpiński, Marek |url=https://worldcat.org/oclc/277046650 |title=Searching for frequent colors in rectangles |oclc=277046650}}</ref> in the sense that a <math>\tau</math>-majority in a range <math>R</math> with partitions <math>R_1</math> and <math>R_2</math> must be a <math>\tau</math>-majority in either <math>R_1</math> or <math>R_2</math>.
Due to this decomposability, some data structures answer <math>\tau</math>-majority queries on one-dimensional arrays by finding the [[Lowest common ancestor]] (LCA) of the endpoints of the query range in a [[Range tree]] and validating two sets of candidates (of size <math>O(1/\tau)</math>) from each endpoint to the lowest common ancestor in constant time resulting in <math>O(1/\tau)</math> query time.
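For background on the non-range version of the problem, a minimal Python sketch of the [[Boyer–Moore majority vote algorithm]] discussed above follows; the single pass identifies the only possible majority candidate, and the verification pass confirms whether it really is a majority.

<syntaxhighlight lang="python">
def majority(items):
    """Boyer–Moore majority vote: return the element occurring in more than
    half of the positions of items, or None if there is no such element."""
    candidate, count = None, 0
    for x in items:                 # one pass, O(1) extra space
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    # Verification pass: the candidate is a majority only if it occurs
    # more than len(items) / 2 times.
    if candidate is not None and items.count(candidate) * 2 > len(items):
        return candidate
    return None

print(majority([4, 5, 4, 4, 6, 4, 4]))  # 4
print(majority([1, 2, 3]))              # None
</syntaxhighlight>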
 
==== Two-dimensional arrays ====
Gagie et al.<ref>{{cite book |last1=Gagie |first1=Travis |last2=He |first2=Meng |last3=Munro |first3=J. Ian |last4=Nicholson |first4=Patrick K. |date=2011 |chapter=Finding Frequent Elements in Compressed 2D Arrays and Strings |title=String Processing and Information Retrieval |series=Lecture Notes in Computer Science |volume=7024 |pages=295–300 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |doi=10.1007/978-3-642-24583-1_29 |isbn=978-3-642-24582-4 }}</ref> proposed a data structure that supports range <math>\tau</math>-majority queries on an <math>m\times n</math> array <math>A</math>. For each query <math>\operatorname{Q}=(\operatorname{R}, \tau)</math> in this data structure, a threshold <math>0<\tau<1</math> and a rectangular range <math>\operatorname{R}</math> are specified, and the set of all elements that have relative frequencies (inside that rectangular range) greater than or equal to <math>\tau</math> is returned as the output. This data structure supports dynamic thresholds (specified at query time) and a pre-processing threshold <math>\alpha</math> based on which it is constructed. During the pre-processing, sets of ''vertical'' and ''horizontal'' intervals are built on the <math>m \times n</math> array. Together, a vertical and a horizontal interval form a ''block''. Each block is part of a ''superblock'' nine times bigger than itself (three times the size of the block's horizontal interval and three times the size of its vertical one). For each block a set of candidates (with at most <math>\frac{9}{\alpha}</math> elements) is stored, which consists of elements that have relative frequencies at least <math>\frac{\alpha}{9}</math> (the pre-processing threshold as mentioned above) in its respective superblock. These elements are stored in non-increasing order according to their frequencies, and it is easy to see that any element that has a relative frequency at least <math>\alpha</math> in a block must appear in its set of candidates. Each <math>\tau</math>-majority query is first answered by finding the ''query block'', the biggest block that is contained in the provided query rectangle, in <math>O(1)</math> time. For the obtained query block, the first <math>\frac{9}{\tau}</math> candidates are returned (without being verified) in <math>O(1/\tau)</math> time, so this process might return some false positives. Many other data structures (as discussed below) provide methods for verifying each candidate in constant time and thus maintain the <math>O(1/\tau)</math> query time while returning no false positives. The cases in which the query block is smaller than <math>1/\alpha</math> are handled by storing <math>\log \left ( \frac{1}{\alpha} \right )</math> different instances of this data structure of the following form:
 
<math>\beta=2^{-i}, \;\; i\in \left \{ 1,\dots,\log \left (\frac{1}{\alpha} \right ) \right \}
</math>
 
where <math>\beta</math> is the pre-processing threshold of the <math>i</math>-th instance. Thus, for query blocks smaller than <math>1/\alpha</math>, the <math>\lceil\log (1 / \tau)\rceil</math>-th instance is queried. As mentioned above, this data structure has query time <math>O(1/\tau)</math> and requires <math>O \left ( m n(H+1) \log^2 \left ( \frac{1}{\alpha} \right ) \right )</math> bits of space, where {{mvar|H}} denotes the [[Entropy (information theory)|entropy]] of the distribution of elements in the array, by storing a [[Huffman coding|Huffman-encoded]] copy of it (note the <math>\log(\frac{1}{\alpha})</math> factor).
 
==== One-dimensional arrays ====
Chan et al.<ref name=":0">{{Citationcite book |lastlast1=Chan|firstfirst1=Timothy M.|title=Linear-Space Data Structures for Range Minority Query in Arrays|date=2012|chapter-url=http://dx.doi.org/10.1007/978-3-642-31155-0_26|work=Algorithm Theory – SWAT 2012|pages=295–306|place=Berlin, Heidelberg|publisher=Springer Berlin Heidelberg|isbn=978-3-642-31154-3|access-date=2021-12-20|last2=Durocher|first2=Stephane|last3=Skala|first3=Matthew|last4=Wilkinson|first4=Bryan T.|title=Algorithm Theory – SWAT 2012 |chapter=Linear-Space Data Structures for Range Minority Query in Arrays |series=Lecture Notes in Computer Science |volume=7357 |doi=10.1007/978-3-642-31155-0_26 }}</ref> proposed a data structure that given a one-dimensional array<math>A</math>, a subrange <math>R</math> of <math>A</math> (specified at query time) and a threshold <math>\tau</math> (specified at query time), is able to return the list of all <math>\tau</math>-majorities in <math>O(1/\tau)</math> time requiring <math>O(n \log n)</math> words of space. To answer such queries, Chan et al.<ref name=":0" /> begin by noting that there exists a data structure capable of returning the ''top-k'' most frequent items in a range in <math>O(k)</math> time requiring <math>O(n)</math> words of space. For a one-dimensional array <math>A[0,..,n-1]</math>, let a one-sided top-k range query to be of form <math>A[0..i] \text { for } 0 \leq i \leq n-1</math>. For a maximal range of ranges <math>A[0..i] \text { through } A[0..j]</math> in which the frequency of a distinct element <math>e</math> in <math>A</math> remains unchanged (and equal to <math>f</math>), a horizontal line segment is constructed. The <math>x</math>-interval of this line segment corresponds to <math>[i,j]</math> and it has a <math>y</math>-value equal to <math>f</math>. Since adding each element to <math>A</math> changes the frequency of exactly one distinct element, the aforementioned process creates <math>O(n)</math> line segments. Moreover, for a vertical line <math>x=i</math> all horizonalhorizontal line segments intersecting it are sorted according to their frequencies. Note that, each horizontal line segment with <math>x</math>-interval <math>[\ell,r]</math> corresponds to exactly one distinct element <math>e</math> in <math>A</math>, such that <math>A[\ell]=e</math>. A top-k query can then be answered by shooting a vertical ray <math>x=i</math> and reporting the first <math>k</math> horizontal line segments that intersect it (remember from above that these line line segments are already sorted according to their frequencies) in <math>O(k)</math> time.
 
Chan et al.<ref name=":0" /> first construct a [[range tree]] in which each branching node stores one copy of the data structure described above for one-sided range top-k queries and each leaf represents an element from <math>A</math>. The top-k data structure at each node is constructed based on the values existing in the subtrees of that node and is meant to answer one-sided range top-k queries. Please note that for a one-dimensional array <math>A</math>, a range tree can be constructed by dividing <math>A</math> into two halves and recursing on both halves; therefore, each node of the resulting range tree represents a range. It can also be seen that this range tree requires <math>O(n \log n)</math> words of space, because there are <math>O(\log n)</math> levels and each level <math>\ell</math> has <math>2^{\ell}</math> nodes. Moreover, since at each level <math>\ell</math> of a range tree all nodes have a total of <math>n</math> elements of <math>A</math> at their subtrees and since there are <math>O(\log n)</math> levels, the space complexity of this range tree is <math>O(n \log n)</math>.
 
Using this structure, a range <math>\tau</math>-majority query <math>A[i..j]</math> on <math>A[0..n-1]</math> with <math>0\leq i\leq j \leq n-1</math> is answered as follows. First, the [[lowest common ancestor]] (LCA) of leaf nodes <math>i</math> and <math>j</math> is found in constant time. Note that there exists a data structure requiring <math>O(n)</math> bits of space that is capable of answering the LCA queries in <math>O(1)</math> time.<ref>{{cite book |last1=Sadakane |first1=Kunihiko |last2=Navarro |first2=Gonzalo |date=2010-01-17 |chapter=Fully-Functional Succinct Trees |title=Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms |pages=134–149 |___location=Philadelphia, PA |publisher=Society for Industrial and Applied Mathematics |doi=10.1137/1.9781611973075.13 |isbn=978-0-89871-701-3 |s2cid=3189222 |doi-access=free}}</ref> Let <math>z</math> denote the LCA of <math>i</math> and <math>j</math>; using <math>z</math> and according to the decomposability of range <math>\tau</math>-majority queries (as described above and in <ref name=":1" />), the two-sided range query <math>A[i..j]</math> can be converted into two one-sided range top-k queries (from <math>z</math> to <math>i</math> and to <math>j</math>). These two one-sided range top-k queries return the top-(<math>1/\tau</math>) most frequent elements in each of their respective ranges in <math>O(1/\tau)</math> time. These frequent elements make up the set of ''candidates'' for <math>\tau</math>-majorities in <math>A[i..j]</math>, in which there are <math>O(1/\tau)</math> candidates, some of which might be false positives. Each candidate is then assessed in constant time using a linear-space data structure (as described in Lemma 3 in <ref>{{Cite journal |last1=Chan |first1=Timothy M. |last2=Durocher |first2=Stephane |last3=Larsen |first3=Kasper Green |last4=Morrison |first4=Jason |last5=Wilkinson |first5=Bryan T. |date=2013-03-08 |title=Linear-Space Data Structures for Range Mode Query in Arrays |journal=Theory of Computing Systems |volume=55 |issue=4 |pages=719–741 |doi=10.1007/s00224-013-9455-2 |s2cid=253747004 |issn=1432-4350}}</ref>) that is able to determine in <math>O(1)</math> time whether or not a given subrange of an array <math>A</math> contains at least <math>q</math> instances of a particular element <math>e</math>.
 
==== Tree paths ====
Gagie et al.<ref name=":2">{{Cite journal|lastlast1=Gagie|firstfirst1=Travis|last2=He|first2=Meng|last3=Navarro|first3=Gonzalo|last4=Ochoa|first4=Carlos|date=September 2020|title=Tree path majority data structures|url=http://dx.doi.org/10.1016/j.tcs.2020.05.039|journal=Theoretical Computer Science|volume=833|pages=107–119|doi=10.1016/j.tcs.2020.05.039|issn=0304-3975|arxiv=1806.01804}}</ref> proposed a data structure which supports queries such that, given two nodes <math>u</math> and <math>v</math> in a tree, are able to report the list of elements that have a greater relative frequency than <math>\tau</math> on the path from <math>u</math> to <math>v</math>. More formally, let <math>T</math> be a labelled tree in which each node has a label from an alphabet of size <math>\sigma</math>. Let <math>label(u)\in [1,\dots,\sigma]</math> denote the label of node <math>u</math> in <math>T</math>. Let <math>P_{uv}</math> denote the unique path from <math>u</math> to <math>v</math> in <math>T</math> in which middle nodes are listed in the order they are visited. Given <math>T</math>, and a fixed (specified during preprocessingpre-processing) threshold <math>0<\tau<1</math>, a query <math>Q(u,v)</math> must return the set of all labels that appear more than <math>\tau|P_{uv}|</math> times in <math>P_{uv}</math>.
 
To construct this data structure, first <math>{O}(\tau n)</math> nodes are ''marked''. This can be done by marking any node whose distance from the bottom of the tree (its height) is at least <math>\lceil 1 / \tau\rceil</math> and whose depth is divisible by <math>\lceil 1 / \tau\rceil</math>. After doing this, it can be observed that the distance between each node and its nearest marked ancestor is less than <math>2\lceil 1 / \tau\rceil</math>. For a marked node <math>x</math>, <math>\log(depth(x))</math> different sequences (paths towards the root) <math>P_i(x)</math> are stored,
<math display="block">P_i(x) = \left(x, \operatorname{par}(x), \operatorname{par}^{2}(x), \ldots, \operatorname{par}^{2^{i}}(x)\right)</math>
for <math>0\leq i \leq \log(depth(x))</math>, where <math>\operatorname{par}(x)</math> returns the label of the direct parent of node <math>x</math>. Put another way, for each marked node, the set of all paths of power-of-two length (plus one for the node itself) towards the root is stored. Moreover, for each <math>P_i(x)</math>, the set of all majority ''candidates'' <math>C_i(x)</math> is stored. More specifically, <math>C_i(x)</math> contains the set of all <math>(\tau/2)</math>-majorities in <math>P_i(x)</math>, that is, the labels that appear more than <math>(\tau/2)\cdot(2^i+1)</math> times in <math>P_i(x)</math>. It is easy to see that the set of candidates <math>C_i(x)</math> can have at most <math>2/\tau</math> distinct labels for each <math>i</math>. Gagie et al.<ref name=":2"/> then note that the set of all <math>\tau</math>-majorities in the path from any marked node <math>x</math> to one of its ancestors <math>z</math> is included in some <math>C_i(x)</math> (Lemma 2 in <ref name=":2"/>): since the length of <math>P_i(x)</math> is equal to <math>(2^i+1)</math>, there exists a <math>P_i(x)</math> for <math>0\leq i \leq \log(depth(x))</math> whose length is between <math>d_{xz}</math> and <math>2 d_{xz}</math>, where <math>d_{xz}</math> is the distance between <math>x</math> and <math>z</math>. The existence of such a <math>P_i(x)</math> implies that a <math>\tau</math>-majority in the path from <math>x</math> to <math>z</math> must be a <math>(\tau/2)</math>-majority in <math>P_i(x)</math>, and thus must appear in <math>C_i(x)</math>. It is easy to see that this data structure requires <math>O(n \log n)</math> words of space: as mentioned above, in the construction phase <math>O(\tau n)</math> nodes are marked, and for each marked node <math>O(\log n)</math> candidate sets are stored, each of which contains <math>O(1/\tau)</math> candidates. Therefore, this data structure requires <math>O(\log n \times (1/\tau) \times \tau n)=O(n \log n)</math> words of space. Note that each node <math>x</math> also stores <math>count(x)</math>, which is equal to the number of instances of <math>label(x)</math> on the path from <math>x</math> to the root of <math>T</math>; this does not increase the space complexity since it only adds a constant number of words per node.
 
Each query between two nodes <math>u</math> and <math>v</math> can be answered by using the decomposability property (as explained above) of range <math>\tau</math>-majority queries and by breaking the query path between <math>u</math> and <math>v</math> into four subpaths. Let <math>z</math> be the lowest common ancestor of <math>u</math> and <math>v</math>, with <math>x</math> and <math>y</math> being the nearest marked ancestors of <math>u</math> and <math>v</math> respectively. The path from <math>u</math> to <math>v</math> is decomposed into the paths from <math>u</math> and <math>v</math> to <math>x</math> and <math>y</math> respectively (the sizes of these paths are smaller than <math>2\lceil 1 / \tau\rceil</math> by definition, and all of their labels are considered as candidates), and the paths from <math>x</math> and <math>y</math> to <math>z</math> (by finding the suitable <math>C_i(x)</math> as explained above and considering all of its labels as candidates). Note that boundary nodes have to be handled accordingly so that all of these subpaths are disjoint, and from all of them a set of <math>O(1/\tau)</math> candidates is derived. Each of these candidates is then verified using a combination of the <math>\operatorname{labelanc}(x, \ell)</math> query, which returns the lowest ancestor of node <math>x</math> that has label <math>\ell</math>, and the <math>count(x)</math> fields of each node. On a <math>w</math>-bit RAM and an alphabet of size <math>\sigma</math>, the <math>\operatorname{labelanc}(x, \ell)</math> query can be answered in <math>O\left(\log \log _{w} \sigma\right)</math> time whilst having linear space requirements.<ref>{{Cite journal |last1=He |first1=Meng |last2=Munro |first2=J. Ian |last3=Zhou |first3=Gelin |date=2014-07-08 |title=A Framework for Succinct Labeled Ordinal Trees over Large Alphabets |journal=Algorithmica |volume=70 |issue=4 |pages=696–717 |doi=10.1007/s00453-014-9894-4 |s2cid=253977813 |issn=0178-4617}}</ref> Therefore, verifying each of the <math>O(1/\tau)</math> candidates in <math>O\left(\log \log _{w} \sigma\right)</math> time results in <math>O\left((1/\tau)\log \log _{w} \sigma\right)</math> total query time for returning the set of all <math>\tau</math>-majorities on the path from <math>u</math> to <math>v</math>.
 
==Related problems==
All the problems described above have been studied for higher dimensions as well as in their dynamic versions. On the other hand, range queries might be extended to other data structures like [[Tree (data structure)|trees]],<ref name="morin kranakis">{{cite book |doi=10.1007/978-3-540-31856-9_31 |chapter-url=https://cglab.ca/~morin/publications/ds/rmq2-stacs.pdf |chapter=Approximate Range Mode and Range Median Queries |title=Stacs 2005 |series=Lecture Notes in Computer Science |date=2005 |last1=Bose |first1=Prosenjit |author1-link=Jit Bose |last2=Kranakis |first2=Evangelos |last3=Morin |first3=Pat |author3-link=Pat Morin |last4=Tang |first4=Yihui |volume=3404 |pages=377–388 |isbn=978-3-540-24998-6 }}</ref> such as the [[level ancestor problem]]. A similar family of problems is that of [[Range searching|orthogonal range]] queries, also known as counting queries.
 
== See also ==
 
==External links==
*[https://opendatastructures.org/versions/edition-0.1c/ods-java/node64.html Open Data Structure - Chapter 13 - Data Structures for Integers]
*[https://www.cs.au.dk/~gerth/papers/isaac09median.pdf Data Structures for Range Median Queries - Gerth Stolting Brodal and Allan Gronlund Jorgensen]
 
{{CS-Trees}}