In [[computer science]], a '''Cartesian tree''' is a [[binary tree]] derived from a sequence of distinct numbers. The smallest number in the sequence is at the root of the tree; its left and right subtrees are constructed recursively from the subsequences to the left and right of this number. When all numbers are distinct, the Cartesian tree is uniquely defined from the properties that it is [[Heap (data structure)|heap]]-ordered and that a [[Tree traversal|symmetric (in-order) traversal]] of the tree returns the original sequence.
 
Cartesian trees were introduced by {{harvtxt|Vuillemin|1980}} in the context of geometric [[range searching]] data structures. They have also been used in the definition of the [[treap]] and [[randomized binary search tree]] data structures for [[binary search]] problems, in [[comparison sort]] algorithms that perform efficiently on nearly-sorted inputs, and as the basis for [[pattern matching]] algorithms. A Cartesian tree for a sequence can be constructed in [[linear time]].
 
==Definition==
Cartesian trees are defined using [[binary tree]]s, which are a form of [[rooted tree]]. These are mathematical structures rather than computer [[data structure]]s, although Cartesian trees have been used as part of the definition of certain data structures. A ''rooted tree'' consists of a collection of nodes, with bidirectional links connecting ''parent'' and ''child'' nodes, such that repeatedly following parent links, starting from any node, will always reach a unique ''root'' node. In a ''binary tree'', each node has at most two children, designated as its left and right child. As well, each node may have some associated data (a number, in the case of a Cartesian tree). A Cartesian tree is defined by a [[recursion|recursive]] construction. To construct the Cartesian tree for a given sequence of distinct numbers, set its root to be the minimum number in the sequence,<ref name=minmax>In some references, the ordering of the numbers is reversed, so the root node instead holds the maximum value and the tree has the max-heap property rather than the min-heap property.</ref> and recursively construct its left and right subtrees from the subsequences before and after this number, respectively. As a base case, when one of these subsequences is empty, there is no left or right child.{{sfnp|Vuillemin|1980}}
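This definition translates directly into code. The following Python sketch (in which the <code>Node</code> class and function name are illustrative, not standard) builds the tree exactly as described; it takes quadratic time in the worst case, in contrast to the [[linear time]] constructions described below.
<syntaxhighlight lang="python">
from typing import Optional, Sequence


class Node:
    """A binary tree node holding one number of the sequence."""
    def __init__(self, value: float,
                 left: "Optional[Node]" = None,
                 right: "Optional[Node]" = None):
        self.value = value
        self.left = left
        self.right = right


def cartesian_tree(seq: Sequence[float]) -> Optional[Node]:
    if not seq:
        return None  # base case: an empty subsequence contributes no node
    i = min(range(len(seq)), key=seq.__getitem__)  # position of the minimum
    # The minimum becomes the root; the subsequences before and after it
    # become, recursively, the left and right subtrees.
    return Node(seq[i], cartesian_tree(seq[:i]), cartesian_tree(seq[i + 1:]))
</syntaxhighlight>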
 
It is also possible to characterize the Cartesian tree directly rather than recursively, using its ordering properties. In any tree, the ''subtree'' rooted at any node consists of that node and all other nodes that can reach it by repeatedly following parent pointers. The Cartesian tree for a sequence of distinct numbers is defined by the following properties:
#A [[Tree_traversal#In-order,_LNR|symmetric (in-order) traversal]] of the tree results in the original sequence. That is, the left subtree consists of the values earlier than the root in the sequence order, while the right subtree consists of the values later than the root, and a similar ordering constraint holds at each lower node of the tree.
#The tree has the [[Binary heap|min-heap property]]: the parent of any non-root node has a smaller value than the node itself.<ref name=minmax/>
These two definitions are equivalent: the tree defined recursively as described above is the unique tree that has these two properties. If a sequence of numbers contains repetitions, a Cartesian tree can be determined for it by following a consistent tie-breaking rule before applying the above construction. For instance, the first of two equal elements can be treated as the smaller of the two.{{sfnp|Vuillemin|1980}}
 
==Efficient construction==
A Cartesian tree can be constructed in [[linear time]] from its input sequence.
One method is to simply process the sequence values in left-to-right order, maintaining the Cartesian tree of the nodes processed so far, in a structure that allows both upwards and downwards traversal of the tree. To process each new value <math>a</math>, start at the node representing the value prior to <math>a</math> in the sequence and follow the path from this node to the root of the tree until finding a value <math>b</math> smaller than <math>a</math>. The node <math>a</math> becomes the right child of <math>b</math>, and the previous right child of <math>b</math> becomes the new left child of <math>a</math>. The total time for this procedure is linear, because the time spent searching for the parent <math>b</math> of each new node <math>a</math> can be [[Potential method|charged]] against the number of nodes that are removed from the rightmost path in the tree.{{sfnp|Gabow|Bentley|Tarjan|1984}}
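One possible Python realization of this procedure is sketched below, keeping the rightmost path of the tree on an explicit stack. Each value is pushed once and popped at most once, which gives the linear time bound.
<syntaxhighlight lang="python">
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None


def cartesian_tree(seq):
    root = None
    rightmost_path = []  # nodes on the rightmost path, root first
    for value in seq:
        node = Node(value)
        last_popped = None
        # Walk upward until a value smaller than the new one is found.
        while rightmost_path and rightmost_path[-1].value > value:
            last_popped = rightmost_path.pop()
        # The displaced part of the rightmost path becomes the left child.
        node.left = last_popped
        if rightmost_path:
            rightmost_path[-1].right = node  # new right child of the parent
        else:
            root = node  # no smaller value exists: the new node is the root
        rightmost_path.append(node)
    return root
</syntaxhighlight>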
 
An alternative linear-time construction algorithm is based on the [[all nearest smaller values]] problem. In the input sequence, one may define the ''left neighbor'' of a value <math>a</math> to be the value that occurs prior to <math>a</math>, is smaller than <math>a</math>, and is closer in position to <math>a</math> than any other smaller value. The ''right neighbor'' is defined symmetrically. The sequence of left neighbors can be found by an algorithm that maintains a [[stack (data structure)|stack]] containing a subsequence of the input. For each new sequence value <math>a</math>, the stack is popped until it is empty or its top element is smaller than <math>a</math>, and then <math>a</math> is pushed onto the stack. The left neighbor of <math>a</math> is the top element at the time <math>a</math> is pushed. The right neighbors can be found by applying the same stack algorithm to the reverse of the sequence. The parent of <math>a</math> in the Cartesian tree is either the left neighbor of <math>a</math> or the right neighbor of <math>a</math>, whichever exists and has a larger value. The left and right neighbors can also be constructed efficiently by [[parallel algorithm]]s, making this formulation useful in efficient parallel algorithms for Cartesian tree construction.<ref>{{harvtxt|Berkman|Schieber|Vishkin|1993}}.</ref>
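A Python sketch of the stack algorithm and of the parent rule (the function names are illustrative):
<syntaxhighlight lang="python">
def nearest_smaller_to_left(seq):
    """For each position, the index of its left neighbor (the closest
    earlier position with a smaller value), or None if there is none."""
    result = []
    stack = []  # indices whose values form an increasing subsequence
    for i, value in enumerate(seq):
        while stack and seq[stack[-1]] >= value:
            stack.pop()
        result.append(stack[-1] if stack else None)
        stack.append(i)
    return result


def cartesian_tree_parents(seq):
    """The parent of each position: its left or right neighbor,
    whichever exists and holds the larger value (None for the root)."""
    n = len(seq)
    left = nearest_smaller_to_left(seq)
    # Right neighbors: run the same algorithm on the reversed sequence,
    # then translate the reversed indices back to the original ones.
    right = [None if j is None else n - 1 - j
             for j in reversed(nearest_smaller_to_left(seq[::-1]))]
    parents = []
    for l, r in zip(left, right):
        if l is None:
            parents.append(r)  # r may also be None: the root has no parent
        elif r is None:
            parents.append(l)
        else:
            parents.append(l if seq[l] > seq[r] else r)
    return parents
</syntaxhighlight>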
 
Another linear-time algorithm for Cartesian tree construction is based on divide-and-conquer. The algorithm recursively constructs the tree on each half of the input, and then merges the two trees by taking the right spine of the left tree and the left spine of the right tree (both of which are paths whose root-to-leaf order sorts their values from smallest to largest) and performing a standard [[Merge algorithm#Merging two lists|merging]] operation that replaces these two paths by a single path containing the same nodes. In the merged path, the successor in sorted order of each node from the left tree becomes its right child, and the successor of each node from the right tree becomes its left child, the same position that was previously used for its successor in the spine. The left children of nodes from the left tree and right children of nodes from the right tree are left unchanged. The algorithm is also parallelizable since, on each level of recursion, each of the two sub-problems can be computed in parallel, and the merging operation can be [[Merge algorithm#Parallel merge|efficiently parallelized]] as well.{{sfnp|Shun|Blelloch|2014}}
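The spine merge admits a compact recursive statement, sketched in Python below with an illustrative <code>Node</code> dataclass; <code>merge</code> assumes that every value in its first argument occurs earlier in the sequence than every value in its second argument.
<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    value: float
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None


def merge(a: Optional[Node], b: Optional[Node]) -> Optional[Node]:
    """Merge two Cartesian trees of adjacent subsequences."""
    if a is None:
        return b
    if b is None:
        return a
    if a.value < b.value:
        # a stays on the merged path; its successor on the path
        # occupies its right-child slot, as described above.
        a.right = merge(a.right, b)
        return a
    b.left = merge(a, b.left)
    return b


def cartesian_tree(seq):
    if not seq:
        return None
    if len(seq) == 1:
        return Node(seq[0])
    mid = len(seq) // 2
    return merge(cartesian_tree(seq[:mid]), cartesian_tree(seq[mid:]))
</syntaxhighlight>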
==Applications==
===Range searching and lowest common ancestors===
[[File:Cartesian tree range searching.svg|thumb|300px|Two-dimensional range-searching using a Cartesian tree: the bottom point (red in the figure) within a three-sided region with two vertical sides and one horizontal side (if the region is nonempty) can be found as the nearest common ancestor of the leftmost and rightmost points (the blue points in the figure) within the slab defined by the vertical region boundaries. The remaining points in the three-sided region can be found by splitting it by a vertical line through the bottom point and recursing.]]
Cartesian trees form part of an efficient [[data structure]] for [[Range Minimum Query|range minimum queries]], a [[range searching]] problem involving queries that ask for the minimum value in a contiguous subsequence of the original sequence.<ref>{{harvtxt|Gabow|Bentley|Tarjan|1984}}; {{harvtxt|Bender|Farach-Colton|2000}}.</ref> In a Cartesian tree, this minimum value can be found at the [[lowest common ancestor]] of the leftmost and rightmost values in the subsequence. For instance, in the subsequence (12,10,20,15) of the sequence shown in the first illustration, the minimum value of the subsequence (10) forms the lowest common ancestor of the leftmost and rightmost values (12 and 15). Because lowest common ancestors can be found in constant time per query, using a data structure that takes linear space to store and that can be constructed in linear time, the same bounds hold for the range minimization problem.<ref>{{harvtxt|Harel|Tarjan|1984}}; {{harvtxt|Schieber|Vishkin|1988}}.</ref>
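The correspondence can be illustrated by the following Python sketch. For brevity it finds lowest common ancestors by walking toward the root rather than with the constant-time structures cited above, so only its correctness, not its query time, matches the discussion; the parent array comes from the stack construction described earlier.
<syntaxhighlight lang="python">
def range_minimum_structure(seq):
    """Return query(l, r) giving the minimum of seq[l..r], found at the
    lowest common ancestor of positions l and r in the Cartesian tree."""
    n = len(seq)
    parent = [None] * n
    stack = []  # rightmost path of the tree built so far, as indices
    for i in range(n):
        last = None
        while stack and seq[stack[-1]] > seq[i]:
            last = stack.pop()
        if last is not None:
            parent[last] = i  # the last popped node becomes i's left child
        if stack:
            parent[i] = stack[-1]  # i becomes the right child of the top
        stack.append(i)

    def depth(i):
        d = 0
        while parent[i] is not None:
            i, d = parent[i], d + 1
        return d

    def query(l, r):
        # Walk both endpoints up to equal depth, then up in lockstep;
        # the meeting node is the lowest common ancestor.
        dl, dr = depth(l), depth(r)
        while dl > dr:
            l, dl = parent[l], dl - 1
        while dr > dl:
            r, dr = parent[r], dr - 1
        while l != r:
            l, r = parent[l], parent[r]
        return seq[l]

    return query
</syntaxhighlight>
For example, with <code>query = range_minimum_structure([9, 3, 7, 1, 8, 12, 10])</code> (an illustrative sequence), <code>query(4, 6)</code> returns 8, the minimum of the subsequence (8, 12, 10).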
 
{{harvtxt|Bender|Farach-Colton|2000}} reversed this relationship between the two data structure problems by showing that data structures for range minimization could also be used for finding lowest common ancestors. Their data structure associates with each node of the tree its distance from the root, and constructs a sequence of these distances in the order of an [[Euler tour]] of the (edge-doubled) tree. It then constructs a range minimization data structure for the resulting sequence. The lowest common ancestor of any two vertices in the given tree can be found as the minimum distance appearing in the interval between the initial positions of these two vertices in the sequence. Bender and Farach-Colton also provide a method for range minimization that can be used for the sequences resulting from this transformation, which have the special property that adjacent sequence values differ by ±1. As they describe, for range minimization in sequences that do not have this form, it is possible to use Cartesian trees to reduce the range minimization problem to lowest common ancestors, and then to use Euler tours to reduce lowest common ancestors to a range minimization problem with this special form.{{sfnp|Bender|Farach-Colton|2000}}
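A Python sketch of this reduction, with a naive scan standing in for the special-form range minimization structure, and with the tree given as an illustrative mapping from nodes to lists of children:
<syntaxhighlight lang="python">
def lowest_common_ancestors(children, root):
    """children maps each node to a list of its children.  Returns
    lca(u, v), computed by range minimization over an Euler tour."""
    tour, depths, first = [], [], {}

    def euler(u, d):
        first[u] = len(tour)  # first occurrence of u in the tour
        tour.append(u)
        depths.append(d)
        for c in children.get(u, []):
            euler(c, d + 1)
            tour.append(u)    # return to u after visiting each child,
            depths.append(d)  # so adjacent depths always differ by 1

    euler(root, 0)

    def lca(u, v):
        i, j = sorted((first[u], first[v]))
        # Bender and Farach-Colton replace this linear scan by their
        # range minimization structure for sequences with +-1 steps.
        k = min(range(i, j + 1), key=depths.__getitem__)
        return tour[k]

    return lca
</syntaxhighlight>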
 
The same range minimization problem may also be given an alternative interpretation in terms of two-dimensional range searching. A collection of finitely many points in the [[Cartesian plane]] can be used to form a Cartesian tree, by sorting the points by their <math>x</math>-coordinates and using the <math>y</math>-coordinates in this order as the sequence of values from which this tree is formed. If <math>S</math> is the subset of the input points within some vertical slab defined by the inequalities <math>L\le x\le R</math>, <math>p</math> is the leftmost point in <math>S</math> (the one with minimum <math>x</math>-coordinate), and <math>q</math> is the rightmost point in <math>S</math> (the one with maximum <math>x</math>-coordinate), then the lowest common ancestor of <math>p</math> and <math>q</math> in the Cartesian tree is the bottommost point in the slab. A three-sided range query, in which the task is to list all points within a region bounded by the three inequalities <math>L\le x\le R</math> and <math>y\le T</math>, can be answered by finding this bottommost point <math>b</math>, comparing its <math>y</math>-coordinate to <math>T</math>, and (if the point lies within the three-sided region) continuing recursively in the two slabs bounded between <math>p</math> and <math>b</math> and between <math>b</math> and <math>q</math>. In this way, after the leftmost and rightmost points in the slab are identified, all points within the three-sided region can be listed in constant time per point.{{sfnp|Gabow|Bentley|Tarjan|1984}}
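A Python sketch of this recursion follows; here the bottommost point of each slab is found by a linear scan, where an implementation meeting the stated time bound would instead use the range minimization structure described above.
<syntaxhighlight lang="python">
import bisect


def three_sided_query(points, L, R, T):
    """Report all (x, y) with L <= x <= R and y <= T.
    `points` must be sorted by x-coordinate."""
    xs = [x for x, _ in points]
    lo = bisect.bisect_left(xs, L)
    hi = bisect.bisect_right(xs, R)
    found = []

    def search(lo, hi):  # examine the slab points[lo:hi]
        if lo >= hi:
            return
        # The bottommost point of the slab (a range-minimum query).
        b = min(range(lo, hi), key=lambda i: points[i][1])
        if points[b][1] <= T:
            found.append(points[b])
            search(lo, b)      # slab between the left boundary and b
            search(b + 1, hi)  # slab between b and the right boundary

    search(lo, hi)
    return found
</syntaxhighlight>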
 
The same construction, of lowest common ancestors in a Cartesian tree, makes it possible to construct a data structure with linear space that allows the distances between pairs of points in any [[ultrametric space]] to be queried in constant time per query. The distance within an ultrametric is the same as the [[widest path problem|minimax path]] weight in the [[minimum spanning tree]] of the metric.<ref>{{harvtxt|Hu|1961}}; {{harvtxt|Leclerc|1981}}.</ref> From the minimum spanning tree, one can construct a Cartesian tree, the root node of which represents the heaviest edge of the minimum spanning tree. Removing this edge partitions the minimum spanning tree into two subtrees, and Cartesian trees recursively constructed for these two subtrees form the children of the root node of the Cartesian tree. The leaves of the Cartesian tree represent points of the metric space, and the lowest common ancestor of two leaves in the Cartesian tree is the heaviest edge between those two points in the minimum spanning tree, which has weight equal to the distance between the two points. Once the minimum spanning tree has been found and its edge weights sorted, the Cartesian tree can be constructed in linear time.{{sfnp|Demaine|Landau|Weimann|2009}}
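The same tree can also be built bottom-up rather than by the top-down recursion just described: processing the sorted edges from lightest to heaviest creates each internal node after its two children, leaving the heaviest edge at the root. The following Python sketch takes this route, using a [[disjoint-set data structure|union-find structure]] to track the partially built tree of each component; the tuple-based node representation is illustrative.
<syntaxhighlight lang="python">
def ultrametric_tree(n, mst_edges):
    """Cartesian tree of a minimum spanning tree on points 0..n-1.
    `mst_edges` is a list of (weight, u, v) triples sorted by weight.
    Leaves are ('leaf', point); internal nodes are
    ('edge', weight, left_subtree, right_subtree)."""
    leader = list(range(n))  # union-find with path halving

    def find(x):
        while leader[x] != x:
            leader[x] = leader[leader[x]]
            x = leader[x]
        return x

    subtree = [('leaf', i) for i in range(n)]  # tree built per component
    root = None
    for weight, u, v in mst_edges:
        ru, rv = find(u), find(v)
        # The edge joins two components built from strictly lighter
        # edges; it becomes the parent of their two subtrees.
        root = ('edge', weight, subtree[ru], subtree[rv])
        leader[ru] = rv
        subtree[rv] = root
    return root
</syntaxhighlight>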
 
===As a binary search tree===
Because a Cartesian tree is a binary tree, it is natural to use it as a [[binary search tree]] for an ordered sequence of values. However, defining a Cartesian tree based on the same values that form the search keys of a binary search tree does not work well: the Cartesian tree of a sorted sequence is just a [[path graph|path]], rooted at its leftmost endpoint, and binary searching in this tree degenerates to [[sequential search]] in the path. It is nevertheless possible to generate more-balanced search trees by generating ''priority'' values for each search key that are different from the key itself, sorting the inputs by their key values, and using the corresponding sequence of priorities to generate a Cartesian tree. This construction may equivalently be viewed in the geometric framework described above, in which the <math>x</math>-coordinates of a set of points are the search keys and the <math>y</math>-coordinates are the priorities.{{sfnp|Seidel|Aragon|1996}}
 
This idea was applied by {{harvtxt|Seidel|Aragon|1996}}, who suggested the use of random numbers as priorities. The data structure resulting from this random choice is called a [[treap]], due to its combination of binary search tree and binary heap features. An insertion into a treap can be performed by inserting the new key as a leaf of an existing tree, choosing a priority for it, and then performing [[tree rotation]] operations along a path from the node to the root of the tree to repair any violations of the heap property caused by this insertion; a deletion can similarly be performed by a constant amount of change to the tree followed by a sequence of rotations along a single path in the tree.{{sfnp|Seidel|Aragon|1996}} A variation on this data structure called a zip tree uses the same idea of random priorities, but simplifies the random generation of the priorities, and performs insertions and deletions in a different way, by splitting the sequence and its associated Cartesian tree into two subsequences and two trees and then recombining them.{{sfnp|Tarjan|Levy|Timmel|2021}}
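A Python sketch of treap insertion, under the min-rooted convention used in this article (the class and function names are illustrative):
<syntaxhighlight lang="python">
import random


class TreapNode:
    def __init__(self, key):
        self.key = key
        self.priority = random.random()  # randomly chosen priority
        self.left = None
        self.right = None


def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    return x


def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y


def insert(root, key):
    """Insert `key` as in a binary search tree, then rotate it upward
    along the search path wherever the heap property on priorities
    is violated."""
    if root is None:
        return TreapNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.priority < root.priority:
            root = rotate_right(root)  # repair the heap property
    else:
        root.right = insert(root.right, key)
        if root.right.priority < root.priority:
            root = rotate_left(root)
    return root
</syntaxhighlight>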
 
If the priorities are chosen randomly and independently, once for each key at the time the key is inserted into the tree, the resulting Cartesian tree will have the same properties as a [[random binary search tree]], a tree computed by inserting the keys in a randomly chosen [[permutation]] starting from an empty tree, with each insertion leaving the previous tree structure unchanged and inserting the new node as a leaf of the tree. Random binary search trees had been studied for much longer, and are known to behave well as search trees (they have [[logarithm]]ic depth with high probability); the same good behavior carries over to treaps. It is also possible, as suggested by Aragon and Seidel, to reprioritize frequently-accessed nodes, causing them to move towards the root of the treap and speeding up future accesses for the same keys.{{sfnp|Seidel|Aragon|1996}}
===In sorting===
[[File:Bracketing pairs.svg|thumb|300px|Pairs of consecutive sequence values (shown as the thick red edges) that bracket a sequence value (the darker blue point). The cost of including this value in the sorted order produced by the Levcopoulos–Petersson algorithm is proportional to the [[logarithm]] of this number of bracketing pairs.]]
{{harvtxt|Levcopoulos|Petersson|1989}} describe a [[sorting algorithm]] based on Cartesian trees. They describe the algorithm as based on a tree with the maximum at the root,{{sfnp|Levcopoulos|Petersson|1989}} but it can be modified straightforwardly to support a Cartesian tree with the convention that the minimum value is at the root. For consistency, it is this modified version of the algorithm that is described below.
 
The Levcopoulos–Petersson algorithm can be viewed as a version of [[selection sort]] or [[heap sort]] that maintains a [[priority queue]] of candidate minima, and that at each step finds and removes the minimum value in this queue, moving this value to the end of an output sequence. In their algorithm, the priority queue consists only of elements whose parent in the Cartesian tree has already been found and removed. Thus, the algorithm consists of the following steps:{{sfnp|Levcopoulos|Petersson|1989}}
#Construct a Cartesian tree for the input sequence.
#Initialize a priority queue, initially containing only the tree root.
#While the priority queue is non-empty:
#*Find and remove the minimum value <math>x</math> in the priority queue.
#*Add <math>x</math> to the output sequence.
#*Add the Cartesian tree children of <math>x</math> to the priority queue.
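These steps can be sketched in Python as follows, with the standard library's binary heap as the priority queue and the stack-based tree construction described earlier; Levcopoulos and Petersson's adaptive time bound additionally depends on using a priority queue that is fast when its contents are small.
<syntaxhighlight lang="python">
import heapq


def cartesian_tree_sort(seq):
    n = len(seq)
    left, right = [None] * n, [None] * n
    stack = []  # step 1: linear-time stack construction of the tree
    for i in range(n):
        last = None
        while stack and seq[stack[-1]] > seq[i]:
            last = stack.pop()
        left[i] = last
        if stack:
            right[stack[-1]] = i
        stack.append(i)
    if n == 0:
        return []
    output = []
    queue = [(seq[stack[0]], stack[0])]  # step 2: queue holds the root
    while queue:  # step 3: repeatedly extract the minimum
        value, i = heapq.heappop(queue)
        output.append(value)
        for child in (left[i], right[i]):
            if child is not None:
                heapq.heappush(queue, (seq[child], child))
    return output
</syntaxhighlight>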