Treaps: The Magical Awesome BBT
Treaps are an awesome variant of balanced binary tree. By making some use of randomness, we have a BBT that is expected to be of logarithmic height, but is also easy to code, requiring none of the special-case bashing that characterizes red-black trees. Here, we'll code up a complete treap implementation in Lisp (particularly, pylisp) that is also fully functional.
Binary Trees
Note
What I'm calling a "binary tree" here is normally called a binary search tree. In true Lisp fashion, I'm cutting out the unnecessary word "search" for now — we'll never have the occasion to discuss standard binary trees.
Quick review: what's a "binary tree"? Simply put, it's a tree where each node has both a key and a value, and for each node, the left child has a smaller key and the right child has a larger key (we specifically avoid the case of non-unique keys, because we'll be discussing BBTs as a way of simulating maps). Recursing, we can see that the left subtree of any node has only nodes with smaller keys, and the right subtree of any node has only nodes with larger keys. This enables fast lookups for any key: in effect, we binary search on the keys. Of course, this only works if the left and right subtrees of all the nodes we pass are approximately equal in size. Now, how do we guarantee that? Well, that's the great problem of "balancing" a binary tree.
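To make the lookup concrete before we worry about balance, here's a minimal sketch in Python rather than pylisp (the Node class and names here are mine, purely illustrative):

```python
class Node:
    def __init__(self, key, val, left=None, right=None):
        self.key, self.val, self.left, self.right = key, val, left, right

def get(key, root):
    """Binary search down the tree: O(height) comparisons."""
    while root is not None:
        if key < root.key:
            root = root.left
        elif key > root.key:
            root = root.right
        else:
            return root.val
    raise KeyError(key)

# keys in sorted order: 2 < 5 < 7
tree = Node(5, "a", Node(2, "c"), Node(7, "b"))
```

Each comparison discards one whole subtree, which is why the cost is the height of the tree, and why we care so much about keeping that height small.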
There exist many, many algorithms to balance binary trees, but most of them are absolutely horrible, horrible things with thousands of special cases. Which is a pity, really, as binary trees lend themselves to functional code very well, unlike, say, hash tables, which cannot be copied in part. There are further benefits to balanced binary trees. For example, say you're writing an address book, and you need to store key-value pairs of contacts and phone numbers. A naive programmer would use a hash table, but this is incorrect. After all, you only ever display the names in order, and thus by storing the contacts as a binary tree, you transform \(O(n \log n)\) code (because of the sort) into \(O(n)\) code (just the iteration). The ability to send small updates as opposed to complete new hash tables is another advantage of balanced binary trees. And it is for this reason that we embark on our epic journey to balance binary trees using treaps.
One crucial operation on binary trees is rotation. What this does is modify the tree in such a way as to preserve the ordering of the keys.
Figure 1: Left and right rotation in a binary search tree
In the above picture, I'm not actually drawing subtrees that don't matter — the Greek letters represent full subtrees that we just preserve while rotating. If you're confused about which subtree ends up going where, take a deep breath, or maybe a short nap, and start from the top. In any case, it should be clear by inspection that this transformation retains the tree's order properties, which is really its most important feature. Further, it provides a way to move nodes upward in the tree — just as x moves upward as a result of the transformation above, we could apply another rotation to move it yet further and further, until it became the root of the tree. In fact, any two binary trees can be transformed into each other using only tree rotations (assuming of course that they have the same number of elements). I should note that going left to right in the above diagram is called a right rotation, whereas right to left is called a left rotation.
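A rotation is easiest to believe in when you see the in-order key sequence survive it. Here's a sketch in Python (names are mine, not from the article's code; I name each function by the child it promotes, so promote_left performs what the diagram calls a right rotation):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def promote_left(node):
    """Right rotation: the left child becomes the new root."""
    l = node.left
    return Node(l.key, l.left, Node(node.key, l.right, node.right))

def promote_right(node):
    """Left rotation: the right child becomes the new root."""
    r = node.right
    return Node(r.key, Node(node.key, node.left, r.left), r.right)

def inorder(node):
    """The key sequence, which any rotation must leave unchanged."""
    return [] if node is None else inorder(node.left) + [node.key] + inorder(node.right)

t = Node(4, Node(2, Node(1), Node(3)), Node(5))
```

Note that the two rotations are inverses of each other, and that both are written functionally here: they build two fresh nodes and share the untouched subtrees.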
Heaps
And now our second fundamental data structure that we will build upon: the "heap". A heap is a very special-purpose data structure, often used to implement priority queues. The rule is simple: every node has a priority, and the priority of any node is less than those of its two children. And, working recursively, we see that any node has the lowest priority of its subtree.
Note
Technically, what I've described can be properly called a "min-heap", because the root is the minimum element. Alternatively, one can make each node larger than its children, creating a "max-heap". Normally, one does not actually care which a heap is, since the main property is that the root is extremal.
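Python's standard library ships exactly this structure, if you want to play with one: heapq maintains the min-heap invariant on a plain list, with the minimum always at index 0. A quick illustration:

```python
import heapq

h = []
for p in [7, 3, 9, 1, 5]:
    heapq.heappush(h, p)  # each push preserves the heap invariant

smallest = heapq.heappop(h)  # the root is the extremal element
```

That root-is-extremal property is the only piece of heap machinery our treap will lean on.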
Our Goals
Our goal is to create a treap implementation. We'll be using Lisp, in particular pylisp; feel free to follow along in whatever language you want. Our treap will support the following interface:

(get key treap)
    Get the value associated with key from our treap.

(set key val treap)
    Return a new treap, which is the same as the old one except associates key with val.

(del key treap)
    Return a new treap, which is the same as the old except does not associate anything with key.

(in key treap)
    Return #t if key is in your treap, and #f if not.

(merge left right)
    left and right are proper treaps; all elements of left are lesser in key than any element of right, and vice versa. Return the treap that contains all the elements from either left or right.

(split key treap)
    Return two treaps, one for all nodes with key less than key, and one for all nodes with larger keys. We assume that the treap does not contain key.

(treap->list treap)
    Creates a nested list with format (key left-child right-child).
Note that since our treap will be fully functional, set, del, and merge will return new treaps with the necessary modifications made; furthermore, we will require that each of these methods, with the exception of treap->list, run in \(O(\log n)\) time.
Theoretical Backing
We'll define a treap as a sort of cross-pollination between a binary tree and a heap. In particular, each node will have a key, a val, and a priority p, as well as a left child left and a right child right (both of which may be #0, that is, null). We will require that our treap be a binary tree in terms of its keys and values, but a heap in terms of its priorities. Balancing will be achieved by assigning the priorities at random; we hope that this will tend to move middling keys to the root of the tree. We'll prove this result — that we expect the height of the treap to be logarithmic — later. For cleanness of code, we'll further require that the priorities and keys be unique.
Now, first, does there always exist a treap for a given set of keys and priorities? And, can we be sure that this treap is unique? A simple argument answers both in the positive. Clearly, the root node must be the node with lowest priority. This partitions the remaining nodes into two sets: those with key less than the root, and those with key greater. We recurse onto each of these sets, using as our base case the treap with a single node (clearly, it is unique).
Alright, fine; what about proving that we'll get logarithmic height? Well, let's make the assumption that our random priorities are drawn from a uniform distribution. Now consider the root node, and ask: how large is the bigger of its two subtrees, relative to the total tree size? Let's say, without loss of generality, that the right subtree is the larger one. The probability that it holds less than three quarters of the nodes is one half (verify: the root is equally likely to be any of the keys, and the larger side stays below three quarters exactly when the root's key lands in the middle half of the sorted order). So, we expect that for every two levels, our largest subtree must shrink by a factor of four thirds. But wait, this means that our tree is shrinking geometrically! So, we have a logarithmic bound. And that means that we win, and have a good, balanced binary tree.
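If you'd rather see this empirically, there's a handy equivalence: a treap with uniformly random priorities has exactly the shape of a plain, unbalanced BST built by inserting the keys in order of increasing priority — that is, in uniformly random order. So a quick Python simulation (a sketch; names are mine, and the 40 cutoff mirrors the depth bound in the tester code at the end of this article) can sanity-check the claim:

```python
import math
import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    """Plain, non-rotating BST insertion."""
    if root is None:
        return Node(key)
    cur = root
    while True:
        if key < cur.key:
            if cur.left is None:
                cur.left = Node(key)
                return root
            cur = cur.left
        else:
            if cur.right is None:
                cur.right = Node(key)
                return root
            cur = cur.right

def height(node):
    """Iterative height computation, to avoid deep recursion."""
    best, stack = 0, [(node, 1)]
    while stack:
        n, d = stack.pop()
        if n is None:
            continue
        best = max(best, d)
        stack.append((n.left, d + 1))
        stack.append((n.right, d + 1))
    return best

random.seed(0)  # fixed seed so the check is deterministic
keys = list(range(1000))
random.shuffle(keys)  # random insertion order <=> random priorities
root = None
for k in keys:
    root = bst_insert(root, k)
```

For 1000 keys, \(\log_2 1000 \approx 10\), and the simulated height comes out in the low twenties — a small constant factor, just as the argument above predicts.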
Implementation
Alright, so we've got our theory down pat, and it's time to start firing our massive s-expression-spewing assault rifles. First things first, we have to define our nodes:
(class::simple Node (p key val left right))
For those of you unfamiliar with pylisp, this will define a class with fields p, key, and so on, and provide a basic constructor for this class.
How might we transform this Node into a nice list? Well, nothing to it!
(def treap->list (root)
  (if (is root Node)
      `(,root.key ,(treap->list root.left) ,(treap->list root.right))
      root))
Pylisp note: the is
method checks if an object is an instance of a
certain class, here used to bottom out our recursion.
Since they will be useful, let's also implement left and right rotations. One caution on naming: here I name each rotation after the child that gets promoted, so left-rotate moves the left child up — which is what the diagram above called a right rotation — and vice versa.

(def left-rotate (node)
  (Node node.left.p node.left.key node.left.val
        node.left.left
        (Node node.p node.key node.val node.left.right node.right)))

(def right-rotate (node)
  (Node node.right.p node.right.key node.right.val
        (Node node.p node.key node.val node.left node.right.left)
        node.right.right))
Lookup and Containment
Now, let's get cracking. First up is lookup, that is, the get function. Simple enough. Since our treap is already a binary tree, we can use the standard binary tree lookup algorithm, completely ignoring priorities:
(def get (key root)
  (cond ((not root) (signal '(error not-found key get) key))
        ((< key root.key) (get key root.left))
        ((> key root.key) (get key root.right))
        ((= key root.key) root.val)))
Simple enough, eh? What about containment? Oh, it's even easier:
(def in (key root)
  (cond ((not root) #f)
        ((< key root.key) (in key root.left))
        ((> key root.key) (in key root.right))
        ((= key root.key) #t)))
Alright, onward to mutation methods!
Insertion
Now on to the more difficult modification operations. How do we
insert an element, how do we update a value? Well, we can spider down
to the place we want to add the node very easily — it's the same as
our get
above. Now, we can just add the new node there, and if its
priority is larger than its new parent's, we're golden. What if it's
not? Well, then we have to move it upwards in the tree. How do we do
that? Well, we mentioned how we move elements up and down before:
it's called a tree rotation. So, all we have to do is, if our
inserted element breaks the heap property of our treap, move it
upwards by way of the appropriate rotation.
This is a complicated method, so let's do it step-by-step. First, we'll need to import a module to get random numbers:
(use random)
Next, recall that we'll be generating a priority for this new element
upon insertion, and then passing it up and down the call stack as
necessary. This may seem like a waste of code — why not generate it
once we get to the leaves and are ready to generate our new node? The
simple answer is, "trust me on this one". Anyway, here's our skeletal set:
(def set (key val root #:(p (random.randint 0 1000000000)))
  (cond ((not root) ?)          ; Just generate the new node
        ((< key root.key) ?)    ; Recurse leftwards, then left-rotate if necessary
        ((> key root.key) ?)    ; Same as the (<) case, but recurse and rotate to the right
        ((= key root.key) ?)))  ; Update the existing node
Alright, let's fill this baby in. The first case is easy:
((not root) (Node p key val #0 #0))
Note that I'm using #0
as a representation of the null treap.
Next, let's do the last case, since it's also pretty easy:
((= key root.key) (Node root.p root.key val root.left root.right))
Well, that's simple enough as well. Note that I'm not updating the priority. Simply, there's no good reason to, and it'll lead to extra rotations that I could do without.
Now we have just the middle two cases left, and since they're identical, we can do them together. It's not too hard. We have to create the new node, recursing as necessary. Then we just have to return that node, perhaps first carrying out an appropriate rotation:
((< key root.key)
 (let (new (Node root.p root.key root.val
                 (set key val root.left #:(p p))
                 root.right))
   (if (< new.left.p new.p) (left-rotate new) new)))
((> key root.key)
 (let (new (Node root.p root.key root.val
                 root.left
                 (set key val root.right #:(p p))))
   (if (< new.right.p new.p) (right-rotate new) new)))
Putting it together, we have:
(def set (key val root #:(p (random.randint 0 1000000000)))
  (cond ((not root) (Node p key val #0 #0))
        ((< key root.key)
         (let (new (Node root.p root.key root.val
                         (set key val root.left #:(p p))
                         root.right))
           (if (< new.left.p new.p) (left-rotate new) new)))
        ((> key root.key)
         (let (new (Node root.p root.key root.val
                         root.left
                         (set key val root.right #:(p p))))
           (if (< new.right.p new.p) (right-rotate new) new)))
        ((= key root.key) (Node root.p root.key val root.left root.right))))
Don't worry, this is by far the scariest method we'll see. And note that this is completely functional — not a single set! call was made. Doesn't recursion mix well with functional programming? Yes it does. Especially when treaps get involved.
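If the pylisp is hard to follow, here is the same insertion logic transcribed into Python as a sketch (class and function names are mine; the structure mirrors set above — one priority generated at the top and threaded through, purely functional rebuilding on the way down, a rotation on the way back up wherever the heap property breaks):

```python
import random

class Node:
    def __init__(self, p, key, val, left=None, right=None):
        self.p, self.key, self.val, self.left, self.right = p, key, val, left, right

def promote_left(n):
    """Rotation that moves the left child up to the root."""
    l = n.left
    return Node(l.p, l.key, l.val, l.left, Node(n.p, n.key, n.val, l.right, n.right))

def promote_right(n):
    """Rotation that moves the right child up to the root."""
    r = n.right
    return Node(r.p, r.key, r.val, Node(n.p, n.key, n.val, n.left, r.left), r.right)

def treap_set(key, val, root, p=None):
    if p is None:
        p = random.randint(0, 10**9)  # generated once at the top
    if root is None:
        return Node(p, key, val)
    if key < root.key:
        new = Node(root.p, root.key, root.val,
                   treap_set(key, val, root.left, p), root.right)
        return promote_left(new) if new.left.p < new.p else new
    if key > root.key:
        new = Node(root.p, root.key, root.val,
                   root.left, treap_set(key, val, root.right, p))
        return promote_right(new) if new.right.p < new.p else new
    return Node(root.p, root.key, val, root.left, root.right)  # update; keep priority

def treap_get(key, root):
    while root is not None:
        if key < root.key:
            root = root.left
        elif key > root.key:
            root = root.right
        else:
            return root.val
    raise KeyError(key)

t = None
for k, v in [(5, "a"), (7, "b"), (2, "c")]:
    t = treap_set(k, v, t)
```

As in the pylisp, updating an existing key deliberately keeps the old priority, so no rotations fire on updates.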
Split & Merge
Before we tackle deletion, let's take a quick detour to splitting and merging. What splitting does is produce two treaps from one: a left treap and a right treap, where the left treap contains only values less than a given key and the right treap contains only values greater than that key. We assume that the key is not in the treap (otherwise, where would it go?). Of course, the left and right treaps that result from the splitting must themselves be valid treaps.
How are we going to do this? Well, we're gonna cheat. We're gonna add in a new node, with the given key, but give it a really low priority so that it becomes the root. Then we can just pluck off left and right subtreaps and return those. Nice and simple, huh?
(def split (key root)
  (let (ins (set key #0 root #:(p (- 1))))
    `(,ins.left ,ins.right)))
Nice and easy, there we go. We're inserting a node with priority -1, which is guaranteed to be the smallest priority (all other nodes have a priority from zero to a billion). And because insertion is functional, we never have to explicitly remove our dummy node.
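The same trick in Python, as a self-contained sketch (names are mine; insert is the functional treap insertion from the previous section, taking an explicit priority, and keys are assumed distinct):

```python
class Node:
    def __init__(self, p, key, left=None, right=None):
        self.p, self.key, self.left, self.right = p, key, left, right

def insert(key, root, p):
    """Functional treap insertion with an explicit priority (key assumed new)."""
    if root is None:
        return Node(p, key)
    if key < root.key:
        new = Node(root.p, root.key, insert(key, root.left, p), root.right)
        if new.left.p < new.p:  # heap property broken: rotate left child up
            l = new.left
            new = Node(l.p, l.key, l.left, Node(new.p, new.key, l.right, new.right))
        return new
    new = Node(root.p, root.key, root.left, insert(key, root.right, p))
    if new.right.p < new.p:  # rotate right child up
        r = new.right
        new = Node(r.p, r.key, Node(new.p, new.key, new.left, r.left), r.right)
    return new

def split(key, root):
    """Insert a sentinel with priority -1; it floats to the root,
    and its two children are exactly the two halves."""
    ins = insert(key, root, -1)
    return ins.left, ins.right

def keys_inorder(n):
    return [] if n is None else keys_inorder(n.left) + [n.key] + keys_inorder(n.right)

root = None
for k, p in [(2, 50), (6, 30), (1, 90), (3, 80), (5, 70), (7, 60)]:
    root = insert(k, root, p)
left, right = split(4, root)
```

Because insertion is functional, the sentinel only exists in the temporary tree ins; the original root is untouched, and we simply never keep the sentinel node itself.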
Now, what about merging? Merging takes two treaps that might have been the result of a split, and merges them into one treap. How? Well, given the two treaps, one can easily find the root element of the new treap — it's the root of one of the two treaps. Now, we have our root (say it came from the right treap), and we have its right child — that's just its original right child. After all, that right child already contains all of the larger elements, by definition. What about its left child? Well, we still have two treaps worth of elements to consider: the original left child of our new root, and the original left treap. And so we merge those, recursively.
(def merge (left right)
  (cond ((not left) right)
        ((not right) left)
        ((< left.p right.p)
         (Node left.p left.key left.val
               left.left
               (merge left.right right)))
        (#t
         (Node right.p right.key right.val
               (merge left right.left)
               right.right))))
What's that #t doing there? Well, that's the default clause in my conditional, you might say. The reason it isn't (> left.p right.p) is to guard against the minute chance of non-unique priorities.
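Here is the same merge sketched in Python (names are mine; note the argument order in the recursive calls — the first argument is always the treap holding the smaller keys, which is merge's precondition):

```python
class Node:
    def __init__(self, p, key, left=None, right=None):
        self.p, self.key, self.left, self.right = p, key, left, right

def merge(left, right):
    """Precondition: every key in `left` is smaller than every key in `right`."""
    if left is None:
        return right
    if right is None:
        return left
    if left.p < right.p:
        # left's root wins; its right side must absorb all of `right`
        return Node(left.p, left.key, left.left, merge(left.right, right))
    # right's root wins; its left side must absorb all of `left`
    return Node(right.p, right.key, merge(left, right.left), right.right)

def keys_inorder(n):
    return [] if n is None else keys_inorder(n.left) + [n.key] + keys_inorder(n.right)

# two valid treaps: within each, priorities satisfy the heap property
small = Node(1, 2, Node(4, 1), Node(3, 3))   # keys {1, 2, 3}
big   = Node(2, 6, Node(5, 5), Node(6, 7))   # keys {5, 6, 7}
merged = merge(small, big)
```

Each recursive step descends one level into one of the two treaps, so the whole merge runs in time proportional to the sum of the two heights.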
Note
The first time I wrote this, there were errors. They are now fixed. The new code is deceptively similar; but then again, even minor changes are crucial in recursive functions (since those errors propagate).
Deletion
Alright. You might say, this treap stuff seems okay so far, but so do redblack trees, and then you hit deletion. As you're about to see, deletion for treaps is very nice and intuitive. Intuitive. That's right. Onward!
Now, consider the node you want to delete. If your node is a leaf node, your job is easy — just delete it. If your node has only one child, your job is also pretty easy — just remove the node and connect its single child in its place. But what if the node has two children? Then, when you remove it, you have to somehow combine the two child subtreaps into one. But how? How might we do such a merger?
Oh, yeah, merge.
The rest of the method is a simple recursion back and forth:
(def del (key root)
  (cond ((not root) (signal '(error not-found key del) key))
        ((< key root.key)
         (Node root.p root.key root.val (del key root.left) root.right))
        ((> key root.key)
         (Node root.p root.key root.val root.left (del key root.right)))
        ((= key root.key) (merge root.left root.right))))
Note the pure, happy simplicity of this method! Oh, the joys of treaps!
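And the same deletion in Python, as a sketch (names are mine; merge is the routine from the previous section, repeated here so the example stands alone):

```python
class Node:
    def __init__(self, p, key, left=None, right=None):
        self.p, self.key, self.left, self.right = p, key, left, right

def merge(left, right):
    """Join two treaps where all keys in `left` are below all keys in `right`."""
    if left is None:
        return right
    if right is None:
        return left
    if left.p < right.p:
        return Node(left.p, left.key, left.left, merge(left.right, right))
    return Node(right.p, right.key, merge(left, right.left), right.right)

def delete(key, root):
    """Functional deletion: rebuild the path down, merge the children at the target."""
    if root is None:
        raise KeyError(key)
    if key < root.key:
        return Node(root.p, root.key, delete(key, root.left), root.right)
    if key > root.key:
        return Node(root.p, root.key, root.left, delete(key, root.right))
    return merge(root.left, root.right)  # the node vanishes; its subtreaps fuse

def keys_inorder(n):
    return [] if n is None else keys_inorder(n.left) + [n.key] + keys_inorder(n.right)

# a valid treap over keys 1..7 (root priority 1 is the minimum)
treap = Node(1, 4,
             Node(2, 2, Node(5, 1), Node(6, 3)),
             Node(4, 6, Node(7, 5), Node(8, 7)))
after = delete(4, treap)
```

All three cases from the prose — leaf, one child, two children — collapse into the single merge call, since merging with a null treap just returns the other side.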
Usage
I like to add some basic tester code to the bottom of my files. Not exhaustive unit tests, mind you (we don't want import taking forever, do we?), but some basic usage, just to make sure everything works. Let's do that here:
(use tester)

(test "Sanity check"
  (let (treap #0)
    (set! treap (set 5 'a treap))
    (set! treap (set 7 'b treap))
    (assert (= (get 5 treap) 'a))
    (assert (= (get 7 treap) 'b))
    (set! treap (set 2 'c treap))
    (assert (= (get 2 treap) 'c))
    (set! treap (set 2 'd treap))
    (assert (= (get 2 treap) 'd))
    (set! treap (del 5 treap))
    (assert (not (in 5 treap)))))

(test "Fairly Balanced"
  (let (treap #0)
    (signal '(warning test slow-test) "Balancing test")
    (for (i (range 1000))
      (set! treap (set i #0 treap))
      (print i))
    (assert (< (depth treap) 40))))
A Quick Problem
Suppose we want to create a sorted array; that is, we want to be able to retrieve the \(n\)th largest key, in \(O(\log n)\) time. How do we do this? There is a rather simple way, involving treaps (or, really, any balanced binary tree). As a hint, I suggest considering the fact that we're not using the value stored in each node for anything, thus leaving it open to be appropriated for some purpose.
Wrapping Up
Overall, the code above weighs in at under 100 lines, and that's including the tester code. Perhaps that's not impressive. If it isn't, go write red-black trees in Java. We'll see who's laughing then.
What about cool optimizations? Anything in that department? Well, it's a good thing you ask. Because, you see, we've got all of these priorities floating around. And of course they're supposed to be random. But don't you just get the urge to fiddle with them?
One simple optimization is to decrease the priority of a node every time it is accessed (whether by get or set). This will slowly migrate your commonly-accessed nodes up to the top of the treap, where they're faster to get at. Of course, this might make accesses slower, as you need to rewrite the treap. Your mileage may vary. Look before you leap, benchmark when in doubt, beware of dog.
I'm not providing code for this variant, because in my experience on pylisp, this behavior is rarely what you want — unless of course you have a treap of several thousand elements but you only ever access three of them. In which case you need a better brain, not a better algorithm. But I'll leave this optimization as an exercise to the reader. As a hint, I'd suggest going over your code for split carefully if you do optimize this, and also make sure to deal with the case where priorities are non-unique.
Acknowledgments
Thanks to Sam Fingeret for first introducing me to treaps, and for helping out with the details of the proof that the height is expected to be logarithmic. Thank you as well to Mark Velednitsky for editing this writeup, even though he can't stand the sight of Lisp.
Thanks also to folks on Hacker News who provided a lively discussion.