# Left-Leaning Red-Black Trees

B-trees are complicated to implement because overstuffing and splitting require converting back and forth between 2-nodes and 3-nodes. Not only does this make the implementation harder to build and maintain, but it also results in a data structure that is sometimes slower than the simple binary search tree. In this reading, we will study how to design a self-balancing binary search tree using ideas from B-trees.

## Rotation

Depending on the order items are inserted into the tree, there are many different ways to construct a binary search tree containing the same set of items. However, insertion is not the only way to build different configurations of the same set of items. We can change the structure of a binary tree through rotation.

Given a particular node in a tree, we can either rotate it to the left or to the right by making it a child of one of its children. In these definitions, `G` and `P` correspond to their labels in the tree shown below.

### Rotate left

Let `x` be the right child of `G`. Make `G` the new left child of `x`. We can think of this as temporarily merging `G` and `P`, then sending `G` down and left. In the example below, the height of the tree increases as a result of rotating left around `G`.

### Rotate right

Let `x` be the left child of `P`. Make `P` the new right child of `x`. We can think of this as temporarily merging `G` and `P`, then sending `P` down and right. In the example below, the height of the tree decreases as a result of rotating right around `P`.

Let’s see how this plays out on a small example.

Visualize the implementation of `rotateRight` around the node with value 3 in Java Tutor.
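As a sketch of what that visualization shows (the `Node` class and field names here are illustrative assumptions, not code from this reading), each rotation rewires just two references:

```java
// Minimal BST node for illustration (hypothetical field names).
class Node {
    int key;
    Node left, right;
    Node(int key) { this.key = key; }
}

class Rotations {
    // Rotate right around h: h's left child x becomes the subtree root,
    // and h becomes x's right child. x's old right subtree (keys between
    // x.key and h.key) moves to h.left, preserving the BST invariant.
    static Node rotateRight(Node h) {
        Node x = h.left;
        h.left = x.right;
        x.right = h;
        return x;
    }

    // Rotate left is symmetric: h's right child becomes the subtree root.
    static Node rotateLeft(Node h) {
        Node x = h.right;
        h.right = x.left;
        x.left = h;
        return x;
    }
}
```

For example, rotating right around the node 3 in the linked chain 3 → 2 → 1 produces a perfectly balanced tree rooted at 2.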

Each rotation makes a local adjustment that can gradually balance the tree, sometimes increasing or decreasing its height. More generally, rotations work because they respect the binary search tree invariant: items that are ordered between B and D in the example below stay ordered between them after a rotation.

## Left-leaning BSTs

Assuming that we can maintain balance, binary search trees are an efficient search structure with relatively simple logic and implementation. Our challenge is to design a balanced search tree that gets the best of both worlds: a simple, fast implementation based on binary search trees with the self-balancing invariants of B-trees.

Goal
Develop a self-balancing BST from 2-3 trees.

How might we represent a 2-3 tree as a BST? 2-3 trees have two types of nodes.

2-node
A node with 1 key and 2 non-null children.
3-node
A node with 2 keys and 3 non-null children.

2-3 trees with only 2-nodes are already regular binary search trees.

How can we represent 3-nodes in a BST?

Use an idea from rotation: we can separate a 3-node with the keys B and D into two nodes exactly as depicted above. Each 3-node can be converted into two 2-nodes connected with an extra “glue” edge.

We arbitrarily choose the left-leaning representation. Converting from the left-leaning BST back to a 2-3 tree is tricky since it’s hard to tell which BST nodes represent 3-nodes rather than 2-nodes. For the same reason, maintaining the invariants of a left-leaning BST is similarly challenging.

## Left-leaning red-black trees

To make the distinction clear, we color the “glue” edge red, hence the name left-leaning red-black (LLRB) tree. LLRB trees have a 1-1 correspondence with 2-3 trees, so every 2-3 tree has a unique LLRB tree associated with it. Red edges connect the two halves of a separated 3-node and help us maintain the correspondence with 2-3 trees, but otherwise don’t have any special functionality.

LLRB tree invariants follow entirely from 1-1 correspondence with 2-3 trees.

Perfect black balance
Every root-to-null path has the same number of black edges. (All 2-3 tree leaf nodes are the same depth from the root.)
Left-leaning
Red edges lean left. (We arbitrarily choose left-leaning, so we need to stick with it.)
Color invariant
No node has two red edges connected to it, either above/below or left/right. (This would result in an overstuffed 2-3 tree node.)
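These invariants can be checked recursively in one pass. The sketch below assumes a node layout where the color of the link from a node’s parent is stored in the node itself (a common convention, but an assumption here; the `Node` fields and `check` method are hypothetical):

```java
// Hypothetical node layout: the color of the link from a node's
// parent is stored in the node itself.
class Node {
    int key;
    boolean red;    // true if the link from the parent is red
    Node left, right;
    Node(int key, boolean red) { this.key = key; this.red = red; }
}

class LLRBCheck {
    // Returns the black height of the subtree if all three invariants
    // hold, or -1 if any invariant is violated.
    static int check(Node h) {
        if (h == null) { return 0; }             // null links count as black
        boolean leftRed  = h.left  != null && h.left.red;
        boolean rightRed = h.right != null && h.right.red;
        if (rightRed)         { return -1; }     // red links must lean left
        if (h.red && leftRed) { return -1; }     // no two reds in a row
        int lh = check(h.left);
        int rh = check(h.right);
        if (lh < 0 || rh < 0 || lh != rh) { return -1; }  // perfect black balance
        return lh + (h.red ? 0 : 1);
    }
}
```

For instance, a black node 2 with a red left child 1 passes the check (it represents the 3-node {1, 2}), while adding a red right child would violate the left-leaning invariant.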

### What would a 2-3 tree do?

We can implement LLRB tree algorithms using the fact that LLRB trees have a 1-1 correspondence with 2-3 trees. 2-3 tree operations like overstuffing leaves and splitting nodes can be implemented with rotations and recoloring in an LLRB tree. We can always ask, “What would a 2-3 tree do?”

Let’s learn how to insert into an LLRB tree by applying 1-1 correspondence with 2-3 trees.

All new nodes are added with a red edge from their parent. Once the node has been added to its correct place in the tree, we maintain LLRB tree invariants by applying rotations and color flips.

• Right link red? Rotate left.
• Two left reds in a row? Rotate right.
• Both children red? Flip colors.

The implementation for LLRB trees follows exactly these 3 rules after inserting the given `key`.

```java
private Node add(Node h, Key key) {
    if (h == null) { return new Node(key, RED); }

    int cmp = key.compareTo(h.key);
    if (cmp < 0)      { h.left  = add(h.left,  key); }
    else if (cmp > 0) { h.right = add(h.right, key); }

    if (isRed(h.right) && !isRed(h.left))      { h = rotateLeft(h);  }
    if (isRed(h.left)  &&  isRed(h.left.left)) { h = rotateRight(h); }
    if (isRed(h.left)  &&  isRed(h.right))     { flipColors(h);      }

    return h;
}
```
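The `add` method relies on helpers (`isRed`, `rotateLeft`, `rotateRight`, `flipColors`) that aren’t shown above. One possible self-contained sketch, modeled on Sedgewick-style LLRB code (the `Node` field layout and the `keys` traversal are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;

class LLRB<Key extends Comparable<Key>> {
    private static final boolean RED = true;

    private class Node {
        Key key;
        boolean color;  // color of the link from this node's parent
        Node left, right;
        Node(Key key, boolean color) { this.key = key; this.color = color; }
    }

    private Node root;

    private boolean isRed(Node x) { return x != null && x.color == RED; }

    // Right link red? Rotate left so the red link leans left.
    private Node rotateLeft(Node h) {
        Node x = h.right;
        h.right = x.left;
        x.left = h;
        x.color = h.color;  // x takes over h's link color...
        h.color = RED;      // ...and the link between them stays red
        return x;
    }

    // Two left reds in a row? Rotate right to set up a color flip.
    private Node rotateRight(Node h) {
        Node x = h.left;
        h.left = x.right;
        x.right = h;
        x.color = h.color;
        h.color = RED;
        return x;
    }

    // Both children red? Split the overstuffed node, passing the
    // red link up to the parent (like a 2-3 tree node split).
    private void flipColors(Node h) {
        h.color = RED;
        h.left.color = !RED;
        h.right.color = !RED;
    }

    public void add(Key key) {
        root = add(root, key);
        root.color = !RED;  // the link "above" the root is always black
    }

    private Node add(Node h, Key key) {
        if (h == null) { return new Node(key, RED); }
        int cmp = key.compareTo(h.key);
        if (cmp < 0)      { h.left  = add(h.left,  key); }
        else if (cmp > 0) { h.right = add(h.right, key); }
        if (isRed(h.right) && !isRed(h.left))      { h = rotateLeft(h);  }
        if (isRed(h.left)  &&  isRed(h.left.left)) { h = rotateRight(h); }
        if (isRed(h.left)  &&  isRed(h.right))     { flipColors(h);      }
        return h;
    }

    // In-order traversal: the BST invariant guarantees sorted order.
    public List<Key> keys() {
        List<Key> out = new ArrayList<>();
        collect(root, out);
        return out;
    }

    private void collect(Node h, List<Key> out) {
        if (h == null) { return; }
        collect(h.left, out);
        out.add(h.key);
        collect(h.right, out);
    }
}
```

Note that the rotations here also transfer link colors, unlike the plain BST rotations earlier in the reading: the new subtree root inherits the old root’s link color, and the rotated link becomes red.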

Since the height of an LLRB tree is guaranteed to be logarithmic, the runtime for `add` is in O(log N). If the `key` is not in the tree, then recursing down to find the insertion position takes Theta(log N) time since new leaves will be around log N depth from the root. Then, returning from each recursive call requires maintaining invariants by rotating and flipping colors. Each rotation or color flip is a constant time operation, so the overall runtime of add is in O(log N).