Bit Reversal Line Rasterization

On the 3rd of June 2023, at the X demoparty in Someren in the southern Netherlands, quiss aka Matthias Kramm presented Boo as his entry in the C64 4K Intro category. You can watch it on YouTube here. While the demo mainly featured a couple of spinning cubes, the edges of those cubes were drawn using a new algorithm he calls “Bit Reversal Rendering”. This brought the number of CPU cycles needed to plot each pixel of a line on a Commodore 64 down from 7 to 5, reducing the overhead on a number of key graphical effects. This earned him first place, and the adoration of the assembled demosceners.
There are maybe half a dozen line plotting methods in existence and most of them date back to the Cold War, so presenting a brand new one as a 4 kilobyte C64 intro that does parallel processing on the floppy disk controller is pretty cool. See my previous post for a survey of line drawing functions.
He later published details of how the algorithm works, but didn’t include an implementation. In this article I provide such an implementation, along with some further notes and explanations. I have also reproduced his animations in interactive form.
The code
Without further ado, here’s my implementation, presented in the same style as The Gamer’s Guide to Line Algorithms. If you’d like a refresher on traditional line rasterization, I’d suggest reading that first.
struct Point {
    x: i32,
    y: i32,
}

impl Point {
    fn new(x: i32, y: i32) -> Point {
        Point { x, y }
    }
}
pub fn bit_reversal_line(start: Point, end: Point) -> Vec<Point> {
    let octant = Octant::from_points(&start, &end);
    let start = octant.to_octant0(start);
    let end = octant.to_octant0(end);

    let delta_x = end.x - start.x;
    let delta_y = end.y - start.y;

    let slope_int = delta_y / delta_x;
    let slope_frac = delta_y % delta_x;

    let sequence: Vec<i32> = (0..=delta_x).map(|x| x.reverse_bits()).collect();
    let mut sequence_sorted = sequence.clone();
    sequence_sorted.sort();
    let threshold = sequence_sorted[slope_frac as usize];

    let mut y = 0;
    (0..=delta_x)
        .map(|step| {
            let point = Point::new(step + start.x, y + start.y);
            y += slope_int;
            if sequence[step as usize] < threshold {
                y += 1;
            }
            octant.from_octant0(point)
        })
        .collect()
}
See here for the octant functions. Let’s compare it to the classic Bresenham line function:
pub fn bresenham_line(start: Point, end: Point) -> Vec<Point> {
    let octant = Octant::from_points(&start, &end);
    let start = octant.to_octant0(start);
    let end = octant.to_octant0(end);

    let delta_x = end.x - start.x;
    let delta_y = end.y - start.y;

    let mut error = 2 * delta_y - delta_x;
    let mut y = 0;
    (0..=delta_x)
        .map(|step| {
            let point = Point::new(step + start.x, y + start.y);
            if error > 0 {
                y += 1;
                error -= 2 * delta_x;
            }
            error += 2 * delta_y;
            octant.from_octant0(point)
        })
        .collect()
}
We can see that there’s a lot in common:
- Both functions use the octant transform to ensure that the line is going left to right with $0 \leq {slope} \lt 1$ (and then transform it back again afterwards.)
- Both calculate `delta_x` and `delta_y`.
- Both combine these two values to get some representation of the slope of the line which is used in the drawing loop (but in very different ways.)
- Both always increment the $x$ value with every iteration, which they can do because of the octant transform.
- Both of them use the value that they derived from the slope (in Bresenham the “error”, in bit reversal the “threshold”) to decide at each iteration if they need to increment the $y$ value.
However there’s a lot that’s different! Bresenham is just doing some basic (if weird-looking) arithmetic, while Matthias’ function builds this weird structure based on mirroring the binary representations of a list of integers, which it then indexes into?!? What is going on? Well, as usual we’re going to take a diversion through some theory before we get to how it works.
Decisions, decisions
Both algorithms reduce the line drawing problem to a decision process: when to increment the $y$ coordinate? In aid of this a decision variable is maintained, in both cases derived from the slope of the line. Bresenham calls it the `error`, where the bit reversal line has the `threshold`.
How then are these decisions made? Bresenham, as well as all the other functions discussed in the guide, do it with geometry. They use some simple maths to determine which choice would give a set of pixels closest to the “real” line.
Alternatively, we can work natively in the digital domain, leaving all notions of continuously-defined “real” lines behind, and think purely about pixels/tiles in and of themselves. There’s extensive literature on the topic, with specific tools for working with what are called Digital Lines directly.
In the Euclidean world, a straight line has a constant slope. On the lattice of the digital grid however, lines cannot be “straight” in precisely this sense: one pixel might step due east (0 degrees), while the next one goes northeast (45 degrees)! That’s not a constant slope. We need a different definition of straightness to guide our decision process, one that works discretely rather than continuously.
Spoiler alert: the bit reversal algorithm works by relaxing these constraints to get a line that isn’t digitally straight, but has other useful properties.
Off the chain

The central object of consideration in the study of Digital Straight Line Segments is the Chain Code. First described by Herbert Freeman in 1961, a chain code involves assigning each possible direction on the grid a number, and then encoding the position of each pixel relative to its predecessor using the appropriate digit.
This is quite a general technique that was often applied eg. to produce compressed encodings of complex shapes by tracing around their edges, as shown in the diagram. In the case of Digital Straight Lines however, we need only care about two directions: east and northeast. This is because of the symmetry of the plane, which we exploit via the octant transform. As such we can encode any line as a string (“word” in the classical literature) of ones and zeroes. Extremely convenient.
This can easily be related to our decision procedure: line functions are always deciding between right and up-right: `0` and `1` in chain code terms.
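To make this concrete, here’s a minimal sketch (mine, not from the article; `chain_code` is a hypothetical helper name) that derives the chain code word of an octant-0 pixel run:

```rust
// Sketch: derive the Freeman chain code of an octant-0 pixel run,
// using '0' for an east step and '1' for a northeast step.
fn chain_code(pixels: &[(i32, i32)]) -> String {
    pixels
        .windows(2)
        .map(|w| {
            let (dx, dy) = (w[1].0 - w[0].0, w[1].1 - w[0].1);
            debug_assert_eq!(dx, 1); // octant 0: x always advances by one
            match dy {
                0 => '0', // east
                1 => '1', // northeast
                _ => panic!("not an octant-0 line"),
            }
        })
        .collect()
}

fn main() {
    // The pixels of a shallow line segment encode to a word over {0, 1}.
    assert_eq!(chain_code(&[(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]), "0101");
}
```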
This sets us up for Freeman’s 1970 definition of Digital Straightness, based on three properties of chain code strings:
1. At most two types of elements can be present, and these can differ only by one.
   Pretty obvious: if you go eg. up and then turn hard right then you’ve got a corner, the opposite of straight. Since we’ve already restricted ourselves to the `0` and `1` directions, this isn’t an issue for us.
2. One of the two element values always occurs singly.
   On the left here we can see that, while we only have two elements, they’re both occurring in pairs. There’s no singleton, so the line isn’t straight.
   Note that it’s fine for both values to occur singly, but at least one of them has to.
3. Successive occurrences of the element occurring singly are as uniformly spaced as possible.
   Essentially the code, and thus the line, has to be “even.” The second line in our example is nicely symmetrical, where the first one isn’t.
This is the trickiest of the three rules, and as originally stated by Freeman it’s not a rigorous definition. A few years later it was developed by Rosenfeld into the “Chord Property,” a statement about self-similarity. It involves assigning symbols to the different run-lengths in the chain code, and then recursively applying rules 1 and 2 to these higher-order codes, so you have runs of runs, and runs of runs of runs and so on. While interesting, it’s not relevant here.
Now that we understand the properties of Digital Straight Line Segments, we can consider which of them we might relax in search of a new line algorithm.
Removing either of the first two rules means changing the set of symbols in the chain code, opening the door to downright crooked lines. Removal of the third rule however just means re-arranging the elements. Therefore we can relax rule 3 without ever changing the length of the line (in terms of the number of tiles - see the $L_\infty$ discussion in the guide article.) It helps that rule 3 is the less strictly defined one to begin with.
With the help of Freeman’s properties, we can look at the decision procedure in a new light. We know in advance how many times we need to increment the $y$ coordinate, the relevant decision is when to actually do it. All previous line algorithms are based around doing it at the position that most closely matches the real line, via geometry. Freeman tells us that all we really need is to do it “evenly,” for some value of even. What mathematical structure might aid us in that goal?
The self-avoiding sequence
The Bit Reversal Line involves two clever ideas:
1. The clever way an integer sequence is used to decide when to increment the $y$ coordinate
2. The clever way the particular integer sequence is constructed
1 is really the innovation of the Bit Reversal algorithm, but we’re going to tackle 2 first, in order to motivate 1.
Let’s say we’re drawing a line from $(0,0)$ to $(8,5)$. We know the line will comprise 9 tiles spanning 8 steps (the $L_\infty$ distance), and we know that the $x$ coordinate will need to be incremented on every step, while the $y$ coordinate must be incremented on 5 of them. Our task is to decide how to distribute those 5 y-increments among the 8 steps.
Traditional line algorithms would calculate the slope as $\frac 5 8 = 0.625$ and use a rounding operation to fairly insert one y-increment for every one-and-three-fifths x-increments. This is why they are so unstable and flickery when the line is animated - they’re recalculating the geometrically perfect chain code on every frame. When the slope increases and more increments are required, all the existing increments get shifted along so that the spacing remains perfectly even.
We want a way to add y-increments to a line without having to move the existing ones. As the slope increases, we want to add new increments between the old ones, such that Freeman’s evenness rule is always respected.
Fortunately for us, there is a remarkable mathematical object with precisely this behaviour!
The Van der Corput sequence, first described by its namesake in 1935, is what’s known as a low discrepancy sequence. Its values are fractions equidistributed over the unit interval - exactly the evenness property we’re after. The terms proceed like so:
\[\frac 1 2,\quad \frac 1 4,\quad \frac 3 4,\quad \frac 1 8,\quad \frac 5 8,\quad \frac 3 8,\quad \frac 7 8,\quad \frac 1 {16},\quad \frac 9 {16},\quad ...\]
At each step it picks one of the largest subintervals, and puts a new point in the middle of it. This puts each new point nicely distant from the previous one, as well as the other existing points. It’s easier to see with an animation:
The way in which the sequence is constructed is quite ingenious. You take the natural numbers, write them in binary, reverse the bits, and then put decimal points at the front so they become fractions.
| Integer | Binary | Reversed Binary | Decimal | Fraction |
|---|---|---|---|---|
| 1 | 0001 | 0.1000 | 0.5 | $\frac 1 2$ |
| 2 | 0010 | 0.0100 | 0.25 | $\frac 1 4$ |
| 3 | 0011 | 0.1100 | 0.75 | $\frac 3 4$ |
| 4 | 0100 | 0.0010 | 0.125 | $\frac 1 8$ |
| 5 | 0101 | 0.1010 | 0.625 | $\frac 5 8$ |
| 6 | 0110 | 0.0110 | 0.375 | $\frac 3 8$ |
| 7 | 0111 | 0.1110 | 0.875 | $\frac 7 8$ |
| 8 | 1000 | 0.0001 | 0.0625 | $\frac 1 {16}$ |
If you’re unsettled by the sight of binary numbers with decimal points, I would suggest simply not worrying about it.
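The construction is easy to sketch in code (my example, not part of the article’s implementation): reverse the bits of an unsigned integer, then scale the result into the unit interval.

```rust
// Sketch: the n-th Van der Corput term via bit reversal.
// Reversing the bits of a u32 turns the integer's binary digits into
// the digits after the binary point; dividing by 2^32 scales to [0, 1).
fn van_der_corput(n: u32) -> f64 {
    n.reverse_bits() as f64 / 2f64.powi(32)
}

fn main() {
    let terms: Vec<f64> = (1..=8).map(van_der_corput).collect();
    // 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16 — matching the table above.
    assert_eq!(
        terms,
        vec![0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
    );
}
```

These values are exact in `f64`, since every term is a dyadic fraction with a short binary expansion.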
Flip mode
How on earth does this work? Well, the integers are incrementing by one each time. In binary, the least significant bit flips on every step, the next bit flips every second step, the bit after that every fourth step, and so on.
After you’ve reversed the bitstrings, the most significant bit is flipping on every step! This causes successive points to alternate between the left and right halves of the interval, giving us the nice self-avoiding behaviour. The next bit causes the point to alternate between the two halves of that half, and so on. Very neat!
These points provide an ideal way to evenly position the notches in our line. Even better, they provide a way to position the notches stably as the line is animated. As the slope gets steeper and we need more and more notches, we can keep pulling terms from the Van der Corput sequence, and add each new notch such that it nicely avoids all the previous ones, which can stay in place.
Putting it together
Now let’s (finally) look at the code and see how it uses the VdC sequence to lay out the notches.
let delta_x = end.x - start.x;
let delta_y = end.y - start.y;
let slope_int = delta_y / delta_x;
let slope_frac = delta_y % delta_x;
After calculating $\Delta x$ and $\Delta y$ we divide one by the other to get the slope as usual, but we calculate the integer part and the fractional part separately. This is schoolkid-style division with remainder.
(Because of our assumption that $0 \leq {slope} \lt 1$, we have `delta_y < delta_x`, and so `slope_int == 0` and `slope_frac == delta_y`.)
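As a quick worked check (my sketch, not from the article; `slope_parts` is a hypothetical helper name), here is the division-with-remainder step applied to the $(0,0)$ to $(8,5)$ example from earlier:

```rust
// Sketch: split the slope delta_y / delta_x into integer part and remainder,
// as the bit_reversal_line function does.
fn slope_parts(delta_x: i32, delta_y: i32) -> (i32, i32) {
    (delta_y / delta_x, delta_y % delta_x)
}

fn main() {
    // A shallow line: slope 5/8, so no whole-number part and remainder 5.
    assert_eq!(slope_parts(8, 5), (0, 5));
    // The octant-0 boundary case of slope exactly 1: whole part 1, remainder 0.
    assert_eq!(slope_parts(8, 8), (1, 0));
}
```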
let sequence: Vec<i32> = (0..=delta_x).map(|x| x.reverse_bits()).collect();
let mut sequence_sorted = sequence.clone();
sequence_sorted.sort();
let threshold = sequence_sorted[slope_frac as usize];
First, we calculate enough terms of the Van der Corput sequence, one for each of the tiles in our line. Then comes the weird bit. We sort the sequence, extract the `slope_frac`-th term, and call this the `threshold`.
In the drawing loop, we check each bit-reversed x position (which we don’t need to recalculate because it’s still stored in the `sequence` vector) against the threshold:
if sequence[step as usize] < threshold {
y += 1;
}
If the value is under the threshold, we increment the y coordinate, adding a notch to the line.
This guarantees that exactly `slope_frac` notches will be placed, distributed according to the Van der Corput sequence. As the slope of the line changes, the existing notches will remain stable with respect to the line, with new ones added between the old.
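The selection can be demonstrated in isolation. This is my own sketch, not the article’s code: I use `u32` here so the sort order corresponds directly to the unsigned Van der Corput fractions, and `notches` is a hypothetical helper name.

```rust
// Sketch: find which steps get a notch for a line of delta_x steps
// needing slope_frac extra y-increments.
fn notches(delta_x: u32, slope_frac: usize) -> Vec<u32> {
    // Bit-reversed step indices: the Van der Corput terms, unscaled.
    let sequence: Vec<u32> = (0..=delta_x).map(|x| x.reverse_bits()).collect();
    let mut sorted = sequence.clone();
    sorted.sort();
    // The threshold is the slope_frac-th smallest bit-reversed value...
    let threshold = sorted[slope_frac];
    // ...so exactly slope_frac steps fall strictly below it.
    (0..=delta_x)
        .filter(|&x| sequence[x as usize] < threshold)
        .collect()
}

fn main() {
    // An 8-step line needing 5 notches: they land spread evenly along it.
    assert_eq!(notches(8, 5), vec![0, 2, 4, 6, 8]);
    // With only 1 notch needed, only the smallest term is selected.
    assert_eq!(notches(8, 1), vec![0]);
}
```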
The Sorting Trick
In the $(0,0)$ to $(8,5)$ example we considered earlier, `slope_frac` would be 5, so 5 notches are needed and the first 5 Van der Corput fractions should define their placement.