I’m an enthusiastic promoter of the C language. One of the outmoded cultural phenomena associated with C is the bit-twiddling hack: a genre of brain-teasers that implement improbably complex algorithms using bitwise arithmetic. They are often developed under the guise of code optimization, but they mostly serve to impress your friends and confuse your enemies.
I have also previously written about fractals; they’re pieces of mathematical curiosa that enjoyed near-mythical status in the 1980s, but are no longer talked about in polite company. One of the better-known fractals is the Sierpiński triangle. It is constructed by taking an ordinary triangle and then repeatedly removing the middle one-fourth of every triangle that’s left:
The fractal has some interesting properties; most notably, the area decreases by 25% with each iteration while the perimeter increases by 50%, so the respective limits are 0 and ∞. This is despite the figure retaining the same overall footprint as the starting triangle.
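Concretely, if \(A_0\) and \(P_0\) denote the area and perimeter of the starting triangle, then after \(n\) iterations we have \(A_n = A_0 \left( \tfrac{3}{4} \right)^n \to 0\) and \(P_n = P_0 \left( \tfrac{3}{2} \right)^n \to \infty\).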
Anyway — there is this astonishingly simple bit-twiddling hack that somehow produces the Sierpiński triangle (demo):
#include <stdint.h>
#include <stdio.h>

#define LEN (1 << 6)

int main() {

  for (uint32_t y = 0; y < LEN; y++) {

    for (uint32_t x = 0; x < LEN; x++)
      printf("%s", (x & y) ? "  " : "MM");  /* Two characters per cell to keep the aspect ratio roughly square. */

    putchar('\n');

  }

}
In essence, we iterate over a pair of integer coordinates, x and y, and then color each cell depending on whether bitwise x & y is zero or non-zero. That’s it! For a pair of counters running from 0 to 63, the result is this:
Increasing the range of the counters adds more detail, producing a finer-grained fractal with more and more nested triangles. But… why?
A hand-wavy explanation is that the bit-twiddling part is mostly a red herring. There’s nothing clever about bitwise AND; the magic is the positional numeral system! If we visualize the process of counting from 0 to 63 in binary, we get the following bit pattern:
The value of the least significant bit is toggling with every tick, the next bit is flipping back and forth at half the frequency, and so on. This can be thought of as a fractal pattern in itself: as we increase the size of the counter, the visualization acquires more and more self-similar detail, with no hard limit in sight. In fact, if you squint your eyes, the pattern does look a bit like a succession of somewhat squished (log-scale) triangles.
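If you’d like to reproduce that plot yourself, here’s a minimal sketch of my own (not part of the original trick) that prints the six bits of every counter value, one row per tick, most significant bit on the left:

#include <stdint.h>
#include <stdio.h>

#define BITS 6
#define LEN  (1 << BITS)

int main() {

  /* One row per counter value; '#' marks a set bit. */
  for (uint32_t n = 0; n < LEN; n++) {

    for (int bit = BITS - 1; bit >= 0; bit--)
      putchar(((n >> bit) & 1) ? '#' : '.');

    putchar('\n');

  }

}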
For a more precise tie-in to the Sierpiński shape, we need to peek under the hood of the x & y operation. We’re calculating a bitwise AND of two counters, but what does that result even represent? It feels wrong, man!
Well, let’s start by looking at the most significant bit. If you’re familiar with binary numbers, it should be obvious that in our 6-bit (64-value) case, the MSB of the x coordinate will be zero for all values less than 32, and one for all values equal to or above that threshold:
\(\begin{array}{|c|c|} \hline \textbf{Counter state} & \textbf{Binary} \\ \hline x = 0 & 000000 \\ x = 1 & 000001 \\ x = 2 & 000010 \\ ... & ... \\ x = 31 & 011111 \\ x = 32 & \color{crimson}1\color{black}00000 \\ x = 33 & \color{crimson}1\color{black}00001 \\ ... & ...\\ x = 63 & \color{crimson}1\color{black}11111 \\ \hline \end{array}\)
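In C terms, that threshold test is just a shift and a mask. The tiny snippet below is only an illustration, and the constant 5 is specific to our 6-bit setup:

#include <stdint.h>
#include <stdio.h>

int main() {

  /* The MSB of a 6-bit counter: 0 for x < 32, 1 for x >= 32. */
  for (uint32_t x = 0; x < 64; x++)
    printf("x = %2u  MSB = %u\n", (unsigned) x, (unsigned) ((x >> 5) & 1));

}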
This divides the x axis into two halves; the same is true for the y coordinate, so we end up with four quadrants in the two-dimensional plot. Only in the bottom-right quadrant (32x32 cells) are both MSBs set:
\(\begin{array}{c|c} 0 \quad 0 & 0 \quad 1 \\ \hline 1 \quad 0 & \color{crimson}1 \quad 1 \end{array}\)
In other words, plotting the MSB of x & y nets us the following:
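If you’d rather see this than take it on faith, here’s a small parameterized sketch (again an illustration, not part of the original one-liner) that marks every cell where a chosen bit of x & y is set. With BIT set to 5 it paints the lone 32x32 quadrant; note that it marks the removed cells, so the colors are inverted relative to the final render. Changing BIT to 4, 3, and so on reproduces the patterns described in the next few steps.

#include <stdint.h>
#include <stdio.h>

#define LEN (1 << 6)
#define BIT 5            /* Which bit of x & y to plot; 5 is the MSB. */

int main() {

  for (uint32_t y = 0; y < LEN; y++) {

    for (uint32_t x = 0; x < LEN; x++)
      printf("%s", (((x & y) >> BIT) & 1) ? "MM" : "  ");

    putchar('\n');

  }

}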
Next, let’s have a look at the second most significant bit (#4). The value cycles at twice the frequency of the MSB, so the x axis is divided into four sections. The same happens on the y axis, producing a pattern of four 16x16 squares where bit #4 is set in the output of x & y. They’re pictured below, superimposed on the previously-marked bottom-right quadrant for the MSB:
After that, bit #3 toggles eight times along each axis, lighting up the following regular grid of 8x8 blocks:

The same goes for bit #2, lighting up a regular pattern of 4x4 blocks:
After two more steps, we end up with the following result; the dark blue cells are coordinates that were never lit up in any of the passes, netting the familiar triangle:
In effect, the algorithm is an iterative block-removal approach in disguise; it just doesn’t look that way because the passes are “parallelized” by the arithmetic logic unit of a CPU — at least up to the maximum hardware-supported bit width of an integer.
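To make those passes explicit, here’s one way to spell out the algorithm the slow way (a rewrite for illustration, not how you’d actually code it): one pass per bit, recording every cell a pass lights up, then painting only the cells that no pass ever touched. It produces the same output as the one-liner, because a cell gets touched by some pass exactly when x & y is non-zero.

#include <stdint.h>
#include <stdio.h>

#define BITS 6
#define LEN  (1 << BITS)

int main() {

  static uint8_t removed[LEN][LEN];    /* 1 = cell lit up in some pass */

  /* One "pass" per bit position, mimicking iterative block removal. */
  for (int bit = 0; bit < BITS; bit++)
    for (uint32_t y = 0; y < LEN; y++)
      for (uint32_t x = 0; x < LEN; x++)
        if (((x >> bit) & 1) && ((y >> bit) & 1))
          removed[y][x] = 1;

  /* Cells never removed by any pass form the triangle, exactly as if
     we had tested (x & y) != 0 in a single step. */
  for (uint32_t y = 0; y < LEN; y++) {

    for (uint32_t x = 0; x < LEN; x++)
      printf("%s", removed[y][x] ? "  " : "MM");

    putchar('\n');

  }

}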