*01.00* About
Welcome to quaternion.cafe! A portal into the world of sensor fusion with quaternions. This vim-inspired tutorial has interactive code editors and 3D visualizations scattered throughout. The content is centered around fusing and integrating a gyroscope with an accelerometer, and explains (with code) how to accomplish this task. All the visualizations use real datasets from an actual IMU to illustrate some of the concepts.
I spent the last 5 years making this IMU-based telescope accessory (looking for beta testers!) and during that time I had to learn everything from scratch. The knowledge has been synthesized. Beep boop. And thus quaternion.cafe was born. There's a lot of information about sensor fusion that nobody seems to talk about, so I'm trying to fill some gaps.
Table of Contents
*02.00* Quaternions?
What is a quaternion? Well, don't be alarmed, but a quaternion is a 4D representation of a rotation in 3D space. It contains a scalar component (often denoted as w) and a 3D vector. Putting it together, you get: (w, x, y, z).
Quaternions are used heavily in many industries such as computer graphics, video games, virtual-reality headsets, and more.
Why?
- Avoids the gimbal lock that's notorious with Euler-angle methods
- Fast to compute. It's just a couple multiplications in a trench coat
- Smooth interpolation! You can slerp between orientations
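To give a taste of that last point, here's a minimal slerp sketch. The `Quat` tuple type and `slerp` helper below are my own illustration, not part of this site's library:

```typescript
type Quat = [number, number, number, number]; // [w, x, y, z]

// Spherical linear interpolation between two unit quaternions.
// t=0 returns a, t=1 returns b; values in between trace the
// shortest arc on the unit hypersphere at constant angular speed.
function slerp(a: Quat, b: Quat, t: number): Quat {
  let dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
  // q and -q encode the same rotation; flip one if needed so we
  // interpolate the short way around.
  let sign = 1;
  if (dot < 0) { dot = -dot; sign = -1; }
  if (dot > 0.9995) {
    // Nearly parallel: fall back to normalized linear interpolation.
    const out = a.map((v, i) => v + t * (sign * b[i] - v)) as Quat;
    const n = Math.hypot(...out);
    return out.map(v => v / n) as Quat;
  }
  const theta = Math.acos(dot);
  const wa = Math.sin((1 - t) * theta) / Math.sin(theta);
  const wb = Math.sin(t * theta) / Math.sin(theta);
  return a.map((v, i) => wa * v + wb * sign * b[i]) as Quat;
}
```

Slerping halfway between identity and a 90° rotation lands exactly on the 45° rotation, which is the whole appeal: no weird speed-up or slow-down mid-animation.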
*02.01* The Basics
So if a quaternion has 4 components (w, x, y, z), what does each number mean? Quaternions are similar to the axis-angle representation of rotation, but they are encoded a little differently. Where axis-angle is the rotation amount and the direction of rotation (θ, n̂), a quaternion is [cos(θ/2), sin(θ/2)n̂].
Here's a little function that might explain it better.
/* Produce a quaternion from an axis-angle representation. */
function quat_from_axis(axis: number[], theta: number) {
  const halfcos = Math.cos(theta * 0.5);
  const halfsin = Math.sin(theta * 0.5);
  return new Quaternion(
    halfcos,
    axis[0] * halfsin,
    axis[1] * halfsin,
    axis[2] * halfsin,
  );
}
If you'd like to know more about the specifics behind quaternion mathematics, I highly recommend chapter 8 of this online textbook. The rest of this article will focus on applying quaternions to real sensor data.
*02.02* Code Library
The code editors you'll find on this website have access to a very small quaternion library that I've created. You can find similar functions in nearly every quaternion library out there. Here is the basic cheatsheet.
/* Produces a pure quaternion, where the `w` element is 0 and the
   vector part is populated with the provided x, y, z values. */
quat_pure(x: number, y: number, z: number);

/* Normalize a quaternion by dividing all the elements by the square
   root of the sum of its squared components. */
quat_normalize(q: Quaternion);

/* Produce a quaternion from an axis-angle representation. Note that
   the resulting quaternion must be normalized. */
quat_from_axis(axis: number[], theta: number);

/* Produce a quaternion from the cross product between two vectors.
   Note that the resulting quaternion must be normalized. */
quat_from_two_vectors(vector_a: number[], vector_b: number[]);

/* Produce a quaternion from euler angles. */
quat_from_euler(x: number, y: number, z: number);

/* Multiply two quaternions together. */
quat_mult(q1: Quaternion, q2: Quaternion);

/* Multiply some scalar value across each component of the quaternion. */
quat_product(q: Quaternion, val: number);

/* Add two quaternions together, component-wise. */
quat_add(q1: Quaternion, q2: Quaternion);

/* Subtract two quaternions, component-wise. */
quat_sub(q1: Quaternion, q2: Quaternion);

/* Compute the conjugate of a quaternion. */
quat_conj(q: Quaternion);
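For instance, `quat_normalize` boils down to something like this (a sketch of the idea under my own naming, not the site's actual source):

```typescript
type Quat = [number, number, number, number]; // [w, x, y, z]

// Normalize: divide every component by the quaternion's length
// (the square root of the sum of the squared components) so it
// lands back on the unit hypersphere. Repeated integration steps
// slowly push a quaternion off unit length, which is why the
// fusion examples later re-normalize after every update.
function quatNormalize(q: Quat): Quat {
  const n = Math.hypot(q[0], q[1], q[2], q[3]);
  return [q[0] / n, q[1] / n, q[2] / n, q[3] / n];
}
```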
*02.03* Rotating a Quaternion
If you're familiar with euler angles you might have seen Rodrigues' rotation or perhaps rotation matrices. These are formulas to rotate an arbitrary vector by some axis-angle. Quaternions can do something similar.
For example, here's the code to rotate an arbitrary vector by 30 degrees about the Y axis.
// Define how much we want to rotate the base vector by
let q_rotationAmount = quat_from_euler(0, 30, 0);

// Define the base vector
let q_vec = quat_pure(10, 10, 10);

// Apply the quaternion rotation method
q_vec = quat_mult(q_rotationAmount, q_vec);
q_vec = quat_mult(q_vec, quat_conj(q_rotationAmount));
Here's the math notation for that formula.
\[ p \otimes Q \otimes p' \]
In English, you take the pure vector quaternion Q, multiply it on the left by some rotation quaternion p, and then multiply the result on the right by the conjugate of p (the p' in the formula). This "sandwich method" can rotate any quaternion by any other quaternion.
We'll be using this technique to create a quaternion that can convert between the local frame and the earth frame.
*03.00* Orientation
What do we mean by 3D orientation? Well, imagine wearing a VR headset. That headset needs to know where you are looking in the virtual world and update as you move around. To do this, it will use an array of sensors to compute and track orientation.
There are generally 2 sensors you need to accomplish this: an accelerometer and a gyroscope. The accelerometer is a sensor that can detect the direction of gravity relative to the body coordinate system. It's constrained to roll and pitch calculations; any motion about the yaw axis will be meaningless to an accelerometer.
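To make that roll/pitch point concrete, here's a small sketch (my own helper, not from the site's library) that estimates tilt from a single accelerometer sample. It assumes the only acceleration present is gravity and a z-up body frame when level:

```typescript
// Estimate roll and pitch (in radians) from one accelerometer
// sample. Gravity pins down two angles; rotating about the gravity
// vector itself (yaw) leaves the reading unchanged, which is why
// yaw is unrecoverable here.
function tiltFromAccel(ax: number, ay: number, az: number) {
  const roll = Math.atan2(ay, az);
  const pitch = Math.atan2(-ax, Math.hypot(ay, az));
  return { roll, pitch }; // yaw is unobservable from gravity alone
}
```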
The gyroscope can detect how the device is rotating in 3D space (often reported as angular rates in degrees per second). And, yeah, there's a secret third type of sensor that's often employed called a magnetometer, which can sense the earth's magnetic field. This one is typically used to measure yaw, but it is very susceptible to hard- and soft-iron distortions.
Each sensor has a cornucopia of challenges. For example, MEMS gyroscopes often have a variable bias that depends on time (random walk), temperature, and even the earth's rotation. Accelerometers are very noisy. And magnetometers require wizardry to calibrate (and are highly susceptible to surrounding magnetic interference).
But these devices are the foundation of any modern MEMS-based orientation system.
*03.01* Euler integration
Now let's suppose we have a gyro that's returning angular rates of motion. How do we integrate those values and keep track of the total rotation accumulated over time? There are many ways to numerically integrate the values. You may find Runge-Kutta to be practical if your device is spinning quickly (yep, you can use this method on quaternions!). But simple Euler integration is also acceptable. The formula for Euler integration is:
\[ \dot{q}(t) = \tfrac{1}{2}\,\Omega\!\big(\omega(t)\big)\,q(t) \]
Or, in English: the rate of change of the orientation quaternion is half the angular-rate quaternion multiplied by the orientation you're integrating into. Scale that rate by the timestep and accumulate it every sample. This paper[pdf] has all the details if you want to better understand the math.
> I grow tired of these symbols. Show me the code.
Alright, I'm going to mount an IMU to a telescope and sample it at 200Hz while I rotate the platform 90 degrees horizontally. Let's see what it looks like when we integrate those values.
/* This function is automatically called and loaded into the 3D
   visualization. It should return an array of quaternions to be
   rendered. */
function process() {
  const rows = loadDataset("horizontal.json");
  let results = [];

  // Quaternion to hold the integrated values
  let q_body = new Quaternion(1, 0, 0, 0);

  for (const row of rows) {
    /* Each row contains ax,ay,az,gx,gy,gz,bx,by,bz,odr
       - accelerometer (x,y,z)
       - gyroscope (x,y,z)
       - bias offset (x,y,z)
       - output data rate */
    const dt = row.odr;
    const data = [
      rads(row.gx - row.bx),
      rads(row.gy - row.by),
      rads(row.gz - row.bz),
    ];

    let q_gyr = quat_pure(data[0], data[1], data[2]);
    const q_dot = quat_mult(q_body, q_gyr);
    const q_delta = quat_product(q_dot, 0.5 * dt);
    q_body = quat_sub(q_body, q_delta);
    q_body = quat_normalize(q_body);

    results.push(q_body.copy());
  }
  return results;
}
> I thought you said you were moving the telescope horizontally?
I did! So why is the cube moving diagonally? This brings us to our first major lesson: the gyroscope (and accelerometer) measure in the local frame of reference. That means the motion they sense will be relative to the weird angle the device is mounted at, not relative to our earthly vantage point. So how do we correct for this?
*03.02* Tilt Compensation
With quaternions, the solution here will be domain-specific. Since this IMU is mounted to a fixed telescope, all we need to do is create a quaternion that can convert between the body coordinate system and the earth coordinate system.
To do this, you can sample the accelerometer at the start of the maneuver and calculate the cross product of that accelerometer reading with the ideal gravity vector (0, 0, 1). The resulting quaternion can then be used to apply this coordinate-system conversion. Pretty rad!
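The library's `quat_from_two_vectors` does the heavy lifting here. In case you're curious what's inside, here's one common way to implement it (a sketch under my own naming, not the site's actual source):

```typescript
type Quat = [number, number, number, number]; // [w, x, y, z]

// Build the quaternion that rotates unit vector `a` onto unit
// vector `b`. Uses the half-angle trick: set w = 1 + a·b and the
// vector part to a×b, then normalize. The normalize step is what
// converts the full-angle cos/sin into the half-angle form.
function quatFromTwoVectors(a: number[], b: number[]): Quat {
  const dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const cross = [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
  const q: Quat = [1 + dot, cross[0], cross[1], cross[2]];
  const n = Math.hypot(...q);
  // Degenerate case: a and b are exactly opposite, so the rotation
  // axis is ambiguous; a real library picks any perpendicular axis.
  if (n < 1e-9) return [0, 1, 0, 0];
  return q.map(v => v / n) as Quat;
}
```

Feeding it the x axis and the y axis should produce a 90° rotation about z, which is a handy sanity check for any implementation like this.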
Here's an example which modifies our previous attempt and applies tilt-compensation.
function process() {
  const rows = loadDataset("horizontal.json");
  let results = [];
  let q_body = new Quaternion(1, 0, 0, 0);

  /* Extract the first accelerometer vector */
  const first_acc = [rows[0].ax, rows[0].ay, rows[0].az];

  /* Compute a quaternion that translates the body frame-of-reference
     into the earth frame by applying the cross-product of the first
     accelerometer reading with the ideal gravity vector. */
  const q_bodyToEarth = quat_from_two_vectors(first_acc, [0, 0, 1]);

  for (const row of rows) {
    const dt = row.odr;
    const data = [
      rads(row.gx - row.bx),
      rads(row.gy - row.by),
      rads(row.gz - row.bz),
    ];

    // Euler integration
    let q_gyr = quat_pure(data[0], data[1], data[2]);
    const q_dot = quat_mult(q_body, q_gyr);
    const q_delta = quat_product(q_dot, 0.5 * dt);
    q_body = quat_sub(q_body, q_delta);
    q_body = quat_normalize(q_body);

    /* Tilt compensation: use the rotation technique to convert our
       accumulated body orientation into the earth coordinate frame. */
    let q_world = q_body.copy();
    q_world = quat_mult(q_bodyToEarth, q_world);
    q_world = quat_mult(q_world, quat_conj(q_bodyToEarth));

    results.push(q_world.copy());
  }
  return results;
}
Now that's pretty cool! It even looks like I nailed the 90 degrees when performing the maneuver.
The specific method I used is fine for this example, but there's a better way. In a real system, you would estimate your initial quaternion with the accelerometer and then update that estimate by fusing in the gyroscope readings, applying a filter to keep it synced with reality.
To learn more, you can research these keywords:
- Complementary filter
- Mahony filter
- Kalman filter
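As a starting point, here's a sketch of the simplest of the three: a complementary filter, shown in scalar form on roll to keep it readable (the helper name, parameters, and the default `alpha` are my own; real implementations work on full quaternions):

```typescript
// One step of a scalar complementary filter for roll.
// Blends fast-but-drifting gyro integration with noisy-but-absolute
// accelerometer tilt. alpha near 1 mostly trusts the gyro, while
// (1 - alpha) slowly pulls the estimate back toward the
// accelerometer, cancelling long-term drift.
function complementaryStep(
  rollPrev: number,  // previous roll estimate (rad)
  gyroRate: number,  // roll rate from the gyro (rad/s)
  accelRoll: number, // roll computed from the accelerometer (rad)
  dt: number,        // timestep (s)
  alpha = 0.98,
): number {
  return alpha * (rollPrev + gyroRate * dt) + (1 - alpha) * accelRoll;
}
```

With the device held still (zero gyro rate), repeated steps converge to whatever the accelerometer reports, which is exactly the drift-correction behavior you want.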
*04.00* Chirality
Every sensor has an axis mapping that determines which face points in which direction and whether rotation about each axis is positive or negative. Some folks call this "handedness"; I call it chirality. The handedness of your system determines how you will integrate the values.
When choosing a sensor package, make sure you understand the axis mapping; you may even need to remap the sensor readings in software so they match the coordinate system your quaternion library expects. Check out section [06.00] for a real-world example of this exact problem.
This crops up most frequently when you convert the quaternion to and from euler angles (typically for "presentation", but there are plenty of other reasons you might do this). A mismatched chirality will result in all kinds of weird behavior.
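Here's what such a remap typically looks like in practice. The specific mapping below is made up for illustration; yours comes from your sensor's datasheet and mounting:

```typescript
// Remap a raw sensor sample from the chip's axes to the axes your
// quaternion library expects. This example mapping (sensor x -> -y,
// sensor y -> x, z unchanged) is a 90 degree rotation about z, so
// it preserves handedness. A lone sign flip or a bare axis swap
// would NOT: each of those changes the chirality of the system,
// which is why remaps usually come in swap-plus-negate pairs.
function remapSample(s: { x: number; y: number; z: number }) {
  return { x: -s.y, y: s.x, z: s.z };
}
```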
*04.01* Starting Orientation
As mentioned in section [03.02], the preferred way to implement sensor fusion is by estimating the initial quaternion and then keeping that estimate up-to-date. This will improve accuracy because each time the sensor changes position, the body coordinate system also changes. If you didn't take this into account, motion may be incorrectly attributed to the wrong axis.
That's where a complementary filter or a Mahony filter comes into play. In fact, a fun little example of this would be to change the starting orientation and watch how the cube integrates incorrectly.
Let's try it!
function process() {
  const rows = loadDataset("horizontal.json");
  let results = [];
  let q_body = quat_from_euler(rads(88), 0, 0);

  /* Extract the first accelerometer vector */
  const first_acc = [rows[0].ax, rows[0].ay, rows[0].az];

  /* Compute a quaternion that translates the body frame-of-reference
     into the earth frame by applying the cross-product of the first
     accelerometer reading with the ideal gravity vector. */
  const q_bodyToEarth = quat_from_two_vectors(first_acc, [0, 0, 1]);

  for (const row of rows) {
    const dt = row.odr;
    const data = [
      rads(row.gx - row.bx),
      rads(row.gy - row.by),
      rads(row.gz - row.bz),
    ];

    // Euler integration
    let q_gyr = quat_pure(data[0], data[1], data[2]);
    const q_dot = quat_mult(q_body, q_gyr);
    const q_delta = quat_product(q_dot, 0.5 * dt);
    q_body = quat_sub(q_body, q_delta);
    q_body = quat_normalize(q_body);

    /* Tilt compensation: use the rotation technique to convert our
       accumulated body orientation into the earth coordinate frame. */
    let q_world = q_body.copy();
    q_world = quat_mult(q_bodyToEarth, q_world);
    q_world = quat_mult(q_world, quat_conj(q_bodyToEarth));

    results.push(q_world.copy());
  }
  return results;
}
All I changed was the starting orientation and now the cube has some weird sway. It's definitely a silly example, but illustrates the point nonetheless.
*04.02* Debugging
It wasn't until I made a 3D visualization tool that I could characterize the errors inherent in my earlier firmware. I'm sure more rigorous systems exist, but here's the approach I took to help understand how my system was behaving.
First, I added a NAND Flash unit to my circuit. Then I built a custom binary file format that allowed me to stream sensor packets at native speed directly to the memory unit. With this telemetry system in place, it was pretty easy to recover maneuver information and download it to my computer.
From there, I made a few tools that could read the binary files and replay the sensor readings. This minimal infrastructure was insanely helpful because now I could tweak the integration formula and see how it affects the same dataset. Basically a hardware unit test.
*05.00* References
So there you have it! My crash course guide to sensor fusion. I'll leave you with some of my favorite resources that have been instrumental in helping me demystify this complicated topic.
*06.00* Bonus Content
Here's a dataset where I move the telescope vertically up and down. I promise the motion was fairly precise, so why does it lean so far to the side? This problem stumped me for years (I'm not joking). But it all came down to two facts.
1. In this dataset, the device is mounted to the telescope with a fixed 67.8° roll-angle distortion (a byproduct of the mounting port).
2. And the chirality is messed up for my sensor.
According to the datasheet for my sensor, the Y axis senses motion "backwards". There's a fun fact about this...
> Flipping the sign of any one component changes the handedness of the system.
It also changes the interpretation of all rotations. So be careful and intentional when remapping axes.
To solve this challenge, if you were to flip the sign on the z component and pre-multiply an axis-angle of [67.8°, (0, 1, 0)] across all sensor readings (gyroscope and accelerometer), the maneuver will look much better.
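Sketched in code, that correction looks something like this (the helper names are mine; it just combines the axis-angle construction and sandwich rotation from sections [02.01] and [02.03] into one self-contained snippet):

```typescript
type Quat = [number, number, number, number]; // [w, x, y, z]

// Hamilton product of two quaternions.
function qMult(a: Quat, b: Quat): Quat {
  return [
    a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3],
    a[0] * b[1] + a[1] * b[0] + a[2] * b[3] - a[3] * b[2],
    a[0] * b[2] - a[1] * b[3] + a[2] * b[0] + a[3] * b[1],
    a[0] * b[3] + a[1] * b[2] - a[2] * b[1] + a[3] * b[0],
  ];
}

// Axis-angle to quaternion (half-angle encoding, unit axis assumed).
function qFromAxis(axis: number[], theta: number): Quat {
  const s = Math.sin(theta / 2);
  return [Math.cos(theta / 2), axis[0] * s, axis[1] * s, axis[2] * s];
}

// The fixed 67.8 degree mounting offset about the y axis.
const q_mount = qFromAxis([0, 1, 0], (67.8 * Math.PI) / 180);

// Correct one raw reading: flip the z sign, then rotate the vector
// through the mounting offset with the sandwich product q * v * q*.
function correctReading(v: number[]): number[] {
  const p: Quat = [0, v[0], v[1], -v[2]]; // sign flip on z
  const conj: Quat = [q_mount[0], -q_mount[1], -q_mount[2], -q_mount[3]];
  const r = qMult(qMult(q_mount, p), conj);
  return [r[1], r[2], r[3]];
}
```

A quick sanity check: a vector along the y axis lies on the rotation axis, so it should pass through unchanged, and any corrected vector should keep its length.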
Good luck and thanks for reading!
function process() {
  const rows = loadDataset("vertical.json");
  let results = [];
  let q_body = new Quaternion(1, 0, 0, 0);

  /* Extract the first accelerometer vector */
  const first_acc = [rows[0].ax, rows[0].ay, rows[0].az];

  /* Compute a quaternion that translates the body frame-of-reference
     into the earth frame by applying the cross-product of the first
     accelerometer reading with the ideal gravity vector. */
  const q_bodyToEarth = quat_from_two_vectors(first_acc, [0, 0, 1]);

  for (const row of rows) {
    const dt = row.odr;
    const data = [
      rads(row.gx - row.bx),
      rads(row.gy - row.by),
      rads(row.gz - row.bz),
    ];

    // Euler integration
    let q_gyr = quat_pure(data[0], data[1], data[2]);
    const q_dot = quat_mult(q_body, q_gyr);
    const q_delta = quat_product(q_dot, 0.5 * dt);
    q_body = quat_sub(q_body, q_delta);
    q_body = quat_normalize(q_body);

    // Tilt compensation
    let q_world = q_body.copy();
    q_world = quat_mult(q_bodyToEarth, q_world);
    q_world = quat_mult(q_world, quat_conj(q_bodyToEarth));

    results.push(q_world.copy());
  }
  return results;
}