State estimation using accelerometer

Are you considering ONLY when the rocket is going up, boost + coast, then ignoring everything after chute ejection?
If so, then gyro rates and angle changes should not be too severe.

The tutorial you linked to only uses the accelerometer when NOT moving.
Did you check the papers I linked to in previous posts?
SparkyVTFlyer's paper and code are a better place to study.
 
You can always (begrudgingly) dunk that hand into those deep pockets and purchase a higher-end solution :) Something like an XSENS MTi-1, or MTi-3 if you need the mag reference, or the MTi-7 if you want to fuse it with a NEO-M9N or whatever GNSS. It outputs at high rates (and fused), with the option of quaternions, Euler angles, and other formats.

TP
 
UPDATE
I have soldered up a new altimeter using some of your suggestions. I am using a KX134 high-g accelerometer and a BMP388 pressure sensor. I am using the Arduino Nano 33 IoT to control all of this, and it has a built-in LSM6DS3 IMU that I am using as well. I used this tutorial
https://toptechboy.com/9-axis-imu-lesson-6-determine-tilt-from-3-axis-accelerometer/
to calculate angles using the onboard gyros. My problem is that the angles become hard to use when the rocket is rolling, as the X and Y axes are constantly changing.
Does anyone know a solution to this problem?
Thanks for your help.

https://github.com/SparkyVT/HPR-Rocket-Flight-Computer/blob/master/Main Code/Rotation.ino
 
Oh, that's me. Yes, that little formula I came up with works well, and it can calculate rotation data from the gyro quickly and accurately with only an Arduino Uno.

Since I wrote that, I've made many improvements to the code. I have a Github site with the latest version here: https://github.com/SparkyVT/HPR-Rocket-Flight-Computer

Thanks for sharing that. I took a look at the Rotation.ino file, and in particular the getQuatRotn() function. This method looks like it has a much lower computational cost than other quaternion incremental updates I have seen. Do you have a source for the equations? Do you know what the limitations are, if any, for accuracy?

The speed trig looks like a smart approach, too, trading some code space for speed.

It seems like you could just do the quaternion update for every gyro sample (e.g. 1000 Hz), and then compute the rotation matrix and convert to flight angle and roll at a lower frequency if desired. When you said that an Arduino Uno could keep up with doing everything each cycle, was that with the 8-bit ATmega running at 16 MHz and no floating-point unit?
 
Do you have a source for the equations? Do you know what the limitations are, if any, for accuracy?

This is my own implementation of quaternion differentials; I didn't get it from any web source. I'm a mathematics undergrad, but I was surprised at how long it took me to understand this and get it right. I'm not sure I still completely understand it.🙃 If you notice, I wasted some RAM to maintain consistency between the array positions and the quaternion dimensions.

As for accuracy, I haven't done any mathematical estimates of the error. Instead, I optimized the output with some Python scripts using LOTS of real flight data. I have another degree in Ops Research with a specialization in numerical optimization. It's intended to be a 32-bit floating-point implementation for maximal accuracy. It's the best I could get it when compared to a 64-bit floating-point implementation of the same algorithm updated at every cycle. Honestly, the gyro offset is probably an order of magnitude bigger source of error than this code.

It seems like you could just do the quaternion update for every gyro sample (e.g. 1000 Hz), and then compute the rotation matrix and convert to flight angle and roll at a lower frequency if desired.

I found that updating the quaternions 100 times per second gave the best accuracy. I think this is because at 1000 Hz the 32-bit pieces get really small and the error builds. Accuracy is preserved by accumulating the partial rotations at every cycle, but only updating the quaternion 100 times per second. I haven't done any mathematical error estimates (yet), so I could be wrong, but when compared to my other method I became convinced it was "close enough" and focused my attention on other parts of the code.
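For anyone following along, here is a minimal sketch of the pattern described above (sum small body-rotation increments at the full gyro rate, fold them into the attitude quaternion at a lower rate). It is my own paraphrase in plain C++ with made-up names, not the getQuatRotn() code itself:

#include <cmath>

// Illustrative only: a generic incremental quaternion attitude update driven
// by gyro rates.  Not the implementation from Rotation.ino.
struct Quat { float w, x, y, z; };

// Rotation increments summed every gyro sample (e.g. 1000 Hz).
static float dThetaX = 0.0f, dThetaY = 0.0f, dThetaZ = 0.0f;

// Call at the full gyro rate: gx, gy, gz in rad/s, dt in seconds.
void accumulateGyro(float gx, float gy, float gz, float dt) {
    dThetaX += gx * dt;
    dThetaY += gy * dt;
    dThetaZ += gz * dt;
}

// Call at the lower rate (e.g. 100 Hz): fold the accumulated small rotation
// into the attitude quaternion, then renormalize and reset the accumulators.
void updateQuat(Quat &q) {
    float angle = sqrtf(dThetaX*dThetaX + dThetaY*dThetaY + dThetaZ*dThetaZ);
    Quat dq = {1.0f, 0.0f, 0.0f, 0.0f};
    if (angle > 1e-8f) {
        float s = sinf(0.5f * angle) / angle;          // scales the axis part
        dq = { cosf(0.5f * angle), dThetaX*s, dThetaY*s, dThetaZ*s };
    }
    Quat r;                                            // Hamilton product q * dq
    r.w = q.w*dq.w - q.x*dq.x - q.y*dq.y - q.z*dq.z;
    r.x = q.w*dq.x + q.x*dq.w + q.y*dq.z - q.z*dq.y;
    r.y = q.w*dq.y - q.x*dq.z + q.y*dq.w + q.z*dq.x;
    r.z = q.w*dq.z + q.x*dq.y - q.y*dq.x + q.z*dq.w;
    float n = sqrtf(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    q = { r.w/n, r.x/n, r.y/n, r.z/n };
    dThetaX = dThetaY = dThetaZ = 0.0f;
}

Renormalizing after each update keeps rounding error from creeping into the quaternion length.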

When you said that an Arduino Uno could keep up with doing everything each cycle, was that with the 8-bit ATmega running at 16 MHz and no floating-point unit?

Yes on the Uno, but it used another rotation routine called getDCM2DRotn(), which is still in the Rotation.ino file. This is the one I wrote the NAR research paper on. I originally ran my code on an 8-bit Uno at 16 MHz and quaternions were too slow. I came up with that algorithm after studying the Direction Cosine Matrix method over a TON of beers one night. I was surprised it worked, and it took me another month to write the mathematical proof of why it worked. I'm still not totally sure that proof is correct, but it's in the NAR paper. The algorithm is very fast on an Uno because it uses trigonometric small-angle approximations and only a few floating-point operations.
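For readers who haven't seen the small-angle idea, here is an illustrative cousin of it (not getDCM2DRotn() itself): track the launch-pad "up" direction in body axes and rotate it every sample with sin(x) ≈ x and cos(x) ≈ 1, so each step costs only a cross product and a normalization. The names are mine.

#include <cmath>

// Track the pad-vertical direction, expressed in body axes.  For a vector that
// is fixed in the world frame, its body-frame coordinates obey dv/dt = -w x v,
// and the small-angle approximation reduces each step to one cross product.
struct Vec3 { float x, y, z; };

static Vec3 up = {0.0f, 0.0f, 1.0f};   // assumes body Z is along the rocket on the pad

// gx, gy, gz: body rates in rad/s; dt: gyro sample interval in seconds.
void propagateUp(float gx, float gy, float gz, float dt) {
    float wx = gx * dt, wy = gy * dt, wz = gz * dt;   // small rotation angles
    Vec3 d = { -(wy*up.z - wz*up.y),                  // -(w x up)
               -(wz*up.x - wx*up.z),
               -(wx*up.y - wy*up.x) };
    up.x += d.x;  up.y += d.y;  up.z += d.z;
    float n = sqrtf(up.x*up.x + up.y*up.y + up.z*up.z);
    up.x /= n;  up.y /= n;  up.z /= n;                // keep it a unit vector
}

// Tilt off vertical in radians; unaffected by roll about the body axis.
float tiltAngle() { return acosf(up.z); }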

Below are a few plots comparing the two methods from one of my flights this past weekend. The error plot compares the two methods against a 64-bit implementation of quaternion differentials updated at every sample, which is assumed to be as precise as possible. DCM2D tends to lose accuracy the closer it gets to 90 degrees, which is expected since it uses Euler angles, which have the gimbal-lock problem at 90-degree rotations. Since I didn't care about any rotations past apogee, the sacrifice was worth the performance, especially on an Uno.

[Attached plots: Quaternion Pitch/Yaw, DCM2D Pitch/Yaw, DCM2D vs Quaternion, Error plot]
 
I'm a mathematics undergrad, but I was surprised at how long it took me to understand this and get it right. I'm not sure I still completely understand it.🙃

That makes me feel better about not figuring out quaternions.
Thanks for the explanations and your code on GitHub.
 
As for accuracy, I haven't done any mathematical estimates of the error. Instead, I optimized the output with some Python scripts using LOTS of real flight data. I have another degree in Ops Research with a specialization in numerical optimization. It's intended to be a 32-bit floating-point implementation for maximal accuracy. It's the best I could get it when compared to a 64-bit floating-point implementation of the same algorithm updated at every cycle. Honestly, the gyro offset is probably an order of magnitude bigger source of error than this code.

...

Below are a few plots comparing the two methods from one of my flights this past weekend. The error plot compares the two methods against a 64-bit implementation of quaternion differentials updated at every sample, which is assumed to be as precise as possible. DCM2D tends to lose accuracy the closer it gets to 90 degrees, which is expected since it uses Euler angles, which have the gimbal-lock problem at 90-degree rotations. Since I didn't care about any rotations past apogee, the sacrifice was worth the performance, especially on an Uno.

Fantastic work. Did any of your test cases have a lot of spin and/or a lot of weathercocking? Which gyro was used in your test data?
 
Did any of your test cases have a lot of spin and/or a lot of weathercocking? Which gyro was used in your test data?

The two gyros I tested are the L3GD20H for my old units and the LSM9DS1 for my new ones. They both work well, and I like the LSM9DS1 because it has a 24 G digital accelerometer. It's advertised as a 16 G accelerometer, but if you actually inspect the gain in the data sheet, it's capable of 24 Gs.

Yes, some of the test cases spun quite a bit, up to a max of about 1500 dps, and some tip over at apogee. Most of the time the results are quite accurate; sometimes they are not. I really only care when flying two-stagers, so that I can shut down second-stage ignition if the user-defined rotation limit is exceeded.

I think the inaccuracy is because of an improper calculation of the gyro offset. Even just a few LSBs off will make the algorithm drift at the 2000 dps gain setting over a 20-30 second flight. There's still a lot of room for improvement. I'm thinking of running a test using the onboard temperature sensor to collect the offset over a wide temperature range. I can store this in EEPROM and do a linear interpolation for the offset based on temperature. This will probably be more accurate than my "try to hold the rocket perfectly still on the pad while I sample the sensor" method. That's on my list of improvements to code up.
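As a sketch of what that temperature compensation could look like (the table values, names, and layout are placeholders, not the actual flight code, and the table itself would live in EEPROM):

// One gyro axis: offsets (in LSB) sampled at known die temperatures, linearly
// interpolated at runtime.  All numbers below are made-up examples.
struct OffsetPoint { float tempC; float offsetLSB; };

static const OffsetPoint table[] = {
    { -10.0f, -14.0f },
    {  10.0f,  -9.5f },
    {  30.0f,  -4.0f },
    {  50.0f,   1.5f },
};
static const int N = sizeof(table) / sizeof(table[0]);

// Return the interpolated offset for the current die temperature.
float gyroOffsetAt(float tempC) {
    if (tempC <= table[0].tempC)   return table[0].offsetLSB;
    if (tempC >= table[N-1].tempC) return table[N-1].offsetLSB;
    for (int i = 1; i < N; ++i) {
        if (tempC <= table[i].tempC) {
            float f = (tempC - table[i-1].tempC) /
                      (table[i].tempC - table[i-1].tempC);
            return table[i-1].offsetLSB +
                   f * (table[i].offsetLSB - table[i-1].offsetLSB);
        }
    }
    return table[N-1].offsetLSB;   // unreachable, keeps the compiler happy
}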

I also found that a variable gain is a bad idea. The offsets are not proportional to the gain, and the onboard ADC takes several milliseconds to adjust to the new gain setting. Until the adjustment is complete, the gyro gives inaccurate readings.

Here's a video where the rocket spun a bit more and tipped over at the top. The apogee charges go off right when the rocket hits 90 degrees, and you can see this in the data at the end of the clip.

 
I love that a post from a 7th grader doing a cool project has spawned technical conversations with Aerospace engineers and mathematicians about the cool work they are doing. Great job to all of you who have posted in this thread and great job to @diyaerospace for knowing more as a 7th grader than I ever will. Rocketry is AWESOME!

Sandy.
 
The two gyros I tested are the L3GD20H for my old units and the LSM9DS1 for my new ones. They both work well, and I like the LSM9DS1 because it has a 24 G digital accelerometer. It's advertised as a 16 G accelerometer, but if you actually inspect the gain in the data sheet, it's capable of 24 Gs.

I also found that a variable gain is a bad idea. The offsets are not proportional to the gain, and the onboard ADC takes several milliseconds to adjust to the new gain setting. Until the adjustment is complete, the gyro gives inaccurate readings.




A couple of ideas for improving accuracy:

1. Compare the magnetometer vector measured by the LSM9DS1 to what would be expected based on the rocket attitude estimate, and rotate the estimate in that direction (see the sketch below)
2. Have you seen any changes in the gyro biases with orientation, which would suggest a cross-coupling between acceleration and measured gyro rates?
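For idea 1, the usual complementary-filter style correction looks roughly like the sketch below. This is my own illustration with assumed names and gain, not code from any flight computer in this thread:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float n = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/n, v.y/n, v.z/n };
}

// measured:  magnetometer reading in body axes.
// predicted: Earth's field rotated into body axes using the current attitude.
// Returns a small corrective body rate (rad/s) to add to the gyro rates before
// the next attitude update; a gain on the order of 0.05 is typical.
Vec3 magCorrection(Vec3 measured, Vec3 predicted, float gain) {
    Vec3 m = normalize(measured);
    Vec3 p = normalize(predicted);
    Vec3 e = { m.y*p.z - m.z*p.y,      // e = m x p: zero when they agree
               m.z*p.x - m.x*p.z,
               m.x*p.y - m.y*p.x };
    return { gain * e.x, gain * e.y, gain * e.z };
}

Note that any rotation about the field vector itself produces zero error here, which is exactly the limitation raised in the reply below.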

-Adrian
 
Compare the magnetometer vector measured by the LSM9DS1 to what would be expected based on the rocket attitude estimate, and rotate the estimate in that direction

I've tried this, and there are some problems with the technique. Any rotation in the plane normal to the magnetic vector won't show up in the magnetic readings; there is no mathematical way around that. I have noticed that the roll closely tracks the magnetometer readings when the rocket is mostly vertical, but once it starts to tip over this is no longer the case. The magnitude of the magnetic vector also changes with attitude, which I assume is due to the ferrous all-thread in the avionics bay. I know that you can correct for this, but it isn't generalizable because it depends on the specific construction of each rocket. I'd need a calibration routine that could do this for any rocket, and honestly, the gyro orientation data seemed close enough to not be worth the effort.

Have you seen any changes in the gyro biases with orientation, which would suggest a cross-coupling between acceleration and measured gyro rates?

I haven't looked for this, so I can't answer it. Part of the reason is that it's difficult to separate the two effects with only flight data. Unless the IMU is perfectly aligned with the rocket centerline, any rotation will generate axial accelerations due to centrifugal force. Which components of axial acceleration are due to the rocket and which are due to centrifugal force? I haven't tried to do that math yet, and it will depend on the exact offset between the IMU and the rocket centerline.
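For a sense of scale, the spin-induced term at the IMU location is roughly omega-squared times the lever arm from the spin axis (plus an angular-acceleration term if the rate is changing). A back-of-the-envelope example, with the 10 mm offset being an assumption:

#include <cstdio>

int main() {
    const float spinDps = 1500.0f;                  // spin rate seen in the flight data above
    const float w = spinDps * 3.14159265f / 180.0f; // rad/s
    const float r = 0.010f;                         // assumed 10 mm IMU offset from the spin axis
    const float a = w * w * r;                      // centripetal acceleration, m/s^2
    printf("spin-induced acceleration: %.2f m/s^2 (%.2f G)\n", a, a / 9.81f);
    return 0;                                       // prints about 6.9 m/s^2, i.e. ~0.7 G
}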

I originally attempted to fix the rotation error problem by using higher precision gains available on the gyro. The idea was to use maximum precision until the rotation rate exceeded the gain level, and then switch to the next higher gain. If the offset was proportional to the gain, then I could do some simple math to correct it.

I wrote some code to calculate the gyro offsets at different gains. I ensured the unit was perfectly still and sampled the sensor. When the gain changed, I could definitely tell there was a settling period of several milliseconds. Additionally, the final offsets were not proportional to the gains; they were all over the place. I would need to sample and store the offset for each axis at each gain level to make it work. That's not terrible, 3 axes x 4 gain levels x 2 bytes = 24 bytes of EEPROM, but it was way more work than I expected.

The real killer was the settling period. In a moving rocket I'd have to forgo gyro sampling for several milliseconds whenever I changed the gain. I was afraid of what this might do to the accuracy, so I never tested it. Maybe it's not too bad. Ultimately I gave up because I felt it was "close enough" for what I needed, which was a reliable way to detect an abnormal attitude prior to firing second stages. I've seen some flight computers solve this accuracy problem by using two gyroscopes: one for roll and another for pitch/yaw.
 
Kinda. I have a calibration routine that calculates offsets once it’s installed inside the av-bay. However, I haven’t done any corrections for ferrous metals. That was harder to code than a simple offset calculation.
No, no. In a hard-iron-free zone you need to determine the gain corrections for each axis so that the magnitude of the field-strength vector is constant in any arbitrary orientation.
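In other words, after the offset and gain corrections are applied, the field magnitude should come out (nearly) the same in every orientation. A quick way to check that, with the offsets and gains being whatever a calibration routine produced:

#include <cmath>

// Apply per-axis offset and gain corrections and return the field magnitude.
// If the calibration is good, this value stays roughly constant no matter how
// the sensor is oriented.
float correctedFieldMagnitude(float mx, float my, float mz,
                              const float offset[3], const float gain[3]) {
    float x = (mx - offset[0]) * gain[0];
    float y = (my - offset[1]) * gain[1];
    float z = (mz - offset[2]) * gain[2];
    return sqrtf(x*x + y*y + z*z);
}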
 
Wow, great discussion.
My math isn't too bad, but not great. I have thought about using the magnetometer as an absolute Earth reference to measure tilt, but had no luck trying to work out the math. I have cal'd the magnetometer, and the measurements do match what the Earth field should be.

Gyro bias cal is important, or else you get a constant rotation value.
Accelerometer bias cal also counts. I am using Acc data for three things:
1- Launch rail angle before liftoff.
2- Integrate Acc data for velocity and position during boost and coast (see the sketch below).
3- Coast data to determine the true drag coefficient.
Acc is then useless once the chute ejects.
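For item 2, the basic step is numerical integration of the bias-corrected acceleration once it has been rotated into the world frame and gravity removed. A minimal sketch using trapezoidal integration (names are illustrative):

// Dead-reckon vertical velocity and altitude during boost/coast.  The caller
// is assumed to have already removed the accelerometer bias, rotated the
// measurement into the world frame using the attitude estimate, and
// subtracted gravity.
struct VerticalState { float vel; float alt; float prevAccel; };

void integrateStep(VerticalState &s, float accelVerticalWorld, float dt) {
    float prevVel = s.vel;
    s.vel += 0.5f * (s.prevAccel + accelVerticalWorld) * dt;  // trapezoid on acceleration
    s.alt += 0.5f * (prevVel + s.vel) * dt;                   // trapezoid on velocity
    s.prevAccel = accelVerticalWorld;
}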

Putting together a sensor package has been something I have always wanted to do, even back in the '60s when I first flew rockets. I finally have it, but only two flights of data. I am simply recording the data and doing all calibration and analysis post-flight.
Basically this is just for fun, yea, doing high-level math for fun...lol.
 
I haven't looked for this, so I can't answer it. Part of the reason is that it's difficult to separate the two effects with only flight data. Unless the IMU is perfectly aligned with the rocket centerline, any rotation will generate axial accelerations due to centrifugal force. Which components of axial acceleration are due to the rocket and which are due to centrifugal force? I haven't tried to do that math yet, and it will depend on the exact offset between the IMU and the rocket centerline.

I had in mind a static calibration that would measure differences in the gyro bias at +1 G vs -1 G when it's flipped. I have an LSM9DS1 that I may try this with.
 
My math isn't too bad, but not great. I have thought about using the magnetometer as an absolute Earth reference to measure tilt, but had no luck trying to work out the math.
The magnetometer reading is a vector with the magnitude of the Earth's magnetic field, pointing in whatever direction the local field does, and the outputs of the three axes should conform to the 3D Pythagorean theorem. That tells me a little trig is needed, and you should be able to very easily determine the direction of the magnetic field vector relative to the airframe. It will get a bit complicated since the actual field orientation depends on the magnetic declination at the flight site. So the magnetometer should be able to assist with determining orientation, although the process is not entirely straightforward. Airframe roll will really complicate the situation too.
 
The magnetometer reading is a vector with the magnitude of the Earth's magnetic field, pointing in whatever direction the local field does, and the outputs of the three axes should conform to the 3D Pythagorean theorem. That tells me a little trig is needed, and you should be able to very easily determine the direction of the magnetic field vector relative to the airframe. It will get a bit complicated since the actual field orientation depends on the magnetic declination at the flight site. So the magnetometer should be able to assist with determining orientation, although the process is not entirely straightforward. Airframe roll will really complicate the situation too.
It's a very complicated problem, but you cannot get absolute 3D orientation from the mag field vector alone. You can get tilt from vertical only if you are at either the magnetic north or south pole, because there the field direction is conveniently vertical. If you are static and know the declination, you can get the third orthogonal axis by taking the cross product of the mag vector and the gravity vector from your accelerometer, and get a true 3D orientation. But in flight you can't.
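To make the static case concrete, here is an illustrative sketch of building a full orientation triad from the gravity and mag vectors with cross products. It is my own example; sign conventions depend on the particular sensor:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 norm(Vec3 v) {
    float n = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/n, v.y/n, v.z/n };
}
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// down: gravity direction measured in body axes while sitting still on the pad
//       (for most accelerometers this is the negative of the raw reading).
// mag:  magnetic field measured in body axes.
// R[0..2] become a body-frame triad: "down", "east-ish", "north-ish".
void staticTriad(Vec3 down, Vec3 mag, Vec3 R[3]) {
    Vec3 d = norm(down);
    Vec3 e = norm(cross(d, mag));   // perpendicular to both gravity and the field
    Vec3 n = cross(e, d);           // completes the right-handed set
    R[0] = d;  R[1] = e;  R[2] = n;
}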
 
The HMC6343 has a very basic (in principle) magnetic hard-iron calibration system that simply adds an offset to each axis, IIRC. Of course, like a lot of things the HMC6343's datasheet says it does, it doesn't actually do it; in principle, though, the approach should get you in the right direction.
Then on the other end of the spectrum are the XSens MTi-3 and -7, with a very sophisticated, fully spherically mapped calibration option, no doubt with some interesting math behind it.

https://www.xsens.com/hubfs/Downloads/Manuals/MT_Magnetic_Calibration_Manual.pdf
TP
 
The HMC6343 has a very basic (in principle) magnetic hard-iron calibration system that simply adds an offset to each axis, IIRC. Of course, like a lot of things the HMC6343's datasheet says it does, it doesn't actually do it; in principle, though, the approach should get you in the right direction.
Then on the other end of the spectrum are the XSens MTi-3 and -7, with a very sophisticated, fully spherically mapped calibration option, no doubt with some interesting math behind it.

https://www.xsens.com/hubfs/Downloads/Manuals/MT_Magnetic_Calibration_Manual.pdf
TP
Or you can motion-calibrate any magnetic sensor with free code here...

https://learn.adafruit.com/adafruit...libration/magnetic-calibration-with-motioncal
 
I wrote some code for magnetometer calibration that runs on the embedded device. It takes advantage of the fact that you operate in a constant field and, if all you need is orientation, you don't care about the absolute magnitude of the field. It is just a variation on accelerometer calibration, where you know that the vector magnitude should always be 1 G (more or less).

As a bonus, it is easy to check readings after calibration to see if recalibration is required. Calibration requires six measurements, as different as possible.

There are other ways to do it of course.
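For what it's worth, a minimal sketch of one way a six-measurement calibration can work, using per-axis min/max readings to derive offsets and gains. This is only an illustration of the general idea and ignores cross-axis (soft-iron) effects; the routine described above may do more:

struct MagCal { float offset[3]; float scale[3]; };

// minv/maxv: per-axis minimum and maximum readings taken over six orientations
// that are as different as possible (ideally each axis pointed both along and
// against the field).
MagCal calibrateFromMinMax(const float minv[3], const float maxv[3]) {
    MagCal c;
    float halfRange[3];
    for (int i = 0; i < 3; ++i) {
        c.offset[i]  = 0.5f * (maxv[i] + minv[i]);   // hard-iron style offset
        halfRange[i] = 0.5f * (maxv[i] - minv[i]);
    }
    // scale each axis to the average half-range so the corrected magnitude is
    // the same regardless of orientation
    float avg = (halfRange[0] + halfRange[1] + halfRange[2]) / 3.0f;
    for (int i = 0; i < 3; ++i) c.scale[i] = avg / halfRange[i];
    return c;
}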
 
I wrote some code for magnetometer calibration that runs on the embedded device. It takes advantage of the fact that you operate in a constant field and, if all you need is orientation, you don't care about the absolute magnitude of the field. It is just a variation on accelerometer calibration, where you know that the vector magnitude should always be 1 G (more or less).
This sounds like what I was alluding to in my earlier post.
 
It's a very complicated problem, but you cannot get absolute 3D orientation from the mag field vector alone. You can get tilt from vertical only if you are at either the magnetic north or south pole, because there the field direction is conveniently vertical. If you are static and know the declination, you can get the third orthogonal axis by taking the cross product of the mag vector and the gravity vector from your accelerometer, and get a true 3D orientation. But in flight you can't.

That is basically what I was trying for: get the 3D vector from the mag before launch, call that the 'static Earth reference', and calculate the 3D change. Sounds easy, but it is not. I have not come up with the math.

I use the Adafruit SensorLab to cal the mag. Works great. It gives an offset for each axis (hard iron) and then 9 values (a 3x3 matrix) to correct for soft iron.
If there is any ferrous metal in the sensor payload bay, then the cal must be done with the magnetometer in the bay. This can be done well before launch if the mag and the ferrous metal stay in the same relationship.
 
That is basically what I was trying for: get the 3D vector from the mag before launch, call that the 'static Earth reference', and calculate the 3D change. Sounds easy, but it is not. I have not come up with the math.

I use the Adafruit SensorLab to cal the mag. Works great. It gives an offset for each axis (hard iron) and then 9 values (a 3x3 matrix) to correct for soft iron.
If there is any ferrous metal in the sensor payload bay, then the cal must be done with the magnetometer in the bay. This can be done well before launch if the mag and the ferrous metal stay in the same relationship.
Good luck. In our commercial product for a client, we find the mag sensor not that useful as first-order data, and it really isn't needed to get a good inertial estimate of a vehicle's state. You will get a better return on your effort by focusing on the gyro and accel offset estimators and the integration code. Without the latter there is no return on perfecting the magnetic sensor engine, and if you have the latter you don't need the mag. The mag sensor can, however, be used for some secondary purposes that make useful enhancements.
 