Calibrating a horizontally oriented motor test stand loadcell

The Rocketry Forum

wonderboy

Well-Known Member
TRF Supporter
Joined
Jun 11, 2012
Messages
310
Reaction score
335
Location
S.E. Michigan
Hi guys! Our club acquired a motor test stand from a former member. It is very heavy and well built. It is used in a horizontal orientation, and I'm curious about the best way to calibrate the load cell. My question really comes down to applying a known load to the load cell. Are there any clever ways to apply a known load while in the horizontal orientation? If I tip the stand up vertically, I'd have to account for (weigh) the structure that holds the motor, which, as you can tell from the pictures, is a bit complicated. The motor "holder" is supported on a couple of linkages and is coupled to the load cell with a mechanical fuse (a brass rod turned to a specific dimension so it breaks at the limit load). In the end, though, perhaps tipping it vertically is the simplest. Once I weigh all the movable components that make up the holder, I can just put a label on the stand indicating their total weight.

Let me know your thoughts.

Here are the pics of the stand:
DSC_5519.JPG

...and the load cell itself:
DSC_5520.JPG
 
I am thinking of a string attached to the motor, run horizontally to a pulley at the edge of the table, then down to the attached weights. This is how we did similar things in physics class labs.
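With an ideal pulley, the applied horizontal load is just the weight of the hanging mass, F = m * g. A trivial sketch of that arithmetic (pulley friction is assumed negligible here; a real pulley will shave a little off):

```python
# Applied horizontal load from a mass hung over a pulley, as described above.
# Assumes an ideal (frictionless) pulley -- label any real setup's friction
# as an error source.

G = 9.80665  # standard gravity, m/s^2

def applied_load_N(hanging_mass_kg: float) -> float:
    """Horizontal force transmitted through the string, ideal pulley."""
    return hanging_mass_kg * G
```

For example, a 5 kg hanging mass applies roughly 49 N to the load cell.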
 
The best way is to send the load cell out to a calibration lab.
 
Thanks for the replies, guys. I thought I needed to calibrate the load cell each time I use it. Also, isn't it important to calibrate the load cell once it's installed in its intended application?

If not, I could easily pull the load cell out of the fixture and test it all by itself. I was overcomplicating this process by thinking I had to calibrate it while horizontal and installed on the test stand. If I can just use the factory data card, that makes life a LOT easier.

Thanks again for the info guys!
 
I calibrate at least several times during a test session, if not before each test. It may not be necessary for some, but over time I see more signal drift (presumably from the associated electronics) than I'm comfortable with.

Best -- Terry
 
Thanks Terry, that is where my head was too. I figured I'd want to calibrate at least each time I set up the stand to test a motor. I think I'm going to figure out how to do this with the stand in a vertical orientation. The nice feature of this stand is its mechanical "fuse" (the turned brass rod), which I can easily disconnect from the load cell. With the stand vertical and the fuse disconnected, I can place a scale under the motor mount structure and see what it weighs. Then I can add a test mass to that total and, with the fuse reconnected, see what the load cell output is.
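For what it's worth, the vertical check described above boils down to a few lines of arithmetic. A minimal sketch, assuming a DAQ that reports raw counts (every number here is a made-up placeholder, not a real reading):

```python
# Two-point vertical calibration sketch for the test stand described above.
# All readings and masses are hypothetical placeholders.

G = 9.80665  # standard gravity, m/s^2

test_mass_kg = 10.0            # known mass added on top of the motor holder
counts_holder_only = 1020      # raw DAQ reading, holder weight only (hypothetical)
counts_with_test_mass = 51234  # raw DAQ reading with test mass added (hypothetical)

# The test mass adds exactly test_mass_kg * G newtons, so the span between
# the two readings gives the scale factor directly -- the holder's own
# weight cancels out of the subtraction.
scale_N_per_count = test_mass_kg * G / (counts_with_test_mass - counts_holder_only)

def thrust_N(raw_counts: int, tare_counts: int) -> float:
    """Convert a raw reading to thrust, subtracting the pre-ignition tare."""
    return (raw_counts - tare_counts) * scale_N_per_count
```

One nice side effect of the two-point approach: since only the difference between the two readings matters, the exact holder weight drops out of the scale-factor calculation, though weighing it is still useful as a sanity check.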

Thanks for everyone's advice!
 
Calibration should not drift!

The zero offset might drift a bit, but that is expected and easily compensated for, either via a shunt cal or by just massaging the data. The scale factor shouldn't drift at all; changes are a sign of trouble, probably in your electronics.

Calibration requires the application of several well-known loads. ("Well known" depends on your requirements, with NIST-traceable being the gold standard; a set of barbell weights doesn't count.) The bare minimum would be 20% and 80% of rated load. You should take enough points, both increasing and decreasing, that you can measure linearity and hysteresis.
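The increasing/decreasing sweep above lends itself to a short script. A rough sketch, using hypothetical load/output pairs, that fits a line to the increasing sweep and reports nonlinearity and hysteresis as a fraction of full scale:

```python
# Sketch of the up/down calibration sweep described above.
# The load/reading pairs are made-up illustration values, not real data.

up_loads = [0, 100, 200, 300, 400]               # N, increasing
up_out   = [0.002, 0.999, 2.001, 3.003, 4.000]   # mV, hypothetical
down_out = [0.004, 1.003, 2.004, 3.004, 4.000]   # mV, same loads, decreasing

# Least-squares line out = a*load + b over the increasing sweep.
n = len(up_loads)
sx = sum(up_loads); sy = sum(up_out)
sxx = sum(x * x for x in up_loads)
sxy = sum(x * y for x, y in zip(up_loads, up_out))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # scale factor, mV/N
b = (sy - a * sx) / n                            # zero offset, mV

full_scale = max(up_out) - min(up_out)
# Nonlinearity: worst deviation of the increasing sweep from the fitted line.
nonlinearity = max(abs(y - (a * x + b)) for x, y in zip(up_loads, up_out)) / full_scale
# Hysteresis: worst disagreement between increasing and decreasing readings.
hysteresis = max(abs(u - d) for u, d in zip(up_out, down_out)) / full_scale

print(f"scale factor: {a:.6f} mV/N")
print(f"nonlinearity: {nonlinearity * 100:.2f} % FS")
print(f"hysteresis:   {hysteresis * 100:.2f} % FS")
```

With these placeholder numbers the fit comes out to about 0.01 mV/N, with nonlinearity and hysteresis well under 1% of full scale.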
 

Boy, that makes me flashback to Six Sigma and determining Gauge R&R.

Consider a bow scale as one quick and easy option. You can measure the load horizontally.
Of course, if that engine holder has a lot of mass, the initial thrust curve will be affected as the thrust overcomes the holder's inertia, and the bow scale will not be able to capture that.

Bow scales are available pretty cheap.
Of course you get what you pay for.
 
Perhaps it shouldn't drift...but back in grad school, when we ran samples (inductively coupled plasma, direct current plasma) we always ran standards several times in the course of a day because the signal would drift enough to make re-running necessary for good accuracy.
 
Since load cells are so much more difficult to calibrate than pressure sensors, we decided to avoid that rabbit hole. Most load cells want 10-15 VDC excitation with a minimum of 5 VDC, but most amplifiers only supply 5 VDC, which puts everything on the edge of working at all, much less calibrating correctly.

You only need pressure readings to be able to characterize propellants. We use nozzles with no expansion cones, so calculating thrust from pressure is very easy. Along with the pressure sensor, we are looking at using a short-stroke hydraulic cylinder with a pressure sensor instead of a load cell to measure thrust. Much easier calibrations and faster data rates.
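As a sketch of that pressure-to-thrust calculation: for a converging-only nozzle (no expansion cone), thrust is approximately F = Cf * Pc * At, where Cf is a thrust coefficient near 1.2 for a plain sonic nozzle. The exact Cf depends on the propellant and ambient pressure, so the default below is an assumption for illustration, not a value from the post:

```python
# Approximate thrust from chamber pressure for a converging-only nozzle.
# Cf ~ 1.2 is an assumed typical sonic-nozzle value; adjust for your
# propellant and ambient conditions.

import math

def thrust_from_pressure(p_chamber_pa: float, throat_diameter_m: float,
                         cf: float = 1.2) -> float:
    """Approximate thrust (N) as Cf * Pc * At."""
    a_throat = math.pi * (throat_diameter_m / 2) ** 2  # throat area, m^2
    return cf * p_chamber_pa * a_throat
```

For example, 3.5 MPa chamber pressure through an 8 mm throat gives roughly 211 N under these assumptions.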
 
Load cells come with an output rating in mV/V at rated load, so the higher the stimulus voltage, the greater the output voltage at a given load. If a 10 V stimulus results in a 30 mV differential output at rated capacity, a 5 V stimulus would only produce 15 mV. The downside to higher stimulus voltages is thermal heating of the piezo-resistive elements of the load cell, which causes output drift: the power dissipated in the resistive elements quadruples with a doubling of the stimulus voltage.
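A quick sanity check on those numbers: output scales linearly with excitation, while bridge self-heating scales with its square. The 350-ohm bridge resistance below is a typical value, not something stated in the post:

```python
# Output voltage and self-heating vs. excitation for a bridge-type load cell.
# The 350-ohm bridge resistance is an assumed typical value.

def full_scale_output_mv(rating_mv_per_v: float, excitation_v: float) -> float:
    """Full-scale differential output for a cell rated in mV/V."""
    return rating_mv_per_v * excitation_v

def bridge_power_mw(excitation_v: float, bridge_ohms: float = 350.0) -> float:
    """Power dissipated in the bridge (P = V^2 / R), which drives thermal drift."""
    return excitation_v ** 2 / bridge_ohms * 1000.0

# A 3 mV/V cell matches the example above: 30 mV at 10 V excitation and
# 15 mV at 5 V -- but the 10 V bridge dissipates four times the power.
```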

I use an NBS-traceable Instron to calibrate load cells and have yet to come across a quality load cell that failed to meet the manufacturer's specifications for output accuracy. Unless the load cell has been compromised during its service life, using the stated values on the calibration certificate will provide all the accuracy you need for motor testing.

There's no need to calibrate the load cell in the same orientation as the test stand mounting, as the sensitivity is independent of its mounting attitude.
 
I have used probably 20 load cells in industry over the past 20 years, so not that many: typical decent industrial environments, OK temperature ranges, little vibration, etc. Nothing extreme.

With one single exception, every time a customer called saying the load cells were 'junk,' we'd go and check the amplifier. Most often, someone had tried to re-zero the amplifier by randomly twisting the trim pots until they got the single-point answer they wanted and moved on. Do that a time or two and you have to go back to ground zero and start from scratch. Once we got the amplifier span and zero set properly, the load cells performed as expected.

The one time the customer was correct that the load cell was junk, the machine had run into a tool left in place and put a massive compressive load that was not what the load-cell was designed for. You didn't need a dial caliper to find the problem, an eyeball and an elementary school ruler told the tale on that one!

In fact, the applications I'm familiar with are very benign compared to many: a nice environment, not very dynamic, and on well-built machinery. For that type of scenario, I cringe when I hear the words 'user calibrated.' I'm sure there are places where that works, just not in my field of experience. Reset zero all day long, but 're-calibrate' is a totally different thing in my experience.

Sandy.
 
You have run ICPs? I design atomic spectrometers currently, including ICP-OES. :)
 