Safe ejection velocity

The Rocketry Forum


JohnCoker

Well-Known Member
TRF Supporter
Joined
Apr 13, 2013
Messages
2,444
Reaction score
1,363
In a ThrustCurve.org discussion, it was suggested that flagging motors with available delays that were too different from the optimal delay would be a nice feature to avoid zippers (thanks @BigDaddy). This is a good idea, but I'm not sure how to implement it. I imagine that some combination of estimated velocity and rocket mass would work, but it's not clear what "rule of thumb" would drive it.

I think this mostly applies to model rockets, since HPR fliers either have adjustable delays or are using electronics, so we can probably assume cardboard/plastic/balsa materials.

Does anyone have a concrete idea of how to determine what's a "safe ejection velocity" for model rockets? Note that I have total mass and have the velocity from the shorter delay from the simulation. Velocity from the longer delay can be calculated from mass and drag under the force of gravity.
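The "velocity from the longer delay" calculation described above could be sketched like this, assuming constant air density, Cd, and frontal area; all names are illustrative, not anything ThrustCurve actually does:

```python
# Sketch of the calculation described above, assuming constant air
# density, Cd, and frontal area. All names are illustrative; this is
# not ThrustCurve's actual code.
RHO = 1.225  # sea-level air density, kg/m^3
G = 9.81     # m/s^2

def velocity_at_longer_delay(v0, mass, cd, area, extra_time, dt=0.01):
    """Euler-integrate a coasting rocket from vertical velocity v0
    (m/s, up positive) for extra_time seconds of additional delay;
    returns the speed at the later ejection."""
    v = v0
    steps = int(extra_time / dt)
    for _ in range(steps):
        drag_decel = 0.5 * RHO * cd * area * v * v / mass
        # Drag always opposes motion: it slows the climb and
        # retards the fall.
        a = -G - drag_decel if v > 0 else -G + drag_decel
        v += a * dt
    return abs(v)
```

Starting from the shorter delay's simulated velocity, this gives a rough speed at the longer delay's ejection without a full re-simulation.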
 

Is this for someone who doesn't use a sim program?

Sim programs already do that (text in red) just by using said delay in said rocket. I think there are too many variables to conclude what would be a "safe ejection velocity." The motors alone are subject to variable delays. The biggest variable would be the recovery system itself, after the robustness of the body tube. Modifying a model with different shock cord material, different lengths, etc. (which I do) can make a big difference in what velocity at deployment is safe from causing damage.
Just saying: a delay that's a bad suggestion for one guy's rocket may be perfectly fine on someone else's.
 
Just saying: a delay that's a bad suggestion for one guy's rocket may be perfectly fine on someone else's.

Indeed.
I would not bother implementing the proposed change.

Any relevant conclusion would need to be derived taking into account materials used in airframe, construction, shock cord, etc. Then add the nose cone weight, airframe weight, shock cord attachment method, shock cord elasticity, shock cord packing, etc, etc. Basically, one would need to build an extremely accurate .rkt model of the rocket and all of its subcomponents. Without all these variables, the answer is no more than a WAG.

If one is competent enough to build a proper .rkt of their rocket and run flight simulations, then they most certainly know enough about engine selection to pick the right delays without relying on ThrustCurve.org.

So, who would use it?

IMHO,
a
 
afadeev, thanks for elaborating! I always go longer on the delay if in doubt; never had a problem.
 
Good posts. I asked since I don't usually lug a laptop to a launch, just a phone, but I do build accurate sims.

I usually write down what I plan on flying in each rocket, but sometimes plans change. A quick sanity check with TC is now good enough.

After more thought, it's probably a waste of development time for TC to worry about speed at ejection. Just too many variables.
 
Instead of trying to compute it, why don't you allow a user to enter a preference, like you do with Stable Velocity. Then the user can do all the hard fiddly estimating about materials and risk tolerance and stuff during their design phase, but use ThrustCurve to check motors against the final design quickly.
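The preference idea above is simple to sketch: the user enters a maximum safe ejection velocity once during design, and motors are checked against it. The field and function names here are hypothetical, not ThrustCurve's actual data model:

```python
# Minimal sketch of the preference idea above: the user enters a max
# safe ejection velocity once, and motors are checked against it.
# Field and function names here are hypothetical.
def flag_unsafe_ejection(motors, max_safe_velocity):
    """Return (name, warn) pairs; warn is True when the simulated
    speed at ejection exceeds the user's stated limit."""
    return [(m["name"], m["ejection_velocity"] > max_safe_velocity)
            for m in motors]

motors = [
    {"name": "C6-3", "ejection_velocity": 8.0},   # m/s, from sim
    {"name": "C6-7", "ejection_velocity": 22.0},
]
```

This mirrors how the Stable Velocity preference works: the hard estimating happens once, and the motor search just compares numbers.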
 
If you can calculate a time to apogee for a rocket, you could flag (red text, asterisk, etc.) any motor with a delay 3 or 4 seconds shorter or longer than the optimum.
 
If you can calculate a time to apogee for a rocket, you could flag (red text, asterisk, etc.) any motor with a delay 3 or 4 seconds shorter or longer than the optimum.
Sometimes the simplest solutions are the best.
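That time-based flag is simple enough to sketch directly. The names and default threshold are placeholders; "optimal" here is the simulated coast-to-apogee delay the site already computes:

```python
# Sketch of the time-based flag suggested above. Names and the
# threshold are placeholders; "optimal_delay" is the simulated
# coast-to-apogee time already computed for the rocket/motor combo.
def flag_delays(optimal_delay, available_delays, tolerance=3.0):
    """Map each available delay to True when it differs from the
    optimal delay by more than `tolerance` seconds."""
    return {d: abs(d - optimal_delay) > tolerance
            for d in available_delays}
```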
 

For LPR, I think this can be done with four variables: the motor, the mass of the rocket, the length, and diameter.

Modrocs all have about the same Cd - about 0.5. For the same frontal area (diameter), longer ones will have a little more Cd, shorter ones a little less. Frontal area, Cd, and mass give you sectional density.

At modroc speeds you could probably get within the accuracy needed (2 s spread between delays on most motors) simply by computing a zero-drag burnout velocity from mass and motor, then looking up delay times from a velocity/sectional-density lookup table.
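A rough sketch of that approach, with placeholder numbers in the table (a real table would also bin on sectional density and be calibrated against simulations):

```python
import bisect
import math

G = 9.81  # m/s^2

def burnout_velocity(total_impulse, burn_time, liftoff_mass):
    """Zero-drag burnout estimate: total impulse (N*s) over mass,
    minus the gravity loss accumulated during the burn."""
    return total_impulse / liftoff_mass - G * burn_time

def sectional_density(mass, diameter):
    """Mass over frontal area (kg/m^2), from the two airframe inputs."""
    return mass / (math.pi * (diameter / 2.0) ** 2)

# Toy stand-in for the proposed lookup table: (upper bound of the
# burnout-velocity bin in m/s, suggested coast time in s). The numbers
# are placeholders, not calibrated data.
COAST_TABLE = [(20.0, 2.0), (40.0, 3.5), (60.0, 5.0), (90.0, 7.0)]

def lookup_coast_time(v):
    bounds = [upper for upper, _ in COAST_TABLE]
    idx = min(bisect.bisect_left(bounds, v), len(COAST_TABLE) - 1)
    return COAST_TABLE[idx][1]
```

The appeal is that everything here comes from the four inputs named above (motor, mass, length, diameter), with no full trajectory simulation needed.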
 
If you can calculate a time to apogee for a rocket, you could flag (red text, asterisk, etc.) any motor with a delay 3 or 4 seconds shorter or longer than the optimum.
I do simulate to apogee, so I already have coast time. (That's where the recommended delay comes from.) Simulating beyond apogee would just continue the same algorithm.

Where does "3 or 4 seconds" come from? I think most modroc motors have delays closer than 6-8s apart, but I guess it could help with rocket/motor combos that have extremely long coast times.
 
I'm thinking any delay 3-4 seconds before or after apogee is going to be too short or too long. You could use any number you want as the warning criterion.
 