Arianespace Vega launch failure due to swapped cable connection

The Rocketry Forum

Isn't this what happened on the second stage of Apollo 6, when a healthy engine was shut down inadvertently? A bit of cross-wiring?
 
Yes, I was trying to remember that one, I think you're right. Some sensor or signal cable got crossed over to the wrong motor.
 
I know not all rocket motors get full-up tests on the stand prior to flight; some have one-time-use parts, so they test a fraction of the production run.

But testing the TVC seems straightforward. Do you think at some point they powered it up and commanded a simple 'up-down-left-right' motion and saw that it moved - but didn't catch that one axis was the wrong way?
 
You would think* that testing TVC function would be a key part of the QC process. It probably will be now!

* "You would think..." is the official motto of 2020.
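The 'up-down-left-right' check described above can be sketched as a table-driven polarity test. This is purely hypothetical, not any actual Vega test procedure; the axis names and tolerance are invented, and `measured` stands in for whatever the test stand actually reports:

```python
# Hypothetical TVC polarity check: command each axis in each direction
# and verify the measured deflection has the expected sign. A swapped
# cable often shows up as motion with the wrong sign, or as motion on
# the wrong axis entirely -- exactly the failure a simple "did it move?"
# check can miss.

COMMANDS = [
    ("pitch", +1.0),
    ("pitch", -1.0),
    ("yaw",   +1.0),
    ("yaw",   -1.0),
]

def check_polarity(command_axis, commanded, measured):
    """measured: dict of axis name -> observed deflection (degrees).
    Passes only if the commanded axis moved with the commanded sign
    AND every other axis stayed quiet."""
    same_sign = (commanded > 0) == (measured[command_axis] > 0)
    other_axes_quiet = all(
        abs(v) < 0.1 for axis, v in measured.items() if axis != command_axis
    )
    return same_sign and other_axes_quiet
```

The point of checking both conditions is that a cross-wired harness can make the actuator "move" convincingly while the motion lands on the wrong axis, which a sign-only check would pass.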
 
Two vehicle failures in the last three launches. It's going to get hard to sell these rides or get insurance for them.
 
I do see a positive coming out of this.
Wiring connections will be mistake-proofed (it seems odd they were not, in this day and age of electronics), and more robust system checks will undoubtedly be performed.
And possibly by higher-level techs or engineers.
Along with the loss of an expensive item, no doubt some will also lose their jobs, or at least be demoted.
The only dumb mistake is the mistake that is made with no lesson learned.
 
I do find this amusing in that they called it human error during assembly. To me it indicates a flawed design that allows that to occur, so I would deem it a design flaw. It also shows that FMEAs were either not done or their outcomes were ignored. Skipping end-to-end testing recently bit Boeing on their first capsule test mission, with thruster directions reversed and the problem only discovered on orbit.

This is not a new problem. I think it was one of the early experimental aircraft that flipped on takeoff, killing the pilot, due to roll gyros installed backwards or something. The Russians have also had similar problems from time to time during their rocket development.
 
COMPLETELY agree with this. During critical design review, these sorts of things should be found and addressed. It's NOT a new lesson, it's one that they did not learn from previous failures of others, and chose not to leverage industry best practices.
 
In his latest update, Scott Manley says that they confirmed the error in closeout photos. That reminds me of the Boeing Starliner abort test, where they could see the chute mistake in the photo documentation.

I wonder how hard it would be to train an AI to catch differences from a reference model? Because human eyes don't seem to be sufficient.
 
Garbage in, garbage out. The AI would still need reference points to make up for the bad practice, and those would have to be designed in as well. Without a 'red dot matching red dot, green dot matching green dot' on identical connectors, it would be hard for anyone to know the difference, much less an AI, as apparently happened here. Why not simply spec a part that's inherently mistake-proof?

The real issue here is that NOWHERE in aerospace in the last 50 years (or rail, or ships, or general industry, for that matter; I've worked in all of them) does anyone use identical connectors that aren't keyed differently. That's a lesson that humans learned a long, long time ago... except in this case, they chose to re-learn it.

There is an entire aerospace industry devoted to nothing other than connectors, pins, wiring harnesses, and the design, installation, and repair of such. There are literally tens of thousands of externally identical connector shells that can be ordered with different master and sub key slots, male and female, on top of differing pin configuration layouts... ALL as catalog, Commercial Off The Shelf units, already built, bagged, tagged, and ready to ship.

The critical design review process is specifically set up to catch flaws such as interchangeable duplicates, as is reported to have happened here. Changes after design review have to be vetted in the same manner as the original design review.

One of the key features that aerospace (and similar industries) design uses is the Bill of Materials (BOM) engineering audit before anything is ordered or built. It's an actual engineering/logistics job that is usually staffed by a team of SMEs in each part of the BOM (materials, wiring, fasteners, etc).
The final, as-proposed BOM goes through a step in critical design review where LIKE ITEMS in the BOM are flagged for specific attention and engineering design review, to make sure they're NOT physically co-located unless engineering controls and design dictate so. THEN a failure-mitigation designator is usually put in place, such as a RED square placed on a bulkhead connector, with the corresponding wiring harness getting a RED nylon overwrap AND a quality assurance inspection step added to ensure proper installation and compliance with controls.
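The "like items" pass is mechanical enough that it can be sketched in a few lines: group BOM entries by part number and flag any identical part installed in more than one location, so it gets keying or color-coding review. The BOM rows, field layout, and part numbers below are invented for illustration, not from any real Vega BOM:

```python
# Minimal like-items audit sketch: find identical part numbers that
# appear at multiple installation locations, since those are the spots
# where a mis-mated connector becomes physically possible.
from collections import defaultdict

def flag_like_items(bom):
    """bom: list of (part_number, description, location) tuples.
    Returns {part_number: set_of_locations} for every part number
    installed in more than one place."""
    locations = defaultdict(set)
    for part_number, _desc, location in bom:
        locations[part_number].add(location)
    return {pn: locs for pn, locs in locations.items() if len(locs) > 1}

# Illustrative BOM fragment (hypothetical locations).
bom = [
    ("D38999-24WB35PN", "circular connector", "TVC actuator A"),
    ("D38999-24WB35PN", "circular connector", "TVC actuator B"),
    ("M83513-01-AN",    "micro-D connector",  "avionics bay"),
]
flagged = flag_like_items(bom)
```

Every part number this flags would then get the mitigation treatment described above: differently keyed shells, color bands, and an added QA inspection step.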

The final step to catch it is on the item itself where assembly engineers look over each part as Bill and Ted are putting it all together to catch these potential failures, and call an "ALL STOP" until mistake proofing measures are put in place and complied with.

Absolute last chance is when the item goes to testing. Again, engineering controls are usually put in place, with multiple QA sign offs, to ensure that everyone agrees on the definition of "Up, down, left, right"...usually because it's ALSO spec'd with a mistake proofing step as "Triangle/Square/Circle/Star" or some such manner that can't be confused because you're operating it facing North and I'm inspecting it looking East and we both fail to communicate.

Again, that's nothing new, and even tiny little local electrical firms practice such controls.

It's been said that most humans only learn things 4 ways:
1. Read it
2. Write it
3. Repeat it
4. Traumatic experience

and that some humans add a 5th way:
5. Repeated traumatic experience due to a failure of reading and/or writing
 
Manufacturers of electronic circuit boards (PCBs) often use AOI (Automated Optical Inspection) which does basically exactly that. It works well for PCBs that have very regular and repeated patterns. Applying that to a rocket may be more difficult.
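At its simplest, the image-differencing idea behind AOI is just comparing an inspection photo against a golden reference and flagging regions that differ beyond a threshold. The sketch below is a toy version of that idea; real AOI systems add image alignment, lighting normalization, and learned tolerances on top, and the function name and threshold here are invented:

```python
# Crude AOI-style comparison: fraction of pixels whose intensity
# differs from the golden reference by more than a threshold.
import numpy as np

def diff_regions(reference, inspected, threshold=30):
    """reference, inspected: 2-D uint8 grayscale arrays, same shape.
    Returns the fraction of pixels differing by more than `threshold`,
    a rough 'something changed here' score in [0, 1]."""
    # Cast to a signed type first so the subtraction can't wrap around.
    delta = np.abs(reference.astype(np.int16) - inspected.astype(np.int16))
    return float((delta > threshold).mean())
```

This is also why such systems struggle with the variability mentioned below: any harmless deviation from the master image (lighting, part position) inflates the score just as much as a real defect.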
 
Vision systems have their uses, but they also have their limitations. The collection of empty Tylenol bottles in my desk will attest to the engineering time required to get them tuned in. They do not tolerate variability well and often fail an inspection for a very minor deviation from the master image.

The newer AI based systems are great for self-improvement over time but you need a sufficiently large sample set in order to really make that happen.
 