Virgin Galactic crash blamed on “single human error”

[Read the post]

2 Likes

So they considered a single-point, human-induced error that would lead to a catastrophic outcome, and they did not engineer a barrier to said human error?

Is that what happened? Wow.

My FMEA experience is limited, but this is exactly the scenario you want to avoid in the final process design. Because, humans.

4 Likes

It is a prototype system. If you put in a barrier, you may be preventing the pilots from doing something out of the ordinary that they have to do in an unexpected situation.

Treading uncharted territory is never the safest of occupations. Being a test pilot carries risks.

6 Likes

It doesn’t necessarily have to be a barrier. The mechanism can just warn the pilot instead. Then it’s not single-point anymore, because the human has to make two mistakes: the underlying mistake, and disregarding the warning.
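To make the two-mistake idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the class, the envelope check, the wording of the warning); it is just one way a warning could turn a single slip into two deliberate steps, not anything from a real flight-control system.

```python
# Minimal sketch: a warning instead of a barrier. The action is never
# blocked outright, but outside the safe envelope it takes two
# deliberate steps. All names here are hypothetical.

class WarnedControl:
    def __init__(self, is_safe_now, warn):
        self.is_safe_now = is_safe_now  # () -> bool: envelope check
        self.warn = warn                # (msg: str) -> None
        self._warned = False

    def request(self, action):
        if self.is_safe_now():
            self._warned = False
            action()        # normal case: no friction at all
        elif not self._warned:
            self._warned = True
            self.warn("Outside safe envelope; repeat command to proceed.")
        else:               # mistake #1 was the request, #2 is the override
            self._warned = False
            action()

ctrl = WarnedControl(is_safe_now=lambda: False, warn=print)
ctrl.request(lambda: print("unlocked"))  # first call only warns
ctrl.request(lambda: print("unlocked"))  # repeating it overrides the warning
```

The nice property is that the nominal path stays friction-free; only the anomalous path costs an extra, conscious step.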

3 Likes

You have to be careful with warnings. Too many, and the important ones get drowned in the din of the rest. These things cut both ways.

Also, with too much safety the pilots get lulled into a false sense of security, the belief that they can do anything and the computer will keep them safe. Some Airbus crashes have been blamed on this.

3 Likes

“Mystery” may be overstating the pilot’s successful deployment of his parachute.

1 Like

So it was a misjudgement of whether it was time to pull the lever that had unexpectedly catastrophic consequences. Yes, best make that more foolproof before you take on passengers.

What you say is true, but I think in this case we can, like the NTSB, confidently assert that SC erred on the side of too few controls and too few warnings. We can also assert that it should not have taken 20/20 hindsight to realise that zero barriers to an action that leads to catastrophic loss of the aircraft was at least one too few.

It may have been an intentional tradeoff (probability of technology failure vs. pilot error) that turned out to be wrong. New, prototype tech is more likely to fail than a well-tested, mature system with known boundaries.

I’ve been told that modern airplanes require at least three failures or mistakes before the plane is in jeopardy. Triple redundancy just makes sense, but double redundancy seems reasonable for a prototype.
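For what it’s worth, here is a rough sketch of one way triple redundancy shows up in software: a majority voter over three independent channels, so a single faulty channel is simply outvoted. The function and the tolerance value are invented for illustration.

```python
# Rough sketch of triple modular redundancy: three independent channels
# measure the same quantity; a majority vote masks one faulty channel.
# The tolerance is illustrative, not from any certified system.

def tmr_vote(a, b, c, tolerance=1.0):
    """Return the average of the first pair of channels agreeing within
    `tolerance`, or None if no pair agrees (i.e., a double fault)."""
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tolerance:
            return (x + y) / 2
    return None  # it takes two bad channels before the voter gives up

print(tmr_vote(250.2, 249.9, 17.0))  # -> 250.05; the wild channel is outvoted
print(tmr_vote(250.2, 17.0, 500.0))  # -> None; a second fault exceeds the design
```

One failure is masked and a second is at least detected as disagreement, which roughly matches the “several failures before jeopardy” rule of thumb.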

Reading the article about how the pilot survived, I was impressed at how many things he said he wasn’t sure about. Knowing the limits of one’s knowledge is the mark of a true professional.

6 Likes

A few months after my wife and I took a glider flight (with a pro pilot), the same airfield had a fatal glider crash because the release from the tow plane happened too soon after takeoff, and the release was entirely in our, the tourists’, control. And yet this was a rather rare event, despite being non-foolproof and having catastrophic consequences in the hands of amateur tourists. Why would we expect even less risk than with a glider from something that is obviously more risky?

1 Like

Yeah, I believe some terrible errors have at their root an ego that just knew better; or, conversely, a potential bad outcome was deemed unlikely because of the requisite skill and expertise of those in a position to make decisions.

Being able to recognize your limits and accept systems built in consideration of inevitable human error is a key piece of building a reliable system.

2 Likes

Agreed. Alarm fatigue is a real concern. My world (healthcare) is struggling mightily to strike that balance.

Also true. As I understand, Boeing and Airbus have differing philosophies on this approach. This, too, is a balancing act.

Circling back to “barriers”: what I really mean is that there is a process (checklist, warning system, physical lock, second-person verification) that keeps a single “oops” from ruining everyone’s day.

Consider the outcome if a commercial airliner landed without deploying the landing gear. While there is no way to engineer away that possibility entirely, it is possible to design a process that prevents it from occurring through human error alone.

An example of that from healthcare is the elimination of wrong-site surgery. Many hospitals have adopted a “safe-surgery” process to prevent wrong-site/wrong-person surgery. If the process is practiced as intended, an error rate of zero is attainable.

Anyway, my point was that certain functions are too critical to live in one person’s control if there is an alternative. It seems, from my armchair view, that the feathering function needs a process to ensure a human error rate of zero (or statistically zero).
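To illustrate (and only illustrate) what such a process might look like in software: the sketch below gates the unlock on the reported Mach 1.4 procedure point and on an independent confirmation. The names, the confirmation step, and the return convention are all my own invention, not Scaled Composites’ actual design.

```python
# Illustrative sketch of a speed-gated, two-person interlock on the
# feather unlock. The Mach 1.4 figure matches the reported unlock
# procedure; everything else here is hypothetical.

FEATHER_UNLOCK_MIN_MACH = 1.4

def request_feather_unlock(current_mach, copilot_confirmed):
    """Permit the unlock only inside the safe speed envelope and only
    with independent confirmation, so no single slip suffices."""
    if current_mach < FEATHER_UNLOCK_MIN_MACH:
        return False, f"Inhibited: Mach {current_mach:.2f} is below {FEATHER_UNLOCK_MIN_MACH}"
    if not copilot_confirmed:
        return False, "Inhibited: awaiting second crew member's confirmation"
    return True, "Feather unlocked"

print(request_feather_unlock(0.82, True))   # roughly the mishap condition: refused
print(request_feather_unlock(1.45, True))   # nominal condition: permitted
```

Whether the right mechanism is a hard inhibit, a confirmation, or both is exactly the barrier-versus-flexibility tradeoff discussed above.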

2 Likes

Except that with planes we have huge data sets of evidence (99% of flight activity is monitored and recorded) from which we can anticipate potential failure. Preventing known failure is child’s play compared to anticipating potential failure. I am pretty sure that there were hundreds of other potentially fatal errors the pilot and co-pilot could have made but didn’t. Only with hindsight do we know that this lever was key.

How he successfully deployed his parachute under extremely adverse circumstances is a mystery (or, if you prefer, luck). For example, if he had broken or damaged his fingers or hand in the fall (easily imaginable) rather than his shoulder and elbow, it is highly unlikely he could have unbuckled his seat belt.

What I find interesting in his story is how he described his personal preparation for the flight, which by the sound of it was of his own accord rather than required by the company. To me that is key: a prepared mind…

2 Likes

IIRC the designers had to replace the entire propulsion system at the last minute (after the original propellant kept blowing up and requiring new test beds and new engineers); they then had to redesign the craft to match the new parameters, with different aerodynamics and control surfaces. In my mental image of the cockpit, all the controls and switches are mounted in their current locations with duct tape.

2 Likes

The primary controls on cars and aircraft all have the same issues. Push on the elevator at the wrong moment and your aircraft will go into an unrecoverable dive. One relevant factor is the design of the user input device. If the critical control were right beside the seat-adjustment control, and the same shape, you would have a serious design problem.

2 Likes

The flaps and gear levers on some bombers had this problem. Many a bomber rapidly descended onto its belly just after landing when the pilot raised the gear instead of the flaps. The levers were redesigned, with a wheel shape attached to one and a triangle to the other.

The article I managed to find that mentions this is actually directly relevant here:
http://blogs.discovermagazine.com/lovesick-cyborg/2014/11/08/the-mystery-of-virgin-galactics-pilot-error/

The best example I can think of is parachute and seat harnesses in sailplanes. In an emergency you need to quickly release the seat harness but not the parachute harness, yet pilots have been found in crashed aircraft with the parachute released and the seat harness still hooked up.

Pilots are taught never to leave their parachute in the glider no matter how tired they are. Always step out with the parachute and remove it later.

Obviously re-engineering one or both of the releases would help as well.

1 Like

I’m sure some changes need to be made, but this does strike me as a bit odd. Do car manufacturers get blamed for operator errors that cause a crash and death, or do we limit liability to actual component failure?

I think I know exactly the flight you are alluding to. The French Airbus flight that crashed because the pilots believed it to be unstallable, with one pilot holding the nose up and the other the nose down, on a new digital flight-control system?

Even Airbus can make design mistakes that allow for catastrophic human decisions.

1 Like