Originally published at: https://boingboing.net/2019/12/03/collective-rationalization.html
…
This was gone over in painstaking detail by sociologist Diane Vaughan in her book, “The Challenger Launch Decision”.
She went into the research expecting to find how standards had been violated or circumvented, how a failure had been allowed to happen. The theme of the book is that no such thing happened. Instead, they pushed the standards themselves, eating into safety margins: when no disaster occurred on previous flights, the degraded standard became…the new standard, from which greater deviance could be normalized the next time a time-vs-safety issue came up.
I did a whole presentation 20 years ago drawing parallels between the Titanic and the Challenger, and indeed among dozens of similar disasters: British shipping safety standards deteriorated for 40 years before the Titanic, causing many minor disasters; only the Titanic was big enough to shock everybody into changing them all back (double hulls, enough lifeboats for everybody, iceberg patrols).
Diane Vaughan commented on the Columbia disaster that the post-accident analysis showed a similar thing had happened again: investigators had to PROVE the wing was vulnerable to falling debris by firing a chunk of foam into one, and the whole room went quiet when it caused the same damage that destroyed Columbia and everybody realized the risk had been there all along - they’d just been lucky.
Vaughan sadly noted that (unlike my story of the Titanic changing all maritime standards) NASA had simply not learned from Challenger. Incredibly, the deep corporate culture had remained vulnerable to the same lie-to-yourself mentality.
I wouldn’t trust NASA to fly me to Montana, much less the Moon.
And then “enough lifeboats for everybody” killed 844 people on the SS Eastland. Never underestimate the law of unforeseen consequences.
I want to add another one, which I will call Engineering Domain Shear.
This is where you are building a new system, but engineers in your domain are expensive, or there aren’t enough of them, or you decide to take advantage of the latest trends in engineering as a whole, so you bring in some new people and they totally miss the point of what you are trying to achieve.
The assumptions which used to apply in your domain are pushed aside, and development ploughs on ahead. Disaster ensues.
“so you bring in some new people and they totally miss the point of what you are trying to achieve”
Oh… Oh… wow, AI applied to a certain field in which I work has this in spades… bring in lots of smart people and cut them loose on your problem, cuz, well, young genius, right?
I call it the “Finite Number of Monkeys” (™ © Me 2019) approach.
A finite number of even very smart-looking monkeys ain’t going to solve it: the probability of success is p < ε with ε → 0, almost every time.
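For anyone who wants the joke made precise, here’s a minimal sketch using the standard infinite-monkey setup (the alphabet size k, target length L, monkey count n, and attempts N are my own illustrative symbols, not anything from the thread):

```latex
% Probability that at least one of n monkeys, each making N independent
% L-keystroke attempts on a k-key typewriter, types the target string.
% A single attempt succeeds with probability k^{-L}, so over nN trials:
\[
  p \;=\; 1 - \left(1 - k^{-L}\right)^{nN} \;\le\; nN\,k^{-L}
\]
% For any fixed (finite) n and N, the bound collapses as the problem grows:
\[
  \lim_{L \to \infty} p = 0
  \quad\Longrightarrow\quad
  p < \varepsilon \text{ for every } \varepsilon > 0 \text{ once } L \text{ is large enough.}
\]
```

Infinitely many monkeys get you probability 1 eventually; a finite team, however smart-looking, stays pinned under that nN·k⁻ᴸ ceiling.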
And does anyone else consider it ironic that the list apparently (look at the URL) was exported from PowerPoint?