I’m going to take a wild guess that you don’t work in software R&D or QA.
It certainly could. The bigger problem, from this perspective, isn’t the likelihood of a poor decision in any particular well-specified context. Once you’ve specified the context, the machine is obviously more reliable.
The problem is that computers can’t second-guess their programming. Humans make poor decisions all the time, especially under pressure – that’s acknowledged. What they don’t necessarily do, and what machines definitely do, is persist in making the same poor decision time after time, even when confronted with all kinds of evidence that their decision-making is faulty. Human beings can step back and ponder whether the actions they’re taking really serve the goals they’re trying to achieve. Computers simply can’t do that.
Unless they’re explicitly programmed with some limited self-correction, which increases system complexity, and with it the number of bugs and the risk of deploying such systems.
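To make that concrete, here’s a toy sketch in Python (every name in it is hypothetical, not drawn from any real system): the “self-correcting” version can only second-guess its rule in the one way its programmer anticipated, and the check itself introduces new parameters and branches, each one a place for a bug to hide.

```python
def act(reading, setpoint):
    """Naive controller: applies its one rule forever, however badly it's going."""
    return setpoint - reading

def act_with_self_check(reading, setpoint, error_history, window=10):
    """'Self-correcting' variant: gives up if its rule keeps failing.

    The self-check is just another fixed rule written in advance, with its own
    tunable parameters (how long a window? what counts as 'not improving'?).
    """
    error = setpoint - reading
    error_history.append(abs(error))
    recent = error_history[-window:]
    # Second-guess the policy, but only in the single way we anticipated:
    # if the error hasn't improved at all over the window, escalate.
    if len(recent) == window and all(a <= b for a, b in zip(recent, recent[1:])):
        raise RuntimeError("control rule isn't converging; escalate to a human")
    return error
```

Even the “smarter” version can’t notice a failure mode nobody anticipated; it can only escalate along the one axis it was told to watch.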
I asked you before to look into the limitations of self-driving cars – what they specifically can’t do. Those limitations are very much in line with the criticisms I make above.
Sure, self-driving cars are much better drivers than human beings in most situations. Except when a bridge gets heavily damaged by a storm, Google Maps isn’t updated with that information, and several hundred human beings die all at once because the automated cars didn’t realize the bridge was unsafe.
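Here’s a sketch of why, under the same caveat (the `cached_map` object and its methods are invented for illustration; this is not Google Maps’ API or any real AV stack): the planner can’t doubt its map, because doubting the map was never coded, and even a defensive version can only doubt it along an axis someone thought of in advance.

```python
import datetime

def choose_route(cached_map, start, goal):
    """Trusts the map unconditionally: if the map says the bridge is
    passable, the bridge is passable."""
    return cached_map.shortest_path(start, goal)

def choose_route_defensively(cached_map, start, goal, max_age_hours=24):
    """Adds one anticipated form of doubt: staleness of the map data.

    Assumes cached_map.last_updated is a timezone-aware datetime. A map
    refreshed an hour before the storm still sails through this check.
    """
    age = datetime.datetime.now(datetime.timezone.utc) - cached_map.last_updated
    if age > datetime.timedelta(hours=max_age_hours):
        raise RuntimeError("map data too old to trust; hand off to a human")
    return cached_map.shortest_path(start, goal)
```

“Recently updated” and “still true” are not the same thing; a human driver looking at the washed-out bridge needs neither check.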
Much of your argument has depended on the limitations of human beings. At no point have I disagreed that human beings have limited capabilities, which makes me wonder why you keep making a point I’ve already conceded.
What you don’t seem to acknowledge is that machines also have limitations, ones that are relevant to this discussion.
Edit: Perhaps it is presumptuous of me to say so, but it seems to me you often point out problems caused by excessive bureaucracy. We’re on the same page there – bureaucracy leads to faulty decision-making for a number of reasons.
This is primarily because it tries to proceduralize decision-making – to make it as automatic as possible so as not to rely on the judgment of individual human beings.
Machines make decisions less the way human beings do and more the way bureaucracies do. They keep doing stupid things because there are no policies in place for second-guessing the policies that are in place.
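In code form, the analogy looks something like this (a purely hypothetical policy table, nothing more):

```python
POLICIES = {
    "form_incomplete": "reject application",
    "form_complete": "approve application",
}

def decide(case):
    # Every decision is a lookup in a fixed table. Nowhere is there a branch
    # for "this policy is producing absurd outcomes; stop applying it."
    return POLICIES[case]
```

The table can encode any policy you like, except the policy of questioning the table.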
I hope that analogy helps to clarify my argument.