Why does Elon Musk think people are going to put AI in charge of our nuclear missiles without human oversight? The idea of giving a computer full control over any important system has freaked people out since the 60s, at least. To the point where there are three or four different Star Trek episodes about why computers shouldn't be in charge of stuff in the original series alone.
I can see someone putting AI in charge of strategic planning (and even that's probably not smart at this point), but who the hell does he honestly think is going to let the AI push the button?
If he's worried the AI will advise a nuclear strike because it's the most probable path to victory… well, that's nothing a human hasn't already advised in the past, so it's not much different from the current state of affairs. The main difference is that, if you tell the AI what the acceptable loss rate and probability of retaliation are, it's probably less likely to make that suggestion, because it's better at math and doesn't have any irrational urge to blow people up.