What are the real risks we humans could face from a rogue AI superintelligence?

The issue is not that "we may not be able to effectively maintain control"; it's that we deliberately build systems we do not attempt to control. We cannot effectively control what happens when we spin the cylinder of a gun and play Russian roulette, but that is not the gun's fault.

The idea of "machine hostility" is nonsensical. A machine (e.g. a shovel) cannot hate; it cannot be used for nefarious purposes unless empowered to do so by humans. If a machine is going around killing all humans, it's because humans have allowed this to happen. The idea that in ANY scenario humans are somehow helpless to prevent a robot apocalypse, when we are the ones designing, implementing, and producing those same robots, does not compute.