What are the real risks we humans could face from a rogue AI superintelligence?

You misunderstand me. There’s likely to come a point in building self-configuring systems at which failing to understand their processes amounts to being unable to direct them.

I don’t mean to be rude, but that’s a very poor analogy. Guns do not re-engineer themselves. Also, I don’t think ascribing fault is even remotely helpful. The proactive options are either to find a way to keep self-evolving machines from going rampant or to prevent their creation (which is almost certainly a waste and a distraction, since there’s no practical way to stop everyone from pursuing research avenues toward them).

Three mistakes there.

  1. Hostility is not about motivation; it’s a relationship. It need imply no emotion or other cognition whatsoever. But if you prefer, substitute the word danger, or antagonism (as in antagonistic chemicals or antagonistic processes), for hostility. I wasn’t talking about machines hating.

  2. Current evidence points to human brains/minds being complex biochemical machines. Humans can hate; therefore there is a machine that can hate. We can’t rule out a soul of some sort, though Occam’s razor suggests its existence is less likely than its nonexistence. But regardless, as long as the mind is part of the physical universe, it is in principle reproducible. However, this isn’t something that concerns me; to understand why, see #3 below.

  3. Whether or not they ever become self-aware, machines are likely to be extremely alien in their thinking and motivations. The odds of our reproducing an anthropomorphic mentality are long compared with the odds of producing something that thinks very differently from us, and the standard processes we call human emotions are unlikely to apply to it. The one caveat here is machine intelligence achieved by directly duplicating the human brain’s mental functions, but that’s a comparatively distant bridge to cross, at least decades in the future. Non-conscious algorithms are already here and already beyond our full understanding.

See the law of unintended consequences.

Straw man argument. I didn’t say or suggest that humans are helpless to prevent it. It is precisely how to prevent it that interests me, because we don’t yet know how.

If you want more detailed discussion, please listen to the podcast. I don’t care to rehash the whole thing when it’s already there.

I enjoy Bostrom’s work, but IMO the focus on material resources is misplaced. Largely autonomous algorithms already run some crucial aspects of our data-driven civilization, with more being automated every day, and they interact in ways we can’t fully predict. Soon systems engineers are going to have to be ecological managers.