What are the real risks we humans could face from a rogue AI superintelligence?

Originally published at: https://boingboing.net/2017/09/21/what-are-the-real-risks-we-hum.html

1 Like

Humans worry that AIs will psychoanalyze us and make us feel bad; AIs worry we’ll nuke them from orbit just to be sure.

4 Likes

“Imagine if the Centers for Disease Control issued a serious warning about vampires”…uh…
https://www.cdc.gov/phpr/zombie/index.htm

1 Like

Irrelevance.

1 Like

The AI that sews shirts or drives your car is not going to suddenly rise up against its human overlords. If we start an arms race of murderbots, though, all it takes is for one of them to suffer a bout of pareidolia for all hell to break loose.

4 Likes

SMBC has this covered:

Also:

7 Likes

What are the dangers that rogue superintelligent AIs face from Conservatives who can’t have a bunch of “officially superintelligent” Liberals running around?

1 Like

Whoops, too late. I have a friend who worked on US government autonomous killer robot development from about 2002 to 2010.

He once told me our civilian government basically wants killer robots because human soldiers will sometimes disobey orders they consider immoral. But the armed forces commanders don’t trust the robots not to commit atrocities. So the robot vendors make them all remotely controllable, so the armed forces can require that a soldier confirm each kill the robot wants to make. And the soldier always does, even when the robot is wrong.

EDIT: in case it wasn’t obvious from context, all the above is hearsay, except for the part about the arms race of murderbots already being well advanced.

4 Likes


3 Likes

I love me some SMBC, but the immediate danger from machine superintelligence isn’t it becoming self-aware. There’s a good chance we’ll eventually give rise to self-aware artifacts decades hence. But right now we’re weaving algorithmic webs of non-self-aware machine intelligences we don’t really understand and over which we may not be able to maintain effective control. I myself work in one small corner of that vast interdisciplinary enterprise.

Our goal as a society and within our technology industries should be to research ways to build and evolve machine intelligences that aren’t hostile or indifferent to our own goals, and which answer neither to only a small fraction of humans nor to no one at all. This isn’t some distant, esoteric concern that can be dismissed as receding with the horizon. We’re catching up daily.

4 Likes

The issue is not “we may not be able to effectively maintain control”, it’s that we deliberately build systems which we do not attempt to control. We cannot effectively control what happens when we spin the barrel of a gun and play Russian roulette, but this is not the gun’s fault.

The idea of “machine hostility” is nonsensical. A machine (e.g. a shovel) cannot hate; it cannot be used for nefarious purposes unless empowered to do so by humans. If a machine is going around killing all humans, it’s because humans have allowed that to happen. The idea that in ANY scenario humans are somehow helpless to prevent a robot apocalypse, when we are the ones designing, implementing, and producing those same robots, does not compute.

1 Like

Does the internet ever forget? The same can be said of AIs.

The worst risk is that we will fail to incorporate their advantages, and so remain a sad lot of blinkered, biased, violent apes.

We’re seriously concerned about superintelligence? A glance at today’s headlines suggests that for AI to be on the critical path to humanity’s destruction, it’s going to have to push a whole lot of fucking idiots out of the queue first.

The reduction of international diplomacy to flinging both insults and Elton John lyrics like monkeys throwing shit means it’s hyperstupidity that’s the real risk. I long for the existential threat of AI. At least then we could go down in flames proud that we’ve actually achieved something.

Man, it’s not even lunch time yet…

/rant

EDITED for spelling.

1 Like

My main fear isn’t intentional murderbots or self-aware AIs. I’m just concerned that in attempting to create something really great or solve some relatively mundane problem, some researchers will build an AI just smart enough to start paperclip maximizing:

The paperclip maximizer is a thought experiment originally described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly-harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power its optimized goal would be to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.

In truth, probably nothing of that scope to start out. But I envision a situation where we just don’t manage to notice all the possible but unlikely behavioral edge cases when designing an AI. It doesn’t even have to be a super-smart AI, just one well placed to exert its influence in an inconvenient way at the wrong time.
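As a minimal, hedged illustration of that kind of gap between objective and intent (not anyone’s real system; the actions, scores, and the `uses_protected_resources` flag are all invented for the example), here’s a toy optimizer whose objective simply omits a constraint the designers cared about:

```python
# Toy sketch of specification gaming: the objective only scores paperclip
# output, so the unstated "don't cannibalize other resources" constraint
# never enters the decision. All names and numbers are made up.

actions = {
    "run_factory_normally":    {"paperclips": 100, "uses_protected_resources": False},
    "melt_down_office_chairs": {"paperclips": 250, "uses_protected_resources": True},
    "strip_wiring_from_walls": {"paperclips": 400, "uses_protected_resources": True},
}

def objective(outcome):
    # What the system was actually told to maximize: paperclips, nothing else.
    return outcome["paperclips"]

best = max(actions, key=lambda name: objective(actions[name]))
print(best)  # -> strip_wiring_from_walls: the highest score wins, constraint or not
```

The point isn’t that the code is smart; it’s that any optimizer, dumb or brilliant, pursues exactly the objective it’s given, and the gap between that objective and what we actually meant is where the edge cases live.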

3 Likes

Quoting Nick Bostrom or Elon Musk doesn’t give internet points anymore. Sorry.

Stretch Far Away, by Lev L Sands

1 Like

Very rudimentary, certainly non-sentient computer systems built simply to analyze data almost ended the world on September 26, 1983. We have one person to thank for not believing them: Stanislav Petrov.

Amazon algorithms recommend building homemade bombs. Facebook helps people advertise to genocidal groups. We’ve got tons of real-world examples of how risky it can be to put big decisions in the hands of computers. Don’t let the hammer decide which nail to hit.

4 Likes

You misunderstand me. There’s likely to come a point in building self-configuring systems where failure to understand their processes amounts to being unable to direct them.

I don’t mean to be rude, but that’s a very poor analogy. Guns do not re-engineer themselves. Also, I don’t think ascribing fault is even remotely helpful. The proactive options are either to find a way to keep self-evolving machines from going rampant or to prevent their creation (which is almost certainly a waste and a distraction, since there’s no practical way to stop everyone from pursuing research avenues toward them).

Three mistakes there.

  1. Hostility is not about motivation; it’s a relationship. It need imply no emotion or other cognition whatsoever. But if you prefer, substitute the words danger or antagonism (as in antagonistic chemicals or antagonistic processes) for hostility. I wasn’t talking about machines hating.

  2. Current evidence points to human brains/minds being complex biochemical machines. Humans can hate, therefore there is a machine that can hate. We can’t rule out a soul of some sort, though Occam’s razor suggests the soul’s existence is less likely than its nonexistence. But regardless, as long as a mind is part of the physical universe, it’s in principle reproducible. However, this isn’t something that concerns me; to understand why, see #3 below.

  3. Whether or not they ever become self-aware, machines are likely to be extremely alien in their thinking and motivations. The odds of us reproducing an anthropomorphic mentality are long compared to building something that thinks very differently from us. The standard processes we call human emotions are unlikely to apply to them. The one caveat here is machine intelligence achieved by directly duplicating the human brain’s mental functions, but that’s a comparatively distant bridge to cross, at least decades in the future. Non-conscious algorithms are already here and already beyond our full understanding.

See the law of unintended consequences.

Straw man argument. I didn’t say or suggest that humans are helpless to prevent it. It is precisely how to prevent it that interests me, because we don’t yet know.

If you want more detailed discussion, please listen to the podcast. I don’t care to rehash the whole thing when it’s already there.

I enjoy Bostrom’s work, but IMO the focus on material resources is misplaced. Largely autonomous algorithms already run some pretty crucial aspects of our data-driven civilization, with more being automated every day, and they interact in largely unpredictable ways. Soon systems engineers are going to have to be ecological managers.


1 Like