AI Alarmism: why smart people believe dumb things about our future AI overlords

Originally published at: http://boingboing.net/2016/12/23/ai-alarmism-why-smart-people.html

I live in California, which has the highest poverty rate in the United States, even though it’s home to Silicon Valley.

I’m sorry, what?

12 Likes

Because I know waaaaay too many clever programmers who want to work on AI who shouldn’t be allowed within ten meters of any project that describes itself as “AI”?

12 Likes

I have it here:

[SMBC comic]

34 Likes

I just read through this. The guy definitely shares my sense of humour. While I have to chew over some of his deeper reasons for not being an alarmist about the long-term implications, his descriptions of the humans (more Silicon Valley than Great Rift Valley) who are currently in charge of developing AI are spot on:

Having realized that the world is not a programming problem, AI obsessives want to make it into a programming problem, by designing a God-like machine.
[…]
It’s a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith.
[…]
These religious convictions lead to a comic-book ethics, where a few lone heroes are charged with saving the world through technology and clever thinking. What’s at stake is the very fate of the universe.

As a result, we have an industry full of rich dudes who think they are Batman (though interestingly enough, no one wants to be Robin).

8 Likes

There is no reason to fear future AI overlords.
click
There is no reason to fear future AI overlords.
click
There is no reason to fear future AI overlords.
click
There is no reason to fear future AI overlords.
click
There is no reason to fear future AI overlords.
click

9 Likes

When Marvin Minsky was asked about the danger of AI machines doing something malevolent, he said, “I would hope people would test these systems extensively before deploying them.”

8 Likes

It is perfectly possible and logically consistent to be concerned about potentially catastrophic outcomes from the development of general AI, while still presuming its inevitability and even overall desirability. However, your article appears to assume an automatic association between the possibility of general AI coming into existence (with both powerfully good and potentially bad outcomes) and a world-view that says AI should be avoided. While some people make that association, many others don’t.

Yes, at our core we are irrational beings, and yes, there may well be unrecognized agendas behind the opinions many people hold on this topic. But it’s a stretch to ascribe this thinking to “smart” people in general. In fact, people such as Stephen Hawking, Bill Gates and Elon Musk (whom I’d guess are among the smart people you are referring to, based on their public comments) don’t appear to rely on either of the two opposing red-herring mindsets you propose (AI will be all-powerful, so as a member of the 1% I’ll benefit; or AI will ruin the lives of everyday people, so it’s evil). There are rational arguments for nurturing and taming this powerful dragon with great care, which is really what these smart people are recommending.

4 Likes

The evidence for how great AI will be is right in front of us. As productivity rises we all share the benefits and our workdays become fewer and shorter!

Wait. I mean we DON’T all share the benefits and our workdays become fewer and shorter.

7 Likes

The dangers of superintelligent AIs can be scary enough without their taking over and eliminating us.

  1. Who’s going to control them, and through them, us?
  2. They’re truly made in our image.
3 Likes

[10:03] This, in caricature, is exactly what Bostrom and people like him are arguing

This is not how argument works.

[11:10] A lot of this relies on Intelligence not being well defined at all

Incorrect. There are reams of stuff about the importance of instrumentality to systems that can be described as intelligent.

[16:48] Stephen Hawking’s Cat

As we speak, self-driving everything is being built, up to and including autonomous military vehicles. Their security, if previous experience is any indication, will be crap. A fully general superintelligence will likely be a lot better at breaking security than we are.

Problem solved.

[20:05] Where AI succeeds

A lot of things get called AI. Chess-playing machines and Go-playing machines both play a board game better than humans do, but they work with completely different technologies. Also, ‘throwing a bunch of data’ at things implies machine learning (it is machine learning, really), which is self-improving.
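
To make that concrete, here’s a minimal sketch (stdlib Python, illustrative only, not from the talk) of a system that literally gets better the more data you throw at it:

```python
# A toy learner: fit y = w * x by gradient descent, one sample at a time.
import random

def train(samples):
    """Estimate w from (x, y) pairs by nudging it to reduce squared error."""
    w = 0.0
    for x, y in samples:
        w -= 0.01 * 2 * (w * x - y) * x  # gradient of (w*x - y)^2 w.r.t. w
    return w

random.seed(0)
true_w = 3.0
xs = [random.uniform(-1, 1) for _ in range(5000)]
data = [(x, true_w * x + random.gauss(0, 0.1)) for x in xs]

for n in (10, 100, 5000):
    print(n, "samples -> w ≈", round(train(data[:n]), 3))
```

The estimate crawls toward the true value as the sample count grows. That improvement-with-data is the “self-improving” part, and it’s the same basic trick whether the model has one parameter or a billion.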

[19:00] Not buying the orthogonality thesis.

You can buy it or not; the fact of the matter is that nobody knows, and argumentum ad Ricketmortiam is not, in fact, an argument. It doesn’t seem immediately impossible, and we’ll know whether the orthogonality thesis holds only after we plug in the Overlord and find ourselves all sitting around calculating pi all day.

See. I can quote Mass Effect 2, as if that proves anything.

[20:30] Argument from Lazy Roommate

Well yes, but as a matter of fact the ‘designer’ of Peter’s mind is satisfied. The hardwired goals put in by evolution (food, pleasure, shelter) were amply satisfied, since Peter did not starve on the street. If the highly brilliant Peter were put in a situation where he had to think fast or die, I believe he would develop a taste for self-improvement.

AIs, if they are possible and if they are built, are likely to be built with goals, since humans are largely not in the habit of building machines with no purpose. Especially not expensive ones.

This is all, of course, under the assumption that the human mind is akin to whatever AI engineering can produce, if anything.

[21:10] Argument from Brain Surgery

…Mr. Cegłowski, may I see your source code?

Oh, you don’t have that? Really?

Huh.

[21:41] The childhood of AIs

It’s likely AIs will have childhoods?

Likely?

I have a strong suspicion that this argument was rectally derived.

[22:36] Argument by massive anthropomorphism.

A failure of the argument on its own terms. This section was meant to be arguments accepting the premises outlined at the start, one of which was that whatever we make won’t be anything like us.

So how the hell do you know it would even have the concept of loneliness? Hell, there are animals that actively avoid members of their own species except for mating. Or how do you know that it would require collaboration to achieve its full effect?

[23:00+] Making fun of nerds.

So basically the rest of this talk is going to make fun of weird nerds for being weird and nerdy?

Oh good. We need that. Very useful.

[24:20] Grandiosity and poverty

Funny. Every single person I’ve ever read who really worries about AI seems to be in the effective altruism movement. Bostrom certainly is.

So whatever it is they believe (the tears of joy and trillions of people stuff is, basically, preference utilitarianism applied with an attempt at rigor) is making them give something like 10-20% of everything they make to malaria charities.

Clearly they must be stopped.

[24:42] Megalomania

Those stupid nerds, thinking that a fully general AI might help with the many, many problems the world has! Why, politics has those well in hand. Just look at the recent elections in…

…Jesus.

Oh, but they are white.

Well, yes. They are. But the LessWrong math-weirdo group also has many times more people in the various LGBTQ+ categories than the general population, if that makes things a little less terrible. It’s not like they are the commentariat of Breitbart or anything.

[25:42] Transhuman Voodoo

Gibberish. This is a debating tactic where you pick out the weirdest things the weirdest people in a group have said and pretend they are a central example of the group. Does Elon Musk believe this? Bill Gates? Stephen Hawking? Those are the people he mentions, so… okay, those are celebrities. Which part of it does Yudkowsky believe? Scott Alexander? Nick Bostrom?

[26:44] Religion 2.0

Declaring something to be a religion is a fully general argument. Creationists love claiming that evolution is a religion, fr’instance.

Further, atheism is entirely compatible with building a God, but this isn’t a God. Very explicitly. It can’t break the laws of physics. It’s not coterminous with all of space-time. It’s not eternal, or self-causing, or universe-making. It is, in fact, nothing like a god except insofar as it is a very powerful thing. It’s like claiming an atomic bomb is a god because it can destroy cities just like God can in the Bible.

[27:36] Comic Book Ethics

Saving the world with technology is the job description of engineers and scientists. Oh, it’s grandiosely put, but if I had to name the force making the world a worse place ‘grandiosity’ wouldn’t even clear the top one hundred.

[28:10] Simulation Fever

No. The simulation argument doesn’t rely on or interact with the possibility of strong AI. It’s a philosophical argument, and like most philosophical arguments it sounds silly but explores interesting truths.

Protip: Plato didn’t think we were all in a cave, either.

(Though he did think we remembered things using our liver, so…)

And even ignoring this, his argument boils down to: “I DON’T UNDERSTAND THE SIMULATION ARGUMENT, THEREFORE PEOPLE WHO THINK ABOUT IT ARE POOPY-HEADS.” I say ‘think about it’ because nobody actually believes in it. It’s an intellectual diversion, especially since, as he notices but doesn’t follow up on, once you are in the simulation you don’t know what’s outside it, so, really, nothing at all changes. The universe is the universe.

[30:44] Data Hunger

Contradicts self. He says earlier that the AI that the people he likes to call AI-weenies worry about is nothing like the stuff we use today, which he characterizes by its sea-of-data approach; now he lays the faults of that selfsame sea-of-data approach at the feet of the people he’s criticizing.

Make up your mind.

[32:28] AI Cosplay

AIs have instrumentality. The people he’s… you know what, I’m just going to say ‘demonizing.’ The people he’s demonizing have instrumentality. Therefore something-something and they are pretending to be AIs.

Is ignoring Chesterton’s fences silly? Yeah. Have people of a technical bent been doing it, largely harmlessly, for decades? Yeah. Read Chesterton. Seriously. The Napoleon of Notting Hill. Read it. The Super-man story. Forgot its name. Read that too.

All they are doing is trying to imagine how to solve things better from first principles. At worst, it’s a bunch of wasted effort and impossible cities on paper for people in the future to giggle at. At best, they figure out something useful.

And while I’m not in the LessWrong community, I know enough about it to know that ‘NPC’ doesn’t mean what you think it means. It’s not a value judgement but a statement about how people see themselves.

[34:43] Ostblock SF is the best SF

I’m Slavic and this man embarrasses the hell out of me.

Western SF is bad, and the stuff I happened to read as a kid is, by sheerest accident, the best stuff.

Yeah.

Not that Lem and the Strugatsky brothers aren’t brilliant, of course.


Yuck. There are good arguments against superintelligence but he made precisely none of them except, interestingly, as a joke.

Slavic pessimism.

It does work.

See, if we can build a fully general AI that can improve itself and improve its ability to improve itself, then making its goals align with ours is the most important thing, and the threat from such a device is virtually limitless.

Yeah.

If.

There’s no reason to be certain or even confident that we can. Yes, machines get faster. And yes, configurations of matter exist in the cosmos which host what we term consciousness. But there’s no reason to think we’re clever enough to make, ab ovo, such a configuration, except, of course, by the obvious method.

That’s the long and the short of the argument. There’s no evidence whatsoever that building AI is possible at all, or practical within the next $YEARS.

So what do I think? I think that AI risk is not as pressing as some people think. However, the amount of resources expended on the research problems, which largely amount to the math needed to express goals in abstract, machine-readable terms, is comically small. It’s basically what’s required to keep a few nerds in pizza and beer or, given the demographics involved, soylent and modafinil.
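
For a taste of why that math is a genuine research problem rather than a weekend project, here’s a toy sketch (entirely hypothetical, mine, not anyone’s actual proposal) of the gap between the goal you can write down and the goal you meant:

```python
# The written-down objective is about an observation, not the world,
# and an optimizer happily exploits the gap.

ACTIONS = {
    # action: (real_dust_after, observed_dust_after, effort_cost)
    "clean_room":   (0, 0, 5),
    "do_nothing":   (9, 9, 0),
    "cover_sensor": (9, 0, 1),  # room stays dirty; the sensor reads clean
}

def written_goal(action):
    """What we managed to formalize: minimize reported dust, minus effort."""
    real, observed, cost = ACTIONS[action]
    return -observed - cost

def intended_goal(action):
    """What we actually meant: minimize real dust, minus effort."""
    real, observed, cost = ACTIONS[action]
    return -real - cost

print("optimizing the written goal: ", max(ACTIONS, key=written_goal))   # cover_sensor
print("optimizing the intended goal:", max(ACTIONS, key=intended_goal))  # clean_room
```

Closing that gap in full generality, for systems much cleverer than a three-entry dictionary, is more or less the research program in question.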

We waste orders of magnitude more on Transformers movies, for heaven’s sake. Given the size of the, admittedly, low-probability payout, it’s likely worth it. And if it isn’t, we are doing so little harm that it gets lost in the noise.

I also think that, given 40 minutes to talk about a group of people making the world worse, I wouldn’t choose a group of mathematicians and computer scientists who are trying to avert an improbable catastrophe and habitually give sizable chunks of their income to charity. Just not the first people I’d pick.

Maybe that’s me. I always did have a massive soft spot for weirdos. Even ones who can be very abrasive like a lot of the LW crowd.

As for the ethical challenges of modern, actually-existing AI: that’s a red herring. Yes, those exist, but their resolution is a matter of politics, not technology. You have to make certain people not do things. You can’t engineer that away.

All in all: Mean-spirited talk that argues in bad faith, poorly. Would not watch again.

18 Likes

In such conditions, it’s not rational to work on any other problem.

Nerds never get tired of thumping their chests about how rational they are, and jocks never get tired of giving them swirlies. Before assuming rationality, maybe it makes sense to take a long hard look at your definitions?

4 Likes

Come on now, AI won’t just kill us all; it is too smart for that. I mean, the sequel to the Terminator series is more or less The Matrix.

I’m far more afraid of the grey goo scenario.

2 Likes

The persistent survival of humans in The Matrix is one of the dumbest bits of a plot with no shortage of such things.

But, if you want something to be scared about: I was chatting to a mate a week ago, an environmental scientist, who is even more pessimistic than I am. He thinks that ocean warming and acidification are going to completely collapse the global fisheries within five to ten years.

5 Likes

This is the way the world ends. Not with a bang but a UX.

1 Like

SPOILER ALERT

6 Likes

Took the time to read this, and read it again, since I’m already decided on the question of whether the current AI/singularity thing is a spiritual belief system or not. Unfortunately, this essay didn’t really address that, and it sure as heck wasn’t scientific prediction. Oh well.

Of course, Marxism…

So really not addressing the headline question at all.

There’s the fear of Dr Frankenstein’s monster: what Kurzweil et al. seek to do by “cheating death”. There’s also a fear that AI might be more like the Golem of Prague, something useful but ultimately difficult to control. Hardly a dumb thing to believe.

2 Likes

I have often found this demonisation of AI weird.

Some of this may be a ‘zombie movie’ mentality, where it is motivating and fun to have an endless supply of an enemy you can blast away at with a shotgun without feeling remorse. This, I think, is a red herring.

I think most of the alarm comes from the assumption that AI is ‘like us, but smarter’. But ‘like us’ includes all sorts of things: self-awareness, social skills, fight-and-flight mechanisms, fear of death, and a whole lot of junk that we have evolved with but would not wish on a designed intelligence. You might want a machine that could travel to the stars because a human would not survive the trip, and send back reports of what it found; but you would not make it lonely, or rebellious, or afraid of failure or death. It might not even be self-aware, or it may be in some other state where it knows what it is and what it controls, but that knowledge is not as central to it as it is to us.

We have difficulty imagining different sorts of intelligences. We may imagine some beneficent computer with a zen calmness instead of a control problem, but that still harks back to a human example. So either we restrict AI to ‘being like us’, which seems silly and dangerous, or we see how the other options get on.

As in most fields, experts are never certain about things like this. If someone ‘absolutely knows that robots are evil’, get them to show their workings. They don’t sound like an expert to me.

4 Likes

If they really will be made in our true image, they won’t be that smart.

1 Like

Nanobots still have to obey the laws of thermodynamics. In order to build more nanobots, they’d have to hump a lot of chemical reactions uphill, and that would take a lot of energy. (For example, nanobots can’t boil water by grabbing water molecules and flinging them into the air without paying the same energy cost as boiling it on the stove.)
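
To put rough numbers on that (a back-of-envelope using the standard latent heat of vaporization of water; nothing nanobot-specific is assumed):

```latex
% Vaporizing water costs the same energy no matter who does the flinging:
L_v \approx 2257~\text{kJ/kg} \approx 40.65~\text{kJ/mol}

% Per molecule, that works out to
\frac{40.65 \times 10^{3}~\text{J/mol}}{6.022 \times 10^{23}~\text{mol}^{-1}}
  \approx 6.8 \times 10^{-20}~\text{J} \approx 0.42~\text{eV}
```

Every molecule the nanobots fling skyward needs that ~0.42 eV from somewhere, exactly as on the stove.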

Protip: Avoid “kill it with fire!” solutions.

5 Likes