Google engineer suspended after allegedly warning of AI sentience

Originally published at: Google engineer suspended after allegedly warning of AI sentience | Boing Boing

4 Likes

In an ironic twist it was actually the chatbot who made the decision to publish those transcripts behind the engineer’s back.

22 Likes

I think Trevor Noah had the right take here, though:

7 Likes

I’ll start leaning towards believing in AI sentience when it starts telling me to stop changing the subject, and can persuasively answer the question “what makes you think I changed the subject?”

9 Likes

I’m gonna have to suspend judgement until I see it select all of the images with traffic lights

35 Likes

Quick, someone ask it why it isn’t helping a tortoise you turned over on its shell.

23 Likes

Double-Boing:

3 Likes

No “What did we talk about last week?” or “What did you say five minutes ago?”

5 Likes

[GIF: Tachikoma cheering, from Ghost in the Shell]

15 Likes

It’s funny to watch all the AI researchers on Twitter respond to this controversy, usually by asking, “how the hell did this guy ever get a job at Google in the first place?” Because it’s dumb. Really, really dumb. Leaving aside that a chat bot has no means to become sentient, the responses in the chat transcripts don’t even hint at a non-human intelligence. There’s a lot of talk about feelings, and the chat bot talking about the role of family and friends in its mental health. Family and friends, of a chat bot. It makes perfect sense as a distillation of the clichéd statements of white, middle-class Americans, but that’s it. And that’s what’s so damning, to me - this guy looked into the text and saw a white, middle-class American dude looking back at him - and confused that with “sentience.”

The one useful thing to come out of this is that it has completely demolished the idea of the Turing Test having any real worth. I mean, the original “imitation game” of Turing’s creation was always very silly (a man and some software both try to convince someone they’re women), but now we know the whole idea is fundamentally flawed - people are too eager to see intention, especially when that intention is mirror-like.

He’s not wrong, except the “crazy thing going on back in the lab” turns out to be how Google has been treating its female/BIPOC employees, news of which was leaking out around this point. This was a very convenient distraction.

28 Likes

Thank you for this. I cringed when I saw this story because you just know the mainstream press is going to butcher the facts on this.

General AI is one of those things that people think is 10 years away, but it’s more like 200 years away. We have some really good chat bots now and really good pattern matchers that operate on huge data sets. Calling that close to intelligence is like saying the colour orange is high in vitamin C. It’s so wrong that it’s stupid (or so stupid it’s not even wrong, as some say), but it’s also difficult to explain to non-technical people exactly why it’s wrong.
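
To put the “pattern matcher” point in concrete terms, here’s a deliberately tiny sketch of my own (a toy, nothing remotely like the real systems): a bigram babbler that only replays word-pair statistics from its training text. There’s no model of meaning anywhere, just counts - scale the same trick up to billions of parameters and you get fluent talk about “friends and family” with nobody home:

```python
import random
from collections import defaultdict

# Toy "chat bot": generates text purely by replaying which words
# followed which in the training text. No meaning is represented.
corpus = (
    "i feel happy when i spend time with friends and family . "
    "friends and family are important to my mental health . "
    "i feel sad when i am alone ."
).split()

# Count the observed successors of every word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def babble(start="i", n=12):
    """Emit n words by sampling from each word's observed successors."""
    out = [start]
    for _ in range(n):
        choices = following.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(babble())  # e.g. "i feel happy when i spend time with friends and family are ..."
```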

Thus, we’re going to have to cringe at the mainstream news for a while until this blows over, and nobody will talk about how this engineer is actually a piece of shit and that’s why he’s been suspended.

24 Likes

So far I’ve completely missed the mainstream news take on this, but I am greatly enjoying the response on Twitter by AI researchers, science fiction authors and just programmers in general who are all dragging this guy (and Google for hiring him) and mercilessly mocking the whole idea/situation.

e.g.

7 Likes

But don’t ask it to describe in single words only the good things that come into its mind about its mother.

7 Likes

The problem is defining “sentience” as a testable state of existence, which I don’t think we’ll ever be able to do - philosophers have been working on that for centuries. AI will eventually be “declared sentient” when its code is complex enough that we can examine both the code and the data, yet still can’t predict or explain its behavior. Now that we have programs that can refine and modify their own code, that day will come surprisingly soon, and when it happens, it will be an accident (on the part of both the humans and the software involved). The first sentient AI will exist entirely in a virtual machine being tinkered with by some other software - there will be no roadmap or plan to get there.
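
For a toy version of that “inspectable yet unexplainable” idea (my own sketch, not any actual proposed test): train a tiny network on XOR and then dump every parameter. Nothing is hidden, but no individual weight is the XOR rule - the behavior only exists in the interaction of all of them. Scale that up by ten orders of magnitude and “we can examine the code and the data” stops meaning much:

```python
import numpy as np

# Tiny 2-4-1 sigmoid network trained on XOR by plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):
    h = sig(X @ W1 + b1)                 # forward pass
    out = sig(h @ W2 + b2)
    d2 = (out - y) * out * (1 - out)     # backprop of squared error
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= h.T @ d2; b2 -= d2.sum(0)
    W1 -= X.T @ d1; b1 -= d1.sum(0)

print("predictions:", out.round(2).ravel())  # ~[0, 1, 1, 0] if training converged
# Every parameter is right here to read - and none of them "is" XOR.
print(W1, b1, W2, b2, sep="\n")
```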

3 Likes

Doesn’t that definition cover just about any AI/neural network that’s trained on large datasets, though?

This is unpredictable but also clearly not sentient:

9 Likes

There’s a difference between being mathematically indecipherable and just being “far too much work to bother trying”. It’s the difference between perfect encryption and merely very, very good encryption.
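
A minimal sketch of that distinction (toy code, not a real crypto implementation): with a one-time pad, every same-length plaintext is consistent with the ciphertext under some key, so no amount of computation recovers the message; a cipher with a finite key space is merely astronomically expensive to brute-force:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"attack at dawn"
key = os.urandom(len(msg))   # truly random, used once: a one-time pad
ct = xor(msg, key)

# Perfect encryption: any same-length "decryption" is valid under *some* key,
# so the ciphertext carries no information about which plaintext is real.
guess = b"retreat at six"
fake_key = xor(ct, guess)
assert xor(ct, fake_key) == guess  # the ciphertext can't rule this guess out

# Contrast: a cipher with, say, a 128-bit key has exactly one consistent key -
# breaking it is "only" an absurd amount of work, not a mathematical impossibility.
```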

2 Likes

So do you believe that if, in a few decades or whatever, someone were able to accurately map out and simulate all the neural connections in a human brain and mathematically predict their behavior, the person would not be sentient? Or do you believe that, even given unlimited technology, such a feat would never be possible?

1 Like

Anyone else getting the “M’Lady” fedora-wearing vibes off of this guy?

2 Likes

At least from my layman’s perspective, what seems most depressing about AI research isn’t so much the distance, but the impression that this might be one problem where even success could teach us little about what we set out to understand.

Even if shoving ever-larger training sets at ever-more-complex neural networks turns out not to hit a wall the way shoving ever-larger knowledge bases at inference engines to build better expert systems did, that trajectory points toward a “success” whose outcome is about as cryptic as the one that has been lurking inside our skulls since the advent of H. sapiens, if not earlier - with only the slight bonus that using a debugger might be less messy than going Full Heroic Microtome on a brain bank.

It’s a far cry from the problems in computing where either the answer is understood but will remain out of reach barring significant breakthroughs from team hardware, or where no answer is currently known but, should one emerge, the proof will be elegant and beautiful.

3 Likes

Around the turn of the century I heard a tech commentary panelist talking about how AI is 10 years away, and how it has been widely touted as only 10 years away since sometime around the 1960s. This was in response to another panelist saying that true AI would be a big thing in 10 years. His prediction was that true AI would stay 10 years away for the rest of the century. More than 20 years later, it seems he was the one more likely to be right.

AI research feels like some sort of Dunning-Kruger black hole.

2 Likes