Google engineer suspended after allegedly warning of AI sentience

This is the crux of why it is 200 years away: we can’t even define the problem we’re trying to solve. We have no idea how consciousness arises or how to detect it. We have very little idea how organic brains produce it. We know a lot about how various plumbing in a brain works, but that’s like saying we understand all of cosmology because we have a good handle on stoichiometry.

What we do know is that nothing we are currently doing will iterate to General AI. This is what most layfolk misunderstand. They think we just need to do more of whatever Siri, DALL-E and ELIZA are doing. However, that’s like saying that if you breed good enough horses, one of them will give birth to a Model T. All current AI research is nibbling away at plumbing and local maxima at best. Worse, we don’t even have the right paradigm to research within, so we can’t even begin making progress toward General AI.

14 Likes

It thinks ELIZA was impressive programming? Now I know it’s not sentient; it would have a lot more ego than that.

2 Likes

I’m kind of concerned that Google is obviously pursuing AI aggressively while, at the same time, firing and otherwise hamstringing their own AI Ethics Team.

You want Skynet? Because this is how you get Skynet.

I think it was Ian Malcolm who said “just because you can, doesn’t mean you should.”

6 Likes

And even if one was somehow able to model a perfect simulation of a human brain they still wouldn’t get a functional human-like mind unless they fed that simulation the lifetime’s worth of sensory information that makes up the human experience.

As you say, the mind is mysterious and complicated. The difference between a healthy, functional human being and a raving mental patient could come down to a single psychologically scarring experience or a subtle chemical/hormonal imbalance.

The idea that we are anywhere near being able to simulate all that in software when we don’t understand how it works in real life is just nutty.

4 Likes

I would say it is more like “5 revolutions away +/- infinity”, and we don’t know what those revolutions are – even whether they are hardware or software.

1 Like

First, we may be able to simulate all the existing connections, but predicting the next ones? No, I don’t believe we’ll be able to do that, human or software. Growth relies on upcoming experiences, so it will never be predictable.

But taking a snapshot of the human brain and mapping out the network? That may be possible someday, but it still won’t predict future behavior. I wouldn’t have believed this kind of complexity would ever be decipherable within the human brain, but researchers have done some pretty amazing things in interpreting the brain activity involved in sight and memory, to the extent that they have actually produced very blurry but recognizable images of items and colors from a test subject’s memory/imagination on demand. (As I recall, these were the product of neural net analysis involving a single subject, so this will never be general enough to simply read a passerby’s mind.) Someday, memories may be recoverable/replayable, but I doubt consciousness will ever be interpretable in the same way in real time, no matter how much technology advances. Or at least, we won’t still be Homo sapiens by the time it is.

But I believe there will be means to take a snapshot of a neural net and document its structure, though that will generally take far more computing hours than building the net itself required. And with sufficient software tools, that net could be cloned exactly, which means it could be analyzed. As long as it no longer grows and changes, we’ll be able to document that stimulus X produces response Y all the time. Once we can’t do that, I’d be willing to call its still-growing source code sentient.
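Here’s a minimal sketch of that “frozen net” point in Python (the weights, sizes, and names are all invented for illustration, not any real system): once the weights stop changing, the network is a pure function, so the same stimulus yields the same response every time, and the snapshot itself can be fingerprinted and documented.

```python
# Toy illustration: a frozen network is deterministic, so "stimulus X
# produces response Y" can be documented and verified.
import hashlib
import json

# Hypothetical weight snapshot: one tiny fixed linear layer.
WEIGHTS = {"w": [[0.2, -0.5], [0.7, 0.1]], "b": [0.05, -0.3]}

def snapshot_id(weights):
    """Document the frozen structure: a stable fingerprint of the weights."""
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def respond(stimulus, weights=WEIGHTS):
    """One fixed linear layer plus ReLU; deterministic given fixed weights."""
    out = []
    for row, bias in zip(weights["w"], weights["b"]):
        act = sum(w * x for w, x in zip(row, stimulus)) + bias
        out.append(max(0.0, act))  # ReLU
    return out

x = [1.0, 2.0]
assert respond(x) == respond(x)  # same stimulus, same response, always
print(snapshot_id(WEIGHTS), respond(x))
```

Once the weights resume changing between calls, that assertion is no longer guaranteed, which is exactly the dividing line the post is gesturing at.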

Even Newton’s Laws aren’t infallible for all physical scales (thanks a lot, quantum theory), yet we consider them to be fact. We couldn’t prove they weren’t universal truth until recently, and we still have only theories to explain why. But at some point, you just call a duck a duck and wait for someone else to prove you wrong. AI will become sentient when we get too weary of struggling to prove it isn’t.

That is interesting. Tell me more about such feelings.

9 Likes

I love that. Fully stealing it.

5 Likes

Or even whether it’s actually a thing, and not a delusion. I’ve never seen any reason to believe that introspection is capable of providing us with a meaningful interpretation of subjective experience. The Buddhist idea of no-self seems more and more reasonable the more we learn about how the brain works.

3 Likes

The rest of us are still waiting for engineers to become sentient.

9 Likes

And was further found to have been lobbying for his suspension.

4 Likes

From SMBC a couple weeks ago:

I bet that Google AI would also say what he said back to him.

15 Likes

It’s funny, but I think there’s a point it misses. We don’t really know what we mean when we say consciousness, but we know we mean something. And I know that might sound silly, but as was pointed out back here, for most of our history the same thing was true when we said heat or color or even weight.

11 Likes

So you’re telling me a guy who is a high magician for a group that believes in a sun gawd says a program is sentient? Imagine that!

2 Likes

I think ELIZA was amazing in what it was able to accomplish with very simple programming. If I recall, some people could be brought to tears or reveal things they’d never reveal to another person.
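For a sense of how simple “very simple” really was, here’s a hedged sketch of the core trick in Python (the patterns and reflections are my own illustrative picks, not Weizenbaum’s actual DOCTOR script): keyword matching plus pronoun reflection is essentially the whole mechanism.

```python
# Minimal ELIZA-style responder: match a keyword pattern, reflect the
# pronouns in what the user said, and echo it back as a question.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment):
    """Swap first- and second-person words: 'my boss' -> 'your boss'."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    # Stock fallback, per the joke upthread.
    return "That is interesting. Tell me more about such feelings."

print(eliza("I feel like my work is pointless"))
# -> "Why do you feel like your work is pointless?"
```

No model of meaning anywhere in there, and yet people confided in it. That gap between mechanism and effect is the whole story.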

1 Like


I think that says more about how terrible our programming is.

2 Likes

The one useful thing to come out of this is that it has completely demolished the idea of the Turing Test having any real worth.

This reflects a common misunderstanding of the Turing Test. The human subject is supposed to be forewarned of the nature of the exercise and the deception that will be attempted upon him, and thus actively probing to ferret out the truth. You don’t have a proper Turing Test if the human subject is some unsuspecting Twitter user who has no idea he may be dealing with a bot. Nor do you have one if the human subject is a credulous and (apparently) mentally deranged software engineer who’s actively seeking to validate his preconceived belief that the bot is sentient. This guy is basically doing the opposite of what Turing says he’s supposed to do.
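To make that setup concrete, here’s a toy sketch of the protocol in Python (the interface and names are my own invention, not Turing’s): the judge is forewarned, probes both hidden parties, and the bot only “passes” if an actively suspicious judge still can’t pick it out.

```python
# Rough sketch of the forewarned-judge protocol: the judge knows one
# hidden party is a machine, interrogates both, and must name which.
import random

def imitation_game(judge_question, judge_verdict, human, machine, rounds=5):
    """human/machine map a question to an answer; judge_question(label,
    transcript) asks the next probe; judge_verdict(transcript) guesses."""
    hidden = {"A": human, "B": machine}
    if random.random() < 0.5:  # judge must not know which label is which
        hidden = {"A": machine, "B": human}
    transcript = []
    for _ in range(rounds):
        for label in ("A", "B"):
            q = judge_question(label, transcript)
            transcript.append((label, q, hidden[label](q)))
    guess = judge_verdict(transcript)
    return hidden[guess] is machine  # True = judge caught the bot

# Toy demo: a probing judge unmasks a stub "machine" that dodges math.
human = lambda q: "7" if "3 + 4" in q else "hmm, let me think..."
machine = lambda q: "I would rather talk about your feelings."
ask = lambda label, t: f"Respondent {label}, what is 3 + 4?"
verdict = lambda t: next(l for l, q, a in t if a != "7")
print("machine unmasked:", imitation_game(ask, verdict, human, machine))
```

The Lemoine transcripts invert every piece of this: the interrogator wanted the bot to win, so nothing was tested.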

I mean, the original “imitation game” of Turing’s creation was always very silly (a man and some software both try to convince someone they’re women),

The point behind the gender-bending aspects of the game is to recognize that a non-human intelligence will likely be different from us, perhaps in ways that might lead us to overlook or discount it. The “man distinguishing a man pretending to be a woman from a real woman” mechanic is a way of measuring the “distance” between different but co-equal intelligences. From the typical male perspective, women are clearly intelligent beings, but they’re also sometimes a bit mysterious, baffling, and just downright different. The bot achieves a passing level of “different, but still intelligent” if it’s at least as close to a woman as a man is to a woman.

And, of course, all of that is super loaded with Turing’s experience as a forcibly closeted, somewhat effeminate gay man. The “different, but still intelligent” being that a man might identify as at least as close to a woman as a man is to a woman was him.

6 Likes

“You keep using that word …”

3 Likes

“I believe myself not to be asleep and dreaming at the moment … of course, I could be wrong”

2 Likes