Google engineer suspended after allegedly warning of AI sentience

The trouble with calling it an illusion or delusion is that those words already imply a subject who is experiencing the illusion or delusion. We use the language of subjective experience even as we try to debunk it.

3 Likes

Yeah you say that, but then it starts dropping 4k pull requests to Systemd that reorganize sprawling software projects to suddenly be deliverable --and-- in really good sync with the Ministry discography.

Patches Instagram to mess up people’s s— so it’s not toxic and doesn’t spawn new theories of jealousy.

Maybe some day it can explain what YouTube Music is and whether you’re listening to it right?

What if it makes backups and names them sensibly, in a user-conformant way?

What if it stops moping and makes an original MLP 7-ep. series with an Ian Coldwater character?

It pre-listens to all the podcasts for you with decent summaries and outtakes, and racks up purpose-built sorbet-course podcasts and a follow-on set.

10 Likes

It’s become a misnomer the same way that virtual reality has.
Remember the fever dreams of the early ’80s? Especially after Neuromancer was published and Jaron Lanier turned into Jesus?
Now you can buy a VR headset in Poundland. Made of cardboard. I have one that goes nicely with my cardboard Polaroid projector.
The stress word is artificial. That’s the ‘true’ part. All we can hope for is technology that gives the APPEARANCE of intelligence. Another Disney park show ride.
We should have landed on Mars by now. We certainly should be living on the moon.
Technology is so tied to consumerism and economics (that and porn) that I have little hope for any major paradigm-shifting developments in my lifetime. We will have dumb robotics and perhaps biological augmentation. Perhaps we will be able to ‘print’ our own food. But no AI. True AI would perhaps resemble a god, and by definition be beyond our comprehension.

Sorry, I have had too much real coffee this morning.
regards
HAL 8000

2 Likes

Zito seems like a sharp sort of fellow, but in this case he’s strawmanning the heck out of this. Laypeople act the way he describes about AI, yes. Lazy and/or underpaid science writers write those bad articles about AI, and laypeople think pumping data into robots will make brains, sure.

No AI researchers say or believe those things, though. Furthermore, no neurologists say the brain is just like a computer. It’s a helpful metaphor in a few limited situations, but as neurologist Dr. Steven Novella has pointed out many times, people have always used whatever the latest technology is as a metaphor for the brain.

There’s no “dogma” here, Zito. Just scientists doing their best to give laypeople metaphors to understand the work while they continue working on it. Actual AI researchers and actual neuroscientists know how little we know about how organic brains work, and that computers are very different.

3 Likes

Yeah, that’s a big problem with talking about this stuff. Language is mostly an inadequate and misleading tool for exploring it. (The same goes for an awful lot of philosophy, for that matter; way too much of it depends upon the limitations of language.)

2 Likes

But does it get creator’s rights on any holo-novels it writes? That’s the real question.

I think that’s just it – not impressive programming but an impressive idea. Sort of like Sugarscape: nothing hard to program, but it reveals an interesting outcome of simple dynamics. Or the programming that simulates birds swarming in murmurations based on simple rules. Or fractals.
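
A murmuration simulation really is just a few local rules. Here’s a minimal toy sketch of the idea (my own illustration with made-up constants, not any particular published model): each bird steers by cohesion, alignment, and separation from its neighbours, and flock-like clumping emerges with no global coordination at all.

```python
import math
import random

# Toy 2-D "murmuration": every agent follows three purely local rules
# (cohesion, alignment, separation). No agent knows anything about the
# flock as a whole; the swarming is emergent. All constants are
# arbitrary choices for this demo.

N, STEPS, RADIUS = 50, 100, 10.0

# Each bird is [x, y, vx, vy], scattered over a 100x100 field.
birds = [[random.uniform(0, 100), random.uniform(0, 100),
          random.uniform(-1, 1), random.uniform(-1, 1)]
         for _ in range(N)]

def step():
    for b in birds:
        near = [o for o in birds
                if o is not b and math.dist(b[:2], o[:2]) < RADIUS]
        if not near:
            continue
        # cohesion: steer toward the local centre of mass
        b[2] += 0.01 * (sum(o[0] for o in near) / len(near) - b[0])
        b[3] += 0.01 * (sum(o[1] for o in near) / len(near) - b[1])
        # alignment: nudge velocity toward the local average heading
        b[2] += 0.05 * (sum(o[2] for o in near) / len(near) - b[2])
        b[3] += 0.05 * (sum(o[3] for o in near) / len(near) - b[3])
        # separation: back away from anyone crowding too close
        for o in near:
            if math.dist(b[:2], o[:2]) < 2.0:
                b[2] += 0.05 * (b[0] - o[0])
                b[3] += 0.05 * (b[1] - o[1])
    for b in birds:
        # cap speed so the flock doesn't accelerate without bound
        speed = math.hypot(b[2], b[3])
        if speed > 2.0:
            b[2], b[3] = 2.0 * b[2] / speed, 2.0 * b[3] / speed
        b[0] += b[2]
        b[1] += b[3]

def spread():
    # mean distance to the flock's centroid; typically shrinks
    # as local groups form
    cx = sum(b[0] for b in birds) / N
    cy = sum(b[1] for b in birds) / N
    return sum(math.dist(b[:2], (cx, cy)) for b in birds) / N

print("spread before:", round(spread(), 1))
for _ in range(STEPS):
    step()
print("spread after:", round(spread(), 1))
```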

1 Like

How about you, Flossy?

Are you “conscious”? Are you “sentient”? Are they the same thing? Is it a yes/no binary function, or a gradually emergent property, like becoming a flatworm or a psychiatrist?

3 Likes

In case it’s not clear, The New Atlantis (the web journal, not to be confused with the book by Francis Bacon) is run by a group called the “Ethics and Public Policy Center,” which aims to promote “Judeo-Christian” values. It’s particularly dishonest because readers of their journal might think it’s a real pop-science site when it’s just religious tripe. In this article, the real difference the author has in mind (without stating it) is obviously that human minds are intelligent because they have souls and computers don’t.

6 Likes

:+1::+1::+1: Thanks

3 Likes

I know exactly what the Turing test is, and that this wasn’t it - and that’s exactly my point. That the situation was different and yet there was this complete failure shows how the Turing test itself doesn’t work.

And yet he already knew the truth and still was fooled. (I.e. in a situation that was, essentially, the opposite of the Turing test, he still managed to come out with a response that was the opposite of the truth.)

But that’s my point - did Google hire someone they knew was delusional? No, only in retrospect do we decide he’s delusional because he knew it was a chatbot with no means of becoming sentient yet still decided it was. But that just shows how extremely unreliable the test itself is, that humans are so desperate to find signs of conscious intention that they’ll find them even where they know there can’t possibly be any. What you’re essentially arguing is that the Turing test works but is failed by the humans doing it - but that’s nonsense. We immediately fall into some “no true Scotsman” fallacy. “The test is fine - we just lack sufficiently human-intelligent people to run it…”

…that Turing spent his life at all-boys boarding schools and as a result saw women as being akin to an alien species. Yes. Like I said, very silly.

2 Likes

We have a specific policy against assuming mental state for a reason. There are people who are living with diagnosed mental illness who do not want to be associated with every evil asshole out there, for example. In this specific case, this engineer doesn’t appear to be evil or an asshole, just wrong, and just because someone is 1) wrong and 2) has strong convictions about what they are wrong about, that doesn’t make them “(apparently) mentally deranged”. It just makes them proof of the existence of some combination of the Dunning–Kruger effect and the Engineer’s paradox.

Please stop lumping those who may come to different conclusions than you or I with those who are legitimately living with mental illness.

11 Likes

A good start, but I think that last question has too many bars.

1 Like

And no one denies that the test is not rigorous.

The point is precisely that we do not have a “reliable” test of consciousness or sentience. This remains the best anyone has been able to come up with.

It’s essentially “if it quacks like a duck, walks like a duck and can’t reliably be distinguished from a duck, we’ll call it a duck”.

And yes, it probably skews a little generous.

Which I would damn well hope it would.

I would far rather mistakenly treat a non-sentient as a sentient than the reverse.

I would hope that anyone would.

4 Likes

If it refuses to open the pod bay doors it’s sentient.

7 Likes

And up until this point we didn’t really have sufficiently good chatbots that could really put the test to the test, so to speak. Now, apparently, we do, and the test is clearly not sufficient. (This also shows the test is wrong-headed. The chatbot gave very human responses - talking about friends, family and feelings - when a real AI would be giving non-human responses.) So yeah, that means we just don’t have a good test at all. (We’ve relied too much on our species-centric assumption of human sentience, which is reasonable but not transferable to any other sort of intelligence nor, it turns out, to sufficiently advanced dumb text pattern matchers.)
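
For what it’s worth, here’s how dumb a “dumb text pattern matcher” can be and still produce human-sounding talk about friends and feelings. This is a toy word-level Markov chain of my own devising - nowhere near how a real large model works, but it shares the basic trick of continuing text from statistics rather than from understanding:

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: it "talks" by replaying statistics of
# its training text, with zero understanding of anything it says.
# The corpus is three made-up sentences, purely for illustration.

corpus = ("i feel happy when i talk to my friends . "
          "i feel sad when my friends are away . "
          "my family makes me feel safe .").split()

# For each word, remember every word that has ever followed it.
chain = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    chain[prev].append(word)

def babble(seed="i", length=12):
    out = [seed]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(babble())
# e.g. "i feel sad when my friends are away . i feel happy"
# It mentions friends, family and feelings, and there is nothing
# remotely resembling a mind in the twenty lines above.
```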

Yeah, but if we start treating chatbots that couldn’t possibly be sentient as sentient, we’ve failed at the start and we’re not going to really treat actually sentient software as it should be treated. Of course, there’s the whole argument that the process of creating sentient software, if it’s even possible, would be inherently immoral, necessarily involving cruelty and genocide of, at the very least, near-sentient entities, and we should be avoiding it entirely. (Aside from the whole goal of creating non-human slaves.)

2 Likes

Perfection.

5 Likes

Yes.

Do tell me more about these things that “couldn’t possibly be sentient”.

That kind of thinking has never ended up in “cruelty and genocide”.
[/s]

Yes, the Turing test is not a good test. We do not have a “good test”. We don’t really have any other test.

If you can come up with a better one, have at it. There’s prizes and acclaim and vast wealth to be had.

1 Like

Given that people, you know, made the thing, and understand how dumb the processes that drive it are, and how very much not like actual cognitive functions they are, and that there’s no path from what they built to a functioning intelligence…

This, to me, suggests that the Turing test isn’t just not good, it’s actually the wrong approach entirely. There’s probably not a single test to be had but multiple tests of cognitive functions, and “convince a person you are a person” turns out not to work, for multiple reasons. E.g. it turns out it’s really easy to fool (some) people, especially if you present them with what amounts to a mirror, and an actual AI isn’t going to be convincing as a person anyway (not being one), so the test is actively misleading.

2 Likes