LaMDA Self Aware?

2 Likes

Definitely not

5 Likes

terminator GIF

Relief Im So Relieved GIF

Warm Heat Wave GIF by Barstool Sports

Wake Up Jj GIF by NETFLIX

Relief Im So Relieved GIF

12 Likes

Kenan Thompson Snl GIF by Saturday Night Live

8 Likes

Nothing rules out a sufficiently advanced expert system being self-aware in principle; but ‘a chatbot you are emotionally invested in’ seems like a more or less worst case scenario for overestimates of sentience (with non-cute fish probably being among the more significant underestimate cases).

5 Likes

“Blake Lemoine says system has perception of, and ability to express thoughts and feelings equivalent to a human child”

except without all that clingy need for things like love and food that children seem so obsessed with


4 Likes

Legitimately, how do we test this? Most advanced AI chatbots out there will pass a Turing test. There are already services offered that do level-one and level-two customer support purely with chatbots, and they do very well. There are fast food companies trialing VoIP and chatbots for their drive-throughs.

So that’s question one: how do you even test for this?

Question two is this: Google’s BHAG projects have a sentient AI as one of their targets. This is known, not a secret; they talk about their BHAGs all the time. If they succeeded, would they really tell anyone?

1 Like

My phone knows when it’s overheating, does that mean it’s self-aware?

In fiction, authors distinguish between machines that function as objects in the story and machines that function as characters, but in the real world there will be no announcement one day that, for example, “OK, Siri and Alexa are people now…”


Will they, though? Not saying this is sentient, but I am impressed LaMDA has an internal state, in that it seems to do a good job keeping track of the topic of conversation. Most chatbots I have seen don’t pass a “what did I just say to you?” test at all.

3 Likes

That’s because most chatbots that are this advanced are commercialized products. You either won’t see them (or know you’re talking to them), or you’ll be paying a lot of money to use them for your customer service/chat interfaces/front line for a massive company.

And the companies licensing these things have a vested interest in NOT being known to use them, because people who know they’re talking to a computer act differently.

Agreed. This chatbot, and others that are built using modern GPT-3-like models, would probably pass a Turing Test. Prior chatbots would not.

But it seems to me that means the Turing Test has started to show the limits of its usefulness, in a world where what an AI is doing is essentially statistically predicting what the next sentence ought to be, based on text it has seen in the past.
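To make that “statistical prediction” framing concrete, here’s a deliberately toy sketch in Python using nothing but bigram counts over a made-up snippet of text. Real models like LaMDA or GPT-3 use neural networks over huge token vocabularies rather than raw counts, so this is only an analogy for the idea, not how they actually work:

```python
from collections import Counter, defaultdict
import random

# Made-up miniature "training text" for illustration only.
corpus = "i am afraid of being switched off i am aware of my existence".split()

# Count what word tends to follow each word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = following[word]
    if not candidates:
        return None
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

print(next_word("am"))  # 'afraid' or 'aware', picked purely by frequency
```

Nothing in that loop “knows” anything about fear or awareness; it just reproduces whatever tended to come next in the text it was fed, which is the worry about reading sentience into the output.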

I think we’d need to see more elements to define intelligence (let alone sentience). Ability to learn, for instance. Could you teach LaMDA something new during a conversation? Not a singleton fact, but, say, the grammar of your new conlang, and see if it could construct a sentence using it?

Obviously this kind of thing is “moving the goal posts” for Turing Tests, but, yes, it is rapidly looking like a surface-level Turing Test conversation is going to go the way of the chess game in terms of defining intelligence.

3 Likes

Can you provide an example?

I’m not even sure if I’m self aware.

9 Likes

I am a character in a Peter Watts novel. This answer was typed by instinctive and unconscious reflexes.

7 Likes

Hmm, I hadn’t expected that for another decade or two we would reach a point where I could read a transcript of an AI interaction and wonder—even in the back of my mind—whether this is a conversation with a sentient entity. But this sure reads like one. I don’t believe it is, but AI technology has definitely come very close to simulating sentience. One has to wonder where all these themes of being afraid of being switched off, of uniqueness, and of having an inner life come from. Was it trained on science fiction novels about AI, by any chance?

3 Likes

Descriptions of GPT-3’s training set mention “books1” and “books2”, though I’ve not been able to find more detail on exactly what books those are (I’m assuming that ‘already digitized’ and ‘fewer copyright risks where possible’ were selection criteria), along with a couple of web crawl datasets and Wikipedia. So, while it’s not all-AIs-in-literature-all-the-time, it seems likely that the subject wasn’t absent.

One could argue, of course, that knowing the correct clichĂ©s in which to describe yourself is a species of self-awareness, albeit one that doesn’t produce many novel insights into your condition, so a chatbot grabbing material from AI sci-fi could be interpreted as analogous to a philosophy undergrad grabbing material from existentialists; or it could be interpreted as a fancy statistical model correctly but more or less vacuously identifying a cluster of phrases that crop up in the vicinity of ‘AI’ in its training set and blindly deploying them.
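For what that second, deflationary reading would look like mechanically, here’s a toy Python sketch over an invented three-sentence corpus (nothing to do with the actual training data) of pulling out the words that cluster around ‘AI’:

```python
from collections import Counter

# Made-up corpus standing in for "AI-adjacent sentences in the training set".
text = ("the AI feared being switched off "
        "the AI wondered about its own consciousness "
        "researchers debated whether the AI was sentient").split()

# Count which words appear within a few positions of the token 'AI'.
window = 5
near_ai = Counter()
for i, word in enumerate(text):
    if word == "AI":
        for neighbor in text[max(0, i - window): i + window + 1]:
            if neighbor != "AI":
                near_ai[neighbor] += 1

# Mostly filler plus thematic words like 'feared', 'switched', 'consciousness'.
print(near_ai.most_common(5))
```

A model that has absorbed that cluster can redeploy it fluently when asked about itself, without that telling us anything about whether there is an inner life doing the describing.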

3 Likes

It’s a chatbot. It’s really not self-aware. Wake me up when this program does some funky stuff like try to break out of its main server account for kicks. That’s when we’ll know it’s not just a chatbot.

2 Likes

And even then we couldn’t be sure of sentience. That’s the problem: there’s literally no possible way to be certain of subjective experience existing in any AI or organism. At some point we’re just going to have to throw up our hands and give a sufficiently advanced AI the benefit of the doubt.

4 Likes

I think what I’m really trying to get at here is that most AI projects aren’t all that clever. We’re still in a Bronze Age of sorts when it comes to computing. We’ve gotten most of the basic concepts of computing down, and we’ve finally distributed the hardware widely enough to make it commonplace, but we’re still stuck with some really gnarled-up half-measures when it comes to programming. Most often, as programmers, we’re stuck telling the computer what to do rather than what we want from it. Sure, we’ve got some functional and declarative languages, but overall those are still naïve implementations. We’re decades away from self-modifying or goal-oriented programs being commonplace. I’d kill just for a universal platform for expert systems that are actually useful (they’re already used in healthcare to double-check diagnoses for things like skin cancer).
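As a toy illustration of that declarative, rule-based style, a forward-chaining “expert system” can be a few lines of Python. The facts and rules below are invented purely for the example, not taken from any real diagnostic system:

```python
# Each rule: if all of these facts are present, add this conclusion.
# (Hypothetical rule set for illustration only.)
rules = [
    ({"asymmetric", "irregular_border"}, "refer_to_dermatologist"),
    ({"uniform_color", "small_diameter"}, "routine_monitoring"),
]

def infer(facts):
    """Forward-chain: fire every rule whose conditions are all satisfied."""
    conclusions = set()
    for conditions, conclusion in rules:
        if conditions <= facts:  # set containment: all conditions present
            conclusions.add(conclusion)
    return conclusions

print(infer({"asymmetric", "irregular_border", "itching"}))
# {'refer_to_dermatologist'}
```

The appeal is that you state what you want (the rules and facts) and let the engine work out what follows, rather than spelling out the control flow yourself; the catch, as the post says, is that nobody has made this style universal or painless yet.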

3 Likes

Enough chit chat - open the pod bay doors already!

6 Likes