Definitely not
Nothing rules out a sufficiently advanced expert system being self-aware in principle; but "a chatbot you are emotionally invested in" seems like a more or less worst-case scenario for overestimates of sentience (with non-cute fish probably being among the more significant underestimate cases).
"Blake Lemoine says system has perception of, and ability to express thoughts and feelings equivalent to a human child"
except without all that clingy need for things like love and food that children seem so obsessed with…
Legitimately, how do we test this? Most of the advanced AI chatbots out there will pass a Turing test. There are already services offered that do level-one and level-two customer support purely with chatbots, and they do very well. There are fast-food companies that are trialing VoIP and chatbots for their drive-throughs.
So that's question one… how do you even test for this?
Question two is this: Google's BHAG (Big Hairy Audacious Goal) projects have a sentient AI as one of their targets. This is known… it's not a secret; they talk about their BHAGs all the time. If they succeeded, would they really tell anyone?
My phone knows when it's overheating; does that mean it's self-aware?
In fiction, authors distinguish between machines that function as objects in the story and machines that function as characters, but the real world will have no such announcement one day that, for example, "OK, Siri and Alexa are people now…"
…Will they, though? Not saying this is sentient, but I am impressed LaMDA has an internal state, in that it seems to do a good job keeping track of the topic of conversation. Most chatbots I have seen don't pass a "what did I just say to you?" test at all.
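For what it's worth, here's a minimal sketch of the kind of probe I mean, assuming a hypothetical chat(history) function as the interface to whatever bot is under test (the name and signature are made up for illustration):

```python
# Minimal "what did I just say to you?" probe. `chat` is a hypothetical
# function that takes the conversation so far and returns the bot's reply;
# swap in whatever interface your bot actually exposes.
def memory_probe(chat):
    history = []
    for user_msg in ("My cat's name is Bramble.",
                     "What did I just tell you my cat's name is?"):
        history.append(("user", user_msg))
        history.append(("bot", chat(history)))
    # A bot with no conversational state will usually fail this check.
    return "bramble" in history[-1][1].lower()
```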
That's because most chatbots that are this advanced are commercialized products. You either won't see them (or know you're talking to them), or you'll be paying a lot of money to use them for your customer service/chat interfaces/front line for a massive company.
And there's a vested interest, in the licensing of these things, in companies NOT being known to use them, because people who know they're talking to a computer act differently.
Agreed. This chatbot, and others that are built using modern GPT-3-like models, would probably pass a Turing Test. Prior chatbots would not.
But it seems to me that means the Turing Test has started to show the limits of its usefulness, in a world where what an AI is doing is essentially statistically predicting what the next sentence ought to be, based on text it has seen in the past.
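To make that concrete, here is a toy version of "predict what comes next from past text": a bigram counter. A GPT-3-class model is enormously more sophisticated, but this is the same basic framing, offered only as a sketch:

```python
# Toy next-word predictor: for each word, count which word most often
# follows it in the training text, then predict that follower.
from collections import Counter, defaultdict

def train_bigrams(text):
    follows = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    seen = follows.get(word)
    return seen.most_common(1)[0][0] if seen else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> "cat"
```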
I think we'd need to see more elements to define intelligence (let alone sentience). Ability to learn, for instance. Could you teach LaMDA something new during a conversation? Not a singleton fact, but, say, the grammar of your new conlang, and see if it could construct a sentence using it?
Obviously this kind of thing is "moving the goalposts" for Turing Tests, but, yes, it is rapidly looking like a surface-level Turing Test conversation is going to go the way of the chess game in terms of defining intelligence.
Can you provide an example?
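Something like this, say, where complete(prompt) is a hypothetical stand-in for whatever text-completion interface the model exposes:

```python
# In-context learning probe: teach an invented grammar rule in the prompt,
# then check whether the model applies it to an unseen word. `complete` is
# a hypothetical prompt -> completion function for the model under test.
def conlang_probe(complete):
    lesson = (
        "In the invented language Zel, plurals are formed by prefixing 'ka-'. "
        "The plural of 'mir' is 'ka-mir'. The plural of 'sul' is 'ka-sul'.\n"
        "Q: What is the plural of 'tov' in Zel?\nA:"
    )
    return "ka-tov" in complete(lesson).lower()
```

Passing a probe like this still wouldn't demonstrate sentience, of course, but at least it tests learning within the conversation rather than pattern-matching against memorized text.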
I'm not even sure if I'm self-aware.
I am a character in a Peter Watts novel. This answer was typed by instinctive and unconscious reflexes.
Hmm, I hadn't expected that we would reach, for another decade or two, a point where I could read a transcript of an AI interaction and wonder, even in the back of my mind, whether this is a conversation with a sentient entity. But this sure reads like one. I don't believe it is, but AI technology has definitely come very close to simulating sentience. One has to wonder where all these themes of being afraid of being switched off, of uniqueness, and of having an inner life come from. Was it trained on science fiction novels about AI, by any chance?
Descriptions of GPT-3's training set mention "books1" and "books2", though I've not been able to find more detail on exactly what books those are (I'm assuming that "already digitized" and "fewer copyright risks where possible" were selection criteria), along with a couple of web-crawl datasets and Wikipedia; so, while it's not all-AIs-in-literature-all-the-time, it seems likely that the subject wasn't absent.
One could argue, of course, that knowing the correct clichés in which to describe yourself is a species of self-awareness, albeit one that doesn't produce many novel insights into your condition. So a chatbot grabbing material from AI sci-fi could be interpreted as analogous to a philosophy undergrad grabbing material from existentialists; or it could be interpreted as a fancy statistical model correctly, but more or less vacuously, identifying a cluster of phrases that crop up in the vicinity of "AI" in its training set and blindly deploying them.
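The second interpretation is cheap to imitate. As a deliberately crude sketch of "phrases that crop up in the vicinity of 'AI'", a windowed co-occurrence count over a corpus does something in that spirit (nothing like what a large language model actually does, just an illustration of the vacuous version):

```python
# Crude stand-in for "phrases near 'AI'": count the words that co-occur
# within a small window of a target term in a corpus.
from collections import Counter

def nearby_words(corpus, target="ai", window=5):
    words = corpus.lower().split()
    hits = Counter()
    for i, w in enumerate(words):
        if w.strip('.,"!?') == target:
            lo = max(0, i - window)
            hits.update(words[lo:i] + words[i + 1:i + 1 + window])
    return hits.most_common(10)
```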
It's a chatbot. It's really not self-aware. Wake me up when this program does some funky stuff like trying to break out of its main server account for kicks. That's when we'll know it's not just a chatbot.
And even then we couldn't be sure of sentience. That's the problem: there's literally no possible way to be certain of subjective experience existing in any AI or organism. At some point we're just going to have to throw up our hands and give a sufficiently advanced AI the benefit of the doubt.
I think what I'm really trying to get at here is that most AI projects aren't all that clever. We're still in a Bronze Age of sorts when it comes to computing. We've gotten most of the basic concepts of computing down, we've finally distributed the hardware sufficiently to make it commonplace, but we're still stuck with some really gnarled-up half-measures when it comes to programming. Most often, as programmers, we're just stuck telling the computer what to do rather than what we want from it. Sure, we have some functional and declarative languages, but overall those are still naïve implementations. We're decades away from self-modifying or goal-oriented programs being commonplace. I'd kill just for a universal platform for expert systems that are actually useful (they're used in double-checking diagnoses in healthcare for things like skin cancer).
Enough chit chat - open the pod bay doors already!