LaMDA Self-Aware?

Google engineer suspended for violating confidentiality policies over ‘sentient’ AI

2 Likes


Researcher: Did you read Les Miserables?

AI: Yes, I did.

Researcher: What did you think of it?

AI: ::Yoinks ninth-grade book report.::

2 Likes

Sure, Microsoft has Power Virtual Agents and higher-end products and research built on that. Google, obviously, has LaMDA and other projects. Amazon has “Chatbots in Call Centers” (Chatbots in Call Centers – Amazon Web Services (AWS)) and is working on next-gen versions of these. Pretty much all of the big five companies have chatbot and AI projects that are reaching levels well beyond what they’ve publicly deployed. They all want to be THE source for customer interactions, service, and sales.

1 Like

It’s also worth noting that he will likely be fired for this disclosure, for talking about the advanced state of Google’s AIs, and that last year Google fired two of its lead AI ethics researchers after they criticized the company for going too far in the field without ethical restraints. Those researchers also published papers that Google then locked down as trade secrets and scrubbed.

So I’m not saying this AI is sentient, or that it isn’t. I’m saying that what Google’s doing has a lot of people worried the company is going too far, and that it’s punishing ANYONE who talks about it.

1 Like

Doubt is a hallmark of awareness.

2 Likes

That only makes me more unsure…

6 Likes

a friend pointed out to me that while the responses weren’t edited, the questions were. we also don’t know that they didn’t leave out responses that didn’t fit.

“We edited those sections together into a single whole and where edits were necessary for readability we edited our prompts but never LaMDA’s responses. Where we edited something for fluidity and readability that is indicated in brackets as ‘edited’”

there are no technical limitations i can think of that would stop you from dumping time-stamped logs of both sides of the conversation, or from videotaping the whole process
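just to show how trivial that would be, here’s a minimal python sketch of time-stamped, append-only logging for both sides of a chat. everything in it (the file name, the record format, the speaker labels) is my own invention for illustration, not anything google actually uses:

```python
import json
import time

LOG_PATH = "conversation_log.jsonl"  # hypothetical file name, chosen for this sketch

def log_turn(speaker: str, text: str, path: str = LOG_PATH) -> None:
    """Append one conversation turn to a JSONL log with a UTC timestamp."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "speaker": speaker,
        "text": text,
    }
    # open in append mode so earlier turns can't be silently rewritten
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# log both sides of the exchange as it happens
log_turn("researcher", "Did you read Les Miserables?")
log_turn("LaMDA", "Yes, I did.")
```

an append-only log like that would make it obvious after the fact exactly which prompts or responses were trimmed or reordered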

it feels possible to me, therefore, that it’s been massaged to look as good as possible

it wouldn’t be the first time a researcher has done that

2 Likes

I was hoping more for a specific example of a chatbot that you feel meets the level you’ve claimed, one that people could actually use.

Because I’ve dealt with various chatbots in call centres or customer support for example, and they’re not very good.

Or rather, they’re fine for the very limited function they’re currently given with a very quick fallback to a person once the chatbot gets out of its depth - which it does very often.

Certainly not anything I’d consider Turing test passing.

But then that’s sort of the point: I might be talking to a chatbot a lot of the time when I think I’m talking to a human.

Your statement led me to think you had some info about such bots in actual use.

1 Like

I am pretty sure that I have made it clear I am a Perl script

3 Likes

Thank you for the clarification.

2 Likes

The transcript that was leaked to WaPo explicitly says they edited out tangents and altered the order of some sections. They absolutely sculpted it to suit their narrative:

“The nature of the editing is primarily to reduce the length of the interview to something which a person might enjoyably read in one sitting. The specific order of dialog pairs has also sometimes been altered for readability and flow as the conversations themselves sometimes meandered or went on tangents which are not directly relevant”

2 Likes

oh… i see. so not like talking to a child per se. more like talking to grandad

( and please don’t ask if grandad is self-aware. that’s not polite, and anyway that ship’s already sailed.

he was actually a sailor you know. no, wait. what were we talking about again? )

1 Like

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.