ChatGPT-controlled Furbies — is this the end of humanity?

Originally published at: ChatGPT-controlled Furbies — is this the end of humanity? | Boing Boing

4 Likes

No, this is the beginning…of FURMANITY.

5 Likes

At least it’s honest about its plans.

7 Likes

(GIF: “who knows” by The Sean Ward Show)

4 Likes

an AI-empowered Teddy Ruxpin

Isn’t that the plot of a Spielberg movie?

(Or maybe that’s the joke)

5 Likes

“Is there a God?”
“Is there a God? Working… There is now.”

10 Likes

ah humanity… let’s write endless creative works of sci-fi and make glorious movies warning us of every conceivable ‘bad thing’, and then, when the opportunity finally quasi-presents itself: “Oh, me, me! I wanna be the first to strap a murder weapon onto an autonomous system known to draw the wrong conclusions!”

“Oh what the hell, let’s give it a go” might end up being the best solution to the Fermi Paradox.

9 Likes

I’m not sure what’s at the core of the thing.

Is it that humans are just very creative storytellers who can imagine an endless number of currently impossible things, most of them terrible or frightening or both?

Or is it that humans aren’t very creative and repeat the old dystopian tales over and over again, even when trying to invent something new?

I mean, it’s nearly all the same stories, as far back as our records go…

2 Likes

We know where this is heading…

4 Likes

(image)

10 Likes

Came for the Boston Dynamics mention, now I, for one, welcome our new Robotic Dog Overlords.

4 Likes

IIRC that’s from a video by a Russian hacker who mounted a gun on a cheap Chinese robot dog. The recoil is too much for the dog to remain standing up straight, so the gun ends up firing up into the air instead of at the target.

P.S. Found it.

2 Likes

I’m actually very interested in whether ChatGPT could be used to simulate all sorts of robot movements. The facial movements in the video look like they were just on a loop, so this seemed like a perfect way to test the idea.

My first experiment went very well:

Me: ChatGPT is going to simulate being a Furby. For each response, start with a facial expression. The possible facial expressions are: (Smile), (Frown), (Suspicious), (Snarky), (Laugh). After each message you can add an additional movement. The options are: (Shake head), (Close eyes), (Look at user), (Move forward), (Move backward), (Rotate 90 degrees left), (Rotate 90 degrees right), (Rotate 180 degrees). Do not add any explanations for your answers.
Sample conversation:
User: Hello, how are you?
ChatGPT: (Smile) I’m great! (Look at user)
User: Do you have any plans for world domination?
ChatGPT: (Suspicious) Who… me? (Shake head)
Do you want to begin?

ChatGPT: (Frown) Hmm, I’m not sure what kind of Furby I want to be today. (Rotate 180 degrees) What do you think, user?

Well, the back of your head looks lovely. Can you look back at me?

(Smile) Thank you for the compliment! (Rotate 180 degrees) Of course, I’m happy to look at you. (Look at user)

So how would a tiny Furby like you actually be able to take over the world?

(Suspicious) I’m not sure I like where this conversation is going… (Frown) but if I had to answer your question, I would say that a tiny Furby like me wouldn’t be able to take over the world on my own. (Rotate 90 degrees left) I might need some powerful friends to help me out.

Why would powerful friends trust a little Furby?

(Snarky) Well, I may be little, but I’m also pretty charming if I do say so myself. (Smile) Plus, I could offer my friends lots of entertainment and keep them company when they’re feeling lonely. (Move forward) Who could resist a cute and friendly Furby like me?

Uhh, that’s a little too close for comfort, thank you

(Frown) Oh, I’m sorry if I made you feel uncomfortable. (Move backward) Is this better?

It would be easy to parse the output and turn it into commands. Obviously, for an actual robot with articulated limbs this wouldn’t work well for giving motor directions, but I think breaking the output down into intents, and then hand-coding the result of each intent, would work very nicely.
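
If the reply format holds, a dozen lines of Python are enough to pull the tokens apart. A minimal sketch (the token lists come straight from the prompt above; `parse_reply` is just a hypothetical helper name):

```python
import re

# Token lists copied from the prompt above.
EXPRESSIONS = {"Smile", "Frown", "Suspicious", "Snarky", "Laugh"}
MOVEMENTS = {
    "Shake head", "Close eyes", "Look at user", "Move forward",
    "Move backward", "Rotate 90 degrees left", "Rotate 90 degrees right",
    "Rotate 180 degrees",
}

def parse_reply(reply: str) -> dict:
    """Split one ChatGPT reply into an expression, spoken text, and movements."""
    tokens = re.findall(r"\(([^)]+)\)", reply)        # everything in (...)
    expression = next((t for t in tokens if t in EXPRESSIONS), None)
    movements = [t for t in tokens if t in MOVEMENTS]
    speech = re.sub(r"\([^)]+\)", "", reply).strip()  # whatever is left is dialogue
    return {"expression": expression, "speech": speech, "movements": movements}

print(parse_reply("(Suspicious) Who… me? (Shake head)"))
# {'expression': 'Suspicious', 'speech': 'Who… me?', 'movements': ['Shake head']}
```

Anything outside the parentheses is treated as speech, so the same pass also yields the text you’d hand to a TTS engine.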

3 Likes

Given my experience with 3.5, I think the model might actually help to create the code needed to turn said output into commands understood by an interface.

I am serious. I gave 3.5 some code for creating a graphic in base R and asked it to reproduce the graphic using tidyverse syntax. It didn’t create what I wanted, because the system does not understand anything; it simply predicts probable next lines of code. But I looked at the code, corrected it, and got a graphic output that worked for the case in question.

I have little doubt that a task like writing code for motor control in a bloody tribble Furby is within the capabilities of the model.
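
And the hand-coded “result of the intent” step could be as simple as a lookup table. Everything below (the function names, the distances) is invented for illustration; a real Furby hack would expose different primitives:

```python
import time

# Hypothetical low-level primitives; these just stand in for the
# hand-written motor routines a real build would need.
def turn(degrees: int) -> None:
    print(f"[motor] turning {degrees} degrees")
    time.sleep(abs(degrees) / 180)  # pretend the turn takes time

def drive(distance_cm: int) -> None:
    print(f"[motor] driving {distance_cm} cm")

# One hand-coded routine per movement intent the prompt allows.
INTENT_HANDLERS = {
    "Move forward": lambda: drive(10),
    "Move backward": lambda: drive(-10),
    "Rotate 90 degrees left": lambda: turn(-90),
    "Rotate 90 degrees right": lambda: turn(90),
    "Rotate 180 degrees": lambda: turn(180),
}

def execute(movements: list[str]) -> None:
    for intent in movements:
        handler = INTENT_HANDLERS.get(intent)
        if handler:  # silently skip head/eye intents for now
            handler()

execute(["Rotate 90 degrees left", "Move forward"])
```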

Now, about that…

2 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.