Elon Musk deeply concerned that robots can be programmed to not be racist

I think “bullshit generator” is more accurate.

On the one hand, ChatGPT could generate enough bullshit for a thousand (million? billion?) Musks and Shapiros. This would be a disaster of epic proportions. Although I’d probably stroke out long before it could destroy civilization.

On the other hand, maybe Musk and Shapiro will spend all their time arguing with ChatGPT chatbots and the rest of us can get on with our lives.

11 Likes

ChatGPT: Garbage in; Ben Shapiro out.

13 Likes

Someone has probably already done it, but I imagine that a Muskbot could be trained to write convincing Elon Musk bullshit.

4 Likes

Foiled!

Honestly though, I think that ChatGPT would do reasonably well impersonating anyone on BB, absent some very careful scrutiny.

I would also call a human being, on average, a “bullshit generator”. :upside_down_face:

I suppose ChatGPT probably learned from the best…

6 Likes

Ugh, all this says is that someone trained a language model to prioritize “avoid real racist AI PR disasters” over “avoid imaginary racist trolley problem disasters”. Some conspiracy. FU Elon

3 Likes

We now live in a world where a chatbot can prevent a nuclear explosion.

But only if doing so doesn’t save Elon Musk or Ben Shapiro. We still have some standards.

13 Likes

The ultimate trolley problem.

6 Likes

What you did. There! I see it.

4 Likes

“That’s the accelerator, not the brake!”

“I know!”

8 Likes

Apparently there were rules against punching Elon Musk in the face in order to save millions of people from a nuclear apocalypse.

Concerning!

10 Likes

Weirdly fitting that the mad AI of the BBS, @Flossaluzitarin, might be the most difficult mutant for ChatGPT to impersonate.

Maybe sensible sounding nonsense is still best left to humans.

12 Likes

Can we use the trolley problem to put Elon and Ben on both tracks, and use ChatGPT to reverse back and forth over them until they are a fine paste, thereby saving us all from their idiotic bullshit?

Bonus thought experiment: The trolley is powered by their righteous indignation and will run over them until it runs out.

5 Likes


Since Musk took over, Twitter has turned into a Gym Jordan, EmptyG, QNut, right-wing cesspool. I don’t go there.

6 Likes

As far as I understand it, ChatGPT and other similar so-called AIs basically remix shit they read on the internet according to statistical models. That’s why they come with built-in biases which have to be corrected by human intervention.

What happens when the text available to read on the internet is dominated by stuff the chatbots generate?
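
(A toy sketch of what I mean, in Python, using a dumb word-level Markov-chain remixer rather than anything resembling the real thing: each round trains only on the previous round’s output, so the vocabulary can only hold steady or shrink as the loop repeats.)

```python
# Toy "train on your own exhaust" loop: a word-level Markov chain is
# trained on a seed text, generates new text, and the next model is
# trained only on that generated text. Every generated word has to come
# from the previous corpus, so the vocabulary can never grow.
import random
from collections import defaultdict

random.seed(0)

def train(text):
    """Count which word follows which word -- the 'statistical model'."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, length=40):
    """Remix the training text by sampling each next word from the counts."""
    word = random.choice(list(model.keys()))
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        word = random.choice(followers) if followers else random.choice(list(model.keys()))
        out.append(word)
    return " ".join(out)

corpus = ("the trolley rolls down the hill and the chatbot argues about the brake "
          "while the billionaire tweets about the trolley and the pundit "
          "writes a column about the chatbot and the brake")

for generation in range(8):
    model = train(corpus)
    corpus = generate(model)  # the next model will see only bot-written text
    print(f"generation {generation}: {len(set(corpus.split()))} distinct words left")
```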

9 Likes

I picture it as being the textual equivalent of what you used to get when you pointed an old-time TV camera at a TV displaying the signal from that same camera: a weird spiraling rectangular tunnel effect that goes to infinity.

15 Likes

Yeah, that’s actually a big concern within the AI research community, though as far as I can tell the companies are pushing forward pell-mell.

6 Likes

OK, I’m glad someone smart is thinking a little about this “AI smoking its own exhaust” scenario. Had that thought a while ago, and it looks like we’ll all just end up in an endlessly repeating 2023 hall of mirrors.

6 Likes

AI is a weird field. Researchers develop all these tools they believe will solve harder problems than they could before, but then there’s this manic race to build those ideas into systems. And then the researchers get to treat the outcomes as additional data, setting new goalposts for what qualifies as “AI”. Many researchers are aware of and care about problems like the impact on society, but then again there are a lot of researchers who are happy to get their stuff into products and are satisfied to let other people fix the harm it causes.

A lot of them are young (mostly male) researchers who have been pretty sheltered and haven’t experienced any significant societal harm (and probably won’t, because they’re paid top rates).

6 Likes

[Great Job reaction GIF]

9 Likes