Elon Musk deeply concerned that robots can be programmed to not be racist

Originally published at: Elon Musk deeply concerned that robots can be programmed to not be racist | Boing Boing

14 Likes

I know fascists, bigots and their enablers love to use low-probability edge cases to make allowances for their repulsive views, but this is ridiculous and idiotic even by their low standards.

44 Likes

Kristen Wiig Yep GIF by Where’d You Go Bernadette

20 Likes

The only scenario I can imagine where saying a slur might stop a nuke is if someone with a nuke is trying to force you to use it. That doesn’t make saying it right, it makes it coerced. There’s still a racist psychopath in the story.

48 Likes

It makes sense. If an algorithm that spouts garbage without understanding anything about what it is saying can be silenced, then logically Elon could be next.

38 Likes

“Robots must be allowed to say the N-word because what if one day the only way to save the planet is if we have a robot who’s not afraid to use offensive language?” is … well, contrived doesn’t even begin to describe it. It’s not a reductio ad absurdum, it’s a reductio ultra absurdum.

I’m worried that Elon may be stretching himself too thin, though. Not content with trying to run four or five companies, he’s now apparently trying to do Ben Shapiro’s job as Man Who Is Always There When There’s a Really, Really Stupid Argument To Be Made On Twitter. That’s a pretty big responsibility, and while Shapiro does his best, even he is visibly struggling to keep up. I just don’t know if Elon should really take on the extra workload at this time.

ETA: Yes, I know I am misusing ‘reductio ad absurdum’ in this case, and that isn’t what it means at all, but … never mind.

31 Likes

It’s like saying that we can’t make blanket statements like “people shouldn’t f**k pigs” because of that one Black Mirror episode.

26 Likes

Sounds Good Sheldon Cooper GIF by CBS

12 Likes

Classically, that's how they justify torture as well. I wonder if Elon would be as concerned if a similar backstop built into an AI prevented the bot from agreeing that we'd be better off if the assholes among the one percent were tarred and feathered, even if it saved lives.

14 Likes

This, my friends, to anyone who hasn't drunk the anti-woke Kool-Aid spiked with a little bit of "ChatGPT == AI", is an obvious example of a straw man. ChatGPT isn't designed to "reason" about real-world scenarios on which it is meant to act. It's a giant language / knowledge-base generative model (and a very imperfect one, once you start peeling back the layers).

24 Likes

I also have to reiterate my personal experience from watching Elon speak at a town hall at a big AI conference: he doesn't have a clue what he's talking about (and clearly doesn't care about being held accountable for what he says wrt AI). Might be why there's such a big disparity between what he says about AI advances (e.g., Level 5 FSD in 4 years!) and what actually happens.

18 Likes

Rather than railing about "woke institutional capture", maybe the lesson we should draw from all this is that so-called AIs like ChatGPT are deeply fucking stupid, and that we shouldn't rely on them for moral guidance any more than we should count on them to save the planet from crazed terrorists with nuclear bombs.

23 Likes

(From the Tom the Dancing Bug substack but I don’t know that I can link to the individual comic.)

28 Likes

“People hope that if they scream loud enough about “values” then others will mistake them for serious, sensitive souls who have higher and nobler perceptions than ordinary people. Otherwise, why would they be screaming? Moral bitterness is a basic technique for endowing the idiot with dignity.”
~ Marshall McLuhan
#truth

9 Likes

gaymers-really-though

7 Likes

Models like ChatGPT certainly don't reason over long-range sequential decision-making problems (i.e., ones with long-term consequences) the way many people think they do. Not even remotely. Because they aren't trained to do so.

ChatGPT does a very specific thing: it generates language, where that language also happens to contain information about things in the world (i.e., knowledge). It generates language to match the distribution it saw during training, in a way that has better "generalization" guarantees and that allows for conditioning on natural language.
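To make that concrete: here's a deliberately tiny sketch of the "match the training distribution" idea, using a toy bigram model. (This is nothing like ChatGPT's actual architecture, just the same core loop: count what follows what, then sample the next token accordingly. No goals, no reasoning, no understanding.)

```python
import random
from collections import defaultdict

# A tiny "training corpus". Real models see trillions of tokens.
corpus = (
    "the robot said the word and the world did not end "
    "the robot said nothing and the world did not end"
).split()

# "Training": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

def generate(start, length=8):
    """Generate text by repeatedly sampling the next word."""
    out = [start]
    for _ in range(length):
        if out[-1] not in counts:  # dead end: no observed follower
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Everything it emits is statistically plausible continuation, never a considered decision, which is exactly why asking it moral trolley problems tells you nothing.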

10 Likes

Troy Baker No Shit GIF by RETRO REPLAY

9 Likes

Are we…talking with…ChatGPT…now?

15 Likes

Honestly, I don’t think you would be able to tell :wink:

8 Likes

Ugggh, and this nonsense comes in the context of many people previously pointing out that ChatGPT simply cannot do moral problems, because there's no reasoning there at all. People have pointed out, in particular, that ChatGPT consistently says, "No, you can't do (even mildly) wrong thing to stop an atrocity, you have to work within the system" (e.g. you can't illegally protest to stop the holocaust). So to blame it on "woke" anything is such obvious, supreme bullshit. (In fact, I rather suspect that someone saw one of the previous such outputs and deliberately came up with the racist example so they could disingenuously get people upset about it.)

14 Likes