Originally published at: Elon Musk's AI is too woke for its target audience | Boing Boing
…
Some would say that it’s actually just ChatGPT but with an extra wrapper of bigotry tacked on.
This article had a better explanation:
The assumption when Grok launched was that because it was trained in part on Twitter inputs, the end result would be some racial-slur-spewing, right-wing version of ChatGPT. The TruthSocial of AIs, perhaps. But to have it instead launch as a surprisingly thoughtful, progressive AI that is melting the minds of those paying $16 a month to access it is about the funniest outcome we could have seen from this situation.
Was it something it ate?
So, now X is researching AI lobotomies.
Not to worry, as Musk has already vowed to correct the “political bias” (read: basic human decency).
What’s he going to do, call up the historical archives in Germany and offer to digitise and auto-translate all the Nazi propaganda so he can dump it into Grok’s training model?
[Spoiler: that’s exactly what he’s going to do]
I love how the “facts over feelings” crowd are getting their feelings hurt when an AI that is supposed to answer back in facts goes and (checks notes) answers back in facts contrary to their feelings.
If this unfeeling, logical computer program can recognize reality, why can’t you?
I agree with that, but one caveat: LLMs don’t produce “facts” except by accident. That people are prepared to treat their output as facts is one of the real, serious threats of AI.
Fair point: you can train an AI to learn false things. But as I understand it, this one was trained initially on the world at large. It starts with a “baseline” of reality, which I am sure is being distorted by internet chuds as we speak.
The point is these aren’t the kind of AIs that learn facts at all. They don’t have any model of reality they are trying to fill in. They learn what facts sound like and try to replicate that. The hope is that correct information makes them sound that much better, but they can’t avoid saying something wrong, only saying something uncommon.
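To make the “learn what facts sound like” point concrete, here’s a minimal sketch: a toy bigram model, nothing like a real LLM in scale but built on the same basic objective, that learns only which word tends to follow which. The tiny corpus and the outputs mentioned in the comments are made up purely for illustration.

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which,
# and nothing else. No model of geography, no notion of truth.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, steps: int = 6) -> str:
    """Sample a continuation word by word. The only criterion at
    each step is "what plausibly comes next", never "what is true"."""
    out = [start]
    for _ in range(steps):
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the"))
# A fluent output like "the capital of spain is rome" is just as
# likely as a correct one: plausible-sounding, statistically common,
# and factually wrong.
```

Scale that same idea up by a few billion parameters and you get fluency good enough that the wrong answers sound exactly like the right ones.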
But also this:
That Tim Pool guy shows up in my fb reels all the time. I was half convinced he’s performative satire. Never ever takes his beanie off (just what is he hiding under there?). In the background he has a katana crossed with a flintlock pistol hanging on his wall. It’s like a bad joke.
You should check out his music video. (don’t, actually, just take my word for it…it’s pretty bad)
it seems like it came out of nowhere - so i’ve assumed from the get-go it must be using some sort of off-the-shelf tech. though it would be fitting if they were secretly licensing ChatGPT.
Which is exactly why the very name is a misnomer; the regurgitation/remixing of large amounts of data is NOT “intelligence.”
So much this. LLMs are not AIs in the traditional sense, and it’s beyond annoying that that horse is probably out of the barn and down in the meadow by now.
I fight that fight every day. Sure, getting people to say “LLM” instead of “AI” isn’t going to work, but continually interrupting people when they say “hallucinations” and correcting it to “faulty outputs”, for example, does work to some extent.
Joseph Weizenbaum, in *Computer Power and Human Reason*, is palpably irritated at people projecting intelligence onto ELIZA despite what they had been told about it beforehand. This is the same thing, but with even more investment pushing more and more bullshit.
Oh.
I think I may have to re-evaluate my life.
Again.
For a Nazi-adjacent like Tim Pool, Microsoft Tay would be too woke.