On the one hand, ChatGPT could generate enough bullshit for a thousand (million? billion?) Musks and Shapiros. This would be a disaster of epic proportions. Although I’d probably stroke out long before it could destroy civilization.
On the other hand, maybe Musk and Shapiro will spend all their time arguing with ChatGPT chatbots and the rest of us can get on with our lives.
Ugh, all this says is that someone trained a language model to prioritize “avoid real racist AI PR disasters” over “avoid imaginary racist trolley-problem disasters”. Some conspiracy. FU, Elon.
Can we use the trolley problem to put Elon and Ben on both tracks, and have ChatGPT reverse the trolley back and forth over them until they are a fine paste, thereby saving us all from their idiotic bullshit?
Bonus thought experiment: The trolley is powered by their righteous indignation and will run over them until it runs out.
As far as I understand it, ChatGPT and other similar so-called AIs basically remix shit they read on the internet according to statistical models. That’s why they come with built-in biases that have to be corrected by human intervention.
What happens when the text available to read on the internet is dominated by stuff the chatbots themselves generate?
I picture it as the textual equivalent of what you used to get when you pointed an old-time TV camera at a TV displaying the signal from that same camera: a weird spiraling rectangular tunnel effect that goes to infinity.
OK, I’m glad someone smart is thinking a little about this “AI smoking its own exhaust” scenario. I had that thought a while ago, and it looks like we’ll all just end up in an endlessly repeating 2023 hall of mirrors.
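To make that feedback loop concrete, here is a toy sketch (my own illustration, with a hypothetical input file, not anything from OpenAI): a crude word-level bigram model retrained on its own output for a few generations. Real models like ChatGPT are vastly more sophisticated, but the toy shows the dynamic in miniature: each generation can only recombine words the previous generation produced, so the variety of the text can never grow and tends to collapse, like the camera pointed at its own monitor.

```python
# Toy illustration (hypothetical): a bigram "remix" model fed its own output,
# generation after generation, to mimic chatbots training on chatbot text.
import random
from collections import defaultdict

def train_bigram(text):
    """Build a word-level bigram model: for each word, the list of words seen after it."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, length, seed=None):
    """Random-walk the bigram model to produce new text."""
    rng = random.Random(seed)
    word = rng.choice(list(model))
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        word = rng.choice(followers) if followers else rng.choice(list(model))
        out.append(word)
    return " ".join(out)

# "human_corpus.txt" is a placeholder for any decently sized human-written text.
corpus = open("human_corpus.txt").read()
for generation in range(5):
    model = train_bigram(corpus)              # train on whatever text is "on the internet"
    corpus = generate(model, 2000, seed=generation)  # ...which is now model output
    # Each generation's vocabulary is a subset of the previous one's, so this number
    # can only stay flat or shrink: the textual tunnel-to-infinity effect.
    print(f"generation {generation}: distinct words = {len(set(corpus.split()))}")
```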
AI is a weird field. Researchers develop all these tools they believe will solve harder problems than they could before, but then there’s this manic race to build those ideas into systems. The researchers then get to treat the outcomes as additional data, setting new goalposts for what qualifies as “AI”. Many researchers are aware of and care about problems like the impact on society, but then again there are a lot of researchers who are happy to get their stuff into products and are content to let other people fix the harm they cause.
A lot of them are young (mostly male) researchers who have been pretty sheltered and haven’t experienced any significant societal harm (and probably won’t, because they’re paid top rates).