Originally published at: https://boingboing.net/2024/03/06/public-doesnt-trust-ai-and-it-doesnt-trust-the-tech-sector.html
…
It looks fine on the post. No idea why it doesn’t show here, though it’s probably something nasty in WordPress’s dogshit post editor.
The particular Dune quote I find most telling on the issue of current ‘A.I.’ is from the Orange Catholic Bible:
“Thou shalt not make a machine in the likeness of a human mind.”
― Frank Herbert, Dune
Since, given Markov-style (probabilistic) large language models, that’s pretty much what has been done, with an extra dubious dose of Reddit scraping.
Look at the people promoting AI: Musk, Zuckerberg, Andreessen. They’ve done nothing to earn anyone’s trust. Altman is slightly better, but it’s clear he’s mainly in it for the money too.
People shouldn’t trust AI and shouldn’t trust the tech sector - and I’ve been working in the tech sector for the last 27 years.
Given we have AI used against us, it’s no wonder we don’t trust it. The recent AI developments are largely tools we can use to help us get through all the obstacles (natural and human-made) in our lives.
As a technology professional: It’s about time
[Citation Needed]
But it hasn’t dropped low enough for people not to be glued to their tech for hours a day. I’m curious how this compares to trust in other industries like finance, energy, healthcare, etc.
Though keep in mind that being better than Musk, Zuckerberg, and Andreessen is so easy that anyone could do it just by shutting the hell up once in a while.
Is @beschizza messing with us or is the tech messing with him?
As an amateur creative writer I disagree.
The whole point of large language models is to write something statistically predictable. That precludes any real innovation or knowing subversion of tropes and phrasing by an author.
Functionally these things are less like human minds than even early AI experiments like SHRDLU, because there is no attempt to process the meaning of the words. I don’t think anyone at the time anticipated how much you can make something sound like a human without any kind of human-like interpretation at all.
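If it helps to see the idea in miniature, here’s a toy bigram sketch — entirely made up, and nothing like the neural networks real LLMs actually use — of “predict the statistically likely next word,” with no notion of meaning anywhere in it:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "model" built from raw word counts.
# Real LLMs are neural networks over token embeddings, not count tables,
# but the basic move is the same: emit a statistically likely next token
# given the context, with no representation of what the words mean.
corpus = "the cat sat on the mat the dog sat on the rug".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # count how often each word follows each word

def most_likely_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("sat"))  # -> 'on'
print(most_likely_next("the"))  # -> 'cat' (ties resolve by first occurrence)
```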
Yep.
They all have a knob called “temperature” that varies the predictability, but not all LLM interfaces show that.
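For anyone curious what that knob actually does, here’s a minimal sketch (made-up scores over a toy vocabulary, not any real model’s API): temperature rescales the model’s raw scores before they become sampling probabilities, so low values make the output near-deterministic and high values let the unlikely words through.

```python
import math
import random

# Made-up logits over a toy vocabulary, purely for illustration;
# a real model produces one score per token in its vocabulary.
logits = {"the": 4.0, "a": 3.5, "purple": 1.0, "zeugma": 0.2}

def sample(logits, temperature=1.0):
    """Sample one token; low temperature -> predictable, high -> surprising."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    biggest = max(scaled.values())
    weights = [math.exp(s - biggest) for s in scaled.values()]  # stable softmax
    return random.choices(list(scaled), weights=weights, k=1)[0]

random.seed(0)
print([sample(logits, temperature=0.2) for _ in range(5)])  # almost always "the"
print([sample(logits, temperature=2.0) for _ in range(5)])  # noticeably more varied
```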
In the last 10 years, social networking gave us cyberbullying, fraud, doxxing, SWATting, misinformation, disinformation, Trumpism/MAGA, hydroxychloroquine, and TikTokkers dancing in Costco. Yep, certainly trust that technology.