Like AI itself, it’s basically a marketing bullshit term. The way people use these terms now is akin to how people thought Elon Musk was a rocket scientist and engineer rather than a dubious business graduate with a basic degree and a talent for bullshitting in a way that other investors lap up like it’s dogbutt (investors being dogs in this analogy).
What I am scared of is that people take the bullshit outputs of AI systems seriously and work off that bad assumption. This might immanentize the “hallucinated” eschaton. Never underestimate techbros’ propensity to huff their own farts. I remember a really smart computing graduate student who worked for me in my library and was incensed that our bibliographic record didn’t accord with Amazon’s, because Amazon was obviously right! A slew of examples of Amazon being dead wrong, along with explanations of why it was so often wrong, didn’t help. I get the same thing when I say something critical about the shit outputs Google makes too. Who am I to criticise the truth of our techbro overlords?
I get it too when ’bros lose their shit over the EU AI act “that could apply to a thermostat”!
In one sense: precisely. Replace the marketing term AI with automation (as Emily Bender says) and you have what the EU seeks to regulate. Thermostats have standards, and if people allow them to kill people, someone will be responsible. Saying “the AI did it” must never be an option for dodging responsibility.
“Hey, you know that lawyer-girlfriend who can’t exist in Dr Haber’s gray world because her Black identity is such a part of her existence, and George has to get her back with help from his alien space turtle friends?”
“I guess.”
“Well, we’re not doing any of that.”
"Sounds wise. The whole point of the book is that you don’t want to be too woke.
“Exactly.”
Yes yes yes. Speaking as a professional data-cruncher, I do wish the term “AI” would crawl back into its original wonky cave, in the Venn-diagram overlap of computer science and psychology, where people were actually trying to answer questions about how reasoning happens.
“Unsupervised machine learning” is the term for what’s being bandied about as AI: take some data, do some stats, make a forecast, let the humans decide if the forecast makes sense.
Even the word “learning” is a stretch, but when “machine learning” became accepted technical jargon, “learning” was understood to have a very strict definition related to building a statistical model. It by no means meant “learning” as a human would experience it.
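A minimal sketch of that workflow in Python (the data and numbers here are invented for illustration): the “learning” step is nothing more mysterious than fitting a line to points, and the forecast is just that line extended forward.

```python
import numpy as np

# "Take some data": twelve months of made-up sales figures
months = np.arange(12)
sales = 100 + 5 * months + np.random.default_rng(0).normal(0, 3, 12)

# "Do some stats": the "learning" is estimating a statistical model --
# here, an ordinary least-squares line through the points
slope, intercept = np.polyfit(months, sales, deg=1)

# "Make a forecast": extrapolate the fitted model to month 12
forecast = slope * 12 + intercept
print(f"Forecast for month 12: {forecast:.1f}")

# "Let the humans decide if the forecast makes sense": the model can't.
```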
So yeah, I’m also quite over this AI love-fest the media is showering us with. All thanks to those like @Melizmatic and @robertmckenna who are tirelessly pointing out the bullshit of it all.
Yes, and it is as misguided as the term “artificial intelligence” itself. A more fitting term is “bullshitting”, in the Harry Frankfurt sense.
There’s an adage I’m sure I read somewhere here on the BBS:
Q: What’s Artificial Intelligence?
A: A poor choice of words in the 1950s.
Careful, though: technically, LLMs and image generators are algorithms obtained by supervised machine learning. The difference between supervised and unsupervised ML is that in the former, the learning procedure constructs a function y = f(x) (which transforms inputs x into outputs y) from observed pairs (x, y), whereas in the latter, no observations of the target y are available.
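To make that distinction concrete, here’s a minimal sketch in Python with scikit-learn (toy data invented for the example): the supervised case learns f from observed (x, y) pairs; the unsupervised case gets only the x’s and has to find structure on its own.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Supervised: observed pairs (x, y) -> learn a function y = f(x)
X = rng.uniform(0, 10, size=(50, 1))
y = 2.0 * X.ravel() + 1.0 + rng.normal(0, 0.5, 50)  # noisy targets
model = LinearRegression().fit(X, y)                # fit f from (x, y)
print(model.predict([[5.0]]))                       # f(5), roughly 11

# Unsupervised: only x's, no targets y at all -- the algorithm must
# find structure (here, two clusters) without being told what's "right"
X2 = np.vstack([rng.normal(0, 1, (25, 2)), rng.normal(5, 1, (25, 2))])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X2)
print(labels)                                       # cluster assignments
```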
My suggestion is to use the term “plausible [output] generator”: LLMs like ChatGPT are “plausible sentence generators”, Midjourney, Dall-E & co are “plausible image generators” etc.
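A toy version of a “plausible sentence generator” in plain Python (the corpus and probabilities are made up for the example): it samples whatever word is statistically likely to follow, and nothing in it has any notion of whether the output is true.

```python
import random

# A tiny invented bigram model: for each word, the words that plausibly
# follow it, weighted by how often they did in some imaginary training text
bigrams = {
    "the":    (["cat", "moon"], [5, 2]),
    "cat":    (["sat", "is"],   [3, 2]),
    "moon":   (["is"],          [1]),
    "sat":    (["on"],          [1]),
    "on":     (["the"],         [1]),
    "is":     (["made"],        [1]),
    "made":   (["of"],          [1]),
    "of":     (["cheese"],      [1]),
    "cheese": ([],              []),
}

def generate(word="the", max_len=8):
    """Repeatedly sample a statistically plausible next word.
    Nothing here checks whether the sentence is *true*."""
    out = [word]
    for _ in range(max_len):
        candidates, weights = bigrams.get(word, ([], []))
        if not candidates:
            break
        word = random.choices(candidates, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the moon is made of cheese" -- plausible, not true
```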