Originally published at: https://boingboing.net/2024/03/29/new-york-citys-ai-chatbot-advises-businesses-to-steal-tips-from-workers.html
…
I just can’t imagine where an LLM would have learned about the virtues of stealing. Truly a mystery for the ages.
This is what happens when you buy the Truth Social chatbot just because it’s described as “The Best Chatbot.”
Conversational LLMs are literally bullshit generators and should never be used to give advice.
They’ve made so many sites useless now. More than once already I’ve gone into a CS chat to get a straight answer about something, only to see a little disclaimer that they’re now using some sort of LLM instead of people, so I have to double-check on my own that every answer I get is legit.
This is ChatGPT, the one that people claim is good.
All our imaginings of machine intelligence from the 1950s onward have been of extremely literal machines that go “DOES NOT COMPUTE” and explode when Captain Kirk tells them “I am lying,” or that tell Han Solo the odds with absurd precision…
The ones we actually built? Fast-talking, bullshit-generating con artists who suck at math!
It always cracks me up!
“Yes, you can take a cut of your workers’ tips. The penalties for doing it are less than the profits, so why wouldn’t you?”
It would seem that Microsoft has been hiring coders from Uber and other thieving “gig economy” companies.
Part of the reason, of course, is that these are not AI at all in the traditional sense. They’re statistical models. They don’t interpret, and they don’t even learn once their training phase is done. The name and all the hype about being on the verge of human-level thought are pure marketing nonsense.
Yup, they are word-prediction engines, determining the most likely next words based on the prompt they are given, the order of words in their training set, and the words that came before. That LLMs can generate anything resembling actual information is a miraculous side effect of how much information humans have organized in words that were then used to train these models.
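For anyone wondering what “word prediction engine” means in practice, here’s a toy sketch in Python. It’s just a bigram counter over a made-up training text, nowhere near a real LLM’s neural network over subword tokens, but the basic loop of “pick the next word from whatever usually follows the current one” is the same idea:

    from collections import Counter, defaultdict
    import random

    # Made-up training text, for illustration only.
    training_text = (
        "tips belong to the workers . "
        "employers must not take workers tips . "
        "employers must pay at least the minimum wage ."
    )

    # Count which word tends to follow which.
    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def generate(start, length=8):
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            # Sample the next word in proportion to how often it followed this one.
            choices, weights = zip(*options.items())
            out.append(random.choices(choices, weights=weights, k=1)[0])
        return " ".join(out)

    print(generate("employers"))
    # e.g. "employers must pay at least the minimum wage ."

Everything it produces is driven by co-occurrence counts, not by any notion of what is legal or true.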
The problem for an agent like this, though, is that the most probable answer is not always the right one, and people ask questions precisely when they need the right one. If I ask a bot whether something is legal, I want an answer based on objective facts, not whichever words are most probable to follow my question.
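To make that concrete, here’s the same kind of toy counter fed a purely hypothetical scrape in which the wrong answer happens to be the popular one. Greedy decoding just returns whatever word most often followed the question mark; nothing in the loop ever consults the actual labor law:

    from collections import Counter, defaultdict

    # Purely made-up "internet" text where the illegal answer is the common one.
    scraped_text = (
        "can i take a cut of tips ? sure . "
        "can i take a cut of tips ? sure . "
        "can i take a cut of tips ? no ."
    )

    follows = defaultdict(Counter)
    words = scraped_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    # Greedy decoding: answer with the single word that most often followed "?".
    print(follows["?"].most_common(1)[0][0])  # -> "sure"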
As I’ve written before, hallucinations are a feature, not a bug. These models do not “know” anything. They are mathematical behemoths generating a best guess based on their training data and labeling, and thus do not “know” what you are asking them to do. You simply cannot fix them. Hallucinations are not going away.