New York City's AI chatbot advises businesses to steal tips from workers

Originally published at: https://boingboing.net/2024/03/29/new-york-citys-ai-chatbot-advises-businesses-to-steal-tips-from-workers.html

9 Likes

I just can’t imagine where an LLM would have learned about the virtues of stealing. Truly a mystery for the ages.

13 Likes

This is what happens when you buy the Truth Social chatbot just because it’s described as “The Best Chatbot.”

6 Likes

Conversational LLMs are literally bullshit generators and should never be used to give advice.
They’ve made so many sites useless now. More than once already I’ve gone into a CS chat to get a straight answer about something, only to see a little disclaimer that they’re now using some sort of LLM instead of people, so I have to double-check on my own that every answer I get is legit.

9 Likes

[GIF: The Simpsons, season 3]

6 Likes

This is ChatGPT, the one that people claim is good.

1 Like

All our imaginings of machine intelligence from the 1950s on have been of extremely literal machines going “DOES NOT COMPUTE” and exploding when Captain Kirk tells them “I am lying,” or telling Han Solo the odds to absurd precision…

The ones we actually built? Fast talking bullshit generating con artists who suck at math!

It always cracks me up!

7 Likes

“Yes, you can take a cut of your workers’ tips. The penalties for doing it are less than the profits, so why wouldn’t you?”

3 Likes

It would seem that Microsoft has been hiring coders from Uber and other thieving “gig economy” companies.

Part of the reason, of course, is that these are not at all AI in the traditional sense. They’re statistical models. They don’t interpret; they don’t even learn once their training phase is done. The name and all the hype about being on the verge of human-level thought are pure marketing nonsense.

5 Likes

Yup, they are word-prediction engines, determining the most likely next words based on the prompt they are given, the order of words in their training set, and the words that came before. That LLMs can generate anything resembling actual information is a miraculous side effect of how much information humans have organized with words that were then used to train these models.
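To make that concrete, here’s a toy sketch in Python (a bigram model over a made-up corpus, nothing like a real LLM’s scale, but the same basic idea): the “model” knows nothing except which word tends to follow which.

```python
# Toy sketch of next-word prediction: pick the most frequent follower
# seen in the training text. No meaning, just word statistics.
from collections import Counter, defaultdict

corpus = "the law says tips belong to workers . the law says wages must be paid".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely next word."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text by chaining predictions from a prompt word.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the law says tips belong to"
```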

The problem for an agent like this, though, is that people ask questions precisely when the most probable answer is not necessarily the right one. If I ask a bot whether something is legal, I want an answer based on objective facts, not on which words are most probable to follow my question.

1 Like

As I’ve written before, hallucinations are a feature, not a bug. These models do not “know” anything. They are mathematical behemoths generating a best guess based on training data and labeling, and thus do not “know” what you are asking them to do. You simply cannot fix them. Hallucinations are not going away.
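A toy illustration of why, in Python with made-up numbers: decoding samples from a probability distribution over tokens, and it emits a token whether that distribution is sharply peaked or nearly flat. Nothing in the mechanism distinguishes “knowing” from “guessing.”

```python
# Sketch of why "hallucination" is baked in: decoding always produces a
# token, even when the model's distribution is nearly flat (i.e. it has
# no idea). There is no built-in "refuse to answer" path.
import math
import random

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits):
    """Pick a token in proportion to its probability -- confident or not."""
    probs = softmax(logits)
    return random.choices(tokens, weights=probs, k=1)[0]

tokens = ["yes", "no", "maybe"]
confident = [5.0, 0.1, 0.1]   # model strongly prefers "yes"
clueless  = [0.1, 0.1, 0.1]   # model has no preference at all

print(sample(tokens, confident))  # almost always "yes"
print(sample(tokens, clueless))   # a coin flip -- but still a fluent answer
```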

2 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.