Microsoft's AI blunder: inappropriate poll sparks public outrage

Originally published at: Microsoft's AI Blunder: Inappropriate Poll Sparks Public Outrage


In response to the backlash, Microsoft deactivated AI-generated polls for all news articles and promised to investigate the cause of the inappropriate content.

Gee, do you think maybe the cause could be putting a mindless AI that can only remix and parrot information it’s given in charge of creating polls? Once again I’m left wondering if most of the people who use AI even realize that it’s not sentient artificial life, despite the name being used for that in science fiction stories for decades. Do they really trust it to understand tact and human sensitivity when it’s still nothing but a glorified chat bot?


This is one of those “well, what did they think would happen?” moments.

There’s no excuse for this on Microsoft’s part. They know that generative AI is likely to do random and sometimes inappropriate things. Letting a generative AI system create content on public-facing pages is asking for trouble. They can only be thankful that it wasn’t much, much worse.

I suspect that there are some interesting company internal memos flying around right now, and that whoever thought this was a good idea has been given the opportunity to discuss the matter further with some extremely senior people in Microsoft’s legal department.


I think this is probably where the next AI bubble is going to pop. Humans anthropomorphize things. They name their vehicles, talk to their pets, and think that computers ‘think’ like humans.

I was kind of upset by the recent 60 Minutes piece where Geoffrey Hinton (a pivotal figure in neural networks) was doing the “oh my god, these AI things are smart” routine. Aside from making him look like a smug ass with his own comments about himself (I suspect they edited it that way whenever he jokingly tooted his own horn; it’s 60 Minutes, after all), it was once again the “oh no, it’s thinking” type of scare story.

We’re definitely making sci-fi ‘virtual intelligences’, which would be computer applications that can simulate people for interaction purposes.

Also, I’m not sure I trust humans to understand tact and human sensitivity all the time. And that’s not just a cheeky internet-comment joke: if humans can get tact wrong, why wouldn’t a machine?

There’s an interesting situation happening with AI, where there’s kind of an implicit idea that a computer would be better at something because it wouldn’t make a mistake like a human would. The computer wouldn’t use emotion to do something; it just does what it’s programmed to do, and any error is created by the programmer. You use a computer to do math so you don’t make a mistake. You use a computer to drive a car so it doesn’t check its text messages and hit someone while distracted. The reality, though, is that when we start doing human-ish things with computers, they make all the same kinds of mistakes humans can: neural networks are trained on imperfect data, and their associations can be inscrutable, tangential, and even hostile.


But there is an explanation. After the Tay fiasco, MS decided to be a leader in ethical AI and actually hired a team to back that up. Fast forward to 2022 and the hype building around plausible sentence generators. MS fired their ethical AI team (late summer ‘22) and rolled out their Sydney bot in public beta. In December 2022 they were told that their system was recommending people kill themselves; they ignored it, spent billions on plausible sentence generators, and sacked thousands of people, many of whom had a job stopping fuck-ups like this.

So there are reasons and culpability and agency at work here.


Definitely, since Guardian Media Group alleges that it has suffered “significant reputational damage”. IANAL but it sounds as if they expect a substantial financial settlement and will seek advice about their legal options if none is forthcoming.



And I repeat, “what did they think would happen?”

I forget whose law it is that merges a flipped version of Hanlon’s Razor with one of Clarke’s Laws and concludes that “Any sufficiently advanced incompetence is indistinguishable from malice,” but this seems to transcend mere negligence and enter the realm of actual malfeasance.

To be honest, if they haven’t learned their lesson by now, they’re probably incapable of learning it, and we can expect more of the same. And maybe it’s like police brutality: no one can be bothered to fix it, so settling the lawsuits just becomes a line item in the annual budget.


They thought that investing billions in the LLM hype, while sacking thousands of the people who moderate content and otherwise create an environment where you can make money while staying legal and compliant with regulators, would make them look cool to Wall St.

Which it did.


I can’t believe that a bot trained on user internet behavior would generate lurid tasteless speculation in response to a murder story. Clearly ‘the algo’ is too inscrutable to hold anyone accountable for this outcome…


Especially if they’re allowing it to appear without first being reviewed and approved by a human editor. It’s pretty disappointing (but not really surprising) that they apparently aren’t even taking that basic step.


Look at the promise of things like Grammarly. “use a computer as your editor!” What a terrible mindset.


Grammarly has recently been in common use by me for purposes relating to brushing up on my writing. In spite of the fact that sentences written by me are still too lengthy, at least the register is super consistent, and you can take that to the bank!


“significant head injuries”

Well, if it was suicide, she was determined.

In all seriousness tho, certain topics should be automatic no-go zones for running AI content alongside them: deaths, sexual assaults, anything involving children… Hell, you could train AI to recognize sensitive topics with a set of manually labeled news stories, and then it might be able to “read the room.”
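That kind of “read the room” filter can be sketched with nothing but the standard library: a toy naive Bayes classifier over manually labeled headlines. The headlines, labels, and function names below are invented for illustration; a real filter would need thousands of labeled examples and proper tokenization.

```python
import math
from collections import Counter

# Hypothetical hand-labeled training set (invented for illustration).
LABELED = [
    ("Woman found dead at school, police investigating", "sensitive"),
    ("Teacher charged over assault of student", "sensitive"),
    ("Child missing after park outing, search underway", "sensitive"),
    ("Local bakery wins regional pastry award", "safe"),
    ("City council approves new bike lanes", "safe"),
    ("Tech giant unveils quarterly earnings report", "safe"),
]

def train(examples):
    """Count per-label word frequencies and headline totals."""
    counts = {"sensitive": Counter(), "safe": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def is_sensitive(headline, counts, totals, threshold=0.0):
    """True if the naive Bayes log-odds favor the 'sensitive' label."""
    score = math.log(totals["sensitive"] / totals["safe"])  # class prior
    vocab = len(set(w for c in counts.values() for w in c))
    n_sens = sum(counts["sensitive"].values())
    n_safe = sum(counts["safe"].values())
    for w in headline.lower().split():
        # Laplace smoothing so unseen words don't zero out the score.
        p_sens = (counts["sensitive"][w] + 1) / (n_sens + vocab)
        p_safe = (counts["safe"][w] + 1) / (n_safe + vocab)
        score += math.log(p_sens / p_safe)
    return score > threshold

counts, totals = train(LABELED)
print(is_sensitive("Woman found dead near beach", counts, totals))  # prints True
```

A flagged headline would then be routed to a human editor instead of getting an auto-generated poll, which is exactly the basic step Microsoft apparently skipped.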

Perhaps the better question here is: Why is Microsoft running AI-generated polls with comments alongside stories on their Start aggregator? What are they even doing with the results and comments? Are they moderating comments at all?

MS doesn’t operate any kind of real social network, so either the data is discarded when the story falls off Start, or they’re mining it for… what?


This is why you don’t send a skin job to do human work.


A tool, used stupidly, produces stupid results. QED.


… maybe the cause is when they hired somebody to generate “polls” where readers are supposed to make shit up and pretend to be psychic

In this case it was especially offensive, but really the whole concept should be offensive to any serious journalistic institution :confused:


Microsoft Start is “The content you care about, simplified and reinvented.” It is to journalism what LinkedIn is to authentic human connection, perhaps with worse graphic design.


My feelings about AI polls

  • AI polls will get someone killed through gross stupidity
  • AI polls are the best thing since recaptcha!
  • AI polls need to die in a fire
  • AI was an underrated movie
  • AIl your base are belong to us

I came across a reference the other day to “Artificial Stupid Intelligence,” which I found particularly appropriate.


It was a murder.
Another teacher at the school was responsible; he killed himself by jumping off a cliff.
The press coverage here has been pretty bad too, largely focusing on the murderer and what a nice guy he was (or so we thought).
The fact that they were both young, white, attractive teachers at a prestigious school makes this a headline story here, but the real headline story is that every two days a woman in Australia is murdered by a partner, husband, or ex.