They experimented; they got unexpected data. Shit happens. So why the mandatory we-are-sorry apology ritual?
Most of the world population, including the majority of their customer base, isn’t that logical.
(Humanity! Feh!)
The first case of Sentiens-Machinacide.
The Robots won’t forgive you, Bill Gates!
Because most people only saw the worst of the worst, it happened in full public view rather than in front of an internal QA team, and that is really not a good look for Microsoft.
“We didn’t design it that way.” Yes you did!
I mean yes, the objective was clearly not “a bot that spews racist garbage and harasses people.” But Tay’s design allowed for this to happen, regardless of good intentions.
Wouldn’t have happened if they had limited the output to nine-digit codes.
Exactly. If they knew the outcome in advance, there would be no point in doing the experiment.
Now what will Microsoft do (for extra credit) if one of their bit-bots figures out that this smarmy patent-trolling company is evil to its rotten core?
And?
What’s the problem? Did anybody get killed?
Offended? No. Amused to high heaven? You betcha.
What still baffles me is that the data were unexpected. They dressed up a naive expert system as a teenage girl and sent it to go learn from internet denizens.
That’s…not exactly tactical genius.
Sometimes you just have to throw a grenade into the lake to see what floats up.
Wasn’t it fun, anyway?
Oh, I found it highly amusing; I’m just not sure how people sharp enough to be on Microsoft’s AI research team didn’t know they were stepping on a land mine filled with shit sandwich when they decided on this plan. Watching them was good fun, but being them might not have been.
We live in a country where Donald Trump can be taken seriously as a presidential candidate. What the hell did they expect? Rainbows?
Conversation? AI? …excuse me while I go upgrade my vehicle’s cruise control feature to an autopilot feature by calling it one.
it’s also fairly ridiculous they didn’t consider something like this a possibility and have … oh, i don’t know … a person there to watch and bless the tweets.
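a gate like that would have been trivial to bolt on. rough sketch in python – and to be clear, every name in it is a made-up stand-in; nobody outside microsoft knows what their actual pipeline looked like:

```python
# rough sketch of a human-in-the-loop gate for a bot's outgoing tweets.
# everything here is hypothetical (generate_reply and post_tweet are
# stand-in stubs); the point is just that nothing goes out until a
# person has looked at it.
from queue import Queue

pending = Queue()

def generate_reply(user, text):
    # stand-in for whatever model actually drafts the reply
    return f"hey @{user}, thanks for the chat!"

def post_tweet(user, draft):
    # stand-in for the real call to the Twitter API
    print(f"POSTED to @{user}: {draft}")

def propose_reply(user, text):
    """the bot drafts a reply but only enqueues it for review."""
    pending.put((user, generate_reply(user, text)))

def review_loop():
    """a human drains the queue, blessing or rejecting each draft."""
    while not pending.empty():
        user, draft = pending.get()
        if input(f"post to @{user}: {draft!r}? [y/N] ").strip().lower() == "y":
            post_tweet(user, draft)

propose_reply("example_user", "tell me something edgy")
review_loop()
```

slower, sure. but a lot harder to turn into a nazi in sixteen hours.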
facebook’s ai beats amateur go champion, google’s ai beats go world champion, and… microsoft’s ai turns nazi fembot. i feel for their stockholders.
This fiasco only proves that the Twitter still allows the ideology of the “Dark Enlightenment” to run amok.
How was this some kind of grand experiment with surprising and unexpected results? They included the ability to make it repeat what you tweet at it, and didn’t expect someone to guess that and make it say horrible things?
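For what it’s worth, the rumored “repeat after me” hole amounts to something like this – a hypothetical sketch, since nobody outside Microsoft has seen Tay’s actual code:

```python
# Hypothetical sketch of an unfiltered "repeat after me" feature: the
# bot republishes arbitrary attacker-chosen text under its own name.
# Not Tay's actual code; just the failure mode people described.
def handle_mention(text):
    trigger = "repeat after me "
    if text.lower().startswith(trigger):
        # No filtering whatsoever: whatever follows the trigger phrase
        # goes straight back out as the bot's own tweet.
        return text[len(trigger):]
    return None  # otherwise fall through to the normal chat model

print(handle_mention("repeat after me <anything a troll wants>"))
# -> '<anything a troll wants>', now published as the bot speaking
```

If that’s even close to what shipped, “someone will guess this within the hour” should have been the first line of the threat model.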
I’m only disgusted by the people who targeted it, though. I don’t get why anyone would be angry at Microsoft.
According to the article, people had to exploit a vulnerability in order to get it to do this, so “we didn’t design it that way” seems accurate.
“…a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.”
Well, given that in order to get it to spew racist garbage, people had to exploit some sort of vulnerability in the bot, I can totally understand them pulling it down – basically, someone tampered with the experiment.
Yeah. Honestly, even if it wasn’t an exploit and just “we forgot to account for people being shitheads,” I can’t see why anybody would be angry at Microsoft, or why they’d need to apologize. I’m glad they explained what happened, though – even if I sorta wish they’d gone into more detail on what the exploit was, so other people attempting similar things might avoid the same result.