Microsoft's AI blunder: inappropriate poll sparks public outrage

I think this is probably where the next AI bubble is going to pop. Humans anthropomorphize things. They name their vehicles, talk to their pets, and think that computers ‘think’ like humans.

I was kind of upset by the recent 60 Minutes piece where Geoffrey Hinton (a pivotal figure in neural network research) was doing the “oh my god these AI things are smart” routine. Aside from making him look like a smug ass through his own comments about himself (I think they edited the piece that way, cutting to him whenever he jokingly tooted his own horn; it’s 60 Minutes, after all), it was once again the “oh no, it’s thinking” type of scare story.

We’re definitely making sci-fi-style ‘virtual intelligences’: computer applications that simulate people for interaction purposes.

Also, I’m not sure I trust humans to understand tact and human sensitivity all the time. And that’s not just a cheeky internet-comment joke: if humans can get tact wrong, why wouldn’t a machine?

There’s an interesting situation happening with AI, where there’s an implicit idea that a computer would be better at something because it wouldn’t make mistakes the way a human would. The computer wouldn’t act on emotion; it would just do what it’s programmed to do; any error would come from the programmer; and so on. You use a computer to do math so you don’t make a mistake. You use a computer to drive a car so it doesn’t check its text messages and hit someone while distracted. But the reality is that when we start doing human-ish things with computers, they make all the same kinds of mistakes humans can, because the neural networks are trained on imperfect data, and the associations they form are inscrutable, tangential, and hostile.
