What is machine learning anyway? This three minute animation delivers an excellent simple explanation

Originally published at: What is machine learning anyway? This three minute animation delivers an excellent simple explanation | Boing Boing


What is machine learning anyway?

The THEY want you to believe that robots/machine learning are coming for your livelihood/pay/jobs to keep you scared into this capitalism slavery.


Well, they are.


Roko’s Basilisk has taken note of your disloyalty.


What is machine learning, anyway?

Machine learning is automation of bias.


That is the exact phrase I’ve been using to explain Machine Learning to my students. “Machine learning is automation of bias.”
The clearest example I can think of is the program that was (supposed to be) taught to recognise photographs of sheep. It inevitably started labelling fields of green grass as sheep, because that was the most common factor in the vast majority of its data set. It couldn’t define a sheep, it couldn’t recognise one. It had simply developed a bias that it filed under the label “sheep”.
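A toy sketch of how that happens (the features and photos here are entirely made up for illustration): a frequency-based classifier trained on photos where sheep almost always appear on grass ends up keying on the grass, not the sheep.

```python
from collections import Counter

# Hypothetical training data: a feature set per labelled photo.
# Nearly every sheep photo was taken in a field, so "grass" appears
# in all of them; "wool" is sometimes occluded or out of focus.
training = [
    ({"grass", "wool"}, "sheep"),
    ({"grass", "wool"}, "sheep"),
    ({"grass"}, "sheep"),             # sheep mostly hidden by a fence
    ({"grass", "flowers"}, "not sheep"),
    ({"road", "car"}, "not sheep"),
]

# Count how often each feature co-occurs with the "sheep" label.
sheep_counts = Counter()
for features, label in training:
    if label == "sheep":
        sheep_counts.update(features)

# Naive rule: call it a sheep if the photo contains the feature most
# strongly associated with "sheep" in the training set.
best_feature, _ = sheep_counts.most_common(1)[0]  # -> "grass"

def classify(features):
    return "sheep" if best_feature in features else "not sheep"

# An empty green field gets labelled "sheep": the model has learned
# the bias of its data set, not the concept of a sheep.
print(classify({"grass"}))          # -> sheep
print(classify({"wool", "road"}))   # -> not sheep
```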
A good interactive example (current as of this post) is Google Translate for Finnish. These are gender neutral phrases:

  • Hän sijoittaa. (That person invests.)
  • Hän pesee pyykkiä. (That person does the laundry.)
  • Hän urheilee. (That person is playing sports.)
  • Hän hoitaa lapsia. (That person takes care of the children.)
  • Hän tekee töitä. (That person works.)
  • Hän tanssii. (That person dances.)
  • Hän ajaa autoa. (That person drives a car.)

But Google Translate was trained on millions of volumes that were full of translator bias. So instead of coming out as gender neutral, Google Translate gives us:

  • He invests.
  • She washes the laundry.
  • He’s playing sports.
  • She takes care of the children.
  • He works.
  • She dances.
  • He drives a car.

The datasets aren’t curated; they’re not filtered to weed out unjust bias and prejudice. They’re often just massive banks of open data that are assumed to be good enough.
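The translation case can be sketched the same way (the verbs are from the list above, but the corpus counts are invented for illustration): a system that just picks the pronoun it has most often seen with a given verb reproduces whatever bias its corpus contains.

```python
from collections import Counter

# Hypothetical pronoun counts per Finnish verb phrase, standing in
# for a biased translation corpus (these numbers are invented).
corpus_counts = {
    "sijoittaa": Counter({"He": 900, "She": 100}),       # invests
    "pesee pyykkiä": Counter({"She": 800, "He": 200}),   # does the laundry
}

def translate_han(verb_phrase):
    """Render the gender-neutral 'hän' by majority vote over the corpus:
    the bias in the data becomes the bias of the system."""
    pronoun, _ = corpus_counts[verb_phrase].most_common(1)[0]
    return pronoun

print(translate_han("sijoittaa"))       # -> He
print(translate_han("pesee pyykkiä"))   # -> She
```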

I try to get my students to think about the real-world implications of bias automation. We already have such systems in place for IDing suspects and in sentence recommendation for criminal courts. If we can’t even reliably recognise a sheep or translate a gender neutral phrase, what is centuries of prejudice in criminal data doing to the people in our justice system?


I work in enterprise data & analytics, and my company recently did a watch party & discussion of Coded Bias, which really brought the issues around this to the attention of folks who may not have considered it before.

There’s a bit in it describing an issue at IBM: after it was pointed out in one of their ML projects, they did some work to address it, including tests to verify that the particular bias(es) in question were no longer skewing results. That was very cool to see!

Of course, there are tons of possible things to look at, so it needs to be an ongoing process.


I think this post was more useful to me than the video : )


Indeed. And even if we were to minimize bias, machine learning has another fatal flaw: it still gives us systems where we have no way of knowing what it is the system actually learned, and if we discover an error, no way to fix that exact problem; we can only retrain the system and hope it’s better.

This topic was automatically closed after 5 days. New replies are no longer allowed.