Originally published at: https://boingboing.net/2019/06/12/heist-movie-b-plot.html
…
Machine learning classifiers
Bad name for a band.
I’m guessing that the ML classifier wasn’t trained on a lot of photos from poorer countries for obvious reasons. It’s a good example of how bias in ML can be completely unintentional.
AOC was pointing out that ML facial recognition is supposedly more accurate on white men than on women or people of color, and that this is bad. If true, it is bad for police/state surveillance, but it's easily fixable by training the algos on more data. AOC thought the difference had something to do with white men programming the algos, but that seems to defy how ML works. The real issue, which she also pointed out, is that big digital companies shouldn't be selling our data to the government and the government shouldn't be spying on us.
I’m pretty sure that my human brain would do a poor job of recognizing dish soap or toothpaste from a foreign country. But if I saw a toothbrush out of context I’d still recognize it as a toothbrush (probably).
Let's keep in mind this is from a research department. Just because AI sucks at a task today does not mean it will suck tomorrow. The researchers themselves make it very clear in Section 4 of the study that it's important to reduce the biases in these systems.
This is not some cartoony computer-brained villain working to oppress poor people.
I’m disappointed that nobody has recognized the likely role of quiddity prices here.
Metaphysicists simply haven’t been nearly as successful as their more material counterparts in driving improvements in the speed and reductions in the cost of manufacturing goods and materials, so skimping on essential whatness and making up the difference in polymers or pot metal is an effective cost optimization strategy.
Low-cost material causes keep low-end goods from causing substantial confusion between something and nothing; but it's simply harder to classify objects that may have as much as 75% less essence.
The ones that really freak out neural networks (and dogs) are the pieces stamped out in fly-by-night factories that use counterfeit formal and final causes, which can leave you with a completely inscrutable and purposeless lump of material cause within tens of hours of use, since the counterfeits fail so quickly.
How about “Rage against the …”?
Alternative title: “Thread: Artificially intelligent preppie looking for a good study abroad program. Must include homestay.”
It’s kind of funny that this photograph was chosen to illustrate the article.
There are two languages and two miscategorizations, one for each language.
In the Hindi results for "wedding", the sandals are probably miscategorized. I can see why a machine learning tool would put them there incorrectly (assuming it is incorrect), because the design and colors are very much in the style of an Indian wedding sari. The other four photographs appear to be of people at weddings and the surrounding parties.
In the English results for "spice", there is an Old Spice deodorant label. Again, I can see why it was included (it literally says "Spice" on it), but it is wrong. All of the Hindi spice results are much more (ridiculously) photogenic bulk spice vendors.
I mean, I presume that Hindi was used as a stand-in for "poor countries", although I would argue that India isn't poorer; it just has a lot more income inequality, a gap the Republicans are working hard to close…
inscrutable indeed
Implicit bias. Yuck.
This brings up something I thought of a while back when the topic of bias in speech recognition systems WRT their ability to recognise certain accents and dialects came up:
Will we hit a point where in order for AI to recognise a sufficiently broad variety of $thing, its ability to actually distinguish $thing from $another_thing is compromised?
I was thinking of it specifically in terms of voice recognition, because if you train your voice recognition algorithm only on (for example) people with a British Received Pronunciation accent, it will end up being very accurate for those specific people, and useless for anyone else.
On the other hand, once you start introducing more accents into the training data, the room for confusion grows larger, because the number of sounds which equate to each word becomes larger, and the overlap between words increases.
Theoretically, of course, you’d want your AI to have a bunch of separate sub-models so it could go “Oh yes, this person is speaking with a Birmingham accent!” and base recognition of that voice on that particular library…
But I wonder how this is going to end up working as we try to increase the size of our training databases for different types of machine learning…
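For what it's worth, here's a rough sketch of that two-stage "sub-model" idea from a couple of paragraphs up: an accent identifier sitting in front of per-accent recognisers. Every name in it is made up for illustration; it doesn't reference any real speech API.

```python
# Hypothetical two-stage recogniser: guess the accent first, then hand the
# audio to a model trained on that accent only. All names are invented for
# illustration; this is not a real speech-recognition library.

from typing import Callable, Dict

Recogniser = Callable[[bytes], str]   # audio in, transcript out

def recognise(audio: bytes,
              identify_accent: Callable[[bytes], str],
              per_accent_models: Dict[str, Recogniser],
              general_model: Recogniser) -> str:
    """Route audio to an accent-specific model, falling back to a general one."""
    accent = identify_accent(audio)                     # e.g. "birmingham", "rp"
    model = per_accent_models.get(accent, general_model)
    return model(audio)
```

Of course, the broad-vs-precise trade-off doesn't disappear; it just moves into the accent identifier, which now has the same "more classes, more overlap" problem one level up.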
*sighs* It's bloody statistics. Dimension reduction, some model building with a training dataset, and applying it to a test dataset.
OF COURSE IT IS BIASED, FFS.
We really need to stop talking about "machine learning" and "artificial intelligence". Metaphors biting our arse, so to speak.
We simplify things using language, to convey some information about the thing itself by comparing it to other things. The statistics we call ML does the same: simplify mathematically in order to compare.
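For anyone who wants the "dimension reduction, training set, test set" bit spelled out, here's a minimal sketch using scikit-learn and made-up random data (nothing to do with the actual study):

```python
# Minimal version of the pipeline described above: reduce dimensions, fit on
# a training set, score on a held-out test set. Random stand-in data; the
# point is that the model only ever reflects whatever the training set covers.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((1000, 50))              # stand-in "features"
y = (X[:, 0] > 0.5).astype(int)         # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))      # looks great, as long as the test data
                                        # resembles the training data
```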
(Note to pedants' pedants: our mind does also use math, in a way; or rather, we can use math to describe the mind. The cat is both a wave and a particle, innit?)
Anyone who finds sense in the above: I lost my sense, please return if you can.
I don't think we'd suffer nearly as much as a machine learning system. When we classify things we draw in a lot of contextual information. Remember these aren't photos of objects sitting on white backgrounds, but photos of homes. So the dish soap will be next to the sink, and the toothpaste will be near the toothbrushes.
But the way we build machine learning systems now, they don't classify toothpaste by thinking about what room it's in. That kind of thinking reduces accuracy when looking at familiar objects (by giving a chance that some extraneous piece of information will corrupt an otherwise easy guess), and it only helps when dealing with unfamiliar situations, which machine learning just doesn't learn to do. Until we have a significant paradigm shift in how we do machine learning, it's always going to have things that look like weird blind spots to us.
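To make that concrete, here's roughly how an off-the-shelf classifier gets queried. torchvision's pretrained ResNet is just an assumed stand-in for "any current image classifier", and the file name is hypothetical; the point is that there's nowhere to tell it "this was sitting next to the bathroom sink":

```python
# A typical off-the-shelf classifier only ever sees the pixels it is handed.
# torchvision's pretrained ResNet is used as a stand-in; "toothpaste.jpg" is
# a hypothetical photo. There is no input for the surrounding room.

import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

img = Image.open("toothpaste.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
print(logits.argmax(dim=1).item())      # an ImageNet class index, decided
                                        # from these pixels and nothing else
```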
I think you've got this backwards. India is definitely objectively poorer than the United States, but it is also substantially more equal. This Wikipedia article has a chart with UN, World Bank and CIA calculations of income ratios and Gini coefficients, and the US has higher inequality on all measures.
I think there are two problems: one is just a matter of computational power and getting more data, the other is systematic and can't be solved within our current approach. If we had pictures of every household thing as a dataset, then this AI wouldn't have a problem (and if it did, the answer would just be more computational power).
But when someone invents a great new way to package toothpaste (it doesn't even matter if you squeeze from the middle anymore!), there is a chance that the AI would just utterly fail to recognize it, having been trained to look for things that are totally unrelated to anything we would normally think of. Data can only predict the future if the future happens to be like the past in a relevant way, and it isn't always.
Exactly. I always make the pun in German that KI stands not for "Künstliche Intelligenz" (artificial intelligence) but for "Künstliche Inselbegabung" ("Inselbegabung" is a neutral, non-derogatory term for the abilities of an idiot savant).