Eminent psychologists condemn "emotion detection" systems as being grounded in junk science

Originally published at: https://boingboing.net/2019/07/20/resting-counterintuitive-face.html


Kudos to the psychology community! It’s about time the scientific community pushed back hard on the BS that so many security and policing product manufacturers have been pushing. There are a lot of products dumped onto the market that do a poor job of what they’re marketed for: facial recognition, remote biometric scanners and analysis systems, etc.


Even when disruptive tech products mostly work, as opposed to mostly not working, they still gloss over the false-positive rates. For so many of these things, a one-in-a-million false-positive rate isn’t spread evenly across the (often involuntary) user base; the same one out of every million individuals cops this shit every single time.

This junk science is not just making everyone’s life a little worse, it’s making some people’s lives a living hell.
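The point above can be made concrete with a toy simulation (all numbers hypothetical). A 0.1% false-positive rate sounds harmless on average, but if the errors are systematic — say, a fixed set of faces always trips the detector — rather than random noise, the same small group absorbs every single false positive, scan after scan:

```python
import random

random.seed(42)

POPULATION = 100_000   # hypothetical user base
FP_RATE = 0.001        # illustrative 0.1% false-positive rate
SCANS = 5              # each person gets scanned five times

# Model A: errors are random noise, spread evenly across everyone.
random_hits = [sum(random.random() < FP_RATE for _ in range(SCANS))
               for _ in range(POPULATION)]

# Model B: errors are systematic -- a fixed subset of people always
# trips the detector (same overall rate, very different distribution).
flagged = set(random.sample(range(POPULATION), int(POPULATION * FP_RATE)))
systematic_hits = [SCANS if p in flagged else 0 for p in range(POPULATION)]

# Under model A almost nobody is flagged on every scan;
# under model B the same 100 people are flagged every single time.
always_flagged_a = sum(h == SCANS for h in random_hits)
always_flagged_b = sum(h == SCANS for h in systematic_hits)
print(f"random errors:     {always_flagged_a} people flagged on all {SCANS} scans")
print(f"systematic errors: {always_flagged_b} people flagged on all {SCANS} scans")
```

Both models produce roughly the same number of total false positives, so the headline error rate looks identical — the difference only shows up when you ask *who* the errors land on.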


But but… 100% of their training set was guilty!


I thought that nonsense came to a halt after this:



There have been a couple of posts on BB recently linking to Wired videos about body language etc. Surely that stuff is mostly bullshit too?
Having read Kahneman’s ‘Thinking, Fast and Slow’, I am hoping that within my lifetime there will be a great cleaning-out of 20th-century junk science and bullshit connoisseurship.


This kind of crap is completely unsurprising. Remember phrenology and graphology/“handwriting analysis” (which is still in use)? There ya go…


In other news, science is hard, nothing is as easy as these snake oil salesmen claim and humans are complex systems not generally reducible to easy binary categories. As psychology comes to grips with its own demons, I think we will see more of this kind of realization.


Depends on how far you go with it. As a not-very-good example: I, like many dog lovers & trainers, can tell a dog’s mood by body stance. Which muscles are in tension, ear position, hair along the backbone ridge, jaw and lip position, tail position: all are quite reliable.
Granted, humans have learned to lie quite convincingly, which makes a lot of body-position reading worthless.


And the cops still continue their love affair with polygraphs (“lie detectors”). They even have a new name, because it’s all sciency and stuff: “psychophysiological detection of deception (PDD) examinations.” It doesn’t make the press much anymore, but I assure you that nonsense is still going strong. Particularly in such bastions of science and reason as the FBI and CIA.

I mean seriously, with an eight-syllable word like psychophysiological, it’s gotta be science, right?

The tech bros don’t seem to have got a hold of it yet, but soon somebody or other will, I am sure.


I suspect that’s the major issue: it would be downright odd if humans somehow lacked the ability to nonverbally convey certain emotional messages; but what is being sold is generally the promise of ferreting out deliberately obscured ones.

If dogs were Machiavellian masters of deception, frequently advancing covert agendas, what we know about their stances would be a lot less useful in telling us about their intentions. As it is, they usually seem to be broadcasting their status pretty honestly (either deliberately or just because they lack this particular metacognitive feature).


They certainly say they can, and do so very authoritatively; people will even believe them based on deference to their experience. But have you asked the dog?

My take is that it’s fairer to say that when you know a dog, you can fairly reliably read its mood. That’s usually true of people too. But a random stranger?


Yeah, and the point of the OP was that you don’t even need to be lying. It claims there are a variety of reasons people present a given facial configuration. Add in different cultures and you get real confusion.


I don’t know; my dogs, and others I’ve owned, can sometimes lie pretty convincingly when they want to.
Lie #1, my dogs are always starving.
Lie #2, my Mastiff is pretty good at lying to the other dog, a terrier, that there’s something that needs to be attended to outside “woof woof”. She does this all the time when she wants some exclusive head scritches or maybe something I’m cooking.
My dogs have other lies they tell, but I don’t wish to go on further.


Unfortunately, I have “resting micro-expression face” so they’re on to me.



Old fashioned! Retrophrenology is the way to go. Maybe adjust the instigators of these schemes with a stout ball pein hammer? https://wiki.lspace.org/mediawiki/Retrophrenology


"… unskeptical about the claims Big Tech makes about the efficacy of its products, taking at face value Big Tech’s sales-literature boasts about being able to detect and manipulate our opinions.

If we’re ready to believe that Big Tech lies about its taxes, its infosec practices, its anti-harassment policies, its privacy policies, its lobbying activities, and everything else it claims about itself, shouldn’t we also ask whether its products actually work?"

We definitely should, and the answer is: in many cases they simply don’t, and don’t provide the efficiency gains they promise. The proof is in the productivity-growth numbers, which have been declining for years – the opposite of a Moore’s-law curve – and are now flatlining or in some cases even negative (meaning it costs a bit more, or takes a bit more time, to complete a product than the year before).
This is especially true of corporate/enterprise software (in contrast to many consumer products and services). Just think of all the useless intranets, time-reporting systems, electronic health records (if you’re unfortunate enough to be an MD in the US, for example) – you know it.
The reasons?

  • First, enterprise software is generally designed like shit.
  • Second, the tech industry constantly downplays or ignores the time and resources needed to maintain, patch, update, and fix its products, as well as the more-or-less unintentional or unforeseen negative consequences of the product (e.g., sending one email is fast, but managing massive spam, phishing, etc. steals the time gained).
  • Third, software affords new bureaucracy – new procedures, new administrative chores – which also steals time from the actual work. See EHR.
  • Fourth, with increased digitalisation, more and more systems are integrated, which means the technological complexity grows combinatorially: every new system can interact with every existing one. The result is more unexpected outages, system breakdowns, etc. See your daily newsfeed.
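A quick back-of-the-envelope on that last integration point: if every system can potentially talk to every other, the number of pairwise links to build and maintain grows quadratically, n·(n−1)/2 (a toy sketch, numbers purely illustrative):

```python
def integration_links(n: int) -> int:
    """Number of pairwise connections among n fully integrated systems."""
    return n * (n - 1) // 2

# Doubling the number of systems roughly quadruples the integration work.
for n in (5, 10, 20, 40):
    print(f"{n:>2} systems -> {integration_links(n):>3} possible integrations")
```

So going from 5 systems to 40 isn’t 8× the integration burden — it’s closer to 78×, which is why each new system added to an already-integrated estate costs more than the last.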

So in recent decades, technology has failed to deliver the dramatically increased productivity it did during the first two-thirds of the 20th century.
The bizarre twist is that the burden has now been shifted to people: that’s why you see all the headlines and ads about “How you can become more productive.”


I think a lot of the basis for these emotion-detection systems is the assumption that there is a common (probably instinctual) key that defines what expressions and gestures mean. That simply isn’t true. To reduce this to its most basic form (not to absurdity, I hope): if you ask someone in Bulgaria a question and they nod their head, they’re telling you no. It’s a gesture nobody thinks about. It “feels” instinctual, but it’s cultural.



Like most of this stuff, I suspect the answer is ‘it depends’.

If you’re using it retail, to try and gauge the response of an individual or small group you already know to some degree, for small stakes, it probably has some merit. Hell, I’m sure it has some merit: think of someone you know who is terrible at reading a room and is constantly making inappropriate comments at inappropriate times. That person is the proof that this stuff does actually have a use, and does actually work for most of us most of the time.^

The problem - as is so often the case - comes when some bright spark with an MBA tries to implement it wholesale, on entire populations, with inadequate oversight and damaging outcomes.

If I misread my partner’s cues and make a verbal gaffe… well, the couch is reasonably comfy. I’ll be OK, and we’ll work it out.
If the state misreads the cues of an entire population and then acts punitively based on that suspicion … that’s a whole different scale of problem.

^ See also @dreamrnj’s example about nodding in Bulgaria. Alternately, you could try hitching a ride in Greece.