London councils plan to slash benefit payments with an "anti-fraud" system known to have a 20% failure rate

Originally published at: https://boingboing.net/2019/03/05/universal-credit-2-0.html

4 Likes

Longstanding experience with automated fraud-detection systems has shown that the judgments of these systems come a veneer of empirical legitimacy that turns the presumption of innocence on its head.

@doctorow: I believe you mean to say “become a veneer”.

(Sorry to pick nits. It’s only because I am an admirer of your work.)

3 Likes

Random selection for human review seems like a far better choice.

Anything else smacks of a money funnel to the rich.

I have some experience of these guys and similar methods, and a lot of what they do is valid.
The problem is not necessarily BAE’s fraud detection rules/algorithms (though it may be). A 20% false positive rate suggests it needs more training data and some refinement.

But the real problem is deployment. This should be deployed as an ‘anomaly flagging system’, and the humans to whom the anomaly is flagged should be trained to ask: what is the anomaly, why might it have been flagged, is it a real anomaly, is there a possible explanation other than fraud, should we talk to the subject to find out - and so on, before any fraud is assumed. Admittedly, talking to the claimant can ‘contaminate’ an investigation where there is genuine fraud, but better that than sacrificing genuine claimants to some automated system that deems them fraudulent without checking.

When I was working in this area there was some care to say these systems were simply flagging possibly suspicious circumstances - not simply “here’s some fraud”. But this system will probably be deployed as a ‘fraud detection system’, and the humans will likely not have been told that it has a 20% false positive rate, that it is only flagging anomalies which may or may not be genuine, and that they should be open to rational reasons why the flag may be false. They will simply assume “fraud” and work from there. That is why vulnerable people will be adversely affected.
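To put rough, purely illustrative numbers on why that matters (these are not BAE’s figures, and the fraud prevalence is an assumption of mine): if the quoted 20% is a per-claim false positive rate, and genuine fraud is rare among claims, then most of the people the system flags will be innocent.

```python
# Rough illustration only - not BAE's figures. The prevalence, detection rate
# and false positive rate below are assumptions for the sake of the arithmetic.

def innocent_share_of_flags(prevalence, detection_rate, false_positive_rate):
    """Fraction of flagged claims that are NOT fraud (Bayes' rule)."""
    flagged_fraud = prevalence * detection_rate
    flagged_honest = (1 - prevalence) * false_positive_rate
    return flagged_honest / (flagged_fraud + flagged_honest)

# Assume 2% of claims are fraudulent, the system catches 80% of those,
# and it wrongly flags 20% of honest claims.
print(innocent_share_of_flags(0.02, 0.80, 0.20))  # ~0.92: most flags are honest claimants
```

Even on the kinder reading - that one flag in five turns out to be wrong - a reviewer who treats every flag as “fraud found” is still presuming one in five honest claimants guilty.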

I have no issue with the automated scoring that produces an output - I have a seriously big issue with the output going into an automated system where fraud is assumed simply because of the algorithm’s output. The output MUST be reviewed by properly trained humans who are aware of the system’s parameters and shortcomings, before it goes into any automated system to remove or reduce benefits, or even before there is any call on the claimant to prove they are NOT committing fraud.

ETA: and if the humans reviewing cases only get the score, and do not get the full detail of how each score was reached and which rules were triggered, then the system should not be rolled out. So-called ‘black box’ systems, where it cannot be clearly explained why each conclusion / output was reached, are not suitable, as they do not enable detailed human review of the circumstances. If a score is out of 10 and anything over 8 counts as possible fraud, then unless the human can be shown exactly how the 8 was arrived at (and it may be for different reasons than the last or next score of 8), the system is not fit for purpose. If all the human gets is “it’s an 8”, how are they supposed to properly investigate and assess?
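A minimal sketch of what I mean, with invented rule names and weights (nothing here is the actual BAE system): the flagging engine keeps the triggered rules attached to the score, so the reviewer has something concrete to check with the claimant rather than a bare number.

```python
# Illustrative sketch only - invented rules and weights, not the actual system.
from dataclasses import dataclass, field

@dataclass
class Flag:
    score: int = 0
    reasons: list = field(default_factory=list)

# Hypothetical rules: (description shown to the reviewer, weight added to the score)
RULES = [
    ("declared income does not match employer records", 4),
    ("address shared with another active claim", 3),
    ("change of circumstances not reported on time", 2),
]

def assess(triggered: set) -> Flag:
    """Score a claim from the set of rule descriptions it triggered,
    keeping the reasons attached for human review."""
    flag = Flag()
    for description, weight in RULES:
        if description in triggered:
            flag.score += weight
            flag.reasons.append(description)
    return flag

result = assess({"address shared with another active claim",
                 "change of circumstances not reported on time"})
print(result.score, result.reasons)
# The reviewer sees the 5 *and* the two rules behind it, and only then decides
# whether there is an innocent explanation or grounds to investigate.
```

If all that reaches the reviewer is the number, the reasons are lost and the “review” is theatre.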

7 Likes

The failure rate is beside the point. The system’s main purpose is to re-confirm to Little England Thatcherites the existence of “the undeserving” and “the moochers”, this time with the science-y sheen of the algorithms and machine-learning that the young ones are going on about. A secondary purpose is to funnel taxpayer funds to a crony corporation. Perhaps in third place is actually catching cheats.

Humans? They’re a cost centre in the same way human content moderators at Facebook and Twitter are. The whole point of these systems in late-stage capitalist societies is to eliminate staffing expenses by having black-box algorithms informed by the priorities of the oligarchs make the decisions.

10 Likes

It’s also a side-effect of the massive - and I mean massive - cuts to Local Authority funding in England over the last seven years, to the point that some of the poorest areas (and yes, that includes several areas of London) barely have enough money to handle their core statutory responsibilities; fripperies like leisure services or, yes, coherent fraud services have long since fallen to an absolute minimum.
And whilst some authorities are probably happy that they are being offered a privatised and unaccountable solution to a particular problem, others are definitely not.

6 Likes

I got kicked off my state’s health insurance program when their algorithm incorrectly concluded I was working three jobs, at 75 hours a week. I was out of town, and I got the mail informing me of this one month after the coverage had been discontinued (that’s when the notice was issued) and a few days after the appeal deadline (naturally, the state has super-tight deadlines for this).

Meanwhile, you know who has a clear idea of how many hours I work and whether or not I need insurance? Me. Weird… but it turns out I know these things.

7 Likes

Michigan tried this with unemployment insurance fraud, and ended up paying out millions after the computer system convicted 50,000 people of fraud, most of them with no human review. They had to pay back the money after it became a scandal, but for the people who lost jobs and homes, went bankrupt, lost their kids, etc. in the meantime, there was no real compensation. The system had known issues, like the one being proposed here, but the agency figured that if you didn’t respond to a letter sent to your last known address, it was all your fault. https://www.detroitnews.com/story/news/politics/2017/08/11/michigan-unemployment-fraud/104501978/

8 Likes

Longstanding experience with automated fraud-detection systems has shown that the judgments of these systems come a veneer of empirical legitimacy that turns the presumption of innocence on its head.

And as machine learning gets used in more and more contexts, expect this statement to be relevant to… well, pretty much everything.

Of course, one of the major problems is that they use these machine learning systems precisely so that they don’t have to have human beings doing any of that work. As you say, the problem is deployment - and, sadly, we can consistently and correctly guess how these systems will be deployed.

2 Likes

Perhaps. But humans reviewing every single claim in detail when it is made and as it progresses over time is not feasible and fraudsters know it (much fraud is deliberate and intentional but much is also ‘opportunistic’ or neglectful, as claimants’ circumstances change over time).

Having humans review only the claims that have been ‘carefully’ selected/assessed for anomalous (and negatively so) circumstances takes far less resource.

But in many cases, yes, still too much resource for hard-pressed authorities, and the temptation is strong to cut corners, rely on automation, and deal with the resultant errors as best they can. In fact, I suspect it would actually cost them less to resource it properly than to cut corners and then bear the cost of the mistakes. But they do not know how to build proper business cases, or are not permitted to, or - even worse - perhaps they are actively prevented from making the necessary investment in the first place even if they want to.

The jump from “expensive humans” to “just automate” is too often the wrong jump - people are seduced into it, and not enough attention is paid to the intermediate step of “support the humans to be more effective”. The automators do not get big bucks by saying “we can help your people do better” - they get them by saying “we can replace your people”. Too much overpromising and underdelivering, and not enough of the honest reverse.

2 Likes

The design of the system doesn’t really matter. If this is done with a model, or entirely by humans, the cruelty and bias will be baked into the system.

When this was done entirely by humans, targets for the number of sanctions issued and threats of job loss were used to “find” a large number of claims that could be turned down.

Every means-tested benefit system has ended up the same way, with cruelty and indignity baked in by a cruel, puritan conviction that the needy are all grasping frauds who should suffer.

4 Likes

And it can’t be fixed; the cruelty is guaranteed. You can’t ever fix it by fixing that system in isolation. You’d need to fix the whole system. Which you will never, ever be permitted to do.

If welfare were reasonable enough to let recipients live their lives okay, and didn’t torture them with uncertainty and arbitrary rules, then people would simply refuse to do the horrible jobs the present system makes them do. The meaningless, tortuous, unfulfilling jobs that people do in their millions, mostly to make the lower middle class too horrified to object to doing somewhat less torturous jobs whose sole purpose is to make some middle-manager feel important.

It’d be fascinating, really, if it weren’t so depressing.

7 Likes

Did anyone else read ‘claims’ as ‘claimants’ the first time?

I assume that this falls under the umbrella of their “Applied Intelligence” offerings; but that’s not exactly the stuff you usually pay BAE for…

3 Likes


I believe he meant to say ‘come with a veneer’.

Unfortunately the betting ring on Doctorow’s malaprops has been closed down.

1 Like

BAE realised a while back that the defence business was not going to be all about hardware for much longer, and so diversified a bit - in this case by buying a specialist ‘data intelligence’ consulting firm that also has a range of data analysis software tools focused on fraud. They no doubt deploy similar tools for spooks and spies as well as the military. Someone will probably have to shoot me now.

2 Likes

I, Daniel Blake

2 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.