Anyone who claims that machine learning will save money in high-stakes government decision-making is lying

Originally published at: https://boingboing.net/2018/01/12/automating-inequality.html

3 Likes

Algorithmic systems don’t simply cost money to implement. They cost money to maintain. They cost money to audit. They cost money to evolve with the domain they’re designed to serve. They cost money to train their users to use the data responsibly. Above all, they make visible the brutal pain points and root causes in existing systems that require an increase in services.

I’ve been explaining these facts to non-government clients for more than two decades, and I’ll still be doing it 20 years from now. You’d think decision makers would have internalised them one generation into the Internet age, but no: there are still a lot of MBAs looking for a magic bullet that will let them make next quarter’s numbers and spit out the One True Number that will allow them to do away with all that pesky due diligence; and my goodness, there are still a lot of magic-bullet salesmen out there, too.

The consequences in my business are lost profits and lost business opportunities. The ones discussed here involve human lives.

13 Likes

Cory, a person I worked with around 10 years ago is now doing machine learning projects with the CDC. He’s very proud of some of what his team has accomplished. Thanks to you, I now know he is lying.

Is his claim that he could be applying the same skills in Silicon Valley for double the pay also a big fat lie?

If you’d like, I can send you his email address so you can dress him down directly.

2 Likes

I thought we already knew that! 42!

3 Likes

Wow, way to turn a reasonable analysis into an insultingly simplistic headline.

1 Like

This sounds very similar to Cathy O’Neil’s book, Weapons of Math Destruction.

The underlying problem isn’t the math or the machine learning; the problem is that these measurement tools are designed and trained by humans with their own biases and goals. If you want the policy to show result X, you can figure out how to cherry-pick the training data to get the results you want. And you can avoid feeding back the results that might contradict the conclusions you want to arrive at.
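
Here’s a minimal, hypothetical sketch of that cherry-picking (synthetic data and scikit-learn, not any real agency’s pipeline): in the honest sample the outcome has no relationship to the feature, but the filtered sample teaches the model exactly the result you wanted.

```python
# Hypothetical demo: cherry-picked training data manufactures "result X".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Population truth: the outcome is a fair coin flip, unrelated to the feature.
feature = rng.normal(size=10_000)
outcome = rng.integers(0, 2, size=10_000)

honest = LogisticRegression().fit(feature.reshape(-1, 1), outcome)

# Cherry-picking: keep positive outcomes only where the feature is high,
# negative outcomes only where it is low, and discard the inconvenient rest.
keep = ((outcome == 1) & (feature > 0)) | ((outcome == 0) & (feature < 0))
picked = LogisticRegression().fit(feature[keep].reshape(-1, 1), outcome[keep])

print("honest coefficient:       ", round(honest.coef_[0, 0], 2))  # ~0.0
print("cherry-picked coefficient:", round(picked.coef_[0, 0], 2))  # strongly positive
```

Nothing in the second model’s output reveals that the “relationship” it found was built into the sample selection; the audit has to happen upstream, at the data.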

2 Likes

[Stops replacing Turkish Administration with Intel ML USB3 dongle, slaps brains back in place.] Fine.

“Machine learning” and “artificial intelligence”: in their current state those systems are powerful tools, but nothing more, like a big excavator. And they require the same type of maintenance, and the same type of training for operators.
The future of those systems depends heavily on how they will be trained, which is a question for a science of education that does not yet exist.
For more:
The Dawn of The New AI Era.

1 Like

Indeed, I consider the headline to be no favour to the book, which seems (judging by the actual review) to be quite reasonable.

Personally, I consider AI and big data to be just another tool in the desperate fight that mankind has been waging since the dawn of sentience: how to turn a 10,000-variable problem into a single-number output.

We yearn to learn the “star” rating of how “good” a book or movie is. We want a ranking of “quality of life” in countries. We want to know what raise my coworker “deserves”.

To me, the main danger is that Big Data and AI hide the fact that we’ve compressed all 10,000 of those variables down to the 10 or 20 that were actually considered. When it’s done by a human, at least we’re aware that they’ve thrown away all but 2 or 3 variables. Algorithms also tend to be “locked in”: as humans, we can take a variable that is normally assigned no weight and realize that in this particular circumstance it’s actually important.
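
To make that lock-in concrete, here’s a toy scoring function (all variable names and weights are invented for the example, not taken from any real system): every input not on its short list gets a weight of zero, forever, no matter how much it matters in a particular case.

```python
# Toy "One True Number" scorer: thousands of possible variables, three weights.
FIXED_WEIGHTS = {"income": 0.5, "prior_cases": -0.3, "zip_risk": -0.2}

def one_true_number(case_file: dict) -> float:
    """Collapse a whole case file into a single score.

    Any variable not named in FIXED_WEIGHTS is silently assigned zero
    weight, and the weights never change, however unusual the case.
    A human reviewer could decide an ignored variable matters here;
    this function structurally cannot.
    """
    return sum(w * case_file.get(k, 0.0) for k, w in FIXED_WEIGHTS.items())

case = {"income": 1.2, "prior_cases": 1, "zip_risk": 2,
        "caring_for_sick_parent": 1}   # locked out: weight is always 0
print(one_true_number(case))
```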

And of course, there’s GIGO for computers: “Garbage in, garbage out. But that garbage, having gone through an expensive and somewhat mysterious process, is somehow ennobled, and worth far more than the inputs.”

1 Like

Perhaps we should tell that to whoever wrote the title of the book. Far too much of the recent online discussion of these effects has focussed on the tool rather than on the people who decide how to use it.

This topic was automatically closed after 5 days. New replies are no longer allowed.