Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives

Originally published at: http://boingboing.net/2016/09/06/weapons-of-math-destruction-i.html
I’ve been writing about the work of Cathy “Mathbabe” O’Neil for years: she’s a radical data scientist with a Harvard PhD in mathematics, who coined the term “Weapons of Math Destruction” to describe the ways that sloppy statistical modeling is punishing millions of people every day, and in more and more cases, destroying lives. Today, O’Neil brings her argument to print, with a fantastic, plainspoken call to arms called (what else?) Weapons of Math Destruction.


As O’Neil puts it, “Models are opinions embedded in mathematics.”

This all by itself is a point worth screaming from the mountaintops–as a subset of the larger idea that science is a human endeavor undertaken by humans for humans in a human social context in service to human agendas to achieve human goals. (And I say this as a fan of both humans and science.) Models encode opinions, and so do technologies, taxonomies, equations, stats packages, medicines, database architectures, and all the rest. It’s only a problem when we forget that, and then (as this shows) it can be a huge problem.

Thank goodness for mathematicians like her who are willing to take the statement “the math says that…” and reframe it as “you claim, in a mathy accent, that…”.


It seems you are conflating current concerns about big data and machine learning with plain old-fashioned lying with statistics, which needs none of that. Certainly the Reagan administration didn’t use “big data” in any meaningful sense, and while it is certainly possible that the University of Phoenix is using algorithms and big data now, scammy for-profit schools have existed for decades (I remember all the sleazy “computer repair” schools that used to advertise in the back of Popular Mechanics in the 1980s).


As others have noted, the lazy thinking and breathtaking cruelty are not new. It’s the way this stuff is packaged that’s changed. It’s more opaque than before. It carries more authority than before.

My mind keeps wandering back to the 80s and the introduction of 8-bit computing to the home market. Up to that point, computers were emblematic of that kind of casual, uncaring bias. For a brief time, it felt as if the little guy had a chance to push back with computers of our own.

Myself, I blame the capitalist software tradition: teaching everyone to program, the way everyone is taught to drive, was dismissed as no more than a marketing ploy. It’s common, everyday understanding of these processes that inoculates against too much trust in these systems. But we’ve collectively hired a staff of specialists to worry about that stuff, so it’s not really open for discussion.


From the article:

If the prison system was run like Amazon – that is, with a commitment to reducing reoffending,

This would require a bureaucracy with a commitment to reducing the needs for its own services, so that it would have smaller budgets and smaller staffs in the future.

No such thing has yet been determined to exist. See “War on Poverty”, “War on Drugs”, “War on Terror” for examples to the contrary.


If these machine-learning algorithms are black boxes, is it really reasonable to call them models?

Traditionally, aren’t models supposed to show and teach things?

A scale model, an iconic model, and a mathematical model do different things, but the user can take them apart and study each part.

Low-complexity and high-complexity mathematical models suit different testing techniques: lower complexity means fewer variables, so it’s easier to test those variables against limited historical data; higher complexity means more variables, but it’s easier to test some of those variables using experimental data, and to be specific about the reasoning behind the others.

And machine-learning algorithms aren’t doing that, are they?
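The trade-off the comment above describes — fewer parameters being easier to check against limited historical data — can be sketched in a toy example (my own illustration, not from the book or the thread): a two-parameter line can be validated against a held-out point, while a model with as many parameters as data points fits the training data perfectly yet drifts on new data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (two parameters)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

def fit_interpolant(xs, ys):
    """Lagrange polynomial through every training point (n parameters)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Underlying truth is roughly y = 2x with noise; five training
# points, one held-out point for validation.
train_x = [0.0, 1.0, 2.0, 3.0, 4.0]
train_y = [0.5, 1.5, 4.8, 5.4, 8.5]
test_x, test_y = 2.5, 5.0

line = fit_line(train_x, train_y)          # low complexity
poly = fit_interpolant(train_x, train_y)   # high complexity

# The interpolant has zero error on its training points, but the
# simple line generalizes better to the held-out point.
line_err = abs(line(test_x) - test_y)
poly_err = abs(poly(test_x) - test_y)
```

The point isn’t that simple models are always right, just that their few parameters can be interrogated with the limited data available — which is exactly what an opaque black box forecloses.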


Thanks for the article, Cory; here is another recent article on the same topic: http://www.seattletimes.com/business/microsoft/how-linkedins-search-engine-may-reflect-a-bias

I don’t consider Dr. O’Neil to be that radical at all, at least not by the concerns raised here. They are legitimate concerns and worth paying attention to.


Edited for an added layer of dystopian-ness.


They are models to the people who build them, who mostly seem uninterested in refining them. They are black boxes to the people and bureaucracies that use them, who could refine them based on outcomes but are generally prevented from doing so.


… and yet:

Id est, these systems exist, just not in the great old U.S. of A.


To be fair - governments address human needs that will usually exist as long as the government and the population do. That said - some agencies have been discontinued - the National Recovery Administration from the Great Depression being a rather large example.

Here are a few:
National Recovery Administration
Bureau of Insular Affairs
Civilian Conservation Corps.
United States Maritime Commission
US Shipping Board
Federal Security Agency


There is generally more job security in prolonging a problem than in solving it, and job security is a very strong motivator.


sigh, faith in humanity destroyed.

happens a lot these days.


There’s a difference between problems and basic human needs or ongoing administration.


Explanations are only explanations if the people making them, and the people using them, can broadly understand them. (Even if, for example, Prestags doesn’t explain why militia infantry units are rated 2A3.) But, the essay seems to state, with the results of machine-learning algorithms, neither the people selling them, nor the people using them, even know what’s in them.


This is a weird turn of phrase. I mean, I know who she is and I’m happy to read her book, based on other things I’ve seen from her, but this phrase… I don’t think it means anything.


Even in the workplace 8-bits (and even early 16-bits) were a source of freedom away from the centrally managed mainframe – your computer was really yours – but then IT discovered that they could network the desktop computers, and control what could be installed on them and thus regain the control they once had over computing. (Yes, I know the problems of authoritarianism aren’t just at the IT department level, but your mention of the freedom that microcomputers gave reminded me of this).


I have always said that the scariest thing about the gov’t, big data and greedy corporations following everything we do and say, where we go and what we buy, is that they would use this data to make ASSUMPTIONS about people… THAT IS SO DAMN SCARY.

When I go to the facebook page that shows how facebook categorizes you and tries to determine what you like by what you do online, I have to say… it is wildly inaccurate. FB targets dog products to me… I don’t have and never had one. Not only are they getting it all wrong, they believe they are so smart. It’s just so, so, so wrong.

Not only that, but I believe our society is living under a great deal of stress. I don’t care how much someone says “I don’t do anything wrong, therefore I don’t care if they watch me”; in the end, knowing your every move is being monitored is cruel at best. Knowing that you won’t be judged by who you really are, but by some algorithm, is a constant stress hanging over the heads of everyone in our society.

Anxiety, depression, and ultimately self-censorship will therefore cause society to become devoid of innovation, creativity and high-level problem solvers… exactly how the Roman empire was brought down…


Wow! There’s a reference that’s not going to be understood by many. Thanks!

This would also require a voting public that consistently believed that it was possible to reduce the crime rate by means other than imprisoning or killing people who they believe have criminal tendencies.

And before we go and blame those “right-wingers” that don’t think like we do, let me point out the similarities between writing off large segments of society as irredeemable criminals and writing off large segments of society as irretrievably right-wing.

Personally, if I were that concerned about these injustices, I’d be spending time with right-wingers, sympathetically understanding their concerns and trying to address them with fairness, while simultaneously trying to steer them towards my way of thinking and an understanding of my position.

… Nah. Too much work. Sure, I’d like to see more social justice, but if it means spending time with “those people”, it’s just not worth it.