"Intellectual Debt": It's bad enough when AI gets its predictions wrong, but it's potentially WORSE when AI gets it right

Originally published at: https://boingboing.net/2019/07/28/orphans-of-the-sky.html

11 Likes

[W]hen you shave a corner or incompletely patch a bug, then you have to accommodate and account for this defect in every technology choice you build atop the compromised system, creating a towering, brittle edifice of technology that all comes tumbling down when the underlying defect finally reaches a breaking point.

I cannot imagine a better description of the current U.S. Constitution.

22 Likes

An old saying in the places where I’ve worked: it’s bad when something goes wrong and you don’t know why, but you’re really in trouble when it works and you don’t know why.

Not least because Management figures that’s good enough and ships the SOB.

25 Likes

Technically difficult, perhaps, but sociologically, you just have to put in one rule: a human must take responsibility.

An aircraft can have all the AI it wants, as long as the pilot is responsible for its solutions. A bank AI can pick the wrong people to loan to, as long as the loan officer is held responsible for its decisions. A recognition system can misidentify all the black people it wants, as long as the human cop takes the fall for the mistake.

I deliberately ordered the three examples by how hard our society itself tries to eliminate the principle that responsibility is the other side of the coin from power.

Pilots are just naturally responsible; for one thing, they also die if the plane crashes, so they literally pay for the AI’s mistakes. Banking got away with murder, given the power to loan money but not really held responsible for the crash. And then there’s cops, with power of life and death, and everything from their service to friendly prosecutors and judges denies the public the power to hold them responsible.

The corporation is a way of shirking responsibility for decisions: the corporation is a kind of AI for making money, and our laws specifically exempt stockholders from any responsibility for the AI’s actions, beyond their invested money. Attempts to divorce power from responsibility (often successful) are already common.

The trick is to not let a mere technological change be used as a stalking-horse to sneak in another sociological change that favours less human responsibility. We’ll be seeing all those “passive voice” descriptions of disaster: “A decision was made by a system provided by a now-bankrupt subcontractor to a firm that was later sold to a conglomerate and no longer supports these systems.”

23 Likes

I can’t help but think this is related to the phenomenon in our investment community of claiming profits for itself while shunting risk off to those who have no possibility of profit and no say in the investment. This is the core business model of private equity, and why I can’t support Cory Booker for president – because of his time working for Bain Capital – despite any virtues he may have.

10 Likes

So I can outsource routine decision making to hyperefficient sociopaths, but only if I’m willing to put some low-level employee necks on the chopping block? You drive a hard bargain, sir!

11 Likes

I’ll have to find the time for the long read. This is an important topic.

Currently in neuroscience there is a major shift to AI and ML, such that I have many colleagues who genuinely believe we are 10-15 years away from “the singularity”, when the AI will suddenly leave us behind and will understand how our brain works.
I’ve tried to ask the obvious questions: “what does it even mean for a computer/program to understand the outside world?” and “how the hell would we know, if there is no human who understands well enough to test it?” and of course “how can we effectively and safely apply the results if no human understands them?”

At some point it’s like knowing that 42 is the answer to the ultimate question. If you don’t understand why it’s the answer, then that knowledge is purely a farce.

15 Likes

An Asimov story for every occasion…

8 Likes

Yeah. It’s weird how well it does work. I’m an engineer, so I’m used to being paid five figures to oversee work that costs seven and eight figures. Engineers aren’t given all that much conditioning to be “responsible”; we don’t get ethics courses (an ethics seminar or two, maybe).

What we do get is loss of our profession and possible fines and jail time if we sign off on bad work - especially if we sign off knowingly, in which case it can mean jail.

It’s surprising that most of us “low level” employees push back from the word go on suggestions to shave off materials or skip steps, just from that threat. Thing is, if you try to bribe us, the jail time is much worse for that, and YOU go to jail too, as the engineer can reduce his sentence by squealing. I can’t recall a case where some engineering disaster happened because the responsible engineer was bribed with, say, $2M, to save the client $20M on project costs. It would be a bad business gamble.

What you see with 737MAX, in all the analyses, is that nobody was specifically responsible for the sign-off on the offending system. The FAA “signed off”, but in practice let Boeing check itself - and not only did Boeing just pass it all, it’s still not clear WHICH Boeing employee really took responsibility for that decision. Being a corporation, they almost reflexively would have set up management structures where nobody was, or at least could be proven to be.

The FAA could not make a better move right now than to demand that Boeing pick somebody and offer up his head: loss of professional status, charges of negligence, just destroy somebody.

Then nobody at Boeing will agree to be put in that position again.

13 Likes

This sounds a lot like the rationale for psychoactive drug therapy for mental illness. There is no real theory for how these drugs actually “work”, just metaphors that sound convincing to laymen. And because there is a real effect that doctors find useful and socially acceptable, it’s generally overlooked that mental illness lacks the kind of conceptual foundation we have for things like diabetes and cancer and bone fractures.

As long as these medications do what they’re supposed to, should we really be concerned that so many people are unable to function at their jobs without them?

4 Likes

I’m tempted to ask you for specific examples, but I can think of a few myself. Zooming out a bit, it’s not as if I think there’s anything fundamentally flawed with the Constitution, just that it is hopelessly inadequate for the nation it supposedly defines. I’ve heard stories about what it feels like when a big organization moves to a larger computing platform, and that sort of harrowing existential fear is what comes up for me when I imagine updating the Constitution to serve a more modern civilization. It’s still worth doing, though. I’d much rather upgrade it than discard it.

2 Likes

Legacy code, yes. Unfortunately, the GOP has rigged it so we can no longer meaningfully upgrade it without stepping outside the system.

1 Like

I get where you’re coming from, but it’s not clear to me that this is the same paradigm. Even before AI is involved, I trust the accountability infrastructure for the responsible gatekeepers in your AI examples (human loan officers and police officers) less than I trust the engineer from your non-AI example.

Edit: To clarify, I do understand that this depends a great deal on how AI is implemented. Having a single AI implementation for a large banking application could aggregate so much responsibility that no individual would ever willingly sign off. I’m just expressing pessimism - this seems to me like a dynamical system that would optimize itself to a bad place, with responsibility on the weakest possible link.

17 Likes

“an obsession only with answers — represented by a shift in public funding of research to orient around them — can in turn steer even pure academics away from paying off the intellectual debt they might find in the world, and instead towards borrowing more”

What I see every single day. More and more, both academic and non-academic research encourages working toward an economically/technologically usable result, while dismissing as a waste of time and money any effort to wade through the details and understand why it produces that result. Getting funding for basic research is becoming nearly impossible, which means research institutions won’t hire anyone doing basic research because they won’t bring in the funds, which produces a new generation of researchers oblivious to the idea that one might ever spend their time trying to understand why rather than pushing forward to find the next drug and the next tech.

P.S. Non-scientists have a huge influence on this through Congress. The U.S. government is the biggest funder of scientific research there is. For years it invested primarily in basic research. It worked. About 15 years ago the Bush administration pushed for applied research funding to go from less than 50% to 98%. It isn’t working. A shift back would have real positive impacts on public science, from healthcare to technology to climate research.

Sorry about the long-winded post

7 Likes

The problem is that the set of intelligences that are usually right but that we don’t fully understand includes the human brain as well. Maybe we should treat untrustworthy AI the way we treat untrustworthy humanity - through the AI version of laws, regulations, checks, and morals. Basically, we need to invent AI culture.

4 Likes

I’m reminded of the Uber autonomous vehicle fatality. The system was designed in a way that lulled the “backup” driver into a sense of complacency by mostly not needing intervention; but when it did need intervention - because it was designed not to brake for detected objects in the road, to avoid false positives - it also wouldn’t alert the driver to that fact, even though the system was aware it was happening. They also apparently failed to tell the drivers it was set up this way (knowledge that might have made them more alert). The result was that a low-level employee (who might even have been a contractor, i.e. “not an employee”) was set up to take the blame for a system fundamentally designed to have an accident that the human in the car would find very difficult - or impossible - to avoid. A rough sketch of that decision logic is below.
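Just to make the kind of logic being described concrete - a minimal, hypothetical sketch, not Uber’s actual code; the flag names, threshold, and helper functions are all invented for illustration:

```python
# Hypothetical sketch of a perception/decision loop in which emergency braking
# is suppressed to avoid false positives and the human operator is never told
# that a braking decision was suppressed. Not real AV code.

from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float          # distance to the detected object, metres
    closing_speed_mps: float   # closing speed, metres per second

def time_to_collision(d: Detection) -> float:
    # Avoid division by zero when the object is not being approached.
    return d.distance_m / max(d.closing_speed_mps, 0.01)

EMERGENCY_BRAKE_ENABLED = False     # disabled so false positives don't cause erratic braking
ALERT_OPERATOR_ON_SUPPRESS = False  # the safety driver is never told a brake was suppressed

def handle_detection(d: Detection) -> str:
    if time_to_collision(d) > 2.0:
        return "track"              # nothing imminent; keep watching
    if EMERGENCY_BRAKE_ENABLED:
        return "emergency_brake"
    if ALERT_OPERATOR_ON_SUPPRESS:
        return "alert_operator"     # hand the problem to the human in time to act
    return "log_internally"         # the system knows; the human does not

# With both flags off, an imminent collision produces only an internal log entry:
print(handle_detection(Detection(distance_m=20.0, closing_speed_mps=17.0)))  # -> log_internally
```

The point isn’t the specific numbers; it’s that two quiet configuration choices are enough to route all of the real-time responsibility onto the person least able to exercise it.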

4 Likes

Backups are great … until we forget how to use them, or destroy all the machines which can read them. See: NASA.

3 Likes