"Intellectual Debt": It's bad enough when AI gets its predictions wrong, but it's potentially WORSE when AI gets it right

Zittrain specifically addresses the question of medicines whose efficacy is well understood, but whose underlying action is a mystery.

5 Likes

For instance, the constitution never anticipated somebody with malicious intent finding their way to the presidency. Checks and balances are failing, and proper procedure moves far too slowly to keep up with the pace of corruption we are now experiencing. I'm not certain this system will recover.

2 Likes

Interesting read, but one extreme statement sticks out that makes me question the veracity of the whole article:

“A family reported that a hospital threatened to remove life support from their gravely ill daughter after a charity’s transfer of thousands of dollars failed to materialize”

Really? This was supposed to happen in the UK? I can imagine it as an extreme case in the US, where money rules. Zittrain needs to provide evidence that this happened; otherwise the whole article devalues itself.

Also, his understanding of how pharmaceutical research works is fundamentally flawed. It is not a "trial-and-error" process. Certainly, the original identification of potentially useful substances may, in some areas like antibiotics, start with something like this, but it is certainly not as random as he implies.
Furthermore, he mistakes pharmaceutical package inserts as clinical documents, when, in reality, they are legal documents.

This is what happens when people with some level of expertise in one area believe it gives them expertise in all areas.

3 Likes

Luckily, embracing faith in revealed truths that are deemed more authoritative because their derivation is unknowable is a strategy that has never had negative side effects before, so we can definitely just run with it now.

3 Likes

The constitution did anticipate bad actors. It didn't anticipate a situation where the people supposed to check and balance a bad actor would willingly give up that power (looking at you, Senate Republicans), or worse still, where both checks on a bad actor would give it up (adding a politicized Supreme Court to the list).

6 Likes

Bottom line, I still have faith that most people are decent, fair-minded people I'd be happy to share a country with. It's formalizing that trust into an enforceable contract that's hard. And the constitution makes no mention of political parties, and barely mentions the corporate legal entities which have taken the lion's share of governance for themselves.

After reading _In the Beginning Was the Command Line_, I found myself wondering if a government really needs a user interface in order to function. If the users are already organized into a donor class, then it has all the voice it needs; no real need to mess around with a plebiscite. But it's expected to hold a pageant every year, so it's covered just like a sporting event.

I think what will blow this tidy little racket up is a side effect of global warming. And the donor class knows this, and knows we won't like their response, so they gave us Trump to hate on while they consolidate their power. They can't anticipate how it's all going to go down, though; nobody can.

2 Likes

Not in the long run. Humans just don’t work that way, even when we think we do. We CAN’T maintain concentration for long periods without having to act.

And if and when that becomes the norm for cars, and we start to have drivers on the road who have never been in control since their road test? They’ll NEVER be experienced enough to drive safely, let alone take over at a moment’s notice from an AI.

3 Likes

I’ve posted this before, but it gets at the heart of the problem. It’s Zoubin Ghahramani talking about the “big lie” in machine learning, specifically that the set of training data spans the set of test data. In a sense, everything would be fine if we could guarantee that the big lie was actually true, but then ML would be of limited use.
He talks about Bayesian methods as being useful in this area, and it’s not hard to see how. Essentially, the Bayesian approach can be used to encode prior expectations about how deviations from the training data might affect the result, and if the deviations are too big, then the system becomes very uncertain (as in, it knows it doesn’t know), which is an acceptable result. If these prior expectations are based on solid principles (e.g. objects have limits on how fast they can accelerate), then you can end up with a very well regularised system, and significant robustness to weird inferences. The problem then becomes one of translating common sense into prior statistical knowledge (and then of course doing the inference properly, which is probably non-trivial).
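To make the "knows it doesn't know" point concrete, here's a minimal toy sketch (my own, not from the talk) using Gaussian-process regression, a canonical Bayesian method; the kernel, lengthscale, and data are all assumptions on my part:

```python
# Toy sketch: GP predictive uncertainty reverts to the prior far from
# the training data, instead of producing a confident point estimate.
import numpy as np

def kern(a, b, lengthscale=1.0, signal_std=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return signal_std**2 * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
noise_var = 0.01

# Training inputs live only in [-2, 2]; the "big lie" is assuming
# test inputs will stay in that range.
x_train = rng.uniform(-2.0, 2.0, size=30)
y_train = np.sin(x_train) + np.sqrt(noise_var) * rng.standard_normal(30)

# Standard GP regression via a Cholesky factorisation.
K = kern(x_train, x_train) + noise_var * np.eye(len(x_train))
L = np.linalg.cholesky(K)
coef = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

for x_star in (0.5, 6.0):  # one in-distribution point, one far outside
    xs = np.array([x_star])
    ks = kern(xs, x_train)               # cross-covariance, shape (1, n)
    v = np.linalg.solve(L, ks.T)
    mean = (ks @ coef).item()
    var = (kern(xs, xs) - v.T @ v).item() + noise_var
    print(f"x = {x_star:4.1f}  mean = {mean:+.2f}  std = {np.sqrt(var):.2f}")

# At x = 0.5 the predictive std is small (~0.1); at x = 6.0 it climbs
# back toward the prior std (~1.0), i.e. the model reports that it has
# no idea, rather than extrapolating confidently.
```

A point-estimate model fitted to the same data would happily return a single confident number at x = 6.0; the Bayesian version instead flags its own ignorance, which is exactly the acceptable failure mode described above.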
This talk by Ali Rahimi is also well worth a view - he discusses (rather controversially, as it turns out) how much of machine learning can be compared to alchemy, in that lots of people are doing things without understanding quite why, but achieving useful things in the process. He argues that researchers need to get a handle on what makes these systems actually tick.

1 Like

so do I, but by “most” my faith tells me it’s about 51%

I’m a software engineer, and we have NONE of those perverse incentives to keep us from signing off on bad work. In fact, because it’s so easy to patch things, pushing live without adequately testing is becoming a cultural norm. You’ll need to fix that culture of “public failure is no big deal” first. I’ve been engaged in that kind of transformation for a while, and it’s a tough row to hoe.

5 Likes

Yep, that’s exactly true. It wouldn’t have actually helped (much) in preventing the accident, even if it might have made the drivers more alert - or at least try to be more alert. A situation where drivers have to respond (instantly) to a rare event when they otherwise don’t have to do anything is a losing dynamic to start with. Uber just didn’t do anyone any favors by having the drivers ignorant as to how it was all set up, though. The system was broken by design in multiple ways, and having a human being “take responsibility” was absurd because they couldn’t be responsible in that situation.

It would be a total disaster - you either have drivers who need to fully drive the vehicle, or you have fully autonomous vehicles. Anything in the middle just doesn’t work and shouldn’t even be considered. Anything less than fully autonomous vehicles should only be implemented as safety backups for drivers - which means ironically, just for safety’s sake, drivers have to end up doing driving tasks that the vehicle is perfectly capable of doing itself, just to keep them engaged.

3 Likes

There’s a Fritz Leiber (I think) story about a “reminder device” that sits on your shoulder and talks to your ear about what needs to get done, according to its algorithms, and then pokes you with a needle in the shoulder if you don’t do it.

A far cry from Richard Brautigan’s “machines of loving grace.”

1 Like

In the dark: why the Stranger Things ‘red room’ is confusing younger fans

I mean, this is the whole problem with the world today, isn’t it? Nobody understands process any more. You have a magic computer in your pocket and it means you’ll never know the glory of tangible, real-world process.

6 Likes

It apparently was a British family whose daughter was in Mexico for treatment.

3 Likes

…it has? Gerrymandering, sure, but I always thought the problem with the amendment procedure is that it requires three-fourths of the states to sign off on the change. And if you want to strike the second amendment out (since amendments can remove previous amendments, like the 21st did) you’ll never ever get those states with an active gun culture to sign off. Same thing with how a ‘Recognize Christianity As State Religion’ amendment won’t be supported by the progressive states and the abolition of the electoral college won’t be signed off on by the small states or the swing states.

This has nothing to do with anything the GOP did. It’s not like they wrote Article V.

Or am I wrong? I’m not American so there could be something obvious I’ve missed.

I think an important caveat, one that has been addressed by those mentioning the Uber debacle, is that the person who takes the responsibility must also bear some authority over the process or action. Responsibility without authority is a Damoclean sword for those on the lowest rungs of power. It becomes laughably easy to just create a contractual scapegoat, and it’s not a guess to say that those in power would abuse that capacity.

2 Likes

My grade-school sex ed was in the very early '80s. When they covered the various prevention methods, it really bugged me that for the IUD the description just said something like “Scientists still don’t know why IUDs work.” I was like: how are they selling something like sticking copper in a person’s body without understanding it? Maybe it was understood then, but the curriculum writers decided to gloss over the details. I had a pretty good grasp of how science worked at the time, and this just set off all my alarm bells.

1 Like

Very interesting piece. Thanks

1 Like

AI intellectual debt + invisibility + obscurity - human control authority = danger

If this doesn’t give us nightmares:

… I believe it’s well past high time we soberly consider how long this problem Cory’s post describes has actually been with us [humanity], and how or if humans have been working effectively to address it.

An extremely familiar scenario, with software/coding being just the latest iteration, and within that, infosec being increasingly the most prominent and punishing example.

Plus, IMO, the implicit opportunity cost apparently (or often) doesn't carry enough negative weight to outweigh the real or perceived [by management] sunk costs, so big, influential businesses, including ones involving human health and safety, fail to do the right thing (hey, looking at you, manufacturers of medical devices, and uh, yeah, Boeing…).

https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers

This has late stage capitalism written all over it.

What bothers me is that these are invisible issues (as noted in the Arthur C. Clarke example in the OP) until a catastrophe happens. It is simply not feasible for the human mind to infinitely expand and account for ever-increasing risk as the use of AI proliferates, seemingly inescapably, in the public and private realms.

ETA: clarifying context

2 Likes

That's not your faith, that's a carefully constructed illusion. The numbers for gun control, taxing the rich, and socialized medicine all hover around 70%. But the media doesn't find that a compelling narrative, so it has to look like there's just a teensy-tiny advantage for our opponents.

This is a classic method used by wardens in POW camps and civilian prisons… keep the inmates at each other's throats, and life is much easier for the guards.

2 Likes