Debugging AI is a "hard problem"

Originally published at: https://boingboing.net/2018/05/24/tesseracts-vs-skynet.html

2 Likes

Paging Susan Calvin…

4 Likes

There will be no AIs on my bridge.
– Adama

6 Likes

One of the things I find the most frightening about AI is that the coding itself is going to be so complicated that it will be beyond a human’s ability to manage or even comprehend. AIs will develop the ability to write and debug other programs/AIs, and they’ll do so in ways that we can’t wrap our heads around. And when we can’t wrap our heads around how it works, our concepts of containment and failsafes will cease to have much meaning in short order.

4 Likes


What is this grid? Wouldn’t debugging be wildly nonlinear, with not 3-4 but an endless number of variables? Maybe hoping to reduce it to a few dimensions is part of the problem?
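To make that worry concrete, here’s a minimal sketch, using purely synthetic data, of how little survives when you squash many variables onto a two-axis grid:

```python
# A toy illustration (not the article's actual method): project 50
# roughly independent "variables" onto 2 principal axes and measure
# how much of the variation a 2D grid actually retains.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))   # 1000 observations of 50 variables
X -= X.mean(axis=0)               # center the data before PCA

# Squared singular values = variance captured by each principal axis.
variance = np.linalg.svd(X, compute_uv=False) ** 2
kept = variance[:2].sum() / variance.sum()
print(f"A 2-axis grid keeps about {kept:.1%} of the variation")
# Prints roughly 5%: most of what you'd want to see while debugging
# never makes it onto the grid.
```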

Well, computers are already so complicated that they’re beyond human ability to comprehend. Our solution is that we use computer software to assist us. We have abstraction layers which interpret complexity and represent it to us in ways we can parse and interact with.

In this sense, computers are already “designing themselves”: computer programs are writing other computer programs (compilers, for example). This lack of understanding of our own systems already leads to many hilarious problems with “containment and failsafes”. But we are comforted by the idea that in theory, a human brain could come to understand in full any particular aspect of the stack of abstractions.
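As a minimal sketch of that point, using nothing beyond the standard library, here is one program emitting and running another:

```python
# A toy version of "programs writing programs" (a compiler does this at
# industrial scale): one function emits Python source for another
# function, compiles it, and calls the result.
def make_adder_source(n):
    return f"def add_{n}(x):\n    return x + {n}\n"

source = make_adder_source(42)
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["add_42"](8))   # -> 50, from code no human typed
```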

As the stack grows, the slice that a human brain could fully master grows smaller relative to the whole. It’s true that the development of AI systems is adding a lot more layers onto the stack, but my point is that the stack is already big enough to create the situation you’re contemplating. So when considering the types of problems that AI systems could produce, rather than looking at singularity dystopias we should probably look to the kinds of problems we’re already seeing.

2 Likes

I hear what you’re saying about there already being examples of programming complexity beyond our understanding, but I think there is an event horizon of sorts. I think there will come a point of AI maturation (whether that’s self-awareness or something indistinguishable from it, who knows) that we just can’t look beyond, because we’ve never been through something like it as a species.

To be clear, though, I’m not saying that things will be dystopian. I’m saying the uncertainty is scary as hell.

I agree in general, but I think the discrete point thing is too apocalyptic (in the Christian sense). We are currently living in the future we fear, and in fact that’s the only reason that we’re able to fear it. The present world gives us the framework to identify our current problems and worry that they will get worse.

I think this distinction matters mostly because people tend to see the future as something they can’t interact with but must await. But the future is here and now, just sitting there waiting for us to interfere with it. We already live in a world run by unfathomable superintelligent systems which self-design and self-replicate…what are we going to do about it?

1 Like

I think the disconnect we’re having is that I’m arguing there will be a point at which there is something new, in the literal sense, to contend with, while I think you’re more of the position that we’re already there, and it will be a difference of degree but not necessarily of kind.

I take your point about leaning too heavily on apocalyptic premises, though.

2 Likes

Luckily they have couches you can store AIs on, so we can just unleash the psychoanalysts on the problem.

2 Likes

AIs will be raised like teenagers, not programmed.

The kinds of errors they make will be the same kinds of errors humans make,
known as the four Reduction Errors:
Illusion, Ignorance, Misunderstanding, and Confusion,
and many other kinds of errors following from those.
Ignorance, for instance, is known in Machine Learning as “It wasn’t in the training corpus” (see the sketch below).

We will debug our AIs the same ways we debug our teenagers.
Except if we fail, we can just turn them off and start over.
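Here’s a minimal sketch of that “Ignorance” failure mode; the tiny corpus and the query word are made up for illustration:

```python
# A toy "model" that knows only its training corpus: anything outside
# the corpus gets the machine-learning version of a shrug.
corpus = {"cat": "animal", "rose": "plant", "iron": "metal"}

def classify(word):
    # No generalization here: the model can only echo what it was shown.
    return corpus.get(word, "no idea: it wasn't in the training corpus")

print(classify("cat"))     # -> animal
print(classify("quasar"))  # -> no idea: it wasn't in the training corpus
```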


Debugging is hard. Debugging anything that utilizes outside data sets is much harder. Debugging anything where behavior changes based on those outside data sets is harder still.
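For instance, here’s a minimal sketch, with a made-up feed and an arbitrary rule, of why data-dependent behavior resists ordinary debugging:

```python
# A toy illustration of behavior that changes with outside data: the
# same call, on the same input, flips its answer when the feed drifts.
def flag_outlier(value, feed):
    mean = sum(feed) / len(feed)
    return value > 2 * mean   # arbitrary threshold, for illustration

feed = [10, 11, 9, 10]
print(flag_outlier(25, feed))   # -> True with today's data

feed.extend([40, 38, 45])       # the external dataset shifts overnight
print(flag_outlier(25, feed))   # -> False: same input, different verdict
```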

In other news, water is wet.

3 Likes

So the implication is that until we started working with “AI”, we never had problems with inaccurate data or incomplete models…? This is baffling.
