AI has a legibility problem

Originally published at: http://boingboing.net/2017/05/04/humans-in-the-loop.html

1 Like

To be fair to artificial intelligence, this is also a problem with normal intelligence.

Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach.

This would be very relevant to my work, does anyone know where I can find more detail about this?

11 Likes

I think this is closely connected to the whole “if brains were so simple we could understand them, we wouldn’t be smart enough to understand them” limitation.

I suspect any sufficiently deep learning system may be in principle outside linear human comprehension.

This is also kind of related.

2 Likes

This is the DARPA program being discussed, which I’ve heard is funded and starting this month:

http://www.darpa.mil/program/explainable-artificial-intelligence

The notion is that if our automated systems can give an account (an explanation) of why they make the decisions they do, people may trust them better. What those explanations should be is the hard part. But as @doop says, this is not just a problem with automated systems. Look at how much people blame ‘the computer’ for every automated sports ranking system like PWR. But then look at how much people complain when humans vote or deliberate to create the same rankings, crying about transparency and the arbitrary criteria people are using. Actually, superfans probably complain as much about PWR, but they seem to actually understand it and are able to blame specific games for why their team did not make the playoffs.

2 Likes

Yeah, in general it is radically harder to explain a process (e.g. tying your shoes) than to merely execute it. The explanation requires understanding not just the process, but how the process looks from an outside viewpoint, and also its relation to the entire space of similar processes (that is, all the ways to do it wrong).

We often don’t notice this because almost everything we know was taught to us, so explaining it is just a matter of remembering how it was explained to you. If you hit on something truly profound, like calculus or relativity, working out how to teach it to others can be a monumental undertaking.

Where AI is trained using evolution-like methods, there won’t be any log entry that shows why it makes the choices it does; the “why” is a separate, much harder question.
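
A toy sketch of what “evolution-like” training looks like (made-up fitness function and parameters, not tied to any real system): the loop only ever records that a candidate scored better, never a rationale for its behaviour.

```python
import numpy as np

# Toy "evolution-like" training: mutate, keep the fitter candidate, repeat.
# Nothing in this loop ever records *why* the result behaves as it does -
# only that each surviving mutant scored better on the fitness function.
rng = np.random.default_rng(0)

def fitness(params):                    # stand-in for "does well on the task"
    return -np.sum((params - 3.0) ** 2)

best = rng.normal(size=10)
for generation in range(500):
    mutant = best + rng.normal(scale=0.1, size=10)
    if fitness(mutant) > fitness(best):
        best = mutant                   # selection: keep it; no explanation logged
```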

9 Likes


Huge ‘?’ What makes you describe human comprehension as linear?

All the evidence points to the realisation that human comprehension is not linear.

It is precisely the non-linear nature of human understanding that makes emulating it “algorithmically” so challenging.

EDIT to add: if linearity brought us Trump and T MayHEm then the human race is really screwed and the Apocalypse is nigh. Don’t take away our last hope.

6 Likes

The deeper mechanism of human comprehension is complex and networked, but the conscious product of it is linear. Which is what we are talking about when it comes to AI “legibility” - i.e. the ability to translate the complex decision-making process into linear human language: input A leads to B, which prompts C to provide output D. This is what we would like to have - except the algorithms (and brains themselves) don’t work that way.

2 Likes

AI being able to explain its reasoning is a good first goal, and providing citation evidence in order to achieve that end is a good first step. But there must be a second goal: AI must also be able to provide evidence for alternative hypotheses - specifically, for the high-likelihood hypotheses that it rejected. Finally, it must present the numerical grounds for each of its hypothesis assessments. As noted in articles discussing biases rooted in the selection of training data (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing), trainers must be made aware of how their input eventually leads to certain conclusions.
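
A minimal sketch of what that could look like in practice (the `model`, class names, and threshold below are all hypothetical stand-ins for any probabilistic classifier with a `predict_proba`-style method): report every hypothesis above some likelihood floor, not just the winner, so the rejected alternatives and their numerical scores are visible.

```python
# Hypothetical sketch: surface the rejected high-likelihood hypotheses
# and the numerical grounds for each, not just the top answer.

def report_hypotheses(model, x, class_names, floor=0.05):
    """Return (hypothesis, probability) pairs above `floor`, best first,
    so a human can see what was considered, what was rejected, and why."""
    probs = model.predict_proba([x])[0]          # one probability per class
    ranked = sorted(zip(class_names, probs), key=lambda pair: pair[1], reverse=True)
    return [(name, round(float(p), 3)) for name, p in ranked if p >= floor]

# Example output: [('flu', 0.62), ('common cold', 0.29), ('pneumonia', 0.06)]
```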

3 Likes

And then, because we train it by selecting for correct outcomes on test data, we’ll actually train it to do a good job of telling us what we want to hear as an explanation rather than actually giving a good explanation - sort of like we do with kids.

12 Likes

Regarding Human Consciousness:

Would appreciate a reference for such a statement. Definitions would also help:

What exactly is [quote=“8080256256, post:8, topic:100385”]
the conscious product of it
[/quote]

as opposed to the “deeper mechanism of human comprehension”?

Regarding AI “legibility”:

No, this is not what AI “legibility” refers to - at least not in this article. What you are describing here is a mechanistic communication model that was en vogue in the 70s/80s. Complexity science and chaos theory have superseded such models and are far better explanations of anything and everything in the natural world (including human consciousness).

AI “legibility” as described in the article has nothing to do with prompts or outputs. It is about enabling humans to understand / read the reasoning behind the computer’s decision-making processes.

It is about enabling humans (those humans not creating the software) to question the reasoning behind the algorithmic decision making. In your model, it is about the missing bits: how you get from A to B to C to D.

Here is the relevant quote:

To summarise: AI “legibility” matters precisely because human “processing”, unlike algorithmic processing, is not linear. We take in and amalgamate information from the offside and every-which-side, and we are not happy to trust a system which, although linear (and maybe complexly so), is mostly opaque about how it determines which line to follow.

4 Likes

Are we going to do better with software algorithms than we do with our machines made of humans, i.e. bureaucracies? From credit rating agencies to insurance companies to housing/public assistance, the last thing organizations build requirements for - or allocate money, time, energy, and staff expertise to - is having people understand the process and how outcomes were reached. It’s considered the “soft” part of systems building, but it is absolutely critical.

I work at a non-profit that does things like helping teachers complete the certification process and rating the quality of early childhood centers, and helping centers with the licensing process. We spend an incredible amount of time and energy making opaque processes as clear as possible, but it’s a constant battle with both our private and government agency funders and partners to fund and respect this part of the work.

3 Likes

So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered.

Ah, so we are teaching AIs to make the choice and then justify it afterwards, just like humans do it!
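
For a very simple (linear, bag-of-words) classifier, that kind of snippet highlighting can be approximated in a few lines. This is my own toy sketch, not the MIT group’s actual method, and the weight/count/vocabulary arrays are assumed inputs:

```python
import numpy as np

def top_evidence_words(weights, counts, vocab, k=5):
    """For a linear bag-of-words model, the words 'responsible' for a
    prediction are roughly those with the largest weight * count
    contribution toward the predicted class.
    weights, counts: numpy arrays aligned with vocab."""
    contribution = weights * counts               # per-word contribution to the score
    order = np.argsort(contribution)[::-1]        # biggest contributors first
    return [(vocab[i], float(contribution[i]))
            for i in order[:k] if counts[i] > 0]
```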

3 Likes

I’m not sure it’s correct to talk about neural networks ‘reasoning’ about things; that brings to mind the notion of thinking about things in a logical, step-by-step manner. Neural networks are tools for finding statistical correlations, not much else. For an AI to explain its reasoning it would first have to start reasoning - it would need to be a general AI, and those don’t yet exist.

Most(?) complex neural networks are nonlinear. Linear algebra can be used for simple networks, and even for some complex multi-layer networks, but nonlinear functions are usually used in more complex models (e.g. networks trained with backpropagation).
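
To illustrate the technical sense of “linear” being discussed: stacking linear layers with no activation in between collapses to a single matrix, so depth buys nothing; it is the nonlinearity between layers that takes a network beyond plain linear algebra. (Toy random numbers, nothing tied to any real network.)

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))     # "layer 1" weights
W2 = rng.normal(size=(2, 4))     # "layer 2" weights
x = rng.normal(size=3)

# Two linear layers are exactly one linear layer with matrix W2 @ W1.
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# Insert a nonlinearity (ReLU) and no single matrix reproduces the map.
y = W2 @ np.maximum(W1 @ x, 0.0)
```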

2 Likes

The problem with the discussion in this subthread is that there’s a technical sense of ‘linear’ (“involving or exhibiting directly proportional change in two related quantities,” i.e. it can be represented with a straight-line graph), and there’s a looser sense derived from that one (something like systems with linear dynamics). These two senses are related, but you have to be familiar with both types of usage (and calculus) to understand the difference. There’s also ‘linear’ meaning in a line, i.e. sequential - a totally different meaning.

2 Likes

I usually assume, when talking about technical things like neural networks, that we’re all using the technical meaning. It’s probably a good idea to not always make that assumption, but it’s worth pointing it out, to avoid confusion.

2 Likes

I kicked off this subthread because I was struck by @8080256256’s use of linear, as in [quote=“8080256256, post:3, topic:100385”]
outside linear human comprehension.
[/quote]

The use of the term linear in this context struck and disturbed me. One of the key issues with AI and the proliferation of algorithms - or, as Cathy O’Neil aka mathbabe calls them, Weapons of Math Destruction - is that human comprehension and decision making (the product of that comprehension) are not linear.

2 Likes

Google Deep Dream explains the “why” behind Google image recognition.

2 Likes

Reading the comments, I see people are a bit confused as to what the legibility problem really is.

You can think of neural networks as general function approximators. So, for instance, there is some function (very complex, highly nonlinear, with a lot of randomness and noise involved) from symptoms to disease. The neural network approximates this function by looking at a lot of data.
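
As a toy illustration of that framing (entirely made-up data: the “symptoms” are just binary features and the “disease” label follows a noisy hidden rule), a small network can learn to approximate the symptoms-to-disease function purely from examples:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(2000, 5))                   # 5 binary "symptoms"
rule = ((X[:, 0] + X[:, 2] + X[:, 4]) >= 2).astype(int)  # hidden rule: 2 of 3 key symptoms
y = np.where(rng.random(2000) < 0.9, rule, 1 - rule)     # plus 10% label noise

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict_proba([[1, 0, 1, 0, 0]]))              # learned P(disease | these symptoms)
```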

The problem is not that we can’t know why an AI made some decision. The why is always available, at least for neural networks; after all, we have access to the model behind the decision. The problem is that the model is not always intuitive or sensible for humans, especially as networks get deeper and the interactions more complex.

If the network is very simple (a single-layer feedforward network), the problem is easily solvable, since the network acts directly on the input (it multiplies each input by a weight and sums the results). The why is simply the weights the network has for each input; for example, a large weight on the “age” input means that age is a strong predictor of the disease. If we go a layer deeper, the interactions are between the interactions in the first layer, so now the reasoning is something like “age” and “weight” together predict the disease, but maybe not “age” or “weight” independently. Because some networks are thousands of layers deep, the reasoning becomes pretty much nonsensical for a human.
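
A concrete version of that single-layer case (hypothetical symptom names, synthetic labels): with one weight per input, printing the weights is the explanation. Once there are many hidden layers, no such per-input weight exists; the “why” is spread across huge numbers of interacting parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
names = ["fever", "cough", "rash", "fatigue", "age>60"]
X = rng.integers(0, 2, size=(1000, 5))
# Synthetic labels: "fever" and "age>60" push toward disease, "rash" pushes away.
y = ((2 * X[:, 0] + X[:, 4] - X[:, 2] + rng.normal(0, 0.5, size=1000)) > 1).astype(int)

clf = LogisticRegression().fit(X, y)
for name, w in zip(names, clf.coef_[0]):
    print(f"{name:>8}: {w:+.2f}")   # large positive weight -> strong predictor of disease
```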

So the purpose of making AI legible is not to find out the why; it is to figure out a way to make the why understandable to a human.

1 Like