Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed

Originally published at: https://boingboing.net/2018/03/09/ai-arms-race.html

1 Like

The other problem is unintentional attacks. In Portland, about half of the stop signs (maybe less, depending on the part of town) have been altered in some way. Some humorous, some not. But getting AI to ignore data is going to be the hard part. (And getting it to ignore the right part of the data is going to be even harder.)

It's the chicken-and-egg problem, if your chicken lays Cadbury Creme Eggs.

4 Likes

I recommend this video for the state of the problem. Fundamentally, the tools are not sufficiently well understood to justify much in the way of assertions about robustness. There are researchers working on addressing exactly these kinds of fundamental questions (alchemy -> physics), and I'm aware of some interesting work in the field.

More pertinent to the question of whether it can ever be fixed… This is an interesting talk I saw on the topic, in which the starting point is a neat example about prior assumptions and how you always need to know what those are - essentially, the Bayesian methodology is the only sensible approach. It turns out that what you're basically assuming is correlations (broadly, smoothness) in your underlying model. It's those (explicit) assumptions that make your model well behaved.

Your model may be wrong, but you are at least explicit about it, and if it's done properly, you'll know when your data is telling you something weird (i.e. not modelled).
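To make that concrete, here's a minimal Bayesian regression sketch (the prior and noise precisions and the polynomial degree are numbers I picked for illustration, nothing canonical). The assumptions are written down explicitly, and the predictive variance blows up when you ask about inputs far from anything the model has seen:

```python
import numpy as np

# Minimal Bayesian linear regression with polynomial features.
# alpha (prior precision), beta (noise precision) and the degree are
# explicit, made-up assumptions -- the point is that they're written down.
rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 20)
y_train = np.sin(3 * x_train) + 0.1 * rng.standard_normal(20)

def features(x, degree=6):
    return np.vander(x, degree + 1, increasing=True)  # [1, x, x^2, ...]

Phi = features(x_train)
alpha, beta = 1.0, 100.0
S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
m = beta * S @ Phi.T @ y_train        # posterior mean over the weights

x_test = np.array([0.0, 0.9, 3.0])    # 3.0 is far outside the training data
phi = features(x_test)
mean = phi @ m
var = 1.0 / beta + np.einsum('ij,jk,ik->i', phi, S, phi)  # predictive variance
for x, mu, v in zip(x_test, mean, var):
    print(f"x = {x:4.1f}  mean = {mu:8.2f}  variance = {v:10.3g}")
# The variance explodes at x = 3.0: the model is telling you that this
# input is "something weird" it was never asked to explain.
```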

4 Likes

Machine Learning is much less sophisticated than one might imagine. If you look at the math, it's basically an advanced form of regression. Instead of fitting a line, the idea is to fit a neural network to an arbitrary set of inputs. It's a bit more computationally intensive, but the end result of training is no more sophisticated than a polynomial fit.

You can basically fit any set of input points to any arbitrary outputs. That’s why there is more to statistics than just doing regressions. You have to think about the model, about cause and effect, about the underlying real world behavior. It’s all too easy to take a pile of data and find statistically “significant” correlations. The idea is that the researcher should have and apply some domain knowledge.

Machine Learning embeds no domain knowledge. It’s just a raw curve fit. Of course, it is going to be easy to hack.
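A toy version of the "raw curve fit" point, for anyone who wants to see it - with as many parameters as data points, you can hit completely arbitrary labels exactly (the numbers here are arbitrary):

```python
import numpy as np

# Ten points, ten polynomial coefficients: a perfect fit to pure noise.
rng = np.random.default_rng(42)
x = np.linspace(-1, 1, 10)
y = rng.uniform(-100, 100, size=10)   # arbitrary "labels", no structure at all

coeffs = np.polyfit(x, y, deg=9)      # 10 parameters for 10 points

print(np.abs(np.polyval(coeffs, x) - y).max())  # ~0: zero training error
print(np.polyval(coeffs, 1.5))                  # extrapolation is garbage

# Zero training error, zero domain knowledge, zero predictive value --
# and nothing about the fit resists inputs crafted to exploit it.
```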

9 Likes

And this is why we're not going to see many self-driving vehicles outside of the "walled garden" of the interstates any time soon, IMHO.

2 Likes

I actually have a neural net trained on ten million instances of the hand coming out of the box and closing the box, and in a miracle of machine learning, my hand in a box closes the box 97% of the time!

5 Likes

Where I grew up, somebody liked to attack stop signs with reflective tape, to make them say SLOP. I don’t know who, and I definitely don’t know why.


3 Likes

There was an article in the news here a while back about how testing on automatic driving software had shown that it could not recognize the motion of kangaroos. So automatic cars here are going to get kangaroo-specific code. And because everybody knows that kangaroos live in Australia, that code won't be needed anywhere else.

We take it for granted that a human driver in (say) Canada will just recognize a kangaroo without special training, possibly because humans watch TV and such. They are aware of context. They know that the local zoo has kangaroos and one might escape one day.

I see this Audi on my bike commute sometimes. In place of normal flashing indicator lights, it has a row of lights which light up progressively in the direction it is indicating. So if we call the lights 1234, it starts with 1 on, then 12, 123, 1234, and back to 1. Everybody knows that indicators just flash on and off, so now that Audi has introduced this indicator, the only way to know if it is recognized correctly will be to test every type of automatic car on the road.

It is perfectly legal right now for me to go to an electronics shop and build my own indicator lights, and as long as human drivers can still get the message it should be fine, but the software in automatic cars may behave in ways I can't predict. For example, it is common to pulse LEDs at around 1 kHz - you get more efficiency that way. A human observer won't notice the difference, but an electronic system might.
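Here's roughly what I mean, as a back-of-the-envelope simulation - the 1 kHz PWM, the 30 fps camera, and the 0.2 ms exposure are all numbers I made up for illustration:

```python
import numpy as np

# An LED driven by 1 kHz PWM looks steadily lit to a human eye (which
# integrates over ~10 ms), but a camera with a short exposure samples a
# different slice of the duty cycle every frame.
pwm_hz, duty = 1000.0, 0.5     # assumed drive signal: 1 kHz, 50% duty cycle
fps, exposure = 30.0, 0.0002   # assumed camera: 30 fps, 0.2 ms exposure

def led_on(t):
    return (t * pwm_hz) % 1.0 < duty   # square-wave PWM

for k in range(9):
    t0 = k / fps                               # frame start time
    ts = t0 + np.linspace(0.0, exposure, 200)  # sample within the exposure
    print(f"frame {k}: brightness = {led_on(ts).mean():.2f}")

# The output cycles through ~1.00, ~0.83, 0.00 -- some frames see the
# indicator fully lit and some see it completely dark, even though it
# never looked any different to a human.
```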

So every time somebody like me builds his own indicator, we have to test every type of automatic car in the country to ensure that it still works. Or maybe I am not allowed to build my own indicators any more. Okay, but what about people buying stop lights from online retailers? Will they be stopped at the borders now?

I work sometimes in aviation, where automatic controls are common, and I know that if you break separation standards, there will be an investigation, and possibly legal action, regardless of whether that action led to death and destruction. I ride a bike to work, and I know that if somebody collides with me and I don't get hurt, the police will do nothing at all.

Road transport is managed differently from air transport. Assumptions about autopilots in aircraft cannot be applied to road transport. My country has one government employee for every commercial aircraft; it does not have the same level of effort going into road transport.

There is a lot of stuff to be worked out.

4 Likes

The relevant XKCD is https://xkcd.com/1958/

2 Likes

Where is this chicken? Asking for a friend.

7 Likes

So the answer is low-res cameras?

1 Like

Kangaroos are not the only problem. Wait until it encounters a kid on a pogo stick.

It's partly a problem of classification. Nobody actually cares whether the AI "recognizes" a kangaroo or a kid on a pogo stick. The question is whether it will correctly interpret their speed and vector and avoid hitting them.

2 Likes

Is your machines learning?

1 Like

But it's impossible to do that without domain knowledge. A kid on a pogo stick is going to be all over the place, because that's what pogo sticks do. A human would know that. An AI which just understands speed and velocity vectors would not.

The point of the kangaroo model (and the Tesla vs semi-trailer issue some time back) is that the code which estimates distances uses assumptions about the distance between the body of an object and the ground. Objects which break those assumptions need special handling.
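To spell that assumption out, here's a toy flat-ground pinhole model (the focal length and camera height are invented numbers). Depth is inferred from how far an object's bottom edge sits below the horizon, which only works if that edge is actually touching the road:

```python
# Flat-ground monocular depth: assumes the object's bottom edge is on the road.
F_PX = 800.0         # focal length in pixels (assumed)
CAM_HEIGHT = 1.2     # camera height above the road in metres (assumed)

def estimated_distance(pixels_below_horizon: float) -> float:
    """Similar triangles: distance = f * camera_height / pixel offset."""
    return F_PX * CAM_HEIGHT / pixels_below_horizon

# A car 20 m away whose bottom edge really is on the ground:
y_car = F_PX * CAM_HEIGHT / 20.0             # projects 48 px below the horizon
print(estimated_distance(y_car))             # 20.0 m -- correct

# A trailer deck (or a kangaroo mid-hop) also 20 m away, but raised 1.0 m:
# its visible bottom edge is only 0.2 m below the camera, so it projects
# close to the horizon and the flat-ground estimate comes out far too long.
y_raised = F_PX * (CAM_HEIGHT - 1.0) / 20.0  # only 8 px below the horizon
print(estimated_distance(y_raised))          # 120.0 m -- six times too far
```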

1 Like

So imagine Terminator, but the bot gets lost because it sees a sign for Connell's Furniture.

2 Likes

When you send someone’s self-driving car off somewhere north like Nunavut, it might have problems.

7 Likes

It’s really a disguised 1968 Mercury Cougar. Relays FTW!

2 Likes

Good! I’m happy to hear that my intuitive understanding of this sort of thing seems confirmed.

Stop signs are a good example of a non-coercive utility that exists to passively make it easier for people to behave safely with each other. They only work when all players make the most benign choice.

Many, if not most, machine learning applications have some nonconsensual power being asserted; these machines are being used (abused) to let one group of people consolidate their power over another group.

If it's this easy for the second group to push back, then it's at least possible that computers may eventually help humanity become better listeners over the long term.

6 Likes