"Adversarial perturbations" reliably trick AIs about what kind of road-sign they're seeing

Originally published at: http://boingboing.net/2017/08/07/nam-shub-of-enki.html

2 Likes

pERturbation

2 Likes

Another reason to consider confining AI-controlled vehicles to “fenced” routes.

That’s already possible without “AI”.
They are usually called “trains”.

9 Likes

The original idea of AI was to take a logical, hierarchical approach: first find the sign, then reason about its shape and color and, if possible, read the text. That's more or less what people do. That didn't work, so AI got turned into a statistical matching grab bag with no internal semantics: just matrix math, limited by the training set and the structure of the mathematics. A better path would be a hybrid, with machine learning supporting the hierarchy. That's roughly what most animals do internally, and it produces more robust solutions. We'll probably have to wait out the next AI winter and see this in the AI spring that follows.
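
For concreteness, here's a minimal sketch of what "machine learning supporting the hierarchy" could look like. Everything in it is a hypothetical stand-in (the detector label, the shape/colour/OCR features, the thresholds), not anything from the study or a shipping system:

```python
# A statistical detector proposes a label, and explicit symbolic checks
# (shape, colour, legend text) have to agree before the label is accepted.

from dataclasses import dataclass
from typing import Optional


@dataclass
class SignCandidate:
    shape: str                 # e.g. "octagon", from a geometric fit
    dominant_color: str        # e.g. "red", from a colour histogram
    legend: str                # text read by OCR, possibly empty
    detector_label: str        # what the statistical classifier thinks it is
    detector_confidence: float


def symbolic_guess(c: SignCandidate) -> Optional[str]:
    """Rule-based label from shape/colour/text, independent of the detector."""
    if c.shape == "octagon" and c.dominant_color == "red" and c.legend.upper() == "STOP":
        return "stop"
    if c.shape == "triangle" and c.dominant_color == "red" and c.legend.upper() == "YIELD":
        return "yield"
    return None  # no rule matched


def classify(c: SignCandidate) -> Optional[str]:
    """Accept a label only when the two pipelines agree, else defer."""
    rule_label = symbolic_guess(c)
    if rule_label is not None:
        # Symbolic evidence exists; require the detector to agree with it.
        return rule_label if rule_label == c.detector_label else None
    # No symbolic evidence; fall back to the detector, but only at high confidence.
    return c.detector_label if c.detector_confidence > 0.9 else None


# A stickered stop sign may still look like a red octagon reading STOP to the
# symbolic checks even when the statistical classifier has been perturbed into
# reading "speed limit 45"; the disagreement surfaces as None instead of a
# silently wrong label.
print(classify(SignCandidate("octagon", "red", "STOP", "speed_limit_45", 0.93)))  # None
print(classify(SignCandidate("octagon", "red", "STOP", "stop", 0.88)))            # stop
```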

2 Likes

If you are worried about people vandalising road signs, you can vandalise them to fool humans too. At least the AI classifier flags that the sign has been tampered with by reporting a reduced confidence value.

Anyway, TASbot plays Brain Age was a pretty good segment on adversarial inputs to simple classifiers.

To be honest, I don’t really see what the point of this study is. If you are allowing people to physically deface signs, then the easy thing for a vandal to do is simply cover the sign with a different sign. That fools everyone, AI or human, and is difficult for any investigator to notice and fix without knowing what the sign was originally supposed to be.

Why the unnecessary hyphen in “road-sign”? Oh, Cory Doctorow. Question answered.

When you say, “If you are allowing people to…”, it makes me think you are treating this as a problem in logic class rather than a problem in the real world. In the real world, people might steal traffic signs, but it doesn’t happen very often: people recognize it as a dangerous thing to do, we likely have a law against it, and drivers who come through the area regularly will notice and report the problem.

In the real world, people also put stupid stickers on traffic signs. It’s probably illegal too, but if it doesn’t impair people’s ability to see the sign, the punishment won’t be severe. The people doing it won’t think they are really causing any harm, because they are just putting up stickers.

If we think everything through logically, a person putting up a sticker designed to fool an AI into driving through an intersection it is supposed to stop at ought to know they are doing something that could get someone seriously hurt or killed. But we take time to adapt. Think of the morons who endangered public safety by pointing laser pointers at airplanes: the idea that what they were doing was dangerous was at odds with the idea that they were just messing around, and “just messing around” won.

Knowing this kind of thing, if we were serious about having lots of robotic cars on the road, we might want to make a law against modifying or defacing signs in any way, and treat putting a dumb sticker on a sign with the same severity as removing the sign. We might also want to educate people that AIs can be fooled into thinking strange things by visual changes we wouldn’t expect to cause any problems, so they know not to make them.

But AI car makers probably don’t want to warn people that putting stickers on signs will confuse the AIs, because that would diminish confidence in the AIs.

From within the logic problem, this seems like a non-issue, but within that framework why would anyone ever deface a sign? Why would anyone drive unsafely when that’s clearly a bad idea?

This is a serious problem because we don’t get to decide what behaviours we “allow” in the real world; we only get to speculate on what behaviours might emerge.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.