That was my thought too. I wonder what would happen if a significant percentage of autonomous vehicles used an AI that became confused by a combination of sleet and ice pellets in a certain size range.
Also, I expect manufacturers would have to stop running prototypes like this on public roads.
That’s exactly right, and they haven’t (yet) been trained to thwart these kinds of attacks. My guess is that now that adversarial attacks like these are becoming publicized, newer models will be trained with them in mind, and the attacks will likely stop working.
Of course, as these “simple” adversarial attacks are defeated, new ones will be discovered. And then systems will be trained against those as well.
It will be a cycle, but the AIs will get stronger for it (for better and worse).
But they might get weaker for it. Take the AIs used in Japanese taxis to guess a passenger’s gender. Those surpassed human accuracy long ago. But if you build in a mechanism that effectively second-guesses the determination in order to resist adversarial manipulation, that may well reduce the rate of correct guesses. Generally, reducing false negatives tends to increase false positives. It’s far from guaranteed that you can increase resilience to attacks without sacrificing some accuracy.
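To make that trade-off concrete, here’s a toy sketch (made-up numbers, nothing to do with any real taxi system): two overlapping score distributions and a decision threshold. Lowering the threshold cuts false negatives but raises false positives.

    import numpy as np

    # Toy illustration of the threshold trade-off described above.
    # Scores for the "positive" and "negative" classes come from two
    # overlapping distributions (purely invented numbers).
    rng = np.random.default_rng(0)
    pos_scores = rng.normal(loc=0.7, scale=0.15, size=10_000)  # true positives
    neg_scores = rng.normal(loc=0.4, scale=0.15, size=10_000)  # true negatives

    def rates(threshold):
        """Return (false_negative_rate, false_positive_rate) at a given cutoff."""
        fn = np.mean(pos_scores < threshold)   # positives we wrongly reject
        fp = np.mean(neg_scores >= threshold)  # negatives we wrongly accept
        return fn, fp

    for t in (0.6, 0.5, 0.4):
        fn, fp = rates(t)
        print(f"threshold={t:.1f}  FN={fn:.3f}  FP={fp:.3f}")
    # Lowering the threshold shrinks the false-negative rate while the
    # false-positive rate climbs -- you trade one kind of error for the other.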
They might get weaker. And I think you’re saying that this is a fundamental weakness in all of these kinds of algorithms. But I’m not so sure about that. In general, the more data you throw at these statistical learning algorithms, the better they get, and I think researchers in the past simply didn’t fold these kinds of attacks into their training data.
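For what “folding these attacks into the training data” might look like, here’s a rough sketch of standard adversarial training with FGSM perturbations in PyTorch. It’s only an illustration; `model`, `loader`, and `eps` are placeholders, not any particular system.

    import torch
    import torch.nn as nn

    def fgsm_perturb(model, x, y, eps, loss_fn):
        """Fast Gradient Sign Method: nudge inputs in the direction that raises the loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()

    def train_epoch(model, loader, optimizer, eps=0.03):
        """One epoch that mixes clean and adversarially perturbed examples."""
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for x, y in loader:
            # Generate attacked versions of the current batch on the fly.
            x_adv = fgsm_perturb(model, x, y, eps, loss_fn)
            optimizer.zero_grad()
            # Train on the clean batch and the attacked batch together.
            loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
            loss.backward()
            optimizer.step()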
Of course, I might be wrong. I’m not an AI researcher even though I do know a bit about how neural nets work.
From a purely theoretical point of view, this is a fascinating topic. But from a moral and human point of view, there will be profound consequences if AI algorithms can’t easily defeat these simple attacks…
I would have hoped that the way email got hijacked by bad actors would have convinced people to stop building systems that assume a purely non-adversarial environment. But I guess that’s asking too much.
Yup. It was proved recently that all this deep learning stuff that’s all the rage these days is computationally equivalent to a really fancy polynomial fit.
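For anyone who hasn’t seen one, this is what an ordinary polynomial fit looks like (a toy example with made-up data; whether deep nets really reduce to this is the claim above, not something this snippet demonstrates):

    import numpy as np

    # Fit a degree-9 polynomial to noisy samples of a nonlinear function
    # by ordinary least squares. The data and degree are arbitrary.
    x = np.linspace(-1, 1, 200)
    y = np.sin(3 * x) + 0.1 * np.random.default_rng(1).normal(size=x.shape)

    coeffs = np.polyfit(x, y, deg=9)   # least-squares polynomial coefficients
    y_hat = np.polyval(coeffs, x)      # evaluate the fitted polynomial

    print("mean squared error:", np.mean((y - y_hat) ** 2))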