A 40cm-square patch that renders you invisible to person-detecting AIs

That was my thought too. I wonder what would happen if a significant percentage of autonomous vehicles used an AI that became confused by a combination of sleet and ice pellets in a certain size range.

Also, I expect manufacturers would have to stop running prototypes like this on public roads.

5 Likes

What an odd place to stop testing. It’s not like printing on a t-shirt is difficult or expensive.

3 Likes

This will work so much better if they make the patch a little bigger and wear it a little higher.

Where’s Faffenreffer?

2 Likes

That’s exactly right: they haven’t (yet) been trained to thwart these attacks. My guess is that now that adversarial attacks like this are being publicized, newer models will start taking them into account and training against them. Likely, the attacks will then stop working.

Of course as these “simple” adversarial attacks are defeated, then new ones will be discovered. And then systems will train against them as well.

It will be a cycle, but the AIs will get stronger for it (for better and worse).
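The attack side of that cycle can be sketched with a toy gradient-sign (FGSM-style) perturbation. This is a hypothetical, numpy-only "detector" (a 2-D logistic regression), not any real person-detection model; the defense step would simply add such perturbed examples back into the training set:

```python
import numpy as np

# Toy "detector": logistic regression on 2-D inputs (hypothetical weights).
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    """Probability the detector assigns to class 1."""
    return 1 / (1 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """Nudge x in the direction that increases the logistic loss (FGSM)."""
    p = predict(x)
    grad = (p - y) * w          # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad)

x = np.array([2.0, -2.0])       # confidently classified as class 1
clean = predict(x)
adv = predict(fgsm(x, 1.0, eps=1.5))
print(clean, adv)               # the perturbed input scores lower
```

Adversarial training, in this sketch, would mean generating `fgsm` examples during training and fitting `w`, `b` on them too, which is exactly the arms-race loop described above.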

1 Like

I’m gonna get me one of these.

3 Likes

Cool, but I’m not really into prints. Can you make my AI-defeating fashion a solid color, preferably in a dark blue or gray? Thanks.

But they might get weaker for it. Take those AIs used in Japanese taxis to guess your gender: we long ago made those better than humans. But if you build in a system that effectively second-guesses the determination in order to reduce adversarial manipulation, that may well reduce the rate of correct guesses. Generally, reducing false negatives tends to increase false positives. It’s far from guaranteed that you can increase resilience to attacks without sacrificing some accuracy.
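That trade-off is just threshold movement over the same score distribution. A toy illustration with made-up classifier scores:

```python
# Made-up classifier scores: true positives tend high, true negatives low,
# but the distributions overlap, so no threshold is perfect.
pos = [0.9, 0.8, 0.6, 0.4]   # actual class 1
neg = [0.7, 0.5, 0.3, 0.1]   # actual class 0

def rates(thresh):
    fn = sum(s < thresh for s in pos)   # positives we miss
    fp = sum(s >= thresh for s in neg)  # negatives we flag
    return fn, fp

print(rates(0.65))  # strict threshold: more misses, fewer false alarms -> (2, 1)
print(rates(0.35))  # lax threshold: fewer misses, more false alarms -> (0, 2)
```

Hardening against manipulation amounts to moving (or complicating) that threshold, so some correct answers on clean inputs get traded away.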

1 Like

I prefer to stick to the classics.

3 Likes

They might get weaker. And I think you’re saying that this is a fundamental weakness of all these kinds of algorithms. But I’m not so sure. In general, the more data you throw at these statistical learning algorithms, the better they get, and I think researchers in the past simply didn’t fold these kinds of attacks into their training data.

Of course, I might be wrong. I’m not an AI researcher even though I do know a bit about how neural nets work.

From a purely theoretical point of view, this is a fascinating topic. But from a moral and human point of view, there will be profound consequences if AI algorithms can’t easily defeat these simple attacks…

I like the idea of a convoluted neural network. But, I’m much more familiar with the term convolutional neural network.
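For anyone wondering what the “convolutional” part actually means: it’s just a small filter slid across the input. A minimal numpy sketch (toy image and kernel of my own invention):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in most
    deep-learning libraries): slide the kernel over the image and take
    the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1.0, 1.0]])   # simple horizontal edge detector
print(conv2d(img, edge))         # responds only at the 0 -> 1 boundary
```

A convolutional network stacks many such learned filters, which is what makes it sensitive to local patterns like an adversarial patch.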

1 Like

I would have hoped that the way email got hijacked by bad actors would have convinced people to avoid purely non-adversarial systems. But I guess that’s just asking too much.

Yup. It was recently proved that all this deep learning stuff that’s all the rage these days is computationally equivalent to a really fancy polynomial fit.
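One hedged way to see the flavor of that claim: with a polynomial activation, a one-hidden-layer network literally *is* a polynomial in its input, so an ordinary polynomial fit recovers it exactly. A tiny check with made-up weights (this illustrates the special case, not a general proof):

```python
import numpy as np

# One hidden layer with a squaring activation:
#   net(x) = sum_i v_i * (w_i * x + b_i)^2  -- a quadratic in x.
w = np.array([1.0, 2.0])
b = np.array([0.5, -1.0])
v = np.array([0.3, -0.2])

def net(x):
    return v @ (w * x + b) ** 2

xs = np.linspace(-2, 2, 9)
ys = [net(x) for x in xs]
coeffs = np.polyfit(xs, ys, 2)          # fit a degree-2 polynomial
print(np.allclose(np.polyval(coeffs, xs), ys))  # the fit is exact
```

With the usual non-polynomial activations (ReLU, sigmoid) the correspondence is only approximate, which is why the equivalence claim is debated.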

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.