Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye

Originally published at: https://boingboing.net/2019/03/08/hot-dog-or-not.html

“Hotdog”


Even human systems can be fooled into catastrophic misidentifications, for example, seeing a cell phone, a pack of Starburst, or a wallet as a gun. At least HAL-9000 doesn’t have a gun, yet.


The idea of eliminating these is ridiculous. There are plenty of well-known optical illusions that fool our own eyes. Unless you have direct access to underlying reality, I don’t think you can build a sensor that can’t be fooled into “seeing” the wrong thing.

The problem is that we design stop signs for the human eye; we don’t make them into optical illusions that are easy to miss. But with systems like this, what ends up acting as an illusion is mysterious and unpredictable.
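
That “mysterious and unpredictable” quality has a concrete mechanism: these classifiers can hinge on directions in input space that no human would ever notice, so a tiny nudge along the right direction flips the answer. Here’s a rough sketch of the fast-gradient-sign trick (FGSM, from Goodfellow et al.) on a toy linear “hotdog detector”; the model, input, and epsilon are all invented for illustration:

```python
# A toy sketch of the fast-gradient-sign method (FGSM) behind adversarial
# examples. The model, input, and epsilon are all made up for illustration;
# real attacks apply the same idea to deep networks.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "hotdog detector": logistic regression over a 100-dim input.
w = rng.normal(size=100)

def prob_hotdog(x):
    """Probability the toy model assigns to the class 'hotdog'."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

x = rng.normal(size=100)   # some input the model currently classifies
p = prob_hotdog(x)

# For a linear model, the gradient of the score w.r.t. the input is just w.
# FGSM nudges every feature by a tiny eps in the sign of that gradient,
# pointed toward the opposite class:
eps = 0.1
x_adv = x - eps * np.sign(w) * np.sign(p - 0.5)

print(f"prob(hotdog) before: {p:.3f}")
print(f"prob(hotdog) after:  {prob_hotdog(x_adv):.3f}")
print(f"largest single-feature change: {np.max(np.abs(x_adv - x)):.3f}")
# Every feature moves by at most eps=0.1 (imperceptible, if these were
# pixels), yet the score typically swings far toward the other class.
```

Against a deep network you’d take the gradient through the whole model instead of reading it off the weights, but the punchline is the same: a perturbation bounded to be imperceptible per pixel can still move the output dramatically.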


[image: Capture]

What are you talking about? This is absolutely a hotdog.
