Researchers can fool machine-learning vision systems with a single, well-placed pixel


Originally published at:


Boy, machines sure are dumb.


But we seem to be pretty good at teaching those dumb machines to do various things better than we can.


Dumb Things! :wink:


I think this sort of research is valuable and will obviously lead to more robust systems, but I can’t help being reminded of how flawed non-AI systems are whenever I read about “adversarial perturbations” and fragile AI systems.

Our meaty brains are constantly misinterpreting what our senses are telling us, and while we’re pretty good at dealing with those misinterpretations, we’re far from perfect.

Not that I think that means we need to cut our AI algorithms some slack. One reason for all the effort being put into developing AI is that we expect it to do better than us at the tasks we assign it. And methodically identifying flaws and addressing them permanently is something that’s hard to do with humans, so this sort of research is definitely worthwhile.

I’m not sure what my point is here. I guess I’m just practicing my response to an imagined argument that AI is bad because it can make mistakes when presented with unexpected inputs. I say “So what? Fix it and move on to the next problem.”


Boy, humans sure are dumb! :wink:


I think something we are learning from AI is how complex a balancing act Natural Intelligence does to get its results. I’m not sure it’s as easy as “fix it and move on.” I’m not sure we could have a vision system that doesn’t sometimes hallucinate, and I don’t think a vision system can really work without some system guiding it with some kind of reference back to another system that says, “Could what I’m seeing really be happening or have I got it wrong?”


I agree. I guess that sort of falls under my idea of “fix it and move on”. Meaning, adjust the system so it can tolerate these perturbations without falling to pieces. Whether that means enhancing the algorithms or developing executive systems that work at a higher level, I’ll leave to the researchers.

Maybe my assertion that AI systems can improve on natural ones because improvements are persistent and sharable amongst existing systems is a bit naive. Designing AI is probably a bit more complicated than designing a GUI.


I think AI can probably improve on natural systems, especially when we are talking about specific tasks that didn’t exist when natural ones were evolving. It would be crazy to think our sight couldn’t be bettered by a computer. I just think maybe what we’re imagining right now as a system that can “see” may be far more primitive than we admit.


…says a human with a brain that is susceptible to about 100 different types of “optical illusions”. :face_with_raised_eyebrow:


Damn Skippy.


Haven’t read the paper yet (I saved it for later), so maybe it will answer my question. But here it is anyway:

I work with AI myself, so I definitely see why this is important. (Over-specialization is one of the biggest problems we are still fighting in the field of NN-based AI.)

Does this actually have any relevance to the real world? You don’t throw raw images with arbitrary sizes and color depths into a classifier without normalizing them first. During the training phase you even use random transformations; otherwise you’ll get an over-specialized system that is broken anyway. But the changes you make to the image during that step would break their approach, given that the pixels aren’t the same size and in the same position anymore.
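To illustrate the intuition (this is my own toy sketch, not anything from the paper): a common normalization step like downsampling averages each block of pixels, so a single-pixel spike gets diluted by the pooling factor. Here the image is just a nested list of grayscale values in [0, 1], and `avg_pool` is a stand-in for the resize step.

```python
# Hypothetical sketch: why resizing can dilute a one-pixel perturbation.
# The "image" is a nested list of grayscale values in [0, 1].

def avg_pool(img, k):
    """Downsample by averaging each k x k block (a common resize step)."""
    h, w = len(img), len(img[0])
    return [
        [
            sum(img[r][c] for r in range(i, i + k)
                          for c in range(j, j + k)) / (k * k)
            for j in range(0, w, k)
        ]
        for i in range(0, h, k)
    ]

clean = [[0.5] * 8 for _ in range(8)]
attacked = [row[:] for row in clean]
attacked[3][3] = 1.0  # the single adversarial pixel (a +0.5 spike)

small_clean = avg_pool(clean, 4)
small_attacked = avg_pool(attacked, 4)

# After 4x4 pooling, the +0.5 spike is spread over 16 pixels.
delta = small_attacked[0][0] - small_clean[0][0]
print(round(delta, 5))  # 0.03125
```

The carefully tuned +0.5 perturbation arrives at the classifier as a +0.03125 smudge, which is why an attack optimized against the raw pixel grid may not survive the preprocessing pipeline.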

tl;dr My question is: does this actually have ANY impact on real-world classifiers that I missed, or am I right that this is only interesting for research and future NN architectures?


You can simply use an ensemble of neural networks to solve that problem and gain other benefits as well. It’s related to bifurcation in chaos theory.
!topic/artificial-general-intelligence/itUghRNZWN8
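The ensemble idea can be sketched in a few lines (my own toy illustration, with stand-in "models" as plain functions): a perturbation tuned to flip one model's prediction is outvoted by the rest of the ensemble.

```python
# Hypothetical sketch of ensemble majority voting: a perturbation that
# fools one model is outvoted by the others.
from collections import Counter

def ensemble_predict(models, x):
    """Return the class label that most models agree on."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

# Stand-in "models": each maps an input to a class label.
model_a = lambda x: "cat"
model_b = lambda x: "cat"
model_c = lambda x: "dog"  # the one model fooled by the adversarial pixel

print(ensemble_predict([model_a, model_b, model_c], "image"))  # cat
```

A real one-pixel attack would have to fool a majority of independently trained networks simultaneously, which is a much harder optimization problem.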


This topic was automatically closed after 5 days. New replies are no longer allowed.