Techniques for reliably fooling AI machine-vision classifiers

How about a little chalk?


Salt circles and black magic?


Take a shortcut through the Toontown Tunnel?


That quoted paragraph looks alarmingly similar to a Flossy post.


What worries me about self-driving cars isn’t so much that they will have hallucinations - that might be a minor or a major problem, but if it’s a major one they won’t end up on the road. It’s that they will all have the same hallucination under the same circumstances. We’re basically talking about cloned intelligences, which has all the same problems as cloned organisms.


We’re overdue for a murder attempt by tweaking a large GPS navigation DB to route heavy human-driver traffic through someone’s living room.


Little Bobby Tables has grown up, and now he’s driving a car with a visually perturbed paint job through the city, causing mayhem.


Dazzle paint?


That’s exactly where I went with it. Cars might mistake a cliff edge for an open road? Big whoop, humans do that all the time, it’s why we have guardrails.

It’s the notion that all cars can be persuaded that a particular cliff edge is an open road. Or that a particular type of pedestrian crossing sign is a bouquet of flowers.


I have a car with a lane-keeping assist system. It is useful in that if you drift from the lane center, the steering wheel gives a gentle tug back towards it. I amuse myself by seeing how far I can go with no input other than convincing the system that I am still touching the steering wheel. As long as you are not in an outside freeway lane with various exits, the car will autonomously maintain the lane quite well and also maintain an appropriate following distance. In general, letting the car drive itself requires more attention, since you need to be ready to quickly stop it from doing something crazy. I am hoping that by the time I reach an age where I inevitably become a traffic hazard, safe self-driving cars will be available.


The glory of software is that we can use, and are using, adversarial programs like these to sharpen the intelligence of neural networks.


Been there done that.


For quite a while to come, cars will be stupid. And when they get a bit smarter, they will be silly and start fooling around.

That’s a tricky one. The obvious solution would be to have some kind of variance between each vehicle’s software, but then I guess you’d have to test each car separately before letting it on the road, since you never know which “mutation” is going to screw up the whole thing.

The upside of having every car want to jump off the same cliff is that you’d at least find out about it pretty quickly and either update the software or change the signage around the cliff.

This kind of stuff is only going to get more vexing as AI advances, what happens when you have a system that is demonstrably smarter than you are, but that reliably fails in the real world? You sit down and talk to it, and it convinces you that you’re mistaken, then it puts all your money in time-shares and calls the police when the postman comes round each day. You’d like to get rid of it, but somehow when you try to uninstall you end up buying a second one.


When I was young, I learned that the F-16 fighter jet had fly-by-wire with 3 or 4 computers to interpret the input. If one computer gave an odd result, the others would overrule it.

I think that’s something that could work here. Instead of giving every car unique software, give it several different and unrelated image recognition systems that are all tested as good enough, so when one claims to see something odd, the others can overrule it.
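The voting idea is simple enough to sketch. Here’s a minimal, hypothetical version in Python — the classifier names and labels are made up for illustration, not taken from any real car’s software:

```python
from collections import Counter

def vote(labels):
    """Majority vote across independent classifier outputs.

    If no label wins a strict majority, fall back to a
    conservative 'unknown' so the planner can slow down
    instead of trusting a possibly-fooled recognizer.
    """
    counts = Counter(labels)
    label, n = counts.most_common(1)[0]
    return label if n > len(labels) / 2 else "unknown"

# Three independent recognizers look at the same frame;
# one has been fooled by an adversarial sticker.
vote(["stop_sign", "stop_sign", "flower_bouquet"])   # -> "stop_sign"

# If the recognizers can't agree, nobody gets overruled;
# the system just admits it doesn't know.
vote(["stop_sign", "flower_bouquet", "speed_limit"])  # -> "unknown"
```

The catch, of course, is that the recognizers have to fail *differently* — if they’re all trained the same way, an adversarial patch can fool all of them at once, which is the cloned-intelligence problem again.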


I think by default self-driving systems will have something like this, as they will all be getting multiple sources of input (visual (probably including different wavelengths), radar, lidar, GPS, networked information, driver overrides…) so there will be some kind of comparison among them and then choosing the safest ones.
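“Choosing the safest one” can be as blunt as always taking the most pessimistic sensor. A toy sketch, with entirely made-up sensor names and verdict labels:

```python
# Hypothetical per-sensor verdicts about the road ahead,
# ranked from least to most cautious response.
SEVERITY = {"clear": 0, "caution": 1, "obstacle": 2}

def fuse(verdicts):
    """Pick the most conservative interpretation among sensors.

    A camera fooled by an adversarial patch may report 'clear',
    but disagreement from radar or lidar drags the combined
    decision toward the safer option.
    """
    return max(verdicts.values(), key=lambda v: SEVERITY[v])

fuse({"camera": "clear", "radar": "obstacle", "lidar": "caution"})
# -> "obstacle"
```

Real systems weight sensors by confidence and context rather than always taking the worst case (otherwise a single noisy sensor would brake the car constantly), but the principle is the same: no single input gets to steer alone.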

A little fooling around is okay if it keeps them from being depressed and suicidal.

It’s a bit like a clumsy, not at all scary Dalek…
When friends of mine got a first generation Roomba a couple of years ago it tumbled down the stairs every other week, despite the magnetic boundary strips (or whatever they are called).

As to fully autonomous cars - they are still a long way off. That’d be level 5; the most advanced kit on the market right now barely manages level 2.

However, if cars become truly intelligent, I fully expect them to become bored with routine pretty fast. “All I get to do is drive you to work and home every day, why don’t we go someplace fun?” “I wanna sign up for a stunt driving seminar!” “The Tesla next door has much nicer rims than me. Can I at least get some cool stripes?” “The satnav hates me.” And so on…

I think it was a Rudy Rucker story where IoT devices became self-aware and liked to gossip and start drama between their owners to pass the time. Like your shoes start dropping hints that your wife’s shoes spent a lot of time off her feet when she had that late meeting last night.