Machine learning models keep getting spoofed by adversarial attacks, and it's not clear if this can ever be fixed

Right, this is an interesting point. Current deep neural networks might be able to identify photos with cats or dogs, but they have no notion of cattiness or dogginess, at least not in the way humans do (independent of colour, orientation, perspective, etc.). And humans acquired that knowledge with several orders of magnitude less training data than the machine. Clearly something is not being done “right”.
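To make the spoofing concrete: the best-known attack of this kind is the fast gradient sign method (FGSM). Here’s a minimal sketch in PyTorch; the model choice, the epsilon value, and the variable names are my own placeholder assumptions, not anything from the thread.

```python
# Minimal FGSM sketch: nudge every pixel slightly in the direction that
# increases the model's loss. The result often looks identical to a human
# but flips the classifier's prediction. (Assumes torchvision >= 0.13.)
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    `image` is a 1x3xHxW float tensor in [0, 1]; `label` is a LongTensor
    of shape (1,) holding the true class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the sign of the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The point of the sketch is the sheer smallness of epsilon: a perturbation invisible to a human (no change in “cattiness” at all) is enough to move the model’s output, which is exactly the gap the post above is complaining about.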

I’m actually a huge advocate for machine learning. I just think it should be done properly!

You all seem to be implying that natural intelligence can’t be spoofed by presenting it with carefully chosen falsified inputs: showing it fake news, if you will. I think there’s ample evidence that human intelligence is every bit as vulnerable.

“Machine-learning models”: it's an adjectival phrase, so it takes the hyphen. The way you wrote it causes a miscue on “models”; readers are likely to parse it as a verb. You're welcome.

Why is this a problem? I mean sure, we can design stop signs that can fool AI, but why would we do that?

We can also design stop signs that can fool human drivers, but we don’t do that.

  1. Because we can.
  2. Oh yes, we do. (See 1.)

Hmm, take this observation in conjunction with this one - https://boingboing.net/2018/03/09/study-fake-news-reaches-mor.html - and you might think we’re a lot closer to human-like intelligence emerging from machine learning than we thought.

Everybody knows that greater rheas (aka nandus) live in South America.

… everybody knows you would need a change to your Kfz-Zulassungsbescheinigung Teil II (the German vehicle registration certificate) under FZV § 15 and § 47 for that… Oh.

The problem is that in most CNN cases, people have no idea about the domain of validity. Like, really no idea… What does changing that pixel from black to grey do? Probably nothing, but there is no way of knowing except to try it (see the sketch below). All you can really say is that your training data is meaningful in the context of the model (and I include any validation data in the training data).
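In case it helps, “try it” can be as literal as this: flip one pixel and measure how far the outputs move. A throwaway sketch; the model, image, and coordinates are all hypothetical stand-ins.

```python
# Crude probe of a CNN's sensitivity to a single pixel: change one pixel
# (e.g. black -> mid-grey) and compare the model's outputs before and after.
import torch

def single_pixel_probe(model, image, row, col, new_value=0.5):
    """Return the max absolute change in logits after altering one pixel.

    `image` is a 1xCxHxW float tensor; `new_value` is the grey level
    written into every channel at (row, col).
    """
    with torch.no_grad():
        before = model(image)
        probed = image.clone()
        probed[0, :, row, col] = new_value  # black -> mid-grey, say
        after = model(probed)
    return (after - before).abs().max().item()
```

Usually the answer really is “probably nothing”; the unnerving part is that occasionally it isn’t, and nothing short of trying tells you when.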

The only way around it is an AI, because human intelligence IS ‘how do I beat this game and get my banana’.

If you build it, they will come.

But it’s not just that. We take for granted that a human driver, when faced with an unknown, will slow down and become extra cautious until they can make sure that unknown isn’t going to cause them a problem. An example of bad human behaviour that I used to hear people complain about when I was a kid was that if there is an accident in one direction, cars in the other direction slow down as well. They called it “rubbernecking”, as if people were slowing down just to gawk at the accident.

Recently I’ve been thinking about that. When you are driving very fast you need a reflex to slow down and re-evaluate if you encounter something unexpected. A reflex can push the brake pedal much, much faster than a conscious thought can. It’s important that people have reflexes to make them cautious, because implementing caution takes time.

Which is why I always find people talking about the trolley problem with AIs very silly. If a self-driving car is deciding whether to hit one person or five people, we’ve designed it in an insane way. It should be doing what a human would be doing in the same circumstance: using fast, low-analysis thought to brake and/or swerve (a toy sketch of that idea follows). That is, unless the computers in driverless cars are many, many times better at understanding the real world than human brains.
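Purely to illustrate the “reflex first, deliberation later” idea, here is a toy control loop. It is an invention for this post, not how any real driverless stack works: low perception confidence or an unrecognised obstacle triggers braking immediately, before any higher-level planning gets a say.

```python
# Toy "reflex layer": anything uncertain or unexpected means slow down NOW,
# and let slower deliberative planning catch up afterwards. Entirely
# hypothetical; real AV stacks are far more involved.
from dataclasses import dataclass

@dataclass
class Perception:
    label: str         # e.g. "clear_road", "pedestrian", "unknown"
    confidence: float  # classifier confidence in [0, 1]

def reflex_control(p: Perception, confidence_floor: float = 0.9) -> str:
    if p.confidence < confidence_floor or p.label == "unknown":
        return "brake_hard"          # the reflex: don't analyse, just slow
    if p.label != "clear_road":
        return "slow_and_reassess"   # caution while deliberation runs
    return "maintain_speed"

# reflex_control(Perception("unknown", 0.4)) -> "brake_hard"
```

Notice there is no trolley-style weighing of one person against five anywhere in it; by the time such a dilemma is even well-posed, the reflex should already have shed most of the speed.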

It wouldn’t make any sense at all to think that human brains are the best suited machines for safely driving a car, but I get the sense we are currently thinking of many of the best features of brains as bugs for machine learning to solve, rather than understanding their utility.

(also @Timoth3y)

It is, but I think Randall’s thinking is kind of wrong here. People generally don’t try to kill one another, but people do stupid things that can get other people killed all the time.

If you really stop and think about what altering a stop sign to make AIs drive right through it means, you ought to be able to figure out that it could get someone killed. But young people in search of a laugh don’t always stop to think through the consequences of their actions.

Sure, some amount of news coverage would get through to people that putting stickers on stop signs to fool driverless vehicles is bad. But also some amount of news coverage would make people think driverless vehicles were terrible death traps that can’t be trusted and basically sink the industry. I think the latter might have a lower threshold than the former.
