First driverless shuttle in Las Vegas crashes on first day while shuttling passengers

Well, “First driverless shuttle in Las Vegas is crashed into while stationary under circumstances where a human driver would have likely prevented the crash on first day while shuttling passengers.”

But I think what this incident demonstrates is that while AI might end up being better than human drivers, it is going to be different from human drivers. There will be scenarios where the vast majority of human drivers succeed but AIs, for some reason, fail. Like those adversarial perturbations of images: we might not even be able to see what was changed, but they cause an AI to totally misinterpret the data.
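To make that concrete: an adversarial perturbation is a tiny, targeted nudge to the input in the direction that most hurts the model. Here's a toy sketch of the idea (FGSM-style, against a made-up linear classifier — nothing to do with any real car's perception stack):

```python
import numpy as np

# Toy adversarial perturbation: nudge each input feature a small
# amount in the direction (the sign of the gradient) that most
# lowers the classifier's score. Model and numbers are invented.

w = np.array([1.0, -2.0, 0.5])   # weights of a toy linear classifier
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([0.4, 0.1, 0.2])    # a "clean" input, classified as 1

# For a linear model the gradient of the score w.r.t. the input is
# just w, so stepping against sign(w) is the worst-case tiny nudge.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # 1 0 — small nudge, flipped label
```

Real attacks do the same thing against deep networks, where the per-pixel changes can be too small for a human to notice — which is the unsettling part: the failure has no human-visible cause.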

Driverless cars could reduce accidents by 50% or 90%, but some of the accidents they do get into will seem inexplicable — the kind of situation a human driver would never end up in.

What concerns me about this situation is that apparently the algorithm really doesn't account for someone else driving aggressively and badly. The AI didn't anticipate that a vehicle would just slowly back into it. In driving, like in pretty much everything else, some people are aggressive assholes who just assume that other people will get out of their way (often figuratively, but in driving sometimes literally). Maybe we'd be a better society in the long run if we didn't so consistently get out of the way for these people, but in the short run it would cause enough strife (or car accidents) that we keep rewarding a certain amount of belligerence. If, instead, we program the AIs to give way the same way human drivers do, that rewards bad drivers further by making it more predictable how much leeway they'll be given.

Right now there is a balance of power between an aggressive asshole driver and a person who accommodates them. Being more willing to get into a confrontation (or accident) gives you a certain amount of room to shove another person around, but that room does have its limits, and both parties are constantly gauging those limits.

In order to drive on the roads as they are, you need to make social judgments about power, and I think that's probably way out of scope for the AI. That means that other people's guesses about how an AI will act will be wrong. It's the sort of thing that in the short run could increase accidents. So I think it's too simple to say that the AI isn't at fault because a human driver in the same situation wouldn't have been at fault. I think it's too simple to say that AIs are better drivers than many humans. There is going to be a transition from not having AIs on the road to having many or mostly AIs, and pointing out that humans are also bad drivers is papering over how painful that transition might be.

Some of the worst, most dangerous drivers think they are very good drivers. We may be headed for a future where the only cars on the road are well-behaved AIs and crazy aggressive assholes.
