Tesla 'assertive' mode is too aggressive, 53,822 cars recalled

This all gets even more monstrous if you understand how these algorithms work and why there is an “assertive mode” at all.

First: I spent 20 years working in AI, with an emphasis on autonomous navigation systems for complex environments. I know what I’m talking about here.

What you have to understand (and the general public does not) is that self-driving consists of a couple dozen of what we call Hard Problems in computer science: problems whose solutions can’t really be calculated, only approximated. The whole system is probabilities. That’s how all this “machine learning” stuff you hear about works. Probabilities. Siri cannot identify the phrase “what time is it”. What it can do is take a long string of noise and map it to a set of likelihoods: the likelihood that the noise was “what time is it”, or “play Weezer”, or “call mom”. That’s all “machine learning” is. It is so much more primitive than the lay public thinks it is.
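To make that concrete, here’s roughly the shape of what a speech model outputs. This is a toy sketch in Python; the candidate phrases, the scores, and the softmax step are all made up for illustration, not anyone’s actual code:

```python
import math

# Toy sketch: what "recognizing a phrase" looks like inside an ML system.
# The phrases and scores below are invented for illustration; a real system
# runs a neural network over the audio to produce the raw scores.

def classify_utterance(raw_scores):
    """Turn raw model scores into a probability for each known phrase."""
    # Softmax: convert unnormalized scores into probabilities that sum to 1.
    total = sum(math.exp(s) for s in raw_scores.values())
    return {phrase: math.exp(s) / total for phrase, s in raw_scores.items()}

# Hypothetical scores the model assigned to one noisy utterance.
probs = classify_utterance({
    "what time is it": 4.1,
    "play Weezer": 1.3,
    "call mom": 0.2,
})

for phrase, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{p:5.1%}  {phrase}")
# The system never "knows" what you said; it just picks the most likely
# candidate. Here "what time is it" wins at about 92.5%.
```

Notice that even the winning phrase never reaches 100%. That leftover uncertainty is exactly the failure rate you have experienced firsthand.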

None of this is ever EVER guaranteed. The car does not know there is a stop sign there. It knows there is an 82% chance that this frame of video contains a stop sign and a 71% chance that it is in our lane and an 89% chance it is close enough that we should consider stopping for it.
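If you want to see why that’s scary, multiply it out. A sketch using those same numbers, with the simplifying (and unrealistic) assumption that the estimates are independent; real stacks combine these signals in far more sophisticated ways:

```python
# Toy sketch: the car does not "see" a stop sign, it computes a chain of
# confidences. The numbers mirror the ones above and are purely illustrative.

def should_stop(p_sign, p_in_lane, p_close_enough, threshold):
    """Decide whether to brake, given independent confidence estimates.

    Treating the estimates as independent (a simplification real systems
    don't get to make), the joint confidence is their product.
    """
    p_stop = p_sign * p_in_lane * p_close_enough
    return p_stop, p_stop >= threshold

p, brake = should_stop(p_sign=0.82, p_in_lane=0.71, p_close_enough=0.89,
                       threshold=0.50)
print(f"joint confidence: {p:.0%}, brake: {brake}")
# 0.82 * 0.71 * 0.89 is about 52%: three individually confident signals
# combine into a decision the car is barely more sure of than a coin flip.
```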

This is where “assertive” mode comes in. Because the algorithms are never sure about anything (and, to be clear, never will be), you need “fudge factors”. How sure is sure enough that it’s a stop sign and we should stop? If you set the threshold to 100%, the car will run every single stop sign, because the system is never 100% sure. If you set it to 85%, the car will stop randomly in traffic for red birds and polygonal logos on signs. There’s no value that works in every situation, so they are beta-testing these sets of fudge factors to see which works best. “Assertive” mode is one of those sets. No fudge factor will always work. It’s about picking where you want your errors to land: do you want the car to fail by occasionally stopping for nothing (which you see the Google and DARPA test vehicles do all the time), or do you want it to run the occasional stop sign? Both are dangerous in different ways. This technology is not safe.
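Here’s a toy simulation of that trade-off. Everything in it is invented for illustration (the confidence distributions, the thresholds), but it shows why every threshold fails somewhere:

```python
import random

# Toy illustration of why no single "fudge factor" works: sweep a decision
# threshold over simulated detector confidences and count both failure modes.
random.seed(0)

# Simulated confidences for 1,000 real stop signs and 1,000 non-signs
# (red birds, polygonal logos...). Real signs usually score high but not
# always; non-signs usually score low but not always. Distributions made up.
real_signs = [min(1.0, max(0.0, random.gauss(0.85, 0.10))) for _ in range(1000)]
non_signs  = [min(1.0, max(0.0, random.gauss(0.40, 0.15))) for _ in range(1000)]

for threshold in (0.60, 0.75, 0.90):
    ran_signs = sum(c < threshold for c in real_signs)   # failed to stop
    phantom   = sum(c >= threshold for c in non_signs)   # stopped at nothing
    print(f"threshold {threshold:.2f}: "
          f"ran {ran_signs} real signs, {phantom} phantom stops")
# Raise the threshold and the car runs more real signs; lower it and it
# slams the brakes for nothing more often. "Assertive" is just one point
# on this curve. You choose where the errors land; you don't eliminate them.
```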

All of this should, of course, terrify you. These cars should not be on the road, and no ethical AI engineer would say otherwise. You know all those times Siri misunderstood you? In a self-driving car, that same failure means someone just died. They are the same class of algorithms.

The other thing about self-driving is that it’s a classic 80/20 problem in computer science: 80% of the work is getting the last 20% right. People think that because we’ve seen a few controlled demos, self-driving cars are imminent. They are not. We will not see fully autonomous vehicles on open public roads in our lifetimes; that is way, way further away than people think. They see the 80% and conclude we’re “almost there”. No. The last 20% (the part that keeps the cars from killing people) is still decades of work.

This is a frustrating topic for me, because the mainstream press gets every single detail about this technology wrong and the general public is way too optimistic about it.
