Uber's autonomous vehicles require frequent human intervention

I bet Volvo is really quite happy about all this. /s

Who really is overseeing the testing that is allowing these cars to operate on the open road? Our congresswoman Debbie Dingell (wife of the long-serving and curiously gun-friendly “D” John Dingell) hustled through some legislation designed, by many accounts, to reduce barriers to on-road testing:

“Establish new exemptions for motor vehicle safety standards to make easier the development or field evaluation of highly automated vehicles”

Not looking to assign blame, but in the face of a massive technology gap, as others have mentioned, perhaps this Uber setup should not have been on the road. The Google testing is almost ALL edge scenarios, virtual and actual, from what I understand. Have to wonder if Uber is spending all that investment on subsidizing their substantial per-ride losses instead of on R&D and testing.


I think that Kevin Drum gets it right here:


Uber is exactly the kind of company which should not design auto autos. The move-fast-and-break-things ethos is fine when the things are websites, but criminal when the things are people. What is needed here is cautious engineers who do things The Right Way.


As pointed out, Google was on the road with similar difficulties. Perhaps it just got lucky, perhaps there are fundamental flaws in Uber’s technology.

Honestly, I’ll be incredibly surprised if self-driving technology reaches any sort of generally usable “safer than humans” state without at least a thousand deaths.

As for Google being “almost there”: um, no. My guess is that it (and others) are perhaps 10% of the way there. Why? Because for true self-driving, we don’t need 99% accuracy, we need 99.9999% accuracy, which may be the difference between requiring a billion-dollar investment and a 10-trillion-dollar investment.
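A quick back-of-envelope in Python makes the gap concrete. All the numbers here are illustrative assumptions (I'm treating "accuracy" as failure-free miles, and using a rough figure of ~3.2 trillion US vehicle-miles per year), not measured data:

```python
# Back-of-envelope: what "99% vs 99.9999% reliable" could mean per mile driven.
# All inputs are illustrative assumptions, not measured data.

annual_miles_us = 3.2e12  # rough US total vehicle-miles per year

for reliability in (0.99, 0.999999):
    failure_rate = 1 - reliability            # failures per mile, under this reading
    failures_per_year = annual_miles_us * failure_rate
    print(f"{reliability} reliable -> {failures_per_year:,.0f} failures/year")
```

At 99% that's on the order of tens of billions of failures a year fleet-wide; at 99.9999% it's a few million. Four extra nines is not a polish pass, it's most of the problem.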

Acceptable self-driving under highly limited conditions will probably be a reality in 5 years.

However, I’ll be amazed if we get the real deal (say a non-driver in a car travelling back roads in a snow storm with no network connectivity) within my lifetime (say another 30).

I think of this a lot like the medical discoveries I keep reading about. Successful trials and then 5 years later wondering why I never heard about it again. There’s a hell of a lot of reality between being “90% there” and actually being there.


I would have assumed that they would want a basic level of functionality before road testing, just for liability and PR issues if nothing else. But, knowing them, they were probably counting on pushing responsibility off on the human “backup” driver. Given the scrutiny this case is receiving, the “driver” may end up facing some serious consequences, unlike almost all cases of a human driver hitting and killing a pedestrian.

My suspicion is that we are near the point where self-driving vehicles are ready for the interstates but not local streets. So the question is: how do you use THAT technology? My guess would be automated long-distance semis driving between truck stops, where the trailers are swapped onto actual driven trucks for delivery. Even while the technology is still quite expensive, it makes sense if it allows the long-distance trucks to be utilized 24/7 and not be covered by maximum-hours rules.


I think it’s just a time issue - Google’s been working on this for a lot longer, and Uber’s desperately trying to keep up (see: stealing trade secrets, cough, allegedly) because even the threat of autonomous vehicles is devastating to their business model, and therefore their ability to still attract suckers AHEM investors.

It went from ~1250 in 2015 to 5600 in 2016. That’s an enormous improvement in just one year, and strongly suggests that they’re running down that last handful of edge cases that are the trickiest to solve, which is about where I expected them to be, based on the general reports I’m hearing and how the industry is positioning itself in general.
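For scale, a tiny sketch of that jump. Note that reading the quoted figures as miles per disengagement is my assumption about the units:

```python
# Sketch: year-over-year improvement in the quoted figures.
# Treating ~1,250 and 5,600 as miles between disengagements is an assumption.

miles_2015 = 1250
miles_2016 = 5600

improvement = miles_2016 / miles_2015
print(f"{improvement:.1f}x improvement year over year")  # 5600/1250 = 4.48

# Even so, one intervention every ~5,600 miles is still a long way from the
# ~100 million miles between fatal crashes for human drivers (rough US figure).
```

A ~4.5x annual improvement is genuinely impressive, but the remaining gap to human-level fatality rates is orders of magnitude, which is the other poster's point about edge cases.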

The very fact that they are “edge cases” makes them difficult (read time consuming and expensive) to work on.

Or any one of those.
Isn’t the big pitch for self-driving technology that it can transport non-drivers?

So ironically, the car may have done better without all the Uber tech, even with a distracted driver.

Why would anyone in their right mind, who’s been following Uber, expect them to do autonomous vehicles well enough? (BTW: still waiting for an explanation of why the LIDAR, which theoretically sees in the dark, didn’t.) I mean, Uber is based on the unicorn business model: build a business, albeit an unprofitable one, then cash out with an IPO. Theirs has of course been delayed because the livery business is so low-income and modestly profitable that making it all worthwhile is very difficult if not impossible. And the money’s not there to develop their own autonomous vehicles, as opposed to waiting and using a better, off-the-shelf system developed by others.

Seeing that objects are there is one thing, and figuring out if and where they are likely to move is a different thing. You can’t simply stop for every pedestrian standing on the sidewalk on the off chance that they might move. You really have to look at them and gauge their intent. Do they see you? Are they moving towards a crosswalk, or a parking meter? Are they waving you on, intending to cross the street behind you? Is the bicyclist looking over his shoulder, getting ready to merge through traffic to get to the left-turn lane? It will be a long time before computers are good at that, even when you’re only asking them to be as good as imperfect people.
OTOH, what computers are good at is paying just as much attention after 8 hours of a boring task as when they started. Thus my contention that the day when they are better than humans on long stretches of interstate is coming soon, even if they aren’t ready for neighborhood streets.


But a good course of action is to slow down when you see a potential hazard by the side of the road: a bicycle, a kid, or a deer. The Uber car didn’t even do that.

Of course a computer doesn’t have to “cover the brake” to reduce reaction time. That was one of the actually useful things I got out of Driver’s Ed: when you see somebody ahead who looks like they MIGHT do something stupid, take your foot off the gas and put it over the brake pedal, just in case.


Maybe. But my fear is that computers tend not to have graceful degradation. We don’t let humans who have narcolepsy or epilepsy drive for exactly this reason. I expect that we’ll see a few “fully loaded transport truck drives full speed into stopped traffic killing a hundred” from self-driving cars over the next decade.

The thing is, it’s absolutely possible to have that occur a few times a year, and still have it be a lot safer than human truck drivers. It’s a bit like airplane accidents. Much safer than driving, but such accidents occupy a way bigger mind share.
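A toy comparison shows how that can be true. Every number below is a made-up assumption (fleet mileage, crash counts) except the rough US human fatality rate of ~1.2 deaths per 100 million vehicle-miles:

```python
# Hypothetical: a few catastrophic automated-truck crashes per year can still
# beat human drivers per mile. All inputs below are illustrative assumptions.

human_fatal_per_100m_miles = 1.2   # rough US figure
automated_miles_per_year = 100e9   # assumed national automated-fleet mileage
catastrophic_crashes = 3           # assumed events per year
deaths_per_crash = 100             # the worst case from the post above

automated_deaths = catastrophic_crashes * deaths_per_crash
human_equiv_deaths = automated_miles_per_year / 100e6 * human_fatal_per_100m_miles

print(f"automated: {automated_deaths}, human-equivalent: {human_equiv_deaths:.0f}")
```

Under these assumptions the automated fleet causes 300 deaths where human drivers would be expected to cause ~1,200 over the same miles, yet the 300 would arrive as a handful of horrifying headlines, exactly the airplane-accident mind-share problem.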

Anyway, I’ll be interested in seeing the accident analysis, as I wonder if this is a fundamental design flaw or an “oh, this should have been a minus, not a plus” error.

(Non programmers often don’t realize how a trivial error can instantly result in massively non-trivial changes in behaviour because humans don’t work that way. Non-programmers also sleep better at night.)


On an unrelated point: is there any worse job than being the safety driver in an autonomous vehicle?

Your job is to not touch anything, but be ready to take full control of the vehicle on perhaps half a second’s notice, for hours at a time?

That’s of course an impossible job for a human. But then you get to be responsible for not taking over when the vehicle does harm?

That is my definition of hell.

(And no, I don’t know if the driver is held legally responsible (although I think so), but you can also be certain they’ll feel emotionally responsible for the rest of their life…)


Arizona is a capital punishment state.

Seeing that objects are there is one thing, and figuring out if and where they are likely to move is a different thing.

Per news reports, the Uber did not apply brakes at any point. There’s no way that is the designed behavior for something in the path of travel. Late braking is a complexity problem. Zero braking is a software error that killed someone.


Another thing of interest: Uber’s recording seems inordinately dark. Ars Technica reviewed some interesting footage from other drivers in the same area at the same time of night, and things are much better illuminated.
