For jaywalking, it may turn out.
And no jokes about Zunes killing people.
This isn’t some giant “baffling” mystery. Shit goes wrong, and sometimes people don’t know why. The claim of autonomous vehicle developers is that shit will go wrong less with their sensors than it will with human drivers. Hearing them say they are “baffled” drives me nuts; it makes me think their equipment’s failure rate is higher than they believe it is. It’s very confidence-sapping.
There is no “last handful” of edge cases. There are 1-in-1,000 cases, 1-in-1,000,000 cases, 1-in-1,000,000,000 cases, and on and on. Every tier you go up, the problem gets harder and more time-consuming to solve while giving you less and less benefit. That doesn’t mean they can’t get an operational self-driving car to market in the next few years, but I’m sure not sold on the idea that they can.
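To make the diminishing returns concrete, here’s a hypothetical back-of-the-envelope sketch (all numbers are illustrative, and the one-“event opportunity”-per-mile assumption is mine, not measured):

```python
# Hypothetical illustration of diminishing returns across edge-case tiers.
# Assumes one "event opportunity" per mile driven; rates are illustrative.
MILES = 1_000_000_000  # a billion miles of driving

for one_in in (1_000, 1_000_000, 1_000_000_000):
    failures = MILES // one_in
    print(f"1-in-{one_in:,} edge cases: ~{failures:,} failures over {MILES:,} miles")

# Fixing the whole 1-in-1,000 tier removes ~1,000,000 failures; fixing the
# 1-in-1,000,000,000 tier removes ~1 -- for far more engineering effort.
```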
Though a very likely possibility is that a sensor just plain wasn’t working, or that a wire was frayed near the connector. As with your examples from this post, it could just be a computer system having degraded ungracefully.
Considering that in this circumstance, the pedestrian was actively moving across the street after already crossing one and a half lanes of open road to the left of the Uber vehicle, I don’t think there’s much point in discussing how effectively Uber’s software can judge pedestrian intent. It seems to just be blithely ignorant of pedestrians as a concept.
That’s the sensor manufacturer saying that - what they’re essentially saying is, “Our equipment works fine and is perfectly capable of detecting someone in those circumstances. We don’t know what Uber did to fuck things up, but the fuck-up is entirely in Uber’s court.” Which seems to be the case - this was a textbook example of something the car should have been able to see. Something went seriously wrong.
That’s fair, in a technical sense, but I should clarify that I really meant “the last handful of edge cases that prevents AVs from performing equally to, or slightly better than, the average human driver”, which is the current meaningful goal for AVs.
Once they hit that point, everything changes, and it seems like we’re not too far away from it.
A quick Google suggests that you should expect crash-related injuries about every 1.3 million miles driven, and fatalities about every 100 million miles driven, on broad average. (I unfortunately couldn’t easily find data on incidents without injury or fatality).
Yes, someone had to intervene once every 5,600 miles in a Google AV, but it’s unknown how many of those interventions would have resulted in anything worse than the vehicle coming gently to a stop until the inciting conditions resolved.
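Just to put the quoted figures side by side (a rough comparison only; a disengagement is not the same kind of event as an injury or a fatality):

```python
# Rough side-by-side of the figures quoted above. Not apples-to-apples:
# a human intervention (disengagement) is not itself a crash.
human_injury_interval_miles = 1_300_000      # ~1 injury crash per 1.3M miles
human_fatality_interval_miles = 100_000_000  # ~1 fatality per 100M miles
av_intervention_interval_miles = 5_600       # ~1 intervention per 5,600 miles

print(f"AV interventions per human injury interval:   "
      f"{human_injury_interval_miles / av_intervention_interval_miles:,.0f}")
print(f"AV interventions per human fatality interval: "
      f"{human_fatality_interval_miles / av_intervention_interval_miles:,.0f}")
# ~232 and ~17,857 respectively -- which is exactly why it matters how many
# of those interventions would actually have become accidents.
```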
But again, the shocking rate of improvement strongly suggests that, as you said, they’re getting pretty far down the probability line over there in chasing down those pesky edge cases, and I’m excited to see how much it jumps over the next couple of years.
Uber was a scam from day one, and remains one.
Well, that is an upper bound. It assumes “require human intervention” means there would be an accident if a human didn’t intervene. A situation where an autonomous vehicle refuses to enter an intersection because of unclear signage and markings or another car being in a weird place would also constitute requiring intervention.
Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera.
I just did some reading to be clear I understood this. Basically, the SUV they were using comes with a collision avoidance system that will detect impending collisions and automatically brake. The car is still meant to be driven by a person; it’s a safety feature to try to help a human driver.
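For the curious, the core idea behind this kind of system can be sketched in a few lines (a time-to-collision check; the actual thresholds and sensor-fusion logic in the Volvo system aren’t public, so every number here is an illustrative assumption):

```python
# Minimal sketch of a forward-collision braking check: if the time to reach a
# detected obstacle drops below a threshold, brake automatically. Illustrative
# only; the real system's logic and thresholds are not public.
TTC_BRAKE_THRESHOLD_S = 1.5  # hypothetical threshold, not the real value

def should_auto_brake(range_m: float, closing_speed_mps: float) -> bool:
    """Return True when time-to-collision falls below the braking threshold."""
    if closing_speed_mps <= 0:  # not closing on the obstacle at all
        return False
    time_to_collision_s = range_m / closing_speed_mps
    return time_to_collision_s < TTC_BRAKE_THRESHOLD_S

# Example: obstacle 20 m ahead, closing at 17 m/s (~38 mph):
print(should_auto_brake(20.0, 17.0))  # True (TTC ~1.2 s, below threshold)
```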
In a way it makes sense that if you were designing and testing an autonomous vehicle you’d want to deactivate someone else’s autonomous driving system. Still, the fact that Uber actually managed to make an autonomous vehicle that was worse at detecting pedestrians than the manufacturer default is… incredible?
It isn’t terribly surprising as a response from a company of Uber’s sterling reputation (they only got rid of the ‘hey, it’d actually be pretty inexpensive to have inconvenient journalists harassed into silence, just thinking out loud here’ guy under duress during the Kalanick ouster, and there was the whole “nah bro, I’m pretty sure the private medical records we obtained and retained show that she’s just pretending to be raped as part of one of our competitors’ schemes” incident); but it’s an astonishingly poor retort if you think about it for more than about a second (made even worse by the immediate flurry of people producing dashcam videos of the same street that hadn’t been sent through the ‘grimdark’ filter):
Even if the road had been as dark as it was made to look: “laser is the sauce”, right? Oh, wait, you switched to a single LIDAR system from the prior seven and cut the camera count in half. Also, headlights?
Since the road does not appear to be particularly dark in the video provided by independent observers with totally unexceptional cameras, you are left with the “actually trying to do this with cameras worse than your average cellphone, or just blatantly dishonest?” problem.
Not a good look.