If only there were some kind of non-visible spectrum the car’s sensors could have used.
I’ve walked my bike across the road regularly. Usually the answer is “because I can” or “I felt like it.” Sometimes you just don’t feel like riding the bike, and it’s not like a person is required to ride a bike if they have it and it’s not an unsafe thing to do.
I haven’t seen the video because I just don’t need to see that, but I don’t know whether the woman looked at the car to see if it was going to stop before she walked out. If she misjudged the situation, that’s part of the problem; one should never walk into traffic expecting cars at speed to slow down. Maybe such a thing is common in some areas, but it’s not something I would do unless I had eye contact with the driver beforehand.
Because every developer of the technology has to test their systems and gather real-world data to do the research needed for full automation. Testing in controlled conditions is done, but it doesn’t reproduce real-world conditions. That’s also why self-driving test cars are heavily regulated.
Well, not everybody sees the same at night, my post-LASIK eyes being one particular example. And I am skeptical that the lighting in Mr. Beschizza’s photo accurately represents what a human would see: the building in the top left looks like its lights are out for the evening, and there are different exposures and a wide-angle camera lens to account for. The driver who did see her at the last minute said she appeared out of nowhere. Zero indication from the pedestrian that she knew a car was coming.
But the lidar should have picked her up.
Headline should read:
Person Hit by an Uber Self Driving Car, “Driver” Subsequently Thrown Under a Bus.
Exactly. We need to stop asking questions about the pedestrian, or about how a human would have fared, or whether the human backup driver did her job. This could have been someone lying in a crosswalk (having tripped or suffered a heart attack), or two eight-year-olds crossing the road while talking about fishing, or anything. Seeing those unusual obstacles is precisely what those damned LIDAR systems are supposed to be doing.
Someone called this “autonomous driving’s Apollo 1 moment”. We can hope that the industry and lawmakers see it that way. And if they don’t, it’s up to us to force them to see it that way.
-
Hey, look at that, the reason we have those jaywalking laws in the first place… I only slightly kid. The laws were put in place to prevent things like this, but of course they are mainly enforced for harassment.
-
I am sure the number of people hit crossing the road at night runs into the thousands when there are people behind the wheel.
-
This Uber vehicle SHOULD have picked this up and stopped. Hell, they advertise the collision-alert system working on deer and such.
-
The tech isn’t perfect yet, but it is improving rapidly. It’s been less than 15 years since the first DARPA Grand Challenge, where none of the autonomous cars finished the course.
-
Once the tech is perfected enough for the “real world”, it will eclipse humans in safety, though there will still be SOME fatalities.
We could easily get zero fatalities. All it takes is for no one to drive anywhere at any time.
This kind of goes back to the other thread, about the driver turning left and the cop who ran the yellow and almost hit them… My thought process is about risk and safety. Agreed, sometimes you just want to walk and not ride, for whatever reason; however, when I am crossing a road I prefer to get across as quickly as possible to avoid something tragic.
Like I said, it’s a question I would ask; it’s definitely not blame or a cause for the accident. Any human driver would have been able to slow down in time and allow the person to cross freely. Maybe they’d honk the horn, curse at them, be aggravated in some form or fashion… but the pedestrian would be alive and no worse for wear.
I too would love to see the sensor data.
I see a lot of potential in self-driving cars and think their development is pretty important. So much so that I think developers of self-driving cars should be given immunity from civil lawsuits in exchange for freely sharing their data.
A lot of our engineering and building codes have developed over decades. Catastrophes are big motivators for improvements to those codes. The analog in the self-driving car space would be to share all of the sensor data so that all developers can learn from this incident.
Hardly. It just had a different criterion than the driver and Uber. The robot car meant to hit her!
The vehicle considered the minor delay from colliding with the person to be only a momentary problem affecting the designated estimated time of arrival, one which could be rectified by simply going faster after the pedestrian’s removal. (Also a total flashback to the Knight Rider 2000 television movie.)
I welcome my new robot overlords.
Please don’t kill me.
If only there were a way to test in real-world conditions without actually endangering human life. A way to, I don’t know, close the course the robot car is driving on, and then throw simulated human-like obstacles in its path with no warning. A way to stress test its ability to see and avoid pedestrians, without actually endangering any.
Seriously, though, why do I have the feeling that Uber never bothered to torture-test their robot car’s collision-avoidance systems before they rushed it out onto the road? They are one of the tech industry’s most sociopathic companies, and the entire point of putting these autonomous vehicles out on the road seems to be less about actually having a car that is anywhere near ready for beta testing, and more about bolstering the confidence of their venture-capital investors that they are on track to eliminate all that pesky labour overhead and get their balance sheet out of the red.
This is something that should never be allowed to pass.
I mean, unless we apply it to aircraft black boxes retroactively.
The “driver” seemed to spend most of their time looking down into their lap. I’m not convinced they wouldn’t have had more warning had they been scanning the road ahead like a normal driver would (should?) have been.
That was an unbelievably epic fail from the Uber AI. The victim presented just about the simplest “pedestrian in the road” problem that could be designed. Lidar should have seen her, the warm-blooded pedestrian should have been highly visible to IR, and she was also helpfully wheeling a decent-sized metal object that would assist radar-based sensors. If the Uber AI couldn’t see her from 100m, then it was simply not working at anything like an adequate level.
It might have been difficult for a human driver to spot her if she was dressed darkly and not showing any reflective material, but the AI should be able to spot her 100% of the time. She didn’t “step off a curb”; she was in the adjacent lane for some time before the collision. That’s a key spot where the AI should be scanning.
I get that the AI has to try to distinguish pedestrians on the edge of the road that it doesn’t need to stop for, and that’s a super-tricky problem (as are bicycles), but this wasn’t that. She was already on the roadway and moving steadily towards the path of the vehicle.
I’m not anti-intellectual-property or anything, but I think there’s a legitimate public interest in having the sensor data shared to improve all self-driving cars. Yes, it may leak secrets that Uber wants to keep, but there are lives at stake.
Yeah, I would have thought the obvious thing to do would be to build the autonomous cars but leave the humans in charge, while recording telemetry on what the humans do versus what the software would have done. Then look at accidents the human got into that the software would have avoided, versus accidents the software would have caused that humans didn’t, and only let the cars drive themselves after you already know they’re safer than humans. It’s entirely possible Uber did do this, but I’m not inclined to give them the benefit of the doubt.
I would argue against using this comparison as an example, since basically every independent (meaning non-pharma-funded) cost-benefit analysis of the FDA shows that such an approach would save more lives (or QALYs, or your metric of choice) on net, by bringing good drugs to market faster and recalling bad ones after launch, than it would cost by letting bad drugs onto the market at all.
Yes. But it shouldn’t shield them from liability for harm they cause.
Yes, she “appeared out of nowhere” because the driver wasn’t fucking looking. Cyclists and motorcyclists have heard this SMIDSY (“sorry mate, I didn’t see you”) excuse too many times.
Once we are all plugged into a computer at birth like the Matrix, this will be the reality.