What’s pretty damn naive is giving the tech industry’s self-driving snake oil a pass on killing people. None of the companies working on autonomous vehicles have a car that can be trusted on the roads, but that hasn’t stopped them from pumping up the hype and the bullshit, nor has it stopped Uber from deploying their fucking prototypes before they even get the “don’t run into anyone” part of the program debugged. Not because they need to test and debug their system, but because putting self-driving prototypes on the road is a PR move designed to gull their venture capital investors into continuing to pour money down the rathole that is Uber’s balance sheet.
Autonomous driving in town is a “Hard AI” problem. Clue: that means it’s not going to fucking happen in a safe way until we get computers that are as smart as we are, which, if you haven’t been paying attention, is just as much a pipe dream now as it was in the ’60s.
A million times this. To all the tech apologists saying we need to see the safety records for miles driven compared to a human driver: this.
I would not disagree with this, but try telling it to my 13-year-old self.
I routinely see packs of kids ride across the street, not walk their bikes. And while I think the onus is definitely on the driver to watch out for the cyclist/pedestrian and drive defensively and cautiously… it doesn’t hurt to ask pedestrians/cyclists to exercise the same caution. Which I think gets to the point you made.
Yeah, I definitely don’t mind it being pointed out that it is the law there; a citation is just part of what I’d expect to see, otherwise it’s speculation. I actually don’t know if walking the bike across intersections is required in all the states I’ve lived in; it’s never been mentioned to me.
The earlier part of my statement was “proving safety (mostly) and efficacy (mostly)”.
Those “mostly”s are important. There are obviously side effects present. And the efficacy is frequently not as good as we assume. However, there is usually a fairly good assurance that they are not going to kill you (excluding many cancer treatments, which are trying to selectively kill you, hopefully). In fact, a drug causing heart failure as a side effect is a common reason for recalls when it wasn’t found in initial trials.
There’s a definite difference between “side effect” level of harm and something larger, such as heart failure and death.
There’s also a reason that some drugs are prescription and not just over the counter. Risk of side effects is one of them, though not the only reason. Things that are “just on the shelf” for anyone to buy should have significantly fewer side effects, or at least less harmful ones. Maybe you break out in a rash, puke your guts out, and have bowel issues, but you totally maybe sort of don’t have a headache anymore, “yay!”. That’s not the same as: maybe your heart stops, but you totally don’t have a headache anymore, so “yay?”. The first ones are annoyances, maybe slight harms, while the second is a full-on harm with no recovery.
The current system tries to eliminate the second scenario and force disclosure of the first. A free-for-all system that relies on recalls instead of proactive testing makes everyone a subject in the experiment to determine which set of issues might be caused. I have no idea whether that would be better at the macro statistical level; it sounds plausible that it could be. But at the individual level, it sounds terrifying.
Trying to keep this related to driverless cars and public road testing: things like the car stopping at a 4-way stop sign and never going because it always yields to other cars. That’s a side effect, annoying but not fatal. Driving the speed limit in the right lane on a highway instead of with the full traffic flow: side effect, annoying. Blowing through red lights, hitting obstacles in the roadway: fatal harms. Those should be worked out in closed-course testing prior to hitting the public roads. There should be a very high expectation that those aren’t going to happen on the public roads.
I haven’t heard, but is the NTSB going to investigate this incident?
My assumption is that an NTSB investigation would determine if the autonomous system was at fault and should have prevented the incident or if there were mitigating circumstances. If the system was at fault, presumably they would be required to remediate the issue. Just like we do with other forms of transport.
It’ll take much longer for a full investigation and report. It’s only armchair analysis that says this looks like it should have been in the sweet spot for a driverless car to avoid. Who knows, maybe they’ll find there was some type of environmental phenomenon that prevented the car sensors from seeing the person and they’ll clear it. (I doubt it, but you never know.)
It is probably pretty low on the totem pole for cops handing out tickets/making arrests (cue the “COPS ARE ALWAYS TRYING TO BUST PEOPLE” mantra).
I think @Bozobub makes an excellent point that applies to the majority of cyclists: easier control and acceleration when walking a cycle as well as having more directional paths open to you. Makes complete sense to me anyway.
I believe there are situations where, as hard as it is to settle for, better than some percentile (at least 50, maybe higher) of human drivers is…good enough. This is a class of situation, however, where I think autonomous cars should be held to a higher standard than human drivers.
The brightness/contrast/etc. of the video may not accurately match a human’s vision, and a human driver may or may not have seen well enough to avoid this situation, but that’s irrelevant: technology exists, is reasonably affordable, and may have been present on this very vehicle, that can provide tracking based on higher-contrast or non-visible-light data. Even if all it saw was a vague slow-moving blob, it should have been able to predict that it needed to slow down or stop to avoid the projected trajectory, or that it could not confidently predict the trajectory and should slow down or stop just in case.
Basically, tracking unobstructed objects in any lighting conditions should be expected. From there, classifying them, projecting possible trajectories (including the fastest possible thing they might be, and stationary), and avoiding collision with any of those trajectories should also be expected. If the vehicle must reduce speed to a walking pace whenever there is something unexpected in or near the road, then until object classification improves, so be it; it will still get where it’s going, just slower. The alternative to slowing down for random vague blobs appears to be killing people.
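To make that concrete, here’s a minimal sketch of that “assume the worst, then brake” logic in Python. Everything in it (names, speeds, margins) is a hypothetical illustration, not anything from Uber’s actual stack: treat an unclassified detection as anything from stationary up to the fastest plausible road user, and slow down if the planned path ever enters the region the object could have reached by then.

```python
# Minimal sketch of conservative collision checking for an unclassified
# object ("vague blob"). All constants are made-up illustrative values.
from dataclasses import dataclass
import math

MAX_UNKNOWN_SPEED_MPS = 12.0  # fastest plausible thing the blob might be
SAFETY_MARGIN_M = 2.0         # clearance to insist on around the object

@dataclass
class PathPoint:
    t: float  # seconds from now when the vehicle reaches this point
    x: float  # meters, in a ground-fixed frame
    y: float

def must_slow_down(blob_x: float, blob_y: float, path: list) -> bool:
    """True if any planned path point falls inside the region the object
    could reach by that time, moving in any direction at up to
    MAX_UNKNOWN_SPEED_MPS (a stationary object sits at the disc's
    center, so that case is covered too)."""
    for p in path:
        reach_radius = MAX_UNKNOWN_SPEED_MPS * p.t + SAFETY_MARGIN_M
        if math.hypot(p.x - blob_x, p.y - blob_y) <= reach_radius:
            return True
    return False

# Example: a blob 30 m ahead, 3 m left of a straight path driven at 15 m/s.
path = [PathPoint(t=0.2 * i, x=15.0 * 0.2 * i, y=0.0) for i in range(1, 26)]
print(must_slow_down(30.0, 3.0, path))  # True -> cap speed / brake
```

The deliberately dumb part is the point: with no classification at all, the reachable-region check errs toward slowing down, which is exactly the “so be it, just slower” trade-off above.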
On the safety/backup human driver: I think the very concept is basically a fraud, unless their role is only to drive when the car has stopped over-cautiously and will not proceed, or to correct the autonomous car’s over-cautious behavior (which might hinder the car’s learning, and probably requires extensive specialized training in defensive driving and in autonomous car safety/backup driving). For almost any other type of hazard, once it is too late for the autonomous system to make a decision, it is far too late for a human. In order to even begin to have a working human takeover for in-motion hazards, the car must reliably indicate what it is planning to do in a way that a human can quickly comprehend and evaluate. Even then, there are giant leaps left before a complete working takeover/handoff system. Having the car tell the driver to take over at speed is right out. The very feasibility of a safety/backup driver for an autonomous car is a project unto itself that should be carefully evaluated before an autonomous car reliant on such a system is put on the road.
Such laws are very often at the municipality level, NOT state. And no, they are not uniform. Furthermore, enforcement of any existing laws is quite erratic, even within the same municipality. It’s often even more ridiculous than jaywalking enforcement.
Example: In Washington, DC, it’s illegal to ride on any sidewalk (or crosswalk), over the age of 12. Try it on Capitol Hill, and you are VERY likely to at least be yelled at, if not ticketed. The rest of the city? Pretty much zero enforcement. I was a bicycle courier in DC for years ^^’.
Snake oil = millions of miles driven by autonomous vehicles and this is the first real accident of note.
“that means it’s not going to fucking happen in a safe way until we get computers that are as smart as we are”
Someone doesn’t know the difference between artificial general intelligence and the AI being developed for these vehicles. In almost every way, the AI used in these vehicles is already “smarter” than you. It sees in ways you can’t, can make decisions in a fraction of the time you can, and isn’t distracted by wandering thoughts or boredom. General intelligence may be far off, but general intelligence is the last fucking thing I want controlling my car. I couldn’t care less what my car’s thoughts are on politics, philosophy, or its ability to play Go.
Human life has a price on it, always. Some people have a hard time with that idea. Whether it’s safer cars or salmonella chicken, someone has crunched the numbers on what it would cost to make it safer. I saw something, perhaps it was here, about the astronomical price a car would cost if you wanted to bring the passenger fatality rate to zero. But it was not astronomical to install seat belts, even though the bean counters in Detroit fought it the whole way. We make choices, individually and as a society, about what we are willing to pay for a human life.
It’s all in the numbers, not the emotions. It’s entirely possible this exact car in the accident would save far more lives than it takes if it were marketed tomorrow, but people seem to view this accident totally differently than if my 85-year-old mother had run down this woman, something I fear getting a call about. Mom thinks she’s still a great driver. So what if the rate were 10:1 human to machine fatalities? OK with that? Or is 5:1 OK? Does it need to be 100:1? Someone has to make this decision.
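For a sense of scale on those ratios, here’s the back-of-envelope math in Python. The inputs are rough, commonly cited ballpark figures (on the order of one US traffic fatality per 100 million vehicle-miles, and roughly three trillion vehicle-miles driven per year), not precise statistics; the shape of the result is the point, not the digits.

```python
# Hypothetical back-of-envelope: lives saved per year at various
# human:machine fatality ratios. Inputs are rough ballpark figures.
HUMAN_FATALITIES_PER_100M_MILES = 1.0   # order-of-magnitude US figure
US_VEHICLE_MILES_PER_YEAR = 3e12        # roughly 3 trillion miles/year

human_deaths = HUMAN_FATALITIES_PER_100M_MILES * US_VEHICLE_MILES_PER_YEAR / 1e8

for ratio in (5, 10, 100):  # human:machine fatality ratio
    machine_deaths = human_deaths / ratio
    print(f"{ratio}:1 -> ~{human_deaths - machine_deaths:,.0f} lives saved/year "
          f"(machines would still cause ~{machine_deaths:,.0f})")
```

Even at 5:1 the net savings are huge, which is the uncomfortable part: the question isn’t whether lives are saved on net, it’s whose deaths we decide are acceptable.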
Entirely on point, and I’ll contribute another factor of a million to make it a round 10^12.
You’re also 100% on point w.r.t. Uber, and their need to keep VC money flowing. (And that may be the largest problem here.)
I’m bullish on self-driving vehicles; they clearly will lead to lower risk in the long term. But this was an egregious system failure. And if Uber knew or willfully ignored the signs that this system was not ready for street trials, then people should go to jail.
Because it is not as comfortable and is a lot less flexible.
I am in favour of mass transit, but personal vehicles will always be an important part of the transport mix.
The statistics are great as a “big picture” or overall metric. And clearly we expect autonomous cars to be better than humans on the whole. Probably much, much better, too; humans are crappy drivers based on the overall statistics.
But there are also individual scenario metrics that need to be evaluated. Depending on the circumstances, we expect autonomous cars to have close to zero incidents. The armchair analysis on this looks like it is one of those scenarios that should be the sweet spot for an autonomous car. We expect it to have much better senses, reaction time, and concentration than your 85-year-old mother. If she had done this, even if not charged with a crime, she would lose her license, since it would be clear that she no longer possessed the senses, reaction time, and concentration to avoid the accident. In that same light, it feels like this autonomous platform should lose its license and be sent back to closed-course testing. We’ll need the full incident report to know for sure.
PS: Good luck with your mother. I’ve been part of taking away three different grandparents’ cars, and it was hard every time as you drive off with their car or take the keys. On my wife’s side, one of her uncles just disabled the car in the garage and told them it was broken and they would be back to repair it (which somehow never happened).
Seems like tech failures leading to tragedy and the ensuing outrage go way back. If history continues to repeat, expect more accidents like this, investigations, plea bargains, recalls, mandates, bans, advancements; the tech gets better, and one day it’s not as big of a concern. It’s going to be a long road, I think.
Perhaps new tech like this should be visibly labeled “Experimental,” as a gesture to help users know this tech is still unproven and not widely accepted… ride at your own risk.
Btw… I don’t agree with the “what the driver actually sees” pic, as there are no headlights in it. Headlights alone will cause the shadows to look darker, just like you see in the video; then add in windshield glare, etc. Just my opinion as a photographer, no real point to it.
As far as I know, autonomous vehicles have an excellent safety record on the roads.
I guess that this episode will turn out to show some sort of bug in the system, which is unavoidable in software as complex as this, and which is quite worrying going forward.
I mean, Skynet was probably just fine until it wasn’t.
Another thing is that, as we already knew, humans are reasonably good at doing stuff, but inherently bad at continuously monitoring something else doing stuff when nothing is happening.
We are simply not made for that.
It shouldn’t shield them from gross negligence. Also, any protections should go away over time.
There are big potential positive externalities from the rollout of self-driving cars. Not shielding them from liability may slow that rollout, meaning more people die than otherwise would.