First pedestrian killed by autonomous vehicle

I like to drive - and not be dependent on companies like Uber. It’s not like pedestrians and cyclists use these roads as well. And - how many vehicles will they have in rural places?

Public transport is a public asset - if private companies want public assets they need to pay fair market value - I don’t give a damn about their marketing how they’ll improve my life if I don’t want their services.

  1. Human reaction speed is finite and much slower than what an autonomous vehicle is capable of – and that’s in the BEST conditions, where the human is paying close attention and notices the thing they’re supposed to be reacting to.
  2. But actually, humans have pretty short attention spans – their minds wander even when they’re doing something potentially lethal like steering a half-ton lump of metal around at 70 mph.
  3. It’s even worse for a “safety driver”, because at least when you’re driving you’re directly engaged with the thing you’re doing, and you get immediate feedback when your attention wanders – e.g. the car drifts if you stop steering, or slows down if you don’t accelerate when you get to an upward slope. BUT when you’re just sitting in the car observing what the car itself is doing, it’s much harder to pay consistent attention to what’s going on second-by-second.

The upshot is that it is incredibly unlikely that a human “safety driver” could take over and prevent a problem that the autonomous system was about to cause.


Actually, come to think of it, they’re being really foolish having their self-driving software actually drive the car at all. Considering the slowness of human reaction times, if you’re testing your prototype self-driving vehicle, all you need to do is feed the software the actual road conditions and have it make driving decisions, which you record – but you don’t let it control the vehicle at all. The human driver drives the car; the robot driver just creates a log of all the things it would have done if it had been in charge.

Do that with thousands of cars for thousands of hours and analyse the results: every time the software made a bad decision, that’s a problem you need to fix. Rinse and repeat until your software no longer makes any mistakes. Then, and only then, is your software ready to actually be allowed to drive an actual car in the actual world.
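The “shadow mode” idea above can be sketched in a few lines. This is a minimal illustration, not any real vendor’s pipeline – the `Frame`, `shadow_log`, and `human_action` names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One recorded snapshot of road conditions (hypothetical fields)."""
    timestamp: float
    human_action: str  # what the human driver actually did at this instant

def shadow_log(frames, planner):
    """Run the self-driving planner over recorded road conditions without
    giving it control of the vehicle. Every disagreement between the
    planner's proposed action and the human's actual action is logged
    for offline review."""
    disagreements = []
    for frame in frames:
        proposed = planner(frame)  # planner only observes, never actuates
        if proposed != frame.human_action:
            disagreements.append((frame.timestamp, frame.human_action, proposed))
    return disagreements

# Example: a trivial planner that always brakes, replayed against a
# short recorded drive. The one mismatch gets flagged for analysis.
frames = [Frame(0.0, "brake"), Frame(1.0, "steer_left"), Frame(2.0, "brake")]
print(shadow_log(frames, lambda f: "brake"))  # → [(1.0, 'steer_left', 'brake')]
```

In practice the comparison would be far fuzzier than string equality (steering angles, braking curves), but the structure – record, replay, diff, review – is the same.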


I don’t think we will see the evaporation of the user-owned and user-controlled vehicle any time soon. It will probably have assist features, e.g. collision warning. Ultimately, if we get the tech so well perfected that there are no stoplights and cars literally weave through the intersections, we may have to tie the hands of the user – but that is far off, I think. If it ever does get like that, I imagine there will be car parks where you get to tool around an area for fun, much like people ride horses for fun.


Do not want. Again - what about paying for what they take?

hey - maybe we should take away your guns too as they’re not safe?

It’s good that autonomous vehicles are causing fatalities at a slow, steady rate. That’s the only way the public will accept a system of autonomous vehicles that kills a few people. And without that public acceptance, we’d be stuck forever with the human drivers, whose death toll is outrageous.

Human drivers kill about a hundred people a day, just in the U.S. If a fully robotic fleet killed fifty people a day, it would be a vast improvement – indeed, to avoid the immediate adoption of such a less-deadly system would be tantamount to mass murder. But the public doesn’t like to look reality so squarely in the face.
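The “about a hundred a day” figure checks out against the roughly 37,000 annual U.S. road deaths reported for the late 2010s (the exact annual total is an assumption here):

```python
# Back-of-envelope check on the daily U.S. road death rate,
# assuming ~37,000 deaths per year (approximate late-2010s figure).
annual_deaths = 37_000
per_day = annual_deaths / 365
print(round(per_day))  # → 101
```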


Well said – also, computers have much quicker reactions than humans. If a person is being relied upon to be the failsafe, that’s kind of the worst-case scenario, because it supposes that the driver is anticipating the accident well in advance, when these types of situations happen fairly quickly. A human can also misjudge a situation and react in a way that would make it less safe. As you said, the “safety driver” is there more for legal reasons than for logical and practical ones. Machine systems are still not effective enough to stop accidents from happening, so for now people are necessary – but if a person isn’t 100% driving the car the entire time, the odds increase that the safety driver isn’t fully paying attention, and their being there to respond to an event becomes more of a liability.



sure, but wait 'til somebody figures out the autonomous cars are killing more <political_party_1> than <political_party_2> at a rate of 5:1

Yes, the concept ignores everything we know about how humans work: reaction times, and the ability to stay focused for prolonged periods, without acting, without getting distracted. Even in aircraft this is a problem, and drivers have nowhere near that level of training and testing. A few assists (like lane departure correction) work. Full autonomy works (once it exists). Anything in the middle seems really far-fetched.


there is a list of rules already!! :slight_smile:

Isaac Asimov’s “Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Next up, the first victim of a self-shooting gun and we’ll have reached Peak America.


IMO - if a computer can’t prevent an accident, a person is unlikely to prevent it either, other than by pure luck. I mean, I have read about horrible cases where cars were stuck in acceleration mode and the people crashed and died. All the while I am wondering why they didn’t put it in neutral, or turn the car off and then flip the key back to on without starting it – you should still have steering, albeit without power. (Even with a locked wheel you may still crash, but you will stop accelerating into it.) Anyway, the point is that under stress most people make horrible decisions, and a significant percentage just freeze.

Another famous example is a plane crash that left almost everyone alive when it “landed”, but many died in the fire afterwards because they just froze instead of getting off the plane.


Is it too early to cue up Red Barchetta?


There may be situations where a person can resolve a problem with enough dexterity to avoid a worse outcome; however, relying on people being able to pull this off is wishful thinking in the worst of ways, because it legitimately puts people’s lives at risk through complacency and faith in an outcome that is unrealistic.


You have a right to freedom of movement throughout the country. You do not, automatically, have the right to drive a car. You have the privilege of doing so because you meet the requirements the state set forth for minimum safe driving ability, and have not had it revoked by doing any number of illegal things. Over time, safety standards tend to go up everywhere, so if or when we end up in a world where autonomous vehicles are similarly priced and demonstrably better on every safety/convenience/speed/whatever metric, why should the state continue to allow you to put other citizens in danger? If you want to drive for fun, I’m sure there will be courses and racetracks available.

I’m not saying I agree with everything I just wrote. I just don’t see a good argument against it.


Like “arming school teachers” level of wishful thinking?

Edit to add extra comment

@AnthonyC, “I just don’t see a good argument against it.”

I’m sure there will be lots of bad arguments against it, like “God-given right!”, “Freedom!” and “#MAGA”.

Exactly. We can’t even get the TSA to do their job right, but let’s give out more guns – but I digress.


Reaction times aside, humans are still better at figuring out how to respond to certain unplanned situations than self-driving cars. Say, if a cop or a road worker is gesturing to go a certain way to avoid a traffic hazard.


With that method, though, you can’t get proper feedback to determine whether the car’s chosen reaction would be “correct” or not. For example, the car may choose one action, but the driver chooses another. Perhaps the driver’s action results in no incident, but the driver behind them had to slam their brakes or swerve, whereas the car’s action would have prevented the incident without depending on the driver behind. The AI’s action would arguably be the better of the two, but it would be considered “wrong”.

Also, there’s no way to really tell such a machine “if you see someone who looks drunk on the roadside, try to give them a wide berth”. Not yet anyway. It can only learn in the context of the sensory input it receives.
