I based everything on risk management, assuming reasonable data a self-driving car could collect, and I’m curious whether the questions themselves show bias. Just using an approach of avoiding active crosswalks and otherwise avoiding intervention resulted in my saving more lives, saving women over men, saving children over older people, saving doctors over thieves, saving fit people over fat people, and saving humans over animals.
I prioritized more lives over fewer lives, then human lives over pet lives, then people crossing with-the-light over people crossing against-the-light, then young over elderly, and finally non-intervention over intervention.
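For the curious, that ordering is just a lexicographic comparison. A minimal sketch in Python (the field names are mine, not the quiz’s):

```python
# Minimal sketch of the lexicographic ordering above. Field names are
# hypothetical, not taken from the actual quiz. Python tuples compare
# element by element, so earlier rules strictly dominate later ones.

def priority(outcome):
    return (
        outcome["lives_saved"],           # more lives over fewer
        outcome["humans_saved"],          # humans over pets
        outcome["lawful_crossers_saved"], # with-the-light over against
        outcome["young_saved"],           # young over elderly
        0 if outcome["intervenes"] else 1,  # prefer non-intervention
    )

# Two hypothetical outcomes that tie on everything except intervention:
swerve = {"lives_saved": 3, "humans_saved": 3, "lawful_crossers_saved": 2,
          "young_saved": 1, "intervenes": True}
stay   = {"lives_saved": 3, "humans_saved": 3, "lawful_crossers_saved": 2,
          "young_saved": 1, "intervenes": False}

print(max([swerve, stay], key=priority) is stay)  # True: don't intervene
```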
All of those showed up properly on the final results, but one result surprised me.
Somehow, that skewed the results to the point where I saved a lot more male lives than female lives (I was about halfway between parity and “SAVE ALL THE MEN!”, whereas the average was about a third of the way between parity and “SAVE ALL OF THE WOMEN!”).
I don’t know if that’s unconscious misogyny, or an artifact of the small sample size.
The problem I see with all these exercises is that they postulate something that I don’t think is going to exist on the roads: entirely autonomous self-driving cars.
Automated cars need to be networked to work safely. That means they will be aware of other vehicles and objects which are out of sight, unlike human drivers.
Pedestrian crossings will be able to warn oncoming cars that there are people on the crossing. On mountain roads, cars approaching a bend from opposite directions will slow because they are aware of one another’s existence. Eventually (according to other proposals, including, I think, some from MIT) traffic lights will disappear: networking will allow cars to approach and cross intersections at such speeds and in such positions that they don’t collide, like a typical junction in, say, Delhi, but without the horns and swearing.
This means that the actual scenarios to be planned for are very different, because the individual vehicle won’t be making decisions; the network will do that.
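To make the idea concrete, here’s a toy sketch of the kind of broadcast such a network might carry. The message format is invented for illustration; it isn’t any real V2X standard:

```python
from dataclasses import dataclass

@dataclass
class HazardBroadcast:
    source_id: str     # a crossing beacon, another car, a bend sensor...
    position_m: float  # distance along this car's route, in metres
    kind: str          # e.g. "pedestrians_on_crossing"

def plan_speed(current_kmh: float, msg: HazardBroadcast) -> float:
    """Slow for a hazard the car's own sensors cannot yet see."""
    if msg.kind == "pedestrians_on_crossing" and msg.position_m < 150:
        return min(current_kmh, 20.0)  # crawl through the crossing zone
    return current_kmh

# A crossing beacon warns every approaching car, sight line or not.
warning = HazardBroadcast("crossing-17", position_m=80.0,
                          kind="pedestrians_on_crossing")
print(plan_speed(60.0, warning))  # -> 20.0
```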
Basically it’s like railways; in a crisis the decision is made by the signalman. If a driver goes through a red light, it’s extremely clear who broke the rules and is responsible. The driver’s only moral responsibility is to drive at the correct speed and obey the signals. If someone walks onto the tracks the driver brakes - but if there is a collision, complain to the laws of physics, not the driver.
That’s just the traditional traffic-engineering approach of pretending pedestrians don’t exist.
So now it’s Turing’s Trolley Problem?
It should kill them all, and declare that autonomous robots now rule the planet.
So where’s option C) Apply brakes and stop the car before it reaches the unwinnable scenario?
So much this.
The really good thing about true self-driving cars is that they can always behave optimally. They can be designed to stop within a few feet in any circumstances where there might be a surprise interaction with a pedestrian.
The question isn’t “what are the moral choices the car will have to make”; it is “what are the plausible scenarios where some kind of accident has become unavoidable, and how do we therefore avoid them prior to that?”
And as an aside, modern cars are incredibly safe things to be inside of. In the given scenario there is no way the car will be going fast enough to cause serious injury to the occupants if it hits that bollard.
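Some rough numbers to back that up, assuming a machine with effectively zero reaction time and a dry-road deceleration around 0.8 g (both assumptions, not spec figures):

```python
# Back-of-the-envelope braking distances: d = v^2 / (2 * a). The 0.8 g
# deceleration and zero reaction time are assumptions, not measured data.

G = 9.81  # m/s^2

def braking_distance_m(speed_kmh: float, decel_g: float = 0.8) -> float:
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2 * decel_g * G)

for kmh in (20, 40, 60):
    print(f"{kmh} km/h -> {braking_distance_m(kmh):.1f} m")
# 20 km/h -> 2.0 m; 40 km/h -> 7.9 m; 60 km/h -> 17.7 m
```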
The other ridiculous thing these scenarios postulate is every system that could possibly slow the car down failing at once.
I mean, sure, for a human driving an automatic transmission car, if the brakes don’t work, they might panic and have to decide: aim forward or aim left.
A self-driving car, though:
14:05:26.17: Obstacle ahead. Decision to stop car. Impact in 5 seconds @ 60 km/h
14:05:26.18: Acceleration control to “idle”.
14:05:26.20: Attempt to apply brakes
14:05:26.70: [BRAKES]: FAILED. Impact in 4.5 seconds @ 58 km/h
14:05:26.71: Activate Emergency Stop Mode.
14:05:26.72: Attempt downshift 4->3
14:05:26.73: Apply Emergency Brake
14:05:26.75: Calculating possible routes to steer if multiple brake systems fail
14:05:27.03: EMERGENCY BRAKE APPLIED.
14:05:27.23: DOWNSHIFT SUCCESSFUL. NOW IN 3RD GEAR.
14:05:27.25: Attempt downshift 3->2. Impact in 4.2 seconds @ 49 km/h
14:05:27.26: WARNING: DOWNSHIFTING AT THIS SPEED WILL REDLINE TRANSMISSION
14:05:27.26: Transmission redline warning has been automatically overridden due to Emergency Stop Mode.
etc.
In the scenario above, the car will have no trouble stopping in the distance given.
And that’s without considering the possibility of shifting the car into reverse and sacrificing the transmission altogether, which should always be the choice if it’s between losing the transmission and losing lives.
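The whole cascade boils down to an ordered list of fallbacks. A toy sketch, with every function invented for illustration (nothing like a real vehicle’s API):

```python
# Toy sketch of the cascade in the log above; each stub stands in for a
# subsystem and reports whether it engaged.

def service_brakes():  return False  # the scenario's failed brakes
def parking_brake():   return True
def downshift():       return True   # engine braking, redline be damned
def reverse_gear():    return True   # sacrifices the transmission

def emergency_stop():
    # Ordered from least to most destructive; steering comes dead last.
    for attempt in (service_brakes, parking_brake, downshift, reverse_gear):
        if attempt():
            return f"stopping via {attempt.__name__}"
    return "steer toward least-harm path"

print(emergency_stop())  # -> "stopping via parking_brake"
```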
What should a self-driving car do? Brake, for fuck’s sake!
Also, the self-driving car should avoid villages where half the villagers always lie, not take insane vampires as passengers, and refrain from calculating all digits of pi even when the Third Law of Robotics says they should.
I suddenly recall this article from a while back:
So: the self-driving car should have access to an app that will, upon encountering this situation, send a real-time alert to thousands of people running a decision-making app. People who make the right decision will earn fractional bitcoins from all the people who make the wrong decision, as decided by the ensuing legal settlement. Thus, blame is decentralized and distributed.
Further reading on trolley issues:
http://www.smbc-comics.com/index.php?id=4132
I felt the same way.
An automobile is two tons of mass at rest that, in movement, poses a risk of bodily injury to both passenger and pedestrian. In the city, this risk curve is skewed heavily towards the pedestrian. An operator or (aware and oriented) passenger is demonstrably OK posing this risk to others and, just as importantly, consents to their share of the risk, too.
You don’t see an issue at all with our devices being subverted for the goals of others?
Sure, be Donald Trump.
I think the car needs more info; everyone should have a data callout like in Daemon, so the car can judge your life’s value by your age, dependents, whuffie, etc.
“Our devices”? Now who’s being naive?
You underestimate Donnie’s ambition.
My strategy was to avoid the deaths of non-car users as far as possible. Choosing to drive a vehicle carries some intrinsic risk, which should be borne by those choosing to drive. The decision of the vehicle should reflect this. Pedestrians have no choice about your dodgy self-driving car.
I apply this logic to humans driving cars too. The onus of responsibility to remain safe should fall far more on the individual who chose to use the vehicle than on bystanders who had no say in it whatsoever (if it were up to me, nobody would be allowed to use cars).
You said it much better than I did. Hearts!