Should your self-driving car be programmed to kill you if it means saving more strangers?

So, like your average human driver then?

Laminated mouse brain is the classic choice.

1 Like

To my mind, one of the faults of utilitarianism is the implicit assumption that the outcome of your choices is highly knowable. If you know the outcomes, or at least the probabilities of the outcomes, of your choices then utilitarianism at least gives you a guideline for making a decision. However, in the real world, we quite often do not know the outcomes of our decisions. Even the probabilities are often murky and come with large error bars and uncontrolled-for variables.
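To make that concrete, here's a toy expected-harm comparison; every probability and "harm" number below is invented purely for illustration.

```python
# Toy utilitarian calculation: each action maps to (probability, harm) outcomes.
# All of these numbers are made up for the sake of the example.
actions = {
    "swerve": [(0.7, 1), (0.3, 5)],   # probably harms 1 person, maybe 5
    "brake":  [(0.9, 2), (0.1, 10)],  # probably harms 2, small chance of 10
}

def expected_harm(outcomes):
    """Probability-weighted harm for one action."""
    return sum(p * harm for p, harm in outcomes)

for name, outcomes in actions.items():
    print(name, expected_harm(outcomes))   # swerve 2.2, brake 2.8
```

On paper "swerve" wins, but nudge those probabilities within realistic error bars and the ranking can flip - the guideline is only as good as the inputs.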

Will driverless cars ever have enough information about the situation to make an informed call about this kind of question? How does the car know how many people are on the bus? (Let alone the non-trivial problem of deciding that the thing is a bus and not a semi.)

What if both the car and the bus swerve? The space of possible actions is much larger than the two presented. We might be able to save my life if we maim a few of the bus riders. Is that OK?

1 Like

A self-driving car designed by engineers ought to drive within a safe envelope. When something unexpected happens, all the self-driving cars ought to act cooperatively to minimise the risk. There should be significant risk of injury only if something has gone badly wrong with the car or its program, and the car should be developed to a point where it is clearly better and safer than the average human driver. It then follows that any state where the car is trying to decide who will die is a state that the car should not have got into in the first place, so its data or its programming cannot then be relied upon.

A self-driving car designed by accountants will, as an accident develops, calculate the risk and auction the costs of avoiding action. As the collision approaches, the various car insurance companies check the net worth of the various inhabitants and their vehicles, and that their insurance has available funds. A second system will also estimate the possible outcomes of any following legal actions, and open bids for personal injury claims lawyers. The cars will then act together to find a solution where the loss to the companies is minimised. The richer drivers may be allowed to bid for the right to offset risk as the situation develops.
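Spelled out as deliberately tongue-in-cheek Python (every field name and the whole decision rule here are invented to make the joke concrete, not a description of any real system):

```python
# Parody sketch of the "designed by accountants" controller described above.
def cheapest_manoeuvre(options):
    """Pick whichever avoiding action minimises the insurers' expected payout,
    ignoring everything the engineers' car would actually optimise for."""
    def expected_cost(option):
        claims = sum(victim["claim_probability"] * victim["likely_payout"]
                     for victim in option["likely_victims"])
        legal = option["estimated_legal_fees"]
        offsets = option.get("risk_offset_bids", 0)   # the richer drivers bidding
        return claims + legal - offsets
    return min(options, key=expected_cost)
```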

Which one of these two schools would you trust? I want my self-drive car designed by engineers, thank you. Anyone showing trolley lawyer tendencies should be promptly escorted out of the car plant. If they come back, threaten to program the cars to hunt them down without mercy. You might not actually do it, but they think that way, and they would believe it.

Sorted.

4 Likes

Barghi continued. "For example, murder is always wrong, and we should never do it."

Murder is defined as a type of killing which is wrong, so the philosopher is essentially saying "doing a wrong thing is categorically wrong". That hardly fills me with confidence in his abilities.

2 Likes

Absolutely.
Ideally, if all road vehicles were autonomous, they'd behave more like trains, where each car maintains the same relationship with those in front and behind; but unlike with a train, an individual car could "decouple" itself from the stream at junctions and attach itself to a new stream heading in the direction it wants to go.
There wouldn't need to be the kind of separation distances required for human drivers - each car could be nose to tail with its neighbours.
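For what it's worth, here's a minimal sketch of that "virtual coupling" idea: each follower tracks the car ahead at a small fixed gap and can decouple to join another stream. The class, the gap, the gain and all the numbers are made up for illustration.

```python
# Toy "virtual train": followers hold a tiny fixed gap behind the car ahead,
# and a car can decouple at a junction to join a different stream.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Car:
    position: float                  # metres along the road
    speed: float                     # m/s
    leader: Optional["Car"] = None   # car immediately ahead, if coupled

    def plan_speed(self, target_gap: float = 1.0, gain: float = 0.5) -> float:
        """Speed that closes on the leader's tail, or current speed if uncoupled."""
        if self.leader is None:
            return self.speed
        gap_error = (self.leader.position - self.position) - target_gap
        return self.leader.speed + gain * gap_error

    def decouple(self):
        """Leave the current stream, e.g. to turn off at a junction."""
        self.leader = None

# Three-car stream: both followers converge on the 1 m target gap.
dt = 0.1
lead = Car(10.0, 20.0)
mid = Car(7.0, 20.0, leader=lead)
tail = Car(4.0, 20.0, leader=mid)
cars = [lead, mid, tail]
for _ in range(100):
    speeds = [c.plan_speed() for c in cars]   # plan from a common snapshot
    for c, v in zip(cars, speeds):
        c.speed = v
        c.position += v * dt
print(round(lead.position - mid.position, 2), round(mid.position - tail.position, 2))
```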

I'm inclined to think more is made of the morality/litigation issue than is really necessary. How often does one actually have to make the kind of judgement posed in the trolley example? I've been driving for (gulp) 37 years, and I've never come close to having to make such a decision, and I don't know anyone who has.

1 Like

I want a switch in my dashboard that I can flip between little cartoons of Bentham and Kant.

2 Likes

I think that every time one of these cars causes a death, someone at Google should be sacrificed.

Deluxe model = more cash = lawsuit-capable individual = save the driver

1 Like

Would that apply equally to the families of human drivers? If not, why would a human mistake be less punishment-worthy than a computer mistake?

Defined by you. There is plenty of room to make a more categorical definition of murder. Its semantic roots are just from an older way of saying, "to make dead."

I think the bus is more likely than the wall to crumple and lessen the effects of the impact. Also, given its relatively narrow width, the impact "could" be partial, further dissipating the energy involved.

I'm still unclear on why the self-driving car is simultaneously driving at a wall and not braking. The right choice would seem to be to do the opposite of that.

This is a good point in that the scenario changes if all vehicles are self-driving and communicating. If a boulder falls off a cliff into the road, two vehicles can negotiate to avoid the obstacle and each other. Additionally, they can very rapidly share their type, size, number of passengers, velocity, stopping distance, turning ability and so on, and agree on the best course for all local vehicles involved.

Much of the trouble with current driving is not knowing what the other vehicle will do, as anyone who has played the "No, you go first" game at a four-way stop will agree.
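As a rough sketch of the kind of shared state that negotiation could run on (all the field names, numbers and the toy decision rule below are invented; this isn't any real V2V protocol):

```python
# Each vehicle broadcasts its state; all of them run the same deterministic
# rule on the same data, so nobody has to guess what the others will do.
from dataclasses import dataclass

@dataclass
class VehicleState:
    vehicle_id: str
    kind: str                    # "car", "bus", ...
    passengers: int
    speed_mps: float
    stopping_distance_m: float
    distance_to_obstacle_m: float
    can_swerve: bool

def negotiate(*vehicles: VehicleState) -> dict:
    """Toy rule: whoever can stop in time brakes; whoever can't, swerves."""
    plan = {}
    for v in sorted(vehicles, key=lambda v: v.vehicle_id):   # shared tie-break
        if v.stopping_distance_m <= v.distance_to_obstacle_m:
            plan[v.vehicle_id] = "brake in lane"
        elif v.can_swerve:
            plan[v.vehicle_id] = "swerve around obstacle"
        else:
            plan[v.vehicle_id] = "emergency brake"
    return plan

car = VehicleState("car-1", "car", 1, 27.0, 35.0, 50.0, can_swerve=True)
bus = VehicleState("bus-7", "bus", 40, 22.0, 60.0, 50.0, can_swerve=True)
print(negotiate(car, bus))
# {'bus-7': 'swerve around obstacle', 'car-1': 'brake in lane'}
```

Because every vehicle evaluates the same rule on the same broadcast data, the "No, you go first" stand-off goes away.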

1 Like

No worries. Just go to a manufacturer who adheres to your philosophy. The market will provide.

Hmm - good point. I suppose there would have to be thresholds constantly rearranging the preservation hierarchy. Ideally there wouldn't be bus stops full of cute primary school kids in every possible direction other than the truck - but you should plan for those contingencies.

The law defined it, because that's how right and wrong are codified. And who the heck cares about the 3,000-year-old roots of the word besides you? The only meaning that matters is what the word means now.

1 Like

Not necessarily. A colleague saw a truck driver voluntarily drive over a cliff in front of him to avoid wiping out the oncoming cars. The lorry's brakes had failed, and the truckie had to choose between trying to take the corner on the wrong side of the road and going straight over the drop. He chose air time and was alive enough to explain his reasoning to the ambulance crew when they arrived.

Really? All laws, everywhere? Even dictionaries differ on meaning. I don't know why you're clinging to this; prescriptivism is hard to defend.

This whole discussion appears to be predicated on the farcical idea that self-driving cars will have Strong AIs.

1 Like

Keep non-motorised traffic separated from vehicle traffic. The people killed by autonomous cars in the future will be a small fraction of the number of people killed by human-piloted cars today.

2 Likes