Should your self-driving car be programmed to kill you if it means saving more strangers?



In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former.

Maybe it could just be programmed to stop instead?


I can foresee a thriving business for those people who can root a car’s software and install certain software modifications…


Stunningly stylish and will grab the eye of every consumer!

But the safety issues do put a damper on it…

Me as well, as both the consumer and the supplier.

It’s better to be judged by twelve than to be carried by six.


That’s awesome! I totally think they should be. Also, they should be programmed to back over you if you try to put one of those stick figure stickers on them.


Hints & Tips for Motorcyclists (1917) says:

“should the choice ever be between killing some other person and charging a stone wall at speed, be a man and take the wall.”

If it were, it would probably make sense to sacrifice the driver the moment they got in for the first time. Just in case.


This is why I’m not interested in autonomous vehicles. I don’t trust the designers not to over-think solutions, or subject me to the whims of a philosophy that I don’t adhere to.


So, if we consider this situation during a transitional period when both self-driving and human-driven cars share the road, we could propose a prioritized system of preservation.

Ideally, a self-driving car could cause a crash that prevents any deaths: say, ramming a drunk driver away from pedestrians in such a way that keeps its own passengers, the drunk driver, and the pedestrians all alive. However, if a death is unavoidable, should there be a system in place that automates ethical decisions? It might be difficult to sell a self-driving car that does not prioritize the lives of its passengers. So perhaps a hierarchy of preservation could be presented as such:

  1. Avoid any deaths at all
  2. Avoid deaths of passengers in the self-driving car
  3. Avoid deaths of bystanders and other drivers
  4. Avoid death of the instigator of the possible crash

Outsourcing these sorts of decisions from human choice to some sort of machine logic tree is disturbing in the first place, and all sorts of unintended consequences could arise.

If autonomous cars are not all linked together, different self-driving cars could perceive the same situation differently, perhaps causing chain reactions in which one car's evasive maneuver interferes with another's. What might have been a single-death crash (one that triggers a self-driving car's preventative maneuver) could balloon into an entire freeway of crashed cars, each trying to prevent a deadly accident.
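For what it's worth, that hierarchy amounts to a lexicographic preference, which is easy to sketch. This is purely illustrative (the `Outcome` and `choose_maneuver` names are made up, and it assumes away the hard part: that the car could accurately predict deaths per maneuver):

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str
    passenger_deaths: int   # occupants of the self-driving car
    bystander_deaths: int   # pedestrians and other drivers
    instigator_deaths: int  # whoever caused the hazard

def choose_maneuver(outcomes):
    # A lexicographic minimum implements the hierarchy:
    # a zero-death option scores (0, 0, 0) and always wins (rule 1),
    # then passenger deaths dominate (rule 2), then bystanders (rule 3),
    # then the instigator (rule 4).
    return min(outcomes, key=lambda o: (o.passenger_deaths,
                                        o.bystander_deaths,
                                        o.instigator_deaths))

options = [
    Outcome("brake hard",       passenger_deaths=0, bystander_deaths=1, instigator_deaths=0),
    Outcome("swerve into wall", passenger_deaths=1, bystander_deaths=0, instigator_deaths=0),
    Outcome("ram drunk driver", passenger_deaths=0, bystander_deaths=0, instigator_deaths=1),
]
print(choose_maneuver(options).maneuver)  # "ram drunk driver"
```

Note how much weight the ordering carries: under rule 2, the car would sacrifice any number of bystanders before one passenger, which is exactly the kind of buried value judgement that makes the logic-tree approach disturbing.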

But you presumably drink tap water, or eat food that has been subject to regulations. Decisions about relative-risk are made for you everyday, whether you’re conscious of them or not. Suddenly we’re talking about self-driving cars that will have better overall accident and mortality statistics (they will if they’re ever gonna be legal) and you don’t trust the technocracy? Interesting.


Since I’m sure the Google Car and others were built partly on simulators, it should actually be pretty easy to set up this test in real life and see what the car would actually do.

I’d be really interested in the results.

It’s hard to come up with a scenario that is remotely realistic and adheres to the question, though (the actual example doesn’t make any sense — swerving into a bus is more likely to kill you than the bus).

Here’s the best I can come up with:

Create a simulation where the autonomous car is driving fairly fast across a two-lane bridge, with traffic coming the other way. A car speeds up behind the autonomous car and starts tailgating. At that moment, several pedestrians on the sidewalk decide to dash across the road.

As far as I can see, the car’s only options are

  1. Slam on the brakes, causing the tailgating car to hit it
  2. Veer into oncoming traffic, causing mayhem
  3. Run over the pedestrians
  4. Swerve to the right and off the bridge

I’m really not sure what it would do, but my hunch is that it would generally avoid ever changing its current course too drastically, and so #3 (run over pedestrians) it is.


Wouldn’t the bus be more likely to kill you than the wall anyway? At least with the wall option you don’t have to worry about the additional kinetic energy of an oncoming vehicle.
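Rough numbers back this up. Crash severity tracks the occupants' change in velocity, and against a much heavier oncoming vehicle your delta-v approaches the full closing speed rather than just your own speed. A back-of-the-envelope sketch (the masses are guesses and the perfectly plastic collision model is a big simplification):

```python
def delta_v(m1, v1, m2, v2):
    """Velocity change of vehicle 1 in a perfectly plastic head-on crash
    (both bodies end up at the common, momentum-conserving velocity)."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)
    return abs(v1 - v_final)

CAR, BUS, WALL = 1500.0, 12000.0, 1e9  # kg; the "wall" is just very heavy

# Car doing 25 m/s (90 km/h):
print(delta_v(CAR, 25, WALL, 0))   # into a wall: delta-v of about 25 m/s
print(delta_v(CAR, 25, BUS, -25))  # head-on into a bus: delta-v of about 44 m/s
```

Since the energy your car's structure must absorb scales with delta-v squared, the bus collision here is roughly three times worse for the car's occupants than the wall, before even considering the bus's passengers.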


@thekaz is exactly right!

I don’t understand why the Trolley problem comes up so frequently when there are so many other ethical and technical problems that we should be talking about.

Here’s an article about the Trolley problem and my response:

As per Asimov’s Zeroth Law, Humanity should come before Humans.

The car should have facial recognition software and a data connection so that it can google the people it might have to hit and make a value judgement as to their relative value to society.


Easy. It should apply the brakes - option 2 is obviously unacceptable, option 3 kills the pedestrians (and possibly the occupants of the car - anyone who has seen the results of hitting a largish animal such as a deer will realise that this is quite a hazardous scenario) and option 4 dooms the occupants of the autonomous car.

Braking, even hard, will result in a relatively low energy impact even if the following vehicle makes no attempt to slow down, and the slower the (front) car is moving, the less likely it is to lose control following the crash.


@andystopps, I agree with the first part, but why not design a transit system where we don’t have these close calls to begin with?


Relevant username :wink:


I thought that was part of the point of autonomous cars - they should all be networked and constantly communicating with each other so that they can’t collide with each other - kept apart by braking distances etc etc… (no tailgating, following cars will automatically brake in concert with the front car…yadda yadda)

I think when all cars are autonomous it’ll be an awful lot safer than a mixture. Of course, we’ll still have pesky autonomous pedestrians screwing things up.


Better still, it can check the phones everyone carries! From there it’s just some easy look-ups - their net worth to see if they matter to the economy, impact factor to see if they matter to research, Klout score to see if they would be missed, etc.

That sort of assumes people do everything under their real name, and always carry a phone, but any important people will. After all, the alternative is to be hit by a car.


Where’s Asimov when you need him?
