Should your self-driving car be programmed to kill you if it means saving more strangers?

The example in the article is awful. The full context is:

How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation – a blown tire, perhaps – where it must choose between swerving into oncoming traffic or steering directly into a retaining wall?

Two problems: the first is that the safest thing to do when a tire blows is actually just to keep driving. The second is that if you drive into a retaining wall, you’re very likely to survive without killing anyone else unless you’re at high speed.

This points to the real problem of pretending trolley problems have any bearing on real-world problems – true no-win scenarios are exceedingly rare, and when they arise, they usually arise because someone else has already fucked up. Travel at safe speeds and check your tires and you’ll never be in this situation, ever. Once you’ve reached a crisis point, the number of lives saved by whichever no-win decision you make doesn’t really matter and you might as well flip a coin.

11 Likes

I’ve had tires blow at highway speed three times. I survived each time by doing what you’re supposed to do, none of which includes panicking, swerving, or crashing. That is a very stupid example that reads like it was written by someone who’s never even passed driver’s ed.

7 Likes

That order of priorities would have the autonomous vehicle correct its skid through the bus-stop full of cute primary school kids in order to avoid wiping itself out on the oncoming truck …

2 Likes

Actually, there was a case in one of the local labs. Before the cops had their own forensic labs, they were bringing samples there to be checked. One day they brought in a bottle of pickled shrooms, to be checked for poison. The guy in charge left the bottle on the table. The next day the other guy came in, singing praises to the shroomies. So the mushrooms were proved to be non-poisonous by an in-vivo bioassay.

As for testing cars, what about putting the car’s brain into a virtual-reality jar? Disconnect the sensors, replace them with computer-simulated inputs (framebuffers for cameras, 3D-model-derived response curves for laser scanners and depth cameras, simulated data from vehicular networks…). Make a simulation of the road situation, feed the brain the simulated sensor responses, and see what the thing is doing. I believe a game engine could be repurposed for this.

Such a sim rig will be a must-have for autonomous-car hacking, and for making sure the vehicles are doing our bidding and not that of some bunch of philosophers or - worse - lawyers whose skin is not in the game.
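
A toy sketch of the shape such a rig could take, in Python. Everything here is invented for illustration (the physics, the numbers, the “brain” logic); the point is only that the control software under test never knows its sensor data is simulated:

```python
"""Toy software-in-the-loop rig: the 'brain' is fed simulated lidar
readings and never knows they come from a model instead of hardware."""

from dataclasses import dataclass

@dataclass
class World:
    """1-D road scenario: ego car approaching a stopped obstacle."""
    ego_pos: float = 0.0
    ego_speed: float = 25.0        # m/s, roughly 90 km/h
    obstacle_pos: float = 120.0    # metres ahead

    def lidar_range(self) -> float:
        # Simulated sensor response, standing in for a real laser scanner.
        return self.obstacle_pos - self.ego_pos

    def step(self, brake_decel: float, dt: float = 0.05) -> None:
        # Advance the simulated world by one time step.
        self.ego_speed = max(0.0, self.ego_speed - brake_decel * dt)
        self.ego_pos += self.ego_speed * dt

def brain(range_m: float, speed: float) -> float:
    """Stand-in for the car's control software (hypothetical logic):
    brake hard once stopping distance gets close to the measured range."""
    stopping_dist = speed ** 2 / (2 * 8.0)   # assumes ~8 m/s^2 max braking
    return 8.0 if stopping_dist * 1.5 > range_m else 0.0

world = World()
while world.ego_speed > 0:
    world.step(brain(world.lidar_range(), world.ego_speed))

print(f"Stopped {world.lidar_range():.1f} m short of the obstacle")
```

Swap the toy `World` for frames rendered by a game engine and you have the full virtual-reality jar; the brain under test needs no modification either way.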

1 Like

For many years there has been a market for modifications to engine control software that boost an engine’s performance.

Why not upload people into their cars?

Possibly repurposed, vat-grown simian brains? Meat puppet cars?

Maybe that’s why the easiest answer to the problem is collective ownership. In that case, the owners of the vehicle fleet owe a duty of care to society, and a utilitarian approach to accident mitigation would be appropriate.

The rule could then be a simple “whatever you need to do to have as few injuries or deaths as possible”.

Everyone who rides would have read the EULA, right? :wink:

Because that tech is not available yet, unlike self-driving cars and their environment simulators.

Biotech brains are difficult to make and control. You cannot easily back up the state and make a new one with the same state from scratch. Photolithography will be king for some more years. Self-assembly of 3D structures will likely follow. As for actually grown neural networks on a biological basis, I wouldn’t bet on them much; the disadvantages are too many.

With better vision, better information, better reaction times, better road sensing and car handling, I’m going to trust the safety of these cars over myself. And I’m especially going to trust autonomous cars over many of the half-wits I see on the road daily. What if the half-wit is the one with the emergency and decides to swerve into my lane instead of taking himself off the bridge?

3 Likes

The optimism is stunning in its assumption that there will be enough control in an emergency to make a choice. I think it’s enough to try to avoid the accident or minimize damage to the vehicle as much as possible. You know, like we do when we drive.

2 Likes

The car will be able to make a final decision much faster than a human will. However, in these split-second scenarios it is more likely that the car still doesn’t have enough time to react and will simply hit whatever is in front of it, no matter what the decision was. The real focus should be on outcomes: taking out just one pedestrian instead of two, or hitting another car in the engine or trunk instead of the passenger compartment.

The idea of who dies is more of a moral dilemma, and it’s only confused by tech we don’t understand yet… it’s not the tech causing these decisions.
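
To make that concrete, here’s a minimal sketch in Python. The severity numbers and option names are entirely made up (nothing here reflects any real vehicle’s software); the point is that the logic is closer to a cost table over the maneuvers still physically reachable than to moral philosophy:

```python
# Hypothetical severity scores per outcome; the exact numbers are
# invented. The rule is only "pick the least bad reachable option".
OUTCOME_COST = {
    "hit_two_pedestrians": 200.0,
    "hit_one_pedestrian": 100.0,
    "hit_car_passenger_compartment": 80.0,
    "hit_car_engine_or_trunk": 30.0,
}

def choose_maneuver(reachable):
    """Among maneuvers still physically reachable in the time left,
    return the one with the lowest expected severity."""
    return min(reachable, key=OUTCOME_COST.__getitem__)

# Milliseconds out, most options are already gone:
print(choose_maneuver(["hit_two_pedestrians", "hit_one_pedestrian"]))
# -> hit_one_pedestrian
```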

1 Like

This whole question really bugs me. Or more specifically, any time spent discussing it feels like a less-than-valuable distraction. I should work on a succinct informative summary.

Something something Sophie’s Choice.

1 Like

The whole rule-bound idea for self-driving cars kinda reminds me of the way corporations are required to maximize shareholder profit no matter who gets hurt. Now, if corporations are people, why shouldn’t self-driving cars be people too? Then the dream of generations of men can be realized, and they can marry their cars!

Exactly!

Every time I see an article on self-driving cars these days, they always put forward some form of the same straw man. “Given the choice, can a robot be taught that a cyclist is squishier than a Suburban, that a school bus is worth more than an ice cream truck, or that it should sacrifice a single passenger for a busload of orphans?” The trouble is, even humans don’t do that. Ever. You stay in your lane, and if things happen you either slam on the brakes, or don’t. That’s the best the human brain has come up with in all this time.

6 Likes

Not from impact at high speeds, but how about cancer? How about death from infectious diseases? Or is speed the primary criterion upon which you make risk-based assessments?

Long-term hazards testing is expensive, and often frustratingly inconclusive. You can test cars on tracks or in simulators where other cars are piloted by real humans. Said simulators aren’t half bad either.

This is not precisely true, now that computers are being programmed to “learn” (i.e. reprogram themselves to satisfy conditions). But even if it is… so you don’t fly in planes? TCAS is a collision avoidance system installed on (as far as I know) all passenger aircraft, and pilots are now trained to follow TCAS instructions over those of actual humans, ever since two planes collided in mid-air precisely because a pilot was listening to an air traffic controller rather than his computer.

Computers are also considered far more trustworthy than humans in nuclear reactors and a whole host of other applications where human lives are at stake. That computers follow instructions is not a meaningful critique of their use, especially since they are most likely already in your car, controlling a number of systems, including when your car changes gears and, if you have ABS, how well your car stops. Unless you’re prepared to go back to the bad old days of learning how to pump your brakes so you don’t go into an uncontrolled skid, I’d say the idea of computers controlling vital processes is here to stay.

2 Likes

So, like your average human driver then?

Laminated mouse brain is the classic choice.

1 Like

To my mind, one of the faults of utilitarianism is the implicit assumption that the outcome of your choices is highly knowable. If you know the outcomes, or at least the probabilities of the outcomes, of your choices then utilitarianism at least gives you a guideline for making a decision. However, in the real world, we quite often do not know the outcomes of our decisions. Even the probabilities are often murky and come with large error bars and uncontrolled-for variables.

Will driverless cars ever have enough information about the situation to make an informed call about this kind of question? How does the car know how many people are on the bus? (Let alone the non-trivial problem of deciding that the thing is a bus and not a semi.)

What if both the car and the bus swerve? The space of possible actions is much larger than the two presented. We might be able to save my life if we maim a few of the bus riders. Is that OK?
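
To put rough numbers on that point: with error bars this wide, the utilitarian ranking is mostly noise. A quick illustration in Python (all figures invented):

```python
# Two maneuvers, each with an estimated death toll and a large
# uncertainty. The means differ, but the intervals swamp the difference.
maneuvers = {
    "swerve_into_wall":  (1.0, 0.8),  # (mean expected deaths, std. dev.)
    "swerve_toward_bus": (0.5, 2.0),
}

for name, (mean, stddev) in maneuvers.items():
    lo, hi = max(0.0, mean - 2 * stddev), mean + 2 * stddev
    print(f"{name}: {mean:.1f} expected deaths "
          f"(roughly {lo:.1f} to {hi:.1f} at two sigma)")

# swerve_into_wall:  1.0 expected deaths (roughly 0.0 to 2.6 at two sigma)
# swerve_toward_bus: 0.5 expected deaths (roughly 0.0 to 4.5 at two sigma)
```

The intervals overlap almost completely; a coin flip is about as defensible as computing utilities from data this murky.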

1 Like

A self driving car designed by engineers ought to drive within a safe envelope. When something unexpected happens, all the self driving cars ought to act cooperatively to minimise the risk. There should be significant risk of injury only if something has gone badly wrong with the car or its program, and the car should be developed to a point where it is clearly better and safer than the average human driver. It then follows that any state where the car is trying to decide who will die is a state that the car should not have got into in the first place, so its data or its programming cannot then be relied upon.

A self driving car designed by accountants will, as an accident develops, calculate the risk and auction the costs of avoiding action. As the collision approaches, the various car insurance companies check the net worth of the various inhabitants and their vehicles, and that their insurance has available funds. A second system will also estimate the possible outcomes of any following legal actions, and open bids for personal injury claims lawyers. The cars will then act together to find a solution where the loss to the companies is minimised. The richer drivers may be allowed to bid for the right to offset risk as the situation develops.

Which one of these two schools would you trust? I want my self drive car designed by engineers, thank you. Anyone showing trolley lawyer tendencies should be promptly escorted out of the car plant. If they come back, threaten to program the cars to hunt them down without mercy. You might not actually do it, but they think that way, and they would believe it.

Sorted.

4 Likes
Barghi continued. "For example, murder is always wrong, and we should never do it."

Murder is defined as a type of killing which is wrong, so the philosopher is essentially saying "doing a wrong thing is categorically wrong". That hardly fills me with confidence in his abilities.

2 Likes