If you think self-driving cars have a Trolley Problem, you're asking the wrong questions

[Read the post]

2 Likes

“Why not perform amateur brain surgery on yourself first?”

Apparently it has its risks.

2 Likes

The Trolley Problem itself is invalid. A mesh network of self-driving cars won’t experience the problem, because the situation will be anticipated early. A car or vehicle not under autonomous operation isn’t “known” to the autonomous system, so the system won’t be aware of the number of people in it. AVs will need to try to protect their owners from manually controlled vehicles that are being driven dangerously.

I can see, though, a situation in which the core code of AVs is stored in ROM - not reprogrammable memory like Flash - as part of an SoC. At that point the resources of an Intel or a state actor would be needed to bypass it.

2 Likes

I’m afraid I strongly disagree with the thesis of this piece. While I certainly support open code for self-driving cars and agree that DRM won’t keep people from tinkering with their cars, the vast majority of people won’t, whether it’s legal or not. As a result, I’m a lot more concerned with what the factory-default software is programmed to do. The trolley problem and its variations were silly hypotheticals until the advent of self-driving cars, but at this very moment there are people writing software that at some point in the future will choose who lives and who dies. It may not come up very often, and self-driving cars will certainly be safer than human-driven ones, but it’s still fucking scary.

If your car is capable of detecting a situation where either the occupant of the vehicle or a pedestrian will die, why not just instruct the car to “flip a coin” and then continue?
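
Something like this toy sketch is all I mean by “flip a coin” - the outcome labels and the at-risk flags are hypothetical, the point is just that the rule is random rather than encoding a preference for either party:

```python
import random

def choose_outcome(occupant_at_risk, pedestrian_at_risk):
    """Hypothetical 'flip a coin' rule: if both parties are at risk and no
    better trajectory exists, pick at random rather than encoding a
    preference for either party."""
    if occupant_at_risk and pedestrian_at_risk:
        return random.choice(["protect_occupant", "protect_pedestrian"])
    # Otherwise protect whoever is actually at risk.
    return "protect_occupant" if occupant_at_risk else "protect_pedestrian"
```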

3 Likes

The trolley problem is, and always was, a deliberately contrived binary situation. Plato would have been proud.
In the real world (and in most of the virtual world too) situations are far from binary. The autonomous software will do everything it can to minimize any and all collisions while bringing the car to a stop as quickly as possible. Maybe everyone will live; maybe the occupants and the pedestrians will all die. That’s the way statistics work.
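
To put that in very hand-wavy code, with made-up maneuver names and risk numbers, the planner just picks whichever option minimizes expected harm rather than choosing between two fixed outcomes:

```python
def pick_trajectory(candidates):
    """Among candidate maneuvers, pick the one with the lowest expected harm;
    ties go to the shorter stopping distance. The 'expected_harm' and
    'stopping_distance' values would come from the planner's own models."""
    return min(candidates, key=lambda c: (c["expected_harm"], c["stopping_distance"]))

options = [
    {"name": "brake_straight",     "expected_harm": 0.4, "stopping_distance": 18.0},
    {"name": "brake_swerve_left",  "expected_harm": 0.1, "stopping_distance": 22.0},
    {"name": "brake_swerve_right", "expected_harm": 0.7, "stopping_distance": 20.0},
]
print(pick_trajectory(options)["name"])  # -> brake_swerve_left
```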

17 Likes

The reality is that accidents happen and people die. The other reality is that when self-driving cars are perfected, they still won’t prevent all accidents, nor save all lives.

But they WILL dramatically reduce accidents and deaths. The theoretical “do I plow into one person or many” scenario has no right answer in the real world. In the real world people panic. They may freeze - do nothing - and keep driving straight. They may swerve one way or another. They may make it worse with their decisions as well, e.g. swerving into oncoming traffic to miss a deer vs. just plowing through it.

Eventually they will take away a person’s freedom to drive their own car, at least on the highway. Probably in my lifetime.

8 Likes

“Tires spitting gravel I commit my weekly crime.”

10 Likes

If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.

Like with any safety feature on your current car, they’d be safe ‘enough’ with stock parts or firmware, and anyone modifying them would be responsible for keeping them legal. That’s how anyone would expect this to work, even with the Jeeps with remote exploits: Jeep does a recall and life goes on. You could be replacing your own brake fluid right now and not bleed it properly, then go for a drive. Or you could let your brake lines rust through while you’re driving. Society thinks you are an acceptable risk of being a fuckup, and automated cars might have a different threshold, considering they can be held to different standards of acceptable quality, but they are otherwise no different.

Secondarily, you seem to be arguing that they could never be safe ‘enough’ because they might have bugs. If we look at Google cars, they’re limited to 25mph, and they can probably handle any number of hardware or software failures with some failsafe combination of not powering the motors, braking, and steering, making them safe ‘enough’ right now. Given the failure data they’re gathering, plus further modeling, and acknowledging that their self-imposed 25mph limit is probably way under their capabilities right now, we could probably say later models could be safe ‘enough’ to work at the speed of traffic. Heck, they’d probably be safer than human drivers even when they fail, considering that few of the few people affected by Toyota’s ‘runaway acceleration’ could figure out what the N means on the shifter.
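
By a “failsafe combination” I mean something as dumb as this sketch - the fault names and numbers are invented, and Google’s actual fallback logic isn’t something I know:

```python
def failsafe_action(faults):
    """Map detected faults to a conservative fallback, in the spirit of
    'stop powering the motors, brake, and steer to a stop'."""
    if "steering" in faults:
        # Can't trust steering: cut drive power and brake in a straight line.
        return {"motor_power": 0.0, "brake": 1.0, "steering": "hold"}
    if faults:
        # Anything else wrong: cut power and make a controlled stop off the road.
        return {"motor_power": 0.0, "brake": 0.5, "steering": "pull_over"}
    return {"motor_power": None, "brake": None, "steering": None}  # normal operation
```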

Tertiarily, you bristle at the Trolley Problem because it assumes the stock firmware is locked down, which has little bearing on the decision being made. A bigger obvious issue is assuming something like an AI-level recognition of its surroundings, which would be its own breakthrough and might make the car autonomous enough to make its own moral and ethical choices and not be a slave to you, just as if you hailed a cab you would expect the cabbie to be making all the hard choices if it came to that. Even then, I’d take issue with how the self-driving car got into that situation in the first place. If a person engaged it at that point I could see it being forced to do damage control, but before then, driving faster than it can safely decelerate from before hitting an obstacle would be driving faster than conditions allow, which no design should be doing. I’m betting Google’s car fleet isn’t so defective at 25mph, and could probably go faster without ever encountering the Trolley Problem.
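
The “faster than it can safely decelerate from” bit is just stopping-distance arithmetic; with assumed numbers (60m of reliable obstacle detection, 6m/s² of braking, 0.2s of processing delay) it works out like this:

```python
import math

def max_safe_speed(detection_range_m, decel_mps2, reaction_time_s=0.2):
    """Largest speed v such that reaction distance plus braking distance,
    v*t + v**2 / (2*a), still fits inside the sensor's detection range.
    Solves v**2/(2*a) + v*t - d = 0 for v."""
    a, t, d = decel_mps2, reaction_time_s, detection_range_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

v = max_safe_speed(60, 6)
print(f"{v:.1f} m/s, about {v * 3.6:.0f} km/h")  # ~25.7 m/s, about 92 km/h
```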

1 Like

We’re already on to the trolley problem? I’m amazed we’ve solved the omniscience and prescience problems required to arrive at that in the real world.

18 Likes

Not to mention environmental conditions. I mean, humans drive in snow and ice all over the place; it’s not like an individual wheel doesn’t lose traction occasionally. Now combine that with an emergency situation, and the AI will only react with the inputs it has. The Trolley Problem changes as each bit of new information comes in. The AI may try to save the driver, but as it changes course it realizes that the new path has less grip than the old one and everyone will die anyway. That’s basically what happens in the real world. Of course massive mesh networking would minimize something like a 100-car pile-up during a snowstorm, but a few casualties are going to happen.

As far as redundant systems go, we build fighter planes with 3 or more systems that control the same thing… Sure, they are astronomically expensive, but if I were sitting inside something moving 1,000mph at +30k feet I’d hope more than one wire was keeping me from spiraling to my death. Elon Musk is only sort of right: making a self-driving car isn’t that hard, but making it absolutely safe is pretty much damn near impossible.
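
The classic pattern there is a majority voter across the redundant channels; a toy version (the readings are made up) looks like this:

```python
from collections import Counter

def vote(channels):
    """Triple-modular-redundancy style voter: read the same command from
    three independent channels and act on the majority value, so any single
    faulty channel gets outvoted."""
    value, count = Counter(channels).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no majority - fall back to a safe state")

print(vote([0.31, 0.31, 0.97]))  # one channel disagrees; the majority wins -> 0.31
```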

As much as I hate all the other asshole drivers on the road - and yes, you’d say I’m the asshole… - self-driving cars are going to be soul-suckingly efficient machines.

1 Like

These discussions often omit establishing how/why it makes any objective difference whether a possible traffic fatality is me versus anybody else. This makes it another real-world problem where we start from wishful thinking.

I would be far more psyched if there was a patch for this bias in the DNA of humans! Think about how many injuries and deaths that would avoid.

The firmware discussions also get me wondering about DIY autonomous vehicles. This would be my likely entry to this sort of thing, but naturally involves making firmware, rather than modifying it. Are laws about not modifying the firmware enforceable if it was the driver who wrote it in the first place?

2 Likes

Right. Cars aren’t safe now and never will be safe. “Safe” is a mirage. So?

5 Likes

I’ve been wondering, if your self-driving car does kill someone, who is liable? You, the car, the manufacturer, the coders? I understand they will be safer, but accidents will happen.

4 Likes

The issues are not just around the restricted space of choices either; in the classical trolley problem there is only one person with any agency. The “fat man” never gets the chance to say “Screw those kids, I want to live!”. A more realistic situation has many independent actors, whose actions may not have obvious models.

7 Likes

Sadly, we humans (or is it just Americans? I don’t know) don’t seem to be very good at these kinds of issues. Like with drug legalization, people tend to focus on idealized safety rather than harm reduction. It’s sad that the best being the enemy of the better will put off something that could save many lives.

4 Likes

Because the “me-preservation”, aka the survival instinct, provides one with a certain kind of evolutionary advantage?

Maybe…? Perhaps…? Just a hypothesis…? :stuck_out_tongue:

7 Likes

Saying that survival is an advantage sounds like simply swapping one a priori conceit for another. Isn’t there a very real possibility that it only seems advantageous because a person is programmed to assume that it is? Does that make it objectively true? Having billions of people each believing that they are somehow better than the others, with no evidence, might be shaky ground for organizing a rational/scientific society.

Run a computer simulation. Use a multi-agent system where one of the variables in each agent’s “personality” is its “willingness” to survive (e.g. a multiplication coefficient on its probability of surviving certain kinds of simulated encounters). See how this trait gets selected for and becomes dominant after several iterations; a rough sketch is below.

You’re free to come up with another paradigm. But don’t be surprised if it gets selected against and does not become the dominant one after even the most arbitrary number of iterations.
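
For what it’s worth, here is roughly the toy simulation I have in mind - the population size, base survival odds, and mutation noise are all arbitrary, but the upward drift of the trait isn’t:

```python
import random

def simulate(generations=50, population=200):
    """Toy selection model: each agent has a 'willingness to survive' in [0, 1]
    that acts as a multiplier on its chance of surviving a random encounter.
    Survivors repopulate the next generation with small mutations, so the
    mean willingness drifts upward over the generations."""
    BASE_SURVIVAL = 0.5  # odds of surviving an encounter for a "neutral" agent
    agents = [random.random() for _ in range(population)]
    for _ in range(generations):
        survivors = [w for w in agents if random.random() < BASE_SURVIVAL * (0.5 + w)]
        if not survivors:
            survivors = [random.random()]
        agents = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
                  for _ in range(population)]
    return sum(agents) / len(agents)

print(f"mean willingness after selection: {simulate():.2f}")  # well above the ~0.5 starting mean
```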

2 Likes