The Trolley Problem itself is invalid. A mesh network of self-driving cars won't experience the problem, because the hazard will be anticipated early. A car or vehicle not under autonomous operation isn't "known" to the autonomous system, so the system won't be aware of the number of people in it. AVs will need to try to protect their owners from manually controlled vehicles that are being driven dangerously.
I can see, though, a situation in which the core code of AVs is stored in ROM - not in reprogrammable memory like Flash - as part of an SoC. At that point it would take the resources of an Intel or a state actor to bypass it.
I'm afraid I strongly disagree with the thesis of this piece. While I certainly support open code for self-driving cars and agree that DRM won't keep people from tinkering with their cars, the vast majority of people won't, whether it's legal or not. As a result, I'm a lot more concerned with what the factory-default software is programmed to do. The trolley problem and its variations were silly hypotheticals until the advent of self-driving cars, but at this very moment there are people writing software that at some point in the future will choose who lives and who dies. It may not come up very often, and self-driving cars will certainly be safer than human-driven ones, but it's still fucking scary.
If your car is capable of detecting a situation where either the occupant of the vehicle or a pedestrian will die, why not just instruct the car to "flip a coin" and then continue?
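Taken literally, that suggestion is only a couple of lines of code. This is purely an illustration of the coin-flip idea, not anything any real vendor does, and the option names are invented:

```python
import random

def resolve_unavoidable_harm(options):
    # Toy version of the "flip a coin" suggestion: when every remaining
    # option is judged fatal for someone, pick one uniformly at random
    # rather than encoding a preference for one life over another.
    return random.choice(options)

# Hypothetical scenario with two equally fatal outcomes.
print(resolve_unavoidable_harm(["protect_occupant", "protect_pedestrian"]))
```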
The trolley problem is, and always was, a deliberately contrived binary situation. Plato would have been proud.
In the real world (and in most of the virtual world, too) situations are far from binary. The autonomous software will do everything it can to minimize any and all collisions while bringing the car to a stop as quickly as possible. Maybe everyone will live; maybe the occupants and the pedestrians will all die. That's the way statistics work.
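One way to picture that non-binary framing: rather than choosing between two guaranteed deaths, a planner scores many candidate maneuvers by expected harm and takes the least bad one while braking. A toy sketch, with invented maneuver names and made-up numbers:

```python
def expected_harm(maneuver):
    # Made-up scoring: sum of (collision probability x severity)
    # over everything the sensors currently track.
    return sum(p * severity for p, severity in maneuver["risks"])

def pick_maneuver(candidates):
    # Brake hard in every case; steer along whichever path minimizes
    # expected harm given what is known right now.
    return min(candidates, key=expected_harm)

candidates = [
    {"name": "straight + full brake", "risks": [(0.6, 10)]},
    {"name": "swerve left + brake",   "risks": [(0.3, 10), (0.2, 3)]},
    {"name": "swerve right + brake",  "risks": [(0.1, 10), (0.4, 6)]},
]
print(pick_maneuver(candidates)["name"])
```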
The reality is that accidents happen and people die. The other reality is that when self-driving cars are perfected, they still won't prevent all accidents, nor save all lives.
But they WILL dramatically reduce accidents and deaths. The theoretical "do I plow into one person or many" scenario has no right answer in the real world. In the real world people panic. They may freeze - do nothing - and keep driving straight. They may swerve one way or another. They may make it worse with their decisions as well, e.g. swerving into oncoming traffic to miss a deer rather than just plowing through it.
Eventually they will take away a person's freedom to drive their own car, at least on the highway. Probably in my lifetime.
"Tires spitting gravel, I commit my weekly crime."
If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.
As with any safety feature on your current car, they'd be safe "enough" with stock parts or firmware, and anyone modifying them would be responsible for keeping them legal. That's how anyone would expect this to work, even with the Jeeps with remote exploits: Jeep does a recall and life goes on. You could be replacing your own brake fluid right now and not bleed it properly, then go for a drive. Or you could let your brake lines rust through while you're driving. Society thinks you are an acceptable risk of being a fuckup, and automated cars might have a different threshold, considering they can be held to different standards of acceptable quality, but they are otherwise no different.
Secondarily, you seem to be arguing that they could never be safe "enough" because they might have bugs. If we look at Google's cars, they're limited to 25mph, and they can probably handle any number of hardware or software failures with some failsafe combination of not powering the motors, braking, and steering (a rough sketch of that idea follows this comment), making them safe "enough" right now. Given the failure data they're gathering, plus further modeling, and acknowledging that their self-imposed 25mph limit is probably way under their capabilities right now, we could probably say later models could be safe "enough" to work at the speed of traffic. Heck, they'd probably be safer than human drivers even when they fail, considering that few of the few people affected by Toyota's "runaway acceleration" could figure out what the N means on the shifter.
Tertiarily, you bristle at the Trolley Problem because it assumes the stock firmware is locked down, which has little bearing on the decision being made. A bigger obvious issue is that it assumes something like an AI-level recognition of its surroundings, which would be its own breakthrough and might make the car autonomous enough to make its own moral and ethical choices and not be a slave to you, just as if you hailed a cab you would expect the cabbie to be making all the hard choices if it came to that. Even then, I'd take issue with how the self-driving car got into that situation in the first place. If a person engaged it at that point I could see it being forced to do damage control, but before then, driving faster than it can safely decelerate from before hitting obstacles would be driving faster than conditions allow, which no design should be doing. I'm betting Google's car fleet isn't so defective at 25mph, and could probably go faster without ever encountering the Trolley Problem.
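On the failsafe point above: one simple way to think of it is a degraded-mode table, where any detected fault cuts motor power and brings the car to a controlled stop. The fault names and responses here are invented for illustration, not taken from any real system:

```python
# Hypothetical fault -> response table for a "failsafe combination of not
# powering the motors, braking, and steering". Every entry cuts drive power;
# the entries differ only in how the car brakes and steers to a stop.
FAILSAFE_RESPONSES = {
    "steering_fault":    {"motor_power": False, "brake": "gentle", "steer": "hold"},
    "brake_fault":       {"motor_power": False, "brake": "regen",  "steer": "to_shoulder"},
    "sensor_blackout":   {"motor_power": False, "brake": "firm",   "steer": "hold"},
    "software_watchdog": {"motor_power": False, "brake": "firm",   "steer": "hold"},
}

def failsafe(fault):
    # Unknown faults fall back to the most conservative response.
    return FAILSAFE_RESPONSES.get(fault, {"motor_power": False, "brake": "firm", "steer": "hold"})

print(failsafe("sensor_blackout"))
```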
We're already on to the trolley problem? I'm amazed we've solved the omniscience and prescience problems required to arrive at that in the real world.
Not to mention environmental conditions. I mean, humans drive in snow and ice all over the place, and it's not like an individual wheel doesn't lose traction occasionally… now combine that with an emergency situation, and the AI will only react with the inputs it has. The Trolley Problem changes as each bit of new information comes in. The AI may try to save the driver, but as it changes course it realizes that the new path has less grip than the old one and everyone will die anyway. That's basically what happens in the real world. Of course, massive mesh networking would minimize something like a 100-car pileup during a snowstorm, but a few casualties are going to happen.
As far as redundant systems go, we build fighter planes with 3 or more systems that control the same thing… Sure, they are astronomically expensive, but if I was sitting inside something moving 1,000mph at +30k feet I'd hope more than one wire was keeping me from spiraling to my death. Elon Musk is only sort of right: making a self-driving car isn't that hard, but making it absolutely safe is pretty much damn near impossible.
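The "3 or more systems" point is essentially majority voting across redundant channels (often called triple modular redundancy). A minimal illustration, not real flight or automotive code:

```python
from collections import Counter

def vote(readings):
    # 2-out-of-3 majority vote over redundant channels; if no majority
    # exists, treat the disagreement itself as a fault.
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= 2 else "FAULT"

print(vote(["brake", "brake", "coast"]))  # one bad channel is outvoted -> "brake"
print(vote(["brake", "coast", "steer"]))  # total disagreement -> "FAULT"
```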
As much as I hate all the other asshole drivers on the road - and yes, you'd say I'm the asshole… - self-driving cars are going to be soul-sucking efficient machines.
These discussions often omit establishing how/why it makes any objective difference whether a possible traffic fatality is me versus anybody else. This makes it another real-world problem where we start from wishful thinking.
I would be far more psyched if there was a patch for this bias in the DNA of humans! Think about how many injuries and deaths that would avoid.
The firmware discussions also get me wondering about DIY autonomous vehicles. That would be my likely entry point into this sort of thing, but it naturally involves writing firmware rather than modifying it. Are laws about not modifying the firmware enforceable if it was the driver who wrote it in the first place?
Right. Cars aren't safe now and never will be safe. "Safe" is a mirage. So?
I've been wondering: if your self-driving car does kill someone, who is liable? You, the car, the manufacturer, the coders? I understand they will be safer, but accidents will happen.
The issues are not just around the restricted space of choices, either; in the classical trolley problem there is only one person with any agency. The "fat man" never gets the chance to say, "Screw those kids, I want to live!" A more realistic situation has many independent actors whose actions may not have obvious models.
Sadly, we humans (or is it just Americans? I don't know) don't seem to be very good at these kinds of issues. As with drug legalization, people tend to focus on idealized safety rather than harm reduction. It's sad that letting the best be the enemy of the better will put off something that could save many lives.
Because "me-preservation", a.k.a. the survival instinct, provides one with a certain kind of evolutionary advantage?
Maybe… Perhaps… Just a hypothesis…
Saying that survival is an advantage sounds like simply swapping one a priori conceit for another. Isn't there a very real possibility that it only seems advantageous because a person is programmed to assume that it is? Does that make it objectively true? Having billions of people, each believing with no evidence that they are somehow better than the others, might be shaky ground for organizing a rational/scientific society.
Run a computer simulation. Use a multi-agent system where one of the variables in each agent's "personality" is its "willingness" to survive (e.g. a multiplication coefficient on its probability of surviving certain kinds of simulated encounters). See how this trait is selected for and becomes dominant after several iterations (a toy version is sketched below).
You're free to come up with another paradigm. But don't be surprised if it is selected against and never becomes the dominant one, no matter how many iterations you run.
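For what it's worth, a toy version of that simulation might look something like the following. Every number and the fitness model are arbitrary assumptions; the point is only that a coefficient which multiplies survival probability gets selected toward dominance:

```python
import random

BASE_SURVIVAL = 0.5   # arbitrary baseline chance of surviving one encounter
POPULATION = 200
GENERATIONS = 50

def run():
    # Each agent's "personality" is just its willingness-to-survive
    # coefficient, initially spread uniformly between 0 and 1.
    population = [random.random() for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        # Simulated encounter: willingness scales the survival probability
        # (the 0.5 offset keeps even w = 0 agents from dying instantly).
        survivors = [w for w in population if random.random() < BASE_SURVIVAL * (0.5 + w)]
        if not survivors:
            break
        # Survivors repopulate to the original size, with a little mutation.
        population = [
            min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
            for _ in range(POPULATION)
        ]
    return sum(population) / len(population)

print("mean willingness to survive after selection:", round(run(), 2))
```

Run it a few times; the mean coefficient should drift toward the top of its range almost regardless of the starting distribution, which is all the comment above is claiming.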