Network neutrality for self-driving cars

Crashing humans are stupid. They act like shitty computer programs. They don’t make ethical decisions. They fire off what is basically a reflex, and it is either the right answer or, more often than not, the wrong answer. Whether a human does or doesn’t ram into the infant, they do so out of reflex.

Automated cars will render most ethical decisions moot by simply not driving like an asshole human. I think that makes the question purely academic.

The obvious answer is to simply reduce harm in a boring and practical way, because the computer isn’t smart enough to make ethical decisions (not that a panicked human in a crash can either). If the brakes go, scream at the other cars to get the fuck out of the way, reduce speed any way possible, and if it can’t avoid the hit, hit square on and get the car about to be hit to prepare for the impact. Avoid peds because they have squishy insides.
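To make it concrete, here’s the kind of dumb priority logic I mean, as a rough Python sketch. Every name, type, and number in it is made up for illustration; it’s nothing like anyone’s actual control code.

```python
# Rough sketch of the "boring harm reduction" fallback described above.
# All names, weights, and the Obstacle type are invented for illustration.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str               # "pedestrian", "car", "barrier", ...
    relative_speed: float   # closing speed in m/s
    crash_rated: bool       # built to take an impact (crumple zones, barriers)

def emergency_fallback(obstacles, can_brake):
    """Pick the least-bad action once a collision can no longer be avoided."""
    actions = ["broadcast_warning"]           # scream at the other cars first
    if can_brake:
        actions.append("shed_speed")          # reduce speed any way possible

    # Never pick a pedestrian if anything else is available: squishy insides.
    candidates = [o for o in obstacles if o.kind != "pedestrian"] or obstacles

    # Prefer crash-rated objects with the lowest closing speed, i.e. hit stuff
    # that was meant to be hit, and hit it as gently and squarely as possible.
    target = min(candidates,
                 key=lambda o: (not o.crash_rated, o.relative_speed))
    actions.append(f"square_impact:{target.kind}")
    actions.append("signal_target_to_brace")  # give the other car a heads-up
    return actions

if __name__ == "__main__":
    scene = [Obstacle("pedestrian", 5.0, False),
             Obstacle("car", 3.0, True),
             Obstacle("barrier", 15.0, True)]
    print(emergency_fallback(scene, can_brake=False))
    # -> ['broadcast_warning', 'square_impact:car', 'signal_target_to_brace']
```

The point being that there’s no trolley problem in there, just a fixed preference order: warn, shed speed, skip the squishy things, and pick the most crashworthy, slowest-closing target.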

It seems pretty dully straightforward to me. In the instances where the car is in control and gets to pick what it is going to hit, it is going to avoid the crash if it can, and otherwise hit stuff going the same relative speed and built to take an impact.

The only autonomous vehicles that are going to make anything like an ethical decision that isn’t simple harm reduction might be semi trucks. Those might in fact be programmed to ditch spectacularly instead of crashing into a car if possible because they would presumably be entirely unmanned and a lot more likely to kill someone in a crash.

Regardless, the entire question is academic. Ethical decisions rarely make an appearance in a crash. If you have enough control that the car can choose a target to hit, it probably can avoid crashing altogether. On top of that, the ethical decision 9 times out of 10 will be to simply reduce harm by lessening the impact and hitting stuff meant to be hit. Whatever the car does, it will be better and more thoughtful than a panicked human reacting on their stupid ape instincts.


But when the cars are networked, their Prime Directive might well be: “Minimize the amount of harm to humans overall.” And such a directive can lead a particular car to sacrifice its humans in order to keep the total carnage down. Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.

As much as I love Asimov, I’m getting tired of the 3 laws of robotics. (Or of imagining that they could apply to the real world, anyway.) They aren’t very well defined. What constitutes harm, and how should various kinds of harm be weighted against each other? They assume that the robot will be able to predict what these harms will be. Most of the above scenarios implicitly assume that your car is the only active participant. The pedestrians might very well jump into the path of your swerve to miss them.

Let’s say Google supports a strict version of Networked Road Neutrality. But suppose Comcast starts to make cars and programs them to get ahead of the cars that choose to play by the rules. Would Google cars take action to block the Comcast cars from switching lanes to gain a speed advantage, perhaps forming a cordon around them?

You know how, in heavy traffic, switching lanes to get into the one which is moving doesn’t actually help? I’m pretty sure that any of these behaviors would just slow your car down.

The article failed to mention that Asimov himself came up with a Zeroth law that would probably cover this situation anyway.

Besides which, it’s sci-fi. A lot of the main robotics researchers in real life don’t give it much consideration because, as you say, for now it’s impractical to have any kind of way to quantify these things, and many don’t even agree with Asimov’s laws; see, e.g., Tilden’s Laws of Robotics.

Other counterfactuals include only the rich using aircraft or owning automobiles and computers. Those darn people with enough disposable income to invest in new industries are just screwing the rest of us.

It’s actually going to be worse, if we take the current Netflix/Comcast debacle as an example.
First, automated, networked cars need not keep speed limits the way humans do. If you know where every other car is, and you know where it’s going, it’s relatively simple to program every car for maximum speed under these “controlled” conditions.

Of course, if you realize that sometimes you need some cars to get there faster than everybody else, then there are several solutions, one of them being to not let all cars go as fast as safely possible. This way you can re-optimize traffic flow in an emergency with different methods, like creating temporary virtual emergency lanes by clearing traffic from a single lane as a vehicle approaches and filling it back up when it’s gone, or having a vehicle safely dart between other vehicles already programmed to stay out of its way.
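Something like this toy sketch of a virtual emergency lane, assuming a coordinator that can tell networked cars to vacate a stretch of lane just ahead of the priority vehicle and let traffic refill it behind. All the names and distances are invented for illustration.

```python
# Toy sketch of a temporary "virtual emergency lane": clear the lane only
# in a window just ahead of the priority vehicle, and let traffic refill
# it behind. All names and numbers are invented for illustration.

CLEAR_AHEAD_M = 300    # how far ahead of the priority vehicle to clear
REFILL_BEHIND_M = 50   # how far behind it the lane may refill

def lane_directives(cars, priority_pos, priority_lane):
    """cars: list of (car_id, lane, position_m). Returns {car_id: directive}."""
    directives = {}
    window_start = priority_pos - REFILL_BEHIND_M
    window_end = priority_pos + CLEAR_AHEAD_M
    for car_id, lane, pos in cars:
        if lane == priority_lane and window_start <= pos <= window_end:
            directives[car_id] = "vacate_lane"   # dart out of the way
        else:
            directives[car_id] = "hold_course"   # lane is free again here
    return directives

if __name__ == "__main__":
    traffic = [("a", 1, 120.0), ("b", 1, 900.0), ("c", 2, 150.0)]
    print(lane_directives(traffic, priority_pos=100.0, priority_lane=1))
    # -> {'a': 'vacate_lane', 'b': 'hold_course', 'c': 'hold_course'}
```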

TL;DR:

High-speed lanes, if any, will likely be created by slowing everybody else down, rather than by optimizing a single fast lane, which will then only be as fast as the fastest car on it.

But really, self-driving cars seem very gimmicky to me. I’m thinking cars will probably, with time, be offered as a service. I mean, if I’m not driving, I might as well take the bus, right? (It’s a very utilitarian point of view, I know, but in practice it’s almost the same thing.)

The interesting thing here is that, with, admittedly, stupid humans behind the wheel, who make the stupid assumption that nothing can or will go wrong on the road, there is no time to make ethical decisions; if you have time to make a decision, then you may be able to prevent a crash. (Of course, the A-hole behind you is going to ram into you because he wasn’t keeping a safe distance, but anyway, what are you gonna do.)
We don’t have to worry about making ethical choices because we have, for the most part, already made the ethical choice of choosing rules that do not keep us perfectly safe. It’s a trade-off, and we know it, although we, for the most part, don’t dwell on it either.
And that’s my point, really: just because we don’t have ethical discussions on the subject doesn’t mean there aren’t any ethical considerations in play. I would argue that it is because we don’t have any ethical discussions about driving that we get into so many accidents as a species.

But now, you have to understand, there are still going to be stupid humans behind self-driving cars, the same stupid humans who made Windows 8 and keyboard prediction algorithms for mobile phones.

You offered this hedge:

So that takes care of most, but not all, and certainly not new ethical dilemmas created by not having anyone at the wheel.

The sticky part is identifying a responsible moral agent. Humans may be poorer drivers than machines but you know who to blame for accidents. Jurisprudence must evolve lest it block innovations in safety and efficiency.

If a driverless car does crash, is Google/Comcast/whoever produced the car liable? It’s going to happen a lot less than with people driving, but if the car is properly maintained and somehow there’s still a crash (say some mud splashes onto the camera or something), you can’t really blame the ‘driver’ if there’s no steering wheel, or if they’re blind or otherwise unable to control a normal car. What kind of insurance would you have to buy if you aren’t actually operating the car? (I’m guessing driving your own would get really expensive once these became popular, and you wouldn’t have much benefit of the doubt in a crash between a human and a robot driver, which would have video evidence, of course.)

Like I said, car ethics in a world of autonomous cars is literally academic. They are going to simply seek to minimize harm in the most boring, practical way possible. Only a liberal arts major getting a bullshit philosophy concentration in machine ethics is going to worry about it. Google engineers certainly won’t be worrying. Why? Because the cars simply can’t make anything other than the most boring ethical decisions. They are going to avoid accidents at all costs by braking and dodging. When that fails, they will hit stuff meant to be hit, namely other cars, and they will try to hit them squarely at the lowest possible speed and probably give the car about to be hit a heads-up that it had better do what it can to minimize the impact.

They are not going to take this tack because that is what a random liberal arts major wants; they will take that route because that is all the car is capable of doing. You couldn’t give it more complex ethics if you tried, and it simply won’t matter one way or the other, because it will be a shit ton more ethical than letting bipedal apes strain their stupid monkey brains to maintain control of the vehicle, not get bored, and react in a split second to a sudden disaster. I’m saying that there are no meaningful ethical decisions to be made in how those cars are programmed, only dully practical engineering considerations. By the time we have sensors and AI that let us process enough information to make an ethical decision, this discussion will be rendered moot again, because cars simply won’t ever crash anymore.

There will be ethical dilemmas surrounding autonomous cars eventually. When they have proven themselves out and are cheap, we will get to argue about when it is time to ban humans from driving manually on public roads. By the time we get there though, I think that question will be about as interesting as the question of whether or not we allow horses to run down the interstate.

The insurance question actually isn’t all that interesting or tricky. Who is at fault when an autonomous car crashes? No one, but if your car is the one at fault, your insurance pays. This is how insurance normally works. Car insurance is a weird type of insurance because it tries to find fault. Autonomous cars will just make it act like normal old boring insurance that kicks in if your house burns down: the insurance pays out if your property or person gets wrecked. You would probably keep liability coverage so as to ensure that prices are set correctly. Safer, cheaper cars would cost less to insure. More expensive and unsafe cars would cost more. Insurance would get amazingly boring, as the companies could basically fire all but a handful of the lawyers and turn it into a more traditional game of actuarial tables. The same goes for speeding tickets and the like. Insurance would just cover it, and because you are obviously a passenger it wouldn’t count against you.
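Just to show how boring that gets, here’s a toy pricing sketch using the standard expected-loss-plus-loading idea (claim frequency times average claim cost, plus overhead). Every number and vehicle class in it is made up for illustration.

```python
# Minimal sketch of "boring" no-fault actuarial pricing: premium is just
# expected loss (claim frequency x average claim cost) plus a loading.
# The numbers and vehicle classes below are made up for illustration.

RATE_TABLE = {
    # vehicle class: (expected claims per vehicle-year, average claim cost)
    "cheap_safe_autonomous": (0.005, 4_000),
    "expensive_autonomous":  (0.005, 15_000),
    "human_driven":          (0.060, 9_000),
}

LOADING = 0.25  # overhead and margin on top of expected loss

def annual_premium(vehicle_class):
    frequency, severity = RATE_TABLE[vehicle_class]
    expected_loss = frequency * severity
    return round(expected_loss * (1 + LOADING), 2)

if __name__ == "__main__":
    for vc in RATE_TABLE:
        print(vc, annual_premium(vc))
    # cheaper/safer cars cost less to insure; human drivers cost the most
```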


No. I mean, a bit more generally, that there is not often a situation where someone is forced to make a choice to die, or to kill one group of people or another. That scenario seems a bit extreme, and it does not happen very often. As for daily deaths on the highway, they happen constantly, but they are rarely the result of someone being forced to make a choice to take a life or save a life. Most often they are the result of stupid, avoidable choices.

My point is that you are putting too much faith in technology, when technology is at best reliable, but certainly not perfect.
And considering that there will be accidents from time to time, and that manufacturers are going to be held liable for these accidents, since the passenger certainly won’t have any influence on the matter, a lot of ethical scenarios are certainly going to be taken into account. Oh, not by scholars; the caricature you painted doesn’t exist outside of college campuses, as far as I’m aware.

But you made your point, and I made mine, so let’s leave it at that; we certainly won’t agree on this:

You act like life-or-death liability doesn’t already exist. It does. It is in your freaking car already. A failed seat belt, brake, or airbag will kill you. Failed aircraft parts will kill you. Failed construction will kill you. Putting your complete and total trust in a machine and assuming that the system is going to work happens all the freaking time, to everyone. This is such a boringly mundane occurrence that you don’t even register that you are surrounded by parts and constructions that are literally life and death and have failure rates measured in parts per million, with multiple redundant backups.

The system will have boring old insurance to deal with the majority of liability. We will accept some level of failure, while negligent failures will be dealt with like any other negligent failure: recalls if the company is on the ball, fines and restitution backed up by the courts if it isn’t. Cars will make the same dull ethical decisions they currently make, which is to protect the passenger while avoiding harm to others in a mindless sort of way.

I’m not saying that there will not be challenges, just that they will be small and boring legalistic challenges that will be easily overcome. Insurance will get boring and start looking like normal insurance. There will be no three laws of robotics, just an engineer who programs the car to fail as gracefully as it can. There is no brave new world, and considering the world we are going to leave behind, the ethical challenges will be the laughable stuff of philosophy undergraduates. Everyone will just enjoy the fact that car fatalities went to nearly zero, that nearly everyone commutes to work in a rental electric vehicle, and that cities have suddenly become walkable and bikeable places now that all the fucking cars have been kicked off the sides of the streets and relegated to quadruple-parked garages.
