We would rather hand over our lives to machines than to other human beings.
[quote]Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
–Dune[/quote]
A fun philosophical question, and maybe even a fun one to kick around if you’re in an altered state of mind, but it seems wholly unrealistic. Why should these situations ever arise? In 20 years of driving I’ve never been in a situation where it was me or them. Usually you can just brake.
Not sure I agree. Given the constraints of the scenario (the car driving fairly fast, since the road ahead was completely open at the time, plus sudden tailgating and an emergency stop required to avoid pedestrians), hitting the brakes hard will cause a collision from behind, which leaves the end result hard to predict. You have a fairly high chance of one of the following worse scenarios unfolding: you’ll be pushed forward into the pedestrians, killing them anyway; pushed into oncoming traffic, causing much greater damage; or pushed off the edge, which was the original “self-sacrifice” option, except now there are two cars involved in the crash.
It seems (depending on the speeds and distances in the scenario, of course) that there may be a fairly significant chance that slamming on the brakes will imperil at least one other person besides the driver, and, because of the unpredictable nature of the crash and the oncoming traffic, that it has a non-zero chance of killing multiple people.
Driving straight off the bridge is all-but-guaranteed to maim/kill just one person.
So your options are:
1. Brake hard, with three possible outcomes: no one is hurt, one person is hurt, or multiple people are hurt
2. Drive off the bridge, almost guaranteeing that exactly one person will be hurt
3. Hit the pedestrian(s), guaranteeing that they will be hurt/killed
Personally, I think the most ethical solution might actually be 3. The pedestrians were most at fault; they should bear the consequences of their actions.
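Just to make that comparison concrete, here is a rough sketch with entirely made-up probabilities and casualty counts (nothing in the scenario pins down real numbers), scoring each option by expected harm and worst case:
[code]
# Toy comparison of the three options above. The probabilities and
# casualty counts are invented purely for illustration.
options = {
    "brake hard":       [(0.50, 0), (0.35, 1), (0.15, 3)],  # (probability, people hurt)
    "drive off bridge": [(0.05, 0), (0.95, 1)],
    "hit pedestrians":  [(1.00, 2)],
}

for name, outcomes in options.items():
    expected = sum(p * hurt for p, hurt in outcomes)
    worst = max(hurt for _, hurt in outcomes)
    print(f"{name:17} expected people hurt = {expected:.2f}, worst case = {worst}")
[/code]
With these made-up numbers, braking hard comes out best on average but is the only option with a multi-casualty worst case, which is exactly the tension the list above is pointing at.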
You’re trying to score semantic points, but you completely and utterly missed the point of the statement.
The whole quote, of which you only quoted a fragment, was:
[quote]Deontology, on the other hand, argues that “some values are simply categorically always true,” Barghi continued. “For example, murder is always wrong, and we should never do it.”[/quote]
His point was that, in Deontology, some values are axiomatic. Murder is defined as being wrong (as you so helpfully re-iterated), therefore we must not do it.
You seem to be reading this as Barghi telling you that murder is wrong. No, he’s telling you that murder is defined as being wrong, and therefore under this definition, in a specific world-view, you must never do it.
It’s the part that follows the “therefore” that he was explaining. Under some other world-views, for example, just because something is wrong doesn’t mean you might not be justified in doing it.
If the car behind collides with you, then that driver (human or robotic) will always be held to be responsible* - he wasn’t maintaining an adequate distance behind you and therefore was directly and wholly at fault, so you could argue that he should be first choice for receiving any unpleasant consequences of whatever measure you choose to take.
In any collision between a car and a pedestrian, there is little doubt that the pedestrian will come off worst, so given a choice between hitting a car and hitting a pedestrian, it will always be better to hit the car - its occupants are protected inside their energy-absorbing cage and have a good chance of survival.
Collisions above 30 mph or so tend to be lethal for pedestrians.
You cannot assess at the time how culpable the pedestrian is - he might have good reasons for stepping off the pavement (sidewalk if you’re from the western side of the Atlantic), he might be of unsound mind and not responsible for his actions, or (cue the violins) he might be a child, too young to have learnt road-safety (though if you’d spotted an unaccompanied child ahead, you’d probably be sensible to ease off the throttle and be prepared for evasive action).
As for loss of control following a rear collision, all I can say is that I was once shunted from behind; the damage was consistent with around a 30 mph closing speed (far higher than would pertain in a tail-gating incident) and I don’t recall any great difficulty in maintaining control. Presumably a robotic car would handle this kind of thing better than a startled human anyway.
*Assuming you haven’t pulled across in front as he overtakes you.
[quote]His point was that, in Deontology, some values are axiomatic. Murder is defined as being wrong (as you so helpfully re-iterated), therefore we must not do it.[/quote]
So doing a wrong thing is wrong. He's really earning those philosophy paychecks there. The problem is deciding what is murder and what isn't. You can't just program the car to "not murder"; you have to define what actions to take in specific situations. Whether those actions count as murder is decided by people; it isn't axiomatic.
[quote]Under some other world-views, for example, just because something is wrong doesn't mean you might not be justified in doing it.[/quote]
If that is the case, then it isn't "simply categorically always true".
Now I’m going to have a hard time finding the reference, but I know I read that they looked into swerving to the side of the road to avoid accidents and found that braking hard and staying in the traffic flow was just as effective.
I’ll try to find the link. Google Fu don’t fail me now!
In a fully autonomous future world, there would be no tailgaters, since all cars would be programmed to maintain a safe distance between them. At some point we’ll go beyond independent autonomous vehicles and start networking them, which would allow them to stop virtually simultaneously, further reducing the possibility of a rear-end collision.
Not only that, but we could even allow fully autonomous vehicles, when they end up in a no-win scenario where there isn’t enough braking distance with standard brakes, to drop their engines, which would rapidly slow them down.
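Just on the networked-braking point, a toy back-of-the-envelope comparison (the reaction times and deceleration figures are invented) of how much following distance a brake broadcast could save compared with a human reacting to brake lights:
[code]
# Rough stopping-distance comparison: human reaction vs. a networked
# brake broadcast. All figures are invented for illustration.
def stopping_distance(speed_mps, reaction_s, decel_mps2):
    # distance covered while reacting + distance covered while braking
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

speed = 30.0   # metres per second, roughly highway speed
decel = 7.0    # hard braking, m/s^2

human = stopping_distance(speed, reaction_s=1.5, decel_mps2=decel)
networked = stopping_distance(speed, reaction_s=0.05, decel_mps2=decel)

print(f"human driver:  {human:.0f} m to stop")
print(f"networked car: {networked:.0f} m to stop")  # ~43 m less, all of it reaction distance
[/code]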
I’m actually having trouble coming up with any truly no-win situations in a fully autonomous world that don’t involve major natural disasters.
In a partially autonomous world, of course, there are some, but frankly the cars should be programmed in exactly the same way. Humans are going to be assumed to be at fault, since it’ll quickly be shown how a networked autonomous vehicle could have avoided the accident - especially given the amount of data one of these cars can record before any given accident to show exactly how the human failed.
Driving your own car is going to become very expensive, very quickly.
I would guess they will come with protections similar to a modern game console (if you mod it, it won’t go online or drive on public roads), and be subject to regular checks (like emissions testing now). Wouldn’t it suck to brick your car?
That’s why you hack it so well that the protection system thinks the thing is unmodified (see, e.g., the enable/disable switches on some chipping systems for the consoles), and if there are checks, you keep the original unit to swap in before going in for the check.
That’s why you dump the flash before putzing with it. JTAG is your friend.
Maybe I’ve gotten cynical but I unfortunately think it will play out more like this:
Should your self-driving car be programmed to kill you in order to save wealthier people? I think it will be sold as “minimizing property damage”. If two drivers are about to have a head-on collision, one of them worth $12B in a $200K car and the other worth -$20K in a $10K car, which driver’s life gets prioritized? I mean, if the rich person dies, think of all those jobs that won’t get created… ~sarcasm~
I say they should be programmed to follow the scenario that has the highest probability of all parties surviving.
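As a minimal sketch of that rule (the maneuvers and probabilities below are placeholders, not anything a real system would output):
[code]
# Pick whichever maneuver has the highest probability that everyone survives.
# Maneuver names and numbers are placeholders for illustration only.
survival_probability = {
    "brake hard":        0.80,
    "swerve off bridge": 0.40,
    "continue straight": 0.10,
}

best = max(survival_probability, key=survival_probability.get)
print(best)  # -> "brake hard" with these made-up numbers
[/code]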
Actually, thinking about this further, to expand on the above comment…it is very likely that we won’t OWN these autonomous cars; they will all be leased/licensed. At least that seems to be the way things are headed.
In this case I’m betting that the “driving algorithm” will be a cost analysis for the manufacturer: which car is more expensive, which driver is more likely to sue, which scenario causes the least third-party property damage, etc. Like big corps do for product recalls, or insurance companies do. Human life and property damage will all be converted to dollar amounts and prioritized accordingly.
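In code, the cynical version would look something like this sketch (every figure is invented). Note how swapping the objective from “everyone survives” to “expected payout” can flip which maneuver gets chosen compared with the survival-based rule suggested above:
[code]
# Cynical objective: pick the maneuver with the lowest expected payout
# for the manufacturer. All figures are invented for illustration.
expected_liability = {
    # maneuver: (probability of a claim, estimated payout in dollars)
    "brake hard":        (0.20, 2_000_000),  # wealthy occupant, expensive car, likely lawsuit
    "swerve off bridge": (0.60,   300_000),  # leased vehicle written off
    "continue straight": (0.90,   150_000),  # pedestrian claim, cheaper settlement
}

cheapest = min(expected_liability,
               key=lambda m: expected_liability[m][0] * expected_liability[m][1])
print(cheapest)  # -> "continue straight" with these made-up numbers
[/code]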
Not OK, but you can guarantee that if they are ugly school kids or non-photogenic adults, their pictures won’t stay in the public eye. Cute school kids and yummy mummies on the other hand …