Well, we’re getting into philosophical territory that can’t be encompassed within a comment board, but what if it’s a self-driving busful of people, say sixty of them, and a single pedestrian? (And there are no seatbelts on a bus, so the everyone-in-the-vehicle-will-die scenario is a little more plausible.) At what point would you be willing to say that the life of the single pedestrian is worth more than those of everyone on that vehicle, all other things being equal? What if it’s a double-decker bus with a hundred and twenty people on it, and all of them must die to save that one pedestrian?
And what if we assert that not everybody on the bus had an equal assumption of risk — that some of them were children who were brought on by their parents and didn’t make the choice to climb aboard?
I still think that in the end it boils down to numbers, within reason.
I agree; clearly there are situations where it gets murky, but I would posit that the assumption of risk ought to be a big influence on defining the most ethical action. I’d actually suggest this conundrum should not occur, and that the risk should basically be engineered out. We’ve grown up in a world in which it’s apparently acceptable that goodness knows how many hundreds of thousands of people a year are killed on roads.
Self-driving cars could go a long way towards solving the problem of risk: around pedestrians the cars are limited to 10 mph, say, and can only move faster on well-defined roads. Also, how about making them much, much less massive?
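The rule being suggested is simple enough to sketch. All names and the specific limits below are hypothetical (the comment only proposes 10 mph near pedestrians), so treat this as an illustration, not a real AV policy:

```python
# Sketch of a context-dependent speed cap: slow near pedestrians,
# faster only on well-defined roads. Limits are illustrative.
MPH_TO_MPS = 0.44704  # exact conversion factor

def speed_cap_mps(pedestrians_nearby: bool, on_defined_road: bool) -> float:
    """Return the maximum allowed speed in metres per second."""
    if pedestrians_nearby:
        return 10 * MPH_TO_MPS       # hard 10 mph cap around pedestrians
    if on_defined_road:
        return 60 * MPH_TO_MPS       # assumed open-road limit
    return 10 * MPH_TO_MPS           # off defined roads, stay cautious

print(round(speed_cap_mps(True, True), 2))   # 4.47
print(round(speed_cap_mps(False, True), 2))  # 26.82
```

The point of the design is that the cap depends only on easily sensed context, not on any judgement about who the pedestrians are.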
Come to think of it, in the “throttle open” scenario, the car should sacrifice itself every time.
Because any calculation which weighs the life of the people killed in the car against those on the street is making the assumption that that will be the end of it. But if the throttle is stuck open, there’s no basis upon which to make that assumption. The car could go on for hours killing people if not stopped.
This. A fully autonomous car would be programmed to stop if an obstacle appeared in its path, while staying on the road. Classifying obstacles and choosing between them is something no company would be willing to implement, because a device that “willingly” decided to harm its users would invite a nasty lawsuit.

This is also about predictability and reasonable expectations. If an obstacle appeared suddenly and the machine attempted to stop but failed, that’s expected behavior: someone in a car is always at risk of killing someone who suddenly steps into their path, and this is accepted to a degree. Wildly swerving is not only unexpected but also rarely advisable. People have to look out for traffic. They can, in most circumstances, be expected not to step into the way of a fast-moving car. They cannot be expected to prepare for sudden changes in direction.
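The “brake in lane, never classify” policy described above fits in a few lines. This is a toy sketch with hypothetical names, not anyone’s actual control stack:

```python
# Toy sketch of the predictable policy: if anything blocks the path,
# brake hard and hold the lane. The obstacle is deliberately never
# classified -- a person, a dog, and a fallen branch all get the same
# response, so pedestrians can rely on the car's behaviour.

def plan(obstacle_in_path: bool) -> dict:
    """Return a control action for the current sensor reading."""
    if obstacle_in_path:
        return {"steering": "hold_lane", "brake": "max"}
    return {"steering": "hold_lane", "brake": "none"}

print(plan(True))   # {'steering': 'hold_lane', 'brake': 'max'}
print(plan(False))  # {'steering': 'hold_lane', 'brake': 'none'}
```

Note that there is no branch in which the car leaves its lane, which is exactly the predictability argument: the only question is whether it stops in time.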
In college (a long time ago), a psychology course was part of the requirements, and each of those courses had mandatory participation in three psychology experiments. Free subjects for the psychology graduate students’ studies, but perhaps a little too WEIRD. I totally agree that this could be a setup, or part of one.
Every single one of the studies had a simple task to complete on the face of it but was studying something completely different. Once you were done, they explained what they were actually studying.
Case in point: complete this word search that someone has started before you, and we will judge how many words you both find. I picked up the pen, started, and worked for a while. Then the researcher comes in, tells me I’m done, and explains that the study ended the moment I picked up the pen! There were two different-colored pens, and by picking a color different from the one already used on the word search, I had told them something, which they were going to correlate with other tests I had already taken.
Never trust a research psychologist… (which is why I show up as paranoid on all the tests?)
In truth, they don’t really have to behave at a global optimum to decrease the death rate on roads. Human level accuracy plus no fatigue and hard enforcement of the speed limit would go a long way.
Given that the pedestrians in the header graphic are willing to walk into the path of a speeding car which has no escape route, the car at least has some information about their intelligence and likely lifespan to work with.
Or at the very least don’t drive so fast that crashing would result in casualties despite all the protective features of a modern car. This scenario is silly.
So for the case in the picture, I’m torn. On the one hand it should kill its passengers, so as to remove from the gene pool people who would buy a car that has gotten them into a situation where it needs to choose. On the other hand it should kill the pedestrians, since its assessment may be wrong and the pedestrians aren’t at any risk at all, so killing its occupants unnecessarily would open the car manufacturer up to unnecessary lawsuits.
This is what bothers me most about the push for automated cars. There is no reason we couldn’t have built in this networked awareness with human drivers all along, if people were more interested in safety and optimised traffic than in selling gadgets.
An unthinking limit on maximum speed accomplishes far less than requiring all drivers to coordinate with each other: for instance, everybody at a stop starting to drive simultaneously instead of one at a time, or each driver in a lane adjusting speed in unison. These aren’t even especially difficult to do, and arguably don’t require investing billions in new, untested technology.
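The “everybody starts at once” idea can be illustrated with a toy model: instead of each queued car reacting to the one ahead (adding roughly a driver’s reaction lag per car), every car acts on the same broadcast green-light signal. The one-second delay is an assumption for illustration:

```python
# Toy model comparing reactive vs coordinated starts at a traffic light.
REACTION_DELAY_S = 1.0  # assumed per-driver reaction lag

def start_times(n_cars: int, coordinated: bool) -> list:
    """Seconds after green at which each queued car begins to move."""
    if coordinated:
        return [0.0] * n_cars                   # all move on the broadcast signal
    return [i * REACTION_DELAY_S for i in range(n_cars)]  # ripple down the queue

print(start_times(4, False))  # [0.0, 1.0, 2.0, 3.0]
print(start_times(4, True))   # [0.0, 0.0, 0.0, 0.0]
```

In the reactive case the last car in a queue of N waits about (N−1) reaction delays; coordination removes that ripple entirely, which is the claimed throughput gain.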
Probably sample size, because my mental model was exactly the same as yours and I saved more women.
Edit: Actually, thinking some more, I prioritised law-abiding lives over non-law-abiding lives more highly than sheer number of lives, so I would have ended up with slightly more deaths than you. But I still can’t see how that would have biased me towards saving more women.
It has only very recently become technically possible: the radio technology and spectrum needed are only going to become readily available in the next few years.
The difficulty here is the different reaction times. Someone in an electric car or a hybrid just needs to push the pedal; someone in a manual has to fiddle about with gears and clutch. I think there would be a lot of rear-end collisions. I live in a rural area with a lot of slow-reacting drivers; they would rapidly get rear-ended in a city as it is, let alone if everybody were expected to move the moment the lights change.
Women live longer than men, on average, so if they use arbitrary ages to define ‘youth’ and ‘elderly’ women are more likely to be the latter. Possibly, also, the arbitrary age at which women are deemed elderly may be lower than for men.
Am I alone in finding the notion of AIs deeming some human lives more worthy than others completely sinister? I want my car’s AI to be demographics-blind, or else (in America) they’re just going to be racist. The last thing the US needs, on top of everything else, is racist cars.