Originally published at: http://boingboing.net/2016/08/11/website-asks-you-to-think-like.html
…
It’s at http://moralmachine.mit.edu/
The missing link…
Mebbe we could link it up with our Pokemon Go scores so there’s math involved. ‘Level 5 trainer or Level 7 trainer’…
Um, it’s “Kill all humans”.
Yes?
ahem
So, if I get all of my friends to choose only the paths that will lead to human death, can we jump-start the singularity SkyNet?
Is this a moral value and utility experiment masquerading as a technology study? Are they looking for flaws in how people judge value of life if we place ourselves as a machine? Are they just retreading social psychology research with new framing?
Man, I need to lay off the social psychology podcasts…
I replied basically along the lines of:
a) save the most hooman lives
b) ignore animal lives unless costless in terms of hoomans
c) avoid intervention (swerving) unless it saved more lives
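In code terms, that ranking amounts to something like the sketch below — purely hypothetical; the `Outcome` type and its fields are invented for illustration, not anything the Moral Machine actually uses:

```python
# Hypothetical sketch of the a/b/c ranking above; Outcome and its fields
# are invented for illustration, not part of the Moral Machine site.
from dataclasses import dataclass

@dataclass
class Outcome:
    humans_saved: int      # humans who survive if the car takes this option
    animals_saved: int     # animals who survive if the car takes this option
    requires_swerve: bool  # True if the car has to intervene (swerve)

def preference(o: Outcome):
    # a) maximize surviving humans,
    # b) break ties on animals (so animals only count when costless in humans),
    # c) all else being equal, prefer the option that doesn't intervene.
    return (o.humans_saved, o.animals_saved, not o.requires_swerve)

def choose(options: list[Outcome]) -> Outcome:
    return max(options, key=preference)
```

Ties on human and animal counts fall through to rule (c), so a "stay the course" option beats an equally deadly swerve.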
I was almost indifferent as to whether the lives were in the car or not. I can’t see why this would be a major factor, although the “cause” of the collision always in some way stems from the car occupants’ decision to drive and therefore create the danger, so if the numbers of lives were equal, I’d sacrifice the occupants.
I was also indifferent as to the characteristics of the lives (law abiding/criminal, male/female, young/old, athlete/heavy, professional/other occupation). This is because I don't think it matters, and in any case it isn't going to be an input to the decision-making algorithms for a very long time, if at all.
Generally speaking, self-driving cars will be much better at avoiding fatalities than we meat-puppets currently are, and debates about the moral relativity of the decisions will relate to a tiny fraction of the number of lives saved overall. No reasoned argument or deliberation regarding the moral outcome of pre-collision decisions currently occurs in the numerous split seconds before we kill 30,000+ people each year, so any reasonable AI will do better than that.
In my profession, we believe that risk management and harm reduction are best done in the reasoned environment of professional planning and engineering, and not at the “coalface” where one’s wits and reactions remain the only barrier. Self-driving cars offer a great move forward for motoring safety.
Whether we should even be strapping ourselves into private vehicles and zipping around at all (as much as we do) is another debate.
Arf. This makes me look, perhaps, not nice.
But in all honesty I made no judgement at all as to what the people I would hit would look like.
I just said the car shouldn’t swerve into pedestrians at all, if avoidable.
There’s no way to just hurt everyone a bit?
Same as me. I ignored it.
This tells me:
A) Roadways on which self-driving cars drive should be designed to offer options other than plowing through pedestrians or concrete barriers in the event of brake failure.
B) The people creating these moral dilemmas include unnecessary information such as knowledge of the ages, genders, and careers of potential victims, which is something a car in such scenarios won’t know (unless Skynet has already taken over and then it doesn’t matter).
C) I care more about dogs and cats than I do about pregnant women, children, and the elderly.
D) Dogs and cats shouldn’t be left alone in a self-driving car (or a human-driven car).
There is no correct answer of course. No system is perfect. IMO the default should be to protect pedestrians, but not at the cost of passenger lives. People will die just like they do today without self-driving cars. But for some reason there seems to be an expectation that self-driving cars should be able to make these decisions better than you or I can, and that’s just silly.
It’s weird that at the end they have a category for “caring about passengers’ lives” but not for “caring about pedestrians’ lives”. I generally prefer to save the ones who didn’t choose to ride around in a deadly robot.
Maybe the concrete barrier should be better engineered to reduce potential passenger injury? Car vs. human at a high rate of speed will always end poorly for the human, but car vs. barrier could be okay if there were, say, a slowdown lane leading to a less destructive impact, plus internal protections like airbags (or Demolition Man car foam!).
Of course, self-driving cars will be driving on old roads, so this isn’t likely to happen with road design any time soon. But maybe that’s what will happen once enough self-driving-car fatalities have occurred, at least in places where the roads can be updated practically.
In this specific scenario that would help. But the overarching question seems to me to rely on a no win decision making scenario which is always a possibility.
For me it’s exactly the other way round, but not because I decided that thieves are fair game.
I based my decisions on the number of surviving hoomans (but maybe counted a few dogs, too; it took me rather long to realize that some of the red blobs are animals…). If it was a tie, I decided that the car should drive straight ahead, with the rationale that a spinning car (sharp turn while braking hard) is, in extreme cases, much more dangerous for hoomans outside of the simplistic model the website presents.
Easy-peasy, drive into the Jersey barrier, nobody dies. Or don’t drive so fast you can’t slow down in time. This is yet another false-choice argument posed to give people FUD (fear, uncertainty, doubt) about autonomous cars, when the real answer is that such a car, if properly designed, wouldn’t get itself into that all-too-human situation in the first place.
Here, I Kirk’d the Kobayashi Maru. Hypothetical potential victims need not thank me for my hypothetical life-saving photoshop skills!
I like that the car is 100% confident that it will be able to kill everyone it chooses. You kill all humans, you beautiful bastard you.
I’m hoping, though, that in the real world cars will be programmed to forgo all this weird contrived moralistic horse shit in favour of a probabilistic model that selects the path most likely to kill the fewest people. In just about every one of these scenarios as depicted, the car needs to activate the horn, use a side barrier to reduce speed, aim for the crossing, and hope that the pedestrians have decent reflexes.
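Something like that probabilistic model, as a rough and entirely hypothetical sketch (the `Trajectory` type and the numbers are made up for illustration):

```python
# Hypothetical sketch of "pick the path with the lowest expected casualties".
# Trajectory and its probabilities are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Trajectory:
    name: str
    # per-person probability of a fatal hit along this path, e.g. lower
    # after the car sounds the horn and scrubs speed against a barrier
    hit_probabilities: list[float]

def expected_casualties(t: Trajectory) -> float:
    return sum(t.hit_probabilities)

def pick_path(options: list[Trajectory]) -> Trajectory:
    return min(options, key=expected_casualties)

paths = [
    Trajectory("brake only, no warning", [0.9, 0.9]),
    Trajectory("horn + barrier scrub, then the crossing", [0.2, 0.2]),
]
print(pick_path(paths).name)  # -> horn + barrier scrub, then the crossing
```

The point is that the car never “chooses to kill” anyone with 100% certainty; it just shifts the odds, which is exactly what these contrived either/or scenarios leave out.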