Mercedes' weird "Trolley Problem" announcement continues dumb debate about self-driving cars

Originally published at: http://boingboing.net/2016/10/24/mercedes-weird-trolley-pro.html

1 Like

A moot point because Mercedes drivers are always going too slow for traffic flow in the center lane.

/s

11 Likes

Mercedes is specifically programming cars to NOT murder you. So your post seems confused.

And why is the trolley problem “misleadingly chin-stroking” when it comes to autonomous cars? It seems like a valid question any autonomous-vehicle algorithm will have to address. Making this about DRM entirely misses the point of an interesting and pertinent discussion.

I’ll also say that all non-autonomous vehicles are basically periodic murder machines already, no DRM necessary. Not to get all Musky.

7 Likes

The trolley problem is completely unrealistic, and no car will come equipped with a separate trolley-problem-solving algorithm, nor, I presume, any other specialized philosophy software. Which is fine, because humans don’t have one either.

Having a car choose to kill its occupants to save other people assumes that a car can judge the situation perfectly, otherwise a bug in the software can result in the car just veering into the nearest concrete wall because it saw a shadow on the road in the shape of a pile of babies.

14 Likes

what if the driver is watching a Harry Potter DVD?

1 Like

Agree.

Cory seems to want to divert this incredibly complex issue into a shallow story about code and DRM.

It’s evident in his dismissive tone where he describes the “hypothetical silliness of the Trolley Problem.”

Maybe what bugs Cory is that the TP has no quantifiable solutions. It doesn’t fit into a utilitarian worldview: there are no answers, only questions to be raised.

But that does not mean the TP is ‘silly’.

It means that designers and policy makers need to take a step back and re-evaluate their decision making processes about the role of artificial intelligence.

And they need to do it using all the tools available, including philosophy and yes, the Trolley problem too.

5 Likes

Except computers don’t think in a philosophical manner. They don’t make ethical decisions. They don’t think at all.

This continued Trolley Problem trolling by people who don’t understand how computers work is really getting tiresome.

The computer will drive as well as it can. It will continue to try to not hit anything for as long as it can. It will dump speed as fast as it can. Most of the time, that will be enough. A very small percentage of the time, it won’t be, and someone will get hurt or killed. Perhaps the occupants, perhaps someone else.
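In code terms, the behavior described above is a control loop, not an ethics engine. A deliberately naive Python sketch, purely illustrative (every name and threshold here is hypothetical, not any vendor’s actual system):

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float        # range to the obstacle along our path
    closing_speed_ms: float  # how fast we are approaching it, in m/s

MAX_DECEL_MS2 = 9.0  # roughly the limit of good tires on dry pavement

def emergency_response(obstacles: list[Obstacle], speed_ms: float) -> float:
    """Return a brake command in [0, 1]. No ethics branch: just shed speed."""
    if not obstacles:
        return 0.0  # clear road, keep driving
    # Worst threat = soonest predicted impact.
    tti = min(o.distance_m / max(o.closing_speed_ms, 0.1) for o in obstacles)
    time_needed_to_stop = speed_ms / MAX_DECEL_MS2
    # Brake proportionally, saturating at full braking when impact is imminent.
    return min(1.0, time_needed_to_stop / max(tti, 0.1))
```

Notice the only tunable is how hard and how early to brake; there is no slot where a “who dies” decision would even plug in.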

Any car company that puts code into a car to intentionally kill the driver under any circumstances will likely be sued into oblivion by the families of anyone killed by one, shortly after that code is discovered, whether the code was active or not.

8 Likes

I want mine adjustable between self-loathing and road rage.
Or maybe linked to a mood-ring.

8 Likes

Fixed it for ya.

6 Likes

Trolley Problem Blog: Day 115

Saw a new article today. The answer is still, “slam on the goddamn brakes.” No one seems to care.

12 Likes

^^This right here is the answer. Just slam on the brakes. The driver is in a metal shell; he’s far more likely to survive being rear-ended than a pedestrian is to survive being hit. And if many of the surrounding cars are also autonomous, they will most likely slam on their brakes at the same time.

Why is everyone having a hard time dealing with this?

2 Likes

The TP is a moral/ethical thought experiment. It postulates a condition where every single safety measure imaginable has failed, and you’re left with a moral choice about how to cope with the rest of the world’s lack of planning.

But as presented in the AI context, that is in no way helpful: the goal is to design a system not to solve the TP, but to make the TP never actually come up. And since the TP is highly contrived, it’s actually really easy to prevent it from coming up. Don’t let pedestrians walk into oncoming traffic. Don’t drive so fast that you’ll have no time to stop when pedestrians cross traffic or a car in front of you stalls. Honk your horn to alert everyone nearby that you want to stop but physically won’t have the time, so they should brace or disperse. Drive a modern car consisting of a giant steel cage with airbags and restraints, surrounded by a frame meant to crumple and absorb as much force as possible. Drive on roads that are clearly marked. Don’t follow too close, so you don’t hit someone ahead of you who is faced with the same dilemma.

But to the extent that the TP is applied to the AI of a self-driving car – rather than some more general-purpose, directed-murder device – the problem itself is EASY. No AI is going to come up with a better algorithm than a human giving careful, deliberate, advance consideration to this sort of problem. And no hardware is going to allow an AI to detect all the conditions that a human can observe and infer. And after having millions of people (drivers) consider such problems for hours a day over the course of more than a century, the optimum solution we’ve found so far asks one question of the driver: should I slam on the brakes?

Don’t change lanes or turn: you’ll do more damage. Honk the horn if you need to. Either brake and accept that the vehicle behind you may hit you, or don’t and accept that you’ll hit what’s in front of you. If you never want to hit what’s in front of you (such as a person), slam on the brakes and let your crumple-framed steel cage and restraint system absorb the impact to your much less vulnerable rear section.
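For a sense of scale (my numbers, not the poster’s): stopping distance grows with the square of speed, which is why the prevention rules above do most of the work. A back-of-the-envelope Python check, assuming dry pavement, about 8 m/s² of braking, and one second of reaction/actuation delay:

```python
def stopping_distance_m(speed_kmh: float, decel_ms2: float = 8.0,
                        reaction_s: float = 1.0) -> float:
    """Distance covered during the reaction delay plus full braking to a stop."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

for kmh in (30, 50, 100):
    print(f"{kmh} km/h -> {stopping_distance_m(kmh):.0f} m to stop")
# 30 km/h -> 13 m, 50 km/h -> 26 m, 100 km/h -> 76 m (roughly)
```

At city speeds the whole problem fits in a couple of car lengths; at highway speeds it doesn’t, which is the entire argument for not driving too fast in the first place.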

If anything, the point that “designers and policy makers need to take a step back and re-evaluate their decision making processes about the role of artificial intelligence” is a lot closer to Cory’s point that it’s stupid to put this sort of burden on an inherently flawed system that puts vendor and consumer in adversarial roles (DRM).

But more than that, everyone forgets that every human driver on Earth is just wondering, “should I slam on the brakes?”

5 Likes

Good answers are seldom as interesting as bad questions.

6 Likes

The fundamental issue with the Trolley Problem is that it is artificially binary. It forces an ethical judgement by creating a distinctly unreal situation. TP is binary, while the real world is very analog.

5 Likes

people have mostly stated my thoughts already. but OK, let’s say that despite the car’s AI being much more focused and adept than a human driver, the one-in-a-million situation occurs (though o-i-a-m is still pretty prevalent, given the proliferation of auto use) and the AI needs to endanger either the driver or the ped/other car/whomever.

as has been said, humans make this decision a lot, and they make it in the moment, when logic is not being applied. therefore they are absolved (as if anyone ever even brings up blame for humans in that situation).

so, why are we expecting car AI to be more moral than a fricking human being? remove the AI from the entire argument and make the human choose, same as they do when they pilot the auto:

[engine start]
in the event of "trolley problem," who shall be endangered?
<myself>
<other>
driver clicks one and the car starts.

Donald Trump clicks <other>, the Dalai Lama clicks <myself>, you do however you feel that day. You’re making the same choice you would anyway, just pre-meditating it. The auto manufacturers are carrying the same burden as they always did in that scenario: nothing.
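Purely as a thought experiment, that startup prompt is a one-enum configuration screen. A hypothetical sketch of it in Python (nothing like this ships in any real car, and all names are made up):

```python
from enum import Enum

class Endanger(Enum):
    MYSELF = "myself"
    OTHER = "other"

def engine_start() -> Endanger:
    """Ask the satirical 'trolley setting' before the car will start."""
    choice = input('In the event of a "trolley problem," who shall be '
                   "endangered? [myself/other]: ").strip().lower()
    return Endanger(choice)  # raises ValueError until the driver picks a side
```

Either way the point stands: the choice, and the moral burden, stay with the driver, not the manufacturer.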

2 Likes

So every morning I’d click <myself> and then eventually I’d be careening towards a wall to save that idiot neighbor who lets his vicious dogs run free and I’d be screaming “WHY couldn’t I have been more like TRUMP!”

I don’t think I like the future any more.

7 Likes

the brake is still the manual override, but you underline how my solution is imperfect. I still say let the humans continue to be imperfect; don’t outsource your personal moral burden to a software dev and create DRM on your own personal property to enforce it.

1 Like

That’s always what pops into my head when the issue comes up - so many artificialities that are inserted into the discussion.

Yeah, that’s something that always seems to get left out of the discussion - that at most speeds, being hit by a car is not very survivable for a pedestrian, but being in a car that hits a barrier is. So the Mercedes announcement means what - that their car will mow down pedestrians rather than run into another car, or a tree?

2 Likes

In the spirit of Bruce Schneier’s annual online “movie-plot threat” contest: how could we abuse the car’s AI in such a manner that we could force the car to do something the driver would not want to have happen?

To start off, imagine that a well-known, often-reviled political candidate was riding in a new government self-driving car along California Highway 1 after winning the election. At a carefully selected spot, an unhappy Democrat decides to take matters into their own hands and shoves several baby carriages onto the road, forcing the car’s “Trolley Problem Program” to swerve off the road and over the cliff in order to save the lives of multiple children, since it is well known that the approved self-driving algorithm has to engage the “Think of the children” protocol, which dictates that no number of adult lives is above the life of a single child.

1 Like

If you try using some of the early iterations of self-driving – “dynamic cruise control” is a good example – you’ll see that computers drive cars considerably more conservatively than humans do. I’d say that the major danger that these “autonomous” autos present is all of the head-exploding rage that humans exhibit when they see vehicles driving slowly and safely.

I think, at least initially, we’re going to see accidents and fatalities tick ever downwards while the rage-induced use of the horn will tick ever upwards.

4 Likes