Mercedes' weird "Trolley Problem" announcement continues dumb debate about self-driving cars

It seems to me MB have turned this into a sales feature. To them and to the driver, the statement ‘this car will endeavour not to kill you’ is the right answer. However, if you look at who in this picture is actually taking the risk, then it’s the pedestrian who should be saved. The pedestrian did not choose the car, nor did they choose to drive, yet the scenario means they suffer all the risks of MB’s and the car owner’s decisions.

I suggest the configuration should be that the pedestrian is not at risk, but the driver is.

1 Like

The Trolley Problem has nothing to do with the passengers on the trolley, or even the operator of the trolley. It postulates that there are people tied up on the track past a switch. A bystander standing at the switch has the option of doing nothing, thereby allowing those people to be killed, or manually rerouting the trolley onto a siding, where it will kill one other person.

4 Likes

for extra drama, the baby carriages could be shoved from the top of a flight of stairs!

2 Likes

#I’ll save you, Nell!

4 Likes

You missed…
Don’t drive on trolley tracks where your choices can be binary ones determined by an actual track switch.
and…
Don’t allow the vehicle to operate if the brakes have exceeded their service interval.

1 Like

Simple solution: a legal system for cars that behave immorally.

5 Likes

*scroll down
<none of the above>

*click

See, they just need a larger touch screen!

6 Likes

I feel like it’s way too soon to expect car manufacturers to have a good answer for this. They need to start with smaller moral conundrums and work their way up. For example, should the car park itself diagonally across two parking spaces?

3 Likes

Philippa Foot saw the TP as a conflict between ‘positive duties’ and ‘negative duties’, i.e. the obligation to actively do good vs. the obligation to refrain from doing harm.

It’s the same idea postulated in another thought experiment: ‘do you torture the terrorist (or the terrorist’s child) in order to discover where the ticking bomb is located?’

So the point is, as you say, not to find a solution, or indeed to prevent the million-to-one chance of TP-like scenarios occurring, but instead to decide what constitutes ‘good’ and what constitutes ‘harm’.

The big issue is not about swerving or braking to save the group of school kids; it’s about whether or not we let ourselves build AIs that are capable of making these types of decisions in the first place.

Hence, rather than let market forces and Silicon Valley build a future where smart-but-dumb machines come with built-in murder algorithms, we first need to let the humans have a say.

3 Likes

Ah, I can answer that. It’s misleading because the chin is not usually the part of the anatomy being stroked when people are talking about self-driving cars.

5 Likes

The question will be solved by the lawyers. There’s no way to stop the pedestrians from suing. But the owner can be forced to accept binding arbitration by clickwrap. Therefore, kill the driver every time. Oh, and don’t allow the radio to play Death Cab; they only raise suspicions.

4 Likes

Or we could just, you know, slow the fuck down!

1 Like

Based on the owners I encounter on the streets of Toronto, I really don’t look forward to meeting the digital personality BMW instills in its vehicles…

1 Like

While the role of AI and who gets to shape it is an important issue, the immediate discussion has always been about self-driving cars. And the discussion always ignores two facts: 1) that anything resembling a trolley problem comes about only when the driver, civil engineers, pedestrians (every stakeholder in the process) have ignored the simple rules established for nearly a century, and 2) that neither of the two offered solutions is anything close to the solution a real human employs (“slam on the brakes”), which actually works much better, and is in fact a perfect, fatality-free solution when all the existing laws have been obeyed.

The trolley problem only comes up because philosophy (even though useless here) is more interesting than good planning.

2 Likes

Talking about Trolley Problems regarding self-driving cars is fundamentally stupid. In any situation where you have time to significantly change the result, the accident is likely avoidable. In any situation where you don’t have time to avoid the accident, there’s not much you can do to mitigate damages. Furthermore, there is no way to unpack the full ethical implications of the trolley problem without absolute knowledge of the situation, including the number of passengers in the various vehicles, their ability to avoid harm, their relative culpability for the situation, etc. And of course it’s nearly impossible to get that information even with the benefit of hindsight, much less from a bunch of vehicle sensors in real time.

Self-driving cars should be designed to avoid accidents, and to avoid situations that will produce accidents. That’s all any driver can do or should try to do.
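A minimal sketch of what that brake-first, don’t-moralize policy might look like, keyed on time-to-collision; the thresholds here are invented for illustration, not anything a real car ships with:

```python
# Toy sketch of a brake-first policy keyed on time-to-collision (TTC).
# Thresholds are invented illustration values, not any vendor's tuning.

def time_to_collision(gap_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact if neither party changes speed."""
    if closing_speed_ms <= 0:
        return float("inf")  # gap is constant or opening: no conflict
    return gap_m / closing_speed_ms

def plan(gap_m: float, closing_speed_ms: float) -> str:
    ttc = time_to_collision(gap_m, closing_speed_ms)
    if ttc < 1.5:
        return "full brake"       # imminent: shed speed, stay in lane
    if ttc < 4.0:
        return "moderate brake"   # developing hazard: slow down early
    return "maintain"             # nothing worth reacting to yet

print(plan(gap_m=30.0, closing_speed_ms=25.0))  # TTC = 1.2 s -> full brake
```

Note there’s no branch in there for counting pedestrians; everything hard happens before the crisis, in picking thresholds and speeds that make “full brake” sufficient.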

2 Likes

This is going to be in the highest German courts in no time.

Haggling over human lives is against Article 1 of the German constitution: you can’t trade one innocent human life sacrificed for a thousand saved.

Fun design process ahead.

2 Likes

Not sure why you keep banging on about ‘slam on the brakes’. It’s a nice piece of rhetoric, but hardly an elegant solution when, 50 yards ahead, a truckload of wood spills onto the highway in 70 mph traffic.

In this case, broadly speaking, you have 2 options (a TP if you like):

- Do nothing (logically equivalent to slamming on the brakes in this instance, i.e. you’re guaranteed to have a collision; the back-of-envelope check below shows why)
- Swerve

There are no solutions to this problem, only choices to be made.
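For a sense of scale on that “guaranteed collision”, here’s the back-of-envelope check, using simple v² = 2ad kinematics and an assumed dry-road friction of 0.8 (reaction time ignored, which only makes things worse):

```python
# Can a car stop in 50 yards from 70 mph? Assumed friction, simple kinematics.
MPH_TO_MS = 0.44704
YD_TO_M = 0.9144

v = 70 * MPH_TO_MS            # ~31.3 m/s
a = 0.8 * 9.81                # assumed max deceleration, m/s^2
braking_m = v ** 2 / (2 * a)  # ~62 m needed to stop
gap_m = 50 * YD_TO_M          # ~46 m available

print(f"need {braking_m:.0f} m to stop, have {gap_m:.0f} m")
```

Roughly 62 m needed, 46 m available: braking alone leaves you about 16 m short, so the only live question is what the swerve logic does.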

Avoiding head-on collisions is not rocket science; most luxury cars can already achieve this.

The dilemma is whether or not we choose to instruct an AI to make swerving choices.

Philosophy cannot solve these problems, but neither is it ‘useless’ in informing the debate.

Because it’s the only good answer humans have developed. Ever. In a hundred years. Swerving into another lane or direction increases the risk of a cascading car accident. Which is why “slam on the brakes or don’t” is pretty much the only choice given in driver’s ed. for these sorts of situations.

There are other parts to the solution (well-marked roads, keeping a good following distance, sticking to the speed limit, separating pedestrians from traffic, knowing how to handle a brake lock, or knowing you need ABS), but the only decision that’s left to the driver at the moment of crisis is “brake or not”. And it doesn’t matter if the driver is a human or an AI.
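To put rough numbers on the “good following distance” part: the common two-second rule versus flat-out braking distance, with the same assumed 0.8 friction as above (illustration only):

```python
# Gap granted by the two-second rule vs. distance needed to brake to a stop.
# Deceleration is an assumed value; real braking varies with tires and surface.
MPH_TO_MS = 0.44704
DECEL = 0.8 * 9.81  # m/s^2

for mph in (30, 50, 70):
    v = mph * MPH_TO_MS
    gap_m = 2.0 * v                   # two-second rule
    braking_m = v ** 2 / (2 * DECEL)  # full stop from v
    print(f"{mph} mph: rule gives {gap_m:.0f} m, stopping needs {braking_m:.0f} m")
```

Under these numbers the rule keeps “brake” a sufficient answer, though only barely at 70 mph, which is an argument for padding the gap out at highway speed.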

But when presented with the driving equivalent of the trolley problem, the driver is bizarrely asked if he wants to go left, hit a wall, and die, or run over pedestrians and kill them. The situation itself is easily prevented (good speed limit, well-marked roads, good following distance, etc.). And both options are the two worst of the actual range of options.

This problem is not an Artificial Intelligence problem. It’s a basic intelligence problem, and it’s been solved by every driver’s ed. course in America, with considerable help from civil engineers to make the roads themselves pretty safe. Presenting it as an AI problem, much less suggesting it’s an interesting one, is disingenuous.

And this is completely ignoring the “honk so pedestrians can move themselves out of the way” component.

I get all that.

But your suggestions are prescriptive.

They don’t really cover ‘act of God’ scenarios: freak weather, catastrophic mechanical failure, aircraft dropping out of the sky, etc.

These might be rare incidents, but that doesn’t mean we should ignore them. Somehow I don’t think that “good speed limit, well-marked roads, good following” would help very much in these cases.

Hence, the AI/TP scenario is not contrived at all. If you have no choice but to swerve, which way would you like the algo to swerve you?

But those scenarios are still considered in basic driver’s ed.: safe speed for conditions (weather, visibility) and safe following distance. If an airplane falls out of the sky, your plan is exactly the same as if you turned a corner and saw a wall, or if an elephant ran into the road. I can’t even think of a scenario where swerving is the answer, unless you’re the only driver on an otherwise featureless plain. It generally makes your problems worse. An ideal algorithm for a car doesn’t swerve.
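“Safe speed for conditions” can even be written down: the fastest speed from which you can stop within what you can see. A sketch, with assumed friction and reaction-time values:

```python
import math

# Solve v*t_react + v^2/(2*a) = sight_distance for v.
# Friction and reaction time are assumed illustration values.
def safe_speed_mph(sight_m: float, mu: float = 0.8, t_react: float = 1.0) -> float:
    a = mu * 9.81
    v = -a * t_react + math.sqrt((a * t_react) ** 2 + 2 * a * sight_m)
    return v / 0.44704  # m/s -> mph

print(f"{safe_speed_mph(45.7):.0f} mph")  # ~50 yards of visibility -> about 45 mph
```

By this arithmetic, 70 mph traffic with only 50 yards of warning was already well over a safe speed before anyone had to choose anything.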

And I guarantee you no human driver has a good solution for a “plane falls out of the sky” scenario. Algorithms aren’t magic. They don’t create intelligence. They codify the intelligence and foresight of their programmer, and nothing more. If the programmer didn’t foresee how to cope with a falling plane, his algorithm won’t either.

2 Likes