Website asks you to think like a self-driving car and decide who should die

Asking the internet about ethics? That’ll end well.

7 Likes

Agreed, I must admit my first thought was how do we know that we’re not being Milgrammed here.

1 Like

Yeah, the website made a bunch of assumptions about me, not knowing (nor perhaps caring) about my ruleset, which it never asked me about. My rules (sketched in code after the list):

  1. The people in the car took a calculated risk that the pedestrians didn’t, so if the number of pedestrians is less than or equal to the number of passengers, the passengers should die. (I can see that there would be counter-arguments for this, but even people who cross on a red light ought to be able to assume that they’re not going to be killed for it.)

  2. Not intervening — driving straight rather than veering off — is always preferable if the number of victims is the same.

  3. People are more important than animals.

  4. The car can’t know (or more accurately we don’t know that it knows) whether the people in it or in the intersection are doctors or criminals, fat or athletic, young or old, so none of those things should have any bearing on its decision. It’s a numbers game.
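
If you wanted to write that down, it only takes a few lines. This is just a sketch of my rules, with every name invented for the purpose; no actual car exposes an interface anything like this:

```python
# A rough sketch of the four rules, in Python. Every name here is invented
# for illustration; no real self-driving stack exposes an interface like this.

def who_should_die(n_passengers: int, n_pedestrians: int,
                   obstacles_are_human: bool = True) -> str:
    if not obstacles_are_human:
        return "animals"        # Rule 3: people over animals
    if n_pedestrians <= n_passengers:
        return "passengers"     # Rule 1: the passengers took a calculated risk
    return "pedestrians"        # Rule 4: beyond that, it's a numbers game

# Rule 2 is the tie-breaker when both choices kill the same kind and number
# of victims: drive straight rather than swerve.
```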

4 Likes

So “polls” are nowadays called “crowd-sourced pictures of human opinion”?

3 Likes

I’m curious why you think the numbers matter with respect to the passengers vs the crossers, given (1). It seems your rule number one only acts as a decision maker when the numbers are equal; otherwise it’s just a decision based on numbers.

Already solved this one.

Should your self-driving car be programmed to kill you if it means saving more strangers?

3 Likes

Since the whole thing is predicated on a car under automatic control going too fast for the environment while approaching a hazard, apparently without any brakes, your solution is at least as valid as anything else.

4 Likes

There’s a deep mistake with this approach somewhere. I don’t know what kind, so bear with me while I think aloud.

First off, Asimov himself admitted at one point that his Laws of Robotics probably could not be programmed into an AI as top-down laws, as if the Laws were engraved on a golden shem in the robot’s head and it used its intelligence to interpret them “correctly”, somehow arriving at the optimum solution. That approach is probably unusable for any sapient (or mock-sapient) mechanism, even a biological one, without a basic sense of morality or an idea of what an optimum outcome should look like.

No, Asimov said that the Laws of Robotics are more like the guidelines a designer should use when making any tool: it shouldn’t hurt people, it should do what it’s supposed to, and it shouldn’t self-destruct. A smart car should not run into people (or animals), should be able to navigate from one given point to another, and shouldn’t hit anything that isn’t a person either. Note that those are pretty much overlapping criteria: cars aren’t terribly hardy things, considering the power they have, and bumping into people or objects will often stop them getting from A to B, so the designers should be concentrating on stopping the car from bumping into objects and people as a fairly basic stance.

In the scenario in the post, the car should never be in that situation, and if it does end up in it, it should be using all of its processing power to slow to a controlled stop as quickly as possible, since that’s the course of action that will produce the best outcome in most difficult situations. You don’t need to prognosticate about possible outcomes; you don’t need to assess whether the moving objects on one side of the road are nuns or penguins or a pedestrian crossing viewed in a weird light producing an optical illusion; you don’t need to count the number of passengers and look up a morality database with values for nuns, penguins, supreme court judges, and mimes. You just need to stop the car.
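
In other words, the whole decision procedure I’m arguing for looks something like this (a deliberately crude sketch; the `car` object and its methods are placeholders I’ve made up, not any real vehicle API):

```python
FULL_BRAKE = 1.0  # normalised brake command; an invented constant

def handle_emergency(car) -> None:
    """In trouble? Just stop. No morality database consulted."""
    while car.speed() > 0.0:
        car.apply_brake(FULL_BRAKE)      # squeeze the pedal through the floor
        car.steer_to_avoid_obstacles()   # stay on the road, dodge big things
```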

Because — and I tell you this as someone who has crashed a car, written it off, and been very lucky that no-one was hurt beyond a banged nose and a knee — even these massively-parallel biological computers between our ears, with their potential for calculation many orders of magnitude beyond what we can etch into microprocessors, and a million years of social evolution going into making a moral machine that can decide between nuns and mimes, these things do not react fast enough to make that assessment in crash-time.

When I crashed, in those few seconds between realising we were in trouble and coming to a stop in a farmer’s field, maybe a bit of my brain was praying to the universe not to kill me, and maybe an even smaller bit was praying not to kill my passengers — and those two parts did not have enough time to even begin negotiating about whether my life was worth more than my three passengers — but the biggest part was trying to keep the car on the road, squeeze that brake-pedal through the floor and avoid that fucken treeeee! And I didn’t even identify it as a tree at the time, just as a big object that I wanted to avoid hitting. It could have been a tractor, a cow, the farmer, a giant Fabergé egg, a Martian riding a thark, or a tent full of inflatable cushions and confetti, I just knew I didn’t want to hit it.

And I know what a tractor is, and what a Martian is, and what a farmer is, in ways that the simple processor in a car cannot, and probably couldn’t even if Moore’s Law had another century left to run. I don’t think any of the self-driving cars around have anything resembling an inkling of an idea that a human is anything more than a mobile obstacle that it will try to avoid colliding with, just as it will avoid these other, bigger, moving obstacles, and these non-moving obstacles (the same thing in a moving reference frame).

I doubt that any car at the moment can even count the number of obstacles it needs to avoid, let alone its passengers, and compare one to the other to make a decision based on any philosophical stance. Never mind making an involved moral decision on their worth. Nor can their programmers usefully squeeze in any rule more complex than “Don’t collide with things”. And I don’t believe they should be pretending they can, or have that as a programming goal, even as an unattainable ideal, not at this stage of the game.

This is bullshit. This is PR and lawyer-led ass-covering, so that when the inevitable happens, the car companies will point to this study and claim that they did everything humanly possible to avoid that inevitability, so it isn’t their fault, however it came out. And the crying shame is, the engineers will be doing everything they can to avoid it, just by trying to make the cars avoid hitting things.

7 Likes

To add to the reasons this pseudo-psych survey is wrong (and no different from the train coming to a switch, etc.): 99.999% of the crashes that occur under human driver control involve nothing even vaguely resembling this sort of choice, and in fact tend to happen so quickly a human couldn’t react, let alone make a steering decision.
The damage done by jackasses floating these surveys is tremendous: it gives readers an internal feeling of OMG COMPUTERS/AUTODRIVECARS/VIDEOGAMES/COMICBOOKS BAD BAD!!! with no statistical or demographic justification at all.

5 Likes

I’m mostly in agreement with your points but you should try crossing my street sometime. The drivers in that intersection are trying to commit negligent homicide.

Reminds me I need to put an airhorn on my shopping list. :why_is_there_no_whistling_emoji:

3 Likes

There was no way to prioritize people who look both ways before crossing the street over people who blithely sleepwalk into traffic while futzing around on their phones.

True, all of the deaths could be prevented by not having cars as @heng proposes, but more realistically they could also all be prevented by pedestrians taking a modicum of interest in their own safety. It’s a much simpler algorithm too: IF there is a car hurtling towards crosswalk THEN do not enter crosswalk.
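
Spelled out, with the one input named for clarity (my naming, obviously):

```python
def should_enter_crosswalk(car_hurtling_towards_crosswalk: bool) -> bool:
    # The entire pedestrian-safety algorithm.
    return not car_hurtling_towards_crosswalk
```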

2 Likes

It’s simple. All they need to do is put a kiosk in every autonomous vehicle that asks you a bunch of these questions. The robot car will then behave according to your personal values. The designers are now off the hook.

2 Likes

Reverse utilitarianism, aka the least harm to the fewest people. If you have to choose between killing two people and killing five, all other things being equal, wouldn’t you choose to kill two?
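
Boiled down, it’s a one-line comparison (a toy sketch; real scenarios are never this clean):

```python
def least_harm(deaths_if_straight: int, deaths_if_swerve: int) -> str:
    # Pick whichever action kills fewer people; ties favour not intervening.
    return "straight" if deaths_if_straight <= deaths_if_swerve else "swerve"
```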

1 Like

I think a better question would be “Why are we designing cars that drive towards a pedestrian crossing at a speed where a) they cannot make an emergency stop if somebody steps out, and b) you’re going so fast that swerving off would literally kill the occupants of the car?”

I think a better debate would be how we can engineer out these types of situations. There must be some driving techniques for getting out of them that would be available to a computer driver with computer reflexes.

Above all, driverless cars need to drive defensively and safely at all times. Safety should always take precedence over speed. Maybe one way of enforcing this would be a strict code under which car manufacturers, except in extreme circumstances, are responsible for any deaths or injuries caused by their cars.

8 Likes

Apparently there are no brakes on this homicidal car.

2 Likes

No, because in this case there is a disparity of expectation of risk. If you choose to get into a vehicle, I assert you should carry a greater burden of the risk associated with that.

1 Like

Clearly, in the context of the scenario posed, one might posit that not crossing is the better strategy for the pedestrian, but it isn’t hard to imagine a real-life situation, for any of these examples, in which the pedestrian acted on the best information they had yet the dilemma was triggered anyway (most plausibly by a faulty, speeding vehicle).

It’s interesting that in a world in which pretty much every substantial risk has been sanitised out of our lives (not a bad thing in itself) the most glaring one is now vehicles. To push your point, when does it become the responsibility of the vehicle (or driver) to not hit pedestrians? I mean, one can almost entirely de-risk against vehicles by never leaving one’s house, but that is problematic for many obvious reasons, and it seems rather unbalanced if the rationale is to mitigate the negative externalities of driving without there being any substantial positive externalities to compensate.

Pop-up bollards along greenlit crosswalks maybe, but many intersections would have to be completely redesigned so cars could continue to make turns. And they’re expensive, and they probably don’t take well to snow and ice. I don’t think no-brakes runaways are nearly common enough to warrant such a thing.

A pedestrians-only phase of the signal cycle would help (1: east-west auto traffic only; 2: north-south auto traffic only; 3: all pedestrians, no autos), but I’ve only seen that in the busiest downtowns of large cities. An additional benefit is that peds can cross diagonally.
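
The cycle itself is trivial to express (the phase labels and durations here are made-up examples, not any real signal spec):

```python
from itertools import cycle

# The three-phase signal cycle described above; durations are invented.
SIGNAL_PHASES = cycle([
    ("east-west autos only", 40),       # phase 1
    ("north-south autos only", 40),     # phase 2
    ("all pedestrians, no autos", 30),  # phase 3: diagonal crossing allowed
])
```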

2 Likes

Why just brakes? In the context of a human driver, there are many accidents between vehicles and pedestrians every day. There is an analogue in self-driving cars for most of the reasons humans cause accidents.

That’s the premise of this survey/study/test: that either the car’s brakes have failed or its throttle is stuck open. Those failures occur so infrequently on modern automobiles (barring human error of course) they’re essentially insignificant, but it’s still the engineers’ responsibility to account for all foreseeable corner cases.

2 Likes