Should your self-driving car be programmed to kill you if it means saving more strangers?

Sometimes my humor is dark. :slight_smile: That was of course a morbid joke… I mainly found it humorous that you added cute to the equation.

Imagine the horror if it was wealthy private school kids from a school whose name ends in the word “Academy”!

If it were me as a human driving, I’d personally prioritize a bunch of kids over my own safety. I think I am biologically wired that way; that, and I have children of my own, so nature has imbued me with a “protective parent mode” which kicks in automatically around any kids in danger.


That’s what I meant by “online”. I assume the car needs a network connection occasionally to work properly, and failing a checksum could get you banned from the road (and possibly arrested).

A checksum can be faked. Just run two controllers: one that actually runs the car, and another that thinks it is running the car (possibly in a virtual environment, some simple simulation, a dream paradise for a controller) and supplies the checksums. Or just hack the checksum-providing subsystem too. You may need a crypto key sunk somewhere in the silicon, but even that may be accessible via side channels, e.g. differential power analysis.
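To make that concrete, here is a minimal sketch (the firmware bytes and key are hypothetical, invented for illustration): an unkeyed checksum like CRC32 can be reproduced by anyone who has the data, including a spoofed "dreaming" controller, whereas a keyed MAC can only be forged by extracting the secret key, which is exactly why it would have to live sunk in the silicon.

```python
import binascii
import hashlib
import hmac

firmware = b"car_controller_firmware_v1"  # hypothetical data being attested

# Plain checksum: any controller holding the bytes can compute it,
# so a second "dreaming" controller forges it trivially.
reported = binascii.crc32(firmware)
forged = binascii.crc32(firmware)
assert reported == forged  # indistinguishable from the real thing

# Keyed MAC: computing a matching tag requires the secret key,
# so spoofing means extracting the key (e.g. via side channels).
silicon_key = b"key-fused-into-the-chip"  # hypothetical fused secret
genuine_tag = hmac.new(silicon_key, firmware, hashlib.sha256).hexdigest()
attacker_tag = hmac.new(b"wrong-guess", firmware, hashlib.sha256).hexdigest()
assert genuine_tag != attacker_tag  # a key guess does not verify
```

The design point is the one the post makes: a bare checksum authenticates nothing, and even a keyed scheme only moves the attack from faking the value to recovering the key.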

…Or tear it right out of the silicon itself. Another thing that needs to be in every, or at least every other, garage is an electron microscope with a focused ion beam… …and we’re back to my earlier call for an affordable system for vacuum hacking…

I’m sure this is upthread somewhere, but the best answer to this question is not to ask.

Autonomous cars will work best when all road traffic is autonomous; until then, anyone who trusts their autonomous car completely is a nimrod and largely at fault.

Srsly, until all people are off the road (as drivers), the accidents that occur are their responsibility. So the effective answer is: if an autonomous car is in a no-win situation and the occupant fails to override in any manner, then the car should self-sacrifice and let the doctors/lawyers sort it out after the fact.

As for the rich car vs. poor car, remember it’s a no-win situation. If a rich guy fails to override and his car is programmed to kill others to save him, is that so different from what he’d do if he did override? Because if it is different, he would override; and if it isn’t, he could go either way. Whether he runs over the poor people or his AI runs over the poor people, it’s all good: he can afford the settlements.

Same for the poor guy, except his only option is to override in order to take out others to save himself. His AI ain’t gonna do it, because he couldn’t spring for that option; he gets jailed/sued into oblivion if he does it, and is dead or maimed if he doesn’t. A difference without distinction, then, in adding an autonomous car to the mix?

But if all road traffic is autonomous without exception, then sure, maybe fog/ice/moose will still cause some road carnage, but the question posed won’t come up much in a refined, interconnected system.

Why would the car be programmed to kill me when it could just dial 911?


Uh?

If you are paying for the car it should be your proxy, that would mean it should look after your welfare in a reactive manner.


What if it’s a Zipcar, and the pedestrian is also a Zipcar member?

At that moment the pedestrian is not the service user, and is therefore exempt.


This could be helpful with collections…

This topic was automatically closed after 5 days. New replies are no longer allowed.