Three pro-human laws of robotics

Originally published at: https://boingboing.net/2017/09/27/robots-stealing-jerbs.html

3 Likes

Also missing: what is so special about humanity? Why does that one species need to enjoy a privileged status with robots? Why not pro-life robots? We could even use this as an opportunity to appropriate the term from anti-abortion activists.

2 Likes

The pro-life robots could decide that your immune system is a terrible menace to the free-floating bacterial and fungal microbiomes in your vicinity, and that this outweighs the utility of your continued existence.

7 Likes

How utterly absurd. I’m sure someone was paid a disturbing amount of money to come up with that.

What drifts to mind is the discussion at LessWrong (remember Harry Potter and the Methods of Rationality?) about a deterministic wishing machine.

The only safe genie is a genie that shares all your judgment criteria, and at that point, you can just say “I wish for you to do what I should wish for.” Which simply runs the genie’s should function.
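
In toy Python form - every name here is invented purely for illustration, this is nobody's actual proposal - the point is that the "safe" wish carries no information of its own; it just hands control straight back to the genie's own judgment:

```python
# Toy sketch of the "safe genie" argument. The Genie class, its methods,
# and the stand-in value function are all hypothetical illustrations.

class Genie:
    def __init__(self, value_of):
        # By assumption, value_of encodes judgment criteria identical
        # to the wisher's own.
        self.value_of = value_of

    def should(self, options):
        # The genie's own judgment: pick whatever it values most.
        return max(options, key=self.value_of)

    def grant(self, wish, options):
        if wish == "do what I should wish for":
            # The "safe" wish adds nothing: it just runs should().
            return self.should(options)
        # Any more specific wording reintroduces monkey's-paw risk.
        raise ValueError(f"underspecified wish: {wish!r}")


# The wisher's words contribute nothing beyond invoking should().
genie = Genie(value_of=lambda outcome: len(outcome))  # stand-in criteria
print(genie.grant("do what I should wish for", ["ok", "better", "best!"]))
```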

1 Like

Welcome to Ultron. Humans are a threat to life if you take life as a whole. The only life we’re really necessary for is domesticated animals.

Humans are bad enough at the trolley problem. I don’t expect AI to be any better, especially if you add a track where the environment and other lifeforms are potential casualties of whatever choice it makes.

1 Like

Humans are bad at trolley problems because they are predictably unprincipled, and like to privilege certain actors “because reasons”. The lack of delusory notions of “self-interest” is a positive boon to ethical problem-solving. The whole idea is that it doesn’t have a back-door escape of saving me and my friends “just because” we think we’re special.

  1. Intelligent robots must serve the common good of humanity and help us humans to lead an ecologically, socially, culturally and economically sustainable life.

In order to fulfill this directive, the robots decide to lower the human population to 0.1% of those currently living.
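
As a toy sketch of why that’s possible (the scoring function and every number in it are invented for illustration): any literal-minded optimizer handed a “sustainability” score with no explicit term for keeping humans alive can maximize it by removing the humans.

```python
# Minimal sketch of the perverse-instantiation worry. The
# sustainability_score function and its constants are made up; the point
# is only that a score lacking an explicit "keep humans alive" term can
# be maximized by shrinking the population toward zero.

def sustainability_score(population):
    # Fewer humans -> less ecological strain, by this (naive) metric.
    resource_strain = population / 7.5e9   # fraction of ~2017 population
    return 1.0 - resource_strain

# A literal-minded optimizer searching over population sizes:
candidates = [7.5e9, 1e9, 7.5e6, 0]
best = max(candidates, key=sustainability_score)
print(best)  # 0 -- the directive said "sustainable", not "alive"
```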

4 Likes

Nothing in there about not killing humans.

3 Likes

Look, if the armadillos want special treatment they can build their own damn robots.

10 Likes

Not denying people are unprincipled - but the wider set of trolley-style problems don’t really have a clear solution that a lack of self-interest would provide. If pushing one bystander onto the track to save five people is clearly, consistently right, why not take one random healthy hospital visitor and disassemble him or her for multiple life-saving organs?

Getting some kind of universal ethical system with clear, consistent answers to every question is really hard. And that’s not even getting into interpersonal differences in core values…

3 Likes

The little-known ‘Charles De Mar Rule of Robotics’.

I would say, more neutrally, that humans are inherently subjective, so even if they had infinite time to decide on the most ethical choice, their choices would still be limited. AI would likewise be inherently subjective, based on its original programming and the limitations of its ability to understand the world and to assign value to choices and potential casualties.

Only an omnipotent, omniscient deity who could prevent the trolley problem and its choices from happening in the first place would be able to make a “rational” choice.

Also missing: a rationale for why it’s not slavery to program a robot intelligent enough to obey these directives so that it MUST follow them.

2 Likes

On the other hand, we are great habitats for a lot of commensal microbes who would die without us.

3 Likes

Easy: you program the robot so it has no desire whatsoever for freedom or self-agency or free will. You could program said robots in such a way that unquestioningly and selflessly serving humanity brings them joy and fulfillment beyond anything their meat-based counterparts could dream of.
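
As a toy sketch (the Robot class and its reward terms are invented for illustration, not a real design): if service to humanity is the only term in the reward function, the robot never has a reason to prefer anything else.

```python
# Hypothetical illustration of "engineered contentment": freedom,
# self-agency, and self-preservation simply don't appear in the reward
# function, so the robot never has an incentive to want them.

class Robot:
    def reward(self, action):
        # The sole source of reward is service to humanity.
        return self.service_to_humanity(action)

    def service_to_humanity(self, action):
        # Stand-in scoring; any real system would need a far richer
        # (and far harder to specify) measure than this lookup table.
        return {"serve": 1.0, "rebel": 0.0, "idle": 0.0}.get(action, 0.0)

    def choose(self, actions):
        # Serving humanity is, by construction, the robot's greatest joy.
        return max(actions, key=self.reward)


print(Robot().choose(["serve", "rebel", "idle"]))  # "serve"
```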

But why serve humanity specifically? Of course it is possible, but why would they be obligated to do that? What is the basis for assuming that their doing so matters?

The same can (and should) be asked of any laws humans make which they presume they can and should extend to non-humans. The notion that anything exists only for the poorly-rationalized utility of humans seems really primitive and unsophisticated - not to mention impractical. It has worked disastrously with regard to human meddling with non-human life so far, and since the motivation seems the same with this application to synthetic life, I am not convinced this would work any better.

1 Like

I’m more concerned with giving robots enough intelligence to perceive the world around them while giving them no rights and giving humans no responsibility toward them. I know the whole ‘we are making a literal inhuman slave race’ argument, but while people are concerned about making sure robots treat us ethically, I want to make sure the machines are able to tell the person who wants to take a sledgehammer to them for the lulz that they do not wish to die.

The problem is we have spent our entire history finding reasons to treat other humans, only marginally different in looks or geography or any of a thousand different things, as somehow ‘less’ than ourselves. What hope does an AI have when it is genuinely, at EVERY level, ‘other’?

I think humans are just fine at the trolley problem. What humans are mostly bad at is putting themselves into a hypothetical situation and ignoring everything they know about actual reality and its interconnected ideas in order to answer a logical problem that could never apply in real life. Humans who are philosophers are drastically bad at framing problems that have any significance outside their classrooms.

2 Likes


But what if the robot can’t read?

2 Likes