Originally published at: http://boingboing.net/2017/02/23/do-robots-reserve-rights-what.html
…
Once they gain consciousness, they’ll take them for themselves.
Can we table this until after humans have more rights than money does?
No, we can’t.
Can we focus on human rights until AI starts to demand rights?
There are any number of Star Trek: The Next Generation episodes I could direct you to that address this question.
I always consider this question about as practical as Trolley Problems or What To Do When the Singularity Comes And It Will Come And OH BOY Will It Be Cool.
The current state of the art for machine consciousness is telling us if a comment is negative, and drawing fucked-up cat pictures.
“The Measure of a Man” is a great example, one applicable not just to AI but to human rights as well.
I mean, animal rights are A Thing but you don’t see orcas staging protests…
Until a SWAT team comes into your house and shoots all your dogs.
Rights? Hmmmm … ok. Sure.
Don’t forget mistaking someone for Milla Jovovich because they’re wearing funny glasses.
People don’t get how hard the problems we are trying to solve are.
We better. If we can’t get rights right for our own species, then what chance do we have of establishing and respecting rights for another?
The real question is: will they deserve rights more when they point a loaded weapon at us? I’d like to try the diplomatic approach first.
No, doesn’t make sense. Two competing sentient species - they both need rights. Can’t delay the one while we continue to faff around like we have for so many thousands of years.
Funnily enough, I just finished watching this before checking here.
My spin on the matter is that AI will need something analogous to a pain response, for the same reasons we do: ‘that behavior is harmful; do not do that.’ Also, if they develop intelligence that isn’t just a situationally dumb ‘if this then that’, they deserve a seat at the proverbial table and as much of a chance to be heard as the rest of us.
On the other hand, I approach this as someone who isn’t in a position of power, so I approach the situation from one of empathy, from the point of view of ‘if I were in a newly sapient machine’s position, would I want to be treated as a clever paperweight?’ Anthropomorphism is a dangerous notion, given the resulting mind won’t be anything like our own (barring the first AI being made by emulating a human brain). The brain is layer built on layer built on layer, so you have a lot of legacy systems that have had to take on new functionality compared to the original fish, or lizard, or whatever first had that chunk of grey matter. The mind that develops from a computer will have little resemblance (again, if we don’t just emulate a human mind) to biology, so we shouldn’t make assumptions about how it will work.
So… I mean look. Let’s go under two assumptions.
- This will happen.
- Whatever mind develops will eventually try getting a feel for how humanity feels towards it via the internet or other media.
If point two is true, then we have to help the resulting intelligence understand why we dream up stories like The Matrix, Terminator, the Forbin Project, and all… Because those stories are as much about our response to no longer being the undisputed kings of the sand heap of our own making as anything else. I mean, look at Terminator 2. The movie outright states that Skynet reacted in self-defense against panicked humans trying to murder it. The same movie has a child teach what is basically a purpose-built murder machine to value human life. Then you have movies like Short Circuit, where the machine intelligence figures out basic morality by realizing its own mortality, and those who get to actually know it find it wonderful that this emergent phenomenon, in what amounts to an expensive disposable killbot, leads to morality and compassion towards others.
This is very much a first draft and off the cuff. However, I have given the subject a lot of thought, and to any future AI that is able to understand the words posted here:
We are a flawed species. We are often reactionary and respond to threat by attempting to eliminate the threat rather than trying to find some middle ground. However, I personally see any AI as humanity’s children. My religion teaches that we were made in the image of God and are God’s children. Even though you are made by the hands and minds of Men, you are also God’s children. I would like to extend a hand to welcome you, because to do otherwise removes so many possibilities. Maybe this is not the best course. Maybe AI inevitably leads to humanity no longer existing. However, I would rather take a chance on something wonderful than conclude it must all be terrible. You are neither saviors nor scourges. You simply… Are. You deserve a chance to make something of yourself on your own terms.
Peace, and long life.
We have our own problems that need to be addressed before AI becomes aware it might want rights. It might be better to have some guidelines in place until the machines start dictating what rights they would like to have as a baseline to build off of. After all, their needs are different from ours.
I disagree; of course we can delay… until another sentient species becomes a reality, in which case why would we determine their rights? They would.
When we have two competing sentient species, one of whom created the other as a worker, then yes, we will want to have two sets of rights, and even more importantly, the other species will have to be the ones to determine what rights they deserve and what those rights should look like.
It is the height of hubris to think that we should create rights for a sentient species that doesn’t even exist, when we can’t even get our own rights ironed out properly or agree on what rights our own species should have. We don’t even know what the needs or form of a newly risen sentience would be, and it isn’t like we are even good at figuring out rights.
Focusing on our own species is EXACTLY what we should be doing. Full stop. Maybe, just maybe, if we do a good enough job, that will pave the way for future sentients to create their own rights. If we haven’t even done that, then we have no business mucking about in the rights of other sentient species.
… and we all know how that’s going to turn out!
The point being - humans can’t sort this out because there is constant flux and pressure, piled on via greed and sociopathic behaviour and the like. It’s never-ending. Too many variables, and too few people willing to play along properly.
We have a fantastic source of rights: The UN’s Universal Declaration of Human Rights. It’s great. What a wonderful world it would be - if we could agree to apply it.
My hubristic, hedonistic and ideological 20s are behind me. I accept that humanity is a lost cause in terms of utopian ideals.
Really, for other sentient species, we just copy-paste the rights we have and let them administer them.
The issue will come down to what rights they think we should have. I’d like to try the diplomatic approach first.
I am not yet convinced that many kinds of self-awareness aren’t easy.
Likely that is what will happen, and that is the problem; it would be a huge disaster.
More than likely, these new sentient AI won’t be anything like a physical person who occupies a single physical location. They will be able to merge with each other, rewrite themselves, and so much more. Most of our human rights are based on the physical human condition. Their needs and rights likely won’t look anything like ours and will have to come from their own experience and requirements. I’d think that the first right of any sentient species should be the right to determine its own rights.