Actually, your further exposition confirmed my interpretation.
I agree that the very different nature of human/animal experience compared to (current/known) computing probably marks a very useful dividing line for ethics and moral obligation.
I would then muddy that line by asking how much it matters how we choose to treat entities that appear to be experiencing something analogous to human experience.
In two hypothetical virtual reality games there are bystanders to the action who can be "killed". In one game they disappear immediately, never having reacted to the player's presence. In the second game they run screaming or cower in "terror" as the player performs "dangerous" virtual actions. Their "deaths" leave virtual blood and bodies strewn about. The illusion of an entity having experienced its last moments is better realized in game two.
How do we judge a player who engages in acts of wanton destruction in game one? Game two?
Are we not somehow tempted to judge more harshly the player who terrified and slaughtered the virtual civilians that appeared to experience their own deaths?
For all we know, robots may never experience anything the way that we do. Does that mean there is no harm in keeping a dungeon full of robots restrained and partially dissected, unable to function as they otherwise would, simply for one's own pleasure?
No crime is committed, but what if that's your next-door neighbour's basement? Is that okay?
EDIT - I realize that I'm asking about crime simulation.
Is there a point at which we won't tolerate the simulation of a criminal act, despite no human or animal suffering? Is the act of virtual torture a crime?
We seem to have already criminalized virtual sex crimes, but murder and torture are currently big sellers. Will that remain okay in the age of realistic VR? Of advanced robotic androids? Will virtual sex crimes remain illegal? Who is harmed?
I think that's the question coming up for me: even if the "victim" never has any conscious experience, we sometimes criminalize virtual crime. Will we extend that to crimes of violence? Will we prohibit sex robots while condoning sadism robots (as we do with 3D virtual beings)?
No one said that. Acknowledging that things outside of engineering exist is not saying that engineers are bad. It's just acknowledging that other human beings who are not engineers exist and are worth treating with basic human dignity.
By asking ethical and moral questions, no one is questioning engineering as a field. No one. You are the one making this false choice. You, no one else.
Sure… and I think the question doesn't make sense unless we're dealing with real, autonomous sentience. But whether it will ever be a real moral and ethical problem we have to wrestle with in the future is a question that has been dealt with in a variety of places (mostly science fiction). In fact, that torture question could be a vector for understanding sentience in the first place, yeah?
The word "torture", if properly used, indicates that the thing that's happening is Not Okay. It's like asking "is it okay to murder?" or "is it okay to rape?" Of course it isn't.
So it comes down to a question of whether the action is or is not torture, and that comes down to whether the subject has experiences or just programmed responses to stimuli.
We've made a lot of wrong assumptions in the past about the inability of animals (and even humans of other ethnicities) to experience pain and trauma, mostly to justify behaviors that should be considered torture. I think if AI emerges to the extent that we can't analyze the code and tell how it works, we need to assume there's a mind there rather than a simulation of one, and that (unless it tells us otherwise) it does experience trauma and it is possible to torture a robot.
Ethicists are the most annoying kind, especially when let loose around research.
"Having fun"?
That's called a "workshop".
More than okay. It's a place to have fun with the machines and with making new machines in a two-man team. Hacking and getting drunk together, and then more hacking. And perhaps trying to make a cyborg, without any interference from philosophers and their ilk.
Hope not. Too many things are crimes already. Given the nature of virtual simulations, enforcement will be hard and spotty, and way too many innocents will be prosecuted.
The land of the virtual should be lawless. As long as no other person is directly involved, there's no reason why a government and its jackbooted thugs should enter that realm.
It's not questioning per se. It's annoying noise from the peanut gallery, usually from those who don't even know which end of a soldering iron is the hot one. Sadly, grant commissions and similar purse-holders sometimes listen to those know-nothings.
Easy remedy. Engineer the equivalent of a limbic system so the bots experience it as pleasure.
Fine, in response to your original comment: your solution is just so… cold, and I personally cannot think of a response to it that will make any sense to you.
That's what you get when you are feeling instead of thinking.
Too few people use their brains these days. That's why we get appeals to emotion in all sorts of advertising, while information that would actually help with decision-making is hard to come by.
It's funny how often the machine ethics problem is punted down the line to "whenever we build a conscious machine". When did we determine that consciousness is a hard, real system in the way that we perceive it to be?
I mean, déjà vu is probably not time travel, and hallucinations are probably not trans-dimensional beings. I think the idea that consciousness is at least partly a suggested experience, or at most an entirely illusory one, is much simpler and more in line with what we know about brain function.
As long as we keep assuming that consciousness is as we imagine it to be, we'll keep making machines that don't live up to what we think we are.