Generally I see it less as ‘does the target experience pain?’ and more ‘how is this affecting the mindset of the person performing the act?’
After all, wasps don’t have emotion (as we understand the concept), yet it’s generally seen as cruel to yank the wings off them. Ditto with tapping on the glass of a fish tank.
It says more about the person doing the thing than the thing getting hurt, i.e. ‘a person is willing to be systematically cruel for cruelty’s sake.’
Note I’m not talking about stress testing, TTDing a piece of hardware, the fate of many purpose-grown lab mice, and the like. You CAN have people who work with lab mice disguise their desire to inflict pain as doing science, but again, that speaks more to the person than to the victim.
Is it OK to torture a thing? Certainly not, because ‘you’ are going out with the intent to harm. It doesn’t matter whether the target experiences emotional distress or not.
My point is that the law shouldn’t be about people’s states of mind other than whether an action was deliberate or not. If there’s no victim, it shouldn’t be against the law. When the law starts trying to parse people’s states of mind for their own good, you wind up with totally fucked up laws like those that say I can’t do any drugs that I want to. The law should be about actual harm to a victim (who is not also the perpetrator), not about moral judgements about other people’s state of mind.
That said, as a personal matter, you (or I) are perfectly free to pass any kind of moral judgement we want about people’s states of mind, and I would agree that that one is pretty fucked-up.
Also, at some point in the future, I would quite likely be willing to consider an argument that a robot can be sentient and thus qualify as an actual victim, but if you’re going down that road, the robot is going to need other legal rights, too.
All the evidence we have is that the brain is the mind. Damage the brain, damage the mind.
Strictly speaking, all this allows us to claim is that the brain is a necessary condition (i.e. that you need a brain to have a mind). It doesn’t show that it’s sufficient (i.e. that the brain is the mind). Still, I don’t have any more plausible hypothesis.
I would venture to say that argument by obfuscated assertion, which I think philosophy largely is, is not equivalent to math.
Your want to fill a perceived void does not make a “soul” a real or proven thing, no more so than the want of a friend makes a child’s “invisible friend” a real and separate human being.
Is it ok for a robot to torture a human who has been torturing it?
Is it ok to torture a robot that has been specifically designed and constructed with no self-awareness, no physical ability to feel pain, and no programmed code to define and process pain?
Is it ok to design and construct such a robot?
Is it ok to design and construct robots that will save the lives of “our” army’s soldiers by making war and bloodshed more feasible/palatable if not winnable?
What measures do humans currently employ to gauge a robot’s sentience? Are those measures sufficient? Are robot designers and builders required [somehow] to measure robotic sentience and task the robot accordingly? Does it even matter (just because you can, does it mean you should)?
and lastly, though y’all probably saw this coming…
Is it ok for a human to torture another human who is deemed [by some dang authority, insert slippery slope parameters here] a "clear and present danger", or who isn’t forthcoming with life-saving information or plots that will destroy life and/or property, etc. [insert more slippery slope parameters here]?
What makes us human?
What costs are we willing to pay to retain our humanity?
IRL robots, a complicated relationship right out of the gate IMO:
What are the rights of the people in this picture? If, in the future, the demented elderly are not able to feel and think to the same degree as some robot, what happens to the rights of both parties?
It’s times like these that make me really really miss Terence McKenna.
He was always an insightful navigator of such things, including the matters discussed in this thread.
Is the erasability of their experience sufficient to remove moral obligations?
We don’t have to worry about how we treat them unless they function in such a manner that their “subjective experience” of our treatment leaves “scars”?
Genuinely curious where this line of reasoning leads, as I’m uncomfortable with the future of robotics as a guilt-free slavery 2.0.