"Is it ok to push a robot with a hockey stick?" asks a person, who indirectly slaughters 30 animals a year because "meat tastes rather nice".
I'm leaving early to go torture my liver. Adios mi Amigos/Amigas.
Why do we feel so bad when we watch the robot fall down, we wonder? There's no soul or force of life to empathize with, and yet: this robot is just trying to lift a box, why does that guy have to bully it?
Because we are robots programmed to attempt to feel what other things feel, when similarity of features makes us consider them to be like us. It's called empathy. You can look it up in a dictionary. Also, you don't have a soul either, and if you disagree, then just show it to me.
Here ends this episode of Slightly Annoyed Answers to Inane Questions.
There's no soul or force of life to empathize with …
This is where I disagree. For any reasonable definition of "soul" and "life force", I don't find the robots lacking.
At the Yuma Test Grounds in Arizona, the autonomous robot, 5 feet long and modeled on a stick-insect, strutted out for a live-fire test and worked beautifully, he says. Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield.
Finally it was down to one leg. Still, it pulled itself forward. Tilden was ecstatic. The machine was working splendidly.
The human in command of the exercise, however - an Army colonel - blew a fuse.
The colonel ordered the test stopped.
Why? asked Tilden. Whatâs wrong?
The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg.
This test, he charged, was inhumane.
The concept seems unreasonable as a prerequisite if you ask me.
I'd like to know what one means by a reasonable definition of a soul or vitalistic force. You know, just so I could either pick it apart, or show that one is referring to something we already know exists empirically.
Given how…philosophically unsystematic…we are in dealing with animals (human and otherwise), I just can't hold out much hope that we'll get good answers RE: robots and software systems.
As the Tamagotchi craze rather impressively demonstrated, you can get people to obsess over some 8-bit micro driving a 16x32 LCD if you twiddle their empathy correctly; while fish get knocked down several ranks in importance (enough that some "vegetarians" will eat them) largely because they aren't fuzzy.
With the right character design, you could probably have people threatening to kill you for mistreating a robot that demonstrably contains just some balance-related feedback systems; while you wouldn't hear a peep if IBM had to modify the filesystem permissions to prevent Watson from deleting himself and ending his torturous existence. (I'm definitely not the writer to do it justice, but it seems like there could be a neat story built around a nearly-successful AI R&D effort that has ongoing problems with sufficiently-successful attempts self-terminating.)
(Edit: In thinking about it, the real question here is "Is it possible to torture a robot, and how would you be able to tell?" Unless you hang out with sociopaths and John Yoo, the "torture is bad" part is more or less a premise; the problem is all in determining whether any system (robotic or organic) experiences suffering or simply demonstrates reflex responses to certain stimuli.)
Is it OK to penetration-test a server? Should we really take "connection refused" as a "no", or just keep trying?
So tasty… so very tasty…
This is what happens when you let a philosopher into engineering.
Is it okay to do destructive testing (effectively torture) on said robot? Why not? I don't see any reason against it that wouldn't be some empty emotional handwaving.
Not ok, because the impulse to torture is unhealthy. It's not about the robot.
Here's a thought I don't often see thunk on this subject. When a human being is depressed, its brain activity is physically depressed. There are fewer neurons firing. Blood flow to the brain is reduced. The electromagnetic field generated by the brain is measurably weakened.
When you are excited or happy, the brain is excited. More blood, more synapses firing, a more powerful electromagnetic field is generated. In the locality of your brain, the fabric of time and space itself is likewise excited.
When you are angry, an entirely different chemical balance is in play, causing different patterns of synaptic firing. Blood flow increases drastically. The brain is literally put under increased physical pressure, the difference is again readily measurable as a change in the electromagnetic field that is responsible for holding the god-damned universe together.
When a human being goes through a truly traumatic experience, that trauma is not only psychological, but physical. Pathological. The brain bears the scars forever.
None of this is true of a computer. A large number is calculated in exactly the same way as a small one. The difference between fear and happiness is a different code page loaded into RAM, a subtly different configuration of logic gates, but the difference is not physically measurable to any significant level. A different programmed emotional state may cause a robot to behave differently, but the physical experience for the processing unit is fundamentally the same. The same clock rate, the same voltage levels. And when a robot's programming enters a state intended to emulate trauma, it could take nothing more than the touch of a button to revert its logic gates to "happy" factory settings. Not that, on current hardware, you would be able to measure a difference in what the CPU is doing.
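To make that concrete, here's a minimal sketch (all class, field and event names are hypothetical, not any real robot's firmware) of what "emulated trauma" amounts to on current hardware: a few values in memory that can be updated and then reverted with a single reset, with nothing physically different about the processor before or after.

```python
# A minimal sketch (hypothetical names throughout): a robot's programmed
# "emotional state" is just data, so reverting "trauma" to factory defaults is
# a couple of assignments -- nothing about the hardware running this changes.

from dataclasses import dataclass, field

@dataclass
class RobotAffect:
    """Emulated emotional state: bytes in RAM, not physiology."""
    mood: str = "happy"
    stress: float = 0.0
    trauma_log: list = field(default_factory=list)

    def experience(self, event: str, severity: float) -> None:
        # "Trauma" here is an ordinary update to a couple of variables.
        self.stress += severity
        self.trauma_log.append(event)
        if self.stress > 1.0:
            self.mood = "distressed"

    def factory_reset(self) -> None:
        # The "touch of a button": distressed and happy differ only in stored
        # values, so undoing the former is trivial and leaves no scar.
        self.mood = "happy"
        self.stress = 0.0
        self.trauma_log.clear()

robot = RobotAffect()
robot.experience("pushed with hockey stick", severity=1.5)
print(robot.mood)   # distressed
robot.factory_reset()
print(robot.mood)   # happy -- same clock rate, same voltages, as noted above
```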
What a brain does is not, fundamentally not, the same as computation. A computer can run whatever software it likes but the hardware does not change. In a human being there is no such separation. There is perhaps a profound truth to the idea that "we are the universe experiencing itself," a truth that does not hold in the same way for a microprocessor. Maybe that's significant.
Does that include trying to have sex with them?
Because if anime has taught me anything, it's that the culmination of all Japanese robotics work is to create fully functional sexbots.
They say it's murder. That's why I like it.
On-topic, it's not possible to torture a robot. But as @peregrinus_bis points out, it might still have a bad effect on the torturer.
What about if the machine were designed so this isn't possible, with the possibility of unrecoverable loss of function? A partial self-destruct mechanism could be an integral part of the hardware. Does such a machine deserve a different level of empathy?
Already out there, for many, many years. It is possible on many systems to cause physical damage via software.
See also various tests of SCADA attacks where it is possible to induce grave damage on generators and other plant equipment. See also Stuxnet, engineered to cause physical damage to a very specific equipment configuration in a stealthy way.
I don't see any need for anything other than common cost/benefit, risk/reward economics.
Was it programmed to feel pain?
Currently robots have no consciousness and respond pretty deterministically to inputs. So from any moral standpoint, damaging a robot can be no different to damaging any other device. They can't be "tortured". But if we ever create a conscious being, then it would be a different matter.
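For illustration, a toy sketch (the event and response names are invented) of what "responds pretty deterministically to inputs" means: the same stimulus always maps to the same reaction, with nothing resembling an experience in between.

```python
# A toy sketch (invented event and response names): the robot's reaction to
# being damaged is a fixed lookup -- same input, same output, every time.

RESPONSES = {
    "impact_detected": "rebalance",
    "limb_lost": "redistribute_gait",
    "low_battery": "return_to_dock",
}

def react(sensor_event: str) -> str:
    # Deterministic stimulus-response: behaviour, but nothing that is "felt".
    return RESPONSES.get(sensor_event, "no_op")

assert react("impact_detected") == "rebalance"
assert react("impact_detected") == "rebalance"  # identical on every call
```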
That said, the response of people to a robot being wilfully damaged is an interesting personality test. I showed the kids Boston Dynamics videos and they were initially creeped out, and then indignant that people would abuse the poor robots like that. So it's rather unlikely we've been breeding little psychopaths …