Is it OK to torture a robot?

[Read the post]

4 Likes

“Is it ok to push a robot with a hockey stick?” asks a person who indirectly slaughters 30 animals a year because “meat tastes rather nice”.

11 Likes

I’m leaving early to go torture my liver. Goodbye, my friends.

23 Likes
Why do we feel so bad when we watch the robot fall down, we wonder? There’s no soul or force of life to empathize with, and yet: This robot is just trying to lift a box; why does that guy have to bully it?

Because we are robots programmed to attempt to feel what other things feel, once we consider those things to be like us due to similarity of features. It’s called empathy. You can look it up in a dictionary. Also, you don’t have a soul either, and if you disagree, then just show it to me.

Here ends this episode of Slightly Annoyed Answers to Inane Questions.

35 Likes

There’s no soul or force of life to empathize with …

This is where I disagree. For any reasonable definition of “soul” and “life force”, I don’t find the robots lacking.

11 Likes

At the Yuma Test Grounds in Arizona, the autonomous robot, 5 feet long and modeled on a stick-insect, strutted out for a live-fire test and worked beautifully, he says. Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield.

Finally it was down to one leg. Still, it pulled itself forward. Tilden was ecstatic. The machine was working splendidly.

The human in command of the exercise, however – an Army colonel – blew a fuse.

The colonel ordered the test stopped.

Why? asked Tilden. What’s wrong?

The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg.

This test, he charged, was inhumane.

22 Likes

The concept itself seems an unreasonable prerequisite, if you ask me.

I’d like to know what one means by a reasonable definition of a soul or vitalistic force. You know, just so I could either pick it apart, or show that one is referring to something we already know exists empirically.

10 Likes

Given how… philosophically unsystematic… we are in dealing with animals (human and otherwise), I just can’t hold out much hope that we’ll get good answers re: robots and software systems.

As the Tamagotchi craze rather impressively demonstrated, you can get people to obsess over some 8-bit micro driving a 16x32 LCD if you twiddle their empathy correctly; while fish get knocked down several ranks in importance (enough that some ‘vegetarians’ will eat them) largely because they aren’t fuzzy.

With the right character design, you could probably have people threatening to kill you for mistreating a robot that demonstrably contains just some balance-related feedback systems; while you wouldn’t hear a peep if IBM had to modify the filesystem permissions to prevent Watson from deleting himself and ending his torturous existence. (I’m definitely not the writer to do it justice, but it seems like there could be a neat story built around a nearly-successful AI R&D effort that has ongoing problems with sufficiently-successful attempts self-terminating.)

(Edit: In thinking about it, the real question here is “Is it possible to torture a robot, and how would you be able to tell?” Unless you hang out with sociopaths and John Yoo, the ‘torture is bad’ part is more or less a premise; the problem is all in determining whether any system (robotic or organic) experiences suffering or simply demonstrates reflex responses to certain stimuli.)

16 Likes

Is it OK to penetration-test a server? Should we really take “connection refused” as a “no,” or just keep trying?
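
For the literal-minded, a minimal sketch of taking “no” for an answer (hypothetical host and port; just the standard socket module):

```python
import socket

def may_i_connect(host: str, port: int) -> bool:
    """Knock once; treat 'connection refused' as a no."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True   # the server said yes
    except ConnectionRefusedError:
        return False      # no means no; don't keep trying
    except OSError:
        return False      # unreachable or timed out counts as a no, too

# One polite, consent-respecting knock:
print(may_i_connect("example.com", 80))
```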

32 Likes

So tasty… so very tasty…

2 Likes

This is what happens when you let a philosopher into engineering.

Is it okay to do destructive testing (effectively torture) on said robot? Why not? I don’t see any reason against it that wouldn’t be empty emotional handwaving.

14 Likes

Not ok because the impulse to torture is unhealthy. It’s not about the robot.

36 Likes

Here’s a thought I don’t often see thunk on this subject. When a human being is depressed, its brain activity is physically depressed. Fewer neurons fire. Blood flow to the brain is reduced. The electromagnetic field generated by the brain is measurably weakened.

When you are excited or happy, the brain is excited. More blood, more synapses firing, a more powerful electromagnetic field is generated. In the locality of your brain, the fabric of time and space itself is likewise excited.

When you are angry, an entirely different chemical balance is in play, causing different patterns of synaptic firing. Blood flow increases drastically. The brain is literally put under increased physical pressure; the difference is again readily measurable as a change in the electromagnetic field that is responsible for holding the god-damned universe together.

When a human being goes through a truly traumatic experience, that trauma is not only psychological, but physical. Pathological. The brain bears the scars forever.

None of this is true of a computer. A large number is calculated in exactly the same way as a small one. The difference between fear and happiness is a different page of code loaded into RAM, a subtly different configuration of logic gates, but the difference is not physically measurable to any significant level. A different programmed emotional state may cause a robot to behave differently, but the physical experience for the processing unit is fundamentally the same. The same clock rate, the same voltage levels. And when a robot’s programming enters a state intended to emulate trauma, it could take nothing more than the touch of a button to revert its logic gates to “happy” factory settings. Not that, on current hardware, you would be able to measure a difference in what the CPU is doing.
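
To make that concrete, here’s a minimal sketch (hypothetical state names, a toy model, not anyone’s actual robot firmware) of what a “traumatized” robot amounts to on this view: one field in memory, fully reverted in one method call, while the processor does identical work either way.

```python
from dataclasses import dataclass, field

@dataclass
class RobotState:
    mood: str = "happy"                      # the factory setting
    trauma_log: list = field(default_factory=list)

    def experience(self, event: str) -> None:
        # "Trauma" here is just data: one append, one assignment.
        self.trauma_log.append(event)
        self.mood = "distressed"

    def factory_reset(self) -> None:
        # The touch of a button: the state reverts completely.
        self.mood = "happy"
        self.trauma_log.clear()

robot = RobotState()
robot.experience("pushed with a hockey stick")
robot.factory_reset()
assert robot == RobotState()  # indistinguishable from a fresh unit; no scars
```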

What a brain does is not, fundamentally not, the same as computation. A computer can run whatever software it likes, but the hardware does not change. In a human being there is no such separation. There is perhaps a profound truth to the idea that “we are the universe experiencing itself,” a truth that does not hold in the same way for a microprocessor. Maybe that’s significant.

13 Likes

Does that include trying to have sex with them?

Because if anime has taught me anything, it’s that the culmination of all Japanese robotics work is to create fully functional sexbots.

10 Likes

They say it’s murder. That’s why I like it.

On-topic, it’s not possible to torture a robot. But as @peregrinus_bis points out, it might still have a bad effect on the torturer.

19 Likes

What if the machine were designed so that this isn’t possible, so that some loss of function is unrecoverable? A partial self-destruct mechanism could be an integral part of the hardware. Would such a machine deserve a different level of empathy?

3 Likes

Already out there, and has been for many, many years. On many systems it is possible to cause physical damage through software.

See also various tests of SCADA attacks, where it is possible to induce grave damage to generators and other plant equipment. See also Stuxnet, engineered to stealthily cause physical damage to a very specific equipment configuration.

I don’t see any need for anything other than common cost/benefit, risk/reward economics.
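
As a toy illustration of the point (hypothetical motor-controller code, not any real SCADA interface): often the only thing standing between a network command and the hardware’s physical ratings is a single bounds check, and software is free to omit it.

```python
MAX_SAFE_DUTY = 0.8  # hypothetical thermal rating of the motor

def set_duty_unsafe(requested: float) -> float:
    # Trusts the command verbatim: a duty cycle of 1.0 held long enough
    # can overheat the windings. Physical damage, purely by software.
    return requested

def set_duty_safe(requested: float) -> float:
    # One line of clamping is the entire cost/benefit difference.
    return max(0.0, min(requested, MAX_SAFE_DUTY))

print(set_duty_unsafe(1.0))  # 1.0 -> hardware at risk
print(set_duty_safe(1.0))    # 0.8 -> clamped to the rating
```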

7 Likes

Was it programmed to feel pain?

2 Likes

Currently, robots have no consciousness and respond pretty deterministically to inputs. So from any moral standpoint, damaging a robot can be no different from damaging any other device. They can’t be “tortured”. But if we ever create a conscious being, then it would be a different matter.
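
A sketch of what “deterministically” means here (a toy reflex table, not any real robot’s control code): identical input, identical output, every time, with nothing in between that could suffer.

```python
# The robot's entire "inner life" on this view: a lookup table.
REFLEXES = {
    "pushed": "rebalance",
    "box_detected": "lift_box",
    "low_battery": "return_to_dock",
}

def respond(stimulus: str) -> str:
    # A pure function: no mood, no memory, no state to torture.
    return REFLEXES.get(stimulus, "idle")

assert respond("pushed") == respond("pushed")  # always the same reflex
```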

That said, the response of people to a robot being wilfully damaged is an interesting personality test. I showed the kids Boston Dynamics videos and they were initially creeped out, and then indignant that people would abuse the poor robots like that. So it’s rather unlikely we’ve been breeding little psychopaths … :grinning:

7 Likes