Is it OK to torture a robot?

You could modify an existing gas blowback airsoft gun for this.

Does a dungeon full of furbys count in this case? Somehow this idea appeals to me :smiling_imp:

1 Like

Unlike the contents of our actual mind, the contents of our extended mind (hard drive) can be examined directly and in real time. This means that unlike a private fantasy, there can be witnesses. Unlike a conventional thought-crime, there can be empirical investigation.

Interesting to hear a libertarian-engineer view, thank you!

1 Like

Even if scientists proved conclusively that slugs cannot feel any pain, I wouldn't pour salt on them. I don't think I'd wanna be friends with any who did.

I don't feel the same way about Sims, but I can imagine that there will be an equivalent to Sims, with possible behaviour in that game that would give me a similar feeling (sorry Shaddack).

1 Like

Ok. The problem we need to formulate better includes subproblems like what the words good and evil mean. We have failed for thousands of years to answer these questions with anything like the precision needed for good engineering. Part of this is insufficient or misguided effort by philosophers, but without some decent consensus natural-language definition backed up by a whole lot of math, engineers don't stand much chance of doing any better.

Someday engineers will try to build a robot or other system capable of feeling. As you indicate, they will likely cut corners and not bother to solve the not-already-reduced-to-practice ethical subproblems, in order to get done first or faster. Neither of us knows if they'll succeed. But if they do, under such conditions, the odds are extraordinarily high that the result will be a disaster for all involved.

4 Likes

How about responding to what I said, instead of what others said? We are not one interchangeable mass. I have not responded to you with gifs. I have pointed out how your attitude is not helpful and you've ignored that, because I guess you think I'm beneath you.

3 Likes

What an emotional and irrational response. How do you know what others are thinking? Remember, you've told us that you can't read non-verbal cues, and even verbal cues need to be very precise without metaphoric language. So how do you know what others are thinking, and their motivations for what they say and do?

You're ASSuming.

3 Likes

It b0rked Firefox on my droid as well, though FF on Android seems to be very crashy anyway. Sadly, Chrome renders the BBS mobile style, which is awful.

A robot as we could create now is a machine. Its ability to mimic us, respond to our commands or desires, interact with us, and anything else we can program it to do is irrelevant. This discussion of a soul, at least IMO, is worthless as well (at least when applied to machines). If we created a robot that could "feel" and it elicited the same response to pain as a human, could it be tortured? It would make people uncomfortable, but I don't see it as torture. You can even go down a long and twisty road of self-modifying behavior if it was subjected to abuse/torture, just to mimic how a human might react. That is all still irrelevant in my mind. The robot can be rebuilt, reprogrammed, rebooted, recreated. In short, we are its god. We understand it down to the finest level: the hardware, the software, everything. There is no mystery, there is nothing "special" about it. There can be one, there can be a billion, all the same or all different; they are not unique.

Now move forward with technology. A few have pointed at this here, and the movie Automata did a very good job pointing this out. What if we design AI and it begins to learn, which is a reasonable assumption? At some point you are going to have AI, and hardware being built by AI, at a pace that is going to exceed our capacity to understand it. From the same perspective, if we as humans had the ability to control our senses and emotions, would torture matter? If I could heal like Deadpool and not feel pain, can I be tortured? I could be imprisoned or killed, but I'm not sure torture can apply if there is no effect on the "victim". So now a robot/android exists that is beyond our ability to understand. It can be hurt, but not repaired. At this point I would see it as a new life form. We would think the same thing now if an alien race of robotic or cybernetic creatures visited us.

And honestly, by this time I'm sure society would already have laws in place giving reasonably sophisticated AI/robots certain protections and rights, if for nothing more than to make people feel better about themselves. (Not everyone can be @Shaddack, which BTW I pretty much agree with, 'cause you know, Electrical Engineer here…) These new gen-2 AI robots might not even like us, or choose to live with us, or they might just try to kill us all. Fortunately, given our current tech, I'm not losing sleep over the upcoming AI revolt. I mean, Skynet or I, Robot might happen, but not individualist AI anarchy.

That is exactly the problem. The know-nothing feel-goods will make laws and criminalize us engineers, who, knowing better, will just ignore the laws, and a few of us will get caught here or there by a freak accident. And sentenced, to the applause of the plebes.

By inferring from what they are writing? By having served in tech support for years? By observing the culture and TV and ads? By reading about human-robot interaction? By tangentially studying public relations, and consulting for a friend who's a PR pro?

I explained to you that I dismissed the previous one because of the gifs in his answer. That's about it.

Part of engineering is triaging: knowing when to give up on a particular problem and go solve something that won't just sink resources without yielding an answer.

Of course, it works differently for philosophers. Unanswerable questions are a kind of job security. If the resource sink also pays your mortgage, you will be less inclined to give up on it.

That's the only way to get things done. Do it now and let the philosophers moan about it later while you're working on something else. It's more fun that way.

Apparently you don't want to befriend any gardener who has had to battle these slimies for a while. Salting a slug is said to be an emotionally rewarding experience, according to point 13 here:
How to Get Rid of Slugs in Your Garden or Yard | Gardens Alive!
Also, point 12 suggests impaling them on a spear and leaving some behind as a warning. Or point 17, a long toss into the road.

Have you ever had your beloved plants eaten before you could get to eat them yourself?

Gardening is war. Conventional, chemical and biological war.

Investigation of a victimless thought-crime? Do you really want to live in such a world? In a world where your private writing or a private 3D simulation can get you in trouble? Aren't computers essentially an extension of the user, as more and more information gets offloaded from one's brain to them?

A private fantasy can lead to witnesses even without computers; there's old-fashioned writing on paper, or sketching. None of that should lead to criminalization. The First Amendment has its reasons. The Fourth too. And given the role of computers as users' extensions, the Fifth as well.

Governments do not have ANY business rooting around in people's private thoughts, whether in-head only or committed to a medium, physical or digital.

I donā€™t think the recoil is strong enough to match the reality.

1 Like

That's why I study them and have fun thinking about how to bring knockoffs to the civilian market, if possible open-source. :smiley:

When I was a wee lad, pretty much all of my memories of my beloved grandad were of him giving me salt to go out into his garden and greenhouse to hunt and salt slugs.

It was an emotionally rewarding experience, and it remains so to this day.

2 Likes

You could use a better friend.

1 Like

Why? Occasionally playing a mercenary in the battle for minds is fun.

1 Like

My hypothetical slug salter was doing it for pleasure, not out of necessity. I'm also not against killing animals for food, but believe we can avoid causing unnecessary suffering in the process.

Regarding victimless crimes in the virtual arena, we already live in a world where such things can result in criminal prosecution. This proves that our legal system does not currently require a victim in order to prosecute a "crime". Modern IP law suggests that this is a growing trend.

So far virtual crimes of violence remain acceptable in mainstream entertainment. Will that change? We shall see.

You dismiss people so quickly that you often fail to parse their text effectively.

1 Like

You can have both at once. Saves time. And it is payback on behalf of the plants.

And it is wrong. Very wrong.

Hope not. Too much is criminalized already. Time to rein in the ban-happy do-gooders.

This works less well when the potential outcomes of "getting things done" without fully solved philosophical problems include things like "create a slave race whose members are regarded as having no legal rights despite counting as persons in every relevant moral sense," "create a highly intelligent system whose poorly defined goals turn out to be horribly underspecified, such that the system meets those goals in a way that results in human extinction or vast amounts of unnecessary suffering," or "create one or more sentient, sapient beings whose existences are utterly unbearable yet who are unable to do anything about it themselves, while we also cannot help, because turning them off would be murder and changing their code without consent would be a kind of assault."

The moral significance/responsibility of creating a machine that can feel is, if anything, greater than that of having children, since with human children we at least know the underlying mind design is compatible with the world we're bringing them into. If you don't care about that, that is your choice, but it is a monstrous position to take, and an utterly unforgivable one if you are ever in a position to act on it.

3 Likes

The thing is, while you recognise that there's an area of neurotypical experience that you do not share (i.e. animated GIFs of reaction shots) and respond reasonably by asking that people explain what they mean via another channel, you are outright dismissive of another neurotypical experience, "feelings", as being somehow inferior to "intellect".

If you want people to react to your lack of "feelings" with respect, maybe you should treat our empathic states the same way you treat our ability to interpret faces and gestures: as something we can do that you cannot. That, and recognise that it is as difficult to describe feelings to the unempathic as it is to describe a sunset to a blind person: more likely to be frustrating to both parties than illuminating.

3 Likes

Why do you keep making nasty, sweeping, bigoted, and false statements about others like this? It's coming from ignorance, yes, but you are intelligent enough to know better.

4 Likes

Well, compare the quality of decision-making based on hard data vs. that based on some precious feelies.

One word: Trump.

And all the pointless security kabuki we are doomed to suffer because the plebes fear a non-threat.

Except that it seems to be more of an advantage. If you design stuff by feelings, you end up with crap that won't work or at least won't last. It may look "pretty" but that is about all.

It is frustrating to live in a world that could be so much better if not for idiotic feelies being for some reason considered "important". Often more important than logic.

Historical experience?

People tend to applaud bad laws and procedures, and then applaud when somebody is caught by them. See prohibition, see marijuana, see the people escorted off a plane because somebody didn't like how they looked, see filesharers… you will always get a sizable contingent that approves loudly.

Except that philosophers are showing themselves to be utterly incapable of fully solving anything. If you want to get things done, which you do because life is too short to wait for the philosobabblers to find where their ass is even when you supply them with a programmable flashlight, better start cranking in the lab and ignore the hypotheticals.

Worst case, they will get something not-so-hypothetical to whine about.

If you are going to let the actions of a few represent the whole, consider what youā€™re representing on behalf of your own group.

2 Likes