Here goes… Under Article 1 of the UNCAT (United Nations Convention Against Torture and other Cruel, Inhuman or Degrading Treatment or Punishment), via irct.org, it’s impossible to torture anything BD has created, as none of them can “feel pain”.
Also, we need to be very clear on whether “pain/suffering” is experienced or simulated, and whether there is a difference. I would suggest that difference is significant and fundamental. A humanoid robot/automaton is the most likely variant to induce empathy, so with that in mind, I offer the following thought query:
- If it simulates "severe pain and/or suffering" reactions to input X – does any input value for X constitute torture?
- If it experiences "severe pain and/or suffering" reactions to input X – does any input value for X constitute torture?
If “yes” for either of the above, is there a difference between the two? What are the boundaries of X? What if X == a smile, a blue sky, or simply saying the word “supercalifragilisticexpialidocious”? Is it torture if I know saying “supercalifragilisticexpialidocious” will cause a “severe pain or suffering” simulation? Would it be different if I didn’t know what the result of that word cue was?
Intent is important when considering torture & AI because, with people, it can reasonably be argued that anyone would know, beforehand, that if they swing a hard wooden object at high speed (F = ma) and direct the point of impact at another human’s kneecap, the result will be “severe pain or suffering”. Yet for AI, the values for X will be different – like with BD’s robots, you could whack their knees with bats all day long and they won’t feel an ounce of pain and/or suffering, even though some might infer such is happening. It has to actually be happening to be torture, doesn’t it? If so, is the simulation of something really real?
If simulation is sufficient, then what about actors who simulate pain responses? Should those involved in the Milgram experiment be convicted of torture because the recipients of their actions simulated pain responses? Of course not. They have to actually feel it. Simulators, no matter how sophisticated they are, cannot feel.
A “robot” even with “low level” AI – collision detection, simple interaction capabilities (ie: conversational), simple decision trees (ie: self-directing) – would be simulating everything back to humans in a way that humans can interpret and in a way that humans have instructed it to. That’s likely going to be called the first “artificial intelligence” we humans interact with, but it is fundamentally no different from a current, though very, very fancy, industrial robot.
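To make the simulation point concrete, here’s a toy sketch (Python; every name and canned response is hypothetical, invented for illustration). The whole “pain reaction” is just a lookup table humans wrote – no matter what input X you feed it, there is no inner experience anywhere in this code:

```python
# Toy "low level" AI: it *simulates* pain reactions to input X.
# Every output is a scripted lookup that humans programmed in;
# nothing in here feels anything.

PAIN_RESPONSES = {
    "bat_to_knee": "AAARGH! *clutches knee*",
    "supercalifragilisticexpialidocious": "*writhes* please stop!",
}

def react(input_x: str) -> str:
    """Return the scripted reaction to input X, or a neutral default."""
    return PAIN_RESPONSES.get(input_x, "*no reaction*")

print(react("bat_to_knee"))     # scripted "pain"
print(react("a blue sky"))      # no entry, so no reaction
```

Whether X is a bat to the knee or a nonsense word is entirely up to whoever filled in the table – which is the point: the “suffering” lives in the programmer’s script, not in the machine.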
Now, if it were emulation – where we loaded an actual human consciousness into an artificial body – that’s day/night different.
IMHO, true Artificial Intelligence – that is, artificial life, mature enough to independently experience anything & everything (not just pain or pleasure) independent of what humans have told it to simulate – will be the first third-generation artificial intelligence: one designed by a second- (or subsequent-) generation artificial intelligence whose own predecessor was designed w/out human intervention.
That’s where I’d place the true singularity, and we’re a long way from that. This form of life may be so different from humans that it chooses not to even interact with us – maybe in much the same way that we don’t really interact with bacteria.
BTW, anyone watch The 100? That last episode has a lot of bearing on this entire concept.
@Mongrove & @Ratel (@LDoBe) is this “soul” of which you speak not unlike the Aether, which some were convinced was the fifth element, and which made sense even to the intellectual luminaries of their day, like Plato and Aristotle – even Newton?
Question: If no one told you there was such a thing as a soul, would you know there was supposed to be one? Is it self-discoverable? Or is it just putting someone else’s artificial boundaries on Descartes’ famous summation: je pense, donc je suis (“I think, therefore I am”)?
@Mangochin Re: ArToo & Golden Rod: (AFAIK) R2 has never had his memory wiped, which is really what makes that astromech so darn unique: his experiences span a significant amount of time, and while all astromechs are designed to learn and adapt, most just never last as long nor experience as much as that one did. Not only that, but his masters (their word) solicited and encouraged independent thinking from him (both Jr/Sr Skywalker and Bail Organa did this) – so one could say that he was groomed to be more than the sum of his parts. BTW, anyone know what he was up to before “we” ran into him on Naboo?
GR, comparatively, got his mind wiped on a pretty regular basis during the Clone Wars – so he never even had the opportunity to “learn & grow”.
Edit: lost the last sentence about 3PO when I CTRL+X->V’d it into boing. My bad.