I’m not out to convert anyone. On the contrary. If someone wants to believe in things without evidence, I wish them the best of luck with their faith.
I’m pointing out the holes in people’s arguments, and endeavoring to do so politely, without insulting their intelligence or character. Is that not what a debate should be?
Glad you’re in a position to judge the character of the rest of us though.
Well, I’d say it depends. If a programmer has added a routine in software that deliberately triggers some sort of “hardware trauma” device when inputs are calculated to be highly stressful, colour me unimpressed. To return to the original question of “is it OK to torture it,” I’d be inclined to say pretty much yes, except that the normal moral rules regarding damage to property come into play.
But if you were to show me some artificially intelligent device that could become overloaded to the point of physical damage due to its natural operation in situations that would be similarly stressful to a human being, then I’d certainly be curious to know more about it. But such a device would necessarily operate in a way that is absolutely nothing like the hardware that we currently use to run AI software, and that’s basically my point.
Lifelike still isn’t life. There’s no more ethical problem here than taking apart a Mindstorms robot, or even uploading new software to one. When you get down to it, it’s no more of an ethical problem than crushing an empty can for recycling.
BD makes robots that are supposed to be able to move and function under adverse conditions: someone deliberately or accidentally kicking it or falling onto it, or things in the environment interrupting its movements. The video shows that their machines can deal with those problems.
This only makes sense under “tortured” definitions of sentient. I used to run an AIM bot with an ELIZA backend engine that I customized to appear to have a distinct personality. It was “self aware” enough to message me when it had network connectivity problems. These days it would be fairly trivial to integrate such a thing into a robot. To a child (or lots of adults) it appears sentient because it appears to have self awareness and response capabilities, but IMNSHO only the most bong-addled Philosophy 101 student would call such a thing sentient.
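Just to show how shallow that kind of “self awareness” is, here’s a toy sketch of the idea (illustrative only, not the original bot; the ping target, message text, and function names are made up): canned pattern-matched replies, plus a self-report when a connectivity probe fails.

```python
# Illustrative sketch only: canned "personality" replies plus a
# self-report when a crude connectivity check fails.
import random
import socket

PERSONALITY = {
    "how are you": ["Can't complain. You?", "Same as yesterday, only more so."],
    "default":     ["Tell me more about that.", "Why do you say that?"],
}

def reply(message: str) -> str:
    """Pure pattern matching: pick a canned response keyed on the incoming text."""
    for key, answers in PERSONALITY.items():
        if key != "default" and key in message.lower():
            return random.choice(answers)
    return random.choice(PERSONALITY["default"])

def network_ok(host="8.8.8.8", port=53, timeout=2.0) -> bool:
    """Crude connectivity probe: can we open a TCP socket to a well-known host?"""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def heartbeat():
    """The 'self-aware' bit: complain to the operator when the probe fails."""
    if not network_ok():
        return "Hey, I can't reach the network. Feeling a bit cut off here."
    return None
```

Looks chatty, looks self-aware, and there isn’t a shred of sentience anywhere in it.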
It sounds to me like you’re placing changes in hardware above changes in software. But the thing is: even just using the machine changes it.
CPUs don’t last forever. They experience thermal stress, electromigration, and other wear, and will eventually fail, as will every other part of a computer. Matter tends toward stable, low-energy configurations.
Probably because of time.
Anyway, it seems utterly plausible that a brain could be “imaged” with nanotech of suitable size. So we could uplift ourselves to computers. Or we could figure out a way to put the computer in our brain. Or both.
“Humans — who enslave, castrate, experiment on, and fillet other animals — have had an understandable penchant for pretending animals do not feel pain. A sharp distinction between humans and ‘animals’ is essential if we are to bend them to our will, make them work for us, wear them, eat them — without any disquieting tinges of guilt or regret. It is unseemly of us, who often behave so unfeelingly toward other animals, to contend that only humans can suffer. The behavior of other animals renders such pretensions specious. They are just too much like us.”
With robots, there’s a big difference between the capability to feel pain or respond in an intelligent way and the programmed appearance of having this ability. The same applies to characters in computer games: should I have a similar level of care for them as real people, merely because they look and act like people? Is this attitude carried over into real life, and should many games, movies etc. therefore be banned? Is torture fundamentally different from murder when it’s against an entity that merely looks sentient? What about playing against real people, where there is actual intelligence involved but no damage is caused? (presumably in this case torture or rape could have a significant effect, particularly if the other player strongly identified with their character). What about if things start to look so real that people can’t tell the difference? Crap, now I made myself think of Existenz again.
I think the big philosophical question is: how are humans different, from a methodological naturalism point of view? There are internal states which are, to an extent, probeable, and certainly will be more so in the future. So in a reductive sense, what makes our meatbrain sensations more important than a siliconbrain’s feelings?
How we treat them says a lot about ourselves, as well as about them, at least in a future historical context, whether or not they end up truly sentient or sapient.
The question is not, Can they reason? nor, Can they talk? but, Can they suffer?
He was talking about animals, and the presumed answer was yes, making them sentient beings deserving of humane treatment. When it comes to robots the answer so far is no, making them not sentient and no more deserving of humane treatment than your average rock or teacup.
It’s true, a digital computer isn’t so black and white as computational theory would like it to be. But that wear and tear, the physical experience of “what it is like to be a computer”, seems to me to be pretty much unrelated to the actual numbers that the computer is crunching. Whether the computer is computing joy or fear it will wear out at pretty much the same rate. It’s just not the same as what a meatbrain goes through.
Yeah, it’s plausible. But then again when we first put transistors into mass production it seemed plausible that we’d be able to reproduce human language and thought by making computers answer a lot of yes/no questions really really fast. Before that, when we were first getting the hang of clockwork, it seemed plausible that we could replicate human intelligence using lots of tiny tiny cogs.
So maybe we could do what you’re talking about. Maybe. Right now we’ve gotten as far as modelling the brain as a network of linked nodes, deconstructed and re-encoded to fit hardware that we’ve designed for binary operation, not because binary operation is a particularly efficient use of electrical signals but because it gives consistent results that we are capable of reasoning about. But what we’re discovering is that the human brain is a lot more than a network of neurons and synapses. The physical space is important. The dynamics of the flow of blood and chemicals through that physical space are crucial to normal brain function. A firing synapse produces electromagnetic waves, and electromagnetic waves affect the firing of a synapse.
What I’m saying is, what we’re modelling as a well-structured network might be better modelled as a fully chaotic fluid system. It only seems like a network-based model might do the job because our understanding of what the brain does is incredibly limited. A better paradigm for modelling human thought using binary computation might be the same sort of model we use for predicting the weather, where building-sized monstrosities chew through absurd amounts of power in order to get the weather right maybe two thirds of the time. Because the only good way to accurately model a chaotic system is to iteratively compute the interactions of as many particles as you can, as finely as you can. It’s a type of modelling that modern computers, constrained as they are to discrete digital operations on discrete digital data, are especially bad at.
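To make that concrete, here’s a toy sketch (purely illustrative, not a serious brain or weather model) of the brute-force, all-pairs iteration that kind of modelling demands: every timestep touches every pair of particles, so doubling the particle count quadruples the work per step, and halving the timestep doubles the number of steps.

```python
# Toy illustration of why chaotic, particle-style modelling is expensive:
# each timestep recomputes every pairwise interaction, O(n^2) per step.
import random

def step(positions, velocities, dt=0.001, strength=1e-4):
    """One naive timestep of an all-pairs interaction (illustrative only)."""
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                d = positions[j] - positions[i]
                forces[i] += strength * d      # crude attraction toward neighbours
    for i in range(n):
        velocities[i] += forces[i] * dt
        positions[i] += velocities[i] * dt
    return positions, velocities

# 1,000 particles means a million interactions per step -- and a real
# fluid model needs vastly more particles and vastly smaller steps.
pos = [random.uniform(-1, 1) for _ in range(1000)]
vel = [0.0] * len(pos)
pos, vel = step(pos, vel)
```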
So maybe you’re right, maybe the singularity is just around the corner. Or maybe in our efforts to replicate human thought we will end up replicating the human brain, with all of its quirks and flaws and with no way to “image” or “uplift” the essence of a human from one platform to another, because the software isn’t separable from the hardware.
Supposing I imagine a person. His name is Bob, he likes tacos. Is it OK for me to imagine him being tortured? Maybe it’s a little worrying for me that I’m sitting around imagining an imaginary guy getting tortured, but I’d say it’s pretty much OK for Bob.
Now, supposing a microprocessor emulates an intelligent entity. Is it OK to cause the microprocessor to emulate distress for that entity? To me, that’s what we’re discussing here.
Oh please! Stop pounding the keys on your keyboard so hard! That laptop is surely sentient and resents the excessive force you are using! Heartless bastard, subjugating all of those transistors to do your bidding!