It is quite logical. A mass-produced, and therefore cheap, computer with capabilities and processing power exceeding what many desktops had not so long ago. Low-power, always-on, pocket-sized. With a tacked-on cellular interface that is no longer the main functionality. It’s our own self-blinding use of the word “phone” that keeps so many of us from recognizing the devices’ true nature.
My guess is that if you asked Dr. Hawking, he’d prefer it the other way.
I think it would be interesting if there were organic AI that was not some variation on human intelligence. I think a human experience is necessarily Earth-based, in terms of the feedback that people have evolved to appreciate. That we have taken an environment optimally suited, with the technology we have, to sustainably produce material abundance for all people, and continue to systematically undermine and destroy it while promoting scarcity, doesn’t bode well for creating a virtual utopia that a human intelligence would enjoy. To think otherwise is hubristic, in my opinion. I understand the thrill in imagining the possibilities and the thought experiments that go along with these concepts, but I just don’t think the reality would be satisfying for an Earth-based consciousness.
Speaking of which, my current body model seems to have lost manufacturer support too – in fact there doesn’t seem to be any certainty that anybody has been able to contact one.
What is certain is that the documentation is incomplete and confusing, we’ve resorted to work-arounds for a number of painful defects, and there are still complete shutdowns that technical support has no idea how to fix. System restores are not available.
But yeah, I guess being able to feel a breeze through a series of electrical impulses and chemical transmitters instead of a series of purely electrical impulses makes it all worthwhile, I don’t know.
I think the worst part would be having to associate only with a bunch of other technologically adept disembodied brains, with the rest of human culture missing or available only as memory. Or maybe that would be the best part. I don’t know.
Organic, inorganic, why does the difference matter? Use the platform that is easier to handle with available tools; the rest is fodder for philosophers so they have something to wank over while you work on the next generation (iteration?) of self. Amino-acid chains, polynucleotides, microstamped nanoparticles, photolithography, off-the-shelf chips on circuit boards: pick what you have on hand and work forward from there.
As for Earth-based consciousness… why just Earth when there is an entire solar system out there? Getting rid of bodies that require gravity, cannot cope with even slightly elevated radiation Up There, need oxygen all the time, and cannot handle deviations from a narrow range of environmental temperatures, and upgrading to something more suitable for the conditions of deep space or the surfaces of asteroids and other planetary bodies, seems a more logical approach than clumsy spacesuits and habitats and fragile ships. Granted, it’s a good idea to start with those since the tech is already here, but use it as a stepping stone for moving on off this stupid rock.
…and not just the solar system. With the resources of the whole solar system, and without the human limitations in both environmental survivability and cognition, the sky is no longer the limit.
Then the possible ways out of the solar system open up. Whether the classical, mass-accelerated plodding toward the nearest stars (playing more on the environmental hardening), or some sort of breakthrough in space-time physics (playing more on the cognitive enhancements).
A thought… Could a body that is localized in several instances (parts) at distances where the speed of light already shows its slowness maintain a common consciousness as “one”? Or would it have to break up into several individual entities, possibly with periodic synchronization of data?
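For a sense of how slow light really is at these scales, here is a quick back-of-the-envelope calculation (the distances are rounded reference figures, and the pairings chosen are just illustrative):

```python
# One-way light delay over solar-system distances: a rough sense of why
# a body split across such distances could not think in lockstep.
C = 299_792_458  # speed of light in vacuum, m/s

distances_km = {
    "Earth to Moon": 384_400,
    "Earth to Mars (close approach)": 54_600_000,
    "Sun to Jupiter": 778_000_000,
}

for name, km in distances_km.items():
    delay_s = km * 1000 / C
    print(f"{name}: {delay_s:.1f} s one way")
```

Even the Earth–Moon hop costs over a second each way, and Mars at its closest is minutes away; any shared consciousness spanning such gaps would be reduced to slow synchronization rather than unified thought.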
By organic I meant something independently derived rather than mimicking or uploading a human brain or ‘consciousness,’ not building materials. If the platform were purely electronic, there is no real need to be anywhere in particular, but I guess just sort of empty space with plenty of sunlight for electricity would probably suffice. Spares and building new parts would require resources at some point, though, I guess. I think the main place we diverge is that I think of the Earth as my mother, and you think of it as a stupid rock.
I’m rather sick and tired of the old, boring story of scientists messing with Things Man Was Not Meant To Know. It was already pretty dull and implausible when Mary Shelley wrote it for the first time with Frankenstein, and hasn’t gotten any better with age. Meanwhile the world has improved a heck of a lot since then – thanks to those pesky scientists and their curiosity.
spoiler alert!!
I think you’ve got me backwards. If Hawking, for all his disability, can have a satisfying and fulfilling life, then so could a human mind instantiated in a computer, even if for some reason it lacked a mechanical body. It might not be the greatest, but it’s a damn sight better than being dead.
Cancer. Schizophrenia. Necrotizing fasciitis. Minor injuries that never quite heal right and bother you for the rest of your life. Senility. Toothache.
Pain. Acute, chronic, stabbing, aching, burning. Mildly irritating or agonizing. Never-ending, insomnifying, soul-destroying, twisting everything you do or feel.
And, always, death, slowly descending like the hammer of inevitability, mind and body slowly rotting from the moment you’re born.
You mention data loss, and random freezes, and routine operations failing for no obvious reason. Our bodies already do those things to us every day.
I’m sure that the first posthumans will run into all kinds of unanticipated problems. Some of them will be horrific. I see no reason to assume they will be more horrific than the ones we already deal with every minute of every day, the horrors so familiar and commonplace that we come to think of them as normal.
Ponder this: Octopi are extremely intelligent. Possibly the most intelligent invertebrates. They seem to have self-awareness, and can pass the mirror test. They don’t have a single brain, but multiple networked ganglia. The “main brain” of an octopus handles things like executive function and vision processing, but doesn’t know how to control its legs. It sends messages to the ganglia in the legs basically saying “I wanna go over there and grab that,” and the ganglia in the legs do the work of motor coordination and such.
This is possibly relevant in that it shows a rudimentary example of a distributed set of “minds” working together in the same animal to achieve a goal.
In humans, there’s also the phenomenon of reflexes. When you touch a hot stove and reflexively pull your hand back, it’s your spinal cord doing the thinking separately from the brain. The spinal cord gets the pain signal and starts pulling your hand back before it even transmits that signal to your brain. And then your brain sorts out the perceptions later. That’s distributed “thinking,” in a way, isn’t it?
I’m sure there are more examples.
I’m a bit of a control freak, so I’d probably not choose to spawn copies of myself. I think I’d rather write agents that are about as intelligent as bugs and have them basically act as autonomous sensors, instead of distributing my cognitive abilities across any significant distance.
That’s exactly what led me to the question at first.
For reflexes, I would call it “data processing”, as “thinking” is a bit too involved for this level of quick simple reactions.
As for spawning copies, they may not be wholly copies. Think in terms of growing an organ (which may or may not be physically attached to you) specific to a task, and then absorbing it back or disposing of it when it’s no longer needed. Kind of like a constructor/destructor call on a software object. Loss of any such organ/agent is then inconsequential, as a new one can be grown/copied/spawned/fork()ed at will.
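The constructor/destructor analogy could be sketched loosely in code. This is a toy Python illustration; the class names and the `spawn`/`absorb` steps are invented for the example, not anything established in the thread:

```python
import copy

class Agent:
    """A disposable, task-specific 'organ' spawned from a parent mind."""
    def __init__(self, parent_state, task):
        # "Constructor": clone only the state the task needs.
        self.state = copy.deepcopy(parent_state)
        self.task = task
        self.result = None

    def run(self):
        # Stand-in for whatever work the agent does autonomously.
        self.result = self.task(self.state)
        return self.result

class Mind:
    def __init__(self):
        self.state = {"memories": []}

    def spawn(self, task):
        # fork()-like: create an agent seeded from the current state.
        return Agent(self.state, task)

    def absorb(self, agent):
        # Merge the agent's result back; afterwards the agent can be
        # discarded ("destructor") -- losing it is inconsequential.
        self.state["memories"].append(agent.result)

me = Mind()
scout = me.spawn(lambda state: "sensor reading")
scout.run()
me.absorb(scout)
del scout  # disposal; a new agent can be spawned at will
```

Because the agent only carries a copy of the state it was seeded with, nothing is lost when it is destroyed, which is the point of the organ analogy.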
Yet another film where pre-emptive murder can be justified. Yeesh!
The movie “Transcendence,” starring Johnny Depp, is a let-down, but the bestselling novel “The Transhumanist Wager,” another new story with futurist and transhumanist themes, is shocking people with its originality and revolutionary nerve. It’s a much better story about the near future, and more controversial too.
Yeah. Makes sense. I just kind of get the creeps when I think about forking instances of myself.
I can’t trust myself even today, when I’m just one guy with one body. Imagine the unease at copies of myself that have their own autonomy. What if they don’t want to delta back together with “me”? What if something happens to one of them that changes them so much they can’t delta back?
There’d have to be a way where either the original me can maintain “admin rights” on my identity, or the other option: I’d have to suck it up and just tolerate that if I spawn multiple instances of myself, there’s a chance that they’ll team up and do things I don’t like up to and including deleting me, or subsuming my identity.
So I suppose I’d probably stick with only forking off lobotomized copies, or separating out different cognitive pieces that can’t really get by on their own, and therefore need to come back to me.
… so, did you like it or not?
As a former AI researcher and developer I can point out a few things: the plausibility of the first premise (strong AI on ordinary computers) is so low that it doesn’t even begin to beg any questions. We indeed need a quantum leap (hehe) before we can start talking about any of this being close to doable.
Next, I quite like the concept of AI; however, I’ve never really seen a movie do it much justice. There’s nothing in the premise that says it always has to turn out bad; that’s just Hollywood looking for a villain, not a real exploration of the idea. In some ways, I think “Short Circuit” got it pretty right. Enjoy.
Also, strong AI will be nothing like a human consciousness, and there is no way to “capture a human brain” in any way, shape, or form. We can simulate fractions of neural nets, but we still don’t even know how such a system actually works. We’ve got little fragments of hints, but the biggest hint so far is that nothing in the brain is of a binary nature, nor is any of it separable from its complexity in a fragmented (and hence storable) fashion. We hear too often about memories and imagine they bear some relation to the RAM of computers. There is none. Nix. Nada. Zilch. Ingenting. Zip. Nothing.
With quantum computing we’re hobbling a few feet forward, but that is more about getting more storage and processing for your buck, and has nothing to do with, say, organic computing (which I suspect might give us better answers).
Which, of course, I wish someone in Hollywood would understand. Until then, popcorn and misleading nonsense for entertainment all around! Yay!
If it stars Johnny Depp it will be more of a cartoon than a movie.
JADP…
The military types dragged in WWII mortars and artillery because there are no electronics in them. No rocket launchers.
I’d rather watch Caprica.