It also takes a financial and organizational structure that can keep paying the bills over that time, without someone down the line deciding they could realize real cost savings by deleting that item from the budget to maximize executive and shareholder benefits.
Or will it? The way I resolve this paradox is to accept that the perception of “me” is a metacognitive illusion, created by the existence of causal continuity between brain states, and that there is no real “self.” It seems to me this paradox makes clear that a physical basis for consciousness is just not compatible with our intuitive understanding of consciousness and identity, and that superficial introspection creates a concept of “self” that is fundamentally flawed.
Gonna plug Greg Egan’s Permutation City here - this novel deals with a lot of the same ideas.
I mean it will be you in a metaphysical sense, since it will be a person that has all the traits that make up you (your personality, your memories, your idiosyncrasies). But you will still die, even if there is another you that will look back and believe that it has never died.
That’s the problem with defining it through continuity. Moving backwards through time from the perspective of computer-you, there’s an unbroken chain of continuity. But looking forward through time from your current perspective there’s a continuity that forks off, but physical old you has to take the other path that ends in death.
But, full disclosure, I haven’t read the story you’re linking of course.
I think we might be talking about slightly different things though. You are asking whether the computer persona is going to be you, while I’m interested in whether a person becomes immortal through uploading their consciousness. And I come down on the view that, for all intents and purposes, the upload is you, but that you are still going to die.
I’m very well aware of Parfit’s work, and I’m glad his philosophical work existed to ask these questions, but he wasn’t a physicist, and if he were, he’d hopefully understand that it isn’t actually a paradox except in contrast to our incorrect intuitions. Because identical particles and systems of particles are literally indistinguishable even in principle, there is no copy; there are two originals. If this bothers you (and it should, it’s a huge departure from traditional thinking), the problem exists in you, not in the world.
I mentioned souls because in the absence of something like a soul, there’s no thing-in-the-world for your use of the word “you” to refer to. There’s no possible thing-in-the-world that persists over even a small instant of time that defines you that wouldn’t also be shared by the product of Parfit’s teleporter.
Uploading a mind into a digital substrate is a harder question, because we haven’t clarified exactly what information said upload includes. Just a neural connectome? Instantaneous info on the firings and the levels of neurotransmitters in each synapse at the moment of uploading? Internal structure of each neuron (now that we know neurons perform some level of internal computational operations)? State of every atom and molecule in the whole brain or body? But if you’re saying there is no level of fidelity for physical simulation that would count as “you,” then you’re either postulating a soul, postulating new physics, or speaking physical nonsense.
Good, then we’re in agreement. I’m glad @Elmer and I were able to point out that this also implies that you always have died, and always will die, in this way, every instant you’ve ever existed or will exist.
We are still talking past each other. I am not interested in whether a digital or physical copy of me is considered “me” by an outside observer. In fact I believe it should be. The copy would also consider itself “me,” because it would be me. The problem is that it’s not the same me as the me that’s currently writing this reply. That me will live on and eventually die, and that me’s consciousness will experience death while another consciousness lives on: a consciousness that will remember writing this comment, and that will therefore be me, but that will not have died and will therefore not be the me that is doomed to die.
Imagine I was a brilliant scientist and conversationalist, and all of society agreed that it would be great if I was immortal. So they upload my consciousness into a computer with perfect fidelity. Meanwhile I live on to a great age and eventually die. From the perspective of society I am immortal. They can go and talk to me, and I will continue to be an asset to science and the art of dinner conversation. The papers and conversation guides I publish from inside a computer will be published under my name, and rightly so.
Yet I’m also dead. The consciousness writing this comment in the body typing this comment has died. From my perspective on my deathbed I am not immortal even though to everyone else in the world I am.
That’s the question I am interested in. Not “can a copy of a consciousness be considered the same consciousness?” but rather “is there a way uploading me into a computer can defeat death?” And the answer is no. It can make a version of me immortal, but I will still die.
I’m not sure to what extent we’re talking past each other, actually. At this point I think we’ve established that we don’t disagree on any important factual matter, only on what those facts mean. I find your concept of personal identity confusing, most likely because I don’t know your thinking well enough to know what went into it. I’ve tried to lay out what I think: I am a pattern of information encoded in my brain and body, and will remain me so long as there is a clear causal chain linking the current me, moving forward in time (whatever time turns out to be), to a future pattern that preserves features I consider essential, and no I don’t have firm definitional boundaries around what counts as essential.
If I created a digital scan of your whole brain state (or whole body) right now, and then you each went on living separate lives, you would eventually die, and it would not. I think it would be wrong to consider that other being to be the same as the one who died, no matter how many memories you and they share.
How do you feel about an upload conducted after you die (or, if you care to define such a thing, at the moment of death or as close as possible)? Then it shares all the memories you will have ever had, and goes on to form new ones after you’ve stopped doing so. In that scenario the moment of your death is also the moment of divergence. Assume for the sake of a cleaner example that the “scanning” process is destructive, so that there is always exactly one instance of the pattern of information and energy your brain encodes. At that specific moment of uploading, is it you? If it is stored inactive on a disk, is that you? When it starts being simulated, is it you? If not, why not? What specifically does it lack, that you had?
Like I said, I don’t have a clear sense of what specific features need to be preserved to make me, me, so I don’t expect you to, either. I’m only asserting that whatever those features are, they physically exist encoded somewhere in my body, and so a sufficiently precise copy or simulation of my body will also be me at the moment of copying (and thereafter diverge); similarly, two copies or simulations are both each other and me until they diverge as a result of different things happening to them.
That was kind of a problem for the world in Torchwood: Miracle Day.
Not as problematic as Bill Pullman’s performance and the failed attempt to Americanize the series. But problematic nonetheless.
I’m glad you brought up the “I Have No Mouth” reference. Because that is the first thing that comes to mind with the term “friendly AI”.
… I’m surprised no one has brought up the last three/four books from Schlock Mercenary, which uses this trope rather well.
(link starts at book 17, A Little Immortality. Danger, DEEP DEEP archives spanning 20 some years with near daily updates throughout.)
Pondering Dyson spheres… would a civilisation that could build one, i.e. had not only the technology but also the political and economic system to do it, actually want to build one?
Extra Librarian points for you!!
*especially as I didn’t have to google that
Social pressure toward keeping up with the Kardashevs practically demands that you build a Dyson sphere…
There is a game called SOMA largely based around the existential horror of this. The main character gets copied a few times, and the game messes with you by showing each perspective at different points: the one that lives on, and the one left behind to die in the situation the copy escapes.
One NPC keeps calling the one that lives the winner of the coin flip because it feels like a gamble even though it’s really not.
The exact thing that you describe happens in the Amazon original show Upload. When someone is about to die, their head is vaporized by a scanner so that their consciousness can live on in a computer simulation. There is no consciousness gap, because you have to get your head vaporized while still alive. The protagonist has possibly survivable injuries but is goaded into signing a release form by his fiancée, who is worried that he will suffer natural death (which is irretrievable). It’s a comedy.
Isn’t the tricky thing about this, though, that for the word “illusion” to have any meaning in the standard sense, it requires a subjective experiencer? It’s a bit like saying “words are an illusion - they’re really just patterns of ink on a page, vibrations in air, etc.” It’s a short step into a bottomless well of recursion, philosophically speaking (a metaphor which may itself have some resemblance to certain notions of aspects of consciousness). Even descriptions of brain states that treat consciousness as a sort of self-perpetuating feedback pattern (which seems pretty compelling, based on technological examples where we can observe sensitive and dynamic yet robust complex patterns appear on an organizational layer above the physical infrastructure) … even these types of models go approximately zero distance toward addressing the ineffable question of why it feels like it does to be a “self,” and what the implications of the aforementioned thought experiments are. Panpsychism might be one of the more fleshed-out philosophical attempts to grapple with this issue on a natural, non-dualist basis, by proposing that a sense of self is simply what matter feels like when it gets complex enough to do so.
I like a close cousin of panpsychism: cosmopsychism. As I understand it, it’s the idea that the universe is what’s conscious. Put another way, the universe is the thing that has the capacity to subjectively experience conscious creatures, rather than conscious creatures being the things “experiencing” their own selves subjectively. The universe is experiencing us. It experiences any object within itself that passes some threshold on the gradient of consciousness. Maybe there’s some sort of consciousness “field” that permeates all of reality, that conscious creatures brush up against, interact with.
As with panpsychism, cosmopsychism solves lots of the baffling questions of continuous identity that come with cloning, mind uploading, etc. The questions themselves turn out to be nonsensical. E.g., if you use a Star Trek transporter to teleport (that is, destroy yourself at the starting transporter, and then have a perfect clone of yourself reconstructed at the destination transporter), is that clone picking up the thread of your continuous subjective experience? The answer is: N/A; “you” were never experiencing yourself in the first place. Instead, the universe was experiencing your mind and body. And after the teleport, it’s experiencing a perfect clone of your mind and body, continuing to go about its business. It’s as if the universe was watching a movie file, and halfway through, the file was deleted but replaced with an identical copy, and the universe continued watching that instead.
I’m not so sure. If there’s one thing that brain/mind studies have shown us, it’s that intuition and introspection utterly fail as a guide to understanding what’s actually going on. It could be that an understanding of what “consciousness” is, or whether it’s even a thing, is forever beyond our reach. (Hell, we can’t even adequately describe what it means to be conscious, or what “subjectivity” actually is. If we can’t define a problem, how do we expect to solve it?)
All of this is to say that I think this question falls into the same category as questions like “is a photon a wave or a particle?” in that it’s something that evolution did not cognitively equip us to understand intuitively, and any answer based on such reasoning is nonsensical. Only in this case it’s even worse, because there’s no way at all to address the question experimentally. I’m not entirely convinced there’s any “there” there.
Yep, of all the approaches to the question of consciousness, this is the one I suspect is closest to the truth. Or some variation, as @barneyrubble discussed.
I see your teletransportation paradox, and I raise you Trig’s Broom:
There’s an interesting cultural analog to this in e.g. Japan, where castles that have burned and been rebuilt multiple times are still held to be the same castle as the one that existed hundreds of years ago. The question of identity in this case is more about form than it is about actual physical stuff. I dig it.
And of course, there’s the Ship of Theseus story, which is basically the same issue in a different form.
Yes, I’ve watched it and enjoyed it. Glad they’re making a second season.