You think I am resorting to functionalism because you assume I also believe that qualia are inexplicable. I’m not talking about things functionally equivalent to consciousness, I’m talking about literal consciousness. We’ve done brain scans of people watching movies and created blurry versions of those movies from the brain scans. A few decades from now we’ll literally be able to grab a copy of what you’re seeing from the visual processing center in your brain and put it on a screen. At some point we will be able to directly observe consciousness operating in other people; researchers have already found an off-switch for it.
I’ve heard this said a lot of different ways. The Dalai Lama said that no brain scan could ever explain the feeling of seeing blue. Zizek wondered how, out of a dumb flat universe that just is, something like perception could come to be. But the answer is that the idea that consciousness and perception are not things the universe can just do is no different from our ancient relatives’ idea that lightning wasn’t something the universe could do. We don’t need thunder gods and we don’t need qualia gods either. The experiences we are having in our heads are just something the dumb flat universe does.
That doesn’t mean a box that can fool me into thinking it is a person is conscious. The Turing test is very flawed. The Chinese room is an example of a situation where you would be fooled into thinking something was conscious. It’s a pair of red glasses making me perceive a white wall as red. You can’t define anything merely by its function because there is inevitably something else that could have the same function. A computer would be conscious if it was running an actual consciousness program. But right now we don’t even know what that would be.
So our idea that consciousness is some miraculous thing is also just speculation. For all we know tons of things are conscious. Consciousness could be a property you have an amount of rather than an on-off switch and a small amount of consciousness could be present in lots of things and systems. When we know what consciousness is, we’ll be able to say with more certainty.
No, I don’t assume you believe qualia are inexplicable (you seem pretty comfortable with a materialist account of consciousness), I assert they are not accounted for by functionalism, and I suggest you are asserting functionalism as a complete account of consciousness.
I’m suggesting that the conditions required for consciousness are probably not as simple as “any system that can achieve functional equivalence with consciousness”.
You are making some assumptions about my motivations here.*
ETA: So this is the comment you made earlier that as far as I am aware is forming the basis of our current discussion:
Our brains can be simulated with sufficient computing power by any Turing complete machine.
I took that comment to imply you would see such a machine as conscious. Reading what you just wrote, if you agree we do not currently have a sufficient understanding of exactly what consciousness is, I don’t really see how you can offer this as an example of how consciousness could easily exist in disparate universes.
That’s an idea I have a lot of time for.
*ETA: I think you’ve got me pegged as a batshit crazy flat-Earth creationist who is just trying to mince my way through the argument before concluding with something like “and that’s why evolution is a lie and abortion is murder! Praise baby Jesus!” That’s really not where I’m at, I’m just fascinated by this threshold of our knowledge and I enjoy testing my own understanding of it against other people’s.
This seems far more likely given how biology typically operates. There may be a tipping point of self-organization where a quantitative change becomes a qualitative difference, but that qualitative difference may not be the entirety of what we experience as consciousness; it may be just one aspect of it, and so our experience of consciousness may include more than one thing.
BTW, just as a data point on the consciousness discussion.
The general gist of neuroscience research over the last few decades has added some interesting data to these debates (although, apart from the Churchlands, very few philosophers pay any attention to it).
A variety of findings suggest that, despite human intuition on the subject, consciousness is not in any way the most important function of the brain. Nor is consciousness “in charge” of what the brain does.
The current understanding as to the nature of brain/mind/consciousness from a neuroscience POV goes roughly like this:
Most mental processes, including decision making, are largely hidden from consciousness.
The idea that “consciousness makes a decision, then the rest of the brain implements it” is wrong. Instead, it’s “the unconscious mind makes a decision, then the consciousness invents a post-hoc rationale for why that decision was made”.
The best guess as to why it works that way goes like this:
A very important factor in human survival and reproduction is the ability to predict the behaviour of other humans. This requires the ability to make a mental model of their behaviour and simulate it in hypothetical situations. There is and has been an immensely strong selective pressure in favour of that ability.
Whether or not those people are truly free-willed conscious agents in the deep philosophical sense, modelling them as such is a useful heuristic that achieves accurate-enough predictions in a usable timeframe.
Eventually, this “model-others’-behaviour” unit went recursive, and this is what became consciousness.
Basically, consciousness is the thing we use to simulate/predict the future and explain/learn from the past. But it isn’t the “important” bit of the human mind, and it isn’t in charge of our behaviour.
Essentially, we’re meat robots with very good Nexus 6 programming.
I guess that computers will get some aspects of that, then more, then yet more, then it will turn out that certain types with higher complexity are effectively somewhat conscious.
Then we will get screams from various philosophers, ethicists, and PETHEC (People for Ethical Treatment of High-End Computers). That will require quite some popcorn to watch.
Actually I’m desperately uncomfortable with it. It makes me feel sick to my stomach. A philosophy professor of mine said, “One generation’s anathema is the next generation’s paradox is the next generation’s cliche.” The realization that our consciousness probably is a physical phenomenon is irreconcilable with my self-conception. Unfortunately there is plenty of evidence to say it is true and no evidence to say it isn’t.
The point is that we know that consciousness runs in our brains, so a system that can run an accurate simulation of our brains can run consciousness. That’s not the same as a Chinese Room problem, which is a trick to appear to be conscious. I’m not talking about functional equivalence, I’m talking about a thing that can actually do consciousness. This is why I find the alternative to be magical. Basically, it’s saying, “No, our brains don’t just do consciousness by virtue of their physical structure,” because if they do, then it is obvious that another thing could also execute the exact same function. If you are saying that a full simulation of your brain would not see things the way you see things, I’m at a loss as to why that could be other than: 1) You don’t think that consciousness is a physical property of your brain; or 2) You think there is something special about brain matter so that we can rule out the possibility of such a simulation.
Not at all. I just think that the idea that our consciousnesses are anything other than complex software (with dedicated hardware) is unsupported.
Yeah, that’s where I thought we were. Consciousness is the part that screams about how important it is. Just like in a bureaucracy, that is rarely the important person.
Of course it’s also possible that certain aspects of consciousness are platform-dependent, and that there could be something about our brains that defies efficient modeling on a universal Turing machine. Conversely, machines might be able to develop hitherto unknown aspects of consciousness we ourselves cannot experience. It’s possible that we may end up midwifing machines that are differently conscious from ourselves. I say midwife because I don’t think even then that we’ll fully understand how consciousness works, but a lack of understanding of something doesn’t necessarily preclude leveraging the power of directed evolution to create it.
…okay, I know what you want to say. But manmade stuff doesn’t have to be a Turing machine. We can talk about arrays of quantum dots on silicon, we can employ molecular electronics, or even arrays of cultivated genemodded neurons on a photonic array (and communicate with flashes of light instead of with electrodes that are notoriously difficult to reliably attach). Or a hybrid of all the above.
I think in the context of this discussion, where the proposition is that a universe where consciousness is possible is an unlikely thing, it’s important to emphasise that we only know of one “seat of consciousness”, and that the likelihood that anything else could be a seat of consciousness is completely speculative. Otherwise we are stacking the deck a little and inflating the apparent likelihood of consciousness.
If you are asking if I think it is possible to create a biologically identical structure to the human brain that would be conscious, then yes I think that is possible, but it doesn’t have as much bearing on the likelihood of consciousness being possible as would accepting any structure that functionally replicates consciousness.
[quote=“john_c, post:149, topic:74269, full:true”]
I think it’s the important part. But then I would
[/quote]
There’s nothing wrong with saying that consciousness is your favourite bit of the mind. But it isn’t the executive, and the idea that consciousness is the essence of identity is highly debatable.
Consciousness is the commentary booth, not the football game.
As I pointed out above, even if we think that there is only one seat of consciousness, we still have no reason to suspect that a universe where consciousness exists is unlikely; we know that we cannot know whether it is likely or not; and even if we could prove it was dramatically unlikely, or even had a probability of zero, that would still be no reason to think it wasn’t literally guaranteed to happen.
But back to consciousness: we can’t pinpoint what it is, but at this point we do know that there is a very low bar for a system to be able to achieve anything that can be achieved in terms of computation. Again, I don’t see how to read this other than by rejecting that consciousness is a physical thing brains do or by insisting that brains are somehow special in their ability to produce consciousness. I don’t see any reason to think either of these things.
I’m not sure how possible this is. I guess it’s that word “efficient.” How much processing power and how much memory would we need to create one neuron if I were coding it? Probably way too much. But we know an upper bound on how much computing power it takes to model one neuron: it runs on about a microgram of matter. General computing is crazy inefficient in all things compared to just letting reality sort itself out.
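To make the “coding one neuron” question concrete, here is a minimal sketch of roughly the crudest model available, a leaky integrate-and-fire neuron. The function name and all the parameters are illustrative, not from the thread or any particular library, and a real neuron carries vastly more state (ion channels, dendritic geometry, neurotransmitter chemistry), which is exactly the gap being pointed at.

```python
# A minimal leaky integrate-and-fire neuron (illustrative only; real neurons
# involve far more state than a single membrane voltage).

def simulate_lif_neuron(input_current, dt=0.001, tau=0.02, v_rest=-65.0,
                        v_threshold=-50.0, v_reset=-70.0, resistance=10.0):
    """Return spike times (seconds) for a list of input currents sampled every dt."""
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        # Membrane voltage leaks toward rest and is driven by the input current.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset  # fire, then reset
    return spikes

# One simulated second of constant input (arbitrary units).
spikes = simulate_lif_neuron([2.0] * 1000)
print(len(spikes), "spikes in one simulated second")
```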
Let’s say we had a big hill and we rolled a rock down it. Where would the rock land? If we wanted a computer to tell us we’d be pretty much out of luck. The problem is just too insanely complex, and that’s just a rock rolling down a hill. Fortunately, a consciousness simulation doesn’t have to be perfect. I can believe we’d never have a shot at creating a simulation of your consciousness, for example. But just running the operation of consciousness on a machine? On some days I would guess that we’ve probably done it already by accident.
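A toy way to see why the rock is hopeless (this isn’t a model of a rock, just a sketch of sensitive dependence in a standard chaotic map, with made-up starting values): two initial conditions that differ by one part in a billion stop agreeing after a few dozen steps, so any measurement error in the setup swamps the prediction.

```python
# Sensitive dependence in the logistic map at r = 4 (a standard chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # differs from a by one part in a billion
for n in (0, 10, 25, 50):
    print(n, abs(a[n] - b[n]))  # the gap grows from 1e-9 to order 1
```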
I agree, though, that a consciousness on a machine might end up being so different from human consciousness that our instinct is to say it isn’t the same thing. Unless we create that consciousness by some kind of brutal life-or-death trials that are only peripherally related to consciousness, it probably won’t think about things the way we do at all.
That’s true, but I wasn’t really talking about complexity theory, at least not directly. What I mean is, universal Turing machines might not actually be as universal as we thought. Specifically, organic brains might be a class of analogue computer that can only be approximated by digital computers. Is there something lost in the approximation (and that might in fact be the purview of complexity theory) that’s essential to some or all aspects of what we experience as consciousness? Yes, Turing showed that you can model any machine on a digital computer given unlimited computing power or time. That’s what I meant by efficiently. It may not be practical on a digital computer.
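A small sketch of what “lost in the approximation” can mean in practice (the equation and step sizes here are mine, chosen only for illustration): discretizing even a simple continuous process with forward Euler steps gives an answer that depends on the step size, and the error only shrinks as the steps get finer and the computing cost grows.

```python
import math

def euler_decay(step_size, t_end=1.0):
    """Approximate x(t_end) for dx/dt = -x, x(0) = 1, with forward Euler steps."""
    x = 1.0
    for _ in range(int(round(t_end / step_size))):
        x += -x * step_size
    return x

exact = math.exp(-1.0)  # the continuous ("analogue") answer
for h in (0.5, 0.1, 0.01, 0.001):
    print(f"step {h:>6}: error {abs(euler_decay(h) - exact):.6f}")
```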
Now that hypothesis could be totally wrong. It may turn out that any operations salient to consciousness can be modeled efficiently on a digital computer. In fact, that’s the direction I’d lean in if I were a betting man, if for no other reason than, as a scientist, I’m trained, all else being equal, to start with the simplest hypothesis and work my way up the chain of Occam’s razor until I find a good model. But I just don’t think we know enough about how consciousness arises in the human brain to determine that conclusively.
That said, the brain is some sort of machine, and should in principle be something that can be built, even if it turns out that it needs to be something other than a digital computer.
I would bet on the not-too-hard-to-model-digitally side, because life is a system for maintaining low entropy, not high entropy. Quantum-mechanics-explains-free-will woo notwithstanding.
Plus, even if we agree that a functional consciousness isn’t necessarily a consciousness, I find it really hard to think that a functional neuron couldn’t be used as a component to build consciousness.
So take an infinite plane of Conway’s Life with some amount of random noise and you’ll eventually get a working model neuron out of it (*eventually*). Carry on until you have a whole brain.
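For anyone who hasn’t played with it, the substrate being invoked here is almost embarrassingly cheap to specify. This is a minimal sketch of the standard rules on a finite wrapped grid; the infinite plane, and the eventual neuron, are of course left as an exercise.

```python
import random

def step(live, width, height):
    """One generation of Conway's Life (B3/S23 rules) on a toroidal grid."""
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    key = ((x + dx) % width, (y + dy) % height)
                    counts[key] = counts.get(key, 0) + 1
    # A cell lives next step with exactly 3 neighbours, or 2 if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

W, H = 64, 64
cells = {(x, y) for x in range(W) for y in range(H) if random.random() < 0.3}
for _ in range(100):
    cells = step(cells, W, H)
print(len(cells), "live cells after 100 generations")
```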
There is no such thing as consistency with myths and “rules”. People who make such claims do so post-hoc and rely on confirmation bias and proof texting. They rely on faith in the idea they are correct and justify it after the fact.
They come up with the idea first and then look for scripture which fits. Prophecy is interpreted the same way. People always claim that specific prophecies which have already passed were just interpreted wrong and the next time will be the correct one. Some people were certainly killed for making false prophecies; others skirted by with good excuses. Plus you have people editing them after the fact to cover up the mistakes. The Bible has many authors over a very long period of time.
The Bible’s importance is best described by the phrase, “your mileage may vary”. Religion relies on one thing and one thing alone, faith. Any claims to the contrary are unsupportable.
I would argue that it’s a system for the mediation of entropic reactions, and that the chaotic decay of high-energy/low-entropy states to low-energy/high-entropy states appears to be the driving mechanism behind life as we know it, and indeed perhaps the entire reason the universe isn’t steady state. So I wouldn’t say it maintains low entropy so much as it leaches off of it. Conway’s CA do the same under the artificial physical laws of the game, but the question I think is unanswered is whether the brain is such an elaborate mechanism that its emergent properties are so delicately dependent on analogue processes as to require a machine that is either not digital or a digital machine much more powerful than folks like Kurzweil predict.
I find Penrose’s intriguing ideas about quantum consciousness to grow less likely as the evidence mounts against quantum-indeterminate brain functions. But analogue brain function may turn out to be too complex for a discrete machine. This wouldn’t necessarily be an example of a high-entropy system, just a highly continuous one, which might make sense since the natural world doesn’t appear to operate in discrete operations the way a digital computer does. In a sense, the natural world is an analogue computer. Ironically, even if quantum indeterminacy doesn’t play a crucial role in consciousness, quantum computation (specifically a subset of a quantum Turing machine) might be more suited than classical Turing machines to modeling analogue phenomena, not only consciousness but other complex natural processes as well.
Perhaps, and sorry I keep playing Devil’s advocate here, but even if we can model a functional neuron down to fine molecular precision, we won’t know if we missed something intrinsic that allows it to play consciousness with other neurons until we build a whole brain’s worth and network them. In essence, we must first build a complete model of a brain to test it (unless there’s a breakthrough in understanding consciousness itself that we can test with the brute-force approach of raw empirical modeling, which is always a possibility). And that’s a much harder task than building a complete model of a neuron because, plasticity of the brain notwithstanding, there’s a lot of emergent order in neural networks that we’ve barely scratched the surface of. So we might already have a good enough neuron model to aggregate a conscious brain, but we’re a long way from being able to actually test that hypothesis out.
All I’m saying is that in the biblical universe (i.e. the narrative we have that people are supposed to believe), there’s no great mystery about God’s existence. Clear signs happen in public, but faith is often seen in response to doubt about whether he’s powerful enough or whether external forces will win. Normal people may not generally hear or see him, but prophets, priests and some kings practically have a direct line of communication with God. In the same way, dragons exist in The Hobbit even if most background characters have never seen one. Claiming that the story fits within our world and they were just metaphorical or something doesn’t really fit the narrative, although in the Bible’s case we can have some good ideas about how the narrative came about without any true revelations from God. It’s not a comment about how religion actually works, just that the truth of Christianity stands or falls on its historical accuracy more than most religions do. I think we’re probably more or less in agreement on how well it does this.