“The God Particle”
Yes, but fecal reingestion is by no means the same as chewing cud. They achieve a similar goal through different mechanisms.
A bit like saying that Lance Armstrong won the Tour de France on a hoverboard, since both bikes and hoverboards have two wheels.
https://en.wiktionary.org/wiki/Sitzfleisch
Noun
Sitzfleisch (uncountable)
- The ability to endure or carry on with an activity.
The preserve of educated dolts.
Most unfortunate nomenclature
German is awesome.
If consciousness is something our brains do then a thing that does the same thing our brains do will also be conscious. If it isn’t, then I think you’ve already accepted enough “supernatural” into the world that believing in God is inevitable.
Victor Stenger goes over this in “God: The Failed Hypothesis.” The finely tuned universe may not be as finely tuned as people think it is. But really my point is that we say that if, for example, the gravitational constant were different and everything else stayed the same, then there would be no life. The idea that these dials can even be turned individually is a baseless hypothesis (unless we have insight into alternate universes I’m sorely uninformed about). If you are going to posit that these constants were just as likely to take any values and can change independently of one another, why not just posit that God exists? Either way it’s just making things up.
No, it’s not as simple as that. Functionalism ignores sensory qualia, and it can quite convincingly be shown to be an inadequate description of consciousness.
You mentioned the Turing test; consider a box that the Turing test is conducted against. Imagine that inside the box is a person who receives questions from outside the box as symbols they don’t recognise, then manipulates those symbols according to a set of rules, and returns an answer (there’s a toy sketch of that rule-following below). Imagine that this system is capable of answering questions in a way that is functionally identical to a conscious entity, but the person in the box never comprehends the questions or answers, just the rules they follow.
Is that box then a conscious entity in the same way we are? I say no.
It’s not an argument that consciousness is magical; it’s an argument that our experience of consciousness has certain qualities that are impossible to directly observe in other entities, and thus functionalism is not a sufficient description of consciousness.
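To make the purely syntactic rule-following concrete, here’s a toy sketch (entirely hypothetical – Searle never specifies the rulebook, and these entries are invented stand-ins for whatever the real rules would be):

```python
# Toy sketch of the room’s rule-following. The operator matches the
# *shapes* of incoming symbols against a rulebook and copies out the
# prescribed reply; at no point do the symbols’ meanings come into it.

RULEBOOK = {
    "你好吗？": "我很好。",            # "How are you?" -> "I'm fine."
    "今天天气很好。": "是的，很好。",  # "The weather is nice." -> "Yes, very nice."
}

def operator(symbols: str) -> str:
    """Follow the rules mechanically; no comprehension involved."""
    # The person inside the room does exactly this lookup by hand.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(operator("你好吗？"))  # -> 我很好。 (produced with zero understanding)
```

The real rulebook would obviously have to be unimaginably larger to pass a Turing test, but the operator’s job never changes: match symbols, copy symbols.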
Whoa there! That’s quite an imagination. Who says conscious entities necessarily answer questions according to algorithmic if-then-else rules?
(You’re also proposing that the person in the box is incapable of learning the symbolic language despite having access to all the symbols and all the rules, but that’s secondary.)
If I say yes, where does that leave us? Is there any argument you can give to prefer “no” besides incredulity about the box being conscious? Because I will see that, and raise it with incredulity about the box being able to act functionally identical to a consciousness in the first place.
So far from being convincing, the Chinese Room argument is total nonsense because of this. It asks you to reject an outlandish outcome, but only after asking you to accept an outlandish premise. We assume a box of rules can’t be conscious because a box of rules would normally be something much simpler than a human brain – and yet we’ve just stipulated that this one isn’t simple at all, since it somehow acts exactly like one. The experiment pushes you past the reach of your intuition and then insists on it.
A collection of interacting components with no understanding on their own, but that together act like a person, is exactly what a human brain is. I don’t have direct evidence of qualia in any mind but my own, but rather than turning solipsist, I assume other people experience the same type of consciousness because I can tell they work the same way. If you could really build a box of rules so insanely elaborate that it could actually manage the same result as all my neural interactions too, why would I suddenly assume differently?
I think it’s a pretty useful thought experiment, but you can also look at pain responses in patients with neurological damage who no longer actually feel pain, yet react as if they do. In this case you can actually discuss it with the agent and determine that they have a functionally identical response to pain stimuli but do not have the associated sensory qualia.
I’m happy to accept that other people have the same experience of sensory qualia that I do, because that seems like a reasonable assumption, but I definitely don’t accept that artificial intelligences – entities purposefully designed to emulate consciousness functionally – will also have the same qualia, and if you do, I think that is a massive assumption.
Accordingly, I think it’s speculative to assert that consciousness could easily occur in environments with disparate physics.
It’s just a thought experiment to illustrate a point. The mechanism isn’t important. It’s designed to help isolate and illustrate the particular nature of our experience of consciousness.
Much more eloquently said than my off-the-cuff objections.
No, the mechanism forms the whole argument. Without it you’re basically presuming something that is interacting with you in exactly the same way I am. I’m guessing you don’t doubt that I’m conscious, so what makes the room different?
It’s not the lack of understanding of the components, since my neurons have the same. The only reason I can see, then, is that the mechanism you’ve picked is something that intuitively seems ridiculous to call conscious. But if it’s also intuitively ridiculous to imagine it would act conscious, and in this case it pretty clearly is, that means the intuition is being deliberately misled.
If I run the same experiment, but inside the room is a computer performing the rules, would you still say no? What if the computer were simulating an assembly of electronic neurons configured to behave just like a human consciousness? What if they were instead carefully connected biological neurons? A vat-grown brain? A womb-grown human?
Except for intuition about whether such a mechanism could actually be like a person in the first place, what would make one answer different from another? So doesn’t it matter whether the mechanism makes sense?
Except they obviously don’t react the same. Because part of my reaction to pain is that if you ask me about it I talk about how it felt, and presumably the reason you claim these patients don’t have the associated qualia is based on how they talked about it.
Yet in the thought experiment even those reactions have to be duplicated. If it’s functionally identical to a conscious entity like me, it needs to be able to talk about such qualia as well as I can. Maybe not something like physical pain because if there’s no way to input that to the room, there’s no way it could be asked about it. Just as you couldn’t ask me about a photo if I had no eyes.
But we say anything that can be communicated to the room will be described as a person would describe it. So for instance you could ask it whether the conversation has been exciting, or boring, or frustrating, or elicits any other emotion. And the assumption is it can answer and explain that emotion just as well as a person: make the comparisons a person would make, use the metaphors a person would use, and, if prompted, write the poems a person would write.
That it makes any sense to imagine something could do all this without ever actually feeling excited, bored, frustrated, or what have you is a massive assumption on its own. It ignores all the ways we conclude other people feel things in the first place in favor of intuition about a deliberately counter-intuitive hypothetical. In making up the possibility that a book of rules could have one capability and not the other, the thought experiment isn’t illustrating anything, just begging its own question.
Forget about the Chinese room. I’m not married to that argument if it’s so problematic for you.
If functionalism is to provide a full account of consciousness, then functional states must equal mental states.
If it can be shown that functionally identical states can exist but with different mental states, then functionalism is not offering a complete account of consciousness.
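Put schematically (my own shorthand, not notation from any particular source), the shape of the argument is a simple modus tollens:

```latex
% F(s): the functional state of s;  M(s): the mental state of s  (hypothetical shorthand)
\text{Functionalism:}\quad \forall s_1, s_2\;\; F(s_1) = F(s_2) \Rightarrow M(s_1) = M(s_2)
\text{Counterexample:}\quad \exists s_1, s_2\;\; F(s_1) = F(s_2) \,\wedge\, M(s_1) \neq M(s_2)
\text{Hence:}\quad \text{functionalism is not a complete account of consciousness.}
```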
There are numerous ways to show this. Absent qualia is one: the patient who can’t feel pain but who has an identical functional response to the same stimulus. The inverted spectrum argument is another. I’m guessing you’re familiar with it, but it’s the argument that two people can have a functionally identical visual experience while each sees the world with a comparatively inverted colour spectrum.
In both cases it is quite easy to imagine identical functional states but different mental states.
You seem to want to include qualia in the functional states, which is fine if you are discussing it with a person who is aware they have a different experience of pain, but it doesn’t hold up with the inverted spectrum argument. Their mental states will be functionally identical, but different because of sensory qualia, and there is no way for them to be aware of that difference. This implies functionalism doesn’t provide a full account of mind.
If the subjects are seeing opposite colors then they’re not functionally identical.
But they would present as functionally identical, and that presentation is what we are talking about with functionalism as a complete account of mind.
What? No they wouldn’t. That’s not how the eye works. All the colors of the rainbow are not equal, at least in terms of stimulus. That’s why you don’t see a lot (heh) of 100% yellow type on white paper (quick numbers below).
You might get kicked in the shin with the same force as you get kicked in the nuts, but those in no way present as functionally equivalent experiences.
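For what it’s worth, the yellow-on-white point checks out numerically. Here’s a rough sketch using the WCAG 2.x relative-luminance and contrast-ratio formulas (the formulas and colour picks are my choice, purely to illustrate): 100% yellow carries almost all of white’s luminance, so the contrast ratio barely clears 1:1.

```python
# Rough check of the yellow-on-white claim using the WCAG 2.x
# relative-luminance and contrast-ratio formulas (sRGB assumed).

def linearize(c: float) -> float:
    """Convert a gamma-encoded sRGB channel (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(r: float, g: float, b: float) -> float:
    """WCAG relative luminance of an sRGB colour."""
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast(l1: float, l2: float) -> float:
    """WCAG contrast ratio between two relative luminances."""
    hi, lo = max(l1, l2), min(l1, l2)
    return (hi + 0.05) / (lo + 0.05)

white  = luminance(1, 1, 1)   # 1.000
yellow = luminance(1, 1, 0)   # ~0.928 -- red + green dominate luminance
blue   = luminance(0, 0, 1)   # ~0.072

print(f"yellow on white: {contrast(yellow, white):.2f}:1")  # ~1.07:1 (near-invisible)
print(f"blue on white:   {contrast(blue, white):.2f}:1")    # ~8.59:1 (easily readable)
```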
Well, I was mostly responding to the Chinese Room, and I don’t think this is actually the same question – it’s about what we can call equivalent versus what we can say exists.
Just for the record, though, I will say the inverted spectrum never made much sense to me. If you want to talk about the stimulus in the optic nerve or maybe visual cortex, the equivalence seems pretty trivial. So the question is how things are perceived in the rest of the brain.
But in this case it seems plain to me that my red is neither your violet nor your red. That’s why for one person a long wavelength might seem pleasant like a sunset, for another a pulse-quickening reminder of a lover’s dress, and for another nauseating like blood. These are all different perceptions, and themselves change with time and context. The nerve input may be the same but we know the brain responses never are.
So exactly what do we propose could be the same about the red and violet? The idea of a quale I understand, but what does it mean to equate two in different people where neither the nerve nor brain state maps? That already seems to me to give them an independent existence, and one beyond what any one individual experiences, so again seems like begging the question.
IIRC, engineers are also overrepresented amongst conservatives and terrorists.
The Oompa-Loompas of Science are a bit of a worry sometimes.
I think the Chinese room question is a Searle thing? I might have stuffed up: I don’t think he came up with it in response to functionalism; I think it’s directly aimed at AI. It was probably the wrong example to use.
@L_Mariachi I mean an external observer of the two people would see them as functionally identical. The salient point is that there may be varied internal experiences for each person while their states of mind appear functionally identical.
I’m mainly just defending myself from the assertion that if I deny functionalism I must be a desperate God botherer…
One thing I’ve always loved about Boing Boing is the chance to put my sometimes difficult to defend views out there against really big brains and get them vigorously challenged.
I’m glad to see that hasn’t changed!!!
However I’m about 4 glasses of wine deep now and my capacity for this kind of conversation is vaporising… so I think I’ll quit for the night. It’s been good to stir this up in my head though, I’ve enjoyed it!
Now where’s the hot Trump thread…