Originally published at: http://boingboing.net/2017/03/09/neuroscientist-explains-the-sa.html
…
I thought that neuroscientist was Jason Mantzoukas. I’m still not totally convinced that it isn’t.
I wonder about the sort of active electrical activity going on in a living brain, and what part of that would be lost to destructive measurement.
Ironically, neuroscience can’t explain Jason Mantzoukas. How can regular words, said in a particular cadence, be the funniest thing you’ve ever heard?
Correct me if I am wrong: each connection is weighted by use, the same concept we emulate with artificial neural networks. While neurons either fire or don’t, the connections between them have different resistance because of how they have been used, how frequently, etc. Neuronal pathways that are used more tend to be used more.
A map of the connections would show you the wiring, but I don’t really understand how we could see the weights of those connections, and it seems like the weights account for the strong differences between people. It seems like what you would have is a diagram of a generic, empty human brain.
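The “used more tends to be used more” idea can be sketched as a toy reinforcement loop. This is a minimal illustration, not anything from the video; the edge names, starting weights, and the 0.1 increment are all made up for the example.

```python
import random

# Toy Hebbian-style weighting: a connection that carries a signal gets
# strengthened, so it is more likely to carry the next signal too.
# All names and numbers here are illustrative assumptions.

weights = {("A", "B"): 1.0, ("A", "C"): 1.0}  # two connections, equal at first

def fire(source, weights, rng):
    """Pick an outgoing connection with probability proportional to its
    weight, then strengthen whichever connection carried the signal."""
    edges = [e for e in weights if e[0] == source]
    total = sum(weights[e] for e in edges)
    r = rng.random() * total
    for e in edges:
        r -= weights[e]
        if r <= 0:
            weights[e] += 0.1  # "used more" -> "used more later"
            return e[1]
    return edges[-1][1]  # unreachable fallback for float safety

rng = random.Random(42)
for _ in range(1000):
    fire("A", weights, rng)

# The weights started equal but drift apart: whichever connection got an
# early lead keeps attracting more traffic.
print(weights)
```

This rich-get-richer dynamic is the toy version of the commenter’s point: the wiring diagram alone (two edges out of “A”) never changes, but the weights end up carrying most of the information.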
** Yeah, I had tuned out the entrepreneur; they cover this at the end.
Bottom line: To a first approximation, everyone is a substantialist.
I always find this stuff frustrating. “Neurons communicate with each other.” Yes, but all a neuron can say is “!!!”
I know (or believe) that when it comes to perceiving and understanding the world, all we have are brains. But I don’t understand how a zillion cells saying “!!!” to each other in various patterns makes us sense, feel, or experience anything.
So I’m a materialist who doesn’t understand… material.
I found that if you stop saying, “How could brain signals result in my internal experience of perception,” and just say, “maybe creating an internal experience of perception isn’t actually that hard a thing to do,” then the world actually makes more rather than less sense. This notion you have of an internal experience that you can’t see being made of matter: what do you think it is? Does it just slip away from you every time you try to think about it too deeply? Maybe thinking it’s all made of matter is simpler.
I recently read an article about some scientists checking whether the predictions of their model, which says the universe is holographic, coincide with the background radiation from the big bang. They did.
I think it’s weird to think that perception can’t arise from matter when we don’t even know what matter is.
Well, the neurons don’t help us sense anything; nerves (the limbic system) do. Neurons just interpret those sensations, and they work (more or less) randomly at first. But then we also have an endocrine system (hormones like dopamine, serotonin, cortisol, etc.) that is also regulatory in nature. And that system acts (again, more or less) independently of the neural system at first. But! There’s a tiny bit of connection between them, enough to get going anyway.
But over (a relatively short) amount of time, neural pathways are created to directly connect these systems up and create a feedback loop between sensation / interpretation / response. And our pattern recognition / long-term memory storage capacities are truly off the hook.
So to your point, it’s not just neurons firing, but neurons activating more complicated systems (nerves / endocrine / long-term memory potentiation) which form the more complex model we understand as “feeling”, “imagining”, “remembering”, etc.
tldr Neurons have a lot of friends in the body that help you perceive the world.
It helps not to think about it as sensing or feeling as instances of some disembodied concept of sensing or feeling. Instead, think of it as “Oh, so that’s what it is like to be a brain.” The brain just goes on doing its brainy business and that all feels like… well. That feeling right now? Yeah? Feels like that.
Or, in Neuron: !!!, !!! !!! !!!. !!!? !!! !!! !!! !!!. !!! !!!, !!! !!!: !!!, !!!, !!!, !!! !!!.
(I’ve written code in programming languages with barely more expressiveness than that, it occurs to me)
I can’t listen to this. Why does every video now have to have background “music”? Well, if it were actually in the background, it wouldn’t be quite so bad.
If a brain has 100 billion neurons, it would take 37 bits (2^37 options) to uniquely reference a specific neuron. Assuming that all you want to know about a connection is a source and a destination, each connection could be stored in 74 bits. At 7,000 connections per neuron, my math suggests you’d need about 6.5 petabytes, which seems manageable. I wonder how much more information you’d need to fully model a functional brain.
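That back-of-the-envelope arithmetic checks out. A quick sketch (the 100 billion neurons and 7,000 connections per neuron are the commenter’s figures, and a petabyte is taken as 10^15 bytes):

```python
import math

neurons = 100_000_000_000                    # 100 billion (commenter's figure)
bits_per_id = math.ceil(math.log2(neurons))  # bits to name one neuron uniquely
bits_per_connection = 2 * bits_per_id        # source id + destination id
connections = neurons * 7_000                # ~7,000 connections per neuron

total_bytes = connections * bits_per_connection / 8
petabytes = total_bytes / 1e15

print(bits_per_id)   # 37 bits per neuron id
print(petabytes)     # ~6.5 petabytes for the whole connection list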
Indeed.
I believe there is one school of thought that speculates that giving rise to perception/consciousness/sentience/subjective experience is simply one of the fundamental properties of matter, much like a negative electric charge appears to be for the electron, and speculating about the whys and hows of the former is about as fruitful as doing so for the latter. (Somewhat analogously, the Stanford Encyclopedia of Philosophy’s answer to the question “Why is there something rather than nothing?” begins “Why not?”)
Certainly dualism seems a cop-out. Great, you’ve posited a different kind of substance – call it soul, spirit, whatever. So how does that give rise to sentience? If the answer is “it just does”, why can’t we use the same argument for boring old matter?
I’m not clever or knowledgeable enough to articulate it properly, but it seems to me there is, or may be, a contradiction in terms in the idea of the philosophical zombie: I’m not convinced it makes sense to separate the appearance of sentience from actual sentience. Emotional states, for example, seem to be too closely tied to the actions that betray them: not only is it hard not to smile if you’re happy, or laugh if you’re amused, or frown if you’re cross, it’s also hard not to feel the corresponding emotions if you deliberately put on a particular expression – smiling can cheer you up, frowning can depress you, regardless of what you were originally feeling. I think this was where Dennett was heading in Consciousness Explained, but although I enjoyed the earlier parts where he gave John Searle and co. a good kicking, by the time he got to zimboes and multiple levels of self-reflection and so on I was more than a little lost.
Emotional states, for example, seem to be too closely tied to the actions that betray them: not only is it hard not to smile if you’re happy, or laugh if you’re amused, or frown if you’re cross, it’s also hard not to feel the corresponding emotions if you deliberately put on a particular expression – smiling can cheer you up, frowning can depress you, regardless of what you were originally feeling.
This is just because your brain still creates neural pathways in an amazingly multidisciplinary manner. So when you’re a baby, your brain builds up this model of “something amusing + someone smiling + dopamine = happy + smiling mimicry,” and eventually that’s what the pathway is: just a big jumble of the physical (smiling), emotional (happy), sensate (relaxed heart rate), hormonal, long-term memory, etc.
It’s the same reason you look up and to the right when trying to visualize things: your brain literally builds that shortcut in as a physical “assist” (reduce your physical visual acuity to increase your mental activity) fit to purpose.
And of course it’s easy to short circuit all of these through chemistry / electricity / disease / etc.
Yes, the brain changes with time. But the purpose of the project is to make a computer work like a brain, so the change (or change in connection strength) doesn’t matter: if we capture one particular state of the brain, we can simulate it. My bigger concern is a brain without a body. The brain receives signals from the body, so we would have to simulate the complete nervous system, with all its receptors, to make the computer work like a brain and give each part of the brain the signals it would receive.
Brains are way more complicated. Those connections are weighted, for one thing. But also, neurons respond to hormones and neurotransmitters. Neurotransmitters are a way neurons communicate with one another, but they actually float free in a small gap between neurons, and so they can drift out into the brain. So a neuron, at any given time, might encounter a neurotransmitter telling it to fire, or not to fire, with different weights. To properly model that, you’d need to model how molecules drift through fluid-filled gaps tens of nanometers wide, when they float free, and what they do. That sounds like dropping a billion messages-in-bottles in the middle of the ocean and predicting where each one lands.
The best we can do to model brains is to build things that work like brains. We can’t create a full information-perfect copy of a brain.
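The messages-in-bottles point can be illustrated with a toy random walk. This is nothing like a real diffusion model of a synaptic cleft; the step counts and step sizes are arbitrary, and it exists only to show that identical starting conditions still produce wildly different per-molecule endpoints.

```python
import random

# Toy 2-D random walk for a handful of "neurotransmitter molecules"
# released at the same point. Per-molecule prediction is hopeless even
# though the statistical spread is perfectly well behaved.
# Step counts and sizes here are arbitrary assumptions, not physical values.

def walk(steps, rng):
    """Random-walk one molecule from the origin; return its endpoint."""
    x = y = 0.0
    for _ in range(steps):
        x += rng.gauss(0, 1)
        y += rng.gauss(0, 1)
    return (x, y)

rng = random.Random(0)
endpoints = [walk(1000, rng) for _ in range(5)]

# Five molecules, identical release point, five very different landings:
for p in endpoints:
    print(p)
```

The aggregate behavior (how far the cloud spreads on average) is easy to characterize; tracking each bottle is what’s intractable.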
I think something could definitely appear to be sentient without actually being so, in the same way a wall can look red through red-coloured glasses without being red. Basically, when someone brings up the philosophical zombie, all I can say is, “Yeah, I could be fooled alright!” Like somehow it being possible to trick me is a big philosophical revelation.
I agree. When someone thinks consciousness requires a miracle, but they can’t tell me what a miracle is, I think it should occur to them that all they are saying is, “I don’t get it.” Yeah, you don’t get it. It’s some sort of weird arrogance to jump from “I don’t understand this” to “This cannot be understood.”
This topic was automatically closed after 5 days. New replies are no longer allowed.