Good question. Apparently long-winded rants in response to imagined arguments no one made, plus disingenuous accusations of denigrating women, are all we can get for some reason.
You trust a lexicographer over a mathematician about what words mean in English, which is the language this article is written in.
Merriam-Webster defines “ring” as a circular band for holding, connecting, hanging, pulling, packing, or sealing; a circlet, usually of precious metal, worn especially on the finger; and so on. Only when it gets to definition #11 does it give the proper mathematical definition: a set of elements closed under two binary operations with such-and-such properties. But which do you think Lord of the Rings is about?
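(Aside, hedged: for anyone who wants the “such-and-such properties” spelled out, the standard ring axioms look roughly like this. Nothing below is specific to the article; it’s just the textbook list.)

```latex
% The usual ring axioms, for a set R with two operations + and ·
% (some authors also require a multiplicative identity 1):
\begin{align*}
&(a+b)+c = a+(b+c), \qquad a+b = b+a,\\
&\exists\, 0 \in R:\ a+0 = a, \qquad \forall a\ \exists\,(-a):\ a+(-a) = 0,\\
&(a\cdot b)\cdot c = a\cdot(b\cdot c),\\
&a\cdot(b+c) = a\cdot b + a\cdot c, \qquad (a+b)\cdot c = a\cdot c + b\cdot c.
\end{align*}
```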
I don’t get why this is so hard.
Some of the debate seems like a bit of the old connectionism vs. hard AI controversy, which tends to smolder under the surface whenever metaphors for minds and electronics abound. It’s been going on at least since Minsky and Papert wrote Perceptrons, arguing that single-layer perceptrons aren’t general purpose, and helped kick off the AI winter.
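To make the Perceptrons point concrete: the book’s headline limitation is that a single-layer perceptron can only draw one straight line through its inputs, so it can’t even represent XOR. Here’s a minimal brute-force check of that claim; the weight grid and code are just for illustration, not anything from the book.

```python
# Brute-force check that no single-layer perceptron computes XOR.
# A perceptron outputs 1 if w1*x1 + w2*x2 + b > 0, else 0.
import itertools

def perceptron(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Scan a coarse grid of weights and biases; none reproduce XOR,
# because XOR is not linearly separable.
grid = [x / 2 for x in range(-10, 11)]  # -5.0 .. 5.0 in steps of 0.5
found = any(
    all(perceptron(w1, w2, b, x1, x2) == y for (x1, x2), y in xor_table.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print("Some perceptron on this grid computes XOR?", found)  # -> False
```

(A second layer fixes this, which is roughly where connectionism picked back up again later.)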
I’m also skeptical about fully extending Turing models to every dynamic system, since they work best in a timeless, infinite-memory non-reality.
A computer does not have to be Turing complete, nor does it even have to be electronic or based on digital processes. Crazy idea… it just needs to be able to compute. As @jerwin said above, the original computers happened to all be flesh-and-blood humans.
I expected a thread on bb about one of my favorite subject matter areas to be, well, more interesting…
Well, that would explain how cat videos get to stay in there far more easily than useful facts.
That’s like saying “a brain is a typist” because typists are human beings, and denying that definition denigrates the work of the mostly-female humans who worked as typists.
The Internet, you say?
You don’t have a tiny Katherine Johnson living in your skull. You have a version of the organ she relied on to perform calculations.
social networks, privilege separation, and cosmology
This book hopes to present a new analogy for how the brain works. But why do we need a new imperfect analogy to replace the old imperfect analogy? Who is being confused, and is their confusion scientifically dangerous and unproductive? Or does this book simply aim to entertain the popular audience, who will then form the theoretical basis for the next neuroscience book: Your Brain Is Not Like the Internet.
The definition of compute is “calculate or reckon (a figure or amount).” A computer is that which calculates or reckons – including the human version. Edit: I’ll add that a computer does not have to be a Turing machine, nor Turing complete.
Do brains calculate or reckon? They are signal-processing systems, to be sure, at least in part. By that definition, brains have computational capability. Are they computers, and what is even meant by the question? Are they wholly computers?
I personally think brains (neuronal systems) are both computer and antenna, but that’s another story entirely.
I’m a little surprised this entire thread has gone by and I haven’t seen any mention of neuromorphic computing. It’s a newer area, but it seems relevant. von Neumann architecture computers are hardly the only type that can be built. Are our brains von Neumann computers? Absolutely not. Does that rule out their being at least partially computational in nature? No, I wouldn’t think so.
You don’t have a tiny Katherine Johnson living in your skull
Ted Chiang’s “Seventy-Two Letters” (from Stories of Your Life and Others) was very confusing…
Well, digital computers are that way. There are all manner of analog computers (both electrical and mechanical) that might be interesting metaphors for some of the mechanisms you describe. Analog computers are particularly good at signal processing and differential equations. Analog was the go-to way to tackle those problems until digital computers got so fast that it was worthwhile to digitize everything at super high resolution and process it in that space. Awesome examples of analog computing in action include the Norden bombsight that (very) arguably won WWII, and the cam-and-gear monstrosities the Navy once used to compute firing solutions on ships.
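For a flavor of what those machines were doing: an electronic analog computer solves a differential equation by wiring integrators together, so a voltage literally is the solution. Here’s a tiny digital imitation of that setup for a mass-on-a-spring (x″ = −x); it’s a sketch of the integrator idea only, not a model of the Norden or of any Navy fire-control computer, and the step size is arbitrary.

```python
# Digital imitation of two chained "analog integrators" solving x'' = -x.
# On a real analog computer each integrator is an op-amp (or cam/gear)
# stage, and the variables below would be continuously varying signals.
import math

dt = 0.001          # step size standing in for continuous time
x, v = 1.0, 0.0     # initial position and velocity
t = 0.0

while t < math.pi:  # run for half a period of the oscillation
    a = -x          # "summer" stage: acceleration = -position
    v += a * dt     # first integrator: accumulate acceleration into velocity
    x += v * dt     # second integrator: accumulate velocity into position
    t += dt

# After half a period, x should land near cos(pi) = -1.
print(f"x(pi) ≈ {x:.3f}, exact cos(pi) = {math.cos(math.pi):.3f}")
```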
Your point is well made, though; I’m just being argumentative.
Not sure that’s true, at least for common definitions of “computation.” A tardigrade has a brain, but I don’t believe tardigrades are particularly good at math.
I don’t know anything about the specific analog computers you mentioned but have wondered whether a high-throughput, distributed, analog computer could be built that is more brain-like. That’d be really cool.
The OP was about finding a better metaphor to help us understand the brain. I was taught “all models are wrong, but some models are useful.” A model is useful when it shares key functions with the thing being modeled and we understand how the model produces those functions. Using metaphors of unknown or hypothetical devices (like possible future quantum computers) isn’t helpful, because it replaces one thing that isn’t understood with another that isn’t understood. The modern digital computer (which the OP is unquestionably talking about) is a poor model because its similarity with the brain basically ends at “they both process information.” The analog computers you mention sound like dedicated-use machines, which wouldn’t be a good model either, since flexibility and learning are such key functions of a brain.
Of course they do. Brains are wired up to many organs, sensory and otherwise, and receive, interpret, and respond to incoming signals. The signals have values, and neural networks model an external reality (among other things) based on the incoming electrochemical signals.
I think this discussion is largely limited to semantics at this point. As I said…
And as I said, if you broaden the definition of “computer” to encompass all manner of physical phenomena rather than something that performs mathematical calculations then the term “computer” becomes meaningless.
I think I’ve discovered some much-needed specificity for this discussion topic. Both are papers authored (at least in part) by Daniel Graham.
Network Communication in the Brain (Daniel Graham, Andrea Avena-Koenigsberger, Bratislav Mišić, 2020)
The Packet Switching Brain (Daniel Graham, Daniel Rockmore, 2011)
The answer is yes, but it turns out to be easier to do it in software because of the scale required. That’s what modern neural network AI algorithms are. The hip term for them these days is “machine learning” but they’re really just statistical models created by layers of “neurons” (we do call them that in computer science) with weighted analog connections between them. Because of the terminology used, laypeople often see these and think they are “simulated brains” but that’s like calling a plastic toy plane a “simulated Starship Enterprise”. Calling it a brain is so far from correct that it’s “not even wrong” as we like to say.
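To show how un-mysterious the guts are, here’s roughly what “layers of ‘neurons’ with weighted connections” boils down to. This is a toy sketch with made-up random weights, not any particular library’s implementation, and a real system would also include a training step that nudges the weights to fit data.

```python
# A "neural network" stripped to its essentials: arrays of weights,
# a nonlinearity, and nothing brain-like beyond the borrowed name.
import random

def relu(x):
    return x if x > 0 else 0.0

def layer(inputs, weights, biases):
    # Each output "neuron" is a weighted sum of the inputs plus a bias,
    # pushed through a simple nonlinearity.
    return [
        relu(sum(w * i for w, i in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

random.seed(0)
# Two layers with made-up weights: 3 inputs -> 4 hidden units -> 2 outputs.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2

x = [0.5, -1.2, 3.0]            # some input "signal"
output = layer(layer(x, w1, b1), w2, b2)
print(output)                   # just numbers out of a statistical function
```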
The funny thing is, these things come full circle. Once you have a general purpose algorithm like this that proves effective in many domains, sooner or later someone builds dedicated hardware for it to accelerate it. Nowadays that “dedicated” hardware tends to be GPUs because they’re good at this sort of thing anyway, but dedicated machine learning ASICs and FPGAs also exist.
The elegant genius of them is that they are programmable. They aren’t single-purpose, though calling them general-purpose computers is generous (if technically correct). Some are programmed via patch cords and/or potentiometers, and some by setting gears and the like. They are more cumbersome, but actually more precise because they keep the math analog. When modelling a signal or a trigonometric function, a digital computer is always approximating, because it has to reduce what is, in nature, an infinitely precise signal to a sampled approximation of it. Modern computers can sample at such a high rate that it hardly matters, but for a long time that wasn’t true, which is why things like radar processing and even tuning radios stayed analog until very recently.
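A quick way to see the sampling point: digitize a sine wave with a slow sample-and-hold and a coarse quantizer, and watch how far the digital copy drifts from the “infinitely precise” original between samples. The rates and bit depth below are arbitrary, picked only to make the error visible.

```python
# Sample-and-hold digitization of a sine wave: between sample instants the
# digital side only has the last quantized value, so it diverges from the
# continuous original.
import math

def quantize(value, bits):
    # Map [-1, 1] onto 2**bits discrete levels and back.
    levels = 2 ** bits - 1
    return round((value + 1) / 2 * levels) / levels * 2 - 1

freq = 5.0           # signal frequency in Hz (arbitrary)
sample_rate = 50     # samples per second (arbitrary, deliberately low)
bits = 4             # coarse 4-bit resolution to exaggerate the effect

worst_error = 0.0
for step in range(2000):   # fine-grained stand-in for "analog" time
    t = step / 2000
    exact = math.sin(2 * math.pi * freq * t)
    last_sample_t = math.floor(t * sample_rate) / sample_rate
    digital = quantize(math.sin(2 * math.pi * freq * last_sample_t), bits)
    worst_error = max(worst_error, abs(exact - digital))

print(f"worst error at {sample_rate} Hz / {bits} bits: {worst_error:.3f}")
# Cranking up sample_rate and bits drives this toward zero, which is the
# "digitize at super high resolution" trick mentioned above.
```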
Coming full circle now, a machine learning algorithm implemented in an FPGA ends up being basically what you’re looking for: a fast, special-purpose computer modelling a version of a “brain” with the learning ability. The main problem is it doesn’t scale all that well. They are also, at the end of the day, still just an obfuscated way to generate a statistical model.
Laypeople, again, see these algorithms for more than they are and presume if we just scale this up to trillions of “neurons” it will magically gain consciousness and presto: Artificial Human. There’s zero evidence to support that idea, even if implementation were technically possible.
Neuromorphic computing is another approach being tried, though a bit more fringe. It’s basically trying to implement particular brain models (not correct but useful, as you say) directly in hardware, with analog elements like memristors and activation thresholds implemented with transistors. It’s sort of the PhD version of the layperson’s idea to “build something brain-like and hope something cool happens”. It’s likely to get supplanted by the systems I describe above, because they are always faster and are proving more flexible and scalable. It always ends up being easier to do things in software, and neural network algorithms are delivering the goods. We have real-time facial recognition and Siri, after all: things thought impossible 50 years ago.
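For anyone wondering what “implementing activation thresholds” means concretely, the usual starting point is something like a leaky integrate-and-fire neuron: charge accumulates, leaks away, and the unit fires when it crosses a threshold. Here’s a toy software version of that idea; the parameters and input train are invented for illustration, and real neuromorphic chips do this in analog hardware rather than in a loop like this.

```python
# Leaky integrate-and-fire neuron: the kind of unit neuromorphic hardware
# builds out of memristors and transistors instead of code.
leak = 0.9          # fraction of accumulated charge retained each time step
threshold = 1.0     # fire when the membrane "voltage" crosses this
membrane = 0.0

# A made-up train of input currents arriving over time.
inputs = [0.3, 0.4, 0.1, 0.5, 0.0, 0.0, 0.6, 0.7, 0.2, 0.0]

for t, current in enumerate(inputs):
    membrane = membrane * leak + current   # integrate the input, with leak
    if membrane >= threshold:
        print(f"t={t}: spike!")
        membrane = 0.0                     # reset after firing
    else:
        print(f"t={t}: membrane={membrane:.2f}")
```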
It’s the dictionary definition yo. You seem to be the one who wants to narrow the definition for some reason. Anyway.
My gods, what a frustrating thread to read. So many knowledgeable brains talking right the fuck past each other. One of the brain’s superpowers is its ability to classify, approximate, and label. But then we get so firmly attached to those frameworks we’ve thunk up that we will bend space and time to get everything but space and time to fit into those frameworks.
Give me enough time and recreational drugs and I’ll list out a hundred ways the brain is like a big city, a government, a university, a computer, or a cyclist in heavy traffic. Each one of those comparisons could yield explanatory insights, but sufficient devotion to any of those models would eventually obscure more than it illuminates.