How neural networks work - A good explainer video

Originally published at: https://boingboing.net/2020/07/08/how-neural-networks-work-a-g.html

6 Likes

Well done!

Honestly, I don’t understand why a video like this must have a soundtrack.
I found the ‘background’ music too loud, distracting, and annoying to sit through the entire thing…

Is there a tool/site that can strip the music from a YouTube video but leave the voiceover?
Is that something that someone smarter than me could train an AI to do?

2 Likes

So that is a good explainer of artificial neural networks. For anyone interested in biological neural networks, however, it is very misleading. Every time he used the word “neuron”, I felt a pin stab. The nodes of artificial neural networks do not work at all like neurons, and the insistence on using biological terms to describe very non-biological processes is really counterproductive.

6 Likes

It’s a fair point that not everyone chancing on that video is going to initially appreciate the distinction between ANNs and biological nervous systems. Isn’t the original work of McCulloch and Pitts that underlies what’s described here based on observations of animal (squid?) nerves, namely the activation function and a nodal structure with a multitude of inputs and outputs? As I recall, it was only somewhat later that the degree to which that model departs from the biological was well appreciated.
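
If it helps, the whole McCulloch-Pitts model really is just a weighted sum fed through a hard threshold. A minimal sketch (my own Python, with illustrative names, not code from the video or the 1943 paper):

```python
# A McCulloch-Pitts-style unit: fire iff the weighted input sum
# reaches a fixed threshold. Names and the AND example are made up.

def mcp_unit(inputs, weights, threshold):
    """Return 1 (fire) iff the weighted sum of inputs reaches threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the unit computes logical AND.
print(mcp_unit([1, 1], [1, 1], threshold=2))  # -> 1
print(mcp_unit([1, 0], [1, 1], threshold=2))  # -> 0
```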

2 Likes

It’s been almost 80 years since McCulloch and Pitts’ original paper. At the time, fairly little was known about neurophysiology (it predates Hodgkin and Huxley’s description of the ionic basis of the action potential). That they didn’t account for the fact that neurons are inherently dynamic, analog, spatially distributed processors does not excuse modern neuroscientists for ignoring the past 80 years of research. Calling a static point process a “neuron” seriously impedes understanding of how nervous systems work.
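
To make that contrast concrete, here’s a rough sketch of a static ANN node next to even the crudest dynamical neuron model, a leaky integrate-and-fire unit. All names and parameter values here are illustrative, not from any real study:

```python
def ann_node(inputs, weights, bias):
    # Static: the output is a pure function of the current inputs (ReLU).
    return max(0.0, sum(x * w for x, w in zip(inputs, weights)) + bias)

def lif_step(v, i_input, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0):
    # Dynamic: the membrane voltage v carries history, so the same input
    # produces different outputs depending on the cell's recent past.
    v += dt * ((v_rest - v) + i_input) / tau
    if v >= v_thresh:
        return v_rest, True    # spike, then reset
    return v, False

v, spikes = -65.0, 0
for _ in range(1000):              # constant input, time-varying output
    v, fired = lif_step(v, i_input=20.0)
    spikes += fired
print(spikes)                      # the firing rate emerges from the dynamics
```

And even that is still a point process; real neurons add spatial structure and activity-dependent conductances on top.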

For instance, I have a colleague who studies advanced convolutional neural networks that reproduce much of human visual processing of 2D images. We discussed building a CNN to stand in for a detailed single-neuron model I made, to speed up some parameter searches. The requisite CNN was bigger than his model of the whole human visual cortex.

The issue is largely that adding detailed data about individual neurons to a large ANN is computationally expensive, and since the real neural details are largely unknown, it wouldn’t be more accurate, just bigger. Decades of ignoring all of the neural dynamics, though, has led the field to collectively pretend that they don’t exist. It is a real problem.

1 Like

Can appreciate that - my experience is mainly with electronics. SPICE models of transistors have 40 or more parameters, and the simulations are still regarded as limited guides to what can be expected from physical devices (for analog especially). Wire and silicon are eminently more amenable to modeling, with the phenomena occurring mostly in the EM domain (analog and temporal/spatial to be sure, with a healthy dose of thermal effects thrown in too). I can only imagine how the complexity explodes when biological elements, chemistry, and metabolism get involved.

That is concerning - I’ve been generally optimistic about the rising focus on ANNs over the previous era of top-down AI and expert-system approaches, but mistaking the map for the territory when the map was always meant to be a cartoon is bad news.

1 Like

If your optimism is about AI and what ANNs will be able to do, then don’t get discouraged. They’re amazingly powerful and growing more impressive all the time. It’s only if your optimism is about understanding what is going on inside your own head that I’d try to dampen your excitement.

The SPICE models aren’t too different in implementation from the cable models used for neurons. The biggest source of complexity is that biological conductances are all inherently activity-dependent, and they’re embedded in high-resistance membranes, which leads to ongoing, history-dependent interactions between different parts of a cell at multiple time scales. It can kick your ass pretty thoroughly to try to model it in detail.
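
For a flavor of that overlap: a passive patch of membrane is basically an RC circuit that you can forward-Euler integrate the same way SPICE steps a node. A toy sketch with illustrative constants (real membranes pile the activity-dependent conductances mentioned above on top of this):

```python
# One passive membrane compartment: C dV/dt = -g_leak*(V - E_leak) + I_inj.
# All constants are illustrative placeholders, not measured values.
C_M = 1.0       # membrane capacitance (uF/cm^2)
G_LEAK = 0.1    # leak conductance (mS/cm^2)
E_LEAK = -65.0  # leak reversal potential (mV)
DT = 0.025      # time step (ms)

def euler_step(v, i_inj):
    """One forward-Euler step of the passive membrane equation."""
    dv = (-G_LEAK * (v - E_LEAK) + i_inj) / C_M
    return v + DT * dv

v, trace = E_LEAK, []
for n in range(4000):                          # 100 ms of simulated time
    i_inj = 1.0 if 1000 <= n < 3000 else 0.0   # 50 ms current pulse
    v = euler_step(v, i_inj)
    trace.append(v)
print(round(max(trace), 2))  # relaxes toward E_LEAK + I/g_leak = -55 mV
```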

1 Like

I’m way out of my league here, but I had a couple of thoughts that might be of interest, or at least amusingly naive.

Is it really “AI” when I could write a simple GWBASIC program to do the pressure/temperature/humidity controller? Probably nothing more than a bunch of nested IF/THEN statements. Maybe even a .bat file if all my favorite utils are on the %PATH%. Or Fanuc PMC “ladder”, with or without Fanuc Macro B. A box full of NOR gates, even.
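
Something like this, say (a sketch in Python rather than GWBASIC; the sensor names and setpoints are all made up):

```python
# A rule-based controller of the kind described above: no learning,
# just thresholds. Every number and action name here is invented.
def control(pressure_kpa, temp_c, humidity_pct):
    actions = []
    if pressure_kpa > 101.5:
        actions.append("open vent")
    if temp_c > 22.0:
        actions.append("cooling on")
    elif temp_c < 18.0:
        actions.append("heating on")
    if humidity_pct > 60.0:
        actions.append("dehumidifier on")
    return actions or ["idle"]

print(control(pressure_kpa=101.8, temp_c=17.0, humidity_pct=55.0))
# -> ['open vent', 'heating on']
```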

But this won’t scale well due to [ exponential | logarithmic | factorial ] growth. Better languages, better algorithms, clever hacks will allow larger and larger nets, but the problem grows faster than the solutions.

But in principle, there’s nothing magic in “machine learning”, at least in these self-learning neural nets.

Also, I find it interesting that you supposedly can’t fold a piece of paper more than 7 times, and a neural network of 8 layers or more is required for anything useful. He didn’t mention 8, but it fits with my thought, so I’m going with it.

Wonderful video. I hope he has more. I smashed the Like button and subscribed 90 seconds in.

Again, forgive my ignorance. I know enough of the field’s words, jargon, and acronyms, and I can string them together well enough to fool my wife sometimes. I’ve no education in these matters.

1 Like

Well, I certainly learnt a few things there. A very good explainer!

This is surely enough of an excuse to plug the always splendid Grant Sanderson.

Also, for completeness:

This topic was automatically closed after 5 days. New replies are no longer allowed.