Noam Chomsky explains the difference between ChatGPT and "True Intelligence"

You and I are so much in agreement that the difference is negligible.

To be clear, I’m not talking about creating neural networks (hardware or software or both) that simulate human thought and consciousness. I’m talking about networks that achieve human-level thought and consciousness. But that’s centuries or maybe millennia away, so it’s purely academic.

Inputs could be simulated without making the network invalid. But now we’re debating the Caverns of Socrates, and that’s probably all hashed out.

10 Likes

I share his skepticism about the potential of statistical LLMs, but some of the forays into philosophy of science are really straining my preferred raised eyebrow.

Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience.

Aside from shoving centuries of observational astronomy into the ‘pseudoscience’ bucket because it wasn’t undertaken by physicists, that sounds like it might come as a bit of a surprise to the good Mr. Newton himself, who quite explicitly noted that he was (in a sharp break from a lot of prior mechanics, for which he got dragged about as much as his work’s obvious power and utility allowed) presenting an inductive generalization from the phenomena rather than a hypothesis.

It’s absolutely the case that Newton’s digestion of astronomical data into a compact set of simple mathematical generalizations is vastly better than what statistical models do to their training sets, which is to produce more or less wholly impenetrable black boxes. But while there is a profound difference to be had there, ascribing it to the difference between predictive and explanatory theories seems… off.
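For what it’s worth, the distinction the quote is drawing is easy to make concrete. Here’s a minimal toy sketch (my own example, not from the article): a plain curve fit that predicts a falling body’s position accurately while containing nothing that could be called an explanation of why it falls.

```python
# Toy illustration of "prediction without explanation": fit a curve to
# observed free-fall data and extrapolate. The fit predicts well, but
# nothing in it states or derives a law of motion.
import numpy as np

rng = np.random.default_rng(0)

# "Observations": distance fallen by a dropped object at sampled times,
# generated here from d = 0.5 * g * t^2 plus measurement noise.
g = 9.81
t = np.linspace(0.0, 2.0, 50)
d = 0.5 * g * t**2 + rng.normal(scale=0.05, size=t.shape)

# Fit a degree-2 polynomial purely from the data -- no physics involved.
coeffs = np.polyfit(t, d, deg=2)

# Extrapolate to an unseen time. The prediction is accurate...
t_new = 2.5
print(np.polyval(coeffs, t_new))  # ~30.66 m
print(0.5 * g * t_new**2)         # 30.66 m, the "true" value

# ...but the fitted coefficients are just numbers: the model encodes no
# concept of gravitation and transfers to nothing beyond this dataset.
```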

The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions. (Why is earth their natural place?) The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.

Why is it that the theory of natural place invites only further questions, while the theory that mass bends space-time apparently doesn’t? Again, an alleged difference between the merely predictive and the explanatory, flatly stated without support.

Then there’s just this:

Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.

It sure is a good thing that probabilities are in fact distinct from statements of the limits of rational conjecture, and that you’ve clearly established that. Otherwise I might have thought you were accusing humans and statistical models alike of learning by adjusting their inferences against available data over time, and further suggesting that humans may be the more limited of the two, since they only function effectively within the domain of certain biologically amenable language structures, while the bots can muddle their way through those as well as others.
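Incidentally, “trading in probabilities that change over time” is a perfectly mechanical thing to demonstrate. A minimal sketch (a toy count-based bigram model of my own devising, nothing like a real LLM’s internals): the estimate for the next word shifts as new evidence arrives, which is exactly the flat-earth/round-earth behaviour the quote complains about.

```python
# Toy count-based bigram model: next-word probabilities shift as data
# accrues. (Illustration only; real LLMs are not implemented this way.)
from collections import Counter, defaultdict

class BigramModel:
    def __init__(self):
        self.counts = defaultdict(Counter)  # prev word -> next-word counts

    def observe(self, sentence):
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def prob(self, prev, nxt):
        total = sum(self.counts[prev].values())
        return self.counts[prev][nxt] / total if total else 0.0

m = BigramModel()
m.observe("the earth is flat")
print(m.prob("is", "flat"))   # 1.0  -- all the evidence so far says "flat"

m.observe("the earth is round")
m.observe("the earth is round")
print(m.prob("is", "flat"))   # 0.33 -- the estimate moved with the data
```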

8 Likes

I, for one, have definitely met children markedly smarter than their parents.

What I’ve not met, and don’t expect to meet, is anyone who is sufficiently smarter than their parents to understand the mechanism by which they are smarter, much less turn that understanding into a recipe by which things smarter still might be produced.

10 Likes

Ha! You beat me. But I found a copy here that seems to load better for me: The False Promise of ChatGPT - Revista de Prensa

3 Likes

Noam Chomsky’s area of expertise isn’t AI. Neither is natural language processing; it’s just adjacent to his actual area of (academic) expertise, which is linguistics. (I think you could also say his long involvement in politics qualifies him as some kind of political expert as well, but that’s also not AI.)

As a person who recently did a course in natural language processing, I can tell you that the current state of the art involves ignoring most of Chomsky’s work on linguistics and grammar. Formal grammars are now generally considered a dead end for natural language – useful for simpler systems, but not for anything that has to capture nuance or cultural context. In that light, I have to say this reads more as a bitter reaction to the field moving on from his work than as a researched argument.

Chomsky has been tremendously influential in CS generally, but his work survives best in systems that deal with parsing formal grammars – programming and mathematical languages.
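For anyone who hasn’t met one, here’s roughly what a Chomsky-style formal grammar looks like in practice – a minimal sketch using NLTK’s toy CFG support (my example, not from the course mentioned above):

```python
# A tiny context-free grammar in NLTK -- the formalism that survives in
# programming-language parsers but stalled for natural language.
# Assumes nltk is installed (pip install nltk).
import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the'
N -> 'dog' | 'cat'
V -> 'chased'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the dog chased the cat".split()):
    print(tree)
# (S (NP (Det the) (N dog)) (VP (V chased) (NP (Det the) (N cat))))
```

The grammar parses exactly the sentences its rules license: precise, but brittle, which is why statistical models displaced this approach for open-ended natural language even as it thrives in compilers.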

Anyway I also happen to think his critique basically boils down to “but humans are special” without clearly articulating what makes us special, and he conflates general intelligence (which I feel pretty clearly we are on the road to) with human intelligence. At its best this is basically a straw man argument: obviously, no matter how sophisticated AI gets, it won’t be quite human. But not having human feelings or consciousness doesn’t necessarily mean AI won’t have feelings or consciousness of its own.

The only concrete thing I can identify in Chomsky’s argument as a uniquely human ability is the ability to formulate coherent theories and explanations in a way that a human can understand. It’s clear that ChatGPT doesn’t have direct experience of the world, and since it isn’t constantly being retrained, it doesn’t learn, so it can’t possibly meet that criterion. But it does do the latter part: explaining things that are within its experience to humans. It’s a bit strange to me to see half of your requirement fulfilled and say “well, you didn’t try to do the other half of my requirement, therefore that will never be done”.

If he thinks AI can’t formulate theories and explanations from direct experience, Chomsky needs to take a look at the broader world of AI research. For example, this Two Minute Papers video – DeepMind’s New AI: 10 Years of Learning In Seconds! - YouTube – describes an AI that uses pretraining about what kinds of rules can exist in the world to very quickly develop new theories and understandings when presented with novel situations.

All that’s missing here, as I see it, is to merge the experiential learning with the language model to get what Chomsky demands. Do we know how to do that today? No. Does that mean that independent research into these separate aspects of AI doesn’t help in pursuit of that goal?

I mean, I can’t be certain, but it really seems to me that general AI is something we’re closer to today than 10 years ago, and ChatGPT is a part of that. At this point I’m definitely expecting human-level general AI within my lifetime. No, ChatGPT isn’t it, but to say it’s not a step on the road is… a very confusing conclusion to draw right now.

11 Likes

Glad to see him speaking about a subject he actually is an expert in

3 Likes

Eh. Call me when they can have a backache.

13 Likes

But don’t you see… that’s what makes AI superior! It doesn’t have our human frailties! /s

4 Likes

My graduate mentor from 20 years ago was very enthusiastic about cutting-edge brain/neuroscience research. We were chatting about AI one day in about 2002 and she stated that it would be very difficult to conceive of an AI that could mimic human brains, because digital technology can’t replicate a “biological, dual-analogue difference generator”. And that’s the key…contrast and comparison between the hemispheres. We look for differences, exceptions, and anomalies to process information and cognition.

4 Likes

I know, right? It’s like Dawkins speaking about anything that isn’t evo/devo. When he speaks as a biologist I’m happy to listen. Otherwise, I kinda wish he’d STFU.

3 Likes

Yeh. Gotta hate a guy who’s written scores of books on politics and studied the subject for about 80 years. No way anyone could ever become an expert doing that. :roll_eyes:

10 Likes


I messed around with ChatGPT for an hour. I gave it the test I always give chatbots: I asked it to send me three messages, at random intervals over the course of five minutes, about whatever it wanted to say. I got one immediate reply and nothing more.

They have no will, no inner life yet. They are a stimulus-response machine only. I’ll start to get excited if I ever see one move beyond this stage.

3 Likes

At the end of the day, we aren’t magical, though. Cartesian dualism isn’t really defensible. Whether or not ChatGPT is really how humans work, we are really just the product of neurons (not that much more complicated than single-cell organisms) firing. I think biologists are more comfortable with this than others because we get that humans are just animals.

3 Likes

Well, you know, it’s the only game in town. How do firing neurons become *gestures at all of your consciousness*?

As scientists we like to say “it’s just this” or it’s “nothing more than that” but what are we even talking about? Why is reality not weird enough to be impressive to us?

This place is so weird we’re only just scratching at its surface. Consciousness emerging from biological life isn’t just a game of configuring Legos, there are multiple complex systems at different levels that are all interacting and giving rise to this incredible thing we experience.

Edit: Sorry for the bold italics, this system is incompatible with my usual usage of asterisks to denote actions…

3 Likes

…mimicking mimicry?

3 Likes

Because if something is based on a simple thing, it shouldn’t be that unbelievable that we can simulate it by computer. I’m old enough to remember when chess was thought to be so complicated that it would be ludicrous to imagine a computer beating a grandmaster. Then one did, and then it was “well, Go is far more complicated; mere chess isn’t AI” – and then that too was mastered. Now crappy undergraduate essays have been mastered, and we think that too isn’t really AI. With all this goalpost shifting, AI is by definition impossible.

2 Likes

That’s what I’m saying – the folly is in thinking consciousness is “based on a simple thing”. I’m saying that even if you could build perfect replica neurons and fit them together in a brain-like way, you still won’t get to your destination. You need, at the absolute bare minimum, all of the peripheral nervous system, the element of embodiment, and how that interacts with the rest of the universe – those feedback loops.

3 Likes

AI is possible, if you mean something that can learn some complicated rules, remix them, and spit back something coherent that follows the rules. We’ve got that, and it’s getting fancier every day.

But we haven’t made a mind with a self and a will, and it’s possible we won’t for a very long time, if ever.

3 Likes

On the other hand, there is the argument that biological simulation is unhelpful after a point. Studying birds did get us some of the way towards airplanes, but the people who thought airplanes should flap their wings lost out to airplanes as we know them (not that people haven’t continued research into ornithopters just for fun). Similarly, biology helped us get to neural networks, and from there to deep learning, but it isn’t clear whether we need to emulate biology further.

5 Likes