Noam Chomsky explains the difference between ChatGPT and "True Intelligence"

If “self” and “will” are actual concepts at all – it’s unclear whether they really are. Humans are egotistical creatures. We thought we lived at the center of the universe until Copernicus taught us otherwise. Then we still thought we were the pinnacle of God’s creations until Darwin. Concepts like “self” and “will” likewise seem to be part of human exceptionalism.

I asked it to send me three messages, at random intervals over the course of five minutes, about whatever it wanted to say. I got 1 immediate reply and nothing more.

ChatGPT doesn’t have an internal clock and doesn’t run unless it’s generating a message – you’d have just as much of a hard time if I asked you to call me on the phone at some random time tonight, after you go to sleep. It’s a nonsense request.

I don’t see how this proves that ChatGPT has no internal sense of self – it’s just paused while it’s not generating an immediate response.

It’s fundamentally not human, and you shouldn’t expect it to be when evaluating its intelligence. You have an internal biochemical clock wired into your brain, so you sense the passage of time innately. ChatGPT has nothing like that, so it can’t perceive or respond to the passage of time at all. Being unable to perceive something has no bearing on intelligence, though. A person missing some “normal” human sense like sight or hearing is just as intelligent as someone with the full complement, and we don’t doubt their internal self.

All that said, I don’t necessarily disagree with your assertion that it’s a simple stimulus/response engine – I only disagree that that is what makes it different from us. I could just as easily say we are simple stimulus/response engines, which happen to respond to stimuli beyond simple text prompts. Different in the particulars, but more than that? Unclear. The case needs to be better made.

6 Likes

A fantastic question, which won’t be answered if you give up on the possibility that they can! AI could be an excellent tool for us to explore that boundary where things go from “not conscious” to “conscious”, and what different kinds of consciousness are possible. Even a persistent inability to cross that threshold would be an interesting result… but you have to keep trying to know.

To me this seems preferable, as a way to explore the question, to the converse: selectively disabling parts of actual brains until they fall below the threshold. (I suppose non-permanent interference with brain operation can be interesting and illuminating. It’s hard to be really experimental, though.)

1 Like

Hopefully, someday, you’ll see how perilously narrow and insufficient this notion is, how it fails all of us. There’s a tragedy in how we’ve rationalized our way to reducing the nature of existence to these mechanical terms. But you’re very clever, I recognize that and where it comes from, so you’re probably not going to understand what I’m getting at. And I don’t have my argument ducks in a row, I just hear them quacking. Goodnight.

4 Likes

Not really, no. More like taking a photocopy of someone’s face over days and weeks; then their hands; then their arms; etc. etc. Piecing them together so that it looks like that person without looking like any of the individual photocopies, but it’s still not them. It’s still not someone wearing makeup and a costume and pretending to be that person.

2 Likes

There’s some guy named Baudrillard outside who’s just screaming “SIMULACRUM! SIMULACRUM!” over and over.

10 Likes

Hopefully someday you’ll see how perilously narrow and insufficient this notion is, that the world is a sphere orbiting the sun, and how we’ve rationalized our way towards reducing the nature of our existence to such heliocentric terms. In all seriousness, there should be no more controversy about modern neuroscience than about astronomy, but the problem lies in our university system. I was once a TA for one of those worthless “Biology and You” courses and wondered why we didn’t just have non-biology majors take introductory biology courses, but was told that they had tried that and too many people failed them.

1 Like

Very well put. Unlike the rest of us who are just experts because we have taken various random walks through the internet for (portions of) the past 30 years.

4 Likes

Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

This is one of the key failures of LLMs at the moment: they don’t know when they’re wrong, and they don’t know how to correct themselves when they are. They can only produce a reflexive response that acknowledges being wrong when challenged. Their future-prediction capabilities are limited in scope to parroting the speculation done by humans in the corpus they were trained on.

Their knowledge is essentially fixed in time. It’s also notable that the process of getting them to do new tricks, i.e. “Prompt Engineering,” is less about retraining and more about finding the right prompt to elicit a usable response out of the vast sea of information contained within.

This will change though. We are seeing the dawn of a new kind of intelligence. One that looks like us, but is not like us, and it will have different capabilities than us. Some of those capabilities will be greater than ours. Some of them will be considerably worse.

(side note: imagine combining an LLM with the IRL world model building of “DayDreamer.”)

1 Like

Yesterday, I asked ChatGPT to write Lisp code to find the biggest prime p less than some input n, and not to use loop. I ran its code and got 2. Since it keeps track of the session, I said, “your code is wrong,” and it asked me a question: could I tell it what was wrong? I said it output 2, and then it apologized (so polite!), explained what was wrong with how it was exiting / passing between function calls, and then output code that did actually work.
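For reference, a minimal sketch of what a working, loop-free answer might look like (this is my own hypothetical Common Lisp, not the code ChatGPT actually produced):

```lisp
;; Recursive primality test: trial division by d, d+1, ... with no LOOP.
(defun primep (k &optional (d 2))
  (cond ((< k 2) nil)
        ((> (* d d) k) t)
        ((zerop (mod k d)) nil)
        (t (primep k (1+ d)))))

;; Largest prime p < n, or NIL if there is none.
(defun largest-prime-below (n)
  (cond ((<= n 2) nil)
        ((primep (1- n)) (1- n))
        (t (largest-prime-below (1- n)))))

;; (largest-prime-below 100) => 97
```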

4 Likes

I would argue that these issues are rooted in the fact that our LLMs are not, so far, capable of reasoning.

In order to evaluate the accuracy or truthfulness of one’s statements, one must be able to reason about the underlying concepts behind those statements, identify any knowledge that might conflict with the statement, and reason about how to resolve the conflict.

Some say that “humans make mistakes all the time” but infallibility isn’t a mark of intelligence. A key feature of intelligence that appears to be missing from ChatGPT is that capacity to reason on an abstract level about the nature of the thing being considered.

5 Likes

There is evidence of abstract models existing within things like ChatGPT, which makes sense. Our language is a compressed, compact representation of the world we experience. It follows that a network with sufficiently broad exposure will create new compact representations within its layers that reflect the external world that influenced the language it was trained on.

What there isn’t evidence of is a network forming new models about the world and producing new lines of reasoning from them. All of the “new” things are perturbations of the training set. It only seems novel and interesting to us because it is operating with a dataset so large no human can wrap their own mind around it. It’s just new and novel to you, personally.

I remember exactly the same hype about expert systems and “fifth generation” languages like Prolog in the late 80s.

They’ll have to boil an awful lot of this sap to get a drop of useful syrup.

7 Likes

yeah. conversations about what is thinking and what is self have given many generations of humans something to talk about over beer, and probably always will. if those conversations ever turn up anything truly novel, i suspect it’s forgotten about by morning anyways

the practical real world implications of deep learning, ai, and robotics seem perpetually secondary, and we’re already living with the consequences: some good, some bad

like i don’t believe in a robot apocalypse, but for comparison: it’d be like if sarah connor was deeply concerned with what color the nuclear explosion was… because it doesn’t really matter.

he’s always been skeptical about them, and based his career on the older idea of symbolic learning… which from my armchair has always seemed even further away from the way humans work than llms and neural nets ( put me in the cybernetics column anyway: intelligence arising out of feedback loops )

same. i think he could shed that impression by simply saying he’s surprised by how effective llms have been. my impression is he always held that they’d produce no meaningful results

and yet, to circle back around, they’re already having important real world effects: some good, some bad

3 Likes

Sure about that?

ChatGPT and the like are already being widely used. :person_shrugging:

4 Likes

This is a perfect example. You had to call it out, and it used the context buffer and your course-correction prompt to produce a result that only became likely once it was told it had made a mistake.

For fun, try and tell it that it is wrong, and then give it a wrong answer instead. See how long it can keep up.

If you’re sufficiently stern in your (incorrect) course correction, it will produce results that reflect that. It’s a stochastic parrot. It reflects a response based upon the author’s prompt. Only one party in the conversation is doing any reasoning, and they have control, whether they realize it or not, over the ultimate tone and direction of the response :slight_smile:

4 Likes

all arguments about the existence of machine intelligence are required from this point forward to be written by chatgpt in the style of noam chomsky.

4 Likes

I think your definition of “reasoning” is flawed. Do you think culture and the attitudes of people we talk to don’t influence us? Otherwise, what is gaslighting?

In philosophy, the concept of logical reasoning is based on the observation that logical structure exists separately from the facts used as premises. I think what you’re observing is that ChatGPT does reason, and it does it pretty well – it’s just very susceptible to having its premises distorted, because it exists in an experiential vacuum.

Imagine you were locked in a sensory deprivation chamber all your life, from infancy, with only a text terminal to communicate with the outside world. If someone told you that airplanes fly through rock and not air, would you bother to challenge them? Why? It might contradict some things you read in the copious volumes of the internet you have access to, but lots of things you read contradict each other. Besides, even if you recognize it’s a game, why not play along?