Originally published at: Noam Chomsky explains the difference between ChatGPT and "True Intelligence" | Boing Boing
…
I have been in a bunch of online arguments recently with ChatGPT/AI true believers who claim that humans are really just statistical linguistic engines, and that there is therefore no fundamental difference between what ChatGPT is doing and human (or even non-human) intelligence. Glad to see some heavy hitters coming in on my side.
Here’s the essay:
Glenn Greenwald says, “I’m not endorsing any or all of this.” He has reservations about Noam Chomsky’s views even when Chomsky is writing on his actual area of expertise.
It’s a pleasure to read Chomsky when he’s back on track. Greenwald just resents this as a distraction.
Every computer scientist has been screaming this from the rooftops for the entire time. Of course things like ChatGPT are not intelligent, nor can they ever be. But if it takes some fancy philosophers to come in and write op-eds to get the mainstream to understand that, so be it.
It’s not just that these algorithms are not intelligent- it’s that they are not even on a road to intelligence. People seem to always assume that true AI is a path we are on, and it will come from “ChatGPT, but more so”. That’s like saying if you breed really really really good horses, you’ll get a Model T. The current statistical engines are neat tricks that you can do with huge data sets and fast computers. That’s all. They’ll never be more than that.
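To make the “neat trick” concrete, here’s a toy sketch (my own, and deliberately crude next to a real LLM, which uses a neural network over billions of parameters rather than a lookup table) of what a purely statistical language engine amounts to: tally which word follows which in a corpus, then sample from the tallies. The corpus and names are made up for illustration.

```python
# Toy "statistical engine": count which word follows which, then sample.
# Real LLMs are enormously more sophisticated, but the training objective
# is the same kind of thing: predict the next token from what came before.
import random
from collections import defaultdict, Counter

corpus = "the horse ran fast . the horse ate hay . the car ran fast .".split()

# Build a bigram table: how often each word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Emit words by sampling each next word in proportion to observed counts."""
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the horse ran fast . the car ran fast"
```

Scale the table up by a dozen orders of magnitude and swap the counts for learned weights, and you get fluent text, but the mechanism is still next-token prediction, not understanding.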
I think of “artificial intelligence” kind of like “artificial turf.” No matter how technologically advanced it becomes, it will always be fundamentally different from the thing it’s standing in for, even if it is an acceptable substitute for narrow use cases.
Okay, so they never will fully replicate human thought and behavior.
But aren’t Chomsky et al. sidestepping more pressing and immediate issues? On college campuses, humanities departments are jumping up and down over the implications for student work, and even over how to teach now that AI writing programs are somehow on the table (since telling students to ignore such AIs ain’t gonna stop them using them). In many job fields, workers are already being replaced by writing programs. Not to mention that most of us may have already read something supposedly written by a human that wasn’t. Sure, careful, educated readers can still sniff out robotic prose and content, but many people cannot.
Ignoring such new and pressing concerns makes this piece, intellectually stimulating as it is, seem kind of above it all to me.
That is why we should communicate more in gifs!
Kevin Drum had an interesting response to Chomsky et al. on this topic. His basic takeaway is that they are massively overselling the power of the human brain, as evidenced by real-world experience. ChatGPT and its brethren will only continue to improve, and we can’t imagine where they will be in 10 years.
You mean the AI being made by human brains is gonna be better than those human brains?
Talk about missing Chomsky’s point. Until ChatGPT is self-aware and experiencing emotions, it can never produce real writing. It can only copy from a large database. Even mimicry is currently outside of its capabilities. It may look like it’s mimicking someone’s style, but what it is really doing is plagiarizing from terabytes of samples.
Which… I mean, I’m personally not convinced that’s a possibility, honestly. I think we’ve all been fed years of sci-fi that has made it seem like one, but I just don’t know… The technology we all currently live with isn’t exactly as impressive as 20th-century sci-fi and futurist thinking led us to believe it would be… We got most of the shitty parts of cyberpunk with almost none of the cooler-seeming parts…
Chomsky is probably the most prominent linguist of his age, and was an important figure in the development of modern theories of computation (see the “Chomsky hierarchy”). On most subjects I’d be pretty skeptical of him, but on the subject of what does and does not qualify as natural language understanding, I’m going to give him a pretty respectful hearing.
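For anyone unfamiliar with the hierarchy, here’s the standard textbook illustration (my own sketch, not from the essay): the language a^n b^n (some number of a’s followed by the same number of b’s) can’t be recognized by any finite-state machine, because it requires unbounded counting, but a single counter standing in for a pushdown stack handles it. Separating what different classes of grammars can and can’t express is exactly what that hierarchy is about.

```python
# Recognize a^n b^n with a counter standing in for a pushdown stack.
# No finite-state machine can do this: it would need a distinct state
# for every possible count of unmatched a's, and there are infinitely many.

def is_anbn(s: str) -> bool:
    """Return True iff s consists of n a's followed by exactly n b's."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:           # an 'a' after a 'b' is out of order
                return False
            count += 1           # push
        elif ch == "b":
            seen_b = True
            count -= 1           # pop
            if count < 0:        # more b's than a's so far
                return False
        else:
            return False
    return count == 0            # every 'a' matched by a 'b'

assert is_anbn("aaabbb") and not is_anbn("aabbb") and not is_anbn("abab")
```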
I think it is, but so far away that it might as well be impossible.
There’s nothing magical at the microscopic level of the brain that precludes it from being mimicked in a sufficiently powerful computer/software. It’s just way, way beyond current technology.
Probably not, but even if it acts just like a human, is it really gonna be sentient? And how are we defining that anyway? I mean, it’s only been fairly recently that we’ve started to recognize non-humans as having some form of sentience, sharing characteristics we’d previously insisted were special to us… Or maybe we’re still figuring so much of this shit out that it’s gonna be hard to say where it goes and how far it can go?
It also strikes me that Drum thinks the only thing that is “impressive” about humans is when they create “new” things… but the fact that we can learn, we can teach, we can have empathy, we can abstract, make shit up… I mean, we built all this, good and bad… I think he’s just being a misanthrope, ignoring that human beings - even the ones he seems to believe are beneath him because they are “dumb” or WTF ever - do amazing things with their lives… I think that’s probably just the Pratchettian optimism in me…
Even if you were able to digitally model a human brain down to the individual atom, that would only be the beginning of the challenge of creating a digital human intelligence, because a brain sitting around by itself isn’t going to develop anything resembling a healthy human mind.
So you’d need to model a human body with all its sensory inputs and chemical signals and hormones and such. Then you’d need to model a lifetime of stimuli and interaction for that mind to develop in, complete with a rich and complex environment including other human constructs. Before you know it you’re stuck having to model a whole damn solar system.