Noam Chomsky’s area of expertise isn’t AI. Neither is natural language processing; that’s merely adjacent to his actual academic field, which is linguistics. (You could argue that his long involvement in politics makes him a political expert of some kind as well, but that isn’t AI either.)
As someone who recently took a course in natural language processing, I can tell you that the current state of the art involves ignoring most of Chomsky’s work on linguistics and grammar. Formal grammars are now generally considered a dead end for natural language – useful for simpler systems, but not for anything that has to capture nuance or cultural context. In that light, his piece reads more like a bitter reaction to the field moving on from his work than a researched argument.
Chomsky has been tremendously influential in CS generally, but his work survives best in systems that parse formal grammars: programming languages and mathematical notation.
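To make that concrete, here’s a minimal sketch in Python using NLTK (my choice of library for illustration, not anything from Chomsky’s piece). A tiny Chomsky-style context-free grammar handles sentences built from its hand-written rules perfectly, and fails completely the moment a word falls outside them – which is roughly why the field moved to statistical and neural methods:

```python
import nltk

# A toy context-free grammar in the Chomskyan tradition: every rule
# and every word has to be written down by hand.
toy_grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> Det N
    VP -> V NP
    Det -> 'the' | 'a'
    N -> 'dog' | 'cat'
    V -> 'chased' | 'saw'
""")

parser = nltk.ChartParser(toy_grammar)

# Parses fine: every word and structure is covered by the rules.
for tree in parser.parse("the dog chased a cat".split()):
    print(tree)

# One unknown word and the grammar has nothing to say: NLTK raises
# ValueError ("Grammar does not cover some of the input words").
try:
    list(parser.parse("the dog chased a squirrel".split()))
except ValueError as err:
    print(err)
```

For a compiler, that brittleness is a feature – you want invalid programs rejected. For natural language, where people improvise constantly, it’s fatal.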
Anyway, I also happen to think his critique basically boils down to “but humans are special” without clearly articulating what makes us special, and he conflates general intelligence (which I think we are pretty clearly on the road to) with human intelligence. At best it’s a straw man: obviously, no matter how sophisticated AI gets, it won’t be quite human. But not having human feelings or consciousness doesn’t necessarily mean AI won’t have feelings or consciousness of its own.
The only concrete thing I can identify in Chomsky’s argument as a uniquely human ability is formulating coherent theories and explanations from direct experience, in a way that a human can understand. ChatGPT clearly doesn’t have direct experience of the world, and since it isn’t being continuously retrained, it doesn’t learn, so it can’t meet the first half of that criterion. But it does do the second half: it explains things that are within its training data to humans. It’s strange to see half of your requirement fulfilled and conclude “well, you didn’t attempt the other half, therefore it will never be done”.
If he thinks AI can’t formulate theories and explanations from direct experience, Chomsky needs to take a look at the broader world of AI research. For example, the Two Minute Papers video “DeepMind’s New AI: 10 Years of Learning In Seconds!” on YouTube describes an AI that uses pretraining about what kinds of rules can exist in the world to rapidly develop new theories and understandings when presented with novel situations.
All that’s missing, as I see it, is to merge that experiential learning with the language model to get what Chomsky demands. Do we know how to do that today? No. Does that mean that independent research into these separate aspects of AI doesn’t help in pursuit of that goal? Hardly.
I mean, I can’t be certain, but it really does seem to me that general AI is something we’re closer to today than we were 10 years ago, and ChatGPT is a part of that. At this point I’m definitely expecting human-level general AI within my lifetime. No, ChatGPT isn’t it, but to say it’s not a step on the road is… a very confusing conclusion to draw right now.