The idea that ChatGPT is any sort of intelligence is preposterous, and I think most of the confusion comes from a lack of understanding of what an inference engine even is.
Use the API yourself and set the temperature parameter (the “randomness” knob) to 0. ChatGPT will have perfect recall: it will give the exact same answer to the same input, forever. The only way its output varies is if you intentionally distort its algorithm. That’s not learning! If you program a robot to follow a specific set of brushstrokes to paint a picture, it will do that every time. If you add some stochastic randomness to each brushstroke (so that now every painting is unique), it didn’t “learn” anything. It just slightly fucked up everything it tried to do, which made it look “new.” Perhaps the most important part of that reality is that it also cannot learn anything from this randomness. It’s not aware of its output. It would not learn a thing if it painted 50,000 pictures.
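If you want to run that experiment yourself, here’s a minimal sketch using the OpenAI Python SDK (openai>=1.0). The model name and prompt are placeholders, and note one honest caveat: OpenAI documents temperature-0 runs (even with a fixed seed) as best-effort deterministic rather than guaranteed, though decoding is greedy in principle.

```python
# Minimal sketch: same input, temperature 0, run twice.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # greedy decoding: take the top-scoring token each step
        seed=42,              # best-effort reproducibility across runs
    )
    return resp.choices[0].message.content

# With temperature=0, both calls should print the same completion.
print(ask("Name the capital of France in one word."))
print(ask("Name the capital of France in one word."))
```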
ChatGPT is the same. Run it at its default settings and the sampling step algorithmically adds stochastic randomness to its output - essentially, it intentionally fucks up that perfect recall - to force it to give differentiated answers. But the model doesn’t know we’ve done this, and it learns nothing from it at all. It isn’t capable of the kind of self-reflection required to even notice. The worst thing we ever did was call an inference engine AI. These things are nothing like an emergent neural network. They aren’t even designed to learn! They’re designed to respond identically to the same stimulus from the same corpus, every time; then we fuck up their recall so they respond slightly differently at each decision gate.
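To make “decision gate” concrete, here’s a toy sketch of temperature sampling in plain NumPy. The scores are made up and this is obviously not ChatGPT’s actual code, but it is the standard mechanism: at every token choice, temperature 0 means always take the top score, and temperature above 0 means roll weighted dice.

```python
# Toy illustration of the temperature knob at a single decision gate.
import numpy as np

rng = np.random.default_rng()

def next_token(logits: np.ndarray, temperature: float) -> int:
    if temperature == 0:
        # No randomness: always pick the single highest-scoring token.
        return int(np.argmax(logits))
    # Temperature > 0: rescale the scores, convert to probabilities, sample.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.3, -1.0])  # made-up scores for 4 candidate tokens

print([next_token(logits, 0.0) for _ in range(5)])  # always the same index
print([next_token(logits, 1.0) for _ in range(5)])  # varies run to run
```

That `if temperature == 0` branch is the entire difference between “the same answer forever” and “a different answer every run”: nothing about the model changes, only whether we roll dice at each step.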