Watch Steve Jobs describe ChatGPT decades before its creation

As great as the idea is, it just wouldn’t work. The reason LLMs are such good language models is precisely because they are so large and have consumed so much text. The amount of “good quality, peer-reviewed, verifiable” text is paltry in comparison; there simply isn’t enough of it to carry the statistical weight needed to train the model.

You can do clever things where you first take a model that has been trained on a huge corpus, so that it models how language works, and then re-train it on higher-quality material. I think that will probably be the future.
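The two-stage idea (pretrain on a big noisy corpus, then re-train on a small curated one) can be illustrated with a deliberately tiny toy: a bigram count model where the fine-tuning pass gets a higher per-example weight. This is a hypothetical sketch to show the shape of the idea, not how real LLM fine-tuning is implemented.

```python
from collections import Counter, defaultdict

def train_bigrams(counts, text, weight=1):
    """Accumulate weighted bigram counts from a corpus."""
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += weight
    return counts

def most_likely_next(counts, word):
    """Return the highest-count continuation for a word, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

counts = defaultdict(Counter)

# Stage 1: "pretrain" on a large, low-quality corpus (toy stand-in).
noisy = "the cat ate fish " * 50
train_bigrams(counts, noisy)

# Stage 2: "fine-tune" on a small, curated corpus, weighted heavily
# so the little high-quality data can shift the model's preferences.
curated = "the cat sat politely"
train_bigrams(counts, curated, weight=200)

print(most_likely_next(counts, "cat"))  # prints "sat", not "ate"
```

The point of the weighting is the same trade-off the post describes: the curated text is far too small to train a model from scratch, but once the bulk statistics exist, it can steer them.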

It won’t be able to give you a logical chain of inferences, though, because that’s not how it works. In the 80s and 90s, researchers tried to build AIs (expert systems) that chained sets of facts together with formal logic, and they got nowhere.
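For context, the "chain facts together with logic" approach looked roughly like this toy forward-chaining engine: rules fire when all their premises are known facts, adding conclusions until nothing new can be derived. (An illustrative sketch with made-up rules, not any specific historical system.)

```python
def forward_chain(facts, rules):
    """Apply rules of the form (premises, conclusion) to a fact set
    until no rule adds anything new, then return all derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("has_fur", "gives_milk"), "is_mammal"),
    (("is_mammal", "eats_meat"), "is_carnivore"),
]
derived = forward_chain({"has_fur", "gives_milk", "eats_meat"}, rules)
print(sorted(derived))
```

The appeal was an auditable chain of inferences; the problem was that hand-encoding enough facts and rules to cover the real world never scaled, which is why the approach stalled.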

That said, I like to rag on AI too, but the big ones are honestly right much more than they’re not for a huge variety of topics. (Well, it’s all philosophical bullshit because it doesn’t know or care if it’s right, but it does resemble rightness.) Yeah, you can find tons of examples online like putting glue in pizza sauce, but try having a conversation with one about a historical topic or scientific concept.

Go ahead and strike up a conversation with ChatGPT 4 about, say, the French Revolution, and see how long it takes for it to give you a factual error. (And link to the conversation!)
