Watch Steve Jobs describe ChatGPT decades before its creation

Originally published at: https://boingboing.net/2024/05/24/watch-steve-jobs-describe-chatgpt-decades-before-its-creation.html


[GIF: The X-Files “sigh”]


Same. At least Jobs had enough common sense to pull the plug on technologies that weren’t working out as well as envisioned, like the Apple Newton.


The first “AI” chatbot, ELIZA, was created in the mid-1960s. Multiple implementations of ELIZA and its successors existed for the Apple II and Macintosh. Jobs would not have been unaware of the concept. The implementations have since become considerably more sophisticated (using artificial neural networks, a concept that actually dates back to the 1940s), but some of the same limitations still apply.
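
For anyone who’s never looked under the hood: the core of ELIZA is nothing more than keyword pattern matching against a script of canned responses. A toy sketch in Python (these rules are invented for illustration, not Weizenbaum’s original DOCTOR script):

```python
import random
import re

# Keyword pattern -> canned replies; {0} is filled from the regex capture group.
RULES = [
    (r"\bI need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bI am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bmy (.*)", ["Tell me more about your {0}."]),
]

def respond(text):
    for pattern, replies in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Please, go on."  # stock fallback when no rule matches

print(respond("I am worried about my Newton"))
# -> e.g. "How long have you been worried about my Newton?"
```

That’s the entire trick; there’s no model of the world anywhere in it.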


The contention that “capture the underlying worldview” describes what LLMs do seems to be doing a lot of heavy lifting.


“Watch Steve Jobs get credit for cribbing an idea as old as sci-fi itself”


It sounds like this is the kinda thing he was going for, but of course it’s really not great:


Exactly. That description fits “ChatGPT” no better than pretty much any other description of an AI in sci-fi from the past 50 years.

I recently read Stanislaw Lem’s Fiasco and I think it in fact had an AI in a bust of Aristotle, saying Aristotle-like things.


See also PADDs from Star Trek… “OMG, Star Trek PREDICTED genius man Jobs!!!” No! He stole the fucking idea from Star Trek!!!


All the questions I would want to ask Aristotle are about why he thought the things he wrote; if the reasons aren’t part of his writing, they’re not things an LLM could answer at all.

[Comic: Bob the Angry Flower, “Aristotelean”]


Yeah - an answer. Like the answer you’d get from an Einstein voice impersonator helping you with astrophysics.


Could AI Aristotle answer why AI Jobs is wearing a cast bronze shirt?


The final cause of the cast bronze shirt is to provide structural stiffness to the thinnest CEO in the industry with the fewest buttons possible.


I blame the failure of ChatGPT and all its ilk on the lack of sanitizing of the corpus they draw from; scraping forums like Reddit, any/all of the chans (4/8/etc.), and/or Wikipedia is NOT a good starting point, and it’s showing.

If a team of people sat down and distilled a copy of Wikipedia (and other sources) down to only verified, correct, and factual information and then fed that to the LLM, that might be a good start (along with LOTS of debugging, and having the damn LLM show its logic chain as it responds, to make sure it’s doing what’s expected).

:: wanders off to beat a storage appliance into submission ::


As great as the idea is, it just wouldn’t work. LLMs are such good language models precisely because they are so large and have consumed so much text. The amount of “good quality, peer reviewed, verifiable” text is paltry in comparison; it simply wouldn’t have the statistical weight to train the model.

You can do clever things where first you take a model that has been trained on a huge corpus, so that it models how language works, and then re-train it on higher quality stuff. I think that will probably be the future.
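
Roughly, that two-stage idea looks like this with the Hugging Face transformers library (a minimal sketch; the gpt2 base model, the curated.txt file, and the hyperparameters are stand-ins, not anyone’s actual recipe):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # stands in for any model already pretrained on a huge corpus
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# "curated.txt" is a hypothetical file of vetted, higher-quality text.
dataset = load_dataset("text", data_files={"train": "curated.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Continue training (fine-tune) the pretrained weights on the curated data.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The second pass nudges the output distribution toward the curated text; the language competence still comes from the huge first pass.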

It won’t be able to give you a logical chain of inferences, though, because that’s not how it works. In the 80s and 90s they tried to make AIs that chained together sets of facts with logic and they got nowhere.
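
(For context, “chaining together sets of facts with logic” meant expert systems: hand-written facts plus if-then rules applied until nothing new follows. A toy forward-chaining sketch, with made-up facts, just to show the contrast:)

```python
# Facts and rules are explicit symbols; inference is applying rules to a fixpoint.
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# Every conclusion here is auditable back to a rule; an LLM's weights are not.
```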

That said, I like to rag on AI too, but the big ones are honestly right much more often than not across a huge variety of topics. (Well, it’s all philosophical bullshit because it doesn’t know or care if it’s right, but it does resemble rightness.) Yeah, you can find tons of examples online like putting glue in pizza sauce, but try having a conversation with one about a historical topic or scientific concept.

Go ahead and strike up a conversation with ChatGPT 4 about, say, the French Revolution, and see how long it takes for it to give you a factual error. (And link to the conversation!)


AIristotle. It’s right there! Come on…

That’s a problem too, at least on topics that are highly controversial. And we can’t always depend on the core editors not to be biased…


Unfortunately, AI isn’t based on the wisdom of Aristotle; it’s based on posts from Reddit, Twitter, Facebook, etc…

When you give megaphones to everyone indiscriminately, you may accidentally glean some wisdom, but it will be buried within an ungodly amount of stupidity and noise. Calling that “intelligence” is horribly misleading, and I can think of no worse idea than allowing Big Tech or Politicians to define “wisdom” or “truth”. (You can’t separate wheat from chaff without starting from a definition of what constitutes wheat.)

And even if systems like ChatGPT were trained exclusively on quality content, verified and curated without any bias - it would

a) not understand any of that data (like a human could) because it can’t think,

b) still come up with wrong answers/hallucinate stuff; the Bob the Angry Flower cartoon above is an excellent example of quality content (Aristotle) being factually wrong (his physics has long since been superseded) - because it can’t think.
