Originally published at: Steve Wozniak says ChatGPT is useful, but “it can make horrible mistakes” (video) | Boing Boing
…
“It’s pretty impressive,” he continued. “But the trouble is … it can make horrible mistakes.”
[…]
doesn’t yet know how to convey “humanness” or “emotions and feelings about subjects”
[…]
“fails spectacularly” when it is asked for an “analysis or critique”
AI or corporate executive? You decide.
For all that we see these things as critiques, the humans who get paid to run the “slow AIs” see enough of themselves in these algorithms, and see the potential cost savings, that the rush to incorporate them is not going to be slowed down.
“When chatgpt assembles a lie it does so quite convincingly; in distinction to members of the republican party.” --Mary Chestnuttt
I am pretty sure somebody’s using ChatGPT to write news articles and post them to the internet to be picked up by Microsoft Start.
“Confidently wrong” is the most accurate description I’ve seen. ChatGPT is exceptional as a starting point for researching something or formulating an idea. But all it’s doing is assembling and processing the information it can find in the vast data sets at its disposal. It makes mistakes.
The problem is that it’s very good at syntax, grammar, and style. So what you read sounds convincing. It sounds right, even though it can be factually inaccurate. For instance, I’ve seen examples where it attributes major philosophical stances in economics to particular thinkers when, in fact, those thinkers were the primary detractors of those ideas. But because those thinkers wrote the most (and most definitively) about the ideas they were criticizing, ChatGPT attributes the ideas to them.
I know programmers who cite similar examples with computer code. The code is properly formatted. The syntax is correct. It abides by the rules of the language. But it doesn’t actually do what it’s supposed to do.
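A made-up illustration of what they mean (hypothetical, not from any real ChatGPT session): the function below is properly formatted, the syntax is valid, it even has a tidy docstring, and it still doesn’t do what it claims to do.

```python
# Looks fine at a glance: clean syntax, sensible name, tidy docstring.
def median(values):
    """Return the median of a list of numbers."""
    return sum(values) / len(values)  # wrong: this computes the mean

print(median([1, 2, 100]))  # prints 34.33..., but the median is 2
```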
BIG CAVEAT: This is a temporary thing. In the next five years, I suspect the models will be refined to the point that they have a shocking degree of accuracy. It’s both exciting and scary.
I’m just hoping that it leads to a functional Domestic Robot that can look after me in my dotage; so, some time in the next 5–10 years would be great.
I disagree. People keep treating ChatGPT as if it’s an AI because it is well-spoken, but it’s just a language model; there’s nothing there besides being well-spoken. Like all that nonsense about the racism-nuke, there should be absolutely no expectation that anything that comes out of ChatGPT has any sort of ethical or complex thought behind it; there isn’t even real decision-making there.
Even with advances, the only thing ChatGPT will replace is the most surface-level of copywriting. You simply can never trust it not to make obvious logical errors, since there’s no logical thought involved anywhere in the process.
</mini-rant>
I think “a blurry jpeg of the web” is better: ChatGPT Is a Blurry JPEG of the Web | The New Yorker
That’s my thought as well. At minimum, it requires an editor.
I asked it to create bullet points laying out the benefits of something my team supports (even though I basically have it all in my head anyway), and on a surface level, it worked. But you can’t just copy and paste that into a document for a presentation…
An enterprise should choose hyper converged infrastructure (HCI) over traditional three-tier infrastructure because HCI offers several advantages over the traditional approach. These advantages include:
- Simplicity: HCI combines compute, storage, and network resources into a single integrated system, making it much easier to manage and maintain than traditional three-tier infrastructure.
- Scalability: HCI is designed to be easily scalable, allowing an enterprise to quickly and easily add or remove resources as needed to meet changing business requirements.
- Improved performance: HCI uses modern technologies such as flash storage and virtualization to optimize performance, making it well-suited for high-performance workloads.
- Cost savings: HCI can be more cost-effective than traditional three-tier infrastructure, as it eliminates the need for separate storage and networking systems, reduces the amount of hardware required, and simplifies management overhead.
- Improved availability: HCI includes built-in redundancy and failover features, making it more resilient and less likely to experience downtime.
Overall, HCI offers a more streamlined, cost-effective, and scalable approach to enterprise infrastructure compared to traditional three-tier infrastructure, making it a compelling choice for many organizations.
Points 1, 2, and 4: pretty well on the money.
Points 3 and 5: so does three-tier infrastructure.
This +100.
At best it is one element that can be built into something that is actually “useful,” but by itself ChatGPT doesn’t do that much. People compare it to AI assistants like Alexa, Google Assistant, and Siri, but it is not like those things. Those assistants use a language model to parse a user request into a machine-useful representation, which is then tied into an ability to execute actions based on the query, such as “look up information on a website or database” or “turn my thermostat up.” ChatGPT (mostly) only does the following: given a series of text messages, predict what “looks like” the next message in the sequence. It doesn’t have to turn the query into any semantic representation; the ML model just spits out the likely next response directly.
In previous versions, if you asked it “what time is it,” it would confidently answer “9:42 pm” or whatever, because that is what the next step in the conversation looks like; it just skips the part of actually looking up the time. ChatGPT actually tries to detect this sort of thing now and responds that “it’s just a language model, I don’t have access to that information.” So they are working on it, but it’s still possible to trip it up, and there is a very long tail of cases that are hard to handle.
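To make that concrete, here’s a deliberately tiny sketch of the idea: a toy bigram model over a made-up corpus. This is not how ChatGPT is actually built (real systems use transformers over tokens), but the shape of the operation is the same: emit whatever statistically looks like the next word.

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which,
# then emit statistically plausible next words. No clock, no facts.
corpus = "what time is it ? it is 9 pm . what time is lunch ? it is noon .".split()

following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

word, reply = "it", ["it"]
for _ in range(3):
    word = random.choice(following[word])  # sample a likely successor
    reply.append(word)

# Often prints something shaped like an answer ("it is 9 pm"),
# delivered confidently, without ever looking up the time.
print(" ".join(reply))
```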
I think it very much remains to be seen what actually useful tools can be built with what is definitely a very sophisticated language model, but one that is very hard to tie into anything else.
Yeah, I don’t think it will be ethical or complex in 5 years. Just accurate. It might not be ChatGPT. It might be some other ML or Deep Learning model (maybe even a novel approach), but if I had to bet, I’d bet that someone will figure out how to get there.
Computers have apparently mastered the art of rhetoric: the ability to sound confident, regardless of whether you’ve got your facts straight. In logic, this is known as a valid argument, which may not in fact be a sound one; an argument can be perfectly valid in form while resting on false premises.
Something that came up on The Vergecast today really drove home how convincing ChatGPT can be. They asked the new Bing with ChatGPT something about “Why is the Verge always picking on Elon Musk?”
It reeled off a bunch of stuff about a “feud based on mutual respect” (which they found hilarious, obv) and included an anecdote about the time that Elon retweeted a Verge tweet critical of him, saying something about “how to drive clicks to a dying website.”
Several of the journalists (who work at the Verge) were like “Oh wow! I forgot about that!” before everyone realized that it had never happened.
The Bing thing works a little differently from straight-up ChatGPT in that it has access to the current web (regular ChatGPT’s training data cuts off a couple of years ago). It also summarizes stuff that it finds on the web and can provide citations showing where it got the information in its answer.
In this case, it had pulled that from an article on the web that had a correction at the top saying the incident had never happened. ChatGPT did not pick up that part, though. Another source it was using was thevergevip-dot-com (don’t want to link to it), a site that illegally scrapes and republishes Verge articles and that the actual Verge is constantly hitting with takedown notices.
It’s worse than garbage in, garbage out. It’s garbage in, add some more garbage, confidently presented shit out.
Honestly, I have never felt the Internet was more doomed.
It’s a statistical averaging of all written language. That’s it. There’s no “there” there.
Source: An AI engineer retired after 25 years who’s really goddam frustrated with the constant misrepresentation of what ML models actually are. It’s me. I’m talking about me.
Oh, that’s interesting. I’ve noticed that Python example code rots pretty fast sometimes, especially if it uses new frameworks and libraries.
Usually it’s not much: a library repository moved to a different site or project, quickly corrected by a thinking being. Other times it’s a major version bump, and you either look for a newer example or write your own code.
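A real example of the version-bump kind, from scikit-learn: train_test_split lived in sklearn.cross_validation until 0.18, and the old module was removed entirely in 0.20, so countless older tutorials (and anything trained on them) show an import that no longer works:

```python
# Old tutorials use the pre-0.18 import, which modern scikit-learn rejects.
try:
    from sklearn.model_selection import train_test_split  # 0.18 and later
except ImportError:
    from sklearn.cross_validation import train_test_split  # pre-0.18

train, test = train_test_split(list(range(10)), test_size=0.3)
print(train, test)
```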
When ChatGPT is regurgitating answers to exam and job interview questions, that two-year gap could bite it on some topics.
OpenAI also has a separate, programming-specific model that is much more effective than ChatGPT at generating example programs. AI code generators still have a lot of the same underlying problems as ChatGPT but are much more advanced.
Technically, ChatGPT doesn’t have access to the web at all. It is a language model trained off of a snapshot of the web from a few years back. I know it sounds a bit persnickety, but I think it’s an important distinction. ChatGPT can’t look at a query, realize that its confidence is low because the subject wasn’t well captured in the model, and go back to the original training data to pull a fact from even 3-year-old Wikipedia.
That’s why broad summaries of well-known and often-repeated facts are done well, but if you ask about some minor detail of the same events, it just makes something up that sounds right. Those minor facts never made it into the model, even if they were in the training data.
You are correct that I meant the trained model in the case of the regular ChatGPT that is available to everyone.
So, more accurately, it sounds like in addition to having a more recent cutoff date for its training data (I don’t know if it is really up-to-date or just very recent), Bing with ChatGPT has the ability to perform (Bing) searches on the web in real time and incorporate summaries of that information into its answers.
That is what I was trying to get across: that there are two different sources in a Bing AI result, garbage it made up (using the language model) and garbage it found on the internet and summarized.
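In code terms, the pipeline looks roughly like this sketch. Both functions here are stand-ins I made up to show the shape of it, not Bing’s actual implementation:

```python
# Toy sketch of a search-augmented answer. Both functions are stand-ins.

def web_search(query):
    # Stand-in for a live Bing search; can return corrected articles
    # and illegally scraped copies alike.
    return ["Verge article (with correction: the retweet never happened)",
            "thevergevip copy of the same article (no correction)"]

def language_model(prompt):
    # Stand-in for the LLM: it may summarize the sources, drop the
    # correction, or blend in things it "remembers" from training.
    return "Musk once retweeted a Verge article, calling it a dying website."

def answer(query):
    context = "\n".join(web_search(query))  # garbage it found on the internet
    prompt = f"Sources:\n{context}\n\nQ: {query}\nA:"
    return language_model(prompt)           # garbage it made up

print(answer("Why is The Verge always picking on Elon Musk?"))
```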
Which means it can confidently lie about current events more effectively, which seems bad.
All of this is based on what I have heard and read about it, of course, since I don’t have access to it myself.