The A.I. Dilemma, a month-old reality check about artificial intelligence, already seems dated

Originally published at: The A.I. Dilemma, a month-old reality check about artificial intelligence, already seems dated | Boing Boing

2 Likes

Many, many years ago (25+), my friends and I would socialize throughout the day on a chat system that one of them had written and stuck on the net. Nothing big, just type text and see responses - think mini-Discord.

I decided to write a chatbot. Nothing fancy. It would greet people when they signed on, and if they sent a DM, it would send a response. It was an implementation of Eliza: all it did was rearrange what they said and send it back to them, with minor keyword triggering and a few random responses thrown in.
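
For flavour, a minimal sketch of that kind of Eliza-style responder (in Python; every trigger word and canned line here is invented for illustration, not the original code):

```python
import random

# Hypothetical Eliza-style responder along the lines described above:
# minor keyword triggering, pronoun-swapping reflection, and a few
# random canned responses thrown in.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}
TRIGGERS = {
    "hello": "Hi there! What's on your mind?",
    "because": "Is that the real reason?",
}
CANNED = ["Tell me more.", "How does that make you feel?", "Go on."]

def respond(message: str) -> str:
    words = message.lower().strip(" .!?").split()
    for word in words:                      # keyword triggering
        if word in TRIGGERS:
            return TRIGGERS[word]
    if not words or random.random() < 0.2:  # occasional random response
        return random.choice(CANNED)
    reflected = " ".join(SWAPS.get(w, w) for w in words)
    return f"Why do you say: {reflected}?"  # rearrange and send it back

print(respond("I hate my computer"))  # e.g. "Why do you say: you hate your computer?"
```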

I very rapidly came to the conclusion that people are very, very easy to fool regarding bots.

I have seen nothing in the current round of AI to change that opinion.

6 Likes

Yes indeed! However, you could understand what you wrote. Bolt on some neural networks (or their matrix-formulated familiars), add in some Bayesian statistics, and even you will not be able to (easily?) follow what your x-thousand lines of code are actually doing on any particular run. So (what I’m babbling about is) you’re quite correct that “people are very, very easy to fool regarding bots,” but this time around that set of people includes the programmers themselves; hence a lot of the “oh’m’gawd!” (“And why is the original text of the training set so often a ‘trade secret’?” Well, gotta get money involved there somewhere, don’t ya?)

6 Likes

My guess is a combination of:

  • As you say, an attempt to prevent/slow down imitators
  • An attempt not to get sued by anyone who has content in the training set and thinks they are owed money (regardless of what license they made that content available under – either stuff where I think it is fair game, or stuff where I would fully expect it to be protected)
  • Reduce the chance that someone calls results into doubt specifically because they don’t like one or more of the data sources (“look! you can’t trust it! it listens to the WHO about the Covid hoax!”). This one is pretty weak, since merely seeing “answers” they don’t like will cause complaints anyway (“OMG! It thinks the earth is round and that ‘the democrat’ has rights rather than should be used for target practice!”), so there’s no real need to worry about a critique based on the training set.

ChatGPT-3 is definitely a step up from Mark V. Shaney. It is really interesting seeing it do things like invent realistic-seeming academic sources to cite (as a byproduct of attempting to explain something where it looks like it would be good to cite a source). It is really good at making text that looks like it knows what it is talking about. It has caused me to spend some time thinking: what if we are wrong about it not thinking like humans? Not in the sense that ChatGPT understands what it is writing about, but what if humans don’t! What if we really are just as shallow as it is!
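
For contrast, Mark V. Shaney was a Markov-chain babbler: pick each next word based only on what came before it, learned from whatever text you feed it. A toy first-order version (the real thing used two-word prefixes; the corpus here is just a placeholder):

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: learn which words follow which,
# then generate text by a random walk over those transitions.
def build_chain(text: str) -> dict:
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def babble(chain: dict, start: str, length: int = 12) -> str:
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
print(babble(build_chain(corpus), start="the"))
```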

4 Likes

I will cite my own experience with ChatGPT-3 (and other tools, although I haven’t paid to see if GPT-4 is any better). My own specialist field is writing cryptic crossword clues. I thought I’d see what these tools could do, and they were gloriously and hilariously terrible. And yet this is a field that is built on recognised rules and structures (which it sort of understood), but also upon an understanding of language that requires an ability to deconstruct and remodel it in all sorts of seemingly unassuming ways, and that’s where it exposes itself. The best it managed was to keep offering me anagrams that simply didn’t work.
Which is why I don’t entirely agree that this shows we might be very shallow; if we were, then these sorts of word games wouldn’t work at all, since they depend upon wilful misreading and playful inverting of language forms. (Of course, you may be right in the sense that doing this isn’t exactly contributing to the sum of human knowledge!)
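
(The irony is that the specific failure, checking whether a proposed anagram actually works, is mechanically trivial: sort the letters and compare. A sketch, with made-up examples:)

```python
def is_anagram(a: str, b: str) -> bool:
    # An anagram must use exactly the same letters; ignore case and spaces.
    letters = lambda s: sorted(s.lower().replace(" ", ""))
    return letters(a) == letters(b)

print(is_anagram("carthorse", "orchestra"))   # True
print(is_anagram("carthorse", "orchestras"))  # False: one letter off,
                                              # the kind of near-miss it kept offering
```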

But so far, I am resting easy in my bed. Or, possibly, I should be getting annoyed at the fact that they are so incapable of helping me, in the way that they seem to be able to help, e.g. coders with routine function work. I imagine this is because the corpus it works from doesn’t include a lot of this sort of material. Yet.
(My biggest beef with ChatGPT-3 in particular is the way it keeps faux-apologising when you ask it to explain something obviously wrong, and then clearly just repeats the same mistake.)

1 Like

Oh hey, an hour-long video about artificial intelligence chatbots, whose thumbnail is the Cyberdyne Systems logo. Pass.

Coders don’t need help in writing code; they generally do that fast. Debugging is expensive. Like, by a factor of ten or more. So if an AI can write code 1000 times faster than I can but it has 1% more bugs, then it loses big (unless it can debug code!).
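
A back-of-envelope model of that trade-off (all numbers are assumptions: writing cost normalized to 1, the factor-of-ten debugging cost from above, debug effort scaling with bug count). The Amdahl’s-law point is that speeding up only the writing can never save more than writing’s small share of total cost.

```python
# Rough cost model; every figure here is an illustrative assumption.
WRITE = 1.0          # cost of writing the feature by hand
DEBUG = 10 * WRITE   # debugging costs ~10x the writing

def total_cost(write_speedup: float, extra_bugs: float) -> float:
    # Speeding up writing only shrinks the WRITE term;
    # extra bugs grow the (much larger) DEBUG term.
    return WRITE / write_speedup + DEBUG * (1 + extra_bugs)

print(total_cost(1, 0.00))     # 11.0   writing it yourself
print(total_cost(1000, 0.01))  # ~10.1  a 1000x writing speedup barely helps:
                               #        debugging dominates either way
print(total_cost(1000, 0.10))  # ~11.0  modestly more bugs erase the gain entirely
```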

4 Likes

The skillset needed to develop AI code is not the same as the skillset needed to validate it.

Coders don’t need help in writing code; they generally do that fast.

True. But I am starting to really like the predictive text that Visual Studio can do.

Debugging is expensive.

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” - Brian W. Kernighan

I try to keep this in mind when writing stuff.

I’ve been writing code in so many different languages over the decades that it’s very hard to keep track of syntactical differences, newer versions of libraries, API calls, etc. I’m great at writing pseudocode and requirements, though, so asking ChatGPT well-formed questions has been very handy for supplying code suggestions and examples I can use as a starting point.

This has significantly sped up my work, and a lot of my colleagues are using it similarly, so I’ve found it handy for that specific sort of use.
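
For concreteness, a hypothetical instance of that workflow (the task and code are invented, not from the thread): you know exactly what you want but not the current idioms, and a well-formed request yields a starting point like:

```python
# "Walk a directory tree and list files modified in the last 7 days, newest first."
import time
from pathlib import Path

def recent_files(root: str, days: int = 7) -> list[Path]:
    cutoff = time.time() - days * 86400  # seconds in a day
    hits = [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime >= cutoff]
    return sorted(hits, key=lambda p: p.stat().st_mtime, reverse=True)

for path in recent_files("."):
    print(path)
```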

This topic was automatically closed after 5 days. New replies are no longer allowed.