Yep, it’s the Singularity all over again, only this time with AI. Remember back in the ’90s and the Oughties, when the Singularity was just around the corner, when we were just hitting the steep part of the curve and the Rapture of the Nerds was imminent?
The problem is they’re still using the term “Artificial Intelligence” when they should be using what we used to call “expert systems”: relational databases and iterative algorithms, without any actual intelligence behind them beyond the programmers’.
As I understand it, the theory for how AI development is expected to go is something like this:
Hardware is developed that has, potentially, the power to emulate an actual intelligence. We may be there, or at least close, but…
Expert systems (what we’re currently calling AI) are then put to the task of developing new expert systems, and of making better chips/software/etc., which in turn improve the expert systems. This is ongoing.
Someone develops a theoretical model of how to create a self-aware, emotional machine capable of self-determination without turning everything into paperclips. Several such models have been proposed, but since even a complete understanding of the mind of a fruit fly has so far evaded us, I’m going to say we’re working on it.
Once such a model is created, you put the aforementioned expert systems on the task of actually coding it, under the guidance of actual people, to avoid paperclip issues. Again, this part may be ongoing, but we don’t seem to be there yet. Then again, this sort of project a) requires massive funding, probably from military or financial institutions, and is therefore secret, and b) given the possible dangers of an unfettered AI, should be air-gapped and otherwise kept from interacting with the real world.
The resulting AI is then tasked with creating a new, even better AI, wash, rinse, repeat.
And, after hopefully much testing, such a being (for it truly would be a being) could be introduced to the rest of the planet and it’s all cocoa and schnapps for everyone forever! Pretty sure this hasn’t happened yet.
I note that back in 2002ish, I attended a party in the Bay Area, though I was told they held it every month. People were driving up to this modernist mansion with laptops, pickups, and box trucks filled with cutting-edge equipment, even a few bits of interesting wireless telco gear pointed off at various corporate campuses and datacenters. Every few moments, someone would announce that they were booting up their AI, and grandiosely hit enter, or flip a power switch, and the drunken cheers and boos alternated as boot screens flickered to life and then inevitably died. They crashed, or just sat there as their creators screamed, sometimes in a direct “Young Frankenstein” callout, “Life, life, give my creation life!”
I’ve run a few models locally and like what they can do. It’s slower than ChatGPT and less ‘intelligent,’ but it’s also completely local. The models can be swapped out, and new open-source ones are getting better and better all the time, which improves the speed and accuracy of the responses.
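In case anyone wants to try it, here’s roughly what “running a model locally” can look like, using the llama-cpp-python bindings (one option among several; the model path and prompt below are placeholders, not a recommendation):

```python
# Minimal local-inference sketch. Assumes `pip install llama-cpp-python`
# and a GGUF model file you've already downloaded; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-7b-model.gguf", n_ctx=2048)

out = llm(
    "Q: Why are local models slower than a hosted service?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```

Everything runs on your own machine, and swapping models is just pointing `model_path` at a different file.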
We just bought several translation tablets for our library. They’re paired together in conversation with each side set to a given language. Translation is quick and uses AI, in the sense it’s being used in this article, to make appropriate matches. It is mostly a literal translation of what is said, although there is some limited degree of context. It’s a pretty neat system and given the range of languages spoken in our immediate community, quite helpful.
As a test, I threw some German hip-hop at it. I don’t speak German, but I know the context of the song well enough to know if the translation was more or less accurate, and it actually got the gist of the song correct, even with the speed of delivery. I do find this to be a useful application of “AI” tech, but it’s also a case with well-defined parameters and full acknowledgement by the developers (and, at least for us, the users) of its limitations!
It doesn’t handle idioms. Outside of a very narrow range, it has no idea about anything beyond the immediate translation. (The devs said that where it matters, it can adjust words within one or two steps, e.g., “brown dog,” but not more than that.) It definitely does not know what is being said! It’s a high-speed version of a language-to-language physical dictionary, albeit in a very convenient package.
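The “high-speed dictionary” framing is easy to demonstrate. A toy sketch of purely word-for-word translation (the lexicon is made up, and real devices do more than this) shows exactly where idioms break:

```python
# Toy word-for-word "translator." The German sentence is the idiom
# "Ich verstehe nur Bahnhof" -- figuratively "it's all Greek to me" --
# which a literal lookup renders as nonsense. The lexicon is a stand-in.
lexicon = {
    "ich": "I",
    "verstehe": "understand",
    "nur": "only",
    "bahnhof": "train station",
}

def literal(sentence: str) -> str:
    return " ".join(lexicon.get(w, f"[{w}]") for w in sentence.lower().split())

print(literal("Ich verstehe nur Bahnhof"))  # -> "I understand only train station"
```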
I find use cases like this to be good ones, beneficial even, but it’s not a miracle, it’s not good for completely open-ended uses, and anyone using it should be made very aware of its limitations. We’ve told our staff they have to be smarter than the plastic box in front of them and not to turn left off the bridge just because the GPS told them to.
Funny, most of the analysis I read by AI researchers (who aren’t actually in OpenAI) is that we’re already well into the curve, that it’ll start plateauing soon, and that we’ll be seeing diminishing returns rather than exponential improvements. (And for things like LLMs, that may even be optimistic: the recursive nature of the situation, where LLM-generated material is now contaminating the training material, is already making them worse in some ways.) I.e., all the really impressive, disruptive stuff has already happened.
And worse, they’re talking about how they’re going to need orders of magnitude more electricity, so we’ve got to get on to that whole “fusion” thing, chop chop. (As if glorified autocomplete destroying jobs was a good reason to develop unlimited cheap sustainable energy…)
Yeah, the AI people I read fit into that category (academic researchers, commercial programmers who aren’t part of the “OpenAI cult”) and they tell a very different story.
Yeah, spicy autocomplete simply does not lead to those kinds of sci-fi scenarios, no matter how good it gets. (Although there are a few sci-fi scenarios that involve “AI” that isn’t actually smart or sentient, those aren’t the ones they’re thinking about.)
Exactly this. Everyone misses what low-hanging fruit the AI image generators are, because they look at a picture and see a picture, and they don’t see the underlying structured numerical data that an AI can operate on (see the sketch below).
I’ve seen several people say “well why can’t an AI do my grocery shopping for me,” and it’s like, will someone please think of the data-wranglers.
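To make the point above concrete: to a model, a picture is nothing but an array of numbers. A minimal sketch (numpy and Pillow assumed; the filename is a placeholder):

```python
# A "picture," as a model sees it: a structured grid of numbers.
# Requires numpy and Pillow; "cat.jpg" is a placeholder filename.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("cat.jpg").convert("RGB"))
print(img.shape)   # e.g. (480, 640, 3): height x width x RGB channels
print(img[0, 0])   # top-left pixel: three integers in 0-255
```

Grocery shopping, by contrast, has no such tidy numerical substrate sitting there waiting; someone has to wrangle the data into that shape first.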
When I get time, I plan to look into that for classifying news articles. I already have a large database of previously classified articles to train it with, so it should be useful in that narrow, focused use as a tool where I review its output.
(That’s been on my to-do list so long that I was going to throw a simple Bayesian program at it, but here we are.)
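For the simple Bayesian version, scikit-learn gets you most of the way in a dozen lines; the tiny `articles`/`labels` lists below are placeholders for the existing database:

```python
# Sketch of a naive Bayes news classifier with scikit-learn.
# `articles` and `labels` stand in for the database of previously
# classified articles mentioned above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

articles = ["Fed raises interest rates again", "Local team wins the championship"]
labels = ["business", "sports"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(articles, labels)

# Classify new items, then review the output by hand, as described.
print(clf.predict(["Stocks slide on rate fears"]))
```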
Please help. I started this wine making kit and the yeast in it has been multiplying. And at first I thought that was good, right? That’s how you get wine. But I just realized that so long as they keep getting more food they will continue increasing exponentially. And I don’t think people are prepared for how quickly the entire planet is going to be transformed into yeast cells.
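If it’s any comfort, yeast growth is logistic, not exponential; it flattens out when the sugar runs out. A toy simulation (every number invented):

```python
# Tongue-in-cheek logistic-growth sketch: the bloom stalls at the
# carrying capacity of the wine kit's sugar. All numbers are made up.
capacity = 1e9   # cells the available sugar can support
cells = 1e3      # starting population
rate = 0.5       # per-step growth rate

for _ in range(60):
    cells += rate * cells * (1 - cells / capacity)

print(f"{cells:.3g} cells; the planet remains stubbornly un-yeasted")
```

Which is, more or less, the plateau argument from upthread, applied to fermentation.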