2-5 years to sci-fi level AI? Former OpenAI employee sounds alarm

… And my money is still on that cockroach, TBH.

5 Likes

Yeah, I wouldn’t personally consider deepfakes and voice cloning to be “AI” in any real sense, but it’s definitely out there causing harm.

Almost as bad is the fact that people who really do get recorded doing reprehensible stuff can now make plausible-sounding denials and claim that the evidence is fake.

4 Likes

My feelings exactly. Thanks to everybody chiming in and exposing this “2-5 years” narrative for what it is: a puff piece entirely at odds with reality.

2 Likes

And that’s a (relatively) cool and interesting use… and it’s really not worth it.

2 Likes

That depends. Do you want it to navigate a complex and variable environment in an intelligent manner, or do you want it to write a generic five paragraph essay plagiarizing the wikipedia article on Hamlet?

4 Likes

That’s why the “unexpected behavior and radical improvements” predictions for AI are arguably much more optimistic than the more conservative ones.

If ‘AI’ ends up doing more or less what it does now, with gradual improvements in how often it goes off the rails, it will be a reasonably reliable and efficient (not in the energy sense, of course) tool for doing mop-up in the class wars.

If an incomprehensible machine god emerges to visit its unfathomable designs upon reality, the techbros will be meat-insects along with everyone else, which will be something.

1 Like

That first part sounds more pragmatic, that’s for damn sure.

2 Likes

I’m deeply skeptical of how far the Stochastic Parrot model of AI can be extended, but I am in no doubt that there will be massive disruption over the next 5-10 years which will take generations to correct.
Office work is going to be decimated by the current wave of AI; companies will shed staff and replace them with AI without the implications for social services, unemployment, taxation, etc. being addressed. We will need to make fundamental changes to our society to cope.

And trying to clean up the mess is going to be hard/impossible.

AI might force a change to UBI or something similar but we are going to go through a hell of a ride over the next few years and we can’t expect the companies implementing the changes to care about what happens to us.

… except the whole point of an exponential curve is that every part is equally steep

[image: semilog plot of an exponential curve]
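To spell out why (my own gloss, not the original poster’s wording): an exponential always grows by the same fraction of its current value, so on a semilog plot it is just a straight line,

$$\frac{d}{dx}\,e^{kx} = k\,e^{kx}, \qquad \log\!\big(e^{kx}\big) = kx,$$

which means there is no special “knee” where the curve suddenly takes off; the apparent knee is an artifact of plotting it on a linear axis.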

4 Likes

I can’t say for sure what the future will bring, but from what I’ve seen AI isn’t actually very good at replacing human beings for most kinds of work, even office work.

5 Likes

You are not saying incompatible things. It’s not like counterproductive things don’t get put into wide use if there’s a hint, even if it’s an illusion, of saving money. See: open plan office, SAP.

5 Likes

I posted this before.

4 Likes

In our current world of rewarding CEOs for achieving short-term profits without worrying about long-term sustainability, it doesn’t need to be good at office work, just good enough to replace people in the short term.

And, working in an office and having some experience with AI, I can see a lot of tasks that can be automated, and that will translate into lost jobs.

2 Likes

That analysis is so derivative.

5 Likes

I was wondering when LLMs would noticeably reach the “dog eating its own vomit” stage.

4 Likes

To be fair, that can seem kind of magical.

2 Likes

It’s unclear to me if it’s actually noticeable yet - some simple alterations to the system have caused certain functionality to get worse all on their own. I get the impression there’s not enough LLM vomit-eating to make much of a difference yet, but that could change quickly as the internet becomes overrun with LLM garbage.

2 Likes

I’m so happy to hear someone else talking about how this isn’t really AI. Also that it has serious issues.

I literally just read an article a few days ago saying that in 2026 OpenAI is going to run out of data to train its LLM (which means every other similarly trained model that started around the same time will also run out of data around then). At that point the only option these companies have to continue training their models is to copy them and let the copy write text for the original model to consume. This is a cul-de-sac, and it reminds me of the old adage I learned in computer class so long ago: “garbage in, garbage out.”
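As a rough illustration of why that’s a dead end (a toy sketch in Python, nothing like actual LLM training), repeatedly fitting a simple model to samples drawn from its own previous fit shows how information gets lost once a system only ever sees its own output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model trained on its own output": generation 0 is the real data,
# modelled here as N(0, 1). Each later generation draws synthetic samples
# from the current fit and refits only on those samples, so whatever the
# previous model failed to capture is gone for good -- garbage in, garbage out.
mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
n = 200                # synthetic samples produced per generation
for gen in range(1, 11):
    synthetic = rng.normal(mu, sigma, n)           # output "written" by the current model
    mu, sigma = synthetic.mean(), synthetic.std()  # next model, fit only on that output
    print(f"gen {gen:2d}: mean {mu:+.3f}, std {sigma:.3f}, "
          f"largest |x| seen {np.abs(synthetic).max():.2f}")
```

The fit drifts further from the original distribution with each generation, and the extremes it can ever see are bounded by what the previous model happened to produce; nothing pulls it back toward the real data, because the real data is no longer in the loop.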

There are already significant issues with these models making data up out of thin air (I don’t agree with the term “hallucinate,” as that implies some kind of intelligence or consciousness, which no large language model or large graphical model actually has, because they are aggregators, not artificial intelligence). Trying to have one of these systems check another one of these systems’ work is not going to produce high-quality results; it will likely amplify the errors instead. It is already necessary for a human to check the results of these systems. And here I was thinking that these systems were supposed to check the work of humans…

None of that even mentions the incredible power demands of this technology. I seem to remember something about running out of power in the next 5 or 10 years, but don’t quote me on that because I don’t remember precisely. There’s also the enormous need for chips to run these systems, of which there’s going to be a serious shortfall in the not-too-distant future.

6 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.