Facebook AI scientist has a good take on GPT-3

Originally published at: https://boingboing.net/2020/10/27/facebook-ai-scientist-has-a-good-take-on-gpt-3.html

1 Like

Disappointed that the AI scientist isn’t an AI.

3 Likes

Five years out, mate.

5 Likes

… just like five years ago.

3 Likes

The GPT models aren’t general intelligence; they’re just intelligent lorem ipsum generators. The significance comes from how increasingly difficult it is for humans to perceive GPT output AS lorem ipsum.

Imagine a future where a malicious actor can spin up a whole interconnected online ecosystem of convincing fake entities in minutes. Disguise your marketing/disinfo as a research paper, GPT up a bunch of others and create your own journal, accredited by your own university, covered by several online publications… 100% of it fake, but convincing enough to pass a cursory inspection and get the media ball rolling.

You can do that already, of course, but it still takes a fair bit of human effort. In a few years we’ll be at the point where it could be fully automated and done in real time.
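To give a sense of how low the bar already is: the rough sketch below (a minimal example of my own, not anything from the article) uses the small public GPT-2 model via the Hugging Face transformers library as a stand-in for GPT-3. The prompt and sampling settings are made up for illustration; the point is just that a few lines of off-the-shelf code churn out passable filler paragraphs from a single seed sentence.

```python
# Minimal sketch: generate "convincing enough" filler text from a one-line prompt.
# Uses the public GPT-2 model via Hugging Face transformers as a stand-in for GPT-3;
# the prompt and the sampling settings below are illustrative assumptions.
from transformers import pipeline, set_seed

set_seed(42)  # make the demo repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = "In a recent peer-reviewed study, researchers found that"
outputs = generator(
    prompt,
    max_new_tokens=120,       # length of each generated continuation
    num_return_sequences=3,   # three "independent" fake passages from one prompt
    do_sample=True,
    top_p=0.9,
)

for i, out in enumerate(outputs, 1):
    print(f"--- fake passage {i} ---")
    print(out["generated_text"], "\n")
```

Run that over a few hundred prompts and page templates and you have the fake journal, the fake coverage, and the fake university from the scenario above; the text was never going to be the hard part.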

I like his analogy. In my day the one I heard most often was about trying to reach the sky by climbing a tree. You’ll make continual progress, right up until you don’t.

Tough, but fair.

Or isn’t named Al bert.

1 Like

AI “scientists” (who, of all people, really should know better) have been continually deluding themselves into believing that AI is anything more than a parlor trick. A clever parlor trick, and one with a few limited potential applications (e.g. generating real-looking conversations), but a trick nonetheless.

Meanwhile, most real-world applications require consistency, reproducibility, and above all the ability to be understood. A system that mysteriously works most of the time is great for entertainment, but not so great for mission-critical applications.

It’s been “just around the corner” since the 1960s.
Once we figure out how to jump off the Möbius strip we’re on, we’ll get there.

1 Like

Researchers made an OpenAI GPT-3 medical chatbot as an experiment. It told a mock patient to kill themselves

It’s just a jump to the left.

2 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.