Tech leaders, scientists, etc., call for pause in AI development

The letter immediately caused a furor as signatories walked back their positions, some notable signatories turned out to be fake, and many more AI researchers and experts vocally disagreed with the letter’s proposal and approach.

The TechBro Great Barrington Declaration.

2 Likes

Domain Name: dair-institute.org
Updated Date: 2022-11-26T18:27:21Z
Creation Date: 2021-05-29T01:22:25Z
Registry Expiry Date: 2023-05-29T01:22:25Z
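
For anyone who wants to reproduce that lookup, here’s a minimal sketch in Python. It assumes the standard Unix `whois` client is installed; the domain and field names are simply the ones from the record quoted above.

```python
# Minimal sketch: pull the registration dates shown above via the Unix `whois`
# client (assumed to be installed). The field names match the quoted record.
import subprocess

def domain_dates(domain: str) -> dict:
    """Return the Updated/Creation/Expiry lines from a WHOIS lookup."""
    out = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    wanted = ("Updated Date:", "Creation Date:", "Registry Expiry Date:")
    return {
        line.split(":", 1)[0].strip(): line.split(":", 1)[1].strip()
        for line in out.splitlines()
        if line.strip().startswith(wanted)
    }

print(domain_dates("dair-institute.org"))
```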

Who?

And these guys. I wonder what their current funding is like?

https://www.erieri.com/Form990Finder/Details/Index?EIN=471052538

3 Likes

What Timnit Gebru did after Google sacked her for saying “are you sure this isn’t, like, a bit unethical?”

1 Like

The link is to a letter penned by the authors of the stochastic parrots paper (which includes Timnit). One notable statement:

“While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as “Stochastic Parrots”), such as “provenance and watermarking systems to help distinguish real from synthetic” media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined “powerful digital minds” with “human-competitive intelligence.” Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities”

2 Likes

My point was that their website is less than two years old.

1 Like

Is that not off-topic?

A newly established group. I think that’s kind of relevant.

1 Like

I guess it’s useful information in some context, but to me it’s coming across as a bit of a non sequitur :man_shrugging:

Especially if malicious groups decide to hack into AI systems.

1 Like

The current mess of massively over-hyped “AI”, followed by a wave of people saying that we need to slam on the brakes (Sam Altman strangely seems to be doing both) smells like a manufactured brohaha to drive stock prices, while stupid amounts of money are moved around.

The Altman-Musk conflict seems as scripted as a WWE folding chair fight. Is it really a conflict or a Two Man Con? (Not literally two men, but two sides that are really one to accomplish some other goal. e.g. Star Wars Republic vs. Separatists.)

With all the uproar, it’s important to carefully examine people and groups to separate the serious ones from the crackpots and sockpuppets.

For example, Future of Life Institute seems like a Musk sockpuppet. (Founded the same year as OpenAI. Well, isn’t that special?)

DAIR, while recent, does seem to check out on the serious side. I don’t see any odd connections, or large sums of money gifted them.

1 Like

The people behind the letter, at least the ones I know, are genuinely serious about ethics, accountability, and social good in AI, and have been working hard on exposing measurable risks that AI tech poses to society, as well as providing tools for improving the safety of said tech.

1 Like

Meanwhile, GPT-4 just passed a simulated bar exam with a score in the top 10%. I, for one, welcome our new robot lawyer overlords.

4 Likes

:thinking: On second thought, I don’t welcome the boilerplate legislation that might be written by those bots (US government is overflowing with lawyers who decided to run for office). Goodness knows what might happen to the criminal code. What chance will my poor little defense lawyer :robot: have against JudgeBot and proceCutor 2.0? Hopefully, they’ll still let us have a jury of our peers… :grimacing:

[GIF: SNL jury, Saturday Night Live]

Hey, what happened to the statue of Lady Justice!?!

[GIF: Battlestar Galactica robot]

3 Likes

I put that link on the AI thread. The onebox isn’t very promising, but it’s a response to and a critique of this from the authors of the stochastic parrots paper. A lot of what people here are saying is echoed in that, and a lot of what I think too. As I say to everyone: I’m not afraid of AI, I’m afraid of capital.

3 Likes

Getting sacked by Google for shitting on AI does tend to make you look for a career move…

3 Likes

These are indeed some of the core issues. They existed before LLMs, and LLMs and machine learning have made them easier to exploit. The fix is to close the loopholes that allow this behaviour in the first place - tackle monopolies, strengthen worker rights, build far better privacy controls, and give regulatory bodies the teeth to act - so that there’s no incentive to use cheap exploited labour or these “AI” tools for these purposes.

Putting the “brakes” on AI will just see these practices revert to exploiting cheap labour to do the same things, while AI development moves to the less free parts of the world and the rest of us fall behind.

3 Likes

Yah, the implications of this are many.

The good news is, so far it’s just a stunt. Passing a bar exam is actually a really good use case* for these large language model chat bots, because synthesizing large volumes of text relevant to a question and regurgitating it back out is what long-form written exams are all about. None of that means it would actually be a good litigator, be able to write sensible policy, or do any of the other many things people do with law degrees.

Still, it was impressive. I certainly can’t pass a bar exam.

*To clarify, I’m using “use case” in the computer science sense here, meaning a well-suited technical application of an algorithm. It’s not to say people should actually use chat bots for this in the real world.

5 Likes

Exactly. This might be the smart thing to do for the whole human race, but ain’t no way it’s actually going to happen. Collectively we are just not smart enough to do the right thing here.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.