The letter immediately caused a furor: some signatories walked back their positions, other notable signatures turned out to be fake, and many more AI researchers and experts vocally disagreed with the letter’s proposal and approach.
The link is to a letter penned by the authors of the Stochastic Parrots paper (Timnit among them). One notable statement:
“While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as “Stochastic Parrots”), such as “provenance and watermarking systems to help distinguish real from synthetic” media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined “powerful digital minds” with “human-competitive intelligence.” Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities”
The current mess of massively over-hyped “AI”, followed by a wave of people saying that we need to slam on the brakes (Sam Altman strangely seems to be doing both), smells like a manufactured brouhaha to drive stock prices while stupid amounts of money are moved around.
The Altman-Musk conflict seems as scripted as a WWE folding chair fight. Is it really a conflict, or a Two Man Con? (Not literally two men, but two sides that are really one, working to accomplish some other goal. E.g., Star Wars Republic vs. Separatists.)
With all the uproar, it’s important to carefully examine people and groups to separate the serious ones from the crackpots and sockpuppets.
For example, Future of Life Institute seems like a Musk sockpuppet. (Founded the year before OpenAI. Well, isn’t that special?)
DAIR, while recent, does seem to check out on the serious side. I don’t see any odd connections, or any large sums of money gifted to them.
The people behind the letter, at least the ones I know, are genuinely serious about ethics, accountability, and social good in AI; they have been working hard on exposing measurable risks that AI tech poses to society, as well as providing tools for improving the safety of said tech.
On second thought, I don’t welcome the boilerplate legislation that might be written by those bots (the US government is overflowing with lawyers who decided to run for office). Goodness knows what might happen to the criminal code. What chance will my poor little defense lawyer have against JudgeBot and proceCutor 2.0? Hopefully, they’ll still let us have a jury of our peers…
Hey, what happened to the statue of Lady Justice!?!
I put that link on the AI thread. The onebox isn’t very promising, but it’s a response to and a critique of this from the authors of the Stochastic Parrots paper. A lot of what people here are saying is echoed in that, and a lot of what I think too. As I say to everyone: I’m not afraid of AI, I’m afraid of capital.
These are indeed some of the core issues. They existed before LLMs, and LLMs and machine learning have made them easier to exploit. The fix is to close the loopholes that allow this behaviour in the first place - break up monopolies, strengthen worker rights, impose far better privacy controls, and give regulatory bodies teeth to do something about violations - so that there’s no incentive to use cheap exploitative labour or these “AI” tools for these purposes.
Putting the “brakes” on AI will just see these practices revert to exploited cheap labour doing the same things, while AI development moves to the less free parts of the world and the rest of us fall behind.
The good news is, so far it’s just a stunt. Passing a bar exam is actually a really good use case* for these large language model chat bots, because synthesizing large volumes of text relevant to a question and regurgitating it back out is what long-form written exams are all about. None of that means it would actually be a good litigator, be able to write sensible policy, or do any of the other many things people do with law degrees.
Still, it was impressive. I certainly can’t pass a bar exam.
*To clarify, I’m using “use case” in the computer science sense here, meaning a well-suited technical application of an algorithm. That’s not to say people should actually use chat bots for this in the real world.
Exactly. This might be the smart thing to do for the whole human race, but ain’t no way it’s actually going to happen. Collectively we are just not smart enough to do the right thing here.