Tech leaders, scientists, etc., call for pause in AI development

Originally published at: Tech leaders, scientists, etc., call for pause in AI development | Boing Boing

13 Likes

I have issues with AI research because most of what gets classified as intelligent behavior is largely symbolic reasoning, which, while useful, isn’t that powerful. In practice, most AI research is about shimming probabilistic reasoning systems into symbolic reasoning to emulate repeatable complex behavior with some degree of reliability. Beyond categorizing, writing simple essays or instructions, flagging content for review by humans (images, videos, audio clips, etc.), and a few other behaviorally narrow fields such as voice interfaces, not much will come from this work, because only so many jobs will ever benefit from these platforms.
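
To be concrete about the “shimming” I mean, here’s a minimal sketch in Python of a probabilistic score wrapped in symbolic rules, which is the general pattern behind content-flagging pipelines. Every name and threshold in it is made up for illustration, not taken from any real system’s API.

```python
# Hypothetical sketch: a probabilistic score "shimmed" into symbolic rules.
# The scorer, thresholds, and labels are all invented for illustration.

def toxicity_score(text: str) -> float:
    """Stand-in for a probabilistic classifier (e.g. a fine-tuned model)."""
    bad_words = {"spam", "scam"}
    hits = sum(word in text.lower() for word in bad_words)
    return min(1.0, hits / 2)

def route(text: str) -> str:
    """Symbolic layer: hard rules wrapped around the probabilistic score."""
    score = toxicity_score(text)
    if score >= 0.9:
        return "auto-remove"     # high confidence: act automatically
    if score >= 0.5:
        return "flag-for-human"  # uncertain: defer to a moderator
    return "allow"

print(route("totally normal post"))  # allow
print(route("this spam is a scam"))  # auto-remove
```

The probabilistic part guesses; the symbolic part decides. That division of labor is most of what these platforms actually do.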

What’s more troubling is that under capitalism, people persist in the belief that human labor is replaceable with capital machinery. In practice, it’s capital machinery that winds up being replaced by human labor when such machines grow scarce or their cost rises. At best, platforms like ChatGPT, Stable Diffusion, and other such “AI” will be force multipliers for labor (more stuff gets done quicker, better, cheaper).

Such platforms will never produce Skynet, Ultron, Allied Mastercomputer, Johnny 5, or Chappie. Such “sentient AIs” aren’t machines anymore; they’re people. And if humankind ever develops the mastery to recognize people beyond the arbitrarily narrow definition of select humans, or of humankind as a whole, then we’ll probably be at a level of mastery that present humans couldn’t comprehend, any more than Paleolithic humans could imagine our world. Such things will be as alien to us in the present as our world would be to our ancestors in the past.

7 Likes

You may be right (maybe), but AI doesn’t NEED to produce Skynet in order to create major, widespread societal problems in the very near future.

Consider just one relatively mundane example: not long ago, a few Wall Street firms playing around with sub-prime derivatives and similar novel financial instruments that they didn’t necessarily understand caused a major global recession. Now imagine all the firms using the next generation of totally opaque, incomprehensible AI to make automated trades, training the system to optimize profit regardless of the consequences. And maybe that AI can send emails or do other things that intentionally influence the behavior of the market in order to make a profit. This shit needs to be regulated yesterday.
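
To make “optimize profit regardless of the consequences” concrete, here’s a toy sketch (mine, not any real trading system) of why a profit-only objective is the core problem, and what an explicit externality penalty, the kind of thing regulation could force, might look like. All numbers are invented.

```python
# Toy illustration of reward misspecification in trading. A profit-only
# objective scores a manipulative strategy highly; adding a cost for the
# harm it causes flips the answer. Values are made up for illustration.

def objective_naive(pnl: float) -> float:
    # Nothing here penalizes manipulation, volatility, or systemic risk.
    return pnl

def objective_constrained(pnl: float, market_impact: float,
                          penalty: float = 10.0) -> float:
    # Charge the strategy for the damage it does, not just reward profit.
    return pnl - penalty * market_impact

# A manipulative strategy: big profit, big market impact.
print(objective_naive(1_000_000.0))                   # 1000000.0 -- looks great
print(objective_constrained(1_000_000.0, 200_000.0))  # -1000000.0 -- it isn't
```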

26 Likes

A computer algorithm doesn’t need true consciousness (a term we haven’t properly defined anyway) to cause just as much harm as a genuinely malicious “strong A.I.”

17 Likes

Yeah, this is one genie that’s not going back into the bottle. It gives too big an advantage over non-participants, and every party involved has too big a stake.

9 Likes

Barring a Butlerian Jihad I don’t think we’re likely to ever uninvent AI, just as we’re unlikely to ever uninvent nuclear technology. But as with nuclear tech we have a collective responsibility to regulate how AI is developed and how it is used because of the potential it has to cause harm.

18 Likes

Obligs:

9 Likes

I think this is a universal problem inherited from capitalism itself. We’re trying to shim in AI to replace work that humans already do, rather than making AI something that assists humans in jobs such as driving, content moderation, complex service-industry work, and so forth. SV techbros are basically ignoring the fact that technology evolved to help humans, and that all humans are, by extension, cybernetic (in the scientific, not sci-fi, sense). So I can see a networked AI that handles air traffic control becoming a disaster from one simple bug, intentional or not, or a set of high-frequency trading algos crashing the economy (we’ve already had many flash crashes, which forced the NYSE and other organizations to put rules in place, including explicitly synchronized lag between all HFT agents). I don’t think we’ll see a world-ending disaster via AI. Maybe once we humans figure out how to make computronium, then we’re screwed.
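
For the flash-crash point, here’s a toy sketch of the kind of guardrails exchanges actually adopted: a circuit breaker that halts trading on a large short-window price move, plus a flat “speed bump” delay applied to every order. The threshold and delay below are invented for illustration; real exchange rules differ in the details.

```python
import time

# Toy flash-crash guardrails. Numbers are illustrative, not real exchange rules.
HALT_THRESHOLD = 0.07   # halt if price moves 7% within the window
SPEED_BUMP_S = 0.00035  # a fixed ~350-microsecond delay on every order

def should_halt(window_open: float, last_price: float) -> bool:
    """Circuit breaker: trip when the move within the window is too large."""
    return abs(last_price - window_open) / window_open >= HALT_THRESHOLD

def submit_order(order: dict) -> dict:
    """Speed bump: every agent eats the same delay, blunting pure-speed races."""
    time.sleep(SPEED_BUMP_S)
    return order

print(should_halt(100.0, 92.0))  # True: an 8% drop trips the breaker
print(should_halt(100.0, 95.0))  # False: a 5% move is tolerated
```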

I’d say we’re more likely to mess up our economy via AI simply because capitalists keep buying into the fantasy that labor isn’t necessary for production. It’s like saying soldiers aren’t necessary to make war. It’s always some nonsensical utopian pushing for this, and in general I’m at the point of wanting to go Homie The Clown on such techbros and other “visionaries.”


10 Likes

I have seen the future, and it is paperclips. All the way down.


17 Likes

If that’s your threshold for whether or not we need to take a pause on this stuff and set some ground rules for everyone to follow, then I disagree. Even if we were “only” talking about major economic disruption (and I personally don’t think we are), that has really significant human consequences, despite the world continuing to exist.

And I can’t think of any examples of scientists or researchers in any other field (biology, weapons research, nuclear energy, etc.) getting together and sounding the alarm with the message “our research has the potential to cause great harm and needs to be regulated!” where it wasn’t a good idea to listen.

15 Likes

I think we needed those rules, like, yesterday. It annoys me that capitalist firms keep pushing for automated vehicles of all kinds without even figuring out the necessary insurance liability policies, or who is legally liable when things go wrong. They want to avoid regulation at all costs, even good regulation. Mind you, these fools are the kind of folks who probably would’ve been shitting behind the bushes at the Palace of Versailles in King Louis XIV’s court. They have no common sense, no sense of scope and cost, beyond whatever makes “line go up.”

So, I don’t disagree with your concerns. I just think AI is going to become a dead end in the coming years. I genuinely believe we’ll find that such platforms are, at best, good tools to augment human labor with the help of technicians and legal oversight. I just don’t want folks getting visions of an apocalypse in their heads. Think more of the rampant car wrecks we deal with now: we already accept them despite not truly needing cars (we forced ourselves into that pattern for the sake of capital interests). If you think it’s terrible that cars are a major cause of death in the United States, then you’re on the right track for how to treat AI, imo.

6 Likes

As the Second Amendment to the US Constitution (“the right of the people to keep and bear Arms shall not be infringed”) places no limitation on what constitutes “arms,” I await our Second Amendmenteers contending that unlimited AI, if weaponized, becomes a Constitutional right.

6 Likes

AI has been used against people for years; now people are starting to get some of that power back, so the leaders are scared. This band of tech leaders aren’t leading, so they’re trying to pause development so that they can catch up.
It’s worth pointing out that all the major tech companies have either hobbled or outright fired the people addressing ethics, so I’m not sure how we can stop the threat to society if ethics aren’t part of the effort.

12 Likes

I wouldn’t be surprised if the techbros are looking at this as a way to preemptively bust up the revival of the labor movement in the US and worldwide. Anything to stop “the poors” from asking for a decent wage and to be treated as humans.

11 Likes

Sure, but the prospect of AI and robots making people less dependent on the market for their needs (robots to make food, clean, be bartered out as labor, etc.), so there’s less need to have people work, is also freaking them out.

3 Likes

They’re already achieving this via mass layoffs.

13 Likes

While there are good reasons to put things on hold regarding what’s being called “AI”, I suspect a lot of this is happening now because of the realisation that a lot of well-paying white-collar jobs (including those of some of the signatories) could be wiped out in a matter of a few years.

15 Likes

I forget whose point I’m stealing, maybe Hank Green’s, but:

The model could eat itself out of a job. If it knows everything, why write anything about that stuff? Why Google what the AI can answer better? And if the AI puts writing and Google out of business, then the AI just lost its sources for learning anything new.

I suppose this means people will begin writing much more often specifically about what the AI doesn’t know or what it gets wrong, and it’ll be THAT writing the AI consumes to update. We will all end up having all our conversations with that AI, one way or another.
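
Here’s a toy simulation of that feedback loop: a “model” that only ever trains on samples of its own previous output loses diversity every generation. Purely illustrative (real model collapse is messier), but the direction is the same.

```python
import random

# Toy model-collapse loop: each generation trains only on what the last
# generation produced, so sampling with replacement steadily erases variety.
random.seed(0)
corpus = list(range(1000))  # 1000 distinct "facts" originally written by humans

for generation in range(5):
    corpus = [random.choice(corpus) for _ in range(len(corpus))]
    print(f"gen {generation + 1}: {len(set(corpus))} distinct facts remain")
```

Each pass loses a chunk of what’s left, so the pool of distinct “facts” shrinks fast even though the corpus stays the same size.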

7 Likes

One of the best games ever.

5 Likes

This. I’m not worried about sentient AI taking over the world and enslaving humanity, and I largely think the people talking about that are part of the problem, distracting (intentionally or not) from the real problems we have right here and now.

I’m worried about the ability of language models to be a force multiplier for bad behavior: mass ransomware, online stalking and abuse, political disinformation, infiltrating activist groups, and just plain old “signal jamming,” filling up public spaces with machine-generated garbage that renders them useless while being hard to filter. None of this is new; humans can do all of it already. Language models just let a small number of people do a lot more, faster.

15 Likes