Tech leaders, scientists, etc., call for pause in AI development

AI experts criticize the letter as furthering the “AI hype” cycle, rather than listing or calling for concrete action on harms that exist today. Some argued that it promotes a longtermist perspective, which is a worldview that has been criticized as harmful and anti-democratic because it valorizes the uber-wealthy and allows for morally dubious actions under certain justifications.

11 Likes

Since the term “AI” is just marketing hyperbole, how would you write laws to control its development? Are they just going to tell developers that you can’t write software that uses certain mathematical constructs?

5 Likes

I suspect we’re going to be hearing exponentially more discussion of UBI in the coming years.

8 Likes

… is there, like, a list of “fulfilling jobs” versus “unfulfilling jobs” :confused:

8 Likes

The thing is, the Torment Nexus describing the intersection of “AI” and capitalism has been written.

It’s called Accelerando.

6 Likes

Came here to post that. Thank you!

1 Like

(edited out because it was based on a mis-reading)

What are you saying is false, exactly? That many AI researchers believe the field should be regulated?

Ah, I misunderstood because of how it was packed in there: yes, AI scientists believe the field should be regulated. Apologies for the misunderstanding. :sweat_smile:

Personally, I’m very worried that this sort of call only serves to reinforce a disconnect between the public’s perception and what AI scientists actually do. In fact, I would say this call doesn’t represent at all how AI researchers treat social impact: it’s just poorly informed, alarmist rhetoric.

3 Likes

They managed to circumvent those pesky regulations and insurance for cars driven by people (see Uber and the Domino’s Pizza delivery insurance industry crisis), so my confidence level that they really care about that for automated vehicles is low.

What concerns me are the folks with wealth and positions of power who want no part of that discussion. Instead, they’re actively undermining public health policies and the social safety net. I suspect their agenda is to make sure there will be far fewer people to worry about (or left to complain about how they are being treated) in the future.

The ways in which they enable/ignore the rising cost of living, fight increases in wages for workers, lie about labor shortages, make housing unaffordable, and criminalize homelessness keep getting worse. Meanwhile they are throwing a lot of money into policing (including ways to automate that) and for-profit prisons. Rather than merely dystopian, the future for those living in poverty will be positively Dickensian: full of workhouses, debtors’ prisons, and disease.

13 Likes

Enlighten us, please. How do AI researchers treat the social impact of these technologies when some of the biggest, most well-funded labs have been laying off their AI ethics teams? And whatever the “best practices” may be in the industry, are you saying that you’re confident that all the AI research groups around the world are consistently following them? I’m honestly not sure what point you’re trying to make.

4 Likes

Ah, so you are unaware: this is a common problem with non-research folk, and it’s easy to assert that “researchers think or do this” without actually doing the work to be involved:

https://www.hcii.cmu.edu/research-areas/fairness-accountability-transparency-and-ethics-fate

Those are only a few examples. Rest assured: people in AI who focus on this sort of thing care as much as, and probably much more than, you do (I know; I talk to them often, and they are passionate about this on a daily basis).

Hopefully now you are enlightened. I’m sad about Microsoft laying off so many good and caring researchers: but this is a problem with a company, not the field as a whole:

https://icml.cc/virtual/2021/workshop/8347

(I really could list dozens of these but you should probably do the work instead of just stating what you think AI researchers do)

To be very clear: we should care very much about what AI tools we’re putting into society. But the answer isn’t this call: it’s rewarding researchers who do FATE (fairness, accountability, transparency, and ethics) work and accelerating social-good research. That is what the AI field has been saying for years.

1 Like

… uh oh, where have we heard that before

I never doubted that there are some, perhaps many, excellent groups out there who are taking the time to do things methodically and ethically. But if other companies don’t, then we’re still totally screwed. That was the whole point of this warning!!

I’m amazed that you can think that it’s ok if a huge company like Microsoft is working without an ethics team just because there may be other groups out there that have them.

The ones who are acting irresponsibly in the hope of getting their tech out of the gate and to market first in order to make a buck aren’t going to slow down just because they see other researchers getting academic acclaim for being more responsible. You can’t control a whole industry with all carrots and no sticks.

5 Likes

No, I don’t think it’s ok; as I said, we need to reward (e.g., hire and not fire) researchers who do social-good research.

But the community as a whole does very much care. Microsoft is only one very small part of that community; we just happen to hear about them a lot because of Bing chat. And for what it’s worth, they fired a lot of researchers in AI, not just social-good ones, but people who were also working on all sorts of algorithms (I think they nuked about 100 or so AI research roles, from what I’ve gathered). They are outsourcing their AI to OpenAI.

And also for what it’s worth: I know from my understanding of the ChatGPT algorithm that there’s a lot of FATE research that fed into the end product. Not enough, but we gotta keep pushing!

Let’s be honest: they don’t care about AI. They don’t care about AI replacing jobs, per se. But they are starting to realize that AI is reaching a level where it can take the jobs of people they actually care about: the people with high-paying white-collar jobs who buy iPhones and computers every couple of years, and Teslas, and fancy TVs and streaming services, etc. The stuff they are selling.

AI won’t buy that stuff. Displaced factory workers don’t buy that stuff. The people left in dying rural ag-industry related towns don’t buy that stuff. AI threatens to further reduce the number of people who do and can buy that stuff.

Once our Dickensian hell scape arrives, as @PsiPhiGrrrl so perfectly described, they’ll also face an uncomfortably uncertain future. They don’t like that idea, so for a brief time they want to halt AI research — until they figure out how to keep the money spigot flowing their way.

5 Likes

This is the stuff that, as an actual AI developer of 25 years, worries me.

All the shitty low-grade grifting behaviour that humans currently engage in (email scams, faking photos and videos, robocalling, identity theft, phishing attacks, etc.) can be scaled up to a terrifying degree with the basic language model AIs we have now. It’s only going to be another couple of years before someone can fake perfect video of any public figure saying and doing anything they want. At that point we lose the ability to distinguish truth from lies online. That becomes very worrying for democracy.

17 Likes

One avenue that is being considered is transparency.

Right now, the statistically-weighted neural network models that underpin all these things work opaquely. We ask it to average all pictures of trees and generate a tree for us. We know mathematically how it does that, but we don’t have debugging tools to actually see the process. There’s been little point in building that because it doesn’t matter exactly where all the weights land in the matrices as long as the output is valid.

However if, to borrow @Otherbrother ’s example, an AI crashes a stock market with insane automated high speed trading (much like the Amazon pricing bots regularly getting into race conditions that cause a Pop to cost $9000), well, we’re gonna need to see logs of exactly how it did that so we can prevent it.

The devil is in the details on writing a regulation like that, but I think the basic idea of requiring transparency on the back end is a reasonable idea. The engineers should be doing this anyway.
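To make “transparency on the back end” a bit more concrete, here’s a minimal sketch of the kind of decision-level audit logging I have in mind: every inference writes a replayable record of inputs, output, and model version, so there is *something* to reconstruct after an incident. The `AuditedModel` wrapper, the `toy_model`, and the log format are all hypothetical illustrations, not anyone’s real trading system.

```python
# Hypothetical sketch: wrap an opaque model so every decision leaves an
# append-only, replayable audit record. Names and format are illustrative.
import json
import time
import uuid
from dataclasses import dataclass, asdict


@dataclass
class AuditRecord:
    request_id: str
    timestamp: float
    model_version: str
    features: dict
    decision: str
    score: float


class AuditedModel:
    """Wraps an opaque model so every decision is logged for later replay."""

    def __init__(self, model, model_version: str, log_path: str):
        self.model = model
        self.model_version = model_version
        self.log_path = log_path

    def decide(self, features: dict) -> str:
        decision, score = self.model(features)  # the opaque inference step
        record = AuditRecord(
            request_id=str(uuid.uuid4()),
            timestamp=time.time(),
            model_version=self.model_version,
            features=features,
            decision=decision,
            score=score,
        )
        # Append-only JSON lines: cheap to write, easy to replay later.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")
        return decision


# Toy stand-in "model": buy if the momentum feature is positive.
def toy_model(features: dict):
    score = features.get("momentum", 0.0)
    return ("BUY" if score > 0 else "HOLD"), score


if __name__ == "__main__":
    model = AuditedModel(toy_model, model_version="toy-0.1", log_path="decisions.jsonl")
    print(model.decide({"symbol": "XYZ", "momentum": 0.42}))
```

The point isn’t this particular format; it’s that per-decision logs have to exist at all before engineers (or regulators) can reconstruct what a system actually did.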

11 Likes

That’s one risk, but it’s not really my main concern when it comes to AI trading, because there are existing, relatively straightforward ways to limit high-frequency trades. I’m more concerned about bizarre, completely opaque trading strategies that are impossible for humans to understand, but that traders will still use because of the potential to make money, until one of these weird new trading programs leads to an unanticipated situation that suddenly causes a bunch of huge financial institutions to go broke.
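For what it’s worth, here’s a minimal sketch of the kind of “straightforward” limit I mean: a rolling-window throttle that refuses to submit orders past a fixed rate, no matter what the strategy upstream wants. The class, the numbers, and the `submit_to_exchange` call are all made up for illustration.

```python
# Hypothetical sketch of a simple trade rate limiter: at most `max_orders`
# submissions per `window_seconds`, enforced outside the trading strategy.
import time
from collections import deque


class TradeThrottle:
    """Allow at most `max_orders` order submissions per `window_seconds`."""

    def __init__(self, max_orders: int, window_seconds: float):
        self.max_orders = max_orders
        self.window_seconds = window_seconds
        self._timestamps = deque()

    def try_submit(self, order) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the rolling window.
        while self._timestamps and now - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()
        if len(self._timestamps) >= self.max_orders:
            return False  # reject: over the rate limit (log/alert in real life)
        self._timestamps.append(now)
        submit_to_exchange(order)  # hypothetical downstream call
        return True


def submit_to_exchange(order):
    print(f"submitted: {order}")


if __name__ == "__main__":
    throttle = TradeThrottle(max_orders=5, window_seconds=1.0)
    for i in range(10):
        accepted = throttle.try_submit({"symbol": "XYZ", "qty": 1, "id": i})
        print(i, "accepted" if accepted else "rejected")
```

Caps like this only address speed, though, not the opacity of the strategy itself, which is why the “completely opaque but profitable until it isn’t” scenario worries me more.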

But we’ll all find out soon enough which concerns were most valid, I guess.

9 Likes