Tech leaders, scientists, etc., call for pause in AI development

Yeah, I mischaracterized your example a bit there, but I agree with the risks your scenario poses. My point was more that the proposed regulation, which would let us debug these things after the fact, is reasonable whatever catastrophe they cause.

I honestly don’t think every AI catastrophe is preventable. Every new technology brings a bunch of disasters until we figure out the bounds of how much damage it can do. However, we at least need the tools to prevent the second one.

7 Likes
  1. All training datasets must be open and accessible to all, so people can observe any flaws, inherent biases, or the use of others’ copyrighted works.

I imagine that law ALONE will piss off most “AI” companies. They do not want their training sets exposed for the bias and illegal fuckery that underlie them.

  2. Nothing produced by AI is covered by copyright or trademark.

That will take care of most of the rest.

  3. Any accidents, violations of law, economic issues, or other gross injustices caused by the use of “AI” are the responsibility of the designers, who can and will be held liable for all damages.

Chef’s kiss.

11 Likes

:thinking: Maybe it was in proposals to cut Social Security, Medicaid, and Medicare…or the COVID-19 response. I seem to remember a different reaction to a pause on business activities back then. Of course what governments consider to be a global catastrophe in the making is different from what business leaders consider to be an existential risk.

the truth is out there GIF

Unless we push back, the examples above on how AI can be used for a few to profit will take precedence over uses for the good of all. :nerd_face:

4 Likes

When I look at what’s going on with AI, I think, “That’s how humanity gets to Idiocracy.” AI eliminates the need for people to think about…well…pretty much anything. Without the need to think, people will forget how to. Future generations will never learn how to think in the first place; AI will take care of all of that for them. And humanity will become mentally what the people in Wall-E were physically.

3 Likes

Yes, and many of TPTB – including tech executives like Musk – are not ready for that discussion. Except perhaps for guys like Woz, they do not see AI of any sort ushering in Fully Automated Luxury Communism. Instead, they need more time to lay the groundwork for this version:

The first step is getting people used to the idea that entitlement programmes are not sacrosanct (mais non?). And so…

9 Likes

Recently seen article:

3 Likes

Imagine if the Russians had made a convincing deepfake video of Zelenskiy ordering a cease-fire or surrender. (Wild thought that just occurred to me: he switched to his beard and khaki look to make any deepfakes prepared in advance useless.)

I imagine that somewhere in the bowels of Chinese intelligence there are teams working on deepfakes of Taiwanese politicians and military chiefs.

4 Likes

Yeah, dat’ll work!

Musk’s short-term concern is that his own companies are being outpaced.

So it’s only a matter of time before his “solution” is to acquire this edgelord AI company.

6 Likes

I recently learned of those audio-generating bots that can be trained on a few snippets of a person’s speech and then produce a near-perfect audio file of them saying anything you want. It’s terrifying.
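For a sense of how low the barrier is, here’s roughly what that looks like with an open-source voice-cloning model (a minimal sketch assuming the Coqui TTS package; the model name is real, but the text and file paths are placeholders):

```python
# Minimal sketch of few-shot voice cloning with an open-source model.
# Assumes the Coqui TTS package (pip install TTS); the text and paths
# below are illustrative placeholders.
from TTS.api import TTS

# Load a multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short clip of the target speaker is enough to condition the voice.
tts.tts_to_file(
    text="I never said this, but it certainly sounds like I did.",
    speaker_wav="target_speaker_sample.wav",  # a few seconds of their speech
    language="en",
    file_path="cloned_output.wav",
)
```

That’s more or less the whole workflow: a few seconds of audio in, an arbitrary script out.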

5 Likes

So very much like this!

Right now, “AI” LLMs are just making it cheaper to do the same things we have been doing all along, things that already reduce the value of human labour and the conditions under which it is performed.

Propaganda isn’t new. Misinformation isn’t new. Spam isn’t new. All of these used to be laborious processes (think Cold War), but they became inexpensive, first through offshoring and outsourcing and then through the ubiquity of the internet. Now they look to become even easier, because LLMs can offload much of that work.

The long-termist view is that none of what I just said matters, because all that matters is getting beyond this and to utopia. If thousands or millions of people need to live through a climate catastrophe to get humanity to take this stuff seriously and move forward, so be it. And so instead they focus on the endgame: the idea that we need to solve problems that don’t exist yet, because today’s problems aren’t relevant to humanity in 100/1,000/etc. years, but somehow our decisions about AI now are.

You solve propaganda with strong measures to fight fraud and misinformation: empower organizations focused on truth and transparency to provide trusted sources of information, and build transparent processes so corruption and evil are laid bare. You remove the economic incentives from spammers and counterfeiters. In short, you solve the problems people are talking about using LLMs to exploit, rather than “blaming the LLMs” for the problems existing in the first place. Banning further expansion of LLMs won’t solve any of what already exists. We’re already familiar with nations and other large organizations using their power to sway public opinion: again, see the Cold War, climate-change propaganda from oil companies, and even cigarettes!

The other reality is that the march of technological progress will continue unfettered no matter what. The only way to outlaw “work on AIs” (whatever that means), even temporarily, would be to prohibit coders from coding and crawlers from crawling. You will never do this everywhere, but the internet is everywhere. So you stop AI research in the G8 or whatever, and now the same large nation-states that are already using propaganda and spreading misinformation are the only ones with advanced tools, and oh look: everyone else stopped their research, so now we are hopelessly behind.

Despite @Otherbrother’s statements to the contrary, there are a lot of very smart people thinking about ethics in AI, and they aren’t all thinking about how to use AI effectively; many are thinking about how to solve the problems AI is being used to exploit today, so that there’s no incentive to use AI this way. That’s critical discussion and research that needs to be undertaken. So is additional research on how, once we get to actual AIs instead of these LLMs that basically just guess what to say next, we teach them our values and priorities.
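To make “guess what to say next” concrete, here’s a minimal sketch of what an LLM actually computes at each step (assuming the Hugging Face transformers and torch packages, with GPT-2 as a stand-in; today’s chatbots are vastly larger, but the mechanism is the same):

```python
# A language model assigns a probability to every possible next token;
# "generation" is just repeatedly sampling from that distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to counter propaganda is"  # illustrative prompt
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits[0, -1]    # scores for the next token only
probs = torch.softmax(logits, dim=-1)    # convert scores to probabilities

# Print the model's five most likely next tokens.
top = torch.topk(probs, 5)
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok))!r}  {float(p):.3f}")
```

There’s no model of truth or intent anywhere in that loop, which is exactly why these systems are so well suited to producing cheap, plausible text at scale.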

This tech is going to arrive regardless of whether the “good guys” are behind it or not. The best defence is for the “good guys” to keep researching it, and for “good” businesses and governments to push social reforms that make all these bad use cases for AI obsolete in the first place.

Again to @Otherbrother’s point, it doesn’t matter that Microsoft is policing itself; it matters a hell of a lot more if the major governments institute strong controls, so that there’s no financial benefit to using AI to circumvent privacy protections or whatever else companies can think up. Otherwise you’re going to get a bunch of signatories who agree to play nice while the rest of the world continues the research anyway; the long-termist worry that AI will be “bad” will be moot, and the actual problems underlying why AI is used in bad ways will still not be solved.

8 Likes

Came here to post this. Musk isn’t concerned about the future of humanity. He is concerned, as always, with the future of his wallet. I’m also not concerned about the algorithms that are currently (incorrectly) being called “AI” becoming sentient, which seems to be the bugaboo for several techie types out there, who want us to start thinking about giving rights to these things that are currently nothing more than sophisticated chatbots. I am concerned about forcing these algorithms into situations where they could cause harm, many examples of which have already been discussed in this thread. But mainly, I just wish we could stop using the term “AI” where it isn’t warranted.

3 Likes

Indeed.

via WaPo:

Automated disinformation

The trove of documents initially was shared with a reporter for the German newspaper Süddeutsche Zeitung. The consortium examining the documents has 11 members — including The Post, the Guardian, Le Monde, Der Spiegel, iStories, Paper Trail Media and Süddeutsche Zeitung — from eight countries.

Among the thousands of pages of leaked Vulkan documents are projects designed to automate and enable operations across Russian hacking units.

Amezit, for example, details tactics for automating the creation of massive numbers of fake social media accounts for disinformation campaigns. One document in the leaked cache describes how to use banks of mobile phone SIM cards to defeat verification checks for new accounts on Facebook, Twitter and other social networks.

Reporters for Le Monde, Der Spiegel and Paper Trail Media, working from Twitter accounts listed in the documents, found evidence that these tools probably had been used for numerous disinformation campaigns in several countries.

One effort included tweets in 2016 — when Russian disinformation operatives were working to boost Republican presidential candidate Donald Trump and undermine Democrat Hillary Clinton — linking to a website claiming that Clinton had made “a desperate attempt” to “regain her lead” by seeking foreign support in Italy.

The reporters also found evidence of the software being used to create fake social media accounts, inside and outside of Russia, to push narratives in line with official state propaganda, including denials that Russian attacks in Syria killed civilians.

Amezit has other features designed to allow Russian officials to monitor, filter and surveil sections of the internet in regions they control, the documents show. They suggest that the program contains tools that shape what internet users would see on social media.

The project is repeatedly described in the documents as a complex of systems for “information restriction of the local area” and the creation of an “autonomous segment of the data transmission network.”

A 2017 draft manual for one of the Amezit systems offers instructions on the “preparation, placement and promotion of special materials” — most likely propaganda distributed using fake social media accounts, telephone calls, emails and text messages.

6 Likes

The most important point about the current AI hype is illustrated by Elon Musk being at the center of both the pro- and anti-AI chatter.

That point being: it’s hype either way. Whether you think AI is good or bad, you’re being railroaded into agreeing it’s important. And I don’t believe that case has been made, or even examined really.

In particular, there’s no suggestion of why I would pay one thin dime for something AI can produce. And let’s not forget, even if it’s hidden away in a data center, the capital and energy costs of this technology are huge. Try generating one tiny image locally on your phone if you want an idea of what needs to be on the other end of the line to provide AI on tap. It is reminiscent of crypto hype in more ways than one.

You might say AI can make money by eliminating labor costs from things I do pay for. But what things? If someone I pay for services starts sending me AI emails, then either (a) they were sending me worthless emails before, or (b) they’re offering me a significantly less valuable service. Either way, they could save more by replacing human labor with plain old nothing.

And that is the key to it. This expensive technology won’t pay for itself, and it won’t replace anyone’s labor. It’s a disciplinary threat. And that’s why it’s politically important to remain unimpressed by the hype: the more we amplify the message that AI is a big deal, the more effectively Capital can wield the threat.

To be clear, it’s not an entirely hollow threat, because employers really can fire people. And if enough people believe the AI fearmongering, that fear makes it easier for your boss to find a more pliable replacement. But the danger comes from the fear itself, not from what AI can actually do.

Maybe, but there again it would only be a rhetorical sheen on top of the decisive material factors. China could invade Taiwan because it’s big and powerful. It could not invade Taiwan if it wasn’t. Deepfakes don’t change any of that. I mean, what could a deepfake of Miguel Díaz-Canel say that would suddenly convince Cubans to welcome a US invasion?

I think the reason this stuff unsettles us is that it’s come to feel like the world on screen is reality, and the physical world of farms and bullets and oceans and relationships is only a reflection of it. I’d say that feeling should concern us more than Lorem Ipsum 2.0.

2 Likes

The question is what a deepfake of the president of Taiwan could say to convince Taiwanese people that the situation is hopeless and surrender is inevitable.

More insidiously, one can imagine deepfake “leaks” or “intercepts” of Taiwanese commanders saying that China has inflicted massive losses. They don’t have to convince everyone; they only have to cause confusion, sow doubt and sap morale.

1 Like

Update:

Long-termism is increasingly being exposed as reputation laundering for greedy, if not outright crooked, billionaires. From the Vice article:

The letter was penned by the Future of Life Institute, a nonprofit organization with the stated mission to “reduce global catastrophic and existential risk from powerful technologies.” It is also host to some of the biggest proponents of longtermism, a kind of secular religion boosted by many members of the Silicon Valley tech elite since it preaches seeking massive wealth to direct towards problems facing humans in the far future. One notable recent adherent to this idea is disgraced FTX CEO Sam Bankman-Fried.

3 Likes

shocked philip j fry GIF

4 Likes