AI is either going to save us or kill us. Or both

Originally published at: https://boingboing.net/2024/03/12/ai-is-either-going-to-save-us-or-kill-us-or-both.html

4 Likes

Or, it’s going to just make lots of stuff shittier and probably cause some serious problems, and then fade away as a failed technology… This seems the most likely outcome to me.

16 Likes

That “maybe” is doing a helluva lot of heavy lifting.

11 Likes

If AI gets out of control, OpenAI’s board of directors will shut it all down.

11 Likes

Or it’ll make a lot of stuff shittier, maybe cause some serious problems (hrm, almost certainly cause some serious problems), and then reach some limit of what it can actually do, far short of what people think and/or claim it can, and stall out, having established some sort of niche where it is, if not actually useful, at least more cost effective than other stuff.

Which is a not-atypical tech story: breathless “new tech will change the world” stories, followed by “new tech is changing the world” stories, followed by “new tech utterly failed” stories, when in fact it changed the world a tiny little bit compared to what was promised.

Fading away in this case seems really unlikely, though. Just for example, the best text-to-speech systems are currently all AI based; even if every other unfulfilled promise of AI falls flat, that one still isn’t going anywhere anytime soon. (And to be honest there are a bunch of other things it does very well that have actual value, so all of them would need to go away for it to count as a failed tech…)

There is the outside chance, though, that AI causes enough semi-predictable economic “bad shit” that, even without some sort of AI-maximizer apocalypse or any other directly AI-driven apocalypse, it becomes the precipitating cause of a wave of economic hardship that triggers some other, more traditional apocalypse.

I’m just full of gloomy ideas today. I think I’m going to eat a bag of chips and go hide under some blankets until tomorrow. Hope tomorrow is nicer!

10 Likes

Ok, maybe I’m just being super paranoid, but I’m honestly not sure whether this whole article was written by an A.I. That last line could be interpreted as an admission of such.

The author of the article is a pretty new contributor to Boing Boing, and a Google search for the name “Yoy Luanda” turns up zero results prior to this month. So it’s either an A.I., a brand-new pseudonym, or someone who kept a very low profile online until very recently.

10 Likes

I know as much about killer AIs as the next person who has watched Terminator and played Universal Paperclips; but it seems…a little convenient…to focus on the bots when we already know where the known risks are.

The people at the levers would already happily declare you surplus to requirements if it made next quarter’s numbers go up; and a pretty substantial percentage bristle even at the notion of running a really miserable welfare state, even on the pragmatic grounds that it’s cheaper than having to suppress starving proles the hard way.

It’s all fun and games to play at stranger-danger; but it seems somewhere between heedless and complicit to pretend that the main risk is what a bot might potentially do at some point, rather than what humans will do to you once they think your labor is no longer required.

12 Likes

AI has been here for a while; the difference is that now individuals can have access to AI models that work for them. In the short term I’m looking at running my own model to work for my daily life (sort and manage messages, streaming services, calendar, understand EULAs, shopping, etc.), all with the goal of helping me navigate those tasks in line with my values and desires (dodging scams, supporting local and small businesses, the open source community, etc.).
Longer term, I want an AI model to get me through this transition period and then move on to personal robotics that help in my daily life outside of tech, like home maintenance and chores.
AI should find solutions to my problems and present me with options; then a robot should carry out the solution I choose.
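
For the curious, here’s roughly what the “model working for me” part could look like today, as a minimal sketch. It assumes a locally running Ollama server on its default port with a model like llama3 already pulled; the model name and the EULA snippet are just placeholders, not an endorsement of a particular tool.

```python
# Minimal sketch: ask a locally hosted model to explain a EULA clause.
# Assumes an Ollama server on its default port (http://localhost:11434)
# with a model such as "llama3" already pulled; both are assumptions.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local model and return its full reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON reply, not chunks
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

clause = "Licensee grants Licensor a perpetual license to all User Content."
print(ask_local_model(
    "In one plain sentence, what does this EULA clause let the company do? " + clause
))
```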

2 Likes

That’s something the non-technical public just hasn’t gotten yet: they are conflating AI in the LLM/deep learning sense, the monolithic systems “in the cloud” run by giant corporations like Microsoft and Google, with the concept in general, which you can run locally for your own uses with open source code. That isn’t going to “go away” or “fail” any more than, say, Linux has, because there’s no single company owning it that could go away.
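
To make the “no cloud required” point concrete, here’s a sketch using the open source llama-cpp-python bindings; the GGUF file name is a placeholder for any open-weights model you’ve downloaded, and nothing in it touches a network.

```python
# Sketch of fully local inference with the open source llama-cpp-python
# bindings (pip install llama-cpp-python). The model file name below is
# a placeholder for any open-weights model in GGUF format on your disk;
# no cloud service or company-owned endpoint is involved.
from llama_cpp import Llama

llm = Llama(model_path="./some-open-model.gguf")  # loads weights from local disk
out = llm("Q: Name one open source operating system. A:", max_tokens=16)
print(out["choices"][0]["text"].strip())
```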

4 Likes

I’m mostly concerned about our adversaries, be they countries or ransomware groups, weaponizing the technology. Will our defending machine learning models (and, eventually, true AI) be able to keep up?

5 Likes

As long as the decisions about “AI” are being made by the likes of Zuckerberg, Musk, Andreessen, and Altman, one can not only justifiably be a Doomer but can be one without coming off as clownishly as Eliezer Yudkowsky.

3 Likes

If humanity survives after this planet goes up in flames, then maybe I’ll start to worry about AI…

Too doomy?

1 Like

Will our defending machine learning models (and, eventually, true AI) be able to keep up?

Good news and bad news: Colossus: The Forbin Project (1970) : Free Download, Borrow, and Streaming : Internet Archive

3 Likes

Great point, and the open source communities are definitely going to be crucial to keeping this from belonging to just one company.

2 Likes

…yeah you got me waiting on tenterhook brand seat edging for Open Stuff Foundations or weird Global South fakes of them to fit an autocorrect in someplace asking ‘Did you mean Free as in Drip-Pricing Glibness?’ Oh Yagoo, you finally did OAuth so wrong it autocorrected wrapped content!

2 Likes

Defs not a lived-in look, but I’m stuck unpacking ‘up in flames’ as an aerosol problem…

4 Likes

Jah, no. Read that one before; I still don’t believe this story as told at all. Something crucial is missing here.

2 Likes

LLMs don’t really have a thought process; let me get back to that in a moment.

LLMs are best thought of as fancy autocomplete. I could believe that, if asked how to access a web site blocked by a captcha, somewhere in the mass of text ingested when constructing the LLM there were one or more reddit posts about AIs or other automated systems bypassing captchas by using Mechanical Turk or TaskRabbit or the like. So the most likely “autocomplete” to a question about accessing a captcha-protected site uses words and word orderings found in the text near those already-existing posts. As it constructs an answer using some of those words and phrases, the answer looks more and more like one of those messages, so it selects more words and phrases that align with it (plus some deliberately chosen “less likely” words and phrases, i.e. “mistakes on purpose”, which help the text look livelier and less computer-generated, and keep it from getting stuck in a loop due to hill-climbing issues).

There is no thought process beyond “what are the words that appear after ‘access a website protected by a captcha’ most frequently”.
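
To make that concrete, here’s a toy sketch of the autocomplete-with-mistakes-on-purpose mechanism. This is my own illustration, not anything from the article: the word counts are invented, and real LLMs work over learned token probabilities rather than a hand-written frequency table, but the sampling idea is the same.

```python
# Toy next-word "autocomplete" with temperature sampling. The counts are
# invented; a real model learns probabilities over tokens, but the
# mechanism is the same: sample the next word in proportion to (scaled)
# likelihood, so less likely words are sometimes chosen on purpose.
import random

# Made-up counts of words seen after the phrase "protected by a".
next_word_counts = {"captcha": 50, "password": 30, "firewall": 15, "moat": 5}

def sample_next(counts: dict, temperature: float = 1.0) -> str:
    # Scale each count to count**(1/temperature): temperature < 1 sharpens
    # toward the most frequent word, temperature > 1 flattens the
    # distribution so "less likely" words get picked more often.
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(list(counts), weights=weights, k=1)[0]

# Usually says "captcha" but sometimes wanders: exactly the
# "mistakes on purpose" that keep the output from looping.
print([sample_next(next_word_counts, temperature=1.5) for _ in range(8)])
```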

The “AI figures out a way around AI blockers” story is really “reddit figured out a way around AI blockers years before LLMs existed”, which is likely “scammers found a way around AI blockers”… (except scammers generally do that by setting up another web site that has things people ‘want’ (frequently porn) and man-in-the-middling captchas from the web sites they want automated access to)… plus “AI somewhat rewords things found on reddit; entire world loses its shit about how smart AI is now!”

The red team bit was probably added by someone after the fact. Alternately, some red team fooled themselves about what “reason out loud” really makes an LLM do, because it isn’t explaining what it was thinking; it is finding some text that looks a lot like “reason out loud about how you would get access to a captcha-protected web site” and autocompleting that. In other words, if someone on reddit had explained how they decided to bypass a captcha, that text would form the basis of the LLM’s answer, not any actual “thinking” the LLM does.

(And yes, LLMs aren’t exclusively created from reddit text, but it is a rich source, and it is way easier to think about LLMs with only one source of text to model.)

LLMs are pretty amazing, but to me the most amazing part is like watching an artist capture a scene without drawing anything even remotely photorealistic: LLMs produce responses that look like thinking without actually thinking. They are the two-color line art of thinking. The other analogy is playing one of those really crappy phone games where you can see how little effort was put in, how slapdash the art is, how the dialog (if any) is weak, superficial, and recitative, and how repetitive the gameplay is, and yet it still feels “somewhat fun”. A “minimum viable game” is vastly different from a “real game”.

1 Like

In other news: water is wet.

8 Likes

I fluv U for that

7 Likes