It also strikes me that there is still an awful lot we don’t know about how brains work… we know a lot more than we did even a half century ago, but lots of it still seems a big mystery… we know how things like LLMs work, because… we built them!
(Edited to reduce unwarranted snark, sorry) I know some people whose jobs have already been massively changed by LLMs increasing their productivity. For me it has been a fairly small boost so far, but still substantial.
You’ve got a lot of good points, some of which I agree with, and the rest of which aren’t really worth arguing against (especially in this kind of forum), because I can’t actually know how things will evolve from here. But my tentative expectations point in a very different direction from yours.
As far as what kinds of systems I wouldn’t want to exclude from consideration as potentially intelligent in the senses I care about: anything that might be sentient or sapient. Octopodes, given that their neural nets are set up very differently from ours. Aliens, whose evolved mechanisms for gaining intelligence plausibly won’t look like neurons at all. Future simulations that try to approximate features of how brains work, with varying levels of fidelity. AI systems with architectures we haven’t come up with yet.
Basically, the failure mode I worry about isn’t “I anthropomorphized a thing that clearly wasn’t sapient or sentient.” I do that anyway. The one I worry about is “I’m Spinoza saying animals can’t suffer because they lack souls.”
Is that the only metric of value with regard to AI and the like? Producing more wealth for a few people (which is the argument lots of critics have been making about AI)? Again, Brian Merchant’s argument (and the comparison with the Luddite rebellion) about this kind of technology being used to squeeze more out of workers and devalue their labor makes a lot of sense with this technology. It’s great to be more productive, but you have to ask what you’re being more productive for and what that means for the long term, in terms of how your labor is actually valued or undervalued. There certainly might be great value in freeing up some time for workers, but historically, that’s often been used to undercut workers’ independence and wages. Lots of tech workers seem to think that pattern can’t happen to them, because they are highly educated and skilled. So were the Luddites.
Quite frankly, some technologies are worth it, in that they are a legitimate help to our working lives. And others are created specifically to justify cutting our wages by devaluing our labor (often for a worse, automated replacement).
Sure… none of us can. My point is that far too many people are just sort of accepting the “wisdom” of Silicon Valley and its teleological narrative of how this stuff is inevitable and inherently progressive. Nothing is inevitable, of course, because change emerges out of people making choices. I’d much rather that the actual tech workers make those choices rather than CEOs chasing the next shiny tech bauble that they don’t actually understand. The path of technological change is a twisty and winding one, with lots of false starts and stops, and rerouting, for various reasons. We should certainly encourage innovations that make our lives easier (as workers, as consumers, as just… people)… but we shouldn’t mistake the hype-cycle for actual innovation if it’s just about fleecing as much value as possible from our pockets to line those of the already wealthy… History does tell us all that and more… it’s worth taking the hype-cycle a bit critically rather than just accepting the narrative and jumping on the next new thing…
I edited my comment to reduce snark, but didn’t post fast enough, sorry about the tone. And no, that’s not the only metric of value, not by any means. But tools that augment human thinking can be very powerful and we should care a lot how they’re used, and just how powerful they can become. I agree that these concerns are very valid, and the more capable AI becomes, the more valid they are.
And I agree with your points about the path of tech development. A big chunk of my job is about cutting through hype to help people understand the realistic potential of (some categories of) new technologies. My sincere expectation is that, because of AI’s current shortcomings (yes, obviously it does way less than it’s hyped to do), many people will severely underestimate its future impact as it matures. For good, and for bad, because right now no one developing the technology has enough understanding and control to ensure the effects are positive as capabilities increase. If I had your low opinion of capabilities growth, I wouldn’t be concerned, but I am concerned.
Sure, but that doesn’t mean we shouldn’t look at them critically, and even reject them, helpful as they may seem, if they have other impacts that outweigh the good… The environmental impacts alone seem like a tipping point we should consider here.
Well, but I am concerned, because there do seem to be people riding the hype-cycle who are determined to push through AI despite its failings/problems/limitations, etc. Because they think it will increase their wealth/power.
So we share that, then!
As far as the long term goes, I’d just like to mention that I’m one of those chumps who was actually doing things like worrying about taking vacations because of how much flights contribute to carbon dioxide emissions. It does not feel great to have all such efforts rendered meaningless by burning more coal just so google can tell people to eat glue and rocks and amatoxins.
It does not feel great to have all such efforts rendered meaningless by bringing on more coal plants just so google can tell people to eat glue and rocks and amatoxins.
This one I’m not worried about. Not for long, anyway. It doesn’t really make economic sense. I do hope we might finally start valuing having abundant electricity and reform our permitting processes for renewables and advanced nuclear, though. That way if AI has another winter we’ll at least have gotten some useful regulatory improvements out of it.
The environmental impacts alone seem like a tipping point we should consider here.
Maybe. Maybe not. I think one of the second order effects of improving AI is “we can push against several kinds of deep technological constraints much more effectively for significant environmental benefit, if we can manage to not be stupid about it.” Yes, that’s a big and very questionable “if.”
there do seem to be people riding the hype-cycle who are determined to push through AI despite its failings/problems/limitations, etc. Because they think it will increase their wealth/power.
Yes, and we definitely want to minimize the harms those people are likely to cause. That’s going to require regulation, a lot more and stricter regulation than I’d want to have for almost any other technology. To bring it back to the OP for a moment, I don’t think that looks like what I see the RIAA as doing here: trying to shoehorn AI music startups into our existing music copyright regime, not because they made a copy but because they looked at a copy before making something that legally isn’t a copy.
Bitcoin didn’t make any economic sense at all, but it was still pushed like crazy by so many people hoping to turn energy straight into money. Generative AI is basically being used as a new version of the same; it’s being forced into everything because investors are desperate for infinite growth rather than because it’s useful. Sure, that won’t last forever, but it is going to have consequences that matter a lot more than precedents on rights for octopuses, or probably even a handful more cancer drug candidates.
Bitcoin didn’t make any economic sense at all, but it was still pushed like crazy by so many people hoping to turn energy straight into money.
This one still baffles me. “Let’s get people to do as much meaningless math as possible, then agree to reward the ones who did the most.”
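For anyone who hasn’t looked under the hood, here’s a minimal sketch of the “meaningless math” in question: proof-of-work style hashing, written as toy Python with a simplified leading-zero-bits difficulty check, not the actual Bitcoin protocol.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so that sha256(block_data + nonce) falls below a
    target. The only way to find one is to burn compute on blind guesses."""
    target = 2 ** (256 - difficulty_bits)  # more difficulty bits = smaller target = more guessing
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # whoever finds a valid nonce first claims the reward
        nonce += 1

# Toy example: low difficulty, so it finishes in roughly a quarter-million guesses.
print("found nonce:", mine("some transactions", difficulty_bits=18))
```

The real network keeps raising the difficulty as more hash power joins, so the total energy burned scales with how many machines are competing, not with any useful output.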
GenAI at least has genuinely valuable uses, some already clear, some still unclear.
It’s worse than that. It didn’t even reward the people who burned megawatts on having their GPUs do math. They were the marks. It’s the exchanges that made bank; well, some of them.
True. I should have said “agree to pretend to reward.”
We do… the problem is that much of it comes from fossil fuels…
It OBJECTIVELY is right now. And that seems like a bit of greenwashing if you ask me.
Right now, it is contributing to climate change. FULL STOP. That’s not up for debate as it’s a fact.
Which isn’t happening right now and won’t happen with the current right wingers running the GOP with the backing of some deep Silicon Valley pockets…
It benefits a few wealthy people, not all of us. So far, it’s the same with AI…
I’m really not convinced by the vast majority of “uses” it’s been put to… Even in cases where it’s helped people in specific fields, the insistence on using it is primarily to replace labor, not improve the work of labor.
I disagree that we have abundant electricity. We have, for now, enough. We have many regions of the country that expect to have, or to come very close to having, blackouts and brownouts in the summer months, yet are not approving enough new renewable power supply or making needed grid upgrades or modernizing how they manage the grid and sell power to enable higher renewable power penetration rates. Moreover, if we actually do continue to make progress on decarbonizing, there’s going to be a massive increase in electricity use. Hopefully fewer cars - but still a lot of EVs. Hopefully better logistics - but still a lot of commercial EVs, or fuel cell vehicles of one sort or another, or similar. We’re going to need to electrify a lot of equipment and processes that currently run on fossil fuels directly. Not just in industry, but things like heating homes (heat pumps to replace or supplement furnaces).
Right now, it is contributing to climate change. FULL STOP. That’s not up for debate as it’s a fact.
Granted, yes, absolutely true, but so does everything that uses any energy at all. Using that as the metric also rules out a huge proportion of the (much more obviously beneficial) actions we can take to fight climate change. Manufacturing just about anything contributes to climate change, whether the net impact is positive or negative over the course of a product’s lifecycle. It is not yet clear to me whether the net climate change impact of current AI development, even with the current mix of energy sources, will be positive or negative when we look back on it from, say, the 2040s.
That said, if OpenAI came out and said LLMs are good for climate change, I’d also call BS because that’s obvious nonsense. The companies developing AI are consuming a lot of electricity and not doing anything to fight climate change. But, some of their customers are. As one example, one of the side effects of having better AI could be that researchers in different fields become more efficient at developing better materials/processes/chemicals/products; I find it very easy to see how that could outweigh the amount of fossil fuels we are putting into AI development. Another possibility is that AI’s need for continuous electricity contributes to political pressure to push through changes that make it easier to permit and construct geothermal, nuclear, and grid scale energy storage systems to make better use of wind and solar. That would be a massive win for decarbonization.
We do right now.
Like I said.
Which is why we should move off fossil fuels…
Yeah I know… that’s a key political debate right now.
maybe, maybe not.
that will necessitate other changes, such as a much more robust public transit infrastructure. Also a key debate in politics right now.
Much like NFTs and Bitcoin, these are extra pressures that aren’t doing that much good for those of us who aren’t tech-dude-bros.
That’s just a whataboutism talking point. We all understand that about manufacturing. That’s not what’s under discussion, though.
Um… plenty of people have said that these intensive uses of computing WILL be good for the environment. They say shit like that because they are full of bullshit and seeking to pump the discourse so full of it that we can’t see facts when they are right in front of us. These are bad for the environment right now.
Such as? Are they or are they just saying that they are?
We can do that without AI. We’ve spent eons doing just that…
We need to completely move off fossil fuels. We all know that to be true.
So that they can use it rather than it going towards other, more universally necessary purposes. As long as it’s primarily private-sector, for-profit companies driving this, especially those pushing a particularly anti-government worldview, we can’t count on them to do what’s right FOR all of us, not when it clashes with their bottom line.
Saying everything takes energy is burying your head in the sand regarding how demanding these models are. I mean, they’ve been bringing coal plants back online specifically to address the increased demand from them, demand that’s been compared to that of whole cities.
And that’s partly because of a fundamental difference from manufacturing – there is no inherent cap. When you make a product, it takes however much energy and you are done; with generative AI there is no limit to how much you can throw at it in hopes of a slightly better result. Again, it’s like Bitcoin, in that the more energy they use, the better it theoretically is for them. Which means it’s going to keep being an energy disaster regardless of whether people end up finding it useful or not.
But energy efficient A.I. is just 20 years away.
Just like cold fusion, huh?
Or hot fusion.
Jazz fusion is a thing that actually exists though… right?