It’ll likely be the Hunger Games, in simple terms. You’ll have a rich oligarch “city” or area that has all its resources provided by the peasants. Of course this is a narrow view of the future, where only a few moving pieces have been taken into account.
With climate change, political instability, resource availability, socioeconomic issues, novel viruses, and a whole host of other things, the next 50 years are going to be a rocky ride. Maybe we get the Hunger Games, maybe Terminator, maybe Finch, maybe World War Z. Maybe we could shoot for a rom-com or Bollywood for once.
Sure, but we (speaking as an American) have spent decades not teaching basic skills to people. A lot of people don’t know how to fix their sink if it clogs, or don’t want to. At the same time we have been bashing blue collar work in pursuit of higher education degrees, making the people who could fill those needed jobs scarcer and more valuable. I do a lot of my own repairs and DIY. I have no issue paying someone $100 for doing work that I won’t or can’t do. It certainly won’t be for fixing a clogged sink. (My septic lift pump failed at one point, and that was $1400 for a 4 hour job, which broke down to roughly $400 in parts…so only $250/hr.)
But as “AI” is capital driven, this is the exact opposite of what is intended. We’ve been here before, with the Luddites, and with Keynes predicting mass leisure due to incredible gains in productivity. I’m marching in the army of General Ludd here. They were right. Automation, technology, and machines are not the problem. The concentration of wealth, and of the gains made from them, is. Silicon Valley techbros, VC scum, and CEOs are our sworn enemies. They wish us harm. They aim to destroy the world (which the emissions from mass adoption of AI will hasten) in order to corner even more obscene wealth for themselves and to impoverish the existence of everyone else.
It’s why they are so severely dumb that they believe the threat of AI is it becoming “more intelligent” than us and taking over the world. That their mindset is incapable of imagining the actual, real threats of AI is grimly funny. They are so captured by the Ayn Rand idiocy that they have no idea what is going on.
Anyone with a vaguely positive futurist or economist outlook who sees changes that reduce labor or costs seems to jump immediately to the idea that this will be good for laborers: laborers will get compensated the same for doing different work, and products will be cheaper. But:
- Laborers aren’t necessarily left with a job they enjoy or are well suited to. And we know how folks love to hear they’re going to have to retrain.
- They may not be as well compensated for the new work now that there’s not as much demand for them.
- Instead of prices dropping for the typical consumer, the savings simply generate additional profits for the owners of capital.
So I view labor-saving technological advancements as inevitable (if your business doesn’t adopt them, someone else will, and undercut you). But they are far from certain to be a benefit to most folks.
The real issue is greed, not technology. Age-old “love of money is the root of all evil” greed.
AI and technology overall are just tools that can be used for good or bad ends. AI (so far) is an extremely capital intensive technology, so it lends itself to exploitation by the most massive tech companies. They may use it for automated evil at massive scale. But with careful intent, design, and governance it could be used for great good.
In America our real problem with technology, and with large business generally, is unfettered, runaway greed. Greed that has insinuated itself into our social and political systems, and into our individual politicians, business leaders, and citizenry.
But capitalism doesn’t have to be bad, unless it’s in an environment infected with greed, and unless it’s left unrestrained to follow its natural trend toward excessive concentration of wealth and power.
Nor is socialism inherently good, if it’s corrupted by greedy people hungry for power and wealth. The history of Russian communism is a perfect example: it began in a bloody conflict for power and devolved into kleptocracy.
Our core problem is with people. And with the systems and structures people build.
You have an important point there, true. I don’t disagree with you.
The “guns don’t kill people, people kill people” argument gets wrongly used to suggest that all people should be able to own all kinds of weapons. So everybody should get to own assault weapons. Except certain people society dislikes enough to keep in prison.
I strongly disagree with that fallacy. It’s killing our sisters, brothers, parents and children.
Assault weapons do need to exist in a world where militaries exist, but I argue they are military weapons that only the military should have. And perhaps similar military-like government groups, carefully limited. By which I mean that militarizing law enforcement is a really dangerous thing.
Technology (and business, and people generally) certainly can be dangerous. A liberal democratic society needs to engage with, understand, and restrict the dangers in technology. As it should with people and businesses.
That’s a political and a judicial process. Which is what we are discussing here.
It’s also a very HARD and fraught process in the case of technology. Because technology is often hard to fully understand. Also because it evolves so rapidly. (The problems arising from the accelerating pace of change all around us are a subject for a whole other discussion.)
I hope we can do it well. But in our current world, so fractured with hatred and evil as it is, I am very fearful. Because people.
Yes, we should implement many technologies more slowly. According to the best we can anticipate about their potential for harm. And the complex automation that is being called “AI” clearly has huge risks for massive overt harm. And for massively subtle harm.
But if anyone thinks we can build “safe AI” or “good AI” they’re mistaken. That’s a little like asking for safe assault rifles.
If we don’t regulate how AI is used, and enforce those regulations, then we will fail. People and businesses will always find ways to circumvent restrictions.
That is the people problem I’m referring to.
People and businesses will misuse it. And it can’t be un-invented, or even stopped. We must have truly effective sanctions against misuse.
And that is going to be a very hard thing to do.
Because politics. And greed.
Because people.
Also, like the network effects of internet applications, the value of misusing complex automation is going to be huge for the gigantic corporations that currently are the only ones able to use and deliver it at scale. So they will strongly tend to ignore regulations and just “pay the fine”.
Unless we can find the political will to enact truly existential sanctions for corporations.
We need death penalties for corporations, and the will to use them.
i think he means “naturally” in a universal sense. that it’s natural for intelligence to arise, not just on earth among humans, but at large.
as for ai, from the full interview:
"A first prediction of mine is that the only way to produce really powerful AI programs to use evolution. Because it’s literally impossible to write them from the ground up… Your personality comes not only from your software, but from your full body: the sense organs, the emotional flows, the lusts and hungers and fears. "
he believes ai can’t be manually programmed; it requires a level of complexity and physicality computers haven’t yet reached (and won’t any time soon) – though in a small way maybe chatgpt, etc. are examples of that, because they aren’t directly programmed either.
i think he knows that:
“Never ever ever implant a hardware device into your brain. Come on. You know that all of that shit malfunctions, and a lot sooner than you’d expect. Plus, if the thing’s inside your head, you’re not going to be able to turn it off, even if they claim that you can. You’re gonna have ads in your dreams, all night long, for the rest of your life.”
indeed. and i think if you read the interview you’d see the specific context.
his point, as i read it, is that you can’t trust things from corporations, because of the nature of what they make.
with ai, yes he assumes we’ll all get by somehow and that it’s not an existential crisis. and maybe that is being overly optimistic.
in his books he often takes the point of view that once a society produces real ai, that ai would do all the things that people do, for all the same reasons, good and bad.
maybe he should instead be focused on what happens before that point? ultimately the interview is him jamming with a friend about the what if and the someday - i don’t know if every such thing has to be so deep.