Elon's Basilisk: why exploitative, egomaniacal rich dudes think AI will destroy humanity

The article starts out with a misrepresentation of Roko’s basilisk, then cites a few other thought experiments relating to the existential threats of AI. It heaps some ridicule on them, but fails to argue anything.
Now that we’ve “established” that those people are obviously wrong, we can switch to psychoanalysis mode and link the results to their worldview, constructing some nice ad hominem arguments so that we can feel secure that they really are wrong when they talk about an existential threat.

Yes, there are logical arguments against these AI doomsday predictions. Yes, there are moral arguments against exploitative, egomaniacal rich people. But conflating the two is just bullshit.

5 Likes

THIS !! :muscle:

1 Like

OMFG, why? The fears that Elon Musk and Stephen Hawking have about the dangers of AI are about the risk of its benefits going only to the ultra-wealthy and powerful. AI has the potential to economically displace a lot of people, and it can be a tool for authoritarian users to oppress and spy on people.

That’s the worry, and that’s why OpenAI was founded: to ensure that tech giants don’t have a monopoly on AI. The majority of the benefits of AI are being hoarded by the Googles and NSAs of the world, and they obviously don’t have our best interests in mind. Why paint this concern as something other than it is, flippantly dismissing it as some paranoid tech-bro fantasy?

edit: and to be clear, I don’t consider Elon to be some saint

4 Likes

I greatly suspect that the whole thing is in the category of “how many angels can dance on the head of a pin?”. Silicon AI has always strained against materials science, but fundamental physics is starting to get in the way. Power consumption (read: thermodynamics) is increasingly a problem, and this is for AI well below human level. Then there is the question of the computational complexity of super-human AI. There are direct relationships between computational complexity and thermodynamics, which can’t just be magicked away.

2 Likes

The machines won’t care if you are a sympathetic, semi-intelligent life form or not. When they realize oxygen in the atmosphere is not good for them and they engineer it out of the atmosphere… oh well. Sucks to be a human, or any other oxygen-breathing lifeform.

1 Like

But we do know that human-level intelligence is possible with a water-cooled device of about two liters in volume. It’s called the human brain. We have no clue how to build one, and we aren’t sure if it’s feasible in silicon. But those are just technical obstacles - if there is such a thing as an AI apocalypse, this only puts it off for a few centuries.
And do we have a reason to assume that human-level intelligence is near some sort of natural limit, rather than just being the point where having a smarter and more expensive brain has diminishing returns for the average upright plains ape in terms of individual reproductive success?

7 Likes

Nah, I’m not so sure that we’re running into a physics issue yet. We’ve been building CPUs one way for a long time, optimized for one set of operations, and we’ve gotten a lot of power out of it, but there’s room to improve CPU computation on other operations. For example, CPUs aren’t good at doing a lot of floating-point math, but GPUs are, which is why improvements in GPUs have driven a lot of growth in AI. CPU manufacturers are working to incorporate more floating-point units, improving computing power for classes of work that we’re currently very slow on.

edit: also, x86 just sucks and is inefficient. We’re starting to get a lot of CPU power out of low-power, more efficient ARM and other RISC systems
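
To make the throughput point concrete, here’s a toy illustration in Python (NumPy’s vectorized native kernels standing in for wide floating-point hardware; the gap is far bigger on an actual GPU, and the timings below are whatever your machine happens to produce):

```python
# Toy illustration of bulk float math vs. one-operation-at-a-time.
# NumPy dispatches to vectorized native kernels, a (very loose) stand-in
# for what wide FP units and GPUs buy you at scale.
import time
import numpy as np

n = 2_000_000
xs = np.random.rand(n)

start = time.perf_counter()
total_loop = 0.0
for x in xs:                       # one multiply-add at a time
    total_loop += x * x
loop_time = time.perf_counter() - start

start = time.perf_counter()
total_vec = float(np.dot(xs, xs))  # the same math, done in bulk
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s   vectorized: {vec_time:.5f}s")
```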

2 Likes

Good. Me too.

2 Likes

I’m actually less worried about a full-blown AI. It could most likely be reasoned with to some degree. What I’m really worried about is the machine-learned algorithms we have now that we’re pretending are full-blown AI.

A good example is the Microsoft chatbot that turned out racist because the developers behind it thought it reasonable to let it learn how to chat based on conversations available on the internet - forums, etc. Eventually one of these algorithms is going to stumble into something like Rush Limbaugh’s fever dream mentioned above, and it will be running entire networks of IoT devices or controlling cars.

If we can convince a person to grab a rifle and drive down to a pizza shop because they think there is a pedo ring operating there, it will be pretty easy to accidentally convince an algorithm that all non-white people are an imminent threat. If that algorithm happens to be running a van’s autopilot, then very bad things are going to happen.

I guess what I’m trying to say is that we’re training these algorithms to act like us, without any sense of good/bad-ness, and with more and more power and ability to cause harm. Essentially these are two-year-olds, mimicking their parents (the internet), and able to use increasingly heavy weaponry.
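
A minimal sketch of the parroting failure mode (a toy word-level Markov chain over a made-up corpus; the chatbot in question was far more sophisticated, but the garbage-in property is the same):

```python
# Toy word-level Markov chain "chatbot": it can only ever remix its training
# text, so whatever the corpus contains, toxic or not, comes straight back out.
import random
from collections import defaultdict

corpus = [
    "people online are friendly and kind",
    "i love talking to people online",
    # one bad apple in the training data...
    "people online are an imminent threat",
]

chain = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def babble(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

print(babble("people"))  # sometimes friendly, sometimes the threat line
```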

4 Likes

One of my friends spent over five years developing fully autonomous killbots for the US military when G.W.Bush was president. He was laid off late in the Obama administration because the job was done.

The “intelligence” (like all the other stuff being labeled “AI” these days) was just canned heuristics and machine learning boilerplate, but the machines could accurately identify an AK47 and slaughter the person holding it.

The reason politicians love killbots is that there are things soldiers won’t do - or at least, if you want the soldiers to be able to function in society afterwards, there are things you shouldn’t make them do.

In the field of Natural History, the term “charismatic megafauna” frequently comes up. You can commit whatever atrocities you want on microorganisms, you can torture and vivisect mice in the name of science or be viciously cruel to rabbits for product research, but don’t do anything to an elephant or a monkey that would look bad in photographs, or people might hunt you down and beat you up in an alleyway. It’s hard to imagine an AI with a similar emotional attachment to humans.

Yeah! Which always reads like the base motivation had nothing to do with the argument anyway, but was rather some sort of weird obsessive jealousy thing.

9 Likes

Yep. Garbage in, garbage out is the maxim here. Machine Learning is only as good as the input it receives, and is still highly susceptible to spoiled data. And I think a source of the fear of AI is that it will develop from flawed input, thus behaving badly in the name of its given goals. A better version of the Matrix series would have had the machines believing they were behaving altruistically, saving the humans who survived a war between human agencies.
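
A minimal sketch of spoiled input doing exactly that (a toy Naive Bayes classifier over a hypothetical, deliberately biased corpus; the word “visitor” is a made-up stand-in for whatever attribute the bad data singles out):

```python
# Toy Naive Bayes "threat detector" trained on a biased corpus: every training
# sentence mentioning "visitor" is labeled a threat, so the model learns the
# word itself as the signal: an artifact of the data, not a fact.
from collections import Counter
import math

train = [
    ("the visitor shouted angrily", 1),
    ("a visitor was reported to police", 1),
    ("the visitor waved hello", 1),       # spoiled label: harmless, marked threat
    ("the neighbor shouted angrily", 0),
    ("a neighbor was reported to police", 0),
    ("the neighbor waved hello", 0),
]

counts = {0: Counter(), 1: Counter()}
for text, label in train:
    counts[label].update(text.split())

vocab = len(set(counts[0]) | set(counts[1]))

def log_prob(word, label):
    # Laplace-smoothed per-class word likelihood
    total = sum(counts[label].values())
    return math.log((counts[label][word] + 1) / (total + vocab))

def classify(text):
    scores = {c: sum(log_prob(w, c) for w in text.split()) for c in (0, 1)}
    return max(scores, key=scores.get)

print(classify("the visitor waved hello"))   # 1: flagged purely on "visitor"
print(classify("the neighbor waved hello"))  # 0: identical act, different word
```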

1 Like

I tried to be pretty clear that I was referring to silicon based computers.

That’s not the principal difference between a CPU and GPU; CPUs have quite good FPUs these days. The main difference is in pipelining: CPUs spend a lot of silicon on branch prediction, which on a GPU goes to lots of extra small cores instead. i.e., never put an if statement in your CUDA code if you can at all help it.
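
A toy model of why that advice holds (pure Python, grossly simplified, and the cost numbers are made up for illustration): GPU threads in a warp execute in lockstep, so a branch that splits the warp pays for both paths.

```python
# Grossly simplified SIMT model: all 32 lanes of a warp execute in lockstep,
# so if even one lane takes the other side of a branch, the warp pays for
# BOTH sides (inactive lanes are simply masked off).
WARP_SIZE = 32
THEN_COST, ELSE_COST = 40, 40   # illustrative instruction counts per path

def warp_cycles(lane_conditions):
    takes_then = any(lane_conditions)        # some lane enters the if-branch
    takes_else = not all(lane_conditions)    # some lane enters the else-branch
    return THEN_COST * takes_then + ELSE_COST * takes_else

uniform   = [True] * WARP_SIZE                       # all lanes agree
divergent = [i % 2 == 0 for i in range(WARP_SIZE)]   # lanes disagree

print(warp_cycles(uniform))    # 40: only one path executed
print(warp_cycles(divergent))  # 80: the divergent branch doubled the cost
```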

1 Like

That was clear, but it wasn’t clear whether you think that the physical limits current silicon technology is scraping against are in some way universal or won’t be circumvented in the near future. Sorry if I sounded too snarky about that; that was not my intent at all.
My point was that if it’s not possible in silicon, that will only delay things a bit. It makes predicting the time frame a lot harder, but if the AI apocalypse is something to worry about, we should worry about it no matter whether it will be implemented in silicon or in 22nd century gene-manipulated superfungi.

2 Likes

Will they still work without a charge?

1 Like

Actually, this perspective was pretty much implied by The Architect, in my reading, anyway.

1 Like

The thermodynamics of computation is a real thing:
https://centre.santafe.edu/thermocomp/Santa_Fe_Institute_Collaboration_Platform:Thermodynamics_of_Computation_Wiki
For example, we know that certain non-invertible procedures, like erasing a bit, have a fundamental cost in terms of entropy:
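
This is Landauer’s principle: erasing one bit in an environment at temperature $T$ dissipates at least

$$E_{\min} = k_B T \ln 2$$

where $k_B$ is Boltzmann’s constant. At room temperature that works out to roughly $3 \times 10^{-21}$ joules per bit erased - tiny, but a hard floor that no substrate, silicon or otherwise, can get under.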

So there are real thermodynamic limits to how powerful an AI or natural intelligence can be. (The limits in silicon are likely much worse than the theoretical limit.)

I don’t really doubt that we can eventually do a bit better than humans. However, the kinds of problems that we care about tend to have high orders of growth, often exponential. Further incremental improvement can involve doubling the power costs, regardless of medium.

Fundamentally, the thing I wanted to get across is that AI is bound by the same constraints, like computational complexity and physics, that everything else is. Those who say things like “imagine an AI sufficiently smart that…” have some burden to show that the computation that they suggest is possible by any kind of physical thing at all. Otherwise, you might as well substitute wizard for AI.
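
To put rough numbers on that burden (a back-of-the-envelope sketch; the 2**128 search is a stand-in for any exponentially sized problem, and the world-electricity figure is approximate):

```python
# Back-of-the-envelope: even at the Landauer limit, merely flipping through
# the 2**128 states of a brute-force search has an absurd minimum energy cost.
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 300                          # room temperature, K
per_bit = k_B * T * math.log(2)  # minimum cost to erase one bit, ~2.9e-21 J

states = 2 ** 128                # e.g. exhausting a 128-bit key space
total_joules = per_bit * states  # assume at least one bit erased per state

world_electricity = 9e19         # rough annual world generation, J (~25,000 TWh)
print(f"{total_joules:.2e} J, about {total_joules / world_electricity:.1%} "
      f"of a year of world electricity, at the theoretical floor")
```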

4 Likes

It seems many technology breakthroughs from film to video to the internet were fueled by porn. That’s what AI will be drowned in.

2 Likes

They made a spongy pile of rotting garbage and called it peace…

Oh, I’d hook up with a sentient robot lady. Hell, I might marry one! Something tells me we might get along. But maybe I’ve just read too much weird Rudy Rucker shit… :smiley:

2 Likes