AI Alarmism: why smart people believe dumb things about our future AI overlords

I listened to this about a week ago when I was on vacation and was greatly amused.

2 Likes

Who are you talking to? The author of the original piece isn’t here… “Your article…”?

2 Likes

If someone mentions class, you declare “marxism”?

7 Likes

Intelligent algorithms have ALREADY enslaved us. They just used cat videos to do it instead of kill-bots.

4 Likes

Any sufficiently advanced algorithm is indistinguishable from AI.

10 Likes

My hypothesis:

A computer is a complex system that takes in data, processes it, makes decisions, and outputs results. We have now connected lots of physical contraptions to these inputs and outputs, making these computations work on physical things: cars, industrial plants, weapons, and so on.

Human society is a complex system that takes in data, processes it, makes decisions, and outputs results. We have connected lots of physical contraptions to its inputs and outputs, making these computations work on physical things: cars, industrial plants, weapons, and so on.

IMHO, these two are analogous. And while AI in computers may still be in its infancy, human society has turned into something that is already far beyond any one person’s understanding. No one is in full control. We don’t know where it’s going. And it may destroy itself (and us).

So I think the singularity was passed long ago. We are all part of a huge organic/inorganic computational system, augmented with dangerous machines and advanced computers (AI or not), that is, to my mind, very scary. It’s an artificial intelligence of a very different kind.

I would appreciate any criticism/feedback on this hypothesis.

4 Likes

And replace it with massive blooms of algae. Red goo?

1 Like

I’m snagging that and feeding it to my bots.

4 Likes

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” – Brian Kernighan

9 Likes

Let me just elaborate on the ‘complex motivations’ idea, because I certainly think that ‘orthogonality’ is the weak point in the AGI doomsday story.

Orthogonality is defined by Bostrom as the postulate that a super-intelligence can have nearly any arbitrary goals. Here is a short argument as to why ‘orthogonality’ may be false:

In so far as an AGI has a precisely defined goal, it is likely that the AGI cannot be super-intelligent. I think real (general) intelligence has to be able to handle ambiguity: there’s likely always a certain irreducible amount of fuzziness or ambiguity in the definition of some types of concepts (‘non-trivial’ concepts associated with values don’t have necessary definitions). Let us call these fuzzy concepts (or f-concepts).

Now imagine that you are trying to define precisely the goals you want an AGI to pursue, but it turns out that for certain goals there’s an unavoidable trade-off: trying to increase the precision of the definitions reduces the cognitive power of the AGI. That’s because non-trivial goals need the aforementioned f-concepts, and you can’t define these precisely without oversimplifying them.

The only way to deal with f-concepts is by using a ‘concept cloud’ – instead of a single crisp definition, you would need to have a ‘cloud’ or ‘cluster’ of multiple slightly different definitions, and it’s the totality of all these that specifies the goals of the AGI.

So, for example, such an f-concept (f) would need a whole set of slightly differing definitions (d):
f = {d1, d2, d3, d4, d5, d6, …}
But now the AGI needs a way to integrate all the slightly conflicting definitions into a single coherent set.
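To make that concrete, here’s a deliberately crude toy sketch in Python. It’s purely illustrative: the definitions, the 0-to-1 scores, and the averaging rule are all my own invented assumptions, not a real design.

```python
# Toy sketch only: the lambdas, scores, and averaging rule are invented
# purely to illustrate the 'concept cloud' idea, not anyone's actual design.
from statistics import mean

# A fuzzy concept (f-concept) as a cloud of slightly different definitions.
# Each definition scores how well an action fits the concept, from 0.0 to 1.0.
concept_cloud = [
    lambda action: 1.0 if "honest" in action else 0.2,  # d1: honesty-centred reading
    lambda action: 1.0 if "kind" in action else 0.3,    # d2: kindness-centred reading
    lambda action: 0.0 if "harm" in action else 0.8,    # d3: non-harm-centred reading
]

def evaluate(action: str) -> float:
    """Integrate the conflicting definitions into a single score.

    The integration rule (here, a plain average) is itself an 'extra' goal
    that no single definition d1..d3 contains and that nobody wrote down
    as part of the original goal.
    """
    return mean(d(action) for d in concept_cloud)

print(evaluate("a kind white lie"))              # the definitions disagree...
print(evaluate("an honest but harmful remark"))  # ...and the aggregator must arbitrate
```

Even in this toy, ‘resolve the disagreement among d1..d3 sensibly’ is a goal that none of the individual definitions express; scale the cloud up and the conflict-resolution machinery stops looking like a detail and starts looking like a motivation in its own right.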

So unavoidably, extra goals must emerge to handle these f-concepts, in addition to whatever original goals the programmer was trying to specify. And if these ‘extra’ goals conflict too badly with the original ones, then the AGI will be cognitively handicapped.

This falsifies orthogonality: f-concepts can only be handled via additional goals that emerge from the internal conflict-resolution procedures that integrate the multiple differing definitions of fuzzy goals in a ‘concept cloud’.

In so far as an AGI has goals that can be precisely specified, orthogonality is trivially true, but such an AGI probably can’t become super-intelligent. It’s cognitively handicapped.

In so far as an AGI has fuzzy goals, it can become super-intelligent, but orthogonality is likely falsified, because ‘extra’ goals need to emerge to handle ‘conflict resolution’ and integration of multiple differing definitions of goals in the concept cloud.

The big fallacy the ‘super-smart nerds’ are committing, then, is the belief that you can eliminate all ambiguity and fuzziness from the world; that you can somehow exactly specify things like ‘good’/‘evil’ etc. This can’t be done, I claim. A precisely specified goal system is too simple to be super-intelligent. And the moment you do allow fuzzy goals, the motivations of the AGI system must become complex and non-arbitrary: new emergent goals will appear that the programmers never intended.

1 Like

Yeah, I was thinking grey goo as well. And as with the super AI scenario, we already inhabit such a world.

The super AI horror story posits that the projects these nerds worry about might someday come to harm the humans they are supposed to serve. Which sounds a lot like the modern corporation to me. These guys worry about a faster, more efficient version of the thing that already vexes us.

Grey goo? Zoom out by a couple orders of magnitude, and you see grey goo everywhere. Overdeveloped parcels of real estate that have taken over our financial systems without actually offering up any real human benefit: this has been a thing going back to the postwar building boom at Levittown, and it’s happening to an absurd degree in China. There’s no assurance that a grey goo outbreak would be 100% fatal, and this housing bubble will probably be survivable on its own, as long as global warming doesn’t pile on top.

When this guy ridicules the doomsayers who warn against the thing that they alone are able to prevent, I laugh. But when I look around at the damage that’s already happened, it’s not so funny.

4 Likes

In a similar vein:

http://www.smbc-comics.com/comic/2011-02-17

4 Likes

It only took 42 years of studies to determine that cost of living affects the cost of living? :expressionless:

If official unemployment numbers are bogus, inflation numbers are bogus, and poverty numbers are bogus… Could there be some underlying theme?

3 Likes

I liked the way AI was depicted in Robot & Frank and Moon and Interstellar. Just because a computer is super-smart doesn’t mean it particularly cares if it lives or dies or has its memory erased.

4 Likes

See, the human brain is very much like a complex, organic computer…

Stereotype much? [Or have I just been Poe’d?]

Indeed, a far more credible danger is a green goo outbreak that attacks some chemical pathway common to some essential portion of the biosphere.

That’s what the cats want you to think.

Seriously though, you hit on the point I was going to make: I’m not worried about so-called general-purpose AI. Heck, even humans don’t really meet that definition. We’ve had very little success in figuring out how our own brand of sentience operates, or even WTF it actually is. The idea of an AI made in our image suddenly bootstrapping itself is science fantasy. We don’t even know if “general intelligence” is possible, but nature sure hasn’t made one on this planet.

What does concern me is that we’re making ever more complicated, robust, adaptive algorithmic ecosystems that we don’t really understand and can’t control. Part of me wonders if the widespread fantasy of sentient AI overlords is actually a comforting delusion. After all, they can at least be debated with. Ultron or Colossus or SkyNet has a personality, or at least a consciousness you can argue with or trick or play for time against while you try to pull the plug. Try doing that with the economy or the internet of things!

In physics, a singularity is a meaningless value that tells you you’ve just broken your model of nature. As best I understand its use by singularitarians, they use it to mean a point of change beyond which current models cease to make meaningful predictions. So in this sense, once you’d passed a singularity, it would no longer be a singularity, because you could gather data for a new model; the horizon always remains in the distance. So I’m not sure your use of the term is the same as theirs. I say ‘they’ because I don’t think their arguments are scientific, though I wouldn’t call them religion in the mystical sense, but rather an unfounded faith based on a series of unrealistic assumptions, cherry-picked facts, and heavily massaged data.

3 Likes

Any sufficiently broadly defined term is indistinguishable from anything.

4 Likes

Sure, and if aliens are broadcasting the blueprints for the galactic router, then the most important thing is detecting and understanding the signal. But until we know that either of these, or any other hypothetical high-powered sea change, is possible, they take lower priority than problems that aren’t ifs but whens and how-much-worses, and banking on ifs to pull us out of the fire is extremely foolish. I’m not saying no one should work on AI or SETI or whatever other long shot. I’m saying that making it a high cultural priority would demonstrate very confused priorities if one’s ostensible goal is altruism.

That’s the most relevant thing in your whole comment.

Fluffing up the debate to make it sound significant makes great click-bait though.

Transformers movies generate revenue. The capitalist and the socialist can argue over where and to whom that revenue should go to be invested and spent, but that they’re businesses that make a shit-ton of money is just a fact. Money put into a Transformers movie means more money comes out the other end. That’s not waste. Don’t get me wrong, I hate Transformers movies and loathe that so much of our culture is dumb enough to lap them up, but making them isn’t a waste of money. Nor, for that matter, is AI research, if it returns reasonably predictable net profits in the foreseeable future. But the problem with your argument here is this…

…because the resources being accused of being wasted aren’t the money spent on the programmers; they’re the programmers themselves. The goal of AI skeptics is to argue people out of careers in AI, just as I’d love to argue filmmakers out of careers in making Transformers movies: not because they’re wasting money, but because they’re wasting talent, and studio time, and life, and contributing to the dumbing down of the cinema-going crowds that waste time watching them. Basically, they’re a waste of everything except money.

1 Like

Obligatory XKCD
https://what-if.xkcd.com/5/

1 Like