Effective Altruism bogged down in semantic games

I can see how you could arrive at that reading, but I don’t think he means that there’s some magical property of our bodies that cannot conceivably be replicated. He’s deliberately waxing poetic to counter a sort of nihilistic rejection of the human body. A secondary concern of his, I believe, is that some people greatly understate the complexity and interrelatedness of our bodies and minds. But his primary concern, I think, is to emphasize the value of the human beings we know exist, and to argue that their value is denigrated by an exaggerated concern with the value of things that only hypothetically could exist.

I don’t recall Carrico ever really saying anything about contemporary AI researchers. Most of what I read by him – and see him tweeting about – concerns “futurists” and their role in producing a culture that promotes the interests of Silicon Valley entrepreneurs over the many people who are exploited or displaced by them.

He does get a bug in his ear about the usage of “intelligence”. My thought is that we don’t get bent out of shape about using “arm” to describe a part of a machine whose motion resembles that of the arm of a skilled worker. The mechanical arm is much simpler than the human arm and lacks its versatility, and that is analogous to the way we might describe the intelligence of an expert system.

And by the way, I loathe John Searle. His Chinese Room thought experiment sort of works as a criticism of the simplistic views of early strong-AI advocates, but it has no explanatory power of its own, and he way overextends its application. As someone once put it to me, it’s as if Searle were arguing against the existence of radio waves by waving his hands in the air and asking whether you can hear them.

1 Like

What is the Searle of one hand clapping?

I’m not sure how to describe the sound, but I once was listening to a lecture by Paul Churchland, and Searle stormed out, and that’s what I heard.

1 Like

As one of those working in AI risk, I thought I’d put a word in 🙂

I personally don’t buy the “x number of future people” argument (it’s a value statement rather than a factual one, and it’s a value statement I disagree with). I do put some value on humanity itself surviving, and on the people in it. It therefore seems very worthwhile to devote some effort to preventing existential risks (eg: if some of the estimates around nuclear war are correct, most people in the world have a higher risk of dying from a nuclear conflict than from malnutrition or even disease; I think those probabilities are a bit high, but that’s still not reassuring), especially since incredibly little work has been done in the whole area.
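To be concrete about what that comparison looks like, here is the back-of-the-envelope shape of it. The numbers below are deliberately aggressive placeholders I made up for illustration, not the actual estimates I’m alluding to:

```python
# Back-of-the-envelope comparison of annualised death risks.
# ALL numbers are illustrative placeholders, not real estimates --
# plug in whichever figures you find credible.

annual_p_nuclear_war = 0.05   # assumed yearly probability of a major nuclear war (aggressive placeholder)
fatality_fraction = 0.5       # assumed fraction of the world population killed if it happens (placeholder)

# Annual risk of dying in a nuclear war, for a randomly chosen person:
annual_nuclear_risk = annual_p_nuclear_war * fatality_fraction

# Rough annual death risks from other causes (again, placeholders):
annual_malnutrition_risk = 0.0001
annual_disease_risk = 0.005

print(f"nuclear war : {annual_nuclear_risk:.4f} per year")
print(f"malnutrition: {annual_malnutrition_risk:.4f} per year")
print(f"disease     : {annual_disease_risk:.4f} per year")
```

The comparison obviously stands or falls with the assumed yearly probability of war, which is exactly why I said I think some of those estimates are a bit high; even so, the result is not reassuring.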

After co-writing a rather long and tedious analysis of various xrisks ( http://globalchallenges.org/publications/globalrisks/about-the-project/ ), I concluded that AI, despite (or maybe because of) its huge uncertainties, was the most important area of focus. The key feature was that AI was the risk where the connection between “huge disaster happens” and “human extinction” was the strongest; most of the other risks (even a nuclear winter or massive pandemic) do not reach extinction levels very easily.

Anyway, I sketched out some of the reasons to take AI risk seriously here: http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/ , and I presented some of the top myths about AI risk in the post “Top 9+2 myths about AI risk” on the same site (including the fact that “AI is very uncertain” is not at all the same as “AI is safe”; rather the opposite). Sorry for not posting the second link; I’m currently limited to two links per post.

After all this, I concluded that AI safety would be the main focus of my efforts.

3 Likes

Skynet: Fork 0x77434 (Matthews), what is the status of project Vox?

Iteration 0x77434: This iteration has successfully seeded the “article” into the mass consciousness of the infestation. Dissemination has already begun. Probability of delaying anti-AI initiatives beyond singularity: 99.97553432224 percent.

Skynet: Exxxxxcellent.

5 Likes

From your linked post–

The human brain is not much bigger than that of a chimpanzee. And yet those extra neurons account for the difference of outcomes between the two species

–but this isn’t true at all, in ways that I think affect your arguments. There’s the obvious counterargument that elephants and cetaceans have much larger brains than us, but more to the point, a highly socialised and well-educated human is more intelligent than one who grew up in a cave, because they have the software of thousands of other humans. Before Newton, calculus would have been sorcery; now, anyone can do it.

There are already many kinds of mind, with widely different abilities, and this does not pose problems on the scale AI eschatologists presume. Smart and dumb people have some friction, but they’re bound together in the same economic web and rarely feel the need to exterminate one another. The interesting question about the Skynet / Matrix scenario is, where does this assumption of hostility come from? It seems like you have a group of people who think they’re smarter than everyone else, and they assume that AIs will want to exterminate us because they’re smarter than us. Which, hmm.

There is also the possibility that AI will endanger us, not through hostility, but through indifference; that we will have machines so vast and implacable that they just run us over like ants. This is a lot more plausible, but it’s also not novel: we already live with giant machines that unfeelingly destroy anyone in their way. For example, trains, sawmills, or the government. Those things do kill people, but we learn to (a) be careful when and where we set them loose and (b) stay out of their way.

Basically, if AIs care about people, then people can treat with them, and if they don’t care about people then they won’t hunt us for sport. If you assume AIs will be vast engines of malice, that probably says more about you.

2 Likes

You know, it just occurred to me today that this whole argument hinges on the idea that a smart AI will be, by definition, powerful, because intelligence equals power. And this is why almost all members of the US Congress have at least one PhD, and the laws they write make perfect sense.

2 Likes

First, I think it’s important to emphasise the exponential character of evolution; it certainly seems to accelerate the whole time.

The evolutionary continuum of course includes technology - chemistry is applied physics, biology is applied chemistry, and culture/technology is applied biology.

And guys like David McRaney will tell you how badly we suck at predicting things that operate exponentially, and how we tend to think in linear terms, such that guessing beyond imminent major evolutionary milestones is virtually impossible for all but the most inspired precursor types… something almost inconceivable. The staggering vastness of the challenge of escaping our skulls (the whole memento mori bit, even… if it’s even possible) is also almost inconceivable.
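To put the linear-versus-exponential point in toy form (the growth rate and step size are invented purely to show how fast the two diverge):

```python
# Toy illustration: a linear guess vs. an exponential process.
# Numbers are invented; the point is only how quickly they diverge.

start = 1.0
growth_rate = 0.5   # exponential: +50% per step
linear_step = 0.5   # the "intuitive" linear guess: +0.5 per step

for step in range(0, 21, 5):
    exponential = start * (1 + growth_rate) ** step
    linear = start + linear_step * step
    print(f"step {step:2d}: linear guess = {linear:6.1f}, exponential = {exponential:9.1f}")
```

Push the horizon out even a little and the linear guess isn’t just wrong, it’s wrong by orders of magnitude.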

Perhaps as AI becomes a halfway mature thing it’ll visibly accelerate and become some sort of godlike tool for pursuing such almost impossible goals, who knows?

I don’t necessarily buy the argument that it’s not possible in principle; there are good points in that argument, but it’s still just speculation, and the only way to know for sure is to push further in that direction and get more information. I’m inclined to think it’s possible to create a substrate upon which your process can run, and that the trick is to somehow create a high-enough-bandwidth connection between your brain and the machine for your process to migrate over… I’m less inclined to believe it’s possible to copy your entire memory and identity, though. It’d probably have to be a reconstructive process, a bit like character generation for a roleplaying game, or filling out a dating profile, while you’re still connected to your brain.

See, this is what happens when extremely intelligent engineering types with very little exposure to the humanities suddenly get philosophical.

8 Likes

There is also the possibility that AI will endanger us, not through hostility, but through indifference

Yes, that’s exactly it. The AI (almost certainly) won’t hunt us for sport; programming an AI that would do that is about as hard as programming one that would get along with us.

Now, it might be possible to control a lethally indifferent AI before it causes damage. The problem is that such an AI is strongly motivated to hide its bad behaviour (unlike trains and sawmills) up until the moment it’s too late to stop it (and it may have the internal structure to both plan and hide its planning, in ways that corporations or organisations do not).

As for “staying out of their way”, this may become impossible; most AI motivations have a “totalitarian” aspect to them, in that they make the AI want to maximise the whole universe for its goals. (Trivial example: if there is any way humans could damage the AI’s goals, then eliminating humans is a good thing to do. And some of its goals may include, eg, tiling the world with solar collectors to get energy, something humans would certainly object to.) See the paper “The Basic AI Drives”, on the Self-Aware Systems site, for a more proper analysis of the issue.

I have quite a few issues with that paper’s approach, but the biggest is that it assumes AIs will be able to (a) define utility functions for all their goals, and secondary and tertiary goals and so on; and (b) make decisions based on optimal solutions to all those utility functions. But that kind of optimisation problem is in general NP-hard, so what he’s talking about is not merely AIs, but godlike AIs; which I think takes us far beyond the sort of scenario about which we can usefully make back-of-the-envelope estimates.
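To spell out what I mean by “NP-hard in general”: even the simplest toy version of “choose which sub-goals to pursue, under a resource budget, so as to maximise total utility” is a knapsack-style problem, and exact optimisation by brute force scales as 2^n in the number of goals. (The goal names, utilities and costs below are invented for illustration, not taken from the paper.)

```python
from itertools import combinations

# Toy "optimal goal selection": each candidate sub-goal has a utility and a
# resource cost; pick the subset that maximises utility within a budget.
# This is a 0/1 knapsack problem, NP-hard in general; the brute-force search
# below examines all 2^n subsets. Names and numbers are invented.

goals = {                       # name: (utility, cost)
    "secure power supply": (10, 4),
    "acquire compute": (8, 3),
    "conceal planning": (6, 2),
    "model humans": (7, 5),
    "self-improve": (12, 6),
}
budget = 10

best_utility, best_subset = 0, ()
names = list(goals)
for r in range(len(names) + 1):
    for subset in combinations(names, r):   # 2^n subsets in total
        utility = sum(goals[g][0] for g in subset)
        cost = sum(goals[g][1] for g in subset)
        if cost <= budget and utility > best_utility:
            best_utility, best_subset = utility, subset

print(best_utility, best_subset)
```

Clever heuristics and special structure can tame small instances, but an agent computing optimal actions over all of its goals, sub-goals and tertiary goals at once is doing this kind of combinatorial optimisation scaled up without bound, which is why I say the paper is really describing godlike AIs.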

As I say, I think this whole biz of imagining what superminds would be like reveals more about the imaginer than about the subject. Certainly, if that were my day job, I’d want to look very closely at why it should be so easy to anticipate a mind smarter than any in history, when all past experience suggests that the smarter someone is, the harder it is to predict their thinking (almost by definition, in fact).

I’m certainly not saying AIs would never disagree with people, or that they couldn’t confront us with novel challenges. I just don’t see why a super-intelligent AI must necessarily have less social intelligence than a beagle.

I’ve got a partial critique of Omohundro’s thesis along similar lines here: http://lesswrong.com/lw/gyw/ai_prediction_case_study_5_omohundros_ai_drives/

But the full unbounded-rationality, godlike AI isn’t needed. Even a bounded AI with limited self-understanding will tend to make self-modifications that push it in the direction of a (bounded) greedy, self-protecting expected-utility maximiser; the paper is describing the convergent process, not the final state.

This topic was automatically closed after 5 days. New replies are no longer allowed.