Resisting Reduction Manifesto: against the Singularity, for a "culture of flourishing"


He’s fifty years too late to invent Flower Power.


I must admit to being distracted by the phrase “irreducible complexities”. It brings back unpleasant memories of studying Creationist bullshit.


I made a state engineer purple with rage at a public review meeting a couple of years ago when I kept asking about the failure analysis of a large public rainwater reservoir. “It’s designed not to fail under the specified conditions!” he shouted. Repeating myself for about the third time, I replied “but what happens when it does fail?”. The state had literally never done that analysis at all… even though all things fail eventually, and there was at least one house downslope of that tank.

I have to give the nutter credit for cojones, though: he was older and smaller than me and was totally ready to challenge me to a fistfight.



The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.


Me too, and it is not a distraction; it’s the same argument and fails for the same reasons. “Reductionism” doesn’t mean that the world, at the level we normally see it, follows simple rules. It means that there exists an underlying layer that is orderly, and that the apparent complexity is that order applied many, many times, leading to non-obvious behavior.

The underlying claim seems to be that either a computer cannot come to understand the behavior of complex systems the way humans can (otherwise the idea of designing machines to participate in such systems to achieve human-driven goals makes no sense, because we too could not predict what behaviors we’d want them to have), or that it is not possible to build a computer that is broadly and substantially better than humans at the thing we call intelligence.

The former would imply either that the authors don’t know what a computer is (in the Church-Turing sense), or that they think the human mind has access to some kind of oracle (again in the Turing machine sense). If the latter, what function does the oracle compute? What physical mechanism enables it, and what prevents us from implementing that mechanism in other hardware (even if that hardware turns out to be cultured brain tissue)? I would love to know whether the fundamental laws of physics are computable; there’d be some Nobel prizes in that for sure.

The latter would imply that the human mind is literally the summum bonum of intelligence, that no possible hardware could exceed it. Evolution hit on exactly the global maximum, and it turns out to be three pounds of meat with circuit elements that operate at ~100Hz and propagate signals at around 120 m/s. Yes sir, even if you built a Matrioshka brain (a computer surrounding a star and powered by the entire solar output) with light-speed signalling between circuit elements firing at GHz or higher frequencies, it’ll never understand and predict complex systems better than a human can. No hubris in that thought, none at all.

That said, I agree with many of the authors’ apparent goals. Historical attempts to reduce surface-level phenomena in society and in people to simple rules work terribly, because they are based on bad models. Focusing on building technological and social systems that support human (and humane) goals is important, possibly the most important thing. It just has nothing to do with reductionism.


Indeed, which makes me wonder about the tone of the rest of the piece. The author seems to think that the Singularity is all about maximizing economic output or something. How many Singularitarians has he met?

He’s right that it is good to be humble in the face of the unknown (How many terrible mistakes has humanity made by assuming we knew something that we really didn’t know?) but so much of the unknown has become known. Why should we believe that we have to stop now?

I especially hate this sentence:

We need to embrace the unknowability—the irreducibility—of the real world that artists, biologists and those who work in the messy world of liberal arts and humanities are familiar with.

Fuck you, man. I’ll gladly admit that I don’t know things and that I have to get on with not knowing them, but if you’re going to tell me that I can’t know things because no one can, you’re going to have to show your work, and pointing at the arts and humanities is not going to cut it.


Arguably, it does have something to do with reductionism; but the relationship is not helpful in this case.

Claims that emulate the style of reductive models (either by relatively innocent cargo-cult mistakes, or more or less malicious sophistry) are extremely popular: “as simple as possible” always makes your job easier than “but no simpler” does; plus a suitably blinkered enthusiasm for your precious model lets you screen out unpleasant empirical phenomena (consult your friendly local economist for advice if you require assistance).

Unfortunately, quasi-mystical appeals to things being impossible to reduce are very poorly positioned to push back against bad models. If you are convincing enough, you might manage to dissuade some people from bad models by convincing them that all models are, like, inherently problematic, man; but anyone you fail to persuade is no better equipped to distinguish a lousy model from a good one. And (much to the displeasure of empiricists looking for pure first principles) even some relatively ugly hacks, if you keep their limits in mind, have proven to be stubbornly useful, which provides an ongoing supply of evidence in favor of reductive modeling. Plus, as @AnthonyC mentioned, while the capabilities of the human brain are impressive, unless you (also more or less mystically) assign it some flavor of oracular capability, it too must be engaged in fairly effective simplified modeling of the world in order to do what we know it can. This shouldn’t diminish our respect for the heuristics we employ without even thinking about them (most are pretty solid, and quite good given the haphazard development process and resource limitations); but it’s another giant pile of examples of the fact that the right simplifying assumptions can get useful results out of things that would be computationally intractable without a little cheating.

If you reject ‘reductionism’, you are essentially ceding ground to whatever sophist has the slickest slide deck for his pet model, which won’t end well. To have any hope of pushing him back, you need to get your hands dirty with the question of good models, lousy models, and models that must be carefully confined to specific domains lest they turn to nonsense.


What would you say is the probability that the first general AI will be owned by a corporation? I’d guess at least 90%.

And to those who are focusing on the reductionism part, let’s do a thought experiment. The first generation of general AIs are coming online. They all use similar algorithms, but the objective functions, representing “value” to the machine, have some slight differences. Lo and behold, when asked to solve difficult human problems, the machines all disagree with each other on the correct course of action. Seemingly tiny differences in how values are represented end up producing “reasonable” conclusions that are vastly different. Would this result be surprising? (To a humanist, the answer is: of course not; that’s why the problems are difficult.)
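The thought experiment is easy to sketch numerically. Here is a toy illustration (the policies, value dimensions, and weights are all made up for the sake of the example, not drawn from any real system): two “agents” score the same candidate policies with objective functions whose weights differ by only a few percent, yet they pick different policies.

```python
# Hypothetical policies, each scored on two value dimensions:
# (prosperity, equality). Scores are invented for illustration.
policies = {
    "growth": (1.0, 0.2),
    "redistribute": (0.3, 1.0),
}

def best_policy(weights):
    """Return the policy maximizing the weighted sum of value scores."""
    def score(name):
        return sum(w * v for w, v in zip(weights, policies[name]))
    return max(policies, key=score)

# Two agents whose value weights differ only slightly.
agent_1 = (0.55, 0.45)   # leans a bit toward prosperity
agent_2 = (0.50, 0.50)   # weights the two values equally

print(best_policy(agent_1))  # the agents disagree on the right action
print(best_policy(agent_2))
```

With these numbers, agent 1 picks "growth" (0.55 + 0.09 = 0.64 beats 0.615) while agent 2 picks "redistribute" (0.65 beats 0.60): a five-point shift in weights flips the conclusion entirely, which is the point of the thought experiment.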

I think the other major possibility is ownership by a military organization, with combined probability well over 90%.

Yes, no argument from me regarding the difficulty of AI alignment and representing value mathematically. But unless the philosophers and humanities scholars get involved to help actually solve the problem instead of pointing out its difficulty, they’re not going to have much effect on the quality of the AIs that eventually get built.


That’s very true. But, it takes two to tango! This manifesto lays out some perspective changes that are needed in order for those types of people to contribute in such a way that their input would actually be heard.

The problem, as I see it, is that AI is being birthed in an environment in which there is a strong financial incentive to view technology as neutral with respect to values. It makes it so much easier to defend “disruption.” Well, with an autonomous AI, this view becomes untenable, by definition. Belief in the Singularity is what you get when you just sweep this problem under the rug.

The arguments offered by technologists for why AI will necessarily be made friendly, and safe, will (of course) always be technical. But the humanities scholar, as you put it, sees someone like Kurzweil as rationalizing with these arguments, due to fear of mortality, or the need for belief in a transcendental purpose. The technologist concludes that only other technologists can understand the issue, and the humanist concludes that humanity has put a lot of power in a dangerous place and is likely fucked. It’s not a productive place to be, so there is a real need to examine where we stand before getting to work in search of solutions.


You’ve described my career in computer science in 14 words!


But… does that have anything to do with reductionism? The problem you describe is a serious one, but since it’s a question of the way things should be, rather than the way things are, it appears to be orthogonal to the issue of reductionism. Values, I believe, cannot be “proven”. I value life over death and company over solitude, but I cannot prove the truth of my values by starting from first principles.

That aside, I think I owe you an apology; when discussing “Singularitarians”, I used that term to refer to the philosophers and futurists who are discussing the Singularity and trying to prepare us for it, while you appear to be using the term to refer to the corporations and other powerful groups who will actually be building the super-intelligent AIs. Two very different definitions of “Singularitarian”, but your definition is a valid one, if I’m reading you right.

The philosophical gap between the philosophers (who are deeply worried about unfriendly AI) and the corporations (many of whom show no great concern over unfriendly AI) is disconcerting.


This topic was automatically closed after 5 days. New replies are no longer allowed.