Effective Altruism bogged down in semantic games


#1

[Read the post]


#2

Why wouldn’t some sort of present value factor be applied to such a calculation?
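A worked sketch of the point (my own illustrative numbers and function names, not anything from the article): under a constant annual discount rate, even an astronomically large far-future benefit shrinks toward zero in present-value terms.

```python
# Illustrative sketch of present-value discounting. The rate, horizon,
# and the "benefit" figure below are hypothetical, chosen only to show
# how the arithmetic behaves.
def present_value(future_value: float, annual_rate: float, years: float) -> float:
    """Discount a future benefit back to today at a constant annual rate."""
    return future_value / (1.0 + annual_rate) ** years

# Even a benefit of 10^54 (a figure of the sort these longtermist
# calculations throw around), discounted at a modest 3% per year over
# 10,000 years, comes out vanishingly small today.
pv = present_value(future_value=1e54, annual_rate=0.03, years=10_000)
print(f"{pv:.3e}")
```

Whether any discount rate should apply to future lives at all is exactly the disputed premise, but the calculation shows why leaving it out does all the work.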


#3

The base assumptions in the calculation are flawed. Makes me wonder if they’re all anti-contraception too. If the elimination of suffering is the ultimate altruism, then instead of going for the largest human population possible, wouldn’t it make sense to prepare humanity for voluntary self-extinction?


#4

I wonder if these folks have read anything about the French Revolution. I kinda doubt it.


#5

It would have been nice if, when the article mentioned GiveWell, it had also mentioned that GiveWell rated donations to the Machine Intelligence Research Institute (MIRI) as effectively worse for MIRI’s own project than not donating at all.

There are at least a few rays of sunshine, even if a surprising number of rich tech people are trapped in an adolescent sci-fi worldview and getting pulled into a charity that embodies Pascal’s Mugging:


Oddly, Nick Bostrom came up with Pascal’s Mugging as director of the Future of Humanity Institute, but he seems to have forgotten his own criticisms of this style of reasoning about the threat of AI, since the FHI is now drinking the Kool-Aid.


#6

It was about time Wimpy started giving back to the world.

“I would gladly save you on an entirely theoretical but mathematically sound (if you accept my premises) Tuesday for a donation today.”


#7

Contraceptives? Don’t get me started. Masturbation is genocide!


#8

Obligatory:


#9

They should read Dickens (from Hard Times):

‘You don’t know,’ said Sissy, half crying, ‘what a stupid girl I am. All through school hours I make mistakes. Mr. and Mrs. M’Choakumchild call me up, over and over again, regularly to make mistakes. I can’t help them. They seem to come natural to me.’

‘Mr. and Mrs. M’Choakumchild never make any mistakes themselves, I suppose, Sissy?’

‘O no!’ she eagerly returned. ‘They know everything.’

‘Tell me some of your mistakes.’

‘I am almost ashamed,’ said Sissy, with reluctance. ‘But to-day, for instance, Mr. M’Choakumchild was explaining to us about Natural Prosperity.’

‘National, I think it must have been,’ observed Louisa.

‘Yes, it was.—But isn’t it the same?’ she timidly asked.

‘You had better say, National, as he said so,’ returned Louisa, with her dry reserve.

‘National Prosperity. And he said, Now, this schoolroom is a Nation. And in this nation, there are fifty millions of money. Isn’t this a prosperous nation? Girl number twenty, isn’t this a prosperous nation, and a’n’t you in a thriving state?’

‘What did you say?’ asked Louisa.

‘Miss Louisa, I said I didn’t know. I thought I couldn’t know whether it was a prosperous nation or not, and whether I was in a thriving state or not, unless I knew who had got the money, and whether any of it was mine. But that had nothing to do with it. It was not in the figures at all,’ said Sissy, wiping her eyes.

‘That was a great mistake of yours,’ observed Louisa.

‘Yes, Miss Louisa, I know it was, now. Then Mr. M’Choakumchild said he would try me again. And he said, This schoolroom is an immense town, and in it there are a million of inhabitants, and only five-and-twenty are starved to death in the streets, in the course of a year. What is your remark on that proportion? And my remark was—for I couldn’t think of a better one—that I thought it must be just as hard upon those who were starved, whether the others were a million, or a million million. And that was wrong, too.’

‘Of course it was.’

‘Then Mr. M’Choakumchild said he would try me once more. And he said, Here are the stutterings—’

‘Statistics,’ said Louisa.

‘Yes, Miss Louisa—they always remind me of stutterings, and that’s another of my mistakes—of accidents upon the sea. And I find (Mr. M’Choakumchild said) that in a given time a hundred thousand persons went to sea on long voyages, and only five hundred of them were drowned or burnt to death. What is the percentage? And I said, Miss;’ here Sissy fairly sobbed as confessing with extreme contrition to her greatest error; ‘I said it was nothing.’

‘Nothing, Sissy?’

‘Nothing, Miss—to the relations and friends of the people who were killed. I shall never learn,’ said Sissy.


#10

Ten Things You Must Fail To Understand If You Want To Be A Transhumanist For Long


#11

Translation: A bunch of white men decide that working on the things they want to work on (i.e. AI) is more important than helping non-white, non-male people. Film at 11.


#12

As soon as I started reading this, I knew it was Dale Carrico! I read through it thinking “OK… but what does any of this have to do with transhumanism?”

I appreciate that he goes to some extraordinary lengths to make his case, over and over, against what he perceives to be transhumanism. But his arguments tend to hinge upon a few positions I do not agree with:

  • The terms “life” and “intelligence” are rigorously and unambiguously defined
  • Anybody who disagrees that they are so defined is automatically engaged in wishful thinking
  • Those who strive to do new things are inherently reactionary

From here Carrico goes on to explain that his goal is basically just to ridicule what he sees as an abhorrent ideology. Not that there is anything wrong with that, as such, but this makes it too much of a personal crusade for me.

Granted, the people in this topic’s article are probably those he is addressing in his writing. But it reminds me of the countless partisan essays complaining that “libertarians are selfish and short-sighted - and I just proved it by defining them as such”, when I grew up as a very different kind of libertarian (socialist anarchist). Pejoration of labels is, IMO, disingenuous: it tars people with an exceedingly broad brush and uses mockery as a substitute for education.

ETA: But he did apparently coin the term “disasterbator”, so he deserves some props for that!


#13

I’d gotten the impression that he was arguing rather that life and intelligence are not rigorously and unambiguously defined, and perhaps cannot be; that he’s arguing against reductive definitions of intelligence and of mind that assume that strong AI, “uploading” minds, and so on, are really trivial engineering problems that will inevitably be solved.

And he’s also mostly arguing against what seems to be developing into the state religion of Silicon Valley, so I most often come across him countering PR campaigns with an implicit libertarian message.


#14

Maybe there’s additional context not present in this list, but #9 (“What we mean by life happens in biological bodies, what we mean by intelligence happens in biological brains in society, what we mean by progress happens in historical struggles among the diversity of living intelligent beings who share the present – and to say otherwise is not to be interesting but to be idiotic.”) certainly reads like he’s saying that life and consciousness are unique and special properties of meat that can never ever be replicated in non-meat. That was pseudo-religious nonsense when John Searle wrote up the Chinese Room thought experiment, and it’s pseudo-religious nonsense now.

Which is not to say that I think strong AI is going to be coming along Any Day Now, or that I endorse these silly people who mistake their own sci-fi for indisputable prophecy. A good friend of mine is an AI professor, and I can attest that modern AI research is under no delusions about the imminence of strong AI; they’re mostly into what you might call expert systems, software that can reason competently within specific narrow domains, like IBM’s Watson.

In the comments, Carrico does say that he’s open to the possibility that strong AI could exist in the future, but he also says several things that suggest that (1) he (incorrectly) believes that transhumanist forum posters represent the views of the AI research community, and (2) he has a pedantically narrow definition of intelligence and uncharitably applies it to other people, so he gets offended any time someone uses the word “intelligent” to describe something that can’t play the violin or cry at Pixar movies.


#15

“The number of future humans who will never exist if…”
They must be anti-contraception, as contraceptives would be worse than genocide by their calculations. (All those people who don’t get born fail to have offspring, etc., you know. Over 50 million years, they really start to add up.)
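The “they really start to add up” arithmetic can be sketched in a few lines (a back-of-envelope with my own assumed fertility rate and generation length, not figures from the thread): one never-born person forecloses a lineage whose size compounds geometrically across generations.

```python
# Hypothetical compounding of foregone descendants. The fertility rate
# and generation length are assumptions for illustration only.
def descendants_foregone(generations: int, children_per_person: float = 2.2) -> float:
    """Total lineage size across n generations at a constant fertility rate.

    Each person contributes children_per_person / 2 people to the next
    generation (the other half is attributed to the other parent).
    """
    total = 0.0
    cohort = 1.0
    for _ in range(generations):
        cohort *= children_per_person / 2  # next-generation cohort size
        total += cohort
    return total

# At ~25 years per generation, 1,000 years is ~40 generations; over the
# 50-million-year horizons these calculations invoke, the total explodes.
print(f"{descendants_foregone(40):,.0f}")
```

The same geometric explosion is what makes “lives prevented by contraception” and “lives saved by AI research” commensurable in these calculations, which is the commenter’s reductio.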


#16

I saw a really interesting thing once arguing that pre-emptively preventing a person from coming into existence cannot be morally held to be murder, or in fact to have any moral weight at all. This was in the context of a discussion about sci-fi time travel (something like “if you go back and kill Hitler, have you also killed all the people born of parents who were brought together by the war?”), but birth control/abortion also factored into it heavily.

I wish I could find it again. I definitely think it applies here. In any case, I’m going to continue being both a humanist and a transhumanist–in the “belief in the infinite potential of humanity” sense, not in the “I wrote a sweet sci-fi story that is definitely going to come true so you should do what I say” sense.


#17

This criticism misrepresents people concerned with existential risk as saying that the issue with superintelligence is a tiny risk of a massively negative outcome, while the actual belief of experts, and of those of us who think we’re not paying enough attention to this, is that it’s a medium risk (several percent or more over the next 100 years) of a negative outcome.

If someone made a comparison between global poverty and existential risk and called the former a “rounding error”, that would be unfortunate, since it makes poverty sound like a trivial problem. Poverty is a horrible issue, and any sane allocation of resources would no doubt have eliminated it at this point in history. I don’t think anyone attending an Effective Altruism event would look down on anyone e.g. donating to top-rated GiveWell charities to distribute cash to very poor individuals in Uganda or prevent deaths from malaria.

It’s possible to believe that and still donate or otherwise support existential risk research, just as it’s possible to feel bad for oil-drenched birds in an environmental mess-up but go on to compare different causes to find that one cares even more for starving people than oily birds, and act accordingly.

Effective altruism is all about giving people frameworks and information to evaluate such choices in a principled way, which is not easy, but important.

Two replies that go into this at more length:
http://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
http://www.effective-altruism.com/ea/m4/a_response_to_matthews_on_ai_risk/


#18

That’s a good outline of what I was getting at. I was thinking I’d re-read some of his other pages, to see if his position was as I remembered it, before posting more about this. But yes, he seems to be quite reductive in his own definitions, the distinction being that he treats his as the natural reductions.

Many AI researchers do not agree that there is anything “strong” or desirable in computers mimicking people. And some cognitive scientists point out that humans might simply like to assume that they are sentient, when they really don’t know what they are in any direct sense. The discussion opens up considerably when we can posit various ways these things can be defined, and the corresponding implications, but as in many areas there is the risk of polarizing between rigid fundamentalism and pie-in-the-sky nonsense. Reality mostly seems to lie somewhere in between.


#19

Aside from the casual assumption that the hypothetical zillions of people won’t just be living in the hellish squalor camps of Tau Ceti V, it does seem a bit weird that they don’t seem to be time-discounting.

You…might…have difficulty securing money now on the theory that “eh, all money currently in circulation is pretty much irrelevant in the face of even a low probability of a superluminal post-scarcity society.”


#20

Given that a successful attempt to prevent someone from coming into existence precludes that person ever being a moral agent, it seems reasonable. I’d certainly be much more morally concerned about causing someone to come into existence, since the creation of moral agents isn’t…unambiguously…to their benefit; and must always be non-consensual(since if you could ask, you wouldn’t be creating them).