Why wouldn’t some sort of present value factor be applied to such a calculation?
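For the curious, here’s a back-of-the-envelope sketch of what a present value factor would do to these numbers. The discount rate, horizon, and future-lives figure are all invented for illustration:

```python
# Toy illustration of standard present-value discounting: PV = FV / (1 + r)^t.
# Every number below is made up purely for the sake of the example.
def present_value(future_value: float, rate: float, years: float) -> float:
    return future_value / (1.0 + rate) ** years

future_lives = 1e54     # the sort of astronomical figure these arguments invoke
discount_rate = 0.03    # a conventional-ish 3% annual social discount rate
horizon_years = 5000    # "far future", chosen arbitrarily

print(present_value(future_lives, discount_rate, horizon_years))
# (1.03)^5000 is roughly 10^64, so 10^54 "future lives" discount
# to about 1e-10 -- effectively zero at any positive rate.
```

Pick any positive rate at all and the astronomical payoff evaporates, which is presumably why the calculation omits one.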
The base assumptions in the calculation are flawed. Makes me wonder if they’re all anti-contraception too. If the elimination of suffering is the ultimate altruism, then instead of going for the largest human population possible, wouldn’t it make sense to prepare humanity for voluntary self-extinction?
I wonder if these folks have read anything about the French Revolution. I kinda doubt it.
It would have been nice if, when the article mentioned GiveWell, it had also mentioned that GiveWell rated donations to the Machine Intelligence Research Institute (MIRI) as effectively worse for MIRI’s own project than not donating at all.
There are at least a few rays of sunshine, even if a surprising number of rich tech people are trapped in an adolescent sci-fi worldview and getting pulled into a charity that embodies Pascal’s Mugging:
Oddly, Nick Bostrom came up with Pascal’s Mugging as director of the Future of Humanity Institute, but he seems to have forgotten his own criticisms of the threat of AI, since the FHI is now drinking the Kool-Aid.
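For anyone who hasn’t run into it: Pascal’s Mugging is the observation that naive expected-value reasoning can be hijacked by anyone willing to name a big enough number. A toy sketch, with obviously made-up credences and payoffs:

```python
# Toy model of Pascal's Mugging: if claimed payoffs can grow faster than
# your skepticism, a naive expected-value maximizer always pays up.
def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

wallet = 10.0  # what the mugger demands

for exponent in (10, 100, 300):
    payoff = 10.0 ** exponent          # "I'll grant you 10^n utils!"
    credence = 1.0 / exponent ** 3     # skepticism grows, but only polynomially
    ev = expected_value(credence, payoff)
    print(f"claimed 10^{exponent} utils: EV = {ev:.3g} vs. keeping {wallet}")
# Every line says "pay the mugger", and the bigger the claim the better
# the deal looks -- which is the reductio.
```

The usual escape hatches (bounded utility, or credences that fall at least as fast as the claimed payoff grows) are exactly the moves these calculations decline to make.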
It was about time Wimpy started giving back to the world.
“I would gladly save you on an entirely theoretical but mathematically sound (if you accept my premises) Tuesday for a donation today.”
Contraceptives? Don’t get me started. Masturbation is genocide!
Obligatory:
They should read Dickens (from Hard Times):
“You don’t know,” said Sissy, half crying, “what a stupid girl I am. All through school hours I make mistakes. Mr. and Mrs. M’Choakumchild call me up, over and over again, regularly to make mistakes. I can’t help them. They seem to come natural to me.”
“Mr. and Mrs. M’Choakumchild never make any mistakes themselves, I suppose, Sissy?”
“O no!” she eagerly returned. “They know everything.”
“Tell me some of your mistakes.”
“I am almost ashamed,” said Sissy, with reluctance. “But to-day, for instance, Mr. M’Choakumchild was explaining to us about Natural Prosperity.”
“National, I think it must have been,” observed Louisa.
“Yes, it was.--But isn’t it the same?” she timidly asked.
“You had better say, National, as he said so,” returned Louisa, with her dry reserve.
“National Prosperity. And he said, Now, this schoolroom is a Nation. And in this nation, there are fifty millions of money. Isn’t this a prosperous nation? Girl number twenty, isn’t this a prosperous nation, and a’n’t you in a thriving state?”
“What did you say?” asked Louisa.
“Miss Louisa, I said I didn’t know. I thought I couldn’t know whether it was a prosperous nation or not, and whether I was in a thriving state or not, unless I knew who had got the money, and whether any of it was mine. But that had nothing to do with it. It was not in the figures at all,” said Sissy, wiping her eyes.
“That was a great mistake of yours,” observed Louisa.
“Yes, Miss Louisa, I know it was, now. Then Mr. M’Choakumchild said he would try me again. And he said, This schoolroom is an immense town, and in it there are a million of inhabitants, and only five-and-twenty are starved to death in the streets, in the course of a year. What is your remark on that proportion? And my remark was--for I couldn’t think of a better one--that I thought it must be just as hard upon those who were starved, whether the others were a million, or a million million. And that was wrong, too.”
“Of course it was.”
“Then Mr. M’Choakumchild said he would try me once more. And he said, Here are the stutterings--”
“Statistics,” said Louisa.
“Yes, Miss Louisa--they always remind me of stutterings, and that’s another of my mistakes--of accidents upon the sea. And I find (Mr. M’Choakumchild said) that in a given time a hundred thousand persons went to sea on long voyages, and only five hundred of them were drowned or burnt to death. What is the percentage? And I said, Miss;” here Sissy fairly sobbed as confessing with extreme contrition to her greatest error; “I said it was nothing.”
“Nothing, Sissy?”
“Nothing, Miss--to the relations and friends of the people who were killed. I shall never learn,” said Sissy.
Translation: A bunch of white men decide that working on the things they want to work on (i.e. AI) is more important than helping non-white, non-male people. Film at 11.
As soon as I started reading this, I knew it was Dale Carrico! I read through it thinking “Ok… but what does any of this have to do with transhumanism?”
I appreciate that he goes to some extraordinary lengths to make his case, over and over, against what he perceives to be transhumanism. But his arguments tend to hinge upon a few positions I do not agree with:
- The terms “life” and “intelligence” are rigorously and unambiguously defined
- Anybody who disagrees that they are so defined is automatically engaged in wishful thinking
- Those who strive to do new things are inherently reactionary
From here Carrico goes on to explain that his goal is basically just to ridicule what he sees as an abhorrent ideology. Not that there is anything wrong with that, as such, but this makes it too much of a personal crusade for me.
Granted, the people in this topic’s article are probably those he is addressing in his writing. But it reminds me of the countless partisan essays complaining that “libertarians are selfish and short-sighted, and I just proved it by defining them thus”, when I grew up as a very different kind of libertarian (a socialist anarchist). Pejorating labels is, IMO, disingenuous: it tars people with an exceedingly broad brush and substitutes mockery for education.
ETA: But he did apparently coin the term “disasterbator”, so he deserves some props for that!
I’d gotten the impression that he was arguing rather that life and intelligence are not rigorously and unambiguously defined, and perhaps cannot be; that he’s arguing against reductive definitions of intelligence and of mind that assume that strong AI, “uploading” minds, and so on, are really trivial engineering problems that will inevitably be solved.
And he’s also mostly arguing against what seems to be developing into the state religion of Silicon Valley, so I most often come across him countering PR campaigns with an implicit libertarian message.
Maybe there’s additional context not present in this list, but #9 (“What we mean by life happens in biological bodies, what we mean by intelligence happens in biological brains in society, what we mean by progress happens in historical struggles among the diversity of living intelligent beings who share the present -- and to say otherwise is not to be interesting but to be idiotic.”) certainly reads like he’s saying that life and consciousness are unique and special properties of meat that can never, ever be replicated in non-meat. That was pseudo-religious nonsense when John Searle wrote up the Chinese Room thought experiment, and it’s pseudo-religious nonsense now.
Which is not to say that I think strong AI is going to be coming along Any Day Now, or that I endorse these silly people who mistake their own sci-fi for indisputable prophecy. A good friend of mine is an AI professor, and I can attest that modern AI research is under no delusions about the imminence of strong AI; they’re mostly into what you might call expert systems, software that can reason competently within specific narrow domains, like IBM’s Watson.
In the comments, Carrico does say that he’s open to the possibility that strong AI could exist in the future, but he also says several things that suggest that (1) he (incorrectly) believes that transhumanist forum posters represent the views of the AI research community, and (2) he has a pedantically narrow definition of intelligence and uncharitably applies it to other people, so he gets offended any time someone uses the word “intelligent” to describe something that can’t play the violin or cry at Pixar movies.
“The number of future humans who will never exist if…”
They must be anti-contraception, as contraceptives would be worse than genocide by their calculations. (All those people who don’t get born fail to have offspring, etc., you know. Over 50 million years, they really start to add up.)
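If you want to see how silly the compounding gets, here’s a toy run of the numbers; the generation length and growth factor are invented out of whole cloth:

```python
# Hypothetical compounding of never-born descendants over 50 million years.
# The 25-year generation and 1% per-generation growth are pure invention.
import math

generation_years = 25
generations = 50_000_000 // generation_years   # 2,000,000 generations
growth_per_generation = 1.01

# The count overflows a float, so work in log10 instead:
log10_descendants = generations * math.log10(growth_per_generation)
print(f"one prevented birth 'costs' ~10^{log10_descendants:.0f} descendants")
# ~10^8643 -- vastly more "victims" than atoms in the observable
# universe, from skipping a single birth. Such is the arithmetic.
```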
I saw a really interesting thing once arguing that pre-emptively preventing a person from coming into existence cannot be morally held to be murder, or in fact to have any moral weight at all. This was in the context of a discussion about sci-fi time travel (something like “if you go back and kill Hitler, have you also killed all the people born of parents who were brought together by the war?”), but birth control/abortion also factored into it heavily.
I wish I could find it again. I definitely think it applies here. In any case, I’m going to continue being both a humanist and a transhumanist -- in the “belief in the infinite potential of humanity” sense, not in the “I wrote a sweet sci-fi story that is definitely going to come true so you should do what I say” sense.
This criticism misrepresents people concerned with existential risk as saying that the issue with superintelligence is a tiny risk of a massively negative outcome. The actual belief of experts, and of those of us who think we’re not paying enough attention to this, is that it’s a medium risk (several percent or more over the next 100 years) of a negative outcome.
If someone made a comparison between global poverty and existential risk and called the former a “rounding error”, that would be unfortunate, since it makes poverty sound like a trivial problem. Poverty is a horrible issue, and any sane allocation of resources would no doubt have eliminated it by this point in history. I don’t think anyone attending an Effective Altruism event would look down on anyone donating to top-rated GiveWell charities, e.g. to distribute cash to very poor individuals in Uganda or to prevent deaths from malaria.
It’s possible to believe that and still donate to or otherwise support existential risk research, just as it’s possible to feel bad for oil-drenched birds after an environmental mess-up, yet go on to compare different causes, find that one cares even more about starving people than about oily birds, and act accordingly.
Effective altruism is all about giving people frameworks and information to evaluate such choices in a principled way; that is not easy, but it is important.
Two replies that go into this at more length:
http://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
http://www.effective-altruism.com/ea/m4/a_response_to_matthews_on_ai_risk/
That’s a good outline of what I was getting at. I was thinking I’d re-read some of his other pages, to see if his position was as I remembered it, before posting more about this. But yes, he seems to be quite reductive in his own definitions, with the distinction that he takes his to be the natural reductions.
Many AI researchers do not agree that there is anything “strong” or desirable in computers mimicking people. And some cognitive scientists point out that humans might simply like to assume that they are sentient, when they really don’t know what they are in any direct sense. The discussion opens up considerably when we can posit various ways these terms could be defined, and the corresponding implications, but as in many areas there is a risk of polarizing between rigid fundamentalism and pie-in-the-sky nonsense. Most of reality seems to lie somewhere in between.
Aside from the casual assumption that the hypothetical zillions of people won’t just be living in the hellish squalor camps of Tau Ceti V, it does seem a bit weird that they don’t appear to be time-discounting.
You…might…have difficulty securing money-now on the theory that “eh, all money currently in circulation is pretty much irrelevant in the face of even a low probability of a superluminal post-scarcity society.”
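To put the lender’s side in numbers: a minimal sketch (rate, horizon, and promised payoff all invented) of the credence you’d need before “money-now for utopia-later” beats just investing the money:

```python
# Toy break-even credence for a "pay me now, post-scarcity later" pitch,
# measured against the lender's opportunity cost at a safe rate.
def breakeven_probability(amount_now: float, rate: float, years: float,
                          promised_payoff: float) -> float:
    # The deal beats investing only if p * payoff >= amount_now * (1 + r)^t.
    opportunity_cost = amount_now * (1.0 + rate) ** years
    return opportunity_cost / promised_payoff

# Invented numbers: $1000 today, 2% safe rate, payoff promised in 200 years.
p = breakeven_probability(1000.0, 0.02, 200, promised_payoff=1e12)
print(f"need p >= {p:.2e}")  # about 5e-8 with these numbers
# Looks like an easy sell, until you ask who enforces payment in 200 years
# and why the promised payoff, rather than the credence, gets to set the scale.
```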
Given that a successful attempt to prevent someone from coming into existence precludes that person ever being a moral agent, it seems reasonable. I’d certainly be much more morally concerned about causing someone to come into existence, since the creation of moral agents isn’t…unambiguously…to their benefit, and must always be non-consensual (since if you could ask, you wouldn’t be creating them).