The problem is: We’ll just become too smart and capitalism will make the tools of destruction too cheap.
If civilization doesn’t collapse (and, barring an asteroid, there’s no reason to think it would), a Make-Your-Own-Virus DNA kit will become an extremely cheap commodity, used in high schools and available on Amazon.
The person making the virus that kills humanity probably won’t even be a terrorist or psychopath. He might just be the guy who spells out his name in DNA (or does it for others as a vanity business), and it turns out to be the most lethal thing ever made.
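For the curious, here’s a toy sketch of what “spelling your name in DNA” could even mean; the two-bits-per-base scheme and the name “DAVE” are purely my own illustration, not how any real kit or vanity-sequencing outfit would work:

```python
# Toy illustration only: pack each ASCII byte of a name into four DNA bases (2 bits per base).
BASES = "ACGT"

def text_to_dna(text: str) -> str:
    """Encode ASCII text as a DNA string, high bits first, two bits per base."""
    out = []
    for byte in text.encode("ascii"):
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

print(text_to_dna("DAVE"))  # -> 'CACACAACCCCGCACC' (16 bases for 4 letters)
```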
Of course, there’s the other way capitalism can destroy humanity: sexbots and holodecks.
If they make them of a high enough quality, they would not be the sole province of people unlikely to procreate anyway. Their appeal would spill over into those who are actually capable of going on dates and reproducing with consenting partners. Humanity would be doomed in a cycle of endless wankery!
The way I see it, there’s no need for any kind of hyper-advanced AI in doom scenarios, just like some people managed nicely with ELIZA as a therapist (or so I understand). All that’s needed is a sufficiently seductive algorithm. Fiduciary Duty and, indeed, Late Stage Capitalism and so on. We’ll let the machines tell us what to do not because the machines are smart, but because we’ll think they’re smart.
As for genetically-engineered viruses, by the time it’s easy to make those, won’t it be just as easy to spin off the equivalent antiviruses?
A better and more comprehensive list of the ways that humanity could end is in the book “The End of the World: The Science and Ethics of Human Extinction” by John A. Leslie. Leslie breaks his list up into categories:
Risks well recognized (like nuclear, biological or chemical warfare, climate change, poisoning by pollutants, disease)
Risks often unrecognized
- Natural disasters (like volcanic eruptions, asteroid impacts, nearby supernovas, or things-we-know-not-what)
- Man-made disasters (unwillingness to rear children, nanotech, other computer disasters, an accidental laboratory Big Bang, a phase transition to a new true vacuum energy level, annihilation by extraterrestrials)
- Risks from Philosophy (like threats from religion, negative utilitarianism, prisoner’s dilemma)
I thought the risks from philosophy section was disturbing and interesting, because many of them amount to deciding not to have a human race anymore, and having that idea be infectious enough that people believe and follow it.
You may also want to re-familiarize yourself with the Carter catastrophe, or Doomsday argument, which says roughly that it is improbable that we are among the very first humans ever to live, and uses that to put probabilities on the future lifetime of the human race. There is a ton of work on expanding or refuting this argument.
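For anyone who wants the back-of-the-envelope version: under the argument’s key assumption (your birth rank is equally likely to fall anywhere among all humans ever born), a rough bound drops out of one division. The ~60 billion figure below is just a commonly quoted ballpark, not Leslie’s exact number.

```python
# Back-of-the-envelope Doomsday argument; numbers are illustrative.
# Key assumption: your birth rank is uniformly distributed among all humans ever born.
past_births = 60e9   # rough ballpark of humans born to date; published estimates vary
confidence = 0.95    # "with 95% probability you are not among the first 5% of all humans"

# If rank / total >= (1 - confidence) with probability `confidence`,
# then total <= past_births / (1 - confidence).
max_total = past_births / (1 - confidence)
max_future = max_total - past_births

print(f"With ~{confidence:.0%} confidence: at most ~{max_total:.1e} humans ever, "
      f"so at most ~{max_future:.1e} still to come.")
```

Most of the expanding and refuting is really a fight over whether that uniform-birth-rank assumption is legitimate in the first place.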
Almost none of these things can exterminate humanity. Make us damned uncomfortable, sure, but killing every last one would take some doing.
Even if nuclear war, for example, were likely in the Northern Hemisphere, how many nukes would hit New Zealand? They would probably get some nasty fallout and some climate change, but complete annihilation?
Poisoning by pollutants is certainly a possibility, here in America, because freedom. But is it likely to kill every single Swede? I hear those socialists stand in the way of such progress.
Not much we can do about asteroids and supernovas, and those are real possibilities. But I’ll bet you ten thousand dollars neither one ever happens.
I’ll take that bet…
…wait a minute…
I’ll just leave this here: the Large Hardon Collider.
HA! The planners know that, which is why that’s where the first salvo is headed. So, 1) New Zealand, 2) Georgia, USA, 3) Donald Trump, and 4) “It’s a Small World”, Walt Disney World, Kissimmee, Florida.
Someone has never seen “On the Beach”.
GAATTCGCTGGGTAAACACACCTTG, you bastard!
A direct hit, or even an indirect one, is not necessary in order for a place like New Zealand to suffer badly. With enough airborne particulates (radioactive or not), any survivors would probably face slow-ish starvation. Extinction may take a while if it happens at all, but with devastated supply chains, colder temperatures, minimal sunlight, disrupted rain patterns and likely messed-up soil biota, living in caves à la the post-asteroid-impact earth-dwellers in Seveneves would be a complicated matter at best, even with good planning.
at long last i understand where g-a-t-t-a-c-a got its name. nineteen years, no big deal.
Once the 'baggers et al accept climate change is real, cos their houses are underwater and so forth, this will be their plan for fixing it, just you wait and see.
Dunno if I can wait that long, and if I do live to see the day you describe, life on this planet might be pretty double-plus-ungood. I read this TC Boyle book the year it came out:
and that, combined with reading two of Bruce Sterling’s books, Heavy Weather and Distraction, and a slew of actual science articles, influenced my perspective on this. “Climate Fiction” as a spec-fic subgenre interests me: useful longish, detailed scenarios and thought experiments, as long as the authors do the work to get the science right. I am probably overdue for a plunge into Margaret Atwood’s works on this subject.
So how to engage positively with Dr. Hawking’s Listicle of Doom?
One of the appeals of Lifeboat Permaculture and Transition Towns as movements/personal practices is the argument that people can put handles on something (namely, one’s extinction, or the extinction of one’s friends, family and planet) that seems impossible to handle. Doomers who say all this is magical thinking, pointless, with no way to avoid DOOM DOOM DOOM, will have only themselves and the insides of their own heads to live with. Good luck with that.
You don’t need to exterminate humanity; you just need to permanently and severely constrain its future potential.
This is the first era where we’ve got enough people, trade, technology, and economic complexity for disasters to be truly global; we’ve run out of “outside.” Little surviving islands won’t have the resources or knowledge on hand to rebuild or sustain a modern civilization. They won’t have access to the far-flung geographic locations and high-powered machinery we use to extract raw materials today, or the ability to do the research to learn how to reprocess their trash. Plus, basically all the ores and coal etc. that can be gotten with pre-mid-20th-century technology are gone, and won’t be back for tens of millions of years. So humans will survive, but civilization probably won’t.
Even worse would be if we die off in a way that prevents even far-future life forms from doing better. It is unlikely, but not impossible, that we end up sterilizing the biosphere, eliminating the possibility of future Earth-originating intelligent life. It is also not impossible that Earth becomes uninhabitable due to a paperclip-maximizer-type disaster; in that case the AI may decide space-based resources would be useful, and end up in control of our entire future light cone, preventing future alien species from succeeding where we failed.
Why do people assume all knowledge of science will vanish and we’ll all start worshipping Vol? It’s true a small population couldn’t maintain the industrial base to sell iPhones to all. But is that the best measure of civilization?