I’ve been following the existential risk (xrisk) community for a few years, and I’ve noticed that its core assumptions aren’t challenged much. Inspired by a well-argued (though perhaps misguided) critique of xrisk by Phil Torres, I’ve tried to pin down my qualms with those assumptions.¹
When polled on the odds of humanity going extinct by 2100, the participants at the 2008 Global Catastrophic Risk Conference assigned a median estimate of 19 percent. Toby Ord, author of “The Precipice: Existential Risk and the Future of Humanity,” puts the odds at one in six. These estimates might sound high, but they’re sunny compared to the 50 percent figure that Martin Rees gave in his 2003 book, “Our Final Hour.” Almost without exception, xrisk scholars believe that the 21st century is a “time of perils,” to quote Carl Sagan: an era in which manmade threats, such as artificial intelligence and climate change, make human extinction more likely than ever.
But suppose our chance of extinction has indeed hovered around 2 percent per decade this century, in line with Toby Ord’s and the GCR Conference participants’ estimates. If our xrisk is now much higher than in previous centuries, why have we seen fewer catastrophes in this century than in prior ones? Historical data is patchy, but we know of many cataclysms that have taken big bites out of humanity, such as World War I (which killed 1% of the world population), the Spanish Flu (2%), the first wave of the Spanish colonization of the Americas (2%), the Taiping Rebellion (2%), World War II (3%), the Turco-Mongol invasions of the 13th century (5%), the Three Kingdoms War (20%), the Black Death (23%), the Plague of Justinian (25%), and several others. The Bronze Age Collapse likely killed a large fraction of humanity as well, and the Toba supereruption roughly 75,000 years ago might have reduced the world population by more than 90 percent.
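As a sanity check on that 2 percent figure (the conversion is my framing, not Ord’s or the conference’s): treating each estimate as a per-century probability P and assuming a constant per-decade hazard p, we can solve (1 - p)^10 = 1 - P. Both estimates land near 2 percent per decade:

```python
# Convert a per-century extinction probability P into the constant
# per-decade probability p that would produce it: (1 - p)**10 = 1 - P.
def per_decade(P: float) -> float:
    return 1 - (1 - P) ** (1 / 10)

print(f"Ord's one in six:  {per_decade(1 / 6):.1%} per decade")  # ~1.8%
print(f"GCR median of 19%: {per_decade(0.19):.1%} per decade")   # ~2.1%
```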
So, our species has seen disasters on every order of magnitude, with larger ones occurring less often than smaller ones. If our 21st-century xrisk is 2 percent per decade, our risk of disasters in the high-but-nonexistential range should be far greater than that, and those in the medium range far greater still. Yet we have not seen such disasters in the first two decades of this century.² Past is not always prologue, but Max Roser and others have pooled tons of data showing that the world is in fact improving in many areas, with several historic risk categories, such as war and famine, in steady decline.
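To make the shape of that argument concrete, here is a toy sketch (the power-law form and its exponent are assumptions of mine, not estimates from the xrisk literature): anchor the per-decade chance that a disaster kills at least a fraction f of humanity to the 2 percent extinction figure, and let it fall off as a power of f:

```python
# Toy model, not an empirical claim: assume the per-decade probability that
# a disaster kills at least a fraction f of humanity follows a power law,
# P(>= f) = p_ext * f**(-alpha), anchored so that f = 1 (extinction)
# occurs with probability p_ext. The tail exponent alpha is assumed.
p_ext = 0.02  # 2% per decade, per the estimates above

for alpha in (0.5, 1.0):
    p_5pct = p_ext * 0.05 ** (-alpha)
    print(f"alpha={alpha}: P(a disaster kills >=5% of humanity) ~ {p_5pct:.0%} per decade")
# alpha=0.5 -> ~9% per decade; alpha=1.0 -> ~40% per decade
```

On either assumed exponent, high-but-nonexistential disasters come out far likelier than extinction, which is exactly the tension noted above.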
Of course, many xrisk folks argue that our main threats this century belong to new categories, such as artificial intelligence, engineered pandemics, or nanotechnology. Though I appreciate the theoretical dangers of these technologies, assigning high risk estimates to them seems premature. Consider: If past eras had included people who cared about xrisk, or even understood the concept, they would almost certainly have viewed the technological developments of their day, which hindsight shows were manageable, as plausible threats to their survival.
For example, the first tribe to control fire, armed with knowledge of existential risk, might have feared that the widespread adoption of fire in warfare and hunting would destroy the world’s forests and grasslands, causing famine.³ Ancient Egyptians might have feared that domesticating baboons and monkeys for complex tasks would lead to a Planet of the Apes-style species coup, much as we worry about supersmart AI one day taking over. Eurasian farmers might have feared that steppe raiders, with their seemingly unstoppable mounted archers, would prevent sedentary societies from ever achieving long-term stability; millennia of raiding-related state collapses would have vindicated those fears.
And anyone of any era might have wrung their hands over the risk of an omnicidal ideology, spread through war and nourished by millenarian fantasies, engulfing the world and leading to a global Jonestown massacre. Remember: Fears about anthropogenic extinction did not start with nukes.⁴ A decade before the Trinity test, smart people worried that airplanes loaded with poison gas could make life untenable if used in war. Perhaps if we’d been blessed with ancient texts on xrisk as well as recent ones, we would have a more measured perspective on the risks we face now.
One problem with xrisk analysis is that it dwells too much on the risks of specific disaster scenarios instead of viewing them in the context of broader risk-mitigating trends, such as long-term increases in population, expert connectedness, institution quality, knowledge redundancy, and infrastructure durability. Pointing out gains in our destructive capacities matters little if gains in areas that limit or prevent destruction outpace them, as we’ve seen for most of history. Another problem is that xrisk thinkers tend to favor vivid disaster scenarios over humdrum ones. War, climate change, grey goo, and runaway AI get a lot of play in xrisk discussions, whereas resource exhaustion, collapsing fertility, Kessler syndrome, cultural stagnation, and similarly uncinematic threats are sidelined, even though these latter risks aren’t obviously less plausible. This lopsidedness is not helped by our fixation on ancestral threats, such as malicious agents, over modern ones, such as institutional cruft.
What inspires smart thinkers to make grim estimates of human survival? The “gravitas market” is part of it; audiences generally take pessimism more seriously than optimism. But there are other factors. People who study existential risk have little incentive to be optimistic if they want respect or funding: admitting to potential funders that the risk of extinction is small but still worth studying just doesn’t jolt the limbic system like Martin Rees’s one-in-two estimate.
A further reason, albeit more speculative, is that the modern world seems intrinsically fragile. We know from people’s distrust of markets, and from their reluctance to embrace Darwinian evolution, that spontaneous order is unintuitive. Huge, complex societies like ours seem fantastically unstable when you ponder all the ways they could go wrong. But such societies must be stable to have endured so much change for so long. Extinction scenarios, much like “singularity” scenarios, may also be attractive for their simplicity. Imagining our future as a hell or heaven takes far less effort than imagining it as a farrago of good and bad aspects, which it most likely will be.
But perhaps the most important reason why xrisk thinkers are pessimistic is that optimism is a mug’s game: If you’re wrong, you were complacent; if you’re right, you were lucky. Pessimists face no such trap; if their predictions aren’t vindicated, they can claim their warnings saved the day, and no one can prove them wrong.
¹ Unlike Torres, I consider xrisk a valuable field of study. Humanity is not guaranteed a place in the stars. We could join the darkness as our cousin species did, if we’re too complacent, unlucky, or careless. But I don’t think our situation is as dire as many xrisk scholars do.
² Covid-19 has taken many lives and could take far more, but the prediction engine Metaculus currently estimates that about 2 million people will die from the disease by 2021, or around 0.026 percent of the world population. To kill as large a fraction of humanity as the Spanish Flu, Covid-19 would have to wipe out roughly 80 times that figure: by no means impossible over the coming years, but not obviously likely either. And even that colossal tragedy would have to be about 12 times worse still to match the Black Death.
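Spelling out the arithmetic behind those multiples (the ~7.8 billion world population figure is my assumption, not part of the Metaculus estimate):

```python
# Checking the footnote's multiples. A 2020 world population of ~7.8
# billion is my assumption, not a figure from the Metaculus question.
covid_share = 2e6 / 7.8e9  # ~0.026% of humanity
spanish_flu, black_death = 0.02, 0.23
print(f"Covid-19 share of humanity:  {covid_share:.3%}")                  # 0.026%
print(f"Spanish Flu multiple:        {spanish_flu / covid_share:.0f}x")   # ~78x
print(f"Black Death vs. Spanish Flu: {black_death / spanish_flu:.0f}x")   # ~12x
```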
³ Our foraging ancestors had a better claim to be living in a time of perils, when you consider that all other species in the genus Homo went extinct in their technological infancy.
⁴ Nuclear weapons are a paradigmatic example of a technology widely touted for decades as existentially dangerous, yet most xrisk scholars today, after years of careful analysis, think nukes are unlikely to wipe out humanity. Some xrisk scholars note that the Manhattan Project scientists feared, before the Trinity test, that a nuclear detonation might ignite the atmosphere, and they cite this moment as heralding the age of anthropogenic xrisk. But it’s more apt to say that this moment heralded the age of caring about such risk.