The Fermi Paradox Vs. the E.T. Hypothesis

I’ve always been skeptical of the E.T. hypothesis, which posits that some UFOs are likely aliens. The lack of hard evidence puts a low ceiling on its plausibility, and even mass sightings can have prosaic causes.1 But what about the occasional barn burner of a sighting, one with credible witnesses and backed by compelling video or radar data? What about the USS Nimitz encounters, for example?

Implausible does not mean impossible. UFO sightings are frequent enough, and their potential implications large enough, that smart people should be studying them. But many skeptics have a key objection: the Fermi paradox. The Fermi paradox holds that the apparent deadness of outer space is hard to square with the existence of aliens, who would leave visible traces of their expansion, such as by dumping nuclear waste into stars, gobbling up the energy spumes from quasars, or rearranging galaxies in structured ways.

People have sought to resolve the Fermi paradox for decades, such as by hypothesizing that aliens are hiding from each other, or that they have confined us to a cosmic zoo with a fake night sky. Unfortunately, most of these hypotheses grant aliens whimsical motives or godlike coordination capacities, when the more Occamic explanation is that they don’t exist.

Does that mean the Fermi paradox makes the E.T. hypothesis untenable? Maybe not. Here are two explanations that might reconcile the two arguments. The first is one I’ve heard before, but the second, I think, is original.

1. A Very Early Great Filter

Life looks to have arisen very early in Earth’s history. But what if it arose even earlier, such as in the stellar nursery of which our sun was once a part? If the emergence of life is very difficult, perhaps only occurring in stellar nurseries under rare conditions, then those nurseries might be where the Great Filter lies.2 The entire visible universe might therefore be dead—except in the star systems descended from that one stellar nursery, and wherever life from those systems has since spread.

If this model is true, the Fermi paradox is a little easier to resolve, because we can rule out life being visible at the largest scales, given how long it would take to spread from its stellar-nursery origins in the Milky Way. We still have to wonder why local ancient civilizations3 wouldn’t be visible at smaller scales—but, in dealing with just a few such civilizations, we have more room for whimsical equilibria that wouldn’t make sense at the grandest scales, including ones where quirky cultural barriers have prevented those civilizations from expanding much.

The UFOs visiting us, if we grant that they are aliens, might therefore belong to our distant cousins, who have colonized but a few patches of the Milky Way yet remain eons from cosmic visibility.

2. A Not-Soon Future Great Filter

A common explanation of the Fermi paradox is that the Great Filter lies in our future. That is, civilizations inevitably destroy themselves. They stumble onto some spectacularly dangerous technology, or they succumb to war or resource exhaustion or something else. This explanation seems unlikely, but it doesn’t scuttle the E.T. hypothesis. In fact, a future Great Filter dovetails with the E.T. hypothesis quite well.

That may seem counterintuitive. A common view among longtermists is that transitioning from a planetbound civilization to a spacefaring one will slash our existential risk; asteroids, supervolcanoes, and other hazards that could smite our now-uniplanetary species will no longer hang over us like Swords of Damocles. And risks to multiplanetary species, such as gamma-ray bursts, are rare enough that we could spread beyond their reach before one strikes. This thinking has merit. Certainly, scattering our species, knowledge, and infrastructure as far as possible cuts our exposure to the local hazards we face now, and the bulk of the Great Filter, if it lies in our future, might well precede interstellar expansion.

But the Great Filter might lie further ahead, in the era of interstellar expansion itself. We can’t yet know what risks we’ll face then. Maybe there are no cheap defenses against relativistic weapons, so that space colonists never get lasting footholds on other worlds before they’re obliterated by their enemies. Maybe expansion via von Neumann probes is much harder than we think, leaving species vulnerable to resource exhaustion before they can spread far. Models of spacefaring civilizations, especially in science-fiction, tend to view offworld colonies as seeds containing sufficient knowledge and infrastructure for further expansion. But such colonies might rely on resource inflows from their homeworlds for a long time before they can send viable seeds of their own.

Put metaphorically, perhaps a civilization’s spacefaring adolescence—instead of its spacefaring infancy, such as we are in now—contains the greatest barriers to attaining full technological maturity. If that’s so, encountering weak hints of alien civilizations, such as UFOs, is more probable than seeing their megastructures in the night sky. Those UFOs might represent the technological peak of almost all civilizations, combining the fantastic speeds and accelerations we expect of technology beyond our own with the clumsy behaviors and hide-fails we expect from a civilization that has not yet had millions of years to perfect its art.

If this model is true, our universe contains countless interstellar civilizations that occasionally bump into each other, and maybe even coalesce from time to time, before contracting into permanent quietude due to cosmic costs or hazards.

In a recent article contending that we should keep an open mind about UFOs, economist Tyler Cowen wrote: “to this observer, the most likely resolution of the Fermi paradox is this: The aliens have indeed arrived, through panspermia — and we are they.” Many people agree with Cowen that the E.T. hypothesis itself resolves the Fermi paradox, but I’m not one of them. Rather, the Fermi paradox becomes more puzzling when you consider that aliens who can visit us haven’t also left a mark in the stars–unless those aliens are very rare, either because they share our fluky stellar-nursery origins or because they don’t survive their spacefaring adolescence.

1In 1909, businessman Wallace Tillinghast falsely claimed that he’d invented a new kind of airplane but would only fly it at night to prevent rivals from copying its design. Over the next fourteen months, thousands of people claimed to have seen the airplane in the night sky. H.P. Lovecraft, no easy mark, encountered a crowd of awestruck witnesses to one supposed night-flight, and saw that the object of their attention was Venus.

2The Great Filter is just whatever prevents dead matter from becoming a cosmically visible civilization.

3Why ancient? The universe is 13.7 billion years old. Adopting the mediocrity principle, we should not a priori assume we are early arrivals to the cosmic scene. Any true early arrivals would’ve had billions of years to leave a mark on the universe–plenty of time to do so, unless the costs and timescales of expansion are far, far higher than seems likely now.

Flawed Assumptions in Existential Risk Analysis

I’ve been following the existential risk (xrisk) community for a few years, and I’ve noticed that its core assumptions aren’t challenged much. Inspired by a well-argued (though perhaps misguided) critique of xrisk by Phil Torres, I’ve tried to pin down my qualms with those assumptions.1

When polled on the odds of humanity going extinct by 2100, the participants at the 2008 Global Catastrophic Risk Conference assigned a median estimate of 19 percent. Toby Ord, author of “The Precipice: Existential Risk and the Future of Humanity,” puts the odds at one in six. These estimates might sound high, but they’re sunny compared to the 50 percent figure that Martin Rees gave in his 2003 book, “Our Final Hour.” Almost without exception, xrisk scholars believe that the 21st century is a “time of perils,” to quote Carl Sagan–an era in which manmade threats, such as artificial intelligence and climate change, make human extinction more likely than ever.

But assume our chance of extinction this century hovers around 2 percent per decade, in line with Toby Ord’s and the GCR Conference participants’ estimates. If our xrisk is now much higher than in previous centuries, why have we seen fewer catastrophes in this century than in prior ones? Historical data is flawed, but we know of many cataclysms that have taken big bites out of humanity, such as World War I (which killed 1% of the world population), the Spanish Flu (2%), the first wave of the Spanish colonization of the Americas (2%), the Taiping Rebellion (2%), World War II (3%), the Turco-Mongol invasions of the 13th century (5%), the Three Kingdoms War (20%), the Black Death (23%), the Plague of Justinian (25%), and several others. The Bronze Age Collapse likely killed a large fraction of humanity as well, and a supervolcanic eruption 75,000 years ago might have reduced the world population by more than 90 percent.

So, our species has seen disasters on every order of magnitude, with larger ones occurring less often than smaller ones. If our 21st-century xrisk is 2 percent per decade, our risk of disasters in the high-but-nonexistential range should be far greater than that, and those in the medium range far greater still. Yet we have not seen such disasters in the first two decades of this century.2 Past is not always prologue, but Max Roser and others have pooled tons of data showing that the world is in fact improving in many areas, with several historic risk categories, such as war and famine, in steady decline.
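To make the frequency argument concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that catastrophe severities follow a power-law tail with exponent 1, anchored so that extinction-level events arrive at the quoted 2 percent per decade; both numbers are mine, not the xrisk literature’s, and the qualitative point survives reasonable tweaks to either.

```python
# A toy model of the size-frequency argument. Assumptions (illustrative only):
# the rate of catastrophes killing at least a fraction s of humanity scales
# like s**(-ALPHA), anchored so extinction-level events (s = 1.0) arrive at
# the 2-percent-per-decade estimate quoted above.

EXTINCTION_RATE = 0.02  # assumed extinction-level events per decade
ALPHA = 1.0             # assumed power-law exponent of the severity tail

def events_per_decade(severity: float) -> float:
    """Expected events per decade killing at least `severity` of humanity."""
    return EXTINCTION_RATE * severity ** -ALPHA

# Severities taken from the essay's examples.
for label, s in [("extinction", 1.0), ("Black Death-scale", 0.23),
                 ("WWII-scale", 0.03), ("WWI-scale", 0.01)]:
    print(f"{label:>18}: {events_per_decade(s):.2f} per decade")
```

Under these assumptions, a WWII-scale catastrophe should arrive roughly every fifteen years and a WWI-scale one about twice a decade—two quiet decades are evidence against the premises, which is the thrust of the argument above.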

Of course, many xrisk folks argue that our main threats this century belong to new categories, such as artificial intelligence, engineered pandemics, or nanotechnology. Though I appreciate the theoretical dangers of these technologies, assigning high risk estimates to them seems premature. Consider: If past eras had contained folks who cared about xrisk, or even understood the concept, those folks would almost certainly have viewed the technological developments of their day, which we now see with hindsight as manageable, as plausible threats to their survival.

For example, the first tribe to control fire, armed with knowledge of existential risk, might have feared that the widespread adoption of fire in warfare and hunting would destroy the world’s forests and grasslands, causing famine.3 Ancient Egyptians might have feared that domesticating baboons and monkeys for complex tasks would lead to a Planet of the Apes-style species coup, much as we worry about supersmart AI one day taking over. Eurasian farmers might have feared that steppe raiders, with their inexorable mounted archers, would prevent sedentary societies from achieving long-term stability; millennia of raiding-related state collapses would have vindicated those fears.

And anyone of any era might have wrung their hands over the risk of an omnicidal ideology, spread through war and nourished by millenarian fantasies, engulfing the world and leading to a global Jonestown massacre. Remember: Fears about anthropogenic extinction did not start with nukes.4 A decade before the Trinity test, smart people worried that airplanes loaded with poison gas could make life untenable if used in war. Perhaps if we’d been blessed with ancient texts on xrisk as well as recent ones, we would have a more measured perspective on the risks we face now.

One problem with xrisk analysis is that it dwells too much on the risks of specific disaster scenarios instead of viewing them in the context of broader risk-mitigating trends, such as long-term increases in population, expert connectedness, institution quality, knowledge redundancy, and infrastructure durability. Pointing out gains in our destructive capacities matters little if gains in areas that limit or prevent destruction outpace them, as we’ve seen for most of history. Another problem is that xrisk thinkers tend to favor vivid disaster scenarios over humdrum ones. War, climate change, grey goo, and runaway AI get a lot of play in xrisk discussions, whereas resource exhaustion, collapsing fertility, Kessler syndrome, cultural stagnation, and similarly uncinematic threats are sidelined, even though these latter risks aren’t obviously less plausible. This lopsidedness is not helped by our fixation on ancestral threats, such as malicious agents, over modern ones, such as institutional cruft.

What inspires smart thinkers to make grim estimates of human survival? The “gravitas market” is part of it; audiences take pessimism more seriously, in general. But there are other factors. People who study existential risk have no incentive to be optimistic if they want respect or funding. Admitting to potential funders that the risk of extinction is small but still worth studying just doesn’t jolt the limbic system like Martin Rees’s one-in-two estimate.

A further reason, albeit more speculative, is that the modern world seems intrinsically fragile. We know from people’s distrust of markets, and from their reluctance to embrace Darwinian evolution, that spontaneous order is unintuitive. Huge, complex societies like ours seem fantastically unstable when you ponder all the ways they could go wrong. But such societies must be stable to have endured so much change for so long. Extinction scenarios, much like “singularity” scenarios, may also be attractive for their simplicity. Imagining our future as a hell or heaven takes far less effort than imagining it as a farrago of good and bad aspects, which it most likely will be.

But perhaps the most important reason why xrisk thinkers are pessimistic is that optimism is a mug’s game: If you’re wrong, you were complacent; if you’re right, you were lucky. Pessimists face no such trap; if their predictions aren’t vindicated, they can claim their warnings saved the day, and no one can prove them wrong.

1Unlike Torres, I consider xrisk a valuable field of study. Humanity is not guaranteed a place in the stars. We could join the darkness as our cousin species did, if we’re too complacent, unlucky, or careless. But I don’t think our situation is as dire as many xrisk scholars do.

2Covid-19 has taken many lives and could take far more, but the prediction engine Metaculus currently estimates that about 2 million people will die from the disease by 2021, or around 0.02 percent of the world population. To kill as large a fraction of humanity as the Spanish Flu, Covid-19 would have to wipe out 100 times that figure–by no means impossible over the coming years, but not obviously likely either. And even that colossal tragedy would have to be 12 times worse still to match the Black Death.

3Our foraging ancestors had a better claim to be living in a time of perils, when you consider that all other species in the genus Homo went extinct in their technological infancy.

4Nuclear weapons are a paradigmatic example of a technology widely touted for decades to be existentially dangerous, but most xrisk scholars today, after years of careful analysis, think nukes are unlikely to wipe out humanity. Some xrisk scholars note that the Manhattan Project scientists, before the Trinity test, feared nukes would ignite the atmosphere, and cite this moment as heralding the age of anthropogenic xrisk. But it’s more apt to say that this moment heralded the age of caring about such risk.

The Parable of the Plant Men

Olaf Stapledon’s splendid 1937 science-fiction novel “Star Maker” (now in the public domain) is a trove of parables on civilizational progress, and the many hazards to it. The passage on “Plant Men” nicely illustrates how a species that rejects creativity in favor of pleasant stasis dooms itself to eventual death. (Note: Long quotes ahead.)

First, some background:

On certain small planets, drenched with light and heat from a near or a great sun, evolution took a very different course from that with which we are familiar. The vegetable and animal functions were not separated into distinct organic types. Every organism was at once animal and vegetable.

[…]

The typical plant man was an erect organism, like ourselves. On his head he bore a vast crest of green plumes, which could be either folded together in the form of a huge, tight, cos lettuce, or spread out to catch the light. Three many-faceted eyes looked out from under the crest. Beneath these were three arm-like manipulatory limbs, green and serpentine, branching at their extremities.

[…]

By day the life of these strange beings was mainly vegetable, by night animal. Every morning, after the long and frigid night, the whole population swarmed to its rooty dormitories. Each individual sought out his own root, fixed himself to it, and stood throughout the torrid day, with leaves outspread. Till sunset he slept, not in a dreamless sleep but in a sort of trance, the meditative and mystical quality of which was to prove in future ages a well of peace for many worlds. While he slept, the currents of sap hastened up and down his trunk, carrying chemicals between roots and leaves, flooding him with a concentrated supply of oxygen, removing the products of past katabolism. When the sun had disappeared once more behind the crags, displaying for a moment a wisp of fiery prominences, he would wake, fold up his leaves, close the passages to his roots, detach himself, and go about the business of civilized life.

A dual condition:

Now throughout the career of this race there had been a certain tension between the two basic impulses of its nature. All its finest cultural achievements had been made in times when both had been vigorous and neither predominant. But, as in so many other worlds, the development of natural science and the production of mechanical power from tropical sunlight caused grave mental confusion. The manufacture of innumerable aids to comfort and luxury, the spread of electric railways over the whole world, the development of radio communication, the study of astronomy and mechanistic biochemistry, the urgent demands of war and social revolution, all these influences strengthened the active mentality and weakened the contemplative. The climax came when it was found possible to do away with the day-time sleep altogether. The products of artificial photosynthesis could be rapidly injected into the living body every morning, so that the plant man could spend practically the whole day in active work. Very soon the roots of the peoples were being dug up and used as raw material in manufacture. They were no longer needed for their natural purpose.

Decline and fall:

Seemingly, artificial photosynthesis, though it could keep the body vigorous, failed to produce some essential vitamin of the spirit. A disease of robotism, of purely mechanical living, spread throughout the population. There was of course a fever of industrial activity. The plant men careered round their planet in all kinds of mechanically propelled vehicles, decorated themselves with the latest synthetic products, tapped the central volcanic heat for power, expended great ingenuity in destroying one another, and in a thousand other feverish pursuits pushed on in search of a bliss which ever eluded them.

After untold distresses they began to realize that their whole way of life was alien to their essential plant nature. Leaders and prophets dared to inveigh against mechanization and against the prevalent intellectualistic scientific culture, and against artificial photosynthesis. By now nearly all the roots of the race had been destroyed; but presently biological science was turned to the task of generating, from the few remaining specimens, new roots for all. Little by little the whole population was able to return to natural photosynthesis. The industrial life of the world vanished like frost in sunlight. In returning to the old alternating life of animal and vegetable, the plant men, jaded and deranged by the long fever of industrialism, found in their calm day-time experience an overwhelming joy. The misery of their recent life intensified by contrast the ecstasy of the vegetal experience. The intellectual acuity that their brightest minds had acquired in scientific analysis combined with the special quality of their revived plant life to give their whole experience a new lucidity. For a brief period they reached a plane of spiritual lucidity which was to be an example and a treasure for the future aeons of the galaxy.

But even the most spiritual life has its temptations. The extravagant fever of industrialism and intellectualism had so subtly poisoned the plant men that when at last they rebelled against it they swung too far, falling into the snare of a vegetal life as one-sided as the old animal life had been. Little by little they gave less and less energy and time to “animal” pursuits, until at last their nights as well as their days were spent wholly as trees, and the active, exploring, manipulating, animal intelligence died in them forever.

For a while the race lived on in an increasingly vague and confused ecstasy of passive union with the universal source of being. So well established and automatic was the age-old biological mechanism for preserving the planet’s vital gases in solution that it continued long to function without attention. But industrialism had increased the world population beyond the limits within which the small supply of water and gases could easily fulfil its function. The circulation of material was dangerously rapid. In time the mechanism was overstrained. Leakages began to appear, and no one repaired them. Little by little the precious water and other volatile substances escaped from the planet. Little by little the reservoirs ran dry, the spongy roots were parched, the leaves withered. One by one the blissful and no longer human inhabitants of that world passed from ecstasy to sickness, despondency, uncomprehending bewilderment, and on to death.

Like the plant men, we humans can find many aspects of the modern world uncomfortable: war, drudgery, addiction, bureaucracy, alienation, stupefying complexity, and novel existential threats, to begin with. But we put up with these difficulties in exchange for wealth, freedom, leisure, fun, creativity, and the thrill of solving interesting problems. Most importantly, we crave progress: new kinds of fun, new sorts of problems.

Alas, almost all the civilizations that have ever existed did not even get as far as the Plant Men; they aspired to a safe and predictable equilibrium, to stasis, and were destroyed from without or within. As David Deutsch teaches us: “an unproblematic state is a state without creative thought. Its other name is death.”

Create, or die; the universe affords no middle path.

Covid-19, or The Price of a Handshake

A few days ago, Anders Sandberg tweeted this reply on the subject of Covid-19:

He added:

Micromorts are a slippery concept, since most activities have an ambiguous impact on mortality. When looking at an entire lifespan, does running a marathon, for example, really incur seven micromorts (a 0.0007% chance of dying) when you factor in the gains from exercise or the boost to well-being? Does the first cigarette or charbroiled steak you consume really carry the same risk as your last one, all things considered? Probably not. But the concept is helpful for pondering clear risks with few if any positive offsets, such as Covid-19; it even gives us a framework for deciding how much money we might want to spend to avoid such risks. According to Wikipedia:

An application of micromorts is measuring the value that humans place on risk: for example, one can consider the amount of money one would have to pay a person to get him or her to accept a one-in-a-million chance of death (or conversely the amount that someone might be willing to pay to avoid a one-in-a-million chance of death). When put thus, people claim a high number but when inferred from their day-to-day actions (e.g., how much they are willing to pay for safety features on cars) a typical value is around $50 (in 2009).[31][32] However utility functions are often not linear, i.e. the more a person has already spent on their safety the less they are willing to spend to further increase their safety. Therefore, the $50 valuation should not be taken to mean that a human life (1 million micromorts) is valued at $50,000,000. Furthermore, the local linearity of any utility curve means that the micromort is useful for small incremental risks and rewards, not necessarily for large risks.[32]

Government agencies use a nominal Value of a Statistical Life (VSL) – or Value for Preventing a Fatality (VPF) – to evaluate the cost-effectiveness of expenditure on safeguards. For example, in the UK the VSL stands at £1.6 million for road improvements.[33] Since road improvements have the effect of lowering the risk of large numbers of people by a small amount, the UK Department for Transport essentially prices a reduction of 1 Micromort at £1.60 (US$2.70). The US Department of Transportation uses a VSL of US$6.2 million, pricing a Micromort at US$6.20.[34]

First, I’m happy to grant that there’s some irreducible uncertainty behind slapping dollar values onto human life (Norman Borlaug and Ted Bundy’s lives plausibly did not have perfectly equal worth). Most folks would still agree that a life-value is somewhere between zero and infinity, and the technocrat’s cold figure of $6.2 million puts a safe lower bound on that value for our purposes, even if utility is nonlinear and harder to quantify at the extremes.

At a $6.2 million life-value, how much should you be willing to spend to avoid Covid-19? Consider this chart:

When all is said and done, the case fatality rate may be lower than current reports suggest, but it’s probably at least an order of magnitude higher than that of the seasonal flu. If we turn the CFR percentages for Covid-19 shown here into micromorts, and then assign a $6.20 value to each micromort (based on our $6.2 million life-value), we get the following risk valuations for contracting Covid-19 for each age group:

  • 10-39: $12,400
  • 40-49: $24,800
  • 50-59: $80,600
  • 60-69: $223,200
  • 70-79: $496,000
  • 80+: $917,600

Keep in mind, these valuations are conservative, since people’s willingness to pay for safety can reach up to $50 per micromort reduction, locally. I’d eagerly pay much more than $20,460 to avoid a round of the lottery in Shirley Jackson’s eponymous story, for example, even though it carries a mere 3,300 micromorts of risk (given the story’s 300 participants). Context is everything.

But what do these numbers mean, practically speaking? For starters, anyone over age 10 should think of each handshake as carrying an invisible fine: one equal to the risk valuation upon getting Covid-19, multiplied by the probability that any handshake will pass the virus on to them. So, if you’re 30 years old and there’s a 0.1% chance your next handshake will give you the virus, you should imagine $12.40 being deducted from your bank account the moment you link hands. Now replicate this calculus for using the subway, for rubbing your nose, for touching elevator buttons and handrails, and so on. And if you don’t think any of these things are that risky right now, just wait a couple of weeks for exponential growth to do its work.
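For the curious, here is the post’s arithmetic as a short Python sketch. The CFR-by-age figures are the ones implied by the valuations above (I’m reading them off the chart, so treat them as assumptions), and the 0.1 percent handshake risk is the illustrative number from the previous paragraph.

```python
# The micromort arithmetic in one place. The CFR figures are assumed from
# the chart; the $6.20 price per micromort follows from the $6.2 million
# value of a statistical life.

VSL = 6_200_000                         # value of a statistical life, USD
PER_MICROMORT = VSL / 1_000_000         # $6.20 per one-in-a-million death risk

cfr_by_age = {                          # case fatality rate, percent (assumed)
    "10-39": 0.2, "40-49": 0.4, "50-59": 1.3,
    "60-69": 3.6, "70-79": 8.0, "80+": 14.8,
}

for age, cfr in cfr_by_age.items():
    micromorts = cfr / 100 * 1_000_000  # a 1% CFR is 10,000 micromorts
    print(f"{age:>5}: ${micromorts * PER_MICROMORT:,.0f}")

# The invisible handshake fine for a 30-year-old: the $12,400 valuation
# times the illustrative 0.1% chance that a given handshake infects you.
print(f"handshake fine: ${12_400 * 0.001:.2f}")  # $12.40
```

Running it reproduces the table above and the $12.40 handshake fine exactly.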

Speculative Biology

There’s no good word for the art of imagining new animals. “Speculative evolution” is somewhat misleading, and both “xenobiology” and “astrobiology” are scientific disciplines. I’ll stick with “speculative biology.”

The German zoologist Gerolf Steiner was the first big practitioner of the craft. In the late 1950s, he built mock taxidermies such as this:

Steiner’s fictional “rhinograde” mammals use their noses to get by. “Snout leapers” bounce on their noses, while “earwings” use their noses as rudders during ear-propelled flight. The rhinograde in this pic uses its nose to catch fish.

I got my first taste of speculative biology when I was twelve. My grandfather gave me John Varley’s Gaea Trilogy, a dazzling science fiction saga published from 1979 to 1984. It had creatures I’d never seen in the genre: whistling, sapient blimps; barbed airborne predators with pulsejet engines; living film cameras; parasitic brain-worms that beam characters’ thoughts to an alien superorganism; “deathsnakes” that reanimate human corpses. The topography of the world was also unlike anything I’d imagined:

Jean-Paul Verne made this impressive visualization of the Gaea Trilogy’s surreal world.

A few years later, the Discovery Channel aired Alien Planet. A docufiction special based on Wayne Barlowe’s sublime book Expedition, it imagines life on the lush world of Darwin IV. Both the special and the book crowbarred my imagination wide open.

Creatures from Expedition (top to bottom): a forest gulper, emperor sea striders, and prismalopes.

To date, nothing I’ve read has come close to matching the splendor and plausibility of Expedition. It’s like a da Vinci notebook in its abundance of ideas ahead of their time.

Many other writers and artists have built their reputations on this genre, such as Dougal Dixon, C. M. Koseman, Alex Ries, and Peter Ward. James Cameron’s Avatar looked like the field’s big break when it came out. But despite that movie’s record-smashing release a decade ago, few works since have put strange creatures front-and-center in quite the same way. Even Avatar was just weird-ish, with Pandora feeling more like an amusement park than a bold exploration of life’s possibilities. When it comes to movies with giant blue humanoids, I’ll always prefer René Laloux’s Fantastic Planet (1973):

As the virtual worlds of video games get cheaper to create, and as artists strive to be more original in a competitive market, I’m hopeful that people’s interest in speculative biology will grow. And that’s exciting, because we’ve barely scratched the surface of this strange field.

Primal

Genndy Tartakovsky is one of the best animators in the business, and his new show Primal is shaping up to be his finest work. Only the first half has aired, with the other half still in development. It’s a gorgeous, blood-soaked action-adventure story with some unlikely dramatis personae: a caveman and a T-Rex who, after bonding over shared tragedies, struggle to survive a prehistoric hellscape together.

Tartakovsky has always been deft at resonance: novelist David Farland’s term for the skillful layering of allusions to give a work the sense of being nested in a grand tradition. Primal is the animator’s most resonant show ever, with allusions to Tarzan, Conan the Barbarian, The Lost World, Heavy Metal, Planet of the Apes, Moebius and Frazetta artwork, The Incredible Hulk, Journey to the Center of the Earth, King Kong, Devil Dinosaur, and H.P. Lovecraft’s terrifying nightgaunts.

The show is also a master class in medium specificity: its pathos comes from the characters’ movements and expressions alone. Not one word of dialogue is spoken in the five episodes released so far. The linework has a self-conscious scruffiness borrowed from traditional animation, while the color palette is a painterly blend of reds and greens that recalls the “Rite of Spring” segment of Fantasia.

Tartakovsky’s action scenes are never realistic, and the kinetic hyperbole reaches a new pitch here, with characters shaking off bone-shattering wallops and achieving zero-g leaps whenever the Rule of Cool demands it. Yet a sense of danger is maintained by giving the fights real consequences. Without getting into spoilers, the characters suffer great costs for being a little too slow, a little too careless. You get the sense that anyone could have their throat ripped out at any moment.

The action is also easy to follow. Instead of getting caught up in a lot of flashy choreography, as you might find in superhero cartoons or high-budget anime, Tartakovsky likes to present each move in his fight scenes serially, often drawing it out for clarity or exaggeration. The result is conflict that’s gripping instead of exhausting.

Another nice thing about the show is how it plays with expectations. It starts out in a more or less typical Mesozoic forest, with humans as the only odd ones out, and drifts little by little toward outright fantasy, until embracing, in the final episode, full Hyborian craziness. I was also amused by how many threats in this dino-dystopia were mammals: Three of the five episodes featured them as primary enemies.

I’m thrilled to see so much good work in the world of animation lately. I only wish more of it were as tightly made as Primal, so I could find the time to savor it.

Some Thoughts on Podcasts

I listen to about three hours of audio per day, most of it podcasts. I love podcasts. They fill the time when I’m doing busy work and want more than my inner monologue for company. Having listened to about eight thousand hours of the stuff in the last decade or so, I’ve noticed patterns:

Length doesn’t matter if the conversation is interesting. If a podcast is twenty hours long but interesting, I’ll spend twenty hour-long chunks listening intently to it. If the conversation is ten minutes long but dull as a droning Maytag washer, you can forget it.

Any subject is interesting if the people discussing it are insightful; any subject is boring if the people discussing it aren’t. I’ve heard conversations on Neolithic Australia, the laryngeal nerves of giraffes, and the origins of barbed wire that were as riveting as anything.

Ideas beat banter. When interlocutors lapse into giggly in-joking and repartee, my interest wanes. I’m listening for ideas. A podcast can meander all it likes as long as it scratches that itch.

Confidence is key. Forget the hedging and mealymouthing. State your ideas frankly, and trust that listeners will apply the principle of charity. The potential for cavils and misunderstandings is infinite no matter what, so wait until they come up to address them.

Flow is also important. Interruptions that throw interlocutors off-track often ruin the conversation, or at least redirect it before a tantalizing insight can develop.

Long openings are a pain. If you want to drill in a Pavlovian cue, a few strums of a guitar will do. Get to talking within the first thirty seconds, please.

Timestamps of each transition in the conversation are incredibly helpful. (Transcripts, too.)

Podcast hosts don’t push back enough when their guests make claims, especially when the guests are famous. It’s hard to trust claims when they’re not put through criticism.

To the extent that hosts do critique claims, they throw softball questions and seldom ask guests to steelman the other side.

So which podcasts satisfy my demanding tastes? Surprisingly, lots. My favorites at the moment are Manifold, The Portal, Making Sense, and Conversations with Tyler. If you want to recommend some, I’m all ears.

Will Your Upload Really Be You?

It’s a trite question, but one you’ll hear a lot over the next fifty to one hundred years.

Mind uploads — also known as “whole brain emulations,” or just “ems” (my preference) — are near-perfect digital replicas of brains rendered down to their electrophysiological nitty-gritty (and maybe even down to the quantum level, if that’s what’s needed to get them working).

They’ll look like server racks to fleshfolk like you and me. Just acres and acres of ugly, sterile metal overshadowed by mushroom clouds of moisture from the ice slurry used to cool them. But from their point of view, they’ll live and work in resplendent virtual realities, retaining most of the motives and behaviors of the daring siliconauts whose brains were sliced and scanned to make them. (Robin Hanson’s 2016 book The Age of Em, which I mentioned in the previous post, is a detailed forecast of how this weird world might look.)

But will your em really be you? Won’t it just be a copy, a statue, a twin, a clone?

Maybe I’m missing something, but the answer always struck me as obvious. You are a package of things. Your physical continuity, sure, but also your memories, knowledge, allegiances, values, goals, and skills. Ems won’t be physically continuous with you. That’s true. But they’ll preserve so much of the stuff you care about that the trade-offs seem clearly worth it.

If that’s not intuitive, consider: Would you rather lose all your memories, knowledge, allegiances, values, goals, and skills but keep your body — or vice versa? I’d pick the latter, because without those things I wouldn’t be me in the ways I care about. Maybe you have the opposite view. Your mileage may vary.

Of course, your physical continuity won’t be the only sacrifice you’ll make if you want to live among ems. You’ll have to trade a world you’ve adapted to for a world you haven’t. Even if that new world is better, adapting will be painful. I’d rather lose a limb, say, than be teleported to a utopian planet I couldn’t leave and on which I’m a total stranger (but I would still prefer that fate to nonexistence, by a country mile).

So yes. Your em will be you in many ways. And no. Your em won’t be you in other ways.

But there’s a wrinkle. Your em will be more you in many respects as well. Your em will live longer, so it will be more extended in time. Your em will be able to make copies of itself, maybe millions of them, so it will be more extended in space. Your em will have perfect access to memories and far greater powers of introspection. That’s a lot more of you to celebrate.

On net, your em might be more you than you are now. A weird thought. But look at it from the em’s point of view.

Two Types of Futurism

In 1900, Ladies’ Home Journal published an article by civil engineer John Watkins that predicted how the U.S. would look one hundred years hence:

Although some of the predictions are howlers (no more city noise or mosquitoes?), the article has received buzz in recent years for its prescience. In fact, the article captures so much of the texture of modern life, from air conditioning to the ubiquity of cameras and telephones, that you might have thought Watkins was a traveler from the timeline next door. A timeline in which, among other things, pneumatic tubes were useful for more than factories and drive-up banks.

Not surprisingly, my tweet about this article received a lot of likes and retweets. More, in fact, than all my other tweets put together have received since I started my Twitter account eleven days ago:

People are impressed.

A few caveats. As economist Robin Hanson noted in 2012, John Watkins’ predictions overestimated people’s ability to coordinate on big efforts. They also underestimated the odd behaviors and widespread inefficiencies a rich civilization can get away with.

I would also add that the predictions failed to anticipate big value shifts (e.g. the sexual revolution and feminism), the devastating reach of ideologies (e.g. fascism and communism), and the creep of global catastrophic risks (e.g. nuclear weapons and climate change). Victorian futurists before and after Watkins, including H.G. Wells, anticipated these sources of social change and wrote about them, so it’s unclear why Watkins left them out. Maybe they were too heavy for a drawing-room lifestyle magazine.

Still, this humble civil engineer’s batting average was higher than those of the famous futurists of his time (H.G. Wells among them).

What made him so successful? It’s there in the first paragraph: He asked specialists to make predictions in their “own field[s] of investigation” — that is, to make narrow forecasts on the subjects they knew best, where their analysis would be concrete and sensitive to trade-offs, and not to look at the big picture, where their analysis would be sloppy and value-laden. He then put these predictions together, as if nailing down a railroad to the future one slat at a time.

But it couldn’t have been that easy. He had to make judgment calls along the way: which scholars to talk to, which questions to ask them, which predictions to ignore. His intuitive grasp of the world had to be sound to filter signals from noise, and he must have had good self-discipline to resist slanting his predictions with hopes and fears and biases, as many futurists do. I suspect he would have made a good superforecaster.

Crucially, Watkins didn’t underestimate the stickiness of social equilibria. Even in our fast-growing industrial era, change only happens when people and institutions let it, and when innovations, however useful, are widely enough adopted that people don’t look weird using them. Which changes happen, and when, depends on all sorts of stochastic cultural quirks, and to some degree on the whims and fashion cycles of elites.

The marvelous Apollo missions, for example, could be seen as a fluky, elite-driven indulgence more akin to the building of the Great Pyramid (the tallest structure in the world for 3,800 years) than the start of a new age of space travel. Futurists as far back as Watkins’ day had predicted that space colonization (like undersea cities and domestic robots) would be a defining feature of the twentieth century and beyond. That’s because they almost never took his approach of curating other people’s expertise, favoring their own seat-of-the-pants intuitions instead.

Watkins confined his futurism to a practical baseline scenario in which the core features of the human condition don’t change much. Food, safety, entertainment, travel, education, and information-sharing remain the fundamental aspects of life, on which all social and technological progress incrementally turns. Exotic conditional scenarios (human cloning, world government) were sensibly left out.

Now, contrast Watkins’ forecast with that of the science-fiction legend Philip K. Dick, first published in The Book of Predictions in 1980:

Only two of these eleven predictions (a nuclear accident and widespread computer use) were remotely accurate, despite Dick covering a much shorter time frame. Why?

You might have noticed that most of these forecasts could be the plots of, well, science-fiction novels. They have the vivid imagery, high stakes, and cynical edge that make science-fiction compelling. And that’s the problem. Dick’s futurism is full of narrative bias: the tendency to see the world as a story rather than a world — that is, a mostly mundane mix of people and institutions locked into mostly stable equilibria by competing costs, capacities, and interests. A lot of futurism is story-like in this way. Which is good for story-lovers but bad for the world, because good forecasts are needed for good policy.

A lot, but not all. One fine exception is Robin Hanson’s The Age of Em. Watkins-like in its method of combining specific insights in many fields to make a picture of the future — one in which mind uploads run the show — Em is likely to be vindicated for similar reasons. But that’s a post for another day.

Why Are Greys Disturbing?

Science-fiction readers dismiss Greys as unoriginal, and rightly so. What are the odds that evolution on an alien world would combine intelligence with the same body plan as it did on Earth? After all, the smartest species on our world come in all shapes and sizes: elephants, dolphins, octopuses, crows. Surely planets with vastly different conditions — higher gravity, say, or closer proximity to a star — would give rise to vastly different sorts of creatures.

For a century, science-fiction writers have taken this consideration to heart, giving us aliens with all manner of sensory organs, appendages, and styles of locomotion. Cinema, too, has aspired to originality from time to time: the xenomorphs of Alien, the parasite from The Thing. But not even the slavering xenomorphs, a fixture of pop culture, have inspired as many nightmares as the Greys, those small, pale visitors from Zeta Reticuli who read minds and experiment on dairy farmers.

That’s a puzzle, though. Greys don’t look like apex predators. They look like sickly nerds. Stories about them sometimes mention stalking, home invasion, kidnapping, and uncomfortable medical procedures, but rarely if ever torture or murder or acts of equal brutality. These star-farers seem curious at best and callous at worst, but never sadistic or cruel. Certainly not cruel enough to justify people’s deep fear of them.

Some people feel the Greys’ human likeness is the source of our anxiety: that their just-askew humanoid features — bulbous heads, tiny mouths, pitch-black eyes — trigger an ancient reflex we have toward outsiders. Yet myths are full of creatures that look or act like we do, from angels to the aos sí of Irish fairy tales, and it’s doubtful those creatures inspired a tenth as much fear in their heyday as Greys inspire now.

Others believe the Greys scare us because everyone — even skeptics — shares a tiny shred of credence that aliens have really been here, that we are not alone. Aliens don’t violate the laws of physics, after all, and the universe is pretty big. But the plausibility of aliens can’t be the main source of fear, since many of the people who believe in Greys also believe in gods and spirits with far greater powers — and often far greater ill will — but aren’t as bothered by those.

I suspect the reason Greys are so creepy is that their behavior toward us puts a low ceiling on our place in nature’s pecking order. Their cold disregard for our property rights and bodily integrity reminds us of our similar treatment of animals, which rests on our view of them as too low-status to justify moral concern. We primates care a lot about status, and Grey behavior tells us that in the cosmic scheme we’re not much more important than the creatures we eat. It’s telling that the only other terrestrial species Greys seem curious about is cows.

Consider that the Greys, if they exist, have little interest in sharing their insights with us, or engaging with our institutions, or knocking on the door before coming into our homes at two in the morning. They act just as we do when studying an ant colony. We don’t try to learn the ants’ language or secure the permission of their queen to observe them. We do whatever we want with them. We might even abduct a few for arcane purposes.

If Greys were sadistic, we could at least take comfort in our moral superiority over them. We would also take a grim sort of pride in the notion that they’re willing to travel lightyears to play out their sick fantasies on us. Instead we find ourselves relegated to scientific curiosities — ones whose bowels command more interest than our brains. If the governments of the world doubt the public could handle the truth, maybe that’s why. For the sake of our egos, let’s hope the Greys are our time-traveling descendants instead.