Should Humanity Colonize Space?

Dr. Émile P. Torres
10 min read · Apr 1, 2018


There is a good chance that a child born today will live to see colonies on Mars, and perhaps beyond. NASA aims to send humans to Mars by the 2030s, and SpaceX founder Elon Musk declared in 2016 that Martian colonies would pop up “in our lifetimes.”

Many factors are driving this quest into space. Some say that expansion is simply in our genes, while others see the colonization of Mars as a way of inspiring young people to get involved in science. Perhaps the most morally compelling reason concerns the very survival of our species. As the current director of IARPA, Jason Matheny, once wrote, “colonizing space sooner, rather than later, could reduce existential risk,” where “existential risk” denotes either human extinction or permanent civilizational collapse. The renowned philosopher Derek Parfit echoed this idea, writing that “our descendants or successors could end these [existential] risks by spreading through the galaxy.” Musk himself has said that “there is a strong humanitarian argument for making life multi-planetary … in order to safeguard the existence of humanity in the event that something catastrophic were to happen.” How long do we have to colonize space to avoid doom? According to Stephen Hawking, humanity has maybe 100 years to get off this planet or face existential destruction.

For many leading scientists and tech leaders, venturing into the firmament is a key step toward ensuring our long-term survival in this morally indifferent and hostile universe. But is colonization really the panacea that so many believe it is? Might it backfire and introduce a range of unprecedented new risks that could actually make us worse off?

I believe that the answers are, in order, “no” and “yes.” Consider what we should expect to happen as our species hops from star to star, galaxy to galaxy, throughout the universe. Let’s imagine a world in which our descendants have already colonized a large portion of the visible cosmos, and bracket the possibility that other alien lifeforms exist (in other words, we will adopt the “Rare Earth hypothesis”).

First, our descendants will undergo evolution as they spread to new planetary environments or build artificial habitats such as O’Neill cylinders. Biologists call this sort of evolution “adaptive radiation,” whereby populations of a single species diversify into distinct species as they adapt to different environments. Insofar as our descendants remain biological — either entirely or partially — we should expect them to undergo precisely this sort of evolution, given the unique atmospheric pressures, gravitational pulls, seasonal variations, lengths of day, vegetation (or lack thereof), tidal patterns, average surface temperatures, and so on, of different exoplanets.

But humanity also possesses the capacity to override Darwinian mechanisms and control our own evolutionary trajectory through the integration of biology and technology. This could occur through gene-editing techniques like CRISPR-Cas9 and base editing, brain-machine interfaces (BMIs) that connect us to external machines, and nanobots that scan our brains so that our minds can be uploaded to a computer. We should therefore also expect our descendants to “evolve” even more radical bodies and brains via cyborgization, perhaps resulting in beings that are entirely artificial rather than biological. The result would be a vast range of disparate species with entirely different physical and mental capacities than modern humans.

Along with genetic evolution, we should also expect memetic evolution, or the diversification of ideas, especially as civilizations become more informationally isolated from each other. Entirely novel political ideologies, governing structures, religious systems, linguistic traditions, scientific theories, and so on could take shape. For example, imagine a religious ideology that arises on a planet thousands of light-years away from us, according to which the planet’s occupants see themselves as God’s “chosen people” and believe that “the cosmos must be destroyed to be saved.” (This is what some apocalyptic groups actually believe. Note also that higher intelligence doesn’t necessarily lead to less crazy beliefs.) This apocalyptic group, in some dark corner of the cosmos, then sets out to annihilate every civilization it can access within its future light cone.

In short, expanding into space will have exactly the opposite effect that globalization has had on Earth. Whereas globalization has homogenized the world in the domains of culture, politics, and so on — and, some speculate, even promises to blend humanity into a single brown-skinned population — space colonization will yield an immense amount of multilayered diversity, both genetic and memetic.

* * *

So far, there is nothing obviously worrisome about this picture. Insofar as one values diversity — as many people rightly do these days — this future might appear desirable. But an unavoidable question arises: Given so many different species and civilizations — perhaps billions of each, with upwards of “a hundred thousand billion billion billion” (that is, roughly 10^32) people in total — how can these different species ensure peace from planet to planet, galaxy to galaxy? How can they prevent conflicts from breaking out between radically different civilizations, each motivated by its own beliefs and desires about what is and what ought to be?

Let’s consider two ways to obviate conflict. First, you could establish what the philosopher Thomas Hobbes referred to as a “Leviathan,” or a state system with the power to enforce laws and regulations on individuals within the state’s territory. On this model, the state provides security in exchange for some individual freedoms; this is the idea of the “social contract.” According to Steven Pinker, the rise of the Leviathan in the past few centuries is a major reason that violence has declined in the world. It can, as he writes, “defuse the temptation of exploitative attack, inhibit the impulse for revenge, and circumvent the self-serving biases that make all parties believe they are on the side of the angels.” If it weren’t for the Leviathan, humanity would find itself in an anarchic state of constant fear and violence.

So, the issue becomes: Could our descendants establish a cosmic Leviathan in the “cosmopolitical” (rather than geopolitical) arena of space? Could they create a single governing system that could enforce laws and regulations for all species and their civilizations? This appears implausible. The reason is that an important condition for effective governance is proper coordination between the various appendages of government. Imagine calling the local police to report a bank robbery happening right now and having to wait three weeks for a swarm of police officers to rush up to the bank, guns drawn, and burst inside. There would be no point to having a police force; there would be no state-provided security. The social contract would break down and, with it, the state itself.

This example hints at why a cosmic Leviathan would fail. One fact that people often forget when thinking about space is just how vast it is. For example, traveling at 36,000 mph, it would take approximately 39 days for a spaceship to reach Mars at its closest approach. A light-beam message would take between about 3 and 22 minutes, depending on where the two planets are in their orbits. But Mars is our next-door neighbor. Consider the super-Earth Gliese 581c, roughly 20 light-years away and one of the closer exoplanets in the “habitable zone” of its star. Traveling at one-quarter the speed of light, a ship leaving today wouldn’t arrive until 2098. A message sent back to Earth simply to say, “Hey guys, we arrived safely!” wouldn’t reach us until 2118. That’s a really long time! Now consider that the Andromeda Galaxy is 2.5 million light-years away, the Triangulum Galaxy is 3 million light-years away, our Local Group is 10 million light-years wide, and the universe — which is metrically expanding — stretches some 93 billion light-years across.
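To make these delays concrete, here is a minimal back-of-the-envelope sketch in Python (the distances are approximate, and relativistic effects are ignored):

```python
# Rough communication and travel delays; all figures are approximate.
LIGHT_SPEED_KMS = 299_792          # km/s

def travel_time_years(distance_ly: float, fraction_of_c: float) -> float:
    """Travel time in years at a constant fraction of light speed."""
    return distance_ly / fraction_of_c

# Gliese 581c, roughly 20 light-years away, at one-quarter light speed:
print(travel_time_years(20, 0.25))       # ~80 years: depart 2018, arrive ~2098
print(travel_time_years(20, 1.0))        # +20 years for the reply: ~2118

# Mars at closest approach (~54.6 million km):
mars_km = 54.6e6
print(mars_km / (36_000 * 1.609) / 24)   # ~39 days at 36,000 mph
print(mars_km / LIGHT_SPEED_KMS / 60)    # ~3 minutes for a light-speed message
```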

How could a governing system effectively coordinate the actions of those within its cosmic territory? How could the state maintain “law and order” when an urgent report like “The Zogilian civilization of the EGS-zs8–1 galaxy is violating the Cosmic Weapons Ban! What should we do?” takes thousands of years to reach the relevant decision-making bodies? How could a cosmic Leviathan ensure peace when millions or even billions of years are needed for information to travel from one region to another? In a universe with potentially trillions upon trillions of different individuals and billions upon billions of different species, the fundamental limits of physics would render even the most basic forms of coordination impossible. A cosmic Leviathan is out.

This leads us to the second strategy for keeping the peace, namely, a policy of deterrence: “I won’t strike you if you don’t strike me, but if you strike me, I will strike back with equal or greater force.” This took the form of “mutually assured destruction” (MAD) during the Cold War, when the world’s two superpowers each threatened to annihilate the other in response to a first strike, and this threat, along with a good deal of pure luck, seems to have prevented such a strike from occurring.

The crucial requirement for a policy of deterrence to work is credibility. That is to say, retaliatory threats must be credible in the eyes of one’s opponent to be effective: if you threaten to punch me if I punch you, but I later conclude that you’re too scared to punch me, then my incentive not to punch you first — for whatever reason; perhaps I want to steal your expensive watch or I’m worried that someday you’ll get over your fear of punching me — goes out the window. (#defenestration)

So, could a policy of deterrence work in the massively “multipolar” universe described above? This too seems unlikely. First, some influential “neorealist” scholars argue that bipolar systems, meaning systems with only two actors in play, as was the case during the Cold War, are more stable than multipolar systems — and the more actors, the less stable the predicament.
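One way to quantify this first point: deterrence must hold not globally but pairwise, and the number of pairs that must each remain credible grows quadratically with the number of actors. A toy calculation (the civilization counts below are purely illustrative):

```python
def deterrence_dyads(n: int) -> int:
    """Number of distinct pairs of actors, each requiring a credible mutual threat."""
    return n * (n - 1) // 2

for n in [2, 10, 1_000, 1_000_000_000]:
    print(f"{n} actors -> {deterrence_dyads(n)} dyads")
# 2 actors -> 1 dyad (the Cold War case);
# a billion civilizations -> ~5 * 10**17 separate deterrence relationships.
```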

Second, there could be so many different species and civilizations in the future that determining who exactly perpetrated an attack could pose an impossibly complicated forensic challenge. This too could undercut the threat of retaliation.

And third, so could the weapons available to technologically advanced future civilizations. For example, the US military is already experimenting with “directed-energy weapons” (DEWs), such as laser and particle-beam weapons, that can attack a target at or near the speed of light. Since nothing travels faster than light — not even a message saying, “Help us, we were just attacked!” — the use of powerful DEWs by, say, a Kardashev Type II civilization could eliminate the threat of a counterstrike.

This differs from the Cold War situation in which each side could detect nuclear missiles traveling through the air with enough time to consult the relevant decision-making bodies and determine whether or not to strike back. Civilizations couldn’t possibly see a deadly laser beam that destroys crucial infrastructure coming; the damage would occur before a warning message from allies could ever reach them.
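The warning-time argument can be stated precisely: the best-case warning a defender can receive is the weapon’s flight time minus the travel time of a light-speed alert over the same distance, and that gap shrinks to zero as the weapon approaches light speed. A simple sketch (the distances and speeds are illustrative):

```python
C_KMS = 299_792  # speed of light, km/s

def max_warning_seconds(distance_km: float, weapon_speed_kms: float) -> float:
    """Best case: weapon flight time minus the light-speed alert's travel time."""
    return distance_km / weapon_speed_kms - distance_km / C_KMS

# An ICBM-like weapon (~7 km/s) launched from 10,000 km away:
print(max_warning_seconds(10_000, 7) / 60)   # ~24 minutes of warning
# A directed-energy weapon moving at (effectively) light speed:
print(max_warning_seconds(10_000, C_KMS))    # 0.0 -- no warning at all
```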

There are also biological and nanotech agents that civilizations could launch across the galaxy at each other, martial von Neumann probes aided by metamaterial invisibility cloaks, “heliobeams” that concentrate large amounts of solar radiation on targets, and maybe even “gravity weapons” that use gravitational waves to create black holes (a speculative idea that appears to fall within the realm of physical possibility). What’s more, the universe is teeming with asteroids and comets that could be catapulted toward planets or spaceships, with more destructive consequences than a swarm of hydrogen bombs. Some have called these “planetoid bombs,” since asteroids and comets are “planetoids.”

We also shouldn’t overlook the possibility that future civilizations will devise entirely novel “weapons of total destruction” (WTDs). Just as our Paleolithic ancestors would be dumbstruck by the extraordinary mechanisms of mass death available to modern humans, so too might we be horrified by the weapons that our spacefaring children invent — say, WTDs that move at close to light speed and wreak havoc on galactic or even cosmic scales.

The cherry on top is that even a perfectly peaceable civilization might have strong incentives to obliterate its neighbors. For example, imagine two civilizations with radically different political, cultural, and religious traditions. They can’t even communicate very well, because they speak entirely different languages and have evolved, through natural selection and cyborgization, divergent emotional repertoires and mental categories. They have different internal models of the world, distinct perceptual and phenomenological experiences, and incompatible “normative” worldviews.

Consequently, neither is able to trust the other, and it becomes rational for each to annihilate the other merely to ensure that the other doesn’t annihilate it first. Worse, if civilization X believes that civilization Y is rational, then X will expect Y to reach the very same conclusion: Y should annihilate X before X annihilates Y, since striking first is the rational move. (Whew!) This gives X an even stronger reason to annihilate Y, and therefore Y an even stronger reason to annihilate X, yielding a “spiral” of escalating tensions that ultimately culminates in war, despite both X and Y wishing for peace. Scholars know this as the “Hobbesian trap.”
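The logic of the trap can be sketched as a simple assurance game. With the illustrative payoffs below (peace best, being annihilated worst), waiting is rational only while each side’s estimate that the other will strike stays low; once that estimate crosses a threshold, striking first maximizes expected value, and anticipating the other side’s identical reasoning pushes the estimate upward:

```python
# Hypothetical payoffs: peace = 3, strike first = 2, mutual strike = 1, annihilated = 0.
PAYOFF = {("wait", "wait"): 3, ("strike", "wait"): 2,
          ("wait", "strike"): 0, ("strike", "strike"): 1}

def expected_value(my_action: str, p_opponent_strikes: float) -> float:
    """Expected payoff of my action, given my estimate that the opponent strikes."""
    p = p_opponent_strikes
    return (1 - p) * PAYOFF[(my_action, "wait")] + p * PAYOFF[(my_action, "strike")]

for p in [0.0, 0.25, 0.5, 0.75]:
    print(p, expected_value("wait", p), expected_value("strike", p))
# Above p = 0.5, striking first beats waiting -- and since each side knows the
# other reasons the same way, its own estimate of p ratchets upward: the spiral.
```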

But civilizations may have an equally strong incentive to destroy their neighbors even if they believe that those neighbors are irrational (rather than rational). For example, consider a civilization A full of irresponsible particle physicists. Civilization A has no bad intentions, yet it conducts physics experiments that could inadvertently end the universe. Another civilization B might try to reason with A to stop these experiments, but let’s imagine that A ultimately refuses. B may thus opt to launch a preemptive attack against A to avert a cosmic disaster.

Generalizing this case: since any given civilization will have some probability of accidentally destroying the universe, it would be in every civilization’s self-interest to destroy everyone else merely to obviate accidental cosmic calamities. This may be especially true if adaptive radiation produces numerous species unable to fully grasp each other’s intentions, cognitive abilities, or moral values. The possibilities for miscommunication here are immense — and this should worry rather than reassure us.
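The aggregate risk here compounds in a simple way: if each of N civilizations independently has even a minuscule per-epoch probability p of triggering such an accident, the probability that at least one does is 1 − (1 − p)^N, which climbs rapidly with N. A sketch with purely illustrative numbers:

```python
def p_any_accident(p: float, n: int) -> float:
    """Probability that at least one of n independent civilizations has an accident."""
    return 1 - (1 - p) ** n

for n in [1, 1_000, 1_000_000, 1_000_000_000]:
    print(f"N = {n}: {p_any_accident(1e-9, n):.6f}")
# Even with p = one-in-a-billion per civilization, a billion civilizations
# push the aggregate probability of catastrophe toward ~63%.
```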

* * *

To be sure, this is a highly incomplete picture of what a future with space colonization would probably look like. But it contains enough plausible assumptions to suggest that the widespread faith in space expansionism may be seriously misguided. Indeed, it appears far more likely that spreading into space will yield constant catastrophic wars between different future branches of the human evolutionary tree. The sheer number of future people could also significantly raise the probability that someone somewhere presses a “doomsday button,” a point that John Sotos has recently examined in a planetary (rather than cosmic) context, with scary results.

Before we establish colonies on Mars, it behooves us — especially NASA and SpaceX — to think harder about how transcending our planet could have devastating consequences. The dinosaurs didn’t die out because they lacked a space colonization program; they died out because they lacked a planetary defense system. Perhaps we should be focusing on the latter instead of the former, at least for the foreseeable future (we have another billion years or so of habitable conditions here on Earth!).

[Note: this paper draws from an academic research article of mine titled, “Space Colonization and Suffering Risks: Reassessing the Maxipok Rule.”]
