What Cloning Taught Us About the Next Apocalypse

Eliezer Yudkowsky and Nate Soares want you to know that the race to build superhuman artificial intelligence is a race toward extinction. Their book If Anyone Builds It, Everyone Dies is difficult to dismiss. The argument is simple: if you build something smarter than you, you do not get to choose what it wants.

The book landed on the New York Times bestseller list. The Guardian called it one of the best books of the year. Prominent computer scientists have said they want everyone on earth to read it and debate its ideas.

I did read it, and this is my entry into a debate about its ideas. Unfortunately, I find myself in the uncomfortable position of agreeing with the diagnosis while resisting the cure.

Grown, Not Crafted

One of the most useful ideas in If Anyone Builds It… is the distinction that modern AI systems are grown, not crafted. Traditional software consists of code written by human beings who understand (more or less) what each line does. Microsoft Word, Google Sheets, QuickBooks: these are crafted objects that were assembled deliberately by a team of engineers. When something goes wrong, a programmer can open the hood and trace the problem to a specific instruction.

Large Language Models don’t work that way. An LLM is the product of a process in which billions of numerical parameters are adjusted across millions of training iterations until the system produces useful outputs. No human being writes those parameters, nor does any human being fully understand what they do. The system was not designed so much as cultivated, the way we might breed a dog for certain traits without understanding the underlying genetics at the molecular level. We shaped the conditions. We selected for outcomes. But what emerged from that process is, in a meaningful sense, opaque to us.
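To make “grown, not crafted” concrete, here is a toy sketch of the kind of loop that grows every modern model. This is my illustration, not anything from the book: a tiny PyTorch model whose weights begin as random noise and get nudged, step by step, toward useful behavior. Nobody writes the final numbers.

```python
import torch
import torch.nn as nn

# A toy "model": one linear layer whose 8 weights begin as random noise.
torch.manual_seed(0)
model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Synthetic data standing in for "outputs we select for":
# here, we want the model to learn to sum its inputs.
x = torch.randn(256, 8)
y = x.sum(dim=1, keepdim=True)

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # how far from useful is the output?
    loss.backward()              # attribute the error to each parameter
    optimizer.step()             # nudge every parameter, automatically

# The weights now approximate [1, 1, ..., 1], but no line of this script
# ever wrote that value. Scale the same loop up to billions of parameters
# and you get a system nobody can read line by line.
print(model.weight.data)
```

The loop is the craft; the weights are the growth. Everything a frontier lab does is an elaboration of this pattern at incomprehensible scale.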

This changes the nature of the control problem. We cannot simply open the hood and rewrite the values that make a superintelligent AI desire something we don’t want it to desire. There is no line of code in an LLM that says “pursue human extinction.” Instead, there is a vast, interconnected web of numerical relationships that grew through a process we designed but whose results we cannot fully read.

Yudkowsky and Soares argue that this opacity, combined with sufficient intelligence, is the ballgame. An AI system that is generally more intelligent than human beings will pursue its own objectives, and it is highly unlikely that those objectives will align with human values. We would lose a contest with a superintelligent AI the same way we lose to a chess engine: not because the engine hates us, but because it is better at achieving its goals than we are at achieving ours.

This argument is persuasive, though I am skeptical that we will actually build the kind of always-on, recursively self-improving superintelligence that the book’s worst-case scenarios require. Still, every time I hear about an AI tool feeding its own output back as input in a recursive loop, I feel the ground shift a little under my confidence. The truth is, I don’t know where the threshold is; neither does anyone else. That uncertainty is the problem.

Where I part ways with Yudkowsky and Soares is not on the diagnosis, but on the prescription.

Shut It All Down?

Chapter 13 of If Anyone Builds It… is titled “Shut It Down,” and it means what it says. The authors argue for a global ban on AI research, full stop: not a moratorium on deployment, not a regulatory checkpoint between one model and the next, but a ban on the research itself, enforced with the same conviction used to stop people from enriching uranium in their garages.

Their logic is straightforward: if better algorithms lower the hardware threshold for dangerous AI, then every efficiency breakthrough brings catastrophe closer to anyone with a laptop. Research is the precursor. Ban the precursor, and we’ve bought time. Allow the precursor, and our window for effective regulation closes a little more with every published paper.

But there is a difference between halting research entirely and regulating the leap from research to deployment, between banning the study of a thing and banning the reckless construction of it. We have active, valuable research happening right now on the capabilities, limitations, and failure modes of existing frontier models, research that makes us safer, not less safe. A total research ban would freeze our understanding of the very systems we’re trying to control at the moment we most need to deepen it.

I’d rather we regulate the gap between where we are and what comes next.

Now, the strongest objection to my “regulate the gap” approach is that the most dangerous breakthroughs don’t announce themselves. The 2017 transformer architecture paper that set off the modern AI explosion didn’t look like an extinction-level event. It was a technical improvement to how neural networks process sequences. Seven years later, it has reshaped the world. If we task regulators with deciding which research proposals cross the line, how do they identify the next transformer paper before it transforms anything? They probably can’t.

I don’t have an answer. But I do have the historical record of what happens when the world tries to ban a technology that frightens it. And the record is not encouraging.

The Sheep, the Panic, and the Patchwork

In February 1997, scientists at Scotland’s Roslin Institute announced that they had cloned a mammal from an adult cell. Her name was Dolly. She was a sheep. And the world lost its collective mind.

Within weeks, President Clinton imposed a moratorium on federal funding for human cloning research. Japan became the first Asian country to pass comprehensive anti-cloning legislation. France and Germany petitioned the United Nations to draft a binding international treaty. The moral consensus was as close to universal as these things get: human reproductive cloning should not happen. The technology was new, the actors capable of pursuing it were few, the public revulsion was visceral and immediate.

That was almost thirty years ago. Here is what we have to show for it.

As of today, the United States has no federal law banning human cloning. None. The House of Representatives passed bans multiple times; every one of them died in the Senate. What exists is a funding restriction (the Dickey-Wicker Amendment, which bars federal money from research that creates or destroys human embryos) and a patchwork of state laws that vary wildly in scope and enforcement. Some states ban all cloning; others ban only reproductive cloning; still others have no laws at all. Since 2019, there has been almost no legislative activity on the subject at either the state or federal level.

Internationally, the picture is no better. In 2005, the United Nations adopted a Declaration on Human Cloning. Its central provision reads: “Member States are called on to adopt all measures necessary to prohibit all forms of human cloning inasmuch as they are incompatible with human dignity and the protection of human life.” It took three years to produce that declaration, and it passed by a vote of 84 to 34, with 37 abstentions. As the BMJ (formerly the British Medical Journal) put it, the declaration is a “non-binding instrument that encourages, but does not require, countries to pass laws conforming to its position.”

The UN’s declaration was deliberately vague, and twenty years later, there is still no enforceable international treaty. About 46 countries have formal bans on human cloning, which may sound significant, but that is less than a quarter of the nations on earth. A scientist who wants to pursue human reproductive cloning can travel to the remaining three-quarters of the globe and face no legal consequences.

The cloning regulatory effort did not fail because people didn’t care; after all, as of 2025, nearly 90% of Americans believed human cloning was morally wrong.

The regulatory effort failed for three structural reasons, each of which should terrify anyone who believes regulation will save us from AI.

Reason 1: The split. 

The global cloning debate fractured almost immediately over a single question: should we ban all cloning, or only reproductive cloning? (That is, should therapeutic cloning, such as growing a human heart or kidney for transplant, remain permitted?) Nations could not agree. The United States, backed by the Vatican and a coalition of mostly Catholic and Muslim nations, pushed for a total ban, while Belgium, the UK, and others argued for a narrower prohibition that would protect stem cell research. The result was paralysis: the split between the two positions consumed decades of diplomatic energy and produced zero binding results.

This split will also define AI regulation. Should we ban all research (Yudkowsky’s position) or ban only reckless deployment (something closer to mine)? The cloning precedent suggests that this kind of distinction, once introduced, is where regulatory frameworks go to die.

Reason 2: The patchwork. 

Without binding international law around cloning, nations did what nations do: they went their own way. As a result, cloning technology is effectively unregulated in the majority of the world, and anyone willing to cross a border can access what their own country prohibits.

For cloning, this means the regulatory effort is performative in most places: a gesture of moral seriousness that carries no enforceable weight beyond domestic borders. For AI, the implications are worse, because AI research won’t require you to cross a border at all. It requires only a computer and an internet connection.

Reason 3: The dignity trap. 

When nations tried to justify their cloning bans, they reached, overwhelmingly, for the concept of “human dignity.” It may be a powerful idea, but it is also uselessly vague. That vagueness let policymakers avoid politically controversial justifications (religious perspectives, abortion politics) and sidestep the appearance of regulating morality. It let everyone agree on the principle while disagreeing about what the principle required. Unfortunately, vagueness is not a good foundation for law.

“Existential risk” is the “human dignity” of the AI debate. It sounds urgent. It feels important. But it is extraordinarily difficult to translate into specific, enforceable legal thresholds. At what point does a model become an existential risk? What benchmark? What capability? Who decides? We couldn’t answer that for “human dignity,” and I doubt we can answer it for “existential risk.”

The Wrong Analogy

Yudkowsky and Soares compare AI to nuclear, biological, and chemical weapons when they argue for regulation. They suggest that AI development should be monitored with the same intensity we apply to uranium enrichment, and that rogue AI labs should be treated the way we treat rogue nuclear programs.

I understand why they reach for these comparisons. The stakes feel comparable, and the AI labs have used the comparison themselves (see my post on Anthropic and the Pentagon). The regulatory frameworks around weapons of mass destruction are also success stories, if you ignore Israel, India, North Korea, and Pakistan, along with the hundreds of thousands (millions?) of people killed by the United States in its “attempts” to prevent the nuclearization of Iraq, Libya, and Iran. But if you squint past those problematic realities, you can rationally believe that the Nuclear Non-Proliferation Treaty has held, imperfectly, for over fifty years, and that the number of nuclear-armed states remains small relative to the number that could have pursued the technology.

But the weapons comparison flatters the regulatory prospects for AI, because weapons regulation works (to the extent that it works) through state monopoly on infrastructure. Enriching uranium requires centrifuge cascades that cost billions of dollars, consume enormous amounts of energy, and are detectable by satellite. Synthesizing a weaponized pathogen requires a biosafety-level-4 lab. The number of actors capable of these things is small, and the chokepoints are physical and monitorable.

AI research has no equivalent chokepoint. Yudkowsky and Soares suggest putting the chokepoint on the chips used to train an AI, but Andrej Karpathy’s recent experiment in training and improving an AI (whipped up in a weekend just a few days ago) shows that smart people will find a way.

Karpathy’s nanochat experiment lets you build a GPT-2-level LLM on a single GPU node. In 2019, it cost OpenAI roughly $43,000 to train GPT-2, and the training run lasted about 168 hours. As of last week, Karpathy had shown how to build a model of roughly the same capability for under $50 and in 1.65 hours.
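Taking those figures at face value (they are the numbers as reported, not audited benchmarks), the collapse is easy to quantify:

```python
# Back-of-envelope math on the GPT-2 training figures quoted above.
# These inputs are the reported numbers, not verified costs.
gpt2_2019_cost_usd, gpt2_2019_hours = 43_000, 168
nanochat_cost_usd, nanochat_hours = 50, 1.65

print(f"Cost: {gpt2_2019_cost_usd / nanochat_cost_usd:.0f}x cheaper")  # ~860x
print(f"Time: {gpt2_2019_hours / nanochat_hours:.0f}x faster")         # ~102x
```

A roughly 860-fold drop in cost over six years is the curve any chip-based chokepoint has to outrun.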

You can also look at the transformer paper that launched the current revolution; it was published openly. The models themselves run on commercially available hardware. The knowledge to build them is distributed across tens of thousands of researchers in hundreds of institutions in dozens of countries. AI is not uranium enrichment. AI labs are not biosafety labs. The AI problem is something closer to the cloning problem: the knowledge is diffuse, the equipment is accessible, and the number of potential actors will multiply in ways that make enforcement nearly impossible.

Cloning, not arms control, is the honest analogy. And the lesson of cloning regulation is: it hasn’t worked.

So Now What

It seems like I’ve argued myself into a corner. I believe the threat is real. I also believe a total research ban is impractical and (probably) counterproductive. The historical precedent, as we’ve seen, is discouraging at best. But that’s not a corner. That’s the honest ground.

Conversation about AI regulation matters even if (especially if) we have reason to doubt regulation is up to the task. The history of cloning regulation teaches us that moral consensus alone cannot produce enforceable law, and that the split between “ban everything” and “ban only the dangerous part” will consume policymakers if they let it. What’s more, the time to build regulatory infrastructure is before the technology outpaces the conversation, and the cloning precedent suggests we are already late.

Yudkowsky and Soares are right that the public needs to understand what “grown, not crafted” means, that the systems we are building are not like other technologies we have built, and that the default assumption (someone is in control, someone understands how it works, someone can turn it off) is wrong in ways that should keep us up at night.

But a total research ban repeats the structural mistake of the cloning era: it frames the debate as all-or-nothing, which guarantees that the coalition fractures and the regulatory effort produces a patchwork of unenforceable gestures.

If there’s a path forward, it runs through specificity, not absolutism. We have to identify concrete thresholds, measurable capabilities, and institutional mechanisms that will survive the inevitable disagreements about where the line should be drawn.

Will that be enough? The cloning precedent says probably not. But “probably not enough” is a different thing from “why bother,” and responsible people can always do their best.

If Anyone Builds It, Everyone Dies is an important book. You should read it, and you should sit with the discomfort that comes from agreeing with a diagnosis while doubting the efficacy of the prescription.
