The Right Decision for the Wrong Reasons

Ben Thompson is not wrong. That’s the problem.

In his recent Stratechery piece on Anthropic’s refusal to grant the U.S. military unrestricted access to its AI models, Thompson makes two arguments that are, in principle, sound.

  • Dario Amodei (the CEO of Anthropic) was not elected, Anthropic is not a branch of government, and no private company should get to dictate the terms of our national defense.
  • If AI capabilities are truly analogous to nuclear weapons (as Amodei himself has suggested), then Anthropic is building a power base that potentially rivals the U.S. military, and no democratic society should tolerate that.

These are clean arguments. They appeal to structures we were taught to trust: civilian control of the military, democratic accountability, the constitutional chain of command from voter to Congress to Commander-in-Chief. The argument is straight out of a civics textbook, and the civics textbook is not wrong.

But we don’t live in a civics textbook.

The difference between the U.S. government as described by textbooks and the actual, existing United States is that the White House, the Supreme Court, and a large part of Congress are currently controlled (through undemocratic means) by a coalition of millionaires and billionaires that includes self-declared Christian nationalists, white-power ideologues, and End Times evangelicals who believe (and I mean believe, in the way that belief functions as an engine of action) that war in West Asia immanentizes the eschaton. Robert Anton Wilson used that phrase as satire. These people use it as strategy.

Thompson’s argument assumes that we live under a functioning democratic order, where Congress and the military, which is ultimately answerable to the President, can be trusted to make responsible decisions about civilization-altering technology. That load-bearing assumption has already failed.

The government that is demanding unlimited AI capabilities from Anthropic has systematically dismantled the guardrails that Thompson’s argument depends on.

First, consider the Epstein scandal. Some of the most powerful men in the country have been implicated in the trafficking and abuse of children, and the response of our institutions has been to look the other way (sure, we may see some of the files, but has anyone powerful in the U.S. been arrested yet?).

Second, consider the corruption that is now so flagrant in the Trump Administration that it no longer even pretends to hide itself.

And then consider that the people demanding unguarded access to the most powerful AI systems on Earth include those who have publicly stated that certain categories of human beings are “illegal,” that climate change is a scam, that education is indoctrination, and that the separation of church and state is a liberal fabrication.

These are not abstract principles I’m putting up against Thompson’s abstract principles. These are facts on the ground. And Thompson’s argument can’t handle them.

Thompson writes as though the philosophical question is a clean one: Should a private company of roughly 3,000 people or a democratically elected government representing 360 million people decide whether and how AI is used for national defense? Framed that way, the answer is obvious: the government; the people’s representatives; the rule of democratically enacted laws.

But framed against the real United States government, the question is different: Should Anthropic hand over potentially civilization-ending capabilities to a group of men who have demonstrated, repeatedly and publicly, that they are not accountable to the people they claim to represent, who are driven by ideological commitments having nothing to do with national security, and who are willing to abuse every form of power they have ever been given?

Because that’s the actual, real question. And the answer, for me, and for Anthropic, and I hope for anyone with a modicum of healthy anxiety, is no.

To be clear: Anthropic’s position is not philosophically clean. Thompson’s point about the democratic deficit is real. In a functioning democracy, a private company should not make decisions about national security, and just because the company happens to have made (in my judgment) the right call this time does not make the structure acceptable. If we grant a private company veto power over military applications of AI because we agree with the company’s reasoning, we’ve also granted it on the days we disagree.

That is a real tension, and I won’t pretend it doesn’t exist.

But the tension is not just in the philosophical problem; it’s in our reality, and unfortunately, there is no clean option.

We either allow a private company to stand up to the end-times oligarchs who have captured our government, or we allow those same oligarchs to conduct unlimited, AI-assisted surveillance on the American people while also allowing them to make the boneheaded mistake of permitting AI-powered weapons to have the final say on which humans live or die.

Are we really going to try to decide that question on democratic principles, or are we brave enough to admit that, in 2026, the United States is not actually a democracy?

Thompson, following Amodei’s lead, raises the nuclear analogy, and we should take a moment to consider it.

If the power of AI-assisted weaponry is indeed analogous to nuclear weapons, then yes, of course, allowing a private company to build its own arsenal would be intolerable, and a democratic government would be correct in deciding to either nationalize or destroy it. That’s logical; that makes sense. And if we were living in 1955, with General Eisenhower in the White House and the institutional guardrails of the postwar consensus still intact, I might even find that logic compelling (excluding Eisenhower’s support for the secret world war run by the Dulles brothers).

But we don’t live in 1955. In 2026, the United States has abandoned its European allies, and the people who control the nuclear arsenal consider the Rapture a credible policy objective. Thompson seems to think the question is whether an ideal democratic government should control these capabilities, but it’s actually whether this government, these people, at this moment in history, can be trusted with them.

Is Anthropic’s decision the right one in principle? Probably not.
Is it the right one in reality? Absolutely.

Until the people of the United States can demonstrate that our government is once again accountable to our people and not to the fever dreams of white men who believe they’re ushering in the Kingdom of God, the keys to the AI football should stay in the hands of an organization that appears, at least from the outside, to understand the horrors of its power.
