On February 26, 2026, Dario Amodei—CEO of Anthropic, one of the world’s leading artificial intelligence companies—released a statement that read less like a corporate press release and more like a line drawn in the sand. His company, which had been first to deploy frontier AI models in classified government networks, was refusing to remove two safeguards from its technology: restrictions on mass domestic surveillance and fully autonomous weapons.
The response from the Trump administration was swift and unambiguous. President Trump’s statement accused Anthropic of being “radical left, woke” and “out-of-control,” warning of “major civil and criminal consequences.” Secretary of War Pete Hegseth announced the Department would designate Anthropic a “supply chain risk”—a label historically reserved for adversarial foreign entities, never before publicly applied to an American company.
Within 48 hours, a confrontation that began as a contract negotiation had become something far more significant: a constitutional standoff over who decides the ethical limits of powerful technology.
For those watching closely, the contours of this conflict felt hauntingly familiar. We have seen this story before—not with silicon, but with uranium.
The Pattern We’ve Forgotten
In late 1938, physicist Lise Meitner and her nephew Otto Frisch first explained the physics of nuclear fission. Within months, the global scientific community realized that the tiny mass of the atom contained power sufficient to light cities—or level them. The “dual-use” dilemma was born: the same technology that promised boundless electricity could also be used to build weapons of indiscriminate destruction.
What followed was not a unified scientific consensus, but a fracture. Brilliant minds who had collaborated freely across borders suddenly found themselves forced to choose—not merely between competing theories, but between competing visions of human flourishing and catastrophe.
The Trump-Anthropic confrontation follows this same fault line. At its core lies a question that neither politics nor technology can fully answer: What obligations do the creators of powerful tools bear for how those tools are used?
What Was Actually Said
Before analyzing the parallels, we must be precise about what occurred. The primary sources reveal more nuance than either side’s critics have acknowledged.
Anthropic’s Position
Dario Amodei’s February 26 statement made clear that Anthropic had actively pursued government and military contracts. Far from opposing national security applications, the company had been:
- First to deploy frontier AI models in classified government networks
- First to deploy at National Laboratories
- Provider of custom models for intelligence analysis, cyber operations, operational planning, and more
- Willing to forgo “several hundred million dollars in revenue” by cutting access to firms linked to the Chinese Communist Party
The company’s objections were narrow and specific. Two use cases were excluded from their contracts:
Mass domestic surveillance. Anthropic distinguished between “lawful foreign intelligence and counterintelligence missions” (which they supported) and mass surveillance of American citizens (which they did not). Their concern was not espionage abroad but the prospect of AI systems assembling scattered data—movements, browsing, associations—into comprehensive profiles of American citizens “automatically and at massive scale.”
Fully autonomous weapons. Again, the objection was specific. Anthropic supported “partially autonomous weapons, like those used today in Ukraine.” Their concern was fully autonomous systems—those that “take humans out of the loop entirely and automate selecting and engaging targets.” The stated reason was reliability: “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.”
Notably, Anthropic claimed these exceptions had “not affected a single government mission to date.”
The Administration’s Response
President Trump’s statement framed the dispute in starkly different terms. Anthropic was characterized as attempting to “DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS” and “STRONG-ARM the Department of War.” The solution was total severance: “IMMEDIATELY CEASE all use of Anthropic’s technology” across all federal agencies.
Secretary Hegseth’s threatened designation as a “supply chain risk” represented an escalation. Under 10 USC 3252, such designations can restrict how Department of War contractors use a company’s products—though Anthropic contended the Secretary lacked statutory authority to extend this to all contractor relationships.
The administration’s demand was clear: AI companies serving the military must accede to “any lawful use” without exception. In this view, safeguards constitute unacceptable interference with civilian control of the military.
The Unstated Premises
Both positions rest on assumptions worth examining.
The administration’s stance assumes that elected officials and appointed commanders possess both the authority and the competence to determine appropriate uses of technology—that technical creators have no independent standing to constrain their tools once sold. This is a coherent position with deep roots in democratic theory: the military is accountable to the President, who is accountable to voters. Private companies are not.
Anthropic’s stance assumes that technical creators bear ongoing moral responsibility for their creations, even after sale—and that some applications are sufficiently dangerous that no contract or command can make them acceptable. This too has precedent: we do not expect pharmaceutical companies to supply drugs for torture, regardless of governmental demand.
Neither position is obviously correct. Both echo debates that split the scientific community eighty years ago.
The Manhattan Project Fracture
When the United States initiated the Manhattan Project, physicists faced an unprecedented moral situation. Their abstract equations had become geopolitical weapons. The ivory tower had been conscripted.
What emerged was not consensus but a spectrum of responses—a fracture that illuminates our present moment.
The Absolute Refusal: Lise Meitner
Lise Meitner, who had first theorized nuclear fission, refused an invitation to join the Manhattan Project with words that still resonate: “I will have nothing to do with a bomb!”
Meitner’s refusal was deontological—grounded not in consequences but in principle. She believed that the pursuit of scientific knowledge must remain decoupled from the creation of instruments of mass death, regardless of strategic justification. When American press later dubbed her “mother of the atomic bomb,” she found the association “deeply hurtful and contrary to her innermost convictions.”
Her nephew’s epitaph captured her legacy: “Lise Meitner: a physicist who never lost her humanity.”
The Conditional Participant: Joseph Rotblat
Joseph Rotblat joined the British atomic program and subsequently the Manhattan Project under a strictly utilitarian justification: the existential threat of Nazi Germany developing nuclear weapons first. A Polish physicist who had lost his wife to the Nazi invasion, Rotblat believed an Allied bomb was necessary to deter fascist use.
But when Allied intelligence confirmed in late 1944 that Germany had effectively abandoned its atomic program, Rotblat’s calculus shifted. With the original threat neutralized, he became the only senior scientist to voluntarily leave the Manhattan Project on grounds of conscience.
Rotblat recognized what many of his colleagues did not: the weapon was no longer defensive. It would become “the catalyst for a catastrophic post-war arms race, particularly aimed at the Soviet Union.” His departure underscored a consequentialist ethics of responsibility—prioritizing civilization’s long-term survival over technological momentum.
The Strategic Dissenters: The Franck Report
In June 1945, a committee of scientists at the University of Chicago’s Metallurgical Laboratory—headed by Nobel laureate James Franck and including Leo Szilard and Glenn Seaborg—issued what became known as the Franck Report.
Their argument was sophisticated. They did not oppose the bomb’s existence, but its unannounced military use against civilian populations. Their concerns were both moral and strategic:
- The underlying physics could not remain secret; other nations would develop atomic weapons regardless
- An unannounced attack would “destroy any international trust required to establish post-war arms control”
- Dropping the bomb on civilians would “cost the United States its moral standing”
- America’s concentrated metropolitan population made it uniquely vulnerable in future nuclear exchanges
Instead, they proposed a technical demonstration in an “appropriately selected uninhabited area”—displaying the weapon’s power while preserving the moral authority to negotiate international controls.
Their report never reached President Truman before Hiroshima.
The Loyal Servants: Compton and Oppenheimer
Arthur H. Compton and J. Robert Oppenheimer represented scientists who concluded that immediate military use was both strategically necessary and morally permissible. Compton operated from “patriotic ethics,” believing American hegemony served humanity’s overall welfare. Oppenheimer, mediating between anxious scientists and political leadership, concluded there was “no acceptable alternative to direct military use.”
Both men prioritized ending the war quickly over broader concerns about arms races and international trust.
Yet the aftermath haunted Oppenheimer. The realization of what their theoretical work had produced led him to confess publicly in 1947 that physicists had felt “a peculiarly intimate responsibility” for the weapon—that, in some crude sense, “the physicists have known sin.”
The Parallel Made Plain
The structural parallels between 1945 and 2026 are striking:
| Manhattan Project | Trump-Anthropic Conflict |
|---|---|
| Meitner’s absolute refusal | (No direct parallel—Anthropic actively pursued government work) |
| Rotblat’s conditional participation and resignation | Anthropic’s selective cooperation with two exceptions |
| Franck Report’s strategic dissent | Anthropic’s claim that exceptions haven’t affected missions |
| Compton’s patriotic ethics | Administration’s demand for unconditional compliance |
| Government classification of dissenters | “Supply chain risk” designation threat |
Anthropic most closely resembles the Franck-Szilard position: not opposing military use categorically, but arguing that certain applications—mass domestic surveillance, unreliable autonomous weapons—undermine rather than serve democratic values. Like the Franck Report, they frame their objections as serving American interests, not opposing them.
The administration’s response echoes the Manhattan Project’s handling of scientific dissent. The Franck Report was delayed through military channels and never reached Truman. Scientists who raised concerns found their security clearances threatened. The parallel to labeling Anthropic a “supply chain risk”—historically reserved for foreign adversaries—is difficult to miss.
What the History Teaches
The atomic scientists’ experience offers several lessons for our present moment.
Technical expertise does not confer political authority—but neither does political authority confer technical competence. The scientists were right that the physics could not remain secret; the Soviet Union tested its first atomic bomb in 1949, far sooner than most officials predicted. They were also right that an arms race would follow. The questions they raised deserved serious engagement, not bureaucratic suppression.
“Dual-use” dilemmas do not resolve themselves. The same fission process that incinerated Hiroshima now powers 417 nuclear reactors generating 2667 terawatt-hours of electricity annually. The International Atomic Energy Agency exists precisely because nuclear technology’s peaceful and destructive applications cannot be fully separated. AI presents the same challenge: the capabilities that enable intelligence analysis also enable mass surveillance; the systems that augment human soldiers can, in principle, replace them.
Conscience is not the same as obstruction. The atomic scientists who raised concerns were not “radical left” agitators—they were patriots who had dedicated years to the project and sacrificed enormously for Allied victory. Their concerns arose from intimate technical knowledge and moral seriousness. Dismissing them as enemy collaborators (as some officials did) foreclosed legitimate debate.
The creators of powerful technology bear a burden that cannot be fully delegated. As Joseph Rotblat observed in his 1995 Nobel lecture, scientists “can no longer hide behind the illusion that their work is morally neutral.” The same applies to AI researchers. When your creation can reshape warfare, governance, and human agency itself, claiming that “we just build the tools” is not an adequate moral posture.
Where This Leaves Us
Part 1 of this series has attempted to do something specific: to lay out what was actually said, to identify the unstated assumptions on both sides, and to place this confrontation in historical context.
The parallels to the Manhattan Project are not accidental. Both moments involve revolutionary technology with profound dual-use potential. Both involve technical creators attempting to maintain ethical limits against governmental pressure for unconditional access. Both involve accusations of disloyalty directed at those who raise moral concerns.
But description is not prescription. Historical parallels illuminate; they do not decide.
The deeper questions remain: By what authority do creators of powerful technology constrain its use? What obligations do they bear? And when those obligations conflict with governmental demands, whose conscience prevails?
These are not merely policy questions. They are theological questions—questions about the nature of authority, the limits of obedience, and the relationship between power and responsibility.
In Part 2, we will turn to the Christian tradition’s resources for addressing these questions.
The atomic scientists faced these questions with the resources they had—humanistic ethics, professional solidarity, and their own consciences. We have additional resources available. Whether we use them well may determine whether silicon fission ends better than its atomic predecessor.