
Silicon Fission: The Theological Dimensions (Part 2)

By Practical Apologetics | February 28, 2026

In Part 1 of this series, we examined the confrontation between the Trump administration and Anthropic over AI military applications, placing it in historical context alongside the Manhattan Project scientists’ ethical fracture. We saw brilliant minds wrestling with the dual-use dilemma—the same technology that could light cities could level them—and how their responses ranged from absolute refusal to conditional participation to loyal service.

But description is not prescription. Historical parallels illuminate; they do not decide.

The atomic scientists faced their moment with the resources available to them: humanistic ethics, professional solidarity, scientific conscience. Christianity offers additional resources—theological frameworks developed over centuries that address the nature of authority, the limits of obedience, the moral status of technology, and the conscience of those whose work shapes the instruments of power.

These resources do not yield easy answers. Applied honestly, they illuminate the genuine tensions in this dispute rather than resolving them. Both the administration and Anthropic have made claims that deserve serious theological engagement.

Two Positions, Two Logics

Before applying theological frameworks, we must be precise about what each side is actually claiming.

Secretary Hegseth’s position is grounded in democratic accountability and military necessity. In his statement on X, he announced: “I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

The underlying logic: the military answers to elected officials, not to Silicon Valley. When private companies impose conditions on how the armed forces may operate, they arrogate to themselves decisions that belong to the democratic process. The Commander-in-Chief and his appointed secretaries—accountable to voters—determine military strategy. Contractors supply tools; they do not dictate policy.

This is not a trivial argument. If every defense contractor imposed its own ethical red lines, military effectiveness would be determined by the most restrictive vendor’s conscience rather than by strategic necessity. A company might refuse to support operations in certain regions, against certain adversaries, or using certain tactics. The cumulative effect could be a military constrained not by law or democratic deliberation but by the unelected preferences of technology executives.

Anthropic’s position rests on the moral responsibility of creators and the legitimacy of selective refusal. CEO Dario Amodei’s statement emphasized that the company has actively supported military applications—first to deploy in classified networks, first at National Laboratories, provider of models for intelligence analysis and operational planning. The refusal is specific: mass domestic surveillance of Americans and fully autonomous weapons that remove humans from targeting decisions.

The underlying logic: technical creators bear ongoing responsibility for how their tools are used. Some applications cross lines that no contract or command can make acceptable. The specific concerns—surveillance that assembles comprehensive profiles of citizens “automatically and at massive scale,” and weapons systems “not reliable enough” for autonomous lethal decisions—are narrow and have “not affected a single government mission to date.”

This too is not a trivial argument. We do not expect pharmaceutical companies to supply drugs for torture regardless of government demand. Engineers can refuse to build structures they believe unsafe. The principle that creators have some standing to constrain their creations has deep roots in professional ethics across many fields.

Both positions are internally coherent. Both appeal to legitimate values. The question is how Christian theology illuminates the tensions between them.

The Sword and Its Delegation

Christianity has long recognized that civil government possesses legitimate authority to use force. The Apostle Paul’s teaching in Romans 13 describes the magistrate as God’s servant, bearing the sword to execute judgment on wrongdoers and protect the innocent. Theologians have termed this the state’s “power of the sword”—a delegated authority for the maintenance of public justice.

This framework offers support for the administration’s position. The sword is delegated to the magistrate—the civil authority accountable within political structures—not to private corporations. When Hegseth insists that military decisions “belong to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint,” he is making a claim with theological resonance. The democratic process, however imperfect, provides mechanisms for accountability that corporate governance does not.

Romans 13 does not delegate the sword to Anthropic. It delegates the sword to Caesar.

But the framework also imposes limits. Theology draws a critical distinction between God’s decretive will—His eternal purpose by which He ordains whatsoever comes to pass—and His prescriptive will—His moral requirements for human behavior. A war may occur within God’s providence while simultaneously violating His moral commands.

The magistrate’s authority is real but bounded. The state can act within providence while acting against divine command. When a government mandates something that violates God’s prescriptive will—an unjust war, an immoral weapon, a surveillance apparatus that destroys the conditions of human flourishing—the Christian faces a genuine tension between submission to authority and obedience to God.

The theological question is not whether the state has authority over military decisions. It does. The question is whether that authority is unlimited, and what recourse exists when it exceeds just bounds.

Just War Criteria and Modern Technology

For centuries, Christian thinkers have developed criteria for evaluating warfare. The just war tradition provides two sets of standards: jus ad bellum (the right to go to war) and jus in bello (right conduct within war).

Two criteria bear directly on the current dispute:

Discrimination requires distinguishing between combatants and non-combatants. The intentional targeting of civilians is prohibited—not as a strategic preference but as a moral absolute.

Proportionality requires that the harm inflicted not exceed the good achieved. Even legitimate military objectives cannot justify unlimited destruction.

These criteria cut in multiple directions.

Anthropic’s concern about fully autonomous weapons invokes discrimination: can a machine reliably distinguish combatants from civilians? Their concern about current AI reliability—that systems “are simply not reliable enough”—is an empirical claim about whether the discrimination requirement can be met. If correct, deploying such systems would violate just war principles regardless of governmental authorization.

But the administration might respond: the military is better positioned than a software company to assess operational reliability. Generals and defense officials evaluate weapons systems daily. The judgment about when a technology meets military standards belongs to those with operational expertise and democratic accountability—not to vendors with commercial interests in appearing cautious.

Similarly, Anthropic’s concern about mass domestic surveillance invokes proportionality: does the security benefit justify the comprehensive monitoring of citizens’ lives? Here too, reasonable people may disagree. Intelligence officials argue that connecting disparate data points prevents terrorist attacks; civil libertarians argue that mass surveillance chills the freedoms it claims to protect.

The just war tradition provides criteria for evaluation. It does not automatically yield conclusions about specific technologies. Christians examining these questions must make judgments about empirical matters (how reliable are autonomous systems? how effective is mass surveillance?) that the theological framework alone cannot answer.

Sphere Sovereignty: A Double-Edged Sword

Abraham Kuyper’s doctrine of “sphere sovereignty” is often invoked to limit state power. Kuyper argued that God has delegated authority to various distinct spheres of human life—family, church, science, art, commerce—and that each sphere possesses legitimate sovereignty within its own domain. The state is not the master of these spheres but their protector and adjudicator.

Applied to mass surveillance, this framework raises concerns. If the state’s role is to protect the “organic life” of society rather than to dominate it, then comprehensive monitoring of citizens’ movements, communications, and associations may represent overreach. The state becomes what Kuyper warned against: an “octopus” that stifles the legitimate autonomy of other spheres.

But sphere sovereignty cuts the other direction as well.

Kuyper did not grant corporations authority over military affairs. The sphere of commerce has its own sovereignty—in matters of trade, employment, production. But national defense belongs to the state’s sphere, not to business. When a technology company dictates the conditions under which the military may operate, it may itself be violating sphere boundaries—a commercial entity reaching into governmental prerogatives.

Anthropic might respond that it is not dictating military operations but simply declining to participate in specific applications—exercising its commercial freedom. The administration might respond that when a company becomes embedded in critical military infrastructure, its “commercial freedom” to impose conditions becomes indistinguishable from constraining national security.

Sphere sovereignty provides a framework for thinking about institutional boundaries. It does not automatically resolve where those boundaries lie in the complex entanglement of government contracts, military technology, and corporate ethics.

Selective Conscientious Objection

Christianity has supported what is called “selective conscientious objection”—the right to refuse participation in a particular war or weapons program judged to be unjust, while accepting the legitimacy of military service in general.

The Christian Reformed Church formally established this position in 1939, affirming that believers may refuse participation in unrighteous conflicts while accepting the legal consequences of that refusal. This is not pacifism; it is selective refusal based on just war criteria applied to specific cases.

Anthropic’s position resembles selective conscientious objection. They have not rejected military AI categorically. They support “partially autonomous weapons, like those used today in Ukraine.” Their refusal is limited to two specific applications they judge to be either unreliable or unjust. And they have stated willingness to accept the consequences—losing the contract, enabling a smooth transition to competitors.

This framework provides theological grounding for Anthropic’s stance. Conscience, trained by God’s Word and exercised in community, has legitimate standing even against governmental pressure.

But the tradition of selective objection developed for individuals, not corporations. A soldier may refuse an order that violates conscience and accept court-martial. A weapons scientist may resign rather than build what he considers immoral. These are personal acts of conscience with personal consequences.

When a corporation exercises “conscience,” who exactly is objecting? The CEO? The board? The shareholders? Corporate conscience is an abstraction that may or may not track with the convictions of actual humans within the organization. And the consequences fall not just on decision-makers but on employees, customers, and shareholders who may hold different views.

Recent developments complicate this picture. On February 27, over 450 employees at Google and OpenAI signed an open letter titled “We Will Not Be Divided,” urging their companies to “stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.” Nearly 400 signatories came from Google, the rest from OpenAI. Roughly half attached their names publicly; all were verified as current employees. Notably, the letter’s organizers claimed no affiliation with any AI company, political party, or advocacy group.

This introduces genuine individual conscience into what might otherwise appear as mere corporate positioning. These are not executives protecting market share; they are engineers, researchers, and staff putting their names—and potentially their careers—behind a moral claim. The tradition of conscientious objection has always involved personal risk; these employees are accepting some measure of that risk.

The picture grows more complex still. OpenAI CEO Sam Altman—whose company competes directly with Anthropic and stands to gain from their loss of government contracts—told staff that he shares Anthropic’s red lines: “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions.” He added, “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety.” Altman said he wants OpenAI to “try to help de-escalate things” while seeking a deal that “fits with our principles.”

This is striking. A CEO publicly supporting a competitor facing government pressure, declining to exploit their vulnerability, and affirming shared moral commitments—this is not typical corporate behavior. It suggests either genuine conviction across the industry or coordinated resistance to governmental authority. Perhaps both.

From one angle, this validates the conscience claim. If competitors with billions of dollars at stake independently arrive at the same ethical limits, this suggests the limits reflect something more than commercial calculation. Conscience that costs money carries more weight than conscience that pays.

From another angle—likely Hegseth’s—this looks like industry collusion. When every major AI company draws the same red lines, the government faces not a single contractor’s objection but a unified front of unelected technologists constraining military options. The letter’s own framing—that the government is “trying to divide each company with fear that the other will give in”—suggests the signatories understand themselves as resisting together what they could not resist alone. Solidarity among workers is one thing; solidarity among trillion-dollar corporations is another.

But then the picture shifts again. On Friday evening, Altman announced on X that OpenAI has reached an agreement with the Department of War to deploy its AI models on classified networks—one that would include the very red lines Anthropic demanded. According to reports, the government is willing to let OpenAI build its own “safety stack,” retain control over which models are deployed and where, and include explicit contractual limits on autonomous weapons and domestic mass surveillance. If the model refuses a task, the government would not force OpenAI to make it comply.

This raises uncomfortable questions for everyone.

If the Pentagon can accept these limits from OpenAI, why couldn’t it accept them from Anthropic? Were the red lines ever really the issue? According to one OpenAI official, Anthropic’s relationship with the Department broke down partly because Dario Amodei “had offended Department of War leadership, including publishing blog posts that the department got upset about.” If true, this suggests the dispute may be as much about how Anthropic communicated—publicly, with moral language that implicitly criticized the government—as about what they demanded.

For Anthropic, this is vindicating and damning at once. Vindicating: the government can accept these limits, proving Anthropic’s position was not unreasonable. Damning: if OpenAI gets the same terms without getting blacklisted, perhaps Anthropic’s approach—the public statements, the moral framing, the refusal to quietly negotiate—was the problem, not the principles.

For the administration, the emerging OpenAI deal complicates the narrative. If the Pentagon is willing to write the same red lines into a contract with OpenAI, then the designation of Anthropic as a “supply chain risk” looks less like principled insistence on military authority and more like retaliation for public embarrassment.

For those employees who signed letters and staked their names on conscience, the situation is newly ambiguous. Did their solidarity matter? Or was the whole crisis a negotiating tactic that OpenAI navigated more adeptly? Is conscience vindicated when it wins through corporate maneuvering?

The theological question becomes sharper still: What counts as faithful witness? Anthropic spoke publicly, drew moral lines, and faced consequences. OpenAI spoke supportively, drew the same lines, and may get a contract. Is Anthropic prophetic and OpenAI compromised? Or is Anthropic reckless and OpenAI wise? Christianity honors both the prophet who speaks and suffers and the statesman who achieves good through compromise. Scripture does not always tell us which role a given moment requires.

Yet the administration might view the employees’ letter with equal concern. From Hegseth’s perspective, coordinated action across competing companies could suggest something other than independent conscience—perhaps shared ideological commitments among Silicon Valley elites, or an industry-wide attempt to constrain military options based on the political preferences of unelected technologists. The letter frames the dispute as tech industry versus federal government—which may be precisely the dynamic Hegseth warns against.

The underlying question is this: Is this individual conscience finding collective expression? Or is it collective ideology claiming the mantle of conscience? Christianity honors the former while remaining skeptical of the latter. Conscience is formed in community but exercised individually; it is accountable to God’s Word, not to professional consensus or class solidarity.

Christians examining this situation must ask: Are these employees genuinely applying moral reasoning to specific applications they find unjust? Or are they expressing broader political commitments under the language of ethics? The answer may vary person by person—which is precisely why individual conscience resists reduction to collective action, even when individuals act together.

Total Depravity and the Distribution of Power

The doctrine of total depravity—the conviction that the Fall corrupted every human faculty—generates deep skepticism about concentrated power.

This skepticism applies to governmental power. The administration’s demand for “any lawful use” without exception concentrates enormous authority in executive hands. The history of surveillance programs, from J. Edgar Hoover’s FBI to the post-9/11 expansion of intelligence collection, suggests that such power is frequently abused. Fallen humans wielding surveillance technology will surveil their enemies, their critics, their rivals—not just genuine threats.

But the skepticism applies equally to corporate power. Technology executives are not exempt from depravity. Silicon Valley has its own ideological commitments, its own blind spots, its own will to power. When Hegseth warns against allowing “radical left, woke” companies to dictate military policy, he is—beneath the political language—raising a legitimate concern about unelected power. Tech companies have shown themselves capable of censorship, manipulation, and ideological enforcement. Trusting their “conscience” requires trusting their judgment, their motives, their competence.

Total depravity counsels distributed power—checks and balances, competing authorities, multiple points of accountability. It does not automatically favor either governmental control or corporate autonomy. It suspects both.

The question becomes: in this specific case, which concentration of power poses greater risk? Reasonable Christians, applying the same doctrine, might reach different conclusions.

Common Grace and the Restraint of Sin

Reformed theology teaches that God restrains sin in the world through what is called “common grace”—His non-saving operations by which He curbs the destructive effects of the Fall, maintains order in human society, and enables even the unregenerate to perform civic good. This grace operates through specific means: the conscience implanted in all image-bearers, the institution of civil government, the natural knowledge of moral law, and the providential ordering of circumstances that check human wickedness.

Common grace explains why fallen humanity has not destroyed itself. The fact that nuclear weapons have not been used since 1945 reflects God’s restraint operating through deterrence structures, international norms, and the consciences of those with launch authority. That surveillance states have sometimes been reformed reflects His restraint operating through political accountability and public resistance. These are not accidents of history but evidences of providence.

Applied to this dispute: both governmental accountability and corporate safeguards function as restraints on sin. The state’s insistence on democratic control checks the potential tyranny of unelected technologists. The corporation’s insistence on ethical limits checks the potential tyranny of unconstrained state power. Each institution, in its proper function, serves as an instrument of common grace—not because either is righteous, but because God uses fallen institutions to restrain one another.

This does not tell us which side is correct. It tells us that the friction itself—the competing claims, the public scrutiny, the institutional resistance—serves God’s purpose of restraining human evil. Christians should therefore be slow to assume that either party holds the moral high ground simply by virtue of being government or being a conscience-claiming corporation. Both are fallen. Both are restrained. Both restrain.

What the Frameworks Illuminate

Applying these theological resources honestly, several things become clear:

Both positions have legitimate grounding. The administration’s insistence on democratic accountability for military decisions resonates with the delegation of the sword to the magistrate. Anthropic’s selective refusal resonates with the tradition of conscientious objection and the limits on state authority.

Both positions have vulnerabilities. The administration’s demand for “any lawful use” concentrates power in ways that the doctrine of total depravity counsels us to view with suspicion. Anthropic’s corporate “conscience” may be an abstraction that obscures rather than clarifies moral responsibility.

Empirical judgments matter. The theological frameworks provide criteria for evaluation, but applying them requires judgments about facts: How reliable are autonomous weapons systems? How prone to abuse is mass surveillance? How effectively can safeguards be maintained? Christians with identical theological commitments may reach different conclusions based on different assessments of these empirical questions.

The church has a role, but a limited one. The church can articulate principles—just war criteria, the limits of governmental authority, the legitimacy of conscience. It cannot make technical judgments about AI reliability or strategic assessments about national security. Prophetic witness is not the same as policy expertise.

Where the Tension Finds Its Resolution

The frameworks above provide tools for thinking. They do not, by themselves, resolve the dispute between the administration and Anthropic—nor could they. Contracts will be negotiated or terminated. Designations will be applied or challenged in court. The news cycle will move on.

But Christianity offers more than frameworks. It offers Christ.

The tension between authority and conscience, between submission and resistance, between the sword of the state and the limits of obedience—this tension finds its resolution at the cross. There, the Son of God submitted to the unjust authority of Pilate (“You would have no authority over me at all unless it had been given you from above”) while simultaneously exposing that authority’s corruption. He rendered unto Caesar what was Caesar’s—His very life—yet in doing so rendered unto God what no Caesar could claim: the redemption of the world.

Christ is both the model for submission to authority and the ground for its limits. He told Peter to put away his sword, yet He overturned the tables of those who defiled His Father’s house. He taught His disciples to turn the other cheek, yet He pronounced woe upon the powerful who devoured widows’ houses. The same Lord who said “Render to Caesar” also said, through His apostles, “We must obey God rather than men.”

This is not contradiction but fulfillment. In Christ, authority and conscience are not finally opposed—they are reconciled under His Lordship. The state bears the sword as God’s servant; conscience is bound to God’s Word. Both submit to the One who holds all authority in heaven and on earth.

For the Christian navigating disputes like this one, the question is not merely “Which institution do I trust?” but “How do I follow Christ in this moment?” That question cannot be answered by theological frameworks alone. It requires prayer, wisdom, the counsel of the church, and the ongoing work of the Spirit who guides believers into all truth.

The atomic scientists of the 1940s did not know what world their work would create. The AI researchers of our generation face similar uncertainty. The systems they build will be used in ways they cannot predict, for purposes they may not endorse, by actors they cannot control.

But Christians do not navigate this uncertainty alone. We serve a risen Lord who has already defeated the powers and principalities, who holds the future in His hands, and who will one day return to make all things new. On that day, the swords will become plowshares. The surveillance apparatus will be dismantled. The autonomous weapons will rust in fields where children play. The dual-use dilemma will be resolved—not by human wisdom, but by the appearing of the King.

Until then, we work and watch and pray. We engage these questions with theological seriousness, yes—but more importantly, with faith in the One who is himself the answer to every question of authority, conscience, and power. Christ is Lord. That confession does not resolve every policy dispute. But it tells us where resolution will finally be found.

And it tells us that the tension we feel—between competing claims, between legitimate authorities, between conscience and command—is not ultimate. It is penultimate. The last word belongs to Christ. In Him, all things hold together. In Him, the anxious striving of governments and corporations, technologists and generals, finds its proper end.

The dual-use dilemma will not be resolved by human ingenuity. Silicon fission, like its atomic predecessor, will continue to illuminate and to threaten. But the Christian faces this future not with despair but with hope—hope grounded not in better frameworks or wiser policies, but in the return of the One who will judge the living and the dead, and whose kingdom will have no end.


This is Part 2 of a two-part series. Part 1 examines the historical parallels between the Trump-Anthropic confrontation and the Manhattan Project scientists’ ethical fracture.
