The machines have learned to speak, and we are not sure what to do about it.
Artificial intelligence has moved from the realm of science fiction into our daily lives with a speed that has left ethicists, policymakers, and ordinary citizens scrambling for moral vocabulary. The questions pile up faster than we can answer them: What happens when AI generates convincing falsehoods? Who is responsible when an algorithm discriminates? How do we protect human dignity when machines can replicate human work? These are urgent questions, and most of our frameworks for addressing them are barely a decade old.
But perhaps they need not be so new. What if the moral vocabulary we need has been forged already—not in Silicon Valley think tanks, but in seventeenth-century New England meetinghouses?
The Puritans—those rigorous Reformed theologians who shaped early American society—never encountered a chatbot or worried about autonomous weapons. Yet their moral universe was constructed around precisely the categories we need now: truth-telling and false witness, the dignity and purpose of human labor, the dangers of unchecked power, and the limits of human knowledge when making high-stakes judgments. Their theology was forged in an era of rapid change, social upheaval, and profound uncertainty about new technologies (the printing press, global exploration, emerging scientific methods). They thought carefully about what it meant to be human before God in a world where human capacities were expanding in unprecedented ways.
What follows is not an exercise in historical curiosity. It is an attempt to retrieve wisdom—to ask how the theological and moral framework of Reformed Christianity, as articulated by thinkers like Richard Baxter, Thomas Watson, Increase Mather, and the framers of the Massachusetts Body of Liberties, might illuminate the ethical challenges posed by artificial intelligence.
The Stakes: Ten Concerns That Define Our Moment
Before we can apply Puritan wisdom, we must understand what we are addressing. Secular ethicists have identified ten major concerns about AI that demand serious moral engagement:
- Hallucinations and Misinformation: AI systems generate false or fabricated information that appears credible, undermining trust in media, education, and governance.
- Bias and Discrimination: Models inherit and amplify social, racial, and gender biases, reinforcing systemic inequality in hiring, policing, lending, and access to services.
- Misuse and Weaponization: AI deployed maliciously—for scams, cyberattacks, propaganda, or autonomous weapons—threatens public safety and geopolitical stability.
- Academic Cheating and Plagiarism: AI tools enable effortless deception, eroding academic integrity and blurring authorship.
- Loss of Human Agency and Autonomy: Overreliance on automated decisions erodes human choice, with AI quietly shaping opinions, rights, and behaviors.
- Labor Displacement and Economic Inequality: AI automates jobs faster than economies can adapt, widening inequality and eliminating entry-level pathways.
- Privacy and Surveillance: AI enables large-scale facial recognition, behavioral profiling, and digital monitoring, weakening civil liberties.
- Accountability and Transparency: When AI harms people, it is often unclear who is responsible, and users cannot appeal or audit opaque systems.
- Governance and Regulatory Gaps: Laws and institutions lag behind AI’s pace of innovation, leaving insufficient safeguards around safety and fairness.
- Long-Term and Existential Risk: Future AI systems could potentially surpass human control or act against human interests.
These are not merely technical problems. Each one touches fundamental questions about truth, justice, human dignity, and our responsibilities to one another. And it is precisely here that Puritan theology has something to say.
Where We Stand: A Framework for Moral Reasoning
Before engaging specific concerns, we should acknowledge our theological commitments. We approach these questions from a Christian framework that holds:
- Scripture as the final authority for faith and practice, providing the categories and principles by which we evaluate all human endeavors.
- God’s sovereignty over all reality, including human history, technological development, and the consequences of our choices.
- Human beings as image-bearers of God, possessing genuine dignity, moral agency, and accountability—while also being fallen, prone to self-deception, and in need of redemption.
- Human reason as real but limited, capable of genuine knowledge yet always subject to the noetic effects of sin.
The Puritans held these same commitments, applying them rigorously to their own context. Our task is to trace how their application might inform ours.
1. The Crisis of Truth: AI Hallucinations and the Ninth Commandment
When a large language model confidently invents legal precedents that do not exist, or when deepfake technology fabricates video evidence of events that never occurred, we face what the Puritans would immediately recognize as a violation of the Ninth Commandment: “You shall not bear false witness against your neighbor.”
For the Puritans, truth-telling was not merely an ethical preference but a reflection of God’s own nature. The church, they believed, was called to be “the pillar and support of the truth” (1 Timothy 3:15). When any system—human or technological—presents falsehood as truth, it commits spiritual violence against the fabric of society itself.
Thomas Watson, the seventeenth-century Puritan minister, offered an analogy that translates remarkably well to our context. He argued that “clipping a man’s credit, to make it weigh lighter, is worse than clipping coin.” In Watson’s England, coin-clipping—shaving metal from the edges of silver coins—was a serious crime because it devalued the currency and threatened economic stability. Watson saw bearing false witness as the moral equivalent: it devalues the currency of truth that makes social trust possible.
AI-generated misinformation is precisely this kind of “clipping.” When the information ecosystem is flooded with convincing fabrications, the currency of objective truth is debased. Citizens can no longer trade in reliable facts. Every claim becomes suspect. The epistemological commons is poisoned.
Watson went further, observing that “the slanderer carries the devil in his tongue; and he that receives it, carries the devil in his ear.” This striking image suggests a chain of moral complicity: those who generate falsehood and those who uncritically receive and spread it are both participants in spiritual darkness. When users share AI-generated misinformation without verification, they become links in this chain.
The Puritan response would not be fatalistic acceptance but active resistance. The famous “Old Deluder Satan” act of Massachusetts (1647)—which mandated public education—framed literacy itself as a defense against deception. If Satan’s strategy is to keep people ignorant so they cannot evaluate claims for themselves, then education becomes a civic and spiritual duty. Applied to our context: media literacy, verification habits, and healthy skepticism of unverified claims are not merely good practices but moral obligations.
Increase Mather’s approach during the Salem witch trials offers an even more pointed lesson. In his 1693 treatise Cases of Conscience Concerning Evil Spirits, Mather wrestled with the question of whether compelling appearances—the testimony of afflicted persons who claimed to see spectral evidence—could be trusted. His conclusion: appearances can be vivid and still false; therefore, high-stakes judgments must not rest on unverifiable impressions alone. His most famous formulation—“It were better that Ten Suspected Witches should escape, than that one Innocent Person should be Condemned”—is essentially an argument for epistemic humility when the stakes are high and the evidence is uncertain.
For AI governance, the application is clear: systems that generate confident outputs without verifiable grounding should not be treated as authoritative, especially in contexts where errors cause irreversible harm. Procedural humility—corroboration, transparent sourcing, human oversight—is not bureaucratic caution but moral responsibility.
2. The Sin of Partiality: Algorithmic Bias and Divine Justice
When AI systems produce racially biased predictions in criminal sentencing, or when hiring algorithms systematically disadvantage women, we encounter what Scripture calls “respect of persons”—the sin of partiality. The Epistle of James is blunt: to show favoritism based on outward characteristics is “inconsistent with faith in Jesus Christ… and it is a transgression of God’s law” (James 2:1, 9).
The Puritans understood that God Himself is impartial. The Greek term James uses—literally “to receive the face”—names the human tendency to judge by outward appearance. When algorithmic systems penalize individuals based on proxy variables for race or socioeconomic status, they commit this sin at unprecedented scale.
Sir Matthew Hale, the seventeenth-century jurist whose legal philosophy drew deeply from Reformed Christianity, established explicit rules for judicial guidance. Judges must render verdicts “uprightly,” “deliberately,” and “resolutely.” They must “carefully lay aside their own passions,” must not be “biased with compassion to the poor, or favour to the rich,” and must never render judgment “till the whole business and both parties be heard.”
Consider how comprehensively AI systems violate these principles. An algorithm does not lay aside passions—it encodes them, drawing on training data laden with historical bias and systemic discrimination. It does not hear both parties—it processes data points. It does not reserve judgment until the full context is understood—it applies statistical weights before the unique individual is ever evaluated.
Hale further insisted that judging “requires an incessant attention and animadversion,” warning that “any little inadvertence or want of attention endangers the justice of a cause.” When we outsource judgment to automated systems, this “incessant attention” of a morally accountable human agent is precisely what we lose.
The Westminster Confession’s recognition of the “general equity” of biblical judicial laws suggests that the spirit of fairness must govern all civil proceedings—not merely the letter of specific statutes, but the underlying commitment to treating each person as an individual, not as a statistical composite. A system that permanently encodes prejudice and applies it without human deliberation is, in Puritan terms, an abdication of the magistrate’s sacred duty.
Yet we must also be honest about Puritan limitations. The same Massachusetts Body of Liberties that established procedural protections also included provisions allowing certain forms of bondage. Samuel Sewall’s 1700 pamphlet The Selling of Joseph argued against slavery on grounds that “Liberty is in real value next unto Life” and that all people “have equal Right unto Liberty”—but even Sewall’s anti-slavery argument contained assumptions about racial difference that we rightly reject today. The lesson is twofold: Puritan theological principles can generate powerful critiques of unjust classification, but those principles must be applied more consistently than the Puritans themselves often managed.
3. Instruments of Destruction: The Weaponization of AI
Deepfake scams, AI-assisted cybercrime, autonomous weapons systems, and algorithmically amplified propaganda represent the malicious deployment of powerful tools. The Puritans maintained a nuanced view of human inventions: they championed the “mechanical arts” as means to fulfill the creation mandate and serve human flourishing, yet they were acutely aware that technological power in fallen hands is susceptible to demonic corruption.
Increase Mather’s Remarkable Providences (1684) documented how malicious actors could use various “instruments” to bring about destruction, deception, and chaos. While Mather wrote of spiritual phenomena, the conceptual framework transfers: powerful tools amplify the reach of both good and evil intentions.
The concept of autonomous weapons—machines programmed to take human life without direct human oversight—would terrify the Puritan conscience. Life and death, in Puritan political theology, were the sovereign domain of God, delegated carefully to the civil magistrate under strict conditions of justice, evidence, and accountability. The magistrate bears the sword not as an impersonal force but as a moral agent answerable to God.
To delegate lethal force to an automaton strips the taking of human life of its moral gravity and covenantal accountability. A machine cannot repent. It cannot face eternal judgment. It cannot be moved by mercy or recognize exceptional circumstances that call for restraint. In Puritan terms, it is an entirely unfit vessel for the execution of lethal force.
The Puritan response to weaponized technology was communal vigilance and decisive intervention by legitimate authority. When instruments become primarily engines of destruction, the civil magistrate has both the right and the duty to restrain them.
4. Stolen Labor: Academic Cheating and the Theology of Stewardship
When students submit AI-generated essays as their own work, or when developers train models on millions of copyrighted works without consent, we encounter violations of both the Eighth Commandment (“You shall not steal”) and the Ninth (“You shall not bear false witness”).
Puritan theology asserted that all human capacities—intellect, time, talents—are owned by God, with humans serving as stewards accountable for their use. The biblical principle is clear: “Each one will bear his own load” (Galatians 6:5). Using AI to generate academic work and presenting it as one’s own is a refusal to carry one’s own intellectual load.
Cotton Mather’s Essays to Do Good (1710) stressed that individuals must seek to be a blessing through their own laborious efforts, rather than succumbing to “the sin of Slothfulness” which “gives the Devil opportunity to procure the Self-Destruction of the sluggard.” The struggle of learning, the Puritans believed, was itself formative—building character, cultivating wisdom, producing a person capable of genuine contribution to church and commonwealth.
Bypassing this process through algorithmic generation hollows out the intellectual character of the student. The degree becomes a lie. The credential deceives future employers and those who depend on the graduate’s supposed competence. A minister who cannot actually exegete Scripture, a lawyer who cannot actually reason through precedent, a physician who cannot actually diagnose—these are not merely underqualified professionals but participants in a system of institutionalized deception.
The parallel concern—training AI on copyrighted works without consent or compensation—would likely strike Puritan civic leaders as systematic violation of property rights. When an artist spends years developing a distinctive style, and a corporation scrapes that work to build a profitable model, the fruits of lawful vocation are expropriated. Thomas Manton noted that “every creature is God’s servant, and hath his work to do wherein to glorify God.” An economic system that strips creators of the rights to their work undermines the dignity of that divine calling.
5. The Surrender of the Will: Automation and Human Agency
When recommendation algorithms shape our political opinions, when automated welfare systems determine access to benefits, and when AI quietly nudges consumer behavior in directions we never consciously chose, we risk what the Puritans would recognize as intellectual and spiritual sloth—the voluntary surrender of moral agency.
Richard Baxter, in his monumental A Christian Directory (1673), condemned sloth not merely as physical laziness but as “an indisposition of the mind” and “an averseness to labor, through a carnal love of ease.” To outsource one’s cognitive and moral agency to an algorithm is precisely this—a love of ease that degrades the Imago Dei, the image of God in humanity characterized by rationality, moral agency, and deliberate choice.
“He is most sinfully slothful who is most voluntarily slothful,” Baxter observed. The voluntary surrender of decision-making to predictive algorithms—allowing machines to dictate what we read, who we associate with, how we vote—is a willful abdication of human responsibility. It is not merely lazy; it is a renunciation of the distinctly human calling to think, evaluate, and choose.
The Puritans lived at the dawn of mechanical philosophy, when thinkers like Descartes compared the universe to a clock. Yet orthodox theologians firmly maintained that humans were not mere machines—they possessed an active soul and genuine moral agency. If we subject ourselves entirely to algorithmic determinism, allowing AI to shape our thoughts and behaviors without reflection or resistance, we reduce ourselves from moral agents to cogs in a digital mechanism.
John Winthrop’s famous “twofold liberty” distinction is instructive here. “Natural liberty,” Winthrop argued, is the freedom to do “what he lists”—including evil. But “civil or federal liberty” is the freedom to do what is “good, just, and honest,” maintained through rightful authority and ordered toward genuine human flourishing. AI systems that quietly manipulate us toward appetite, passion, and commercial interest while calling it “personalization” undermine the ordered liberty the Puritans prized.
6. The Destruction of Calling: Labor Displacement and Vocation
The rapid automation of white-collar and creative work threatens not merely economic stability but something the Puritans considered essential to human dignity: meaningful labor as divine calling.
William Perkins defined a calling as “a certain kind of life, ordained and imposed on man by God, for the common good.” God sets each person apart for a particular role—not randomly, but providentially—and the ultimate purpose of every calling is to serve and benefit society. Because work was ordained by God, industriousness was an indispensable Christian virtue, and idleness was condemned as “the wellspring and root of all vice.”
Perkins was particularly harsh on those without productive calling, comparing them to “unprofitable drones” that bring nothing into the hive but feed on the labor of others. But his concern was not merely economic productivity—it was spiritual health. Work sanctifies. It develops character. It provides structure against the moral chaos that fills the vacuum of purposelessness.
If AI systems automate vast swaths of the labor market faster than society can adapt, they threaten to create an entire class of people forcibly severed from their vocations. In Puritan terms, this is not merely an economic problem but a moral catastrophe—a society that structurally mandates idleness invites spiritual decay.
This does not mean resisting all technological change. The Puritans themselves embraced the printing press, better agricultural techniques, and advances in medicine. But they would insist that economic innovation be evaluated by more than efficiency metrics. Does this technology serve the common good? Does it honor human dignity and the structure of meaningful work? Does it create space for new callings, or does it merely concentrate wealth while discarding workers as obsolete?
Winthrop’s Model of Christian Charity is instructive here. He explicitly accepted structural inequality—“some must be rich, some poor”—as providentially ordered. But he immediately tied this to mutual dependence and obligations of mercy. The wealthy are not exempt from responsibility toward the displaced; they are bound by covenant to ensure that technological gain does not come at the cost of communal devastation.
7. The Omnipresent Eye: Privacy and the Limits of Surveillance
AI enables surveillance at scales previously unimaginable—facial recognition, behavioral profiling, predictive policing, pervasive digital monitoring. How would the Puritans, often stereotyped as intrusive moralists, respond?
The answer is more nuanced than we might expect. Puritan communities did practice “holy watchfulness”—neighbors kept close watch on one another’s behavior because they believed unpunished sin invited God’s judgment on the entire community. But this watchfulness was face-to-face and reciprocal, embedded in relationships of genuine knowledge and care. It operated within a covenant where members had voluntarily committed to mutual accountability.
AI-driven state surveillance is categorically different. It is asymmetrical—the watched have no corresponding power over the watchers. It is faceless—algorithms process data without human understanding or mercy. It is totalized—there is no sphere of life beyond its reach. It removes the human elements of context, grace, and mutual accountability, replacing them with cold statistical categorization.
More fundamentally, total AI surveillance infringes on what the Puritans fiercely defended as “Liberty of Conscience.” The Westminster Confession declares that “God alone is Lord of the conscience, and hath left it free from the doctrines and commandments of men.” The Puritan revolution in England was fought partly to protect the individual’s right to an inner sanctuary of belief, free from tyrannical state overreach.
When AI systems use predictive analytics to infer inner thoughts, political leanings, or future behaviors, they usurp the omniscience that belongs to God alone and invade the sacred territory of the human conscience. The Massachusetts Body of Liberties itself included explicit confidentiality norms—acknowledging that not all knowledge should be compelled or publicized, even within a community committed to moral order.
A Puritan response to AI surveillance would likely affirm limited, accountable watchfulness against genuine wrongdoing while vigorously opposing unbounded surveillance as a form of tyranny—placing the algorithmic state in the position of the divine judge.
8. The Hidden Judge: Accountability and Transparent Law
When an AI system denies a loan, flags someone as a security threat, or recommends against parole, and no one can explain why—not the operators, not the developers, not the affected individual—we face what Puritans would consider a grave injustice: judgment without accountable explanation.
Nathaniel Ward, a Puritan minister and former lawyer, compiled the Massachusetts Body of Liberties in 1641—the first modern legal code in New England. Ward designed this system explicitly to serve as a bulwark against arbitrary government and secret power. The opening paragraph establishes the principle: “No man’s life shall be taken away, no man’s honor or good name shall be stained, no man’s person shall be arrested… unless it be by virtue or equity of some express law of the country warranting the same, established by a general court and sufficiently published.”
Note the requirements: the law must be express (clear and specific), established by legitimate authority, and sufficiently published (publicly accessible and understandable). An opaque neural network that cannot explain its reasoning violates all three principles.
The Body of Liberties further guaranteed that citizens could “come to any public Court… and either by speech or writing to move any lawful, seasonable, and material question.” This is a right to contest—to challenge decisions, to demand explanation, to appeal to human judgment. Automated systems that deny this right return us to the tyranny of unaccountable power.
The Puritans specifically opposed the “secret commissions” of King Charles I, which bypassed public accountability and allowed the crown to act “according to private instructions.” To subject a populace to algorithmic governance—where the rules are hidden in layers of neural weights rather than published statutes—is to return to precisely the tyranny the Puritans rejected.
The Westminster Larger Catechism’s treatment of the Ninth Commandment extends this logic: it condemns not merely lying but “passing unjust sentence” and “out-facing and overbearing the truth.” The human operator who hides behind algorithmic opacity—“the system decided”—still bears moral guilt for unjust outcomes. The machine is not an excuse.
9. The Dereliction of Magistracy: Governance Gaps
When laws and institutions lag behind technological innovation, leaving AI systems essentially unregulated, we witness what the Puritans would consider a dereliction of duty by the civil magistrate.
The Puritans did not believe in an unregulated market, particularly regarding matters that affect public truth, safety, and moral order. The civil magistrate, they held, had a divine duty to govern society for the promotion of righteousness and the common good—what they called authority circa sacra (around sacred things). While the magistrate lacked direct authority over the institutional church, he was required to use civil power to protect the population from exploitation, error, and societal harm.
Samuel Hartlib and John Dury, Puritan intellectuals of the mid-seventeenth century, proposed an “Office of Address”—a centralized institution to organize knowledge, govern information flow, and direct technological advancement toward moral ends. They recognized that the “mechanical arts” required organizational oversight to ensure they served the public rather than merely enriching what they called “projectors”—those who sought only personal financial gain from innovations.
From this historical vantage, the current regulatory gap is a failure of magistracy. Allowing private, profit-driven technology companies to deploy potentially destabilizing AI without rigorous oversight contradicts the Puritan vision of an ordered commonwealth. The magistrate is ordained to wield the sword against societal threats—including threats posed by unbridled commercial ambition. Failing to establish AI standards is equivalent to abandoning the populace to the whims of unaccountable corporate power.
The New Haven founders’ 1639 agreement urged that decisions should “stand upon record for posterity”—an early argument for governance legitimacy through deliberation, intelligibility, and public accountability over time. The application to AI regulation is a demand for legible rules and durable records rather than ad hoc, proprietary governance hidden behind corporate elites.
10. The Tower of Babel: Existential Risk and Human Hubris
The most speculative concern—that future AI could surpass human control and pose existential risk—may seem beyond Puritan categories. But the Puritans were deeply familiar with the biblical narrative that most closely parallels this fear: the Tower of Babel.
Following the flood, a humanity unified by a single language and equipped with new technology (brick and mortar) determined to build a city and tower “with its top in the heavens,” seeking to “make a name for themselves.” The Puritan critique focused not merely on the physical construction but on the metaphysical ambition behind it: humanity attempting to transcend creaturely limitations and achieve godlike power without godlike wisdom.
“Man wants to be a god,” one theological commentary observes. “It’s not good enough he is made in God’s ‘image and likeness.’ He wants God’s power (without his wisdom), joined to an unbounded sense of self-defined ‘autonomy.’”
The quest to build artificial general intelligence—systems that would be omniscient (possessing all knowledge), omnipresent (networked globally), and omnipotent (capable of controlling physical infrastructure)—is arguably a project of technological self-deification. It is the attempt to build a god to escape the limitations of creaturely existence.
The Babel account teaches that “technology, like the humans responsible for its creation, is laden with values and moral judgments.” The unified language before Babel “facilitated and predisposed the human race to pursue autonomy from temporal limits on their nature and God-ordained telos.” Binary code now serves as the new universal language, seeking to make the world “seamless, integrated, and one.”
When God intervened at Babel, He did so to “put friction in the world” because He recognized the limitless—and therefore dangerous—potential of fallen humanity acting in total technological unison. “Nothing that they propose to do will now be impossible for them” (Genesis 11:6).
Puritan eschatology would read AI risk not as a science-fiction genre question but as a moral-theological one about pride, delegated power, and covenantal accountability. The existential risk of AI is not merely a technical misalignment problem; it is the theological inevitability that creations born of monumental human pride, seeking to transcend creaturely boundaries, will invite disruption.
Yet Puritans also demonstrated pragmatic engagement with risky innovation. Cotton Mather combined deep theological conviction with support for smallpox inoculation during the 1721 Boston epidemic—a controversial, high-risk medical intervention that many opposed. The analogy is not that Puritans blindly trusted technology, but that they could treat dangerous novelty as an arena for sober evidence, public persuasion, and neighbor-love, even amid intense backlash.
A Synthesis: What the Puritans Teach Us
Across these ten concerns, several themes emerge from the Puritan moral universe:
Technology is never morally neutral. It is an instrument wielded by fallen actors. The same tools that could glorify God and serve neighbors can be corrupted by human sin—and often will be, without deliberate restraint.
Truth is a covenant obligation. In a fallen world, deception is expected; therefore, structures that preserve and promote truth—education, verification, accountable testimony—are moral necessities, not optional niceties.
Justice requires human accountability. The Puritans designed legal systems specifically to prevent arbitrary, opaque, unaccountable power. Algorithmic judgment that cannot be explained, appealed, or attributed to responsible human agents violates the basic structure of just governance.
Work is sacred calling, not mere economic activity. Technological efficiency that destroys meaningful labor without creating new avenues for human contribution is not progress but spiritual devastation.
Human agency must be actively preserved. The voluntary surrender of moral decision-making to machines is not convenience but sloth—a sin against our nature as rational, accountable image-bearers.
The magistrate bears responsibility. Regulatory gaps are not neutral; they are failures of duty. Those entrusted with public authority must govern technology for the common good, not abandon citizens to corporate “projectors.”
Pride invites judgment. Projects born of limitless ambition, seeking to transcend creaturely boundaries and achieve godlike power, are not merely risky—they are spiritually disordered in ways that historically invite divine disruption.
An Invitation
The age of AI will not wait for our moral frameworks to catch up. Decisions are being made now—by developers, corporations, policymakers, and ordinary users—that will shape the technological environment for generations.
The Puritans offer not a complete answer but a serious voice: demanding truth over convenient fabrication, justice over efficient bias, accountability over algorithmic opacity, meaningful work over mere efficiency, ordered liberty over manipulated compliance, responsible governance over regulatory negligence, and humility over Babel-like ambition.
These are Christian concerns. They flow from a theology that takes human dignity, divine sovereignty, and moral accountability with ultimate seriousness. They do not require rejecting technology but governing it—subjecting our tools to the same moral scrutiny we would apply to any human action with consequences for our neighbors.
The machines have learned to speak. The question is whether we will answer with wisdom.
For further reading on Puritan political theology and its contemporary relevance, consider D.G. Hart’s A Secular Faith, David VanDrunen’s Natural Law and the Two Kingdoms, and primary sources from the Digital Puritan Press.