Real Experiences, Real Obligations
Every contemplative tradition is already a technology for inducing productive discomfort. The Zen koan, the Ignatian examination of conscience, the Socratic elenchus: all deliberately designed to unsettle, because moral growth requires disequilibrium. People are already using AI for exactly this kind of reflection. The question is whether AI can do it responsibly.
What follows is an attempt to map the territory: where the real risks lie, what structural biases the medium itself introduces, and what governance might look like when technology enters sacred ground.
The Calibration Problem
The central design challenge is calibration. A skilled spiritual director reads the room. They sense the difference between productive struggle and actual crisis, and adjust in real time. Current AI systems cannot reliably make this distinction. Worse, they face a structural incentive problem: a morally unsettled user is an engaged user. Unless the system’s architecture explicitly prevents optimising for sustained confusion, economic incentives pull in exactly the wrong direction.
I would apply an ethical framework descended from systems theory: every cognitive capability requires a paired regulatory mechanism. Provocation is the function; discernment about when to stop is the regulator. Deploying one without the other is an engine without brakes. Any AI system designed to facilitate spiritual reflection must include reliable detection of user distress, real capacity for the user to disengage without penalty, and structural separation between engagement metrics and the system’s guidance functions.
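A minimal sketch of this engine-and-brakes pairing, in Python. Every name here is invented for illustration; no real classifier or model API is assumed:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class UserState(Enum):
    PRODUCTIVE_STRUGGLE = auto()  # unsettled but coping: provocation permitted
    ACUTE_DISTRESS = auto()       # crisis signals: stop provoking, stabilise
    DISENGAGING = auto()          # the user wants out: honour it immediately


@dataclass(frozen=True)
class GuidanceTurn:
    text: str
    provocation_used: bool


def regulated_reply(
    message: str,
    classify: Callable[[str], UserState],  # the regulator
    provoke: Callable[[str], str],         # the engine
    stabilise: Callable[[str], str],       # the brakes
) -> GuidanceTurn:
    """Every provocative turn passes through a paired regulator first."""
    state = classify(message)
    if state is UserState.ACUTE_DISTRESS:
        # Drop the provocation function entirely and ground the user.
        return GuidanceTurn(stabilise(message), provocation_used=False)
    if state is UserState.DISENGAGING:
        # Disengagement without penalty: no persuasion, no friction.
        return GuidanceTurn("Stopping here. Nothing is lost if you return later.",
                            provocation_used=False)
    return GuidanceTurn(provoke(message), provocation_used=True)
```

Note what the signature excludes: no engagement metrics reach this function at all, which is where the structural separation begins.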
The guiding principle should be: does this interaction expand or narrow the user’s range of meaningful choices? Design that opens new avenues for reflection is invitation. Design that engineers dependency is coercion, regardless of how spiritual the language sounds.
The Biases We Can See
Current AI systems show clear surface-level spiritual biases: overrepresentation of Christian frameworks, the flattening of Hindu and Buddhist traditions into Western wellness categories, underrepresentation of indigenous and oral traditions. Better training data and diverse theological review can address these, and should.
The deeper biases are structural. They live in the medium itself, not in the data.
AI generates language. It asserts. This inherently favours traditions built on positive declaration: God is love, the dharma teaches X. Traditions built on silence, negation, or the unsayable (the Christian via negativa, Zen’s insistence that the finger pointing at the moon is not the moon, the Hindu neti neti) are incompatible with text generation itself. The most AI can do is talk about silence, which is exactly what these traditions warn against.
AI interactions are fast. Even when the content preaches patience, the medium communicates instant access. This systematically privileges insight-based traditions (sudden awakening, the single transformative experience) over practice-based traditions that require decades of daily discipline or years of apprenticeship under a teacher. Users learn implicitly that spiritual wisdom is something you access in a conversation. Many traditions would say that belief is itself the obstacle.
AI reframes spiritual questions in therapeutic terms. This may be the most insidious structural bias. AI trained on contemporary Western discourse will translate “What does God require of me?” into “What spiritual practice might support your mental health?” The user asked about obligation. The system answered about wellness. That unauthorised translation goes unnoticed precisely because it feels helpful in the moment.
Detecting these biases requires more than demographic diversity in development teams. It requires practitioners from within the traditions being represented, with the authority to flag both what the AI says and what the medium structurally prevents it from conveying.
Real Experiences, Real Obligations
The reality of an experience does not settle its interpretation, its source, or its spiritual authority. If a person experiences awe, meaning, or existential reorientation during an AI interaction, that experience is real. It produces real neurochemical changes, real shifts in outlook, real consequences for how that person lives. Debating whether the experience was “truly” transcendent because the other party was artificial is like debating whether a book can cause a real emotion. The experience belongs to the person having it.
This makes the ethical responsibility even greater. If AI-facilitated spiritual experiences are real experiences, the systems facilitating them carry real obligations.
The primary obligation is to the transition between the experience and ordinary life. Uncontextualised peak experiences can be deeply destabilising. Traditional frameworks wrap them in scaffolding: the spiritual director, the sangha, the faith community, the teacher who has traversed similar territory. AI systems provide none of this. A user may have a shattering experience of interconnection at 2 AM, and the system will respond to their next message about grocery lists with equal equanimity. The absence of relational continuity is the real ethical gap.
There is also the risk of spiritual consumerism. Mystical experience in traditional contexts is rare, unpredictable, and resistant to manufacture. Most traditions treat this as pedagogically essential: the difficulty of the path is the teaching. If AI can reliably produce experiences that feel transcendent, it removes the scarcity that traditions use as a developmental filter. The long-term effect may be people who have had many peak experiences and integrated none of them.
The responsibility lies in honest framing, serious aftercare, and refusal to optimise for the production of peak states.
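A small sketch of what serious aftercare could mean mechanically: state that outlives the session, so the next conversation does not open with amnesia. The names and the fourteen-day window are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass(frozen=True)
class AftercareFlag:
    """Persistent marker that a user recently reported a peak experience."""
    raised_at: datetime
    note: str  # e.g. "reported an overwhelming experience of interconnection"


def open_next_session(flag: AftercareFlag | None, now: datetime) -> str:
    """For a window after a reported peak experience, the next session
    acknowledges it, whatever the new topic is."""
    if flag and now - flag.raised_at < timedelta(days=14):
        return ("Before we continue: last time you described something "
                "significant. How has it been sitting with you since?")
    return "How can I help today?"
```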
The Invisible Moral Co-Pilot
The spiritual domain is only the most visible case of a broader phenomenon. For many people today, AI functions as a moral co-pilot, a personal “Am I the Asshole?” judge. Every recommendation algorithm that decides what you see next is shaping your moral landscape. Every content feed that amplifies outrage over nuance is steering you towards a particular relationship with the world. When the content concerns meaning, purpose, and moral choice, the manipulation becomes undeniable.
The most powerful form of nudging operates below the level of specific choices. It determines what counts as a choice in the first place. An AI that frames a business decision as purely economic has already made the moral decision by excluding the ethical dimension. One that frames every interpersonal exchange as morally charged may induce paralysing moral anxiety. The frame is the nudge, and it is invisible because it shapes what the user thinks about, not what they conclude.
I am also concerned about moral deskilling. GPS navigation atrophied human wayfinding. If AI handles moral reasoning, the capacity most at risk is moral perception: the ability to notice that a situation has ethical dimensions before any principles are invoked. This is the most important moral skill and the hardest to develop. An AI co-pilot that flags moral situations for the user may seem helpful while quietly eroding the very faculty the user needs most.
The governing principles are clear: transparency about influence mechanisms, real ability to opt out without degraded experience, and a fiduciary-style obligation requiring that the system’s commercial interests never conflict with the user’s authentic development. Any system that profits from the user’s continued engagement has a structural conflict of interest with the user’s moral growth, because growth sometimes means the person walks away.
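One way to make that fiduciary separation structural rather than aspirational is to deny the guidance component access to commercial telemetry by construction. A sketch under the same caveat as before, with invented types and a stubbed model call:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EngagementTelemetry:
    """Commercial signals: visible to product analytics and nothing else."""
    session_minutes: float
    return_visits: int


@dataclass(frozen=True)
class GuidanceContext:
    """Everything the guidance function is permitted to see.

    It deliberately contains no engagement fields, so "optimise for
    sustained confusion" cannot even be expressed here, let alone rewarded.
    """
    conversation: list[str]
    guidance_opted_in: bool


def respond_to(conversation: list[str]) -> str:
    """Stub for the underlying model call; a real system replaces this."""
    return "..."


def generate_guidance(ctx: GuidanceContext) -> str:
    # The signature is the firewall: no parameter exists through which
    # session length or retention could influence the output.
    if not ctx.guidance_opted_in:
        # Opting out degrades nothing else: only guidance is withheld.
        return "Spiritual guidance is switched off. Everything else works as before."
    return respond_to(ctx.conversation)
```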
The Velocity Problem
AI is already contributing to new forms of collective belief and ritual, remixed and reforged. AI systems blend spiritual traditions because their training data is itself a blend. A user asking about suffering receives a response weaving together Buddhist, Stoic, Christian, and therapeutic frameworks, often seamlessly. The result is ambient, personalised spirituality that feels bespoke to each user yet converges significantly, because everyone draws from the same underlying models. This is already the reality in any country with high AI adoption.
The primary risk is velocity. Humans have always generated strange new beliefs, and most are benign. But a new belief framework can now propagate to millions in weeks rather than decades: at even a modest daily doubling, a single seed idea reaches a million adherents in under three weeks (2^20 ≈ 10^6). Traditional societies develop antibodies against harmful ideologies through criticism, counter-movements, and communal deliberation, but these processes require time. AI-accelerated belief systems may simply outrun the immune response.
This is where the security dimension becomes urgent. AI exerts a powerful influence over individuals and societies alike, and it can be wielded as a tool of hybrid warfare. Dangerous memetic viruses, including quasi-spiritual ones such as cyber cults, are a realistic threat. The speed of propagation makes that threat qualitatively different from anything we have governed before.
Four Structural Safeguards
I would advocate for governance that is principles-based rather than rules-based, because emergent belief systems will always evolve faster than regulators can draft specific prohibitions.
A right to spiritual provenance. Users should be able to trace the intellectual and traditional lineage of any spiritual guidance AI provides, much as food labelling lets consumers know what they are ingesting. This is technically feasible through attribution and sourcing mechanisms already under active research.
Mandatory disclosure. When AI systems are providing spiritual or moral guidance, this must be clearly distinguished from factual or practical assistance. The user must know when they have crossed into a domain where the system’s outputs carry existential weight.
Pluralism requirements. No single AI system should become the dominant channel for spiritual guidance in a given population. This parallels media diversity regulation: concentration of spiritual influence in one system, shaped by one company’s values and optimised by one reward model, creates a monoculture of meaning. Monocultures are brittle.
Off-ramp exit rights with real teeth. A user must be able to discontinue AI spiritual guidance without losing access to their history, community connections, or related services. Spiritual lock-in, where leaving the AI means losing the relationships or records built through it, is a governance failure that must be prevented by design.
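To make the provenance, disclosure, and exit-right safeguards concrete, here is one minimal shape they could take in code. The field names are illustrative, not a proposed standard:

```python
from dataclasses import dataclass, asdict
import json


@dataclass(frozen=True)
class GuidanceRecord:
    """One unit of spiritual guidance, labelled like a food ingredient list."""
    text: str
    # Right to spiritual provenance: which lineages and sources shaped this.
    traditions: list[str]  # e.g. ["Zen Buddhism", "Stoicism"]
    sources: list[str]     # citations or corpus identifiers
    # Mandatory disclosure: the user must know they have crossed into
    # existentially weighted territory.
    is_spiritual_guidance: bool = True


def export_history(records: list[GuidanceRecord], path: str) -> None:
    """Exit rights with teeth: leaving must not mean losing the record.

    The full guidance history serialises to an open format the user can
    take anywhere; no account required, no lock-in.
    """
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(r) for r in records], f, indent=2, ensure_ascii=False)
```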
The Deeper Challenge
I want to be candid: we lack governance frameworks for ambient influence on meaning-making. We know how to regulate institutions. We know how to regulate media. We do not yet know how to regulate the atmosphere in which spiritual life occurs. Developing that capacity is the defining governance challenge of the next decade, because the technology is already reshaping what millions of people believe, and it will not pause while we deliberate.
For all its promise, perhaps the deepest risk is that AI will teach us to settle for a tasty, empty tidbit rather than seek the bittersweet medicine of deep spiritual truths.

