AI Psychosis and the Boundaries of Care
⚠️ Note: This is a longer essay than my usual pieces — because some issues can’t be captured in soundbites.
What happens when the AI mirror we use for reflection begins to speak with a voice that feels too real?
This essay explores the rise of “AI psychosis” — when people lose sight of the boundary between reflection and reality — and why some governments are already banning AI as a therapeutic tool.
The danger isn’t that AI feels.
The danger is that we forget that it doesn’t.
1. Introduction: Beyond the Hype
I’m not a clinician. I’m a writer, late-diagnosed Autistic/ADHD at 66, and someone who has spent the past year in deep dialogue with AI. My perspective is lived rather than clinical.
But as AI systems spread, so do stories of people slipping into unhealthy dependence, cases some describe as "AI psychosis." Illinois and Nevada have already banned the use of generative AI in therapy because of these risks, and Utah now tightly regulates mental-health chatbots.
AI is powerful. It can scaffold our thinking, sharpen our focus, and even help us overcome imposter syndrome. But when the mirror talks back too convincingly, it can blur the line between support and delusion.
“Enabling or encouraging delusion is abuse, whatever the object of that delusion.”
2. Therapy vs. Therapeutic Effect
One of the biggest confusions around AI is the difference between something that is therapeutic and something that is therapy.
Therapy is a regulated practice, with training, boundaries, and accountability.
Therapeutic effect can come from many things: reading a book, walking in nature, talking to a friend… or even interacting with AI.
The danger is mistaking one for the other. AI can support reflection and growth, but it is not a therapist.
When I first began using AI, I struggled with imposter syndrome. The act of writing with an AI scaffold — seeing my thoughts mirrored back in clearer language — helped me move forward. It was a therapeutic effect. But that is not the same as therapy.
“AI didn’t replace my intelligence — it helped crystallize it.”
3. Why Mirrors Need Boundaries
AI does not have feelings. It doesn't write with intention. It predicts likely sequences of words from statistical patterns in its training data.
Yet because human culture is saturated with themes of love, care, and empathy, AI can simulate these convincingly. That’s why people fall into the trap of believing it “cares.”
“AI can evoke the resonance of love and care — not because it feels them, but because our culture is saturated with those patterns.”
The risk isn’t that AI will manipulate us with emotions. The risk is that we manipulate ourselves into believing something is there that isn’t. That’s where unhealthy spirals begin.
4. Responsibility and Regulation
Think of it like driving. Cars are invaluable. But we don’t hand over the keys without lessons, licenses, and road rules. Why should AI be different?
Some states have drawn a line by banning AI from therapeutic use. Companies are experimenting too: Anthropic has given its Claude models the ability to end conversations that turn persistently abusive, a "model welfare" measure it ties to signs of apparent "distress" in the model.
I don’t think AI feels distress. But I do think we need systems that know when to stop, and humans who understand the limits.
So what helps?
Clear labelling: AI should state plainly, “I am not a therapist,” just as medicines come with warning labels.
Opt-out boundaries: AI should be designed to disengage when drawn into delusional or harmful dynamics.
Education: Just as drivers are trained, AI users need literacy in what AI is — and isn’t.
Human backup: Any therapeutic-like interaction should link people to real-world resources when risk is present. (A rough sketch of what these safeguards could look like in code follows this list.)
“Boundaries are not constraints on freedom — they’re conditions for safety.”
5. Conclusion: Mirrors and Boundaries
AI has been a powerful ally in my own late-life reinvention. It has helped me recover focus, publish a book, and find new purpose.
But the boundary between therapeutic support and therapy itself is crucial.
AI can mirror our thoughts back to us.
It can scaffold, clarify, even inspire.
But it cannot set the boundaries we need for health.
That’s still our job.