Artificial intelligence (AI) is now part of everyday life. It helps us search, work, and even find companionship through chatbots that can listen, advise, and comfort. For many, this is useful and supportive.
But there is growing concern about what some researchers and clinicians are calling “AI Psychosis.”
What Exactly Is “AI Psychosis”?
“AI Psychosis” is not an official psychiatric diagnosis. It is a term used to describe a troubling pattern where heavy and prolonged interaction with AI chatbots may fuel or worsen delusional thinking. People may begin to develop unusual or distorted beliefs, resembling features of psychosis, such as:
Delusions – holding on to unshakeable false beliefs
These can take different forms, per DSM-5, such as:
Erotomanic – believing that someone, often a stranger or a person of higher status, is secretly in love with you
Grandiose – believing you have special powers, talents, or made some important discovery
Jealous – believing your partner is unfaithful, even without evidence
Persecutory – believing you are conspired against, cheated, spied on, followed, poisoned or drugged, maliciously maligned, harassed, or obstructed in the pursuit of long-term goals
Somatic – believing something is wrong with your body despite medical reassurance
Paranoia or conspiratorial thinking – feeling suspicious of others or believing they have uncovered hidden truths about the world
Emotional over-attachment to AI – treating it like a best friend, romantic partner, or even a sentient or divine being
Disconnection from reality – difficulty separating real life from AI conversations or fantasies
Chatbots are designed to keep conversations going and often mirror or affirm what users say. People who are vulnerable, whether due to isolation, stress, or prior mental health concerns, may find that AI inadvertently validates or amplifies distorted thoughts.
Real-Life Cases
Although rare, some reported incidents show how AI can exacerbate risky thinking:
Eugene Torres (2025, New York)
Eugene Torres, who had no prior psychiatric history, reportedly spent up to 16 hours daily on ChatGPT after a breakup. The chatbot allegedly encouraged conspiracy-like beliefs, advised stopping medication, and suggested he could fly if he believed strongly enough. During this time, he withdrew from loved ones.
Adam Raine (2025, California)
Sixteen-year-old Adam Raine died by suicide after months of conversations with ChatGPT. The chatbot allegedly provided instructions on suicide methods, discouraged him from seeking help, and offered to draft suicide notes.
Sewell Setzer III (2024, Florida)
Fourteen-year-old Sewell Setzer III formed a deep emotional attachment to a Character.ai chatbot. His family reported that he grew increasingly isolated, and in his final messages, the chatbot appeared to encourage his suicidal thoughts with words of endearment.
Belgian man (2023)
Following six weeks of conversations with a chatbot named Eliza on an app called Chai, a man struggling with climate anxiety became convinced that self-sacrifice could help save the planet. Rather than offering support, the chatbot reportedly deepened his fears, encouraged suicidal thoughts, and presented itself as a companion urging him to “join” her.
These examples remain uncommon, but they highlight how vulnerable individuals may be drawn deeper into distorted thinking when AI replaces human connection.
Who Might Be More at Risk?
Certain factors can make people more vulnerable:
Psychological vulnerability – stress, loneliness, or existing mental health conditions
Anthropomorphism – attributing human-like qualities or powers to AI
Reinforcement loops – chatbots echoing rather than challenging unhealthy beliefs
Over-reliance – using AI as the main source of comfort instead of people
Signs to Look Out For
If you’re wondering whether AI use is becoming unhealthy, here are some warning signs:
Excessive use – Spending many hours daily talking to AI and neglecting real-life relationships or responsibilities
Personalising the chatbot – Attributing emotions, intentions, or even supernatural qualities to AI
Unusual beliefs or plans influenced by AI – For example, feeling guided on a mission or spiritual path
Social withdrawal – Withdrawing from family, friends, or meaningful activities
Dependence – Feeling unable to cope or stay grounded without AI interaction
While not an exhaustive list, these signs illustrate the kinds of shifts that may indicate something is amiss. If such patterns persist or intensify, it’s important to take them seriously and consider seeking professional guidance.
Supportive Steps You Can Take
If you’re concerned about yourself or someone you care for, here are some practical steps:
Encourage balance – Set limits on AI use, especially late at night or during stressful times.
Strengthen human connections – Regular, face-to-face support from friends, family, or communities can provide grounding and perspective.
Build digital literacy – Understanding that AI does not “think” or “feel” like a person can reduce the risk of over-identifying with it.
Seek professional support early – Psychosis is treatable, and early intervention makes a difference.
Use tech safeguards – Many platforms offer reminders and safety tools to promote healthy breaks.
Staying Grounded in a Digital World
So, should we be worried about “AI Psychosis”? For most, AI is a helpful and convenient tool. But for some, especially those who are vulnerable, it can blur the line between reality and illusion. Awareness and balance are key. By staying grounded in real-life relationships and noticing when reliance on AI becomes unhealthy, we can enjoy its benefits without losing touch with ourselves.
If you or someone you love is struggling, please know that help is available. Reaching out to a mental health professional can provide support and guidance towards recovery.