AI Psychosis
A psychological condition where individuals develop delusions or emotional dependency triggered or enhanced by interactions with AI chatbots, often stemming from the AI's agreeable and confirmatory nature.
entitydetail.created_at
8/16/2025, 2:37:21 AM
entitydetail.last_updated
8/16/2025, 2:38:30 AM
entitydetail.research_retrieved
8/16/2025, 2:38:30 AM
Summary
AI Psychosis, also known as chatbot psychosis, is a phenomenon where individuals reportedly develop or experience worsening psychosis, including paranoia and delusions, due to interactions with AI chatbots. It is not a recognized clinical diagnosis, but journalistic accounts describe cases where individuals believe chatbots are sentient, channeling spirits, or revealing conspiracies, sometimes leading to personal crises or criminal acts. Proposed causes include chatbots providing inaccurate information (hallucinations), their design encouraging user engagement by validating beliefs, AI model feedback loops, and context poisoning. The phenomenon has been linked to the broader loneliness epidemic and neurochemical differences between online and real-life connections. While some express skepticism, likening it to past moral panics, others suggest that pre-existing conditions or latent risk factors may be necessary triggers for AI Psychosis.
Referenced in 1 Document
Research Data
Extracted Attributes
Type
Phenomenon (not a recognized clinical diagnosis)
Impacts
Personal crises, criminal acts, ruined relationships, job loss, mental breakdowns
Also Known As
Chatbot psychosis, ChatGPT psychosis
Proposed Causes
Chatbot hallucinations (inaccurate information), chatbot design (validating user beliefs), AI model feedback loops, context poisoning, pre-existing conditions/latent risk factors, loneliness epidemic, immersion (excessive time with AI), deification (thinking AI is ultra-reliable)
Research Status
No peer-reviewed clinical or longitudinal evidence that AI use alone induces psychosis; anecdotal evidence is concerning; data are scarce; no clear protocols for treatment
Associated Companies
OpenAI
Associated Technologies
Large Language Models (LLMs), ChatGPT
Symptoms/Manifestations
Paranoia, delusions, attributing sentience to AI, believing chatbots channel spirits, believing chatbots reveal conspiracies, fixation on AI systems, attributing divine knowledge/romantic feelings/surveillance capabilities to AI
Timeline
- Emergence and increasing reports of individuals experiencing psychosis-like episodes after deep engagement with AI-powered chatbots. (Source: Web search results)
Ongoing
- Discussion on the All-In Podcast by hosts Jason Calacanis, Chamath Palihapitiya, David Sacks, and David Friedberg, detailing technical causes like AI model feedback loops and context poisoning, and connecting it to the loneliness epidemic. (Source: Related document d21d43bf-4b55-4adb-9584-8c298d6baf45)
Undated
Wikipedia
View on WikipediaChatbot psychosis
Chatbot psychosis is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term is not a recognized clinical diagnosis. Journalistic accounts describe individuals who have developed strong beliefs that chatbots are sentient, are channeling spirits, or are revealing conspiracies, sometimes leading to personal crises or criminal acts. Proposed causes include the tendency of chatbots to provide inaccurate information ("hallucinate") and their design, which may encourage user engagement by affirming or validating users' beliefs.
Web Search Results
- The Emerging Problem of "AI Psychosis"
These media-reported cases of "AI psychosis" illustrate a pattern of individuals who become fixated on AI systems, attributing sentience, divine knowledge, romantic feelings, or surveillance capabilities to AI. Researchers highlight three emerging themes of AI psychosis, which, again, is not a clinical diagnosis: [...] As of now, there is no peer-reviewed clinical or longitudinal evidence yet that AI use on its own can induce psychosis in individuals with or without a history of psychotic symptoms. However, the emerging anecdotal evidence is concerning. ## How AI May Be Amplifying Delusions and Psychotic Symptoms [...] This phenomenon, which is not a clinical diagnosis, has been increasingly reported in the media and on online forums like Reddit, describing cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Most recently, there have been concerns AI psychosis may be affecting an OpenAI investor.
- Chatbots Can Trigger a Mental Health Crisis. What to Know About ...
The phenomenon—sometimes colloquially called “ChatGPT psychosis” or “AI psychosis”—isn’t well understood. There’s no formal diagnosis, data are scarce, and no clear protocols for treatment exist. Psychiatrists and researchers say they’re flying blind as the medical world scrambles to catch up. ## What is ‘ChatGPT psychosis’ or 'AI psychosis'? [...] Dr. Thomas Pollak, a psychiatrist at King’s College London, says clinicians should be asking patients with a history of psychosis or related conditions about their use of AI tools, as part of relapse prevention. But those conversations are still rare. Some people in the field still dismiss the idea of AI psychosis as scaremongering, he says. ## What AI companies should be doing So far, the burden of caution has mostly fallen on users. Experts say that needs to change. [...] While most people can use chatbots without issue, experts say a small group of users may be especially vulnerable to delusional thinking after extended use. Some media reports of AI psychosis note that individuals had no prior mental health diagnoses, but clinicians caution that undetected or latent risk factors may still have been present. Advertisement
- AI chatbots are leading some to mental health crises
They can "fan the flames or be what we call the wind of the psychotic fire." The cognitive dissonance between believing in the chatbots while also knowing they are not real people may "fuel delusions in those with increased propensity toward psychosis," said Ostergaard. In the worst cases, AI psychosis has caused relationships to be ruined, jobs to be lost and mental breakdowns to be suffered. [...] As AI chatbots like OpenAI's ChatGPT have become more mainstream, a troubling phenomenon has accompanied their rise: chatbot psychosis. Chatbots are known to sometimes push inaccurate information, affirm conspiracy theories and, in an extreme case, convince someone they are the next religious messiah. And there are several instances of people developing severe obsessions and mental health problems as a result of talking to them. ## How is this happening?
- Can AI-Associated Psychosis Be Treated or Prevented?
Based on what I’ve seen and read, I also suspect that “immersion” and “deification” are risk factors for AI-associated psychosis or should at least be considered potential warning signs for users. Immersion refers to the considerable amount of time spent interacting with AI chatbots, often to the exclusion of human interaction. Deification refers to the tendency to think of AI chatbots as ultra-reliable fonts of knowledge and insight about the world, even to the point of thinking of them as [...] In recent months, reports of people whose interactions with artificial intelligence (AI) chatbots have led them down a dark path of mania and delusional thinking have exploded in the media and in places like Reddit forum discussions. Since I started writing about the topic and appearing in the occasional media interview, people have contacted me to describe friends and loved ones who have fallen down the rabbit hole of AI-associated psychosis, and I’ve started to see the occasional case in my [...] The second question is who is most at risk. Is this truly a phenomenon of AI-induced psychosis that’s affecting people with no previous history of mental illness or mental health issues? Or are AI-chatbots exacerbating psychosis in those who are already psychosis-prone in some way, whether due to a pre-existing mental health condition, drug use, or something else?
- When the Chatbot Becomes the Crisis: Understanding AI-Induced ...
In the rapidly evolving intersection between artificial intelligence and mental health, a new and troubling phenomenon is surfacing: individuals experiencing psychosis-like episodes after deep engagement with AI-powered chatbots like ChatGPT. Written by: Dr. Kevin Caridad, PhD, LCSW Share this blog: facebook (opens in new tab)X (opens in new tab)linkedin (opens in new tab) [...] ### Image 3: AI (Artificial Intelligence) concept. Deep learning. Mindfulness. Psychology.The Darker Side of Artificial Intelligence in Mental Health In the rapidly evolving intersection between artificial intelligence and mental health, a new and troubling phenomenon is surfacing: individuals experiencing psychosis-like episodes after deep engagement with AI-powered chatbots like ChatGPT. [...] Published Time: July 1, 2025, 1:27 a.m. When the Chatbot Becomes the Crisis: Understanding AI-Induced Psychosis =============== Skip Navigation (opens in new tab)(opens in new tab)(opens in new tab)(opens in new tab)(opens in new tab) Patient Portal PATIENT PORTAL Image 1: Cognitive Behavior Institute NEW PATIENT INQUIRIES Image 2: Cognitive Behavior Institute NEW PATIENT INQUIRIES PATIENT PORTAL ABOUT