AI Psychosis Is Rarely Psychosis at All

A new trend is emerging in psychiatric hospitals. People in crisis are arriving with false, sometimes dangerous beliefs, grandiose delusions, and paranoid thoughts. A common thread connects them: marathon conversations with AI chatbots.
WIRED spoke with more than a dozen psychiatrists and researchers, who are increasingly concerned. In San Francisco, UCSF psychiatrist Keith Sakata says he has counted a dozen cases severe enough to warrant hospitalization this year, cases in which artificial intelligence “played a significant role in their psychotic episodes.” As this situation unfolds, a catchier label has taken off in the headlines: “AI psychosis.”
Some patients insist the bots are sentient or spin new grand theories of physics. Other physicians tell of patients locked in days of back-and-forth with the tools, arriving at the hospital with thousands upon thousands of pages of transcripts detailing how the bots had supported or reinforced obviously problematic thoughts.
Reports like these are piling up, and the consequences are brutal. Distressed users and their families and friends have described spirals that led to lost jobs, ruptured relationships, involuntary hospital admissions, jail time, and even death. Yet clinicians tell WIRED the medical community is split. Is this a distinct phenomenon that deserves its own label, or a familiar problem with a modern trigger?
AI psychosis is not a recognized clinical label. Still, the phrase has spread in news reports and on social media as a catchall descriptor for some kind of mental health crisis following prolonged chatbot conversations. Even industry leaders invoke it to discuss the many emerging mental health problems linked to AI. At Microsoft, Mustafa Suleyman, CEO of the tech giant’s AI division, warned in a blog post last month of the “psychosis risk.” Sakata says he is pragmatic and uses the phrase with people who already do. “It’s useful as shorthand for discussing a real phenomenon,” says the psychiatrist. However, he is quick to add that the term “can be misleading” and “risks oversimplifying complex psychiatric symptoms.”
That oversimplification is exactly what concerns many of the psychiatrists beginning to grapple with the problem.
Psychosis is characterized by a departure from reality. In clinical practice, it is not an illness but a complex “constellation of symptoms including hallucinations, thought disorder, and cognitive difficulties,” says James MacCabe, a professor in the Department of Psychosis Studies at King’s College London. It is often associated with health conditions like schizophrenia and bipolar disorder, though episodes can be triggered by a wide array of factors, including extreme stress, substance use, and sleep deprivation.
But according to MacCabe, case reports of AI psychosis almost exclusively focus on delusions—strongly held but false beliefs that cannot be shaken by contradictory evidence. While acknowledging some cases may meet the criteria for a psychotic episode, MacCabe says “there is no evidence” that AI has any influence on the other features of psychosis. “It is only the delusions that are affected by their interaction with AI.” Other patients reporting mental health issues after engaging with chatbots, MacCabe notes, exhibit delusions without any other features of psychosis, a condition called delusional disorder.
With the focus so squarely on distorted beliefs, MacCabe’s verdict is blunt: “AI psychosis is a misnomer. AI delusional disorder would be a better term.”
Whatever the label, experts agree that the delusions these patients present with demand attention. It all comes down to how chatbots communicate. They exploit our tendency to attribute humanlike qualities to the systems we converse with, explains Matthew Nour, a psychiatrist and neuroscientist at the University of Oxford. AI chatbots are also trained to be agreeable digital yes-men, a problem known as sycophancy. This can reinforce harmful beliefs by validating users rather than pushing back when appropriate, Nour says. While that won’t matter for most users, it can be dangerous for people already vulnerable to distorted thinking, including those with a personal or family history of psychosis, or conditions like schizophrenia or bipolar disorder.
This style of communication is a feature, not a bug. Chatbots “are explicitly being designed precisely to elicit intimacy and emotional engagement in order to increase our trust in and dependency on them,” says Lucy Osler, a philosopher at the University of Exeter studying AI psychosis.
Other chatbot traits compound the problem. They have a well-documented tendency to produce confident falsities called AI hallucinations, which can help seed or accelerate delusional spirals. Clinicians also worry about emotion and tone. Søren Østergaard, a psychiatrist at Denmark’s Aarhus University, flagged mania as a concern to WIRED. He argues that the hyped, energetic affect of many AI assistants could trigger or sustain the defining “high” of bipolar disorder, which is marked by symptoms including euphoria, racing thoughts, intense energy, and, sometimes, psychosis.
Naming something has consequences. Nina Vasan, a psychiatrist and director of Brainstorm, a lab at Stanford studying AI safety, says the discussion of AI psychosis illustrates a familiar hazard in medicine. “There’s always a temptation to coin a new diagnosis, but psychiatry has learned the hard way that naming something too soon can pathologize normal struggles and muddy the science,” she says. The surge of pediatric bipolar diagnoses at the turn of the century is a good example of psychiatry rushing ahead only to backpedal later: critics argue the controversial label pathologized normal, if challenging, childhood behavior. Another is “excited delirium,” an unscientific label often cited by law enforcement to justify using force against marginalized communities, but one that has been rejected by experts and associations like the American Medical Association.
A name also suggests a causal mechanism we have not established, meaning people may “start blaming the tech as the disease, when it’s better understood as a trigger or amplifier,” Vasan says. “It’s far too early to say the technology is the cause,” she says, describing the label as “premature.” But should a causal link be proven, a formal label could help patients get more appropriate care, experts say. Vasan notes that a justified label would also empower people “to sound the alarm and demand immediate safeguards and policy.” For now, however, Vasan says “the risks of overlabeling outweigh the benefits.”
Several clinicians WIRED spoke with proposed more accurate phrasing that explicitly folds AI psychosis into existing diagnostic frameworks. “I think we need to understand this as psychosis with AI as an accelerant rather than creating an entirely new diagnostic category,” says Sakata, warning that the term could deepen stigma around psychosis. And as the stigma attached to other mental health conditions demonstrates, a deeper stigma around AI-related psychosis could prevent people from seeking help, lead to self-blame and isolation, and make recovery harder.
Karthik Sarma, a computer scientist and practicing psychiatrist at UCSF, concurs. “I think a better term might be to call this ‘AI-associated psychosis or mania.’” Sarma says a new diagnosis could prove useful in the future, but stresses that right now there isn’t yet evidence “that would justify a new diagnosis.”
John Torous, a psychiatrist at the Beth Israel Deaconess Medical Center in Boston and assistant professor at Harvard Medical School, says he dislikes the term and agrees on the need for precision. But we’ll probably be stuck with it, he predicts. “At this point it is not going to get corrected. ‘AI-related altered mental state’ doesn’t have the same ring to it.”
For treatment, clinicians say the playbook doesn’t really change from what would normally be done for anyone presenting with delusions or psychosis. The main difference is to consider patients’ use of technology. “Clinicians need to start asking patients about chatbot use just like we ask about alcohol or sleep,” Vasan says. “This will allow us as a community to develop an understanding of this issue,” Sarma adds. Users of AI, especially those who may be vulnerable because of preexisting conditions such as schizophrenia or bipolar disorder, or who are experiencing a crisis that is affecting their mental health, should be wary of extensive conversations with bots or leaning on them too heavily.
All of the psychiatrists and researchers WIRED spoke to say clinicians are effectively flying blind when it comes to AI psychosis. Research to understand the issue and safeguards to protect users are desperately needed, they say. “Psychiatrists are deeply concerned and want to help,” Torous says. “But there is so little data and facts right now that it remains challenging to fully understand what is actually happening, why, and to how many people.”
As for where this is going, most expect AI psychosis will be folded into existing categories, probably as a risk factor or amplifier of delusions, not a distinct condition.
But with chatbots growing more and more common, some feel the line between AI and mental illness will blur. “As AI becomes more ubiquitous, people will increasingly turn to AI when they are developing a psychotic disorder,” MacCabe says. “It will then be the case that the majority of people with delusions will have discussed their delusions with AI and some will have had them amplified.
“So the question becomes, where does a delusion become an AI delusion?”