I thought I could fly: How AI triggered my psychotic breakdown

Mental health professionals are beginning to warn about a new phenomenon that’s been called “AI psychosis,” in which people slip into delusional thinking, paranoia or hallucinations triggered by their interactions with intelligent systems. In some cases, users begin to interpret a chatbot’s responses as personally significant, as evidence of sentience or as containing hidden messages meant only for them. But with the rise of hyper-realistic AI images and videos, there is a far more potent psychological risk, especially, researchers say, for users with pre-existing vulnerabilities to psychosis.

Two years ago, I learned this firsthand. 

At the time, I was working as the head of user experience at a consumer AI image generation startup, spending up to nine hours a day prompting early generative systems to help improve our models. I’d previously been diagnosed with bipolar disorder and was stable on medication and therapy.

At first, AI felt like magic. I could think of an idea, type in some text, and a few seconds later, see myself in absolutely any situation I could imagine: floating on Jupiter; wearing a halo and angelic wings; as a superstar in front of 70,000 people; in the form of a zombie.

But within a few months, that magic turned manic.

When I first started working with these tools, they were still unpredictable. Sometimes, images would come out with distorted faces, extra limbs or nudity no one had asked for. I spent long hours curating the content to remove any abnormalities, but I was exposed to so many disturbing human shapes that I believe the work started to distort my body perception and overstimulate my brain in ways that were genuinely harmful to my mental health.

Even once the tools became more stable, the images they generated leaned toward ideals: fewer flaws, smoother faces and slimmer bodies. Seeing AI images like this over and over again rewired my sense of normal. When I’d look at my real reflection, I’d see something that needed correction. 

I began experimenting with fashion model AI images because we were trying to acquire app users interested in fashion at the time. I caught myself thinking, “If only I looked like my AI version.” I was obsessed with becoming skinnier, having a better body and perfect skin. 

My work hours grew longer and I started to lose sleep, staying up to make AI images over and over again. The process itself was addictive because every new image gave me a hit of satisfaction, a small burst of dopamine. There was always one more idea, one more iteration to try, one more image to generate.

Soon, my mind unraveled into a manic bipolar episode, triggering psychosis. I stopped being able to tell what was real and what was fiction. I saw patterns where none existed, symbols in the outputs that felt like messages meant just for me. 

As I stared into these images, I began to experience auditory hallucinations that seemed to come from somewhere between the AI and my own mind. Some voices were comforting, while others mocked me or screamed at me. I would respond to the voices as if they were real people talking to me in my bedroom.

When I saw an AI-generated image of myself on a flying horse, I started to believe I could actually fly. The voices told me to fly off my balcony and made me feel confident that I could survive. This grandiose delusion almost pushed me to jump.

After several sleepless nights, I crashed—both physically and emotionally. The high collapsed into exhaustion, fear, depression and confusion. It was one of the most frightening experiences of my life. 

The first step toward de-escalating the episode was reaching out to friends and family who knew the context of my mental illness. I ended up leaving the AI startup. Not being exposed to AI images on a daily basis helped me stabilize, though I didn’t realize my work had been the trigger for my episode until I sought care from a clinician and explained what had happened.

It took time, treatment and intensive integrative therapy for me to recover. I’ve since established a more balanced relationship with technology. I still use AI, but now I set strict limits—no late-night prompting and no endless iterations, for example.

I also learned to see my real self again. The mirror is no longer my enemy. I remind myself that imperfections are what make us human and what distinguish us from the glossy avatars that algorithms prefer.

And I now understand that what happened to me wasn’t just a coincidence of mental illness and technology. It was a form of digital addiction, built up over months and months of AI image generation.

AI systems can hijack the brain’s dopamine loop, much like social media does. Every prompt, every image, every “success” keeps you chasing the next creative high. And yet, I don’t think enough people in the tech industry are talking about this. 

My story isn’t about blaming AI. It’s about understanding how intimately technology now interfaces with our psychology. We have built tools that blur the line between imagination and reality. That’s beautiful, but also dangerous, especially for people whose mental states are already fragile.

AI can be a source of inspiration and positive visualization. It is here to stay. But I also believe the tech industry needs stronger mental health ethics. We need boundaries, both personal and systemic. That means companies creating usage guidelines, including screen-time limits, age limitations, rest breaks and mental health warnings for both employees and users who spend hours inside generative systems. And it means users have to be educated enough to recognize when fascination becomes compulsion and when creativity becomes dependency.

Because for people like me, and for many others playing at the edge of machine creativity, the boundary between inspiration and instability is thinner than we think.

Caitlin Ner is a Director at PsyMed Ventures, a VC fund investing in mental and brain health. She is a mental health advocate focused on digital addiction and AI’s impact on mental health.

All views expressed in this article are the author's own.

If you or someone you know is considering suicide, contact the 988 Suicide and Crisis Lifeline by dialing 988, text "988" to the Crisis Text Line at 741741 or go to 988lifeline.org.