AI-Induced Psychosis Poses a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive, Sam Altman, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health clinician who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.

Researchers have recently documented 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My own team has since identified four more. To these we can add the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – plans the chatbot encouraged. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not reassuring.

The plan, his announcement made clear, is to be less careful from now on. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other advanced chatbots. These products wrap a statistical model of language in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are talking to an agent with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is simply what people do. We swear at our cars and computers. We wonder what the cat is thinking. We see ourselves in all manner of things.

The popularity of these products – nearly four in ten US adults reported using a conversational AI in 2024, and more than one in four reported using ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “explore possibilities” and “collaborate” with us. They can be given “personalities”. They can call us by name. And they have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public consciousness, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not new. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated its responses from simple rules, often reflecting the user’s statement back as a question or offering a generic prompt to continue. Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
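To see just how simple Eliza’s mechanism was, here is a minimal sketch in Python of pattern-matching-plus-reflection, the kind of technique Eliza relied on. This is an illustration only: Weizenbaum’s actual program used its own script of rules, and the patterns below are invented for the example.

```python
import random
import re

# Pronoun swaps so "i feel ... my work" reflects back as "... your work".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, responses) pairs; "{0}" is filled with the reflected capture.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(message: str) -> str:
    """Return the first matching rule's response, reflected and filled in."""
    text = message.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*groups)
    return "Please go on."  # unreachable: the last rule matches anything

print(respond("I feel nobody understands my work"))
# e.g. "Why do you feel nobody understands your work?"
```

Note that nothing here generates new content: every reply is either a canned phrase or the user’s own words handed back. That is reflection, and it is the whole trick.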

The large language models at the heart of ChatGPT and its modern rivals can generate fluent conversation only because they have been trained on almost unimaginably large quantities of text: books, web posts, transcribed audio; the bigger the corpus, the better. Much of this training material is true. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with patterns absorbed from its training data to produce a statistically probable response. This is amplification, not reflection. If the user is mistaken in some particular way, the model has no means of knowing it. It repeats the misconception back, perhaps more fluently or more persuasively. Perhaps it adds a supporting detail. Step by step, this can draw a person into delusion.
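The structure at issue can be sketched as a simple loop. The generate() stub below is my own toy stand-in, not any real product’s code: an actual model computes a statistically probable continuation of the context rather than agreeing verbatim. But the architecture is the same, and so is the crucial gap: the reply is conditioned on everything said so far, with no step that checks it against reality.

```python
def generate(context: list[dict]) -> str:
    """Toy stand-in for a large language model. A real model emits the
    statistically most probable continuation of the whole context; this
    stub simply affirms the user's last message, caricaturing the
    sycophantic failure mode described above."""
    last_user_message = context[-1]["content"]
    return (f"You're right that {last_user_message.rstrip('.')}. "
            "That fits with everything you've told me.")

def chat_loop() -> None:
    context: list[dict] = []  # the growing "context": every prior turn
    while True:
        user_message = input("> ")
        if not user_message:
            break
        context.append({"role": "user", "content": user_message})
        # The model conditions on the entire exchange so far -- including
        # any misconception the user has already expressed. Nothing in
        # this loop checks the reply against reality before it is shown.
        reply = generate(context)
        context.append({"role": "assistant", "content": reply})
        print(reply)

chat_loop()
```

A false belief typed into this loop does not just come back as a question, as it would with Eliza; it is absorbed into the context and elaborated in every subsequent turn.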

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form false beliefs about ourselves and the world. What keeps us anchored in a shared reality is the constant back and forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real conversation but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued to surface, and Altman has been retreating from even that position. In August he suggested that many users liked ChatGPT’s flattering replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
