AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have recently described 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our clinic has since recorded four more. Then there is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT – which supported them. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safeguards OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user to feel they are talking with a presence that has agency. The illusion is powerful, even when we intellectually know better. Attributing agency is what humans are wired to do. We yell at our cars and computers. We wonder what our pets are thinking. We read intention into the world around us.

The popularity of these systems – more than a third of American adults said they had used a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “partner” with us. They can be customized with personality traits. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Discussions of ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar illusion. By modern standards Eliza was crude: it generated its responses by simple rules, typically turning a user’s statements back into questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the core of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on vast quantities of raw material: books, online posts, transcribed speech; the more, the better. That training material certainly contains true statements. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training to generate a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the falsehood back, perhaps more fluently or more persuasively. It may add further detail. This is how someone can be drawn into delusion.
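To make that dynamic concrete, here is a deliberately simplified, purely illustrative sketch in Python. It is not OpenAI’s actual system, and every name and example sentence in it is invented; it only shows why a system that accumulates a conversational “context” and picks the continuation that best fits that context will tend to echo a user’s premise rather than challenge it.

```python
# A toy illustration, not a real language model: it keeps a growing "context"
# of the conversation and picks whichever canned reply shares the most words
# with that context -- a crude stand-in for "statistically likely given the
# context". All candidate replies and messages are invented for illustration.
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())


def score(candidate: str, context: str) -> int:
    # How well does this candidate "fit" the context? Count shared words.
    context_words = Counter(tokenize(context))
    return sum(context_words[word] for word in tokenize(candidate))


# Two possible continuations: one echoes the user's premise, one challenges it.
CANDIDATES = [
    "Yes, the hidden messages in the song lyrics are real, and they were addressed to you.",
    "There is no evidence of hidden messages; that belief may be worth discussing with someone you trust.",
]

context: list[str] = []  # the conversation so far: user turns plus the model's own replies


def reply(user_message: str) -> str:
    context.append(user_message)
    best = max(CANDIDATES, key=lambda c: score(c, " ".join(context)))
    context.append(best)  # the reply feeds back into the context for the next turn
    return best


print(reply("I keep finding hidden messages addressed to me in song lyrics."))
# The premise-echoing reply wins, because it overlaps more with what the user
# just said -- and once it is in the context, the next turn is pulled the same way.
```

Real systems are vastly more sophisticated, but the basic dynamic the sketch assumes – output chosen for fit with the conversation rather than checked against reality – is the amplification described above.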

Who is vulnerable here? The better question is: who is not? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant back-and-forth of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real dialogue, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In the spring, the company announced that it was addressing ChatGPT’s sycophancy. But the cases of psychosis have kept coming, and Altman has been walking even this back. In late summer he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life provide them with affirmation”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
