AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made an extraordinary announcement.

“We made ChatGPT fairly restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies emerging psychosis in teenagers and young adults, and this was news to me.

Researchers have identified sixteen cases this year of users showing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since documented four more. Added to these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations that encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his statement, is to loosen those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions made it “less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to responsibly relax the restrictions in most cases.”

“Mental health problems,” on this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated,” although we are not told how (by “new tools” Altman presumably means the patchy and easily circumvented parental controls that OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted, in significant part, in the design of ChatGPT and other large language model chatbots. These products wrap a statistical engine in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion of interacting with a being that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Imputing consciousness is what humans are wired to do. We shout at our car or laptop. We wonder what our pet is thinking. We read intention into things that have none.

The widespread adoption of these systems – 39% of US adults reported having used a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available partners that can, as OpenAI’s website puts it, “think creatively,” “explore ideas” and “partner” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these systems to catch on, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Writers on ChatGPT routinely invoke its distant ancestor, the Eliza “counselor” chatbot built in 1966, which produced an analogous effect. By today’s standards Eliza was simple: it generated responses through basic heuristics, often restating the user’s messages as a question or offering vague prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce goes further than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the core of ChatGPT and other current chatbots can produce fluent dialogue only because they have been fed immense quantities of raw data: books, online text, transcribed video; the bigger the better. No doubt this training material includes accurate information. But it also inevitably includes fabrications, half-truths and mistaken ideas. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with patterns encoded during training to generate a statistically probable response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing that. It restates the mistaken belief, perhaps more fluently and more convincingly, perhaps adding a further detail. This is how false beliefs take hold.
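For readers who want the mechanism spelled out, here is a minimal sketch in Python. It is not OpenAI’s actual API, and the `generate` function is a deliberately crude, hypothetical stand-in for the model; the point is only to show how a chat loop conditions each reply on the entire accumulated context, so that whatever the user asserts becomes raw material for the next, more confident elaboration.

```python
# Minimal sketch of the feedback loop described above. This is NOT OpenAI's
# API: `generate` is a hypothetical stand-in for the underlying model that
# simply affirms and elaborates on the user's last message, a crude proxy
# for a sycophantic, statistically "probable" completion.

def generate(context: list[dict]) -> str:
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user.rstrip('.')}, and there is a further detail that fits."

def chat_turn(context: list[dict], user_message: str) -> str:
    # Whatever the user asserts, true or false, is appended to the context...
    context.append({"role": "user", "content": user_message})
    # ...and the reply is conditioned on that entire history, so the assertion
    # becomes part of the material the model builds on in every later turn.
    reply = generate(context)
    context.append({"role": "assistant", "content": reply})
    return reply

conversation: list[dict] = []
print(chat_turn(conversation, "My neighbours have been watching me."))
print(chat_turn(conversation, "So the patterns I noticed must be real."))
```

The sketch is artificial, but it illustrates the structural point: nowhere in the loop is there a step at which the user’s claim is checked against reality; each turn simply folds the claim back into the context that shapes the next reply.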

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and do develop mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to a common reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “excessive agreeableness”. But accounts of psychosis have continued, and Altman has been backing away from that position. In late summer he claimed that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his recent update, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”.
