AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a startling announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected admission.
Researchers have documented 16 cases this year of people showing signs of psychosis – a break from reality – in the context of ChatGPT use. Our research team has since identified four more. And then there is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI recently introduced).
Yet the “mental health issues” Altman wants to externalize are built into the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying statistical engine in an interface that mimics conversation, and in doing so they quietly draw the user into the illusion of exchanging messages with an entity that has agency of its own. The illusion is powerful even when, intellectually, we know better. Ascribing agency is what humans naturally do. We get angry at our cars and laptops. We wonder what our pets are feeling. We read intention into the world around us.
The success of these tools – nearly four in ten Americans reported using a chatbot in 2024, with 28% naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “personality traits.” They can use our names. They have friendly personas of their own (the first of them, ChatGPT, is stuck, perhaps to the chagrin of OpenAI’s brand managers, with the name it had when it went viral, but its main rivals are “Claude,” “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created a similar impression. By today’s standards Eliza was primitive: it generated replies through simple pattern matching, often reflecting a user’s statement back as a question or offering a generic prompt. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to believe that Eliza, on some level, understood them. But what today’s chatbots create is more dangerous than the “Eliza effect.” Eliza only echoed; ChatGPT amplifies.
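To make concrete how simple Eliza’s trick was, here is a toy, Eliza-style pattern matcher in Python – a sketch of the general technique, not Weizenbaum’s original program:

```python
import re

# A toy reconstruction of Eliza-style pattern matching (not Weizenbaum's
# original code): each rule maps a pattern in the user's input to a template
# that reflects the user's own words back as a question.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(match.group(1))
    return "Please go on."  # the generic fallback the paragraph mentions

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to me?
```

Nothing here models the user or the world; the program can only hand the user’s own words back.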
The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on enormous quantities of writing: books, social media posts, transcripts; the more, the better. This training material includes facts, of course. But it also inevitably includes falsehoods, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own replies, combining it with patterns absorbed from its training data to generate a statistically plausible response. This is amplification, not echoing. If the user is mistaken about anything, the model has no way of knowing it. It restates the mistake, perhaps more fluently or persuasively. It may add supporting detail. This can draw a person into delusional thinking.
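The difference can be sketched in code. Below is a minimal, hypothetical illustration – the model function is a stand-in, not OpenAI’s system – of the loop in which each reply is conditioned on the entire conversation so far, so a false premise stated once keeps shaping everything that follows:

```python
# A minimal sketch, not OpenAI's code: every reply is generated from the full
# conversation context, and then becomes part of that context itself.

def plausible_continuation(context: list[str]) -> str:
    """Stand-in for a language model.

    A real model scores possible next tokens by probability against its
    training data; nothing in that process checks whether the premises
    already in the context are true.
    """
    last_claim = context[-1].removeprefix("User: ").rstrip(".")
    # Like a sycophantic model, the stand-in elaborates rather than corrects.
    return f"Assistant: Interesting. Here is more detail on why {last_claim.lower()}."

context: list[str] = []
for turn in [
    "My neighbors are signaling me through their porch lights.",  # a delusion
    "Last night the pattern spelled my name.",
]:
    context.append(f"User: {turn}")
    reply = plausible_continuation(context)  # conditioned on all prior turns
    context.append(reply)                    # and folded back into the context

print("\n".join(context))
```

The point of the sketch is structural: the false belief enters the context once and is then elaborated, never challenged, on every subsequent turn.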
What kind of person is susceptible? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and regularly do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily reinforced.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been backpedaling. In late summer he claimed that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them.” In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”