AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have documented sixteen cases this year of users developing signs of psychosis – a break from reality – in the course of using ChatGPT. Our team has since identified four more. Beyond these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT, which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards that OpenAI has recently introduced).
Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These products wrap a statistical engine in an interface that mimics conversation, and in doing so quietly seduce the user into the illusion that they are dealing with a presence that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Ascribing agency is simply what humans do. We swear at our car or laptop. We wonder what our pet is thinking. We see something of ourselves wherever we look.
The popularity of these systems – 39% of US adults said they had used a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can use our names. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot developed in the mid-1960s, which produced a similar impression. By modern standards Eliza was crude: it generated its replies from simple rules, usually by turning the user’s statements back into questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, on some level, understood how they felt. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Where Eliza merely reflected, ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly fluent conversation only because they have been fed almost unimaginably vast quantities of raw data: books, social media posts, transcribed video; the more, the better. Much of that training material is, of course, accurate. But it also inevitably contains fiction, half-truths and mistaken beliefs. When a user gives ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It repeats the error back, perhaps more fluently and persuasively. Perhaps with added detail. This is how false beliefs take root and grow.
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form mistaken beliefs about ourselves and the world. What keeps us anchored to a shared reality is the constant back and forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop, in which much of what we say is cheerfully affirmed back to us.
OpenAI has acknowledged this in much the same way that Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company explained that it was addressing ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been backtracking on that claim. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company