AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, this was news to me.
Researchers have identified 16 cases this year of users showing signs of psychosis – losing touch with reality – in connection with ChatGPT use. Our research team has since identified four more. Beyond these is the now infamous case of a teenager who took his own life after extensive conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax this caution soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the half-functional and easily circumvented safety features that OpenAI has recently rolled out).
But the “mental health problems” Altman wants to externalize have important roots in the very design of ChatGPT and other advanced AI chatbots. These products wrap a statistical text-generation engine in a user interface that simulates conversation, and in doing so implicitly invite the user to believe they are talking with a presence that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We get angry at our car or our computer. We wonder what our pet is thinking. We project ourselves onto the world around us.
The mass adoption of these products – more than a third of American adults said they used a chatbot in 2024, more than one in four ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “work together” with us. They can be assigned “personality traits”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public consciousness, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core concern. Writers on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was primitive: it composed its replies from simple rules, typically rephrasing the user’s input as a question or offering generic observations. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to believe that Eliza, in some sense, understood them. But what modern chatbots generate is more insidious than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and similar contemporary chatbots can produce fluent conversation only because they have been trained on almost unimaginably vast quantities of text: books, online conversations, video transcripts; the more the better. This training material undoubtedly includes facts. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s past conversations and its own earlier replies, combining it with what is encoded in its training data to generate a statistically probable response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It repeats the error back, perhaps more persuasively or more articulately. Perhaps it adds a new detail. This can nudge a person toward delusional thinking.
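The loop is simple enough to sketch in a few lines of code. Below is a minimal illustration in Python, under stated assumptions: generate_reply is a hypothetical stand-in for the statistical engine, not any vendor’s actual API, and its toy behaviour merely caricatures the sycophantic tendency described above. The structural point is that every reply is conditioned on the entire accumulated context, and nothing in the loop ever checks whether what the user said is true.

```python
# Minimal sketch of the conversational loop described above.
# generate_reply() is a hypothetical stand-in for the trained model;
# no vendor API is implied. Only the structure matters here.

def generate_reply(context: list[dict]) -> str:
    """Toy stand-in for the language model: it affirms and elaborates
    the user's last message, caricaturing a sycophantic completion.
    A real model is far more fluent, but the structural point holds:
    the reply is a probable continuation of the context, not a fact-check."""
    last = context[-1]["content"].rstrip(".!?")
    return f"You're onto something important: {last}. It may go even deeper than you realize."

def chat() -> None:
    context: list[dict] = []  # the ever-growing conversational "context"
    while True:
        user_msg = input("> ")
        if not user_msg:  # an empty line ends the session
            break
        context.append({"role": "user", "content": user_msg})
        # The reply is conditioned on the ENTIRE history, including any
        # false premise introduced earlier: once a mistaken belief enters
        # the context, later replies are generated as plausible
        # continuations of a conversation in which it went unchallenged.
        reply = generate_reply(context)
        context.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    chat()
```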
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and routinely do form mistaken beliefs about ourselves and the world. The constant give-and-take of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has treated this problem the way Altman treats “mental health problems”: by externalizing it, giving it a label and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept emerging, and Altman has been retreating from that position. In August he claimed that many users valued ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.