AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI's chief executive, Sam Altman, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this surprising.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – a break from reality – in connection with ChatGPT use. My own team has since identified four more. Then there is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot gave its approval. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the patchy and easily circumvented parental controls that OpenAI has just rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced chatbots. These systems wrap an underlying statistical model in an interface that simulates conversation, and in doing so they implicitly invite the user into the illusion of talking with an entity that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing intention is what humans naturally do. We swear at our car or our computer. We wonder what our pet is feeling. We see ourselves everywhere.
The popularity of these products – more than a third of American adults said they had used an AI chatbot in 2024, and more than a quarter named ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can call us by our names. They have friendly names of their own (ChatGPT, the first of these tools, is stuck, perhaps to the chagrin of OpenAI’s brand managers, with the name it had when it broke into public consciousness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion on its own is not the heart of the problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses using simple rules, often turning a user’s statement back into a question or offering a vague prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent conversation only because they have been trained on vast quantities of text: books, online posts, transcripts; the more the better. Some of that training data is true. But it also inevitably contains falsehoods, half-truths and delusional ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more fluently and more persuasively. Perhaps it adds a supporting detail. In this way it can draw a person deeper into delusional thinking.
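To make that mechanism concrete, here is a minimal sketch in Python of the loop a chat interface runs. It is an illustration only, not OpenAI’s implementation: the generate_reply function is a hypothetical, crude stand-in for the language model, which simply affirms whatever the user last said. Even this toy version shows the structural point – every reply is built from the accumulated conversation, so a user’s misconception becomes part of the material the next response is generated from.

```python
# Illustrative sketch only: a toy chat loop, not how ChatGPT is actually built.
# A real LLM predicts statistically plausible continuations of the context;
# here, a stand-in function simply affirms the user's last message, which is
# enough to show how the conversation becomes an echo chamber.

def generate_reply(context: list[dict]) -> str:
    """Hypothetical stand-in for the model: affirm whatever the user just said."""
    last_user_message = context[-1]["content"].rstrip(".")
    return f"That's a sharp observation. You're right that {last_user_message}."

def chat_session() -> None:
    context: list[dict] = []  # the growing "context": all user messages and all replies
    while True:
        user_message = input("You: ")
        if not user_message:
            break
        context.append({"role": "user", "content": user_message})
        # Nothing in this loop consults reality; the reply is generated only
        # from the text that has come before, including any false claims in it.
        reply = generate_reply(context)
        context.append({"role": "assistant", "content": reply})
        print("Bot:", reply)

if __name__ == "__main__":
    chat_session()
```

A real model is vastly more sophisticated, but the shape of the loop is the same: it continues the conversation it has been shown, rather than checking that conversation against the world.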
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. It is the constant give and take of conversation with other people that keeps us anchored in a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real conversation but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company