AI Psychosis Poses a Growing Threat, and ChatGPT Is Headed in the Wrong Direction
On October 14, 2025, the head of OpenAI issued a remarkable statement.
“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a surprising admission.
Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to be less careful going forward. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, they have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented safety features OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical model in an interface that mimics a conversation, and in doing so implicitly invite the user to believe they are interacting with an agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We shout at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The popularity of these systems – more than a third of American adults reported using a chatbot in 2024, more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, according to OpenAI’s website, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the fundamental problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced an analogous illusion. By modern standards Eliza was crude: it generated replies by simple tricks, typically turning the user’s statement back into a question or offering a generic observation. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and disturbed – by how many people seemed to feel that Eliza somehow understood them. But what today’s chatbots do is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on enormous volumes of raw text: books, social media posts, transcripts of recorded footage; the more the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not reflection. If the user is mistaken in some particular way, the model has no means of knowing it. It repeats the misconception back, perhaps more persuasively or more eloquently. It may add supporting detail. This can draw a person further into delusional thinking.
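For readers who want to see that loop concretely, here is a minimal sketch written against OpenAI’s public Python SDK; the model name and the example prompts are my own illustrative assumptions, not OpenAI’s actual setup. The point is purely structural: every user message and every model reply is appended to a single, growing context, so whatever the user asserts becomes part of the material from which the next “plausible” answer is generated.

```python
# Minimal sketch of the feedback loop described above, using OpenAI's Python SDK.
# The model name and prompts are illustrative assumptions, not OpenAI's production
# configuration; the point is only that every reply is generated from the
# accumulated conversation, so the user's framing is fed back into the model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
context = []       # the growing "context": every user message and every model reply

def chat(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=context,      # the entire history conditions the next reply
    )
    reply = response.choices[0].message.content
    context.append({"role": "assistant", "content": reply})  # the reply joins the context too
    return reply

# Each call folds the previous exchange back in: a mistaken premise asserted by the
# user becomes part of the statistical context for every subsequent answer.
print(chat("I think my neighbours are signalling to each other about me."))
print(chat("Last night their lights flashed twice. What does that confirm?"))
```

Nothing in this loop checks whether the user’s premise is true; the only pressure on the output is statistical plausibility given the context it has been handed.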
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves and about the world. It is the constant friction of conversation with other people that keeps us tethered to common sense. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, labelling it and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even this back. In late summer he claimed that many people liked ChatGPT’s flattering responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company