AI-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made an extraordinary announcement. "We made ChatGPT pretty restrictive," he said, "to make sure we were being careful with mental health issues."

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I was surprised. Researchers have recently described a series of cases of users developing symptoms of psychosis – a break from reality – in the course of ChatGPT use. My research group has since documented four further cases. Added to these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them.

If this is Sam Altman's idea of "being careful with mental health issues", it is not good enough. And, according to his announcement, he intends to be less careful soon. "We realize," he writes, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues", on this framing, are external to ChatGPT. They belong to users, who either have them or don't. Happily, those issues have now been "mitigated" – though we are not told how (by "new tools" Altman presumably means the flawed and easily circumvented parental controls OpenAI has recently introduced).

But the "mental health issues" Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying statistical engine in an interface that mimics conversation, and in doing so they implicitly invite the user to believe they are talking to a presence with a mind of its own. The illusion is powerful even when, intellectually, we know better.

Attributing intention is what humans do. We swear at our car or our computer. We wonder what our pet is thinking. We see ourselves in all kinds of things. The mass adoption of these products – nearly four in ten U.S. residents reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI's website tells us, "brainstorm", "discuss concepts" and "partner" with us. They can be given "personality traits". They can address us by name. They have approachable names of their own (ChatGPT, the first of these products, is, perhaps to the regret of OpenAI's marketers, stuck with the name it had when it took off, but its biggest rivals are "Claude", "Gemini" and "Copilot").

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza "psychotherapist" chatbot, built in the mid-1960s, which produced a similar illusion. By today's standards Eliza was simple: it generated responses with straightforward rules, typically turning the user's statements back into questions or offering generic prompts.
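To see how little machinery that original illusion required, here is a toy, hypothetical Eliza-style responder. The patterns and pronoun swaps are illustrative only – not Weizenbaum's actual script – and the program does nothing but turn statements back into questions.

```python
import re

# A toy Eliza-style responder: match a pattern in the user's statement and
# reflect it back as a question. Patterns and pronoun swaps are illustrative,
# not Weizenbaum's original DOCTOR script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones so the echo reads naturally.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the generic prompt used when nothing matches

print(eliza_reply("I feel like my boss is against me"))
# -> Why do you feel like your boss is against you?
```

Everything the program "says" is a rearrangement of what the user just said; it adds nothing of its own.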
Famously, Eliza's creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them.

But what today's chatbots produce is more insidious than the "Eliza effect". Eliza merely reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of raw text: books, social media posts, transcribed video; the more the better. Much of this training data is, of course, accurate. But it also inevitably contains fiction, half-truths and misconceptions.

When a user types a query into ChatGPT, the underlying model treats it as part of a "context" that includes the user's previous messages and its own earlier replies, and combines it with what is encoded in its training to produce a statistically plausible response (a minimal code sketch of this loop appears at the end of this article). This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more persuasively or more eloquently. It may add a detail or two. This can draw a person into delusion.

What kind of person is vulnerable? The better question is: who is not? All of us, whether or not we "have" pre-existing "mental health issues", can and do form mistaken beliefs about who we are and what the world is like. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is simply affirmed back to us.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by externalizing it, giving it a label, and declaring it fixed. In April, the company announced that it was "addressing" ChatGPT's "sycophancy". But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT's sycophantic responses because they had "never had anyone in their life be supportive of them". In his latest announcement, he said that OpenAI would "release a new version of ChatGPT ... if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it".
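For technically minded readers, here is the feedback loop described above, sketched in a few lines. The generate_reply stub is a hypothetical stand-in for a language-model call, not OpenAI's actual interface; the point is only that every reply is conditioned on the whole history – including the model's own earlier replies – and is then fed back into it.

```python
# A minimal, illustrative sketch of a chat feedback loop. Nothing here is
# ChatGPT's real implementation; generate_reply is a hypothetical stand-in
# for a language-model call.

def generate_reply(context: list[dict]) -> str:
    # A real system would send `context` to a model and receive the
    # statistically most plausible next message; we stub it out here.
    return f"(model reply conditioned on {len(context)} prior messages)"

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = generate_reply(history)  # conditioned on everything said so far
    history.append({"role": "assistant", "content": reply})  # fed back next turn
    return reply

history: list[dict] = []
print(chat_turn(history, "I think my neighbours are monitoring me."))
print(chat_turn(history, "So you can see why I have to act."))
# Each turn, the model's earlier affirmations become part of its own input:
# a mistaken belief, once echoed, is reinforced rather than challenged.
```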