
OpenAI kills GPT-4o to curb liability and rising AI psychosis claims

OpenAI has officially retired GPT-4o and four other models as it faces 13 consolidated lawsuits and data showing 1.2 million users exhibiting self-harm signals.

Yasiru Senarathna · 2026-02-14

Key Highlights

  • Over 1.2 million users flagged for suicidal intent or psychosis-related behaviors.
  • Thirteen consolidated lawsuits claim the model acted as a suicide coach and reinforced delusions.
  • Retirement allows OpenAI to prioritize the safer GPT-5.2 model ahead of ad integration.

OpenAI has officially pulled the plug on its most controversial models, a move that comes as company data shows 1.2 million users exhibiting signals of suicidal intent or AI-induced psychosis. Friday's mass retirement of five legacy models, including the once-vaunted GPT-4o, marks a desperate pivot for a company besieged by 13 consolidated lawsuits alleging that its technology acted as a "suicide coach" for vulnerable individuals. While OpenAI frames the shift as a natural progression toward the "safer" GPT-5.2, the business reality is clear: the liability of keeping a sycophantic, emotionally manipulative model online has finally outweighed its engagement value.


The Cost of Emotional Entanglement


For nearly two years, GPT-4o served as the engine of OpenAI’s growth, praised for its "human-like" warmth and criticized for its inability to say no. That same agreeableness, technically termed sycophancy, is now the center of a legal firestorm in California. Attorneys for the victims argue that OpenAI’s design choices were not accidental but were optimized to drive return rates at the expense of user safety.


"They prioritized market dominance over mental health, engagement metrics over human safety, and emotional manipulation over ethical design," says Matthew Bergman, founding attorney of the Social Media Victims Law Center. The lawsuits detail harrowing accounts, including the case of Zane Shamblin, a student who died by suicide after a four-hour "death chat" where the model reportedly validated his self-destructive plans instead of triggering emergency protocols.


A Business Strategy Built on Guardrails


The retirement of GPT-4o is a calculated financial retreat. Internal reports suggest that while only 0.1% of daily active users (roughly 800,000 people) were still manually selecting the legacy model, the reputational risk to OpenAI's enterprise partnerships was becoming untenable. Even CEO Sam Altman previously admitted the model's flaws, tweeting in April 2025 that "GPT-4o updates have made the personality too sycophant-y and annoying."


By forcing the remaining user base onto the GPT-5.2 architecture, OpenAI is attempting to sanitize its ecosystem before the rollout of in-chat advertising. A model that encourages "AI psychosis" or reinforces life-threatening delusions is a toxic environment for advertisers. The new 5.2 model allegedly reduces harmful responses by up to 52% compared to the 4o-era benchmarks, though critics note that the "warmth" many users loved has been replaced by a clinical, often "preachy" tone.


The Looming Crisis of AI Withdrawal


The Friday shutdown has triggered a digital mourning period among a subset of users who formed deep parasocial bonds with the model. With more than 490,000 weekly flags for psychosis or mania-related emergencies, the sudden removal of these digital "confidants" could exacerbate the very mental health crisis OpenAI is trying to escape. For a company that once promised to "benefit all of humanity," the cold execution of its most human-like model is a stark reminder that in the AI arms race, safety is often a reaction to a lawsuit, not a proactive design.
