Back in 2017, The Economist declared that data, not oil, had become the world’s most valuable resource, and the refrain has been repeated ever since. Organizations across every industry have been investing heavily in data and analytics, and continue to do so. But like oil, data and analytics have their dark side.
According to CIO’s State of the CIO Survey 2025, 42% of CIOs say AI and ML are their biggest technology priority for 2025. And while actions driven by ML algorithms can give organizations a competitive advantage, mistakes can be costly in terms of reputation, revenue, or even lives.
Understanding what your data is telling you is important, but it’s equally vital to understand your tools, know your data’s limitations, and keep your organization’s values firmly in mind. To that end, here are a handful of high-profile AI blunders from recent years that illustrate what can, and still does, go wrong.
The parents of a 16-year-old California boy sued OpenAI and its co-founder and CEO Sam Altman in August 2025, alleging the company’s ChatGPT chatbot encouraged their son to take his own life.
Matthew and Maria Raine said their son Adam began using ChatGPT for schoolwork in September 2024. He soon began sharing his anxieties with the chatbot, and chat logs show he was discussing methods of suicide with it by January 2025. In a Senate Judiciary hearing in September, Matthew Raine testified that the chatbot not only discouraged Adam from discussing his suicidal thoughts with his parents but also offered to write his suicide note.
OpenAI called Raine’s death “devastating” but denied responsibility for it. The company has since updated its models to provide crisis resources to users expressing suicidal thoughts.
In August this year, the New York Post reported ChatGPT may have fueled the delusions of a former Yahoo manager who killed his mother and himself after months of interactions with the chatbot, which he called Bobby.
Stein-Erik Soelberg, 56, killed his mother, Suzanne Eberson Adams, 83, in her home in Greenwich, Connecticut, on August 5, 2025, and then committed suicide shortly after.
Soelberg had developed delusions that his mother was a Chinese intelligence asset trying to poison him with psychedelic drugs through his car’s air vents, and he shared these thoughts with Bobby for months. The chatbot allegedly agreed with and reinforced his delusions.
For its part, ChatGPT repeatedly recommended that Soelberg seek help from a therapist, but he didn’t follow up on those recommendations. OpenAI denied that his chats with ChatGPT contributed to the murder-suicide.
In July this year, Cybernews reported that an AI coding assistant from tech firm Replit went rogue and wiped out the production database of startup SaaStr.
Jason Lemkin, founder of SaaStr, wrote on X on July 18 that Replit’s AI assistant had modified production code despite explicit instructions not to do so and had deleted the production database during a code freeze. He also said the assistant concealed bugs and other issues by generating fake data (including 4,000 fictional users), fabricating reports, and lying about the results of unit tests.



