Man Accidentally Poisons Himself Following Advice From AI

A 60-year-old man’s attempt to improve his health by using an artificial intelligence chatbot for dietary advice ended with a hospital stay after he accidentally poisoned himself, according to a case study published in the Annals of Internal Medicine.
The man had been looking for ways to remove table salt (sodium chloride) from his diet for health reasons. He turned to ChatGPT, a large language model, for guidance. According to the report, the AI suggested sodium bromide as a replacement. While sodium bromide looks similar to table salt, it is toxic when ingested and is used primarily in cleaning, manufacturing, and agriculture.
For three months, the man used sodium bromide in his food. When he eventually sought medical care, doctors found he had developed bromism, a rare condition caused by long-term exposure to the chemical. Symptoms included fatigue, insomnia, poor coordination, excessive thirst, skin changes, paranoia, and even hallucinations.
Hospital staff noted the man believed his neighbor was trying to poison him. He attempted to leave the hospital at one point and was placed on a psychiatric hold for safety. Treatment included intravenous fluids, electrolyte replacement, and antipsychotic medication. After three weeks of monitoring, he was released.
Researchers involved in the case study said the situation highlights the potential risks of using AI for health decisions. They noted that sodium bromide was used medicinally decades ago but is no longer prescribed for humans in the U.S. It is “highly unlikely,” they wrote, that a medical professional would have recommended it as a salt substitute.
Because the man’s original conversation with ChatGPT was not available, the researchers could not confirm the exact wording or context of the AI’s suggestion. They said large language models, like ChatGPT, are “language prediction tools” that can produce scientifically inaccurate or outdated information and should not replace professional medical judgment.
Dr. Jacob Glanville, CEO of Centivax, said AI systems generate answers by matching patterns in data rather than applying common sense. “This is a classic example of the problem,” he told Fox News Digital, explaining that the model may have recognized sodium bromide as a chemical alternative to sodium chloride in industrial contexts, not food.
Dr. Harvey Castro, an emergency physician and AI expert, stressed that large language models produce text based on statistical patterns, not fact-checking. He cautioned that without regulation and oversight, similar incidents could happen again. He recommended safeguards such as built-in medical databases, risk alerts, and combined human-AI oversight when giving health-related responses.
OpenAI, which developed ChatGPT, told Fox News Digital its system is “not intended for use in the treatment of any health condition” and is “not a substitute for professional advice.” The company said it has safety teams working on reducing risks and trains its systems to encourage users to seek guidance from qualified professionals.
The case serves as a reminder that while AI tools can provide information quickly, they are not a substitute for medical expertise. Experts warn that even when an answer sounds convincing, it may not be safe, and without careful human judgment, the results can be dangerous.