Updated 26th January 2026
Dangers of health misinformation, part 3: ChatGPT
This is the third in a four-part series on health misinformation and the real-world harm it causes. You can find the first and second articles in the series here and here.
ChatGPT has burst into every facet of our lives. It certainly makes some tasks much quicker and easier. But should you trust it to provide good health advice? As we shall see, you should not.
In the previous two articles in this series, we’ve covered a fair amount of disinformation (incorrect or misleading information shared deliberately, with the intent to deceive).
ChatGPT is different. When it gets things wrong, that’s misinformation: the information is incorrect, but there is no intent to mislead behind it.
Below, we share some examples that provide good reasons not to ask ChatGPT for health advice.
1. Bromism: A mystery solved
In 2024, a 60-year-old man presented to the emergency department of his local hospital. He claimed his neighbor was trying to poison him. He had no history of psychiatric illness and said he wasn’t taking any medications or supplements.
The doctors performed blood tests, which revealed anomalies in his electrolyte levels, so he was admitted and monitored.
Within 24 hours, the patient began hallucinating and grew increasingly paranoid. After an attempted escape, he was placed under psychiatric care.
The doctors eventually diagnosed the man with bromism (bromide toxicity), caused by consuming bromide salts.
What is bromism?
Bromism was a relatively common cause of psychiatric symptoms in the early 20th century, as many over-the-counter medications at the time contained bromide salts.
But they are rarely used now, and bromism is mostly considered a thing of the past. Or at least it was.
So, what happened to this patient? After treatment with intravenous fluids and electrolyte repletion, he recovered enough to explain what had put him in this position (although he hadn’t realised it at the time).
He explained how, as an experiment, he decided to remove chlorine from his diet. Table salt is sodium chloride, so that had to go. He asked ChatGPT what he could replace salt with, and it suggested sodium bromide.
Sodium bromide does taste salty, but as we have seen, it is also harmful in large doses. Thankfully, the man recovered. In a paper that records this case, the authors write:
“It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”
This case should be enough to put most people off trusting ChatGPT with their health. However, in case the point hasn’t been made forcefully enough, here are four more cases…
2. Allergens for people with allergies
One study asked ChatGPT to design diets for hypothetical people with allergies. While it generally managed to construct healthy diets, in some cases, it still included the allergen.
For instance, when asked to produce a nut-free diet, it recommended almond milk. The authors write:
“This example is particularly worrying as nut allergy is a severe food allergy where consuming even a small amount of the allergen can lead to severe health consequences.”
3. ChatGPT as your new urologist?
A study published in the journal Urology examined the accuracy of ChatGPT in responding to questions about urology topics.
They asked 13 guideline-based questions and assessed ChatGPT’s responses against the guidelines. Only 60% of the answers were correct, meaning 2 in 5 were not.
The authors also explain how ChatGPT “misinterprets clinical care guidelines, dismisses important contextual information, conceals its sources, and provides inappropriate references.”
4. Show your sources
One study assessed AI’s ability to cite references to back up its medical advice.
The scientists found that 50–90% of its responses were “not fully supported, and sometimes contradicted, by the sources they cite.”
Finally, we will cover a case that is not for the faint-hearted. Feel free to skip this section if you are sensitive to gory details.
5. Do not try this at home
A case report published in early 2025 describes a 35-year-old in Morocco who presented to their local emergency room.
The individual had a swelling next to their anus, so they asked ChatGPT what it might be. The AI tool suggested hemorrhoids (swollen veins) and recommended trying “elastic ligation.”
This is a procedure in which a doctor ties an elastic band around a hemorrhoid, cutting off its blood supply. After a few days, it shrivels up and falls off.
The patient decided to take the advice and tied a piece of thread around the growth. Later, after experiencing intense pain, they visited the hospital. According to the case report, “the thread was removed with difficulty.”
At a follow-up visit, the doctors diagnosed the growth as a wart, which should not be treated using elastic ligation.
What should you do?
In summary, ChatGPT can be a great tool for certain tasks. But it is not a doctor or a nutritionist, and it can’t give you sensible, informed advice about your health.
As Alex Ruani, a health misinformation researcher, writes:
“Chatbots give the answers we seek but not always the ones we need.”
One of the main problems with ChatGPT is that it gets a lot right, which lulls you into a false sense of security. As your trust builds, it’s easy to put more and more faith in the chatbot.
However, as we have seen, just one medically inaccurate answer can cause significant problems.
At ZOE, we’ve written a series of guides to help you navigate our complex and fragmented information landscape, and we hope they help.
In the fourth and final part of our series on misinformation, we’ll look at some larger-scale issues and investigate how global corporations can manipulate social media to cause real-world harm.
If you enjoyed this article and would like to read the first two in the series, here are the links:
Part 1: Influencers
Part 2: Experts gone rogue


