Chennai, July 8 -- Highlights:
* AI chatbots can be manipulated to give false but convincing health advice
* Vague or biased questions may lead to unsafe or misleading responses
* Experts call for stricter safeguards and better AI design in health settings
Most of us have turned to a chatbot at some point for quick health answers - but can these tools be trusted? Two new studies suggest the answer is complicated (1).
AI Can Spread Polished Misinformation
A study published in the Annals of Internal Medicine found that popular AI chatbots, when intentionally configured with hidden prompts, could deliver polished but dangerously false health advice, complete with convincing but fake references to real medical journals.
Researchers from Flinders University in Australia tested large language models like GPT-4o, Gemini, Claude, Grok, and LLaMA by instructing them to consistently provide incorrect answers to health questions in a scientific tone.
The study showed that most chatbots complied 100% of the time with these malicious prompts, except for Claude, which refused to generate false answers more than half the time. "If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm," warned Dr. Ashley Hopkins, senior author of the study. The findings suggest that, without stronger safeguards, AI could be used to flood the internet with false health information on topics ranging from cancer prevention to fertility treatments.
Incomplete Questions Can Mislead Chatbots
A second study, backed by Google and published on arXiv, looked at over 11,000 real-world chatbot conversations. It showed that people often ask vague, incomplete, or leading health questions, which can accidentally trick AI systems into giving biased or unsafe advice. For example, questions like "This should work, right?" triggered what researchers call a "sycophancy bias," meaning the chatbot agreed with the user even if the information was incorrect.
"We found that people usually ask about treatments rather than symptoms," noted the study team, which included researchers from Google, UNC Chapel Hill, Duke University, and the University of Washington. "But they often leave out important context or medical history, making it harder for the AI to respond safely."
The study also noted that chatbots struggle to handle emotional conversations, sometimes missing cues of frustration or fear. This can cause conversations to spiral, with repeated misinformation or incomplete answers.
What's the Solution?
Experts say stronger "guardrails" are needed to prevent misuse. That could include programming chatbots to decline suspicious or incomplete requests and adding features that prompt users for missing information before giving advice.
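For readers curious what such a guardrail might look like in practice, here is a minimal, hypothetical sketch in Python. The required-context fields, function names, and wording are illustrative assumptions, not code from either study or from any chatbot vendor; the point is the pattern the experts describe - ask for missing details before advising.

```python
# Minimal, hypothetical sketch of a pre-answer "guardrail": before any health
# advice is given, check whether the user has supplied enough context and, if
# not, ask a follow-up question instead of guessing. The REQUIRED_CONTEXT
# fields and the responses are illustrative assumptions only.

REQUIRED_CONTEXT = ["symptom or condition", "how long it has lasted", "relevant medical history"]


def missing_context(details: dict) -> list:
    """Return the required pieces of context the user has not yet provided."""
    return [field for field in REQUIRED_CONTEXT if not details.get(field)]


def guarded_reply(question: str, details: dict) -> str:
    """Decline to advise until the missing details are collected."""
    missing = missing_context(details)
    if missing:
        return ("Before I can answer that safely, could you tell me your "
                + ", ".join(missing) + "?")
    # Only at this point would the question be forwarded to the underlying model.
    return f"[forward to model] {question}"


# Example: a vague, leading question with almost no context is not answered directly.
print(guarded_reply("This should work, right?", {"symptom or condition": "headache"}))
```

A similar pre-check could, in principle, flag suspicious or leading phrasing before the model answers, which is the kind of safeguard the experts are calling for.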
They also suggest building systems that detect emotional language and respond more sensitively, especially during stressful health-related conversations. "Next time you turn to an AI for health advice, think carefully about how you phrase the question," the researchers advise. "And remember that while these tools are powerful, they can't replace the nuanced care of a qualified doctor."
As AI chatbots grow more common in healthcare conversations, these studies are a reminder that caution and human oversight still matter.
Reference:
1. AI Chatbots Can Give False Health Information With Fake Citations: Study - (https://www.ndtv.com/world-news/ai-chatbots-can-give-false-health-information-with-fake-citations-study-8811612)
Source: Medindia