- AI chatbots give misleading medical advice 50% of the time, study finds The Japan Times —
- AI chatbots give misleading medical advice 50% of the time, study finds The Straits Times —
- Warning issued for people using AI chatbots for medical advice: Major study found information given by ChatGPT, Gemini and Grok is often inaccurate Daily Mail —
- AI Chatbots Give Misleading Medical Advice 50% of the Time, Study Finds Bloomberg —
- AI chatbots often "hallucinate" and give inaccurate medical information – study LBC —
- Approximately half of AI's medical responses are "problematic," study finds CBS News —
- We are AI experts. Here are the dangers of using chatbots for health and medical information The Independent —
AI chatbots give misleading health advice
The study evaluated responses to 50 common medical questions and found that none of the chatbots produced a fully complete and accurate reference list.
The finding highlights substantial risks for users who turn to AI for health information instead of consulting medical professionals.
Experts stress that while AI can be a useful tool, its reliance on biased or incomplete training data makes it unreliable for clinical diagnosis.
The findings have prompted calls for tighter regulation and clearer disclaimers on generative AI platforms.
Gemini
Chatbot developed by Google
Grok
Chatbot developed by xAI
Nicholas B Tiller
Researcher