Hundreds of millions of people are turning to AI chatbots for health advice [1].
This shift reflects a growing reliance on artificial intelligence for medical information and raises pressing questions about the safety and accuracy of guidance delivered outside a clinical setting.
OpenAI introduced ChatGPT Health in January [2]. The tool allows users to input symptoms and receive potential explanations or suggestions for next steps. For some users, the results have been helpful. One user, Abi, said the chatbot correctly suggested seeing a pharmacist, which led to an appropriate antibiotic prescription.
Despite these individual successes, health experts cautioned that AI-generated guidance can be inaccurate and that these tools are no replacement for professional medical consultation. The risks include misdiagnosis and failure to recognize urgent symptoms that require immediate intervention.
The trend is widespread in the U.S.: roughly 25% of adults reported using an AI tool for health information or advice in the past 30 days [3]. That adoption rate underscores how much easier these tools are to reach than traditional healthcare providers.
Critics said that while chatbots can synthesize large amounts of data, they lack the clinical judgment of a trained physician. The tension between convenience and clinical safety remains a central point of debate as AI integration into healthcare continues to expand.
The rapid adoption of AI for health advice reflects both a gap in accessible healthcare and a growing trust in algorithmic efficiency. The gulf between positive user anecdotes and expert warnings suggests that while AI can help patients navigate the system, for instance by directing them to a pharmacist, it cannot yet guarantee the diagnostic accuracy that safe medical practice requires.