Ai Tech · 1 min read · Edition #12

Hundreds of Millions Use AI Chatbots for Health Advice; Clinical and Liability Risks Escalate

Hundreds of millions of people are turning to AI chatbots, including OpenAI's ChatGPT, for health advice, forcing healthcare providers and regulators to confront the rapid consumer adoption of unregulated medical guidance tools.

This trend is significant because it represents a structural shift in how patients behave before seeking professional care. Patients now have a first-line information source (AI) that operates outside professional oversight, liability frameworks, and clinical regulation. OpenAI's health-specific chatbot initiatives indicate that major tech companies are deliberately expanding into medical advice territory despite the absence of FDA oversight or liability standards. The scale is massive: with hundreds of millions of users globally, AI chatbot usage may now exceed patient-doctor consultations for initial symptom triage.

This creates a two-part problem for practices. First, patients arrive with AI-generated information that may be incorrect or contraindicated, requiring time-consuming correction. Second, practices face liability exposure if they don't explicitly address AI-sourced patient beliefs. Clinical trials, diagnostic accuracy studies, and adverse event tracking for AI health tools remain minimal, creating regulatory gaps that CMS, the FDA, and state medical boards are only beginning to address.

For practice owners, the operational implication is immediate: patient intake processes need explicit questions about AI chatbot use and AI-sourced health information. Clinical teams should be trained to identify and correct AI-generated misinformation without dismissing patient agency, and dentists and physicians should document these conversations in the chart. Larger practices and health systems should develop AI literacy protocols for staff, along with patient education materials that address chatbot limitations. For DSOs and health systems investing in patient engagement technology, the competitive advantage now lies in providing proprietary, AI-assisted tools that practices control rather than leaving patients to external chatbots. The liability exposure is real: if a patient acts on incorrect AI advice that contradicts your clinical guidance and an adverse outcome follows, can your practice show that it documented the contradiction and the correction?

What to watch: FDA regulatory guidance on AI health tools (expected mid-2026) and the first litigation cases in which AI chatbot advice contradicted clinical guidance and contributed to adverse outcomes.
