OpenAI launches ChatGPT Health, linking medical records to AI amid privacy and safety concerns

OpenAI has launched ChatGPT Health, which lets users link medical records and wellness apps for personalized responses, prompting widespread privacy concerns and safety questions about AI accuracy.

Overview

A summary of the key points of this story, verified across multiple sources.

1. OpenAI launched ChatGPT Health, a dedicated ChatGPT tab that lets users connect medical records and wellness apps to receive personalized health-related explanations, summaries, and appointment preparation.

2. OpenAI says health chats are stored separately and won’t train its models; it warns Health is not intended for diagnosis or treatment and should support, not replace, clinicians.

3. Privacy advocates warn the US rollout could expose sensitive data because the US lacks a comprehensive privacy law; the feature isn’t yet available in the UK, Switzerland, or the EEA.

4. Safety concerns have grown after investigations showed that chatbots sometimes fabricate dangerous advice; a reported 2025 case linked long-term conversational drift to a fatal overdose.

5. OpenAI consulted hundreds of physicians during development, plans a limited rollout from a waitlist, and emphasizes encryption, separate chat histories, and other protections while debates over regulation continue.

Written using shared reports from 3 sources.

Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources frame the rollout cautiously, prioritizing privacy and safety concerns through loaded phrasing ("slope is slippery"), prominently placed expert warnings (privacy counsel Andrew Crawford), and anecdotal harm examples (an AI-linked hospitalization). They present OpenAI’s technical assurances as attributed claims rather than established fact, producing a skeptical narrative that emphasizes risk over benefit.

FAQ

Dig deeper on this story with frequently asked questions.

What can users connect to ChatGPT Health, and what does it do with that data?

ChatGPT Health lets users connect patient portals, lab results, visit summaries, insurance documents, and wellness apps like Apple Health, MyFitnessPal, and Function so the system can ground explanations and summaries in an individual’s own data—for example, interpreting recent bloodwork, explaining trends from wearables, or helping prepare questions for an upcoming appointment.

How does OpenAI say health data is protected?

OpenAI says ChatGPT Health runs in a sandboxed space separate from regular chats, with its own memory store, and that health conversations, connected apps, and uploaded files are encrypted in transit and at rest and are not used to train its foundation models.

Why are privacy advocates worried about the US rollout?

Privacy advocates warn that linking sensitive health records to an AI service in the U.S. could expose users to misuse or breaches because the country lacks a comprehensive, nationwide privacy law governing how such data can be collected, shared, and safeguarded.

Can ChatGPT Health diagnose or treat medical conditions?

No. OpenAI states that ChatGPT Health is not intended for diagnosis or treatment and is meant to support, not replace, clinicians by helping users better understand their health information and prepare for conversations with healthcare professionals.

What safety risks do experts see in AI-generated health guidance?

Experts and investigators have raised concerns that large language models can hallucinate or fabricate information, including potentially dangerous medical advice, and worry that long, evolving conversations about health could drift into unsafe guidance if not checked by qualified clinicians.
