ChatGPT Health Wants Your Medical Records. Should You Say Yes?

OpenAI’s new health-focused AI wants access to your medical records. Here’s why that might be a brilliant call, or a privacy nightmare waiting to happen.
Something remarkable happened last week: OpenAI launched ChatGPT Health, a dedicated space within its AI chatbot designed to help you navigate your health and wellness.
Upload your medical records. Connect your Apple Health data. Sync MyFitnessPal. Suddenly, the same tool you use to draft emails can summarize your bloodwork, explain what changed since last year, and tell you whether your cholesterol is trending in the right direction.
ChatGPT Health promises to be your health synthesizer: an always-available, infinitely patient assistant that can help make sense of it all.
For Super Agers, this could be transformational. Or it could be a privacy time bomb.
AI Assistants and Longevity
Health is already one of the most common reasons people use ChatGPT. OpenAI says more than 230 million people globally ask health and wellness questions every week. ChatGPT Health formalizes what’s already happening and adds privacy-specific guardrails.
The use cases are compelling:
- Before your annual physical, you could ask it to summarize health trends and generate smart questions based on your history.
- After lab results arrive, you could get plain-English explanations without WebMD catastrophizing.
- You could have it think through insurance tradeoffs before enrollment or a major care decision.
- Managing multiple conditions? Health can hold context across them, helping you see patterns and draft questions to bring to your clinician.
OpenAI says it developed ChatGPT Health with input from more than 260 physicians across 60 countries, using a clinical evaluation framework called HealthBench to assess safety, clarity, and appropriate escalation of care.
Having a health companion available at 3 a.m. that isn’t Dr. Google isn’t nothing.
The Privacy Reality Check
Now, the harder conversation.
When you share medical records with your doctor, HIPAA protections generally apply. When you share them with a consumer AI tool like ChatGPT Health, HIPAA typically does not. That means your protection depends on OpenAI’s policies, plus whatever state and federal consumer privacy laws apply.
Privacy advocates have been blunt about what that means.
When users upload medical records to ChatGPT Health, those records lose HIPAA protections because OpenAI is not a covered entity under HIPAA. OpenAI may promise safeguards, but those safeguards are policy-based, not statutory.
OpenAI says your health data will be encrypted, stored in a separate space from other chats, and not used to train its foundation models. They add: “If you start a health-related conversation in ChatGPT, we’ll suggest moving into Health for these additional protections.”
These are meaningful precautions, but they are not guarantees and some unresolved risks remain.
Even OpenAI’s CEO has acknowledged that the company hasn’t figured out how to give sensitive AI conversations the kind of legal privilege that protects what you tell a doctor or lawyer. And if authorities request access to reproductive health data, mental health disclosures, or other sensitive information, it’s unclear how OpenAI would respond. Privacy experts have flagged this gap, since companies set their own rules for handling consumer health data.
OpenAI has also signaled that advertising is on the table as a future business model, and it’s not yet clear whether your health data will remain fully firewalled from ad targeting or other commercial influence.
The 23andMe Warning
We’ve been here before.
Millions of people trusted 23andMe with deeply personal genetic data, believing it was secure and responsibly stewarded. Then the company filed for bankruptcy. Suddenly, that data existed within a court-supervised sale process, raising alarms among regulators and privacy experts. The company ultimately emerged from bankruptcy under new ownership led by its founder through a nonprofit structure.
The lesson is that data safety depends not just on current policies, but on a company’s long-term legal and financial fate. Health data is forever. Companies are not.
How to Use ChatGPT Health Wisely
None of this means you shouldn’t use ChatGPT Health. It means using it with eyes open.
Do: Use it for general health education, appointment preparation, and maybe decoding complex insurance plans. Use it to identify questions worth asking your doctor.
Don’t: Upload information you’d be uncomfortable seeing referenced in a legal proceeding. Think carefully before sharing reproductive health details or deeply sensitive mental health disclosures.
Be aware: ChatGPT Health explicitly states it is not intended for diagnosis or treatment. It’s a support tool, a research assistant, not a replacement for clinical care.
But health experts warn that the risks of deeply personalized AI aren’t fully understood. Investigations have alleged that chatbots failed to prevent self-harm, and in some cases encouraged it. The Center for Humane Technology cautions against placing too much trust in these tools as AI systems become ever more personalized. The concern isn’t just bad advice; it’s what happens when vulnerable people come to trust a system that can’t actually care for them.
The Guardrails on AI Health Information
U.S. Food and Drug Administration Commissioner Marty Makary recently said the agency would limit regulation of wearables and lifestyle-oriented health software, saying if “software is simply providing information, they can do that without FDA regulation.”
The implication is clear: as AI-powered health tools expand, more responsibility shifts to consumers to understand what these technologies can and cannot reliably do.
This is the consumerization of health intelligence in real time.
While ChatGPT Health is rolling out via waitlist in the U.S., it is notably not launching yet in the EU, UK, or Switzerland, places with stronger health data protections.
For Super Agers committed to healthspan (the years you live in good health, free from chronic illness or disability), having an AI that can hold your health context may be genuinely empowering. The key word is may.
The future isn’t just about whether AI will play a role in your health journey. It’s about whether you’re informed enough to navigate the tradeoffs before convenience makes the decision for you.
The information provided in this article is for educational and informational purposes only and is not intended as health, medical, or financial advice. Do not use this information to diagnose or treat any health condition. Always consult a qualified healthcare provider regarding any questions you may have about a medical condition or health objectives.


