healthcarereimagined

Envisioning healthcare for the 21st century

ChatGPT Health Is a Terrible Idea

Posted by timmreardon on 01/09/2026
Posted in: Uncategorized.

Why AI Cannot Be Allowed to Mediate Medicine Without Accountability

By Katalin K. Bartfai-Walcott, CTO, Synovient Inc

On January 7, 2026, OpenAI announced ChatGPT Health, a new feature that lets users link their actual medical records and wellness data, from EMRs to Apple Health, to get personalized responses from an AI. It is positioned as a tool to help people interpret lab results, plan doctor visits, and understand health patterns. But this initiative is not just another health tech product. It is a dangerous architectural leap into personal medicine with very little regard for patient safety, accountability, or sovereignty.

The appeal is obvious. Forty million users already consult ChatGPT daily about health issues. Yet popularity does not equal safety. Connecting deep personal health data to a probabilistic language model, rather than a regulated medical device, creates a new class of risk.

A new class of consumer AI products is positioning itself as a companion to healthcare. These systems offer to connect directly to personal medical records, wellness apps, and long-term health data to generate more personalized guidance, explanations, and insights. The promise is familiar and intuitively appealing. Doctor visits are short. Records are fragmented. Long gaps between appointments leave patients wanting to feel informed rather than passive. Into that space steps a conversational system offering continuous synthesis, reassurance, and pattern recognition at any hour. It presents itself as improvement and empowerment, yet it does so by asking patients to trade agency, control, accountability, and sovereignty for convenience.

This is a terrible idea, and it is terrible for reasons that have nothing to do with whether people ask health questions or whether the healthcare system is failing them.

Connecting longitudinal medical records to a probabilistic language model collapses aggregation, interpretation, and influence into a single system that cannot be held clinically, legally, or ethically accountable for the narratives it produces. Once that boundary is crossed, the risk becomes persistent, compounding, and largely invisible to the person whose data is being interpreted, and the results will be dire.

Medical records are not neutral inputs. They are identity-defining artifacts that shape access to care, insurance outcomes, employment decisions, and legal standing. Anyone who has worked inside healthcare systems understands that these records are often fragmented, duplicated, outdated, or simply wrong. Errors persist for years. Corrections are slow. Context is frequently missing. When those imperfections remain distributed across systems, the damage is contained. Once they are aggregated into a single interpretive layer, that containment disappears: errors stop behaving as isolated inaccuracies and begin to shape enduring narratives about a person's body, behavior, and risk profile.

Language models do not reason the way medicine reasons. They do not weigh uncertainty with caution, surface ambiguity as a first-class signal, or slow down when evidence conflicts. They produce fluent synthesis. That fluency reads as confidence, and confidence is precisely what medical practice treats carefully because it can crowd out questioning, second opinions, and clinical judgment. When such a synthesis is grounded in sensitive personal data, even minor errors cease to be informational. They become formative.

The repeated assurance that these systems are meant to support rather than replace medical care does not hold up under scrutiny. The moment a tool reframes symptoms, highlights trends, normalizes interpretations, or influences how someone prepares for or delays a medical visit, it is already shaping care pathways. That influence does not require diagnosis or prescription. It only requires trust, repetition, and perceived authority. Disclaimers do not meaningfully constrain that effect. Only enforceable architectural boundaries do, and those boundaries are absent.

Medicare is already moving in this direction, and that should give us pause. Algorithmic systems are increasingly used to assess coverage eligibility, utilization thresholds, and the medical necessity of procedures for elderly patients, often with limited transparency and constrained avenues for appeal. When these systems mediate access to care, they do not feel like decision support to the patient. They feel like authority. A recommendation becomes a gate. An inference becomes a delay or a denial. The individual rarely knows how the conclusion was reached, what data shaped it, or how to meaningfully challenge it. When AI interpretation is embedded into healthcare infrastructure without enforceable accountability, it quietly displaces human judgment while preserving the appearance of neutrality, and the people most affected are those with the least power to contest it.

What is missing most conspicuously is patient sovereignty at the data level. There is no object-level consent that limits use to a declared purpose. There is no lifecycle control that allows a patient to revoke access or correct errors in a way that propagates forward. There is no clear separation between information used transiently to answer a question and inference artifacts that may be retained, recombined, or learned from over time. Without those controls, the system recreates the worst failures of modern health IT while accelerating their impact through conversational authority.

The argument that people already seek health advice through AI misunderstands responsibility. Normalized behavior is not a justification for institutionalizing risk. People have always searched for symptoms online, yet that reality never warranted centralizing full medical histories into a single interpretive layer that speaks with personalized authority. Turning coping behavior into infrastructure without safeguards does not empower patients. It exposes them.

If the goal is to help individuals engage more actively in their health, the work must start with agency rather than intelligence, and agency has concrete architectural requirements.

Health data does not need to be smarter. It needs to remain governable by the person it represents. Until that principle is embedded at the architectural level, connecting medical records to probabilistic conversational systems is not progress. It is a failure to absorb decades of hard lessons about trust, error, and the irreversible consequences of speaking with authority where none can be justified.

If systems like this are going to exist at all, they must be built on a very different foundation. Patient agency cannot be an interface preference. It has to be enforced at the data level. Individuals must be able to control how their medical data is accessed, for what purpose, for how long, and with what guarantees around correction, revocation, and downstream use. Consent cannot be implied or perpetual. It must be explicit, contextual, and technically enforceable.
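To make that requirement concrete, here is a minimal Python sketch of what object-level, purpose-bound, revocable consent could look like. It is an illustration of the shape of the control, not a description of any existing system; ConsentGrant, read_record, and the purpose strings are all hypothetical names.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: a consent grant bound to one declared purpose,
# with an expiry and a revocation flag checked on every access.
@dataclass
class ConsentGrant:
    record_id: str         # which record this grant covers
    purpose: str           # the declared purpose, e.g. "explain-lab-result"
    expires_at: datetime   # timezone-aware expiry; consent is never perpetual
    revoked: bool = False  # the patient can flip this at any time

    def permits(self, requested_purpose: str) -> bool:
        """Valid only if unrevoked, unexpired, and purpose-matched."""
        return (not self.revoked
                and requested_purpose == self.purpose
                and datetime.now(timezone.utc) < self.expires_at)

def read_record(store: dict, grant: ConsentGrant, requested_purpose: str):
    """Deny by default: the grant, not the caller, decides."""
    if not grant.permits(requested_purpose):
        raise PermissionError(f"no valid consent for {requested_purpose!r}")
    return store[grant.record_id]
```

The design point is that access is denied unless a live, purpose-matched grant says otherwise, which is what makes revocation and expiry enforceable rather than advisory.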

Data ownership and sovereignty are not philosophical positions in healthcare. They are safety requirements. Medical information must carry its provenance, its usage constraints, and its lifecycle rules with it, so that interpretation does not silently outlive permission. Traceability must extend not only to the source of the data, but to the inferences drawn from it, making it possible to understand how conclusions were reached and what inputs shaped them.
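Again purely as a sketch, with invented names (GovernedRecord, derive) rather than an existing API, one way to picture data that carries its rules and lineage with it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an envelope that carries provenance and usage
# constraints with the data, so interpretation cannot outlive permission.
@dataclass
class GovernedRecord:
    payload: dict            # the clinical data or inference itself
    source: str              # provenance: where this content originated
    allowed_uses: frozenset  # usage constraints travel with the data
    derived_from: list = field(default_factory=list)  # inputs behind an inference

def derive(record: GovernedRecord, inference: dict, use: str) -> GovernedRecord:
    """An inference inherits its inputs' constraints and records its lineage,
    keeping conclusions traceable to the data that shaped them."""
    if use not in record.allowed_uses:
        raise PermissionError(f"use {use!r} not permitted by source constraints")
    return GovernedRecord(
        payload=inference,
        source=f"derived:{record.source}",
        allowed_uses=record.allowed_uses,  # constraints propagate forward
        derived_from=[record.source],
    )
```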

AI can have a role in medicine, but only when its use is managed, bounded, and accountable. That means clear separation between transient assistance and retained interpretation, between explanation and decision-making, and between support and authority. It means designing systems that preserve uncertainty rather than smoothing it away, and that prevent the accumulation of silent power through repetition and scale.
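A minimal sketch of that first boundary, with a hypothetical TransientSession standing in for the conversational layer; nothing produced during a session survives it unless the patient explicitly opts in:

```python
# Hypothetical sketch: transient assistance is structurally separated from
# retained interpretation; session artifacts are discarded by default.
class TransientSession:
    def __init__(self) -> None:
        self._artifacts: list[str] = []  # inference artifacts live only here

    def assist(self, question: str) -> str:
        # Stand-in for model output; the actual synthesis is out of scope.
        answer = f"(explanation generated for {question!r})"
        self._artifacts.append(answer)   # usable within this session
        return answer

    def close(self, retain_with_consent: bool = False) -> list[str]:
        """Retention is the explicit exception, never the default."""
        kept = list(self._artifacts) if retain_with_consent else []
        self._artifacts.clear()
        return kept
```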

If companies building large AI systems are serious about improving healthcare, they should not be racing to aggregate more data or expand interpretive reach. They should engage with architectures and technologies that already prioritize enforceable consent, data-level governance, provenance, and patient-controlled use. Without those foundations, intelligence becomes the least important part of the system.

Health data does not need to be centralized to be helpful. It needs to remain governable by the person it represents. Until that principle is treated as a design requirement rather than a policy aspiration, tools like this will continue to promise empowerment while quietly eroding the very agency they claim to support.

Article link: https://www.linkedin.com/pulse/chatgpt-health-terrible-idea-katalin-bártfai-walcott-dchzc?
