healthcarereimagined

Envisioning healthcare for the 21st century


Are AI Tools Ready to Answer Patients’ Questions About Their Medical Care? – JAMA

Posted by timmreardon on 03/27/2026
Posted in: Uncategorized.

Rita Rubin, MA

JAMA

Published Online: March 6, 2026

2026;335(12):1019-1021. doi:10.1001/jama.2026.1122

In January, OpenAI, developer of ChatGPT, launched ChatGPT Health, one of many patient-facing generative artificial intelligence (AI) tools in various stages of development.

From educating patients on women’s sexual health and hip replacement surgery to generating postoperative instructions and digitizing informed consent, the potential medical applications of generative AI tools for the public are vast. In general, their goal is to increase patients’ comprehension of complex medical information and, in the case of ChatGPT Health, provide personalized information based on individual users’ own data. In the not-too-distant future, some experts predict new AI technologies will be able to independently make decisions about patient care.

At their most sophisticated, though, these technologies should serve as a “clinician extender,” not a clinician replacer, said cardiologist Haider Warraich, MD, a program manager at the US government’s Advanced Research Projects Agency for Health (ARPA-H) who previously helped shape digital health and AI policy at the US Food and Drug Administration (FDA).

“I hate the term AI doctor,” Warraich said. “There’s a lot more to me than what these technologies can do.”

There’s more than one reason why using an AI chatbot for health advice is not the same as consulting a physician. Recent studies have raised questions about the accuracy of health information provided by chatbots, and physicians and consumers have expressed concerns over the sharing of personal medical data with large language models (LLMs) that aren’t covered by the Health Insurance Portability and Accountability Act (HIPAA).

ChatGPT Health failed to properly triage the most and the least serious cases in what might be the first study to assess the new tool’s performance, according to an accelerated preview of the article published in late February. The authors, who tested the chatbot using vignettes written by physicians, noted that under-triage of emergency conditions may delay or preclude lifesaving treatment, while over-triage of nonurgent presentations may increase health care utilization.

But LLMs hold promise as a way of expanding access to medical expertise or, at the very least, preparing patients to make the best use of visits with their physicians. “There’s a reason patients want to use these models,” said radiation oncologist Danielle Bitterman, MD, clinical lead for data science and AI at Mass General Brigham. “It’s so hard to access health care right now.”

ChatGPT vs ChatGPT Health

Of the 800 million users of ChatGPT each week, 1 in 4 seek health-related information, according to Nate Gross, MD, MBA, who leads health care strategy at OpenAI, which developed the chatbot.

“We said, ‘Hey, let’s build some differences to the product to make it a more contextually aware experience,’” as well as one with additional privacy and security connections, he recalled.

Users of “vanilla” ChatGPT, as Gross describes the forerunner of ChatGPT Health, can upload a physician’s note or copy laboratory results from their patient portal, he explained, but those bits of information lack context. “Just uploading a really short doctor’s note could be interpreted very differently if you’re age 20 or age 70.”

ChatGPT Health, on the other hand, invites users to upload all their personal health information, including laboratory test and imaging results as well as data collected by their Apple Watch.

Although OpenAI consulted with hundreds of physicians from around the world to improve its models, ChatGPT Health is not designed to play doctor, Gross emphasized.

“We train our models specifically to guide patients to health care professionals for diagnosis and treatment,” he said. “We’re looking to give people information, not tell them if they’re sick, not tell them if they’re healthy. We’re a partner to the health care system in that regard.”

By late February, ChatGPT Health was not yet available to all comers; prospective users could add their name to a waitlist for using the chatbot. OpenAI declined to say how many people have used ChatGPT Health so far.

Privacy is one of users’ main concerns about ChatGPT Health and other LLMs that allow people to upload personal health information.

Elon Musk recently suggested in an X post that “[y]ou can just take a picture of your medical data or upload the file to get a second opinion from Grok,” an AI chatbot developed by his company, xAI.

Commenters were aghast at the idea. One decided to ask Grok’s opinion and posted its reply: “Grok is not HIPAA compliant, and we strongly advise against uploading sensitive medical data.”

Gross acknowledged that ChatGPT Health isn’t HIPAA compliant either. That’s not due to negligence, he pointed out, but because ChatGPT Health, like Grok, is not an entity covered by HIPAA, such as a physician or health insurance plan, or a business associate of a covered entity.

“They are not held to the same legal requirements that doctors and health care institutions are,” Bitterman said of the AI companies.

ChatGPT Health “is building on a lot of very pro-privacy protections that ChatGPT already had, with additional layers of protection,” Gross said. “We wanted to set a really high bar.” For example, he noted, OpenAI will not include any ChatGPT Health conversations among the data it uses to train the LLM. And, he explained, as with ChatGPT, ChatGPT Health users can opt to make chats temporary, meaning they won’t appear in their history and ChatGPT Health won’t save them.

Even so, “those assurances may not be worth that much if companies get sold,” pointed out David Liebovitz, MD, codirector of the Institute for Artificial Intelligence in Medicine’s Center for Medical Education in Data Science and Digital Health at the Northwestern University Feinberg School of Medicine.

For now, he said, if patients asked him whether he thought they should try ChatGPT Health, he’d probably suggest “they could wait a little bit longer, when there could be more privacy-related tools.”

The Complete Picture?

Even if they want to, chatbot users—especially clinically challenging patients with long, complex medical histories—can’t always upload all their medical records, Bitterman pointed out.

“It’s very hard to ensure that you have all your medical records,” she said. “Those are the missing pieces that make clinical practice hard.”

Gross acknowledged that “our health care system is very fragmented.” But, he said, if patients forget to upload records from a particular physician or hospital, their physicians’ most recent notes likely will at least mention them.

Even patients who have all the relevant information may not paint a complete picture of their situation when interacting with LLMs, concluded research published in February.

The study, led by the Oxford Internet Institute in the UK, tested whether LLMs could help individuals without medical training identify underlying conditions and choose a course of action in 10 physician-drafted health scenarios. Researchers randomly assigned 1300 participants to receive assistance from 1 of 3 LLMs or, to serve as the control, a source of their choice, which was typically Google. The 3 LLMs were ChatGPT-4o, Meta’s Llama 3, and Command R+, which was developed by Cohere, a Canada-based company.

On average, when the scientists presented the vignettes directly to the LLMs, bypassing human interaction, the chatbots correctly identified the condition 95% of the time and the appropriate course of action 56% of the time.

But when study participants presented the vignettes to the same LLMs, the chatbots correctly identified relevant conditions only about a third of the time and the appropriate course of action less than 44% of the time. The LLMs performed no better than Google did in the control group.

“The limiting factor wasn’t just the model’s medical knowledge,” coauthor Rebecca Payne, MBBS, PhD, MPH, a general practitioner at the North Wales Medical School, Bangor University, said in an email. “It was the human-AI communication loop: people providing incomplete information, the model misinterpreting key details, and, importantly, people failing to carry forward a relevant diagnostic suggestion that the model did raise during the exchange.”

Whether using an LLM or Google, study participants “tended to underestimate the severity in the vignettes we tested,” Payne said. “That raises the risk that some users may feel falsely reassured or may delay seeking care.”

Payne’s findings didn’t surprise Bitterman. “With these chatbots, it’s incumbent on the user to know what they need to provide to the model to get the best information,” she said. “Having that kind of clinical nuance requires a lot of on-the-ground training,” not just the LLMs’ training on medical literature and textbooks.

The advice she gives to patients: “Don’t take immediate action just based on what you find online. We can discuss it together.”

Shorter and Simpler

The result could be deadly if, say, a chatbot mistakenly told a user that they didn’t need to go to the emergency department because their chest pain was due to indigestion, not a heart attack.

That’s why Payne advises patients to use chatbots only for low-stakes support, such as explaining medical terms, preparing questions for a clinician, and summarizing what they’ve been told. “LLMs currently perform best as ‘assistants/secretaries’ that help organize known information rather than generate high-stakes clinical interpretations,” she said.

Physicians are working on a number of generative AI applications for more focused, lower-stakes purposes.

For urologist Gio Cacciamani, MD, the diagnosis of a loved one with a serious disease unrelated to his specialty gave him a taste of what patients face when trying to decipher scientific information.

“When it comes to something outside my field, it’s very challenging to read,” said Cacciamani, director of the Artificial Intelligence Center for Surgical and Clinical Applications in Urology at USC’s Keck School of Medicine. “That situation opened my eyes.”

Cacciamani discovered 2 types of medical information online—either “extremely readable but not certified,” such as blog posts, or “peer-reviewed, certified, but not readable at all,” mainly publications in scientific journals.

Generative AI “has the potential to bridge long-standing gaps between certified medical knowledge and patient understanding,” Cacciamani and coauthors noted in a commentary published in February.

Using the retrieval-augmented generation, or RAG, technique, which grounds the LLM’s responses in a medically verified knowledge base rather than retraining the model, he developed a new tool that can translate and summarize abstracts and full articles. More than 6000 people have turned to Pub2Post, and some medical journals are using it for their social media posts, Cacciamani said.
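The article doesn’t publish Pub2Post’s internals, so the following is only a minimal sketch of the RAG pattern it names: retrieve the most relevant passages from a verified corpus, then instruct the model to answer from those passages alone. The toy scoring function, passages, and prompt wording are all illustrative assumptions, not Cacciamani’s code.

```python
# Minimal RAG sketch: retrieve verified passages, then build a grounded
# prompt. Scoring, passages, and prompt wording are illustrative only.

def score(question: str, passage: str) -> float:
    """Toy relevance score via word overlap; real systems use embeddings."""
    q, p = set(question.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q | p)

# A small "medically verified" corpus standing in for the knowledge base.
knowledge_base = [
    "Spreading redness around an incision 48 hours after surgery can signal infection.",
    "Most hip replacement patients resume walking with support within a day.",
    "NSAIDs are commonly used for post-surgical pain unless contraindicated.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    return sorted(knowledge_base, key=lambda p: score(question, p), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Ground the LLM: it may answer only from the retrieved passages."""
    sources = "\n".join(f"- {p}" for p in retrieve(question))
    return ("Answer using ONLY the verified sources below. If they do not "
            f"contain the answer, say so.\n\nSources:\n{sources}\n\n"
            f"Question: {question}")

print(build_prompt("Is redness around my incision normal two days after surgery?"))
```

The point of the pattern is that the model’s answer can be traced back to verifiable text instead of its own parametric memory.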

Antonio Forte, MD, a plastic surgeon at the Mayo Clinic in Jacksonville, Florida, used RAG to develop an LLM virtual assistant for postoperative instructions.

Patients often are discharged after surgery while still experiencing the residual effects of anesthesia or painkillers, making it difficult to remember postoperative instructions, Forte said. And, he added, they frequently misplace printouts of the information. “That’s why we thought, ‘What if we got patients the ability 24/7 to have access to high-quality, medically verified information?’”

In testing with simulated patient interactions, the virtual assistant demonstrated strong technical accuracy, safety, and clinical relevance, albeit at a relatively high 11th-grade reading level, Forte and his coauthors recently reported.

And Bitterman has tested the ability of ChatGPT-4o and Llama 3.2-8B to answer patients’ questions about clinical trials with the goal of simplifying informed consent forms. In a recent study, she and her coauthors found that ChatGPT-4o was significantly more reliable and safer than Llama 3.2-8B in answering these queries.

Federal Initiatives

In January, 2 federal agencies, both part of the US Department of Health and Human Services, launched initiatives focusing on digital health tools for patients with common, chronic conditions. One is designed to evaluate a regulatory pathway for digital health tools including LLMs, and the other aims to spur the development of an LLM for patients with heart failure.

The FDA, working with the Center for Medicare & Medicaid Innovation, announced the Technology-Enabled Meaningful Patient Outcomes (TEMPO) for Digital Health Devices Pilot.

According to the FDA, the voluntary pilot will evaluate a new enforcement approach “that supports digital health devices intended for use to improve patient outcomes in cardio-kidney-metabolic, musculoskeletal, and behavioral health conditions.”

The FDA has not yet authorized any LLM, Warraich said. Generative AI applications such as LLMs “present a unique challenge because of the potential for unforeseen, emergent consequences,” according to a Special Communication he coauthored in JAMA in 2024.

Today, Warraich is leading a new ARPA-H initiative whose goal is the development of new LLM systems that are ready for submission to the FDA within 2 years for authorization as medical devices. The Agentic AI-Enabled Cardiovascular Care Transformation (ADVOCATE) program “aims to transform advanced cardiovascular disease management with an agentic AI system that can provide 24/7 holistic clinical care.”

“I believe that as AI presents an opportunity to fundamentally transform what it means to be a clinician, a patient, and the relationship between them, cardiology will be at the tip of the spear…,” Warraich noted in an opinion piece published in February in the Journal of the American College of Cardiology.

The first use for technologies developed through ADVOCATE will be providing care for patients with congestive heart failure. If a patient is feeling short of breath, for example, the technology will decide if the patient should go to the emergency department and whether they might need a new prescription or a higher dose of a current medication, Warraich explained. Along with developing AI agents that can be trusted to make such changes autonomously, ADVOCATE will also support the creation of a supervisory AI “overseer” to monitor the safety and effectiveness of clinical AI agents after they’ve been deployed by health systems.
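ARPA-H has not published ADVOCATE’s architecture, so what follows is purely a toy illustration of the supervisory pattern Warraich describes: an autonomous agent proposes an action, and a separate overseer approves, blocks, or escalates it. Every action name, threshold, and rule here is invented.

```python
# Toy agent/overseer split. Nothing here reflects ADVOCATE's actual design;
# action names, thresholds, and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    patient_id: str
    action: str        # e.g., "increase_diuretic_dose", "refer_to_ed"
    confidence: float  # agent's self-reported confidence in [0, 1]

AUTONOMY_ALLOWED = {"increase_diuretic_dose"}  # agent may act alone
ESCALATE_ALWAYS = {"refer_to_ed"}              # always routed to a human

def overseer(proposal: ProposedAction) -> str:
    """Supervisory layer: approve, block, or escalate each agent proposal."""
    if proposal.action in ESCALATE_ALWAYS:
        return "escalate_to_clinician"
    if proposal.action in AUTONOMY_ALLOWED and proposal.confidence >= 0.9:
        return "approve"
    return "block_and_log"  # unrecognized or low-confidence actions stop here

print(overseer(ProposedAction("pt-001", "increase_diuretic_dose", 0.95)))  # approve
print(overseer(ProposedAction("pt-002", "refer_to_ed", 0.99)))  # escalate_to_clinician
```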

Given that ChatGPT is only 3 years old, the rapid development of new generative AI applications for patient use may seem like science fiction. As Bitterman said, “This is so far beyond what I would have predicted 5 years ago.”

Conflict of Interest Disclosures: Dr Bitterman reported serving as an associate editor of JCO Clinical Cancer Informatics, Annals of Oncology, and radiation oncology for HemOnc.org. She also reported receiving consulting fees from Inspire Exercise Medicine LLC and honoraria from Harvard Medical School, Med-IQ, and the National Comprehensive Cancer Network and serving as a scientific advisory board member for Blue Clay Health LLC and Mercurial AI. Dr Liebovitz reported receiving research grants from Children’s Hospital of Philadelphia, the FDA, Merck Sharp & Dohme, the National Institutes of Health, the National Science Foundation, and the University of Chicago. He also reported that he has an ownership or investment interest in CodeAccelerate, Dendritic Health AI, KYRAL Inc, and Optima Integrated Health Inc. Dr Cacciamani reported holding equity in EditorAIPro, of which Pub2Post is a product. Dr Forte reported that his research at Mayo has been funded by Dalio Philanthropies, the Gerstner Family Foundation, the Richard M. Schulze Family Foundation, and Schmidt Sciences and that he is a paid medical advisor for OpenEvidence. No other disclosures were reported.

Article link: https://jamanetwork.com/journals/jama/fullarticle/2846269

How AI use in scholarly publishing threatens research integrity, lessens trust, and invites misinformation – Bulletin of the Atomic Scientists

Posted by timmreardon on 03/25/2026
Posted in: Uncategorized.

By Andrew Gray | March 12, 2026

Scientific research underpins the things we do. Huge investments are made capitalizing on technological developments; governments declare that their policies will be based on academic evidence; doctors decide what treatments to use for their patients. And beneath all that is the idea that, ultimately, we can trust that published research fairly reflects the realities of the world: that it is true, that it is balanced, and that it has been produced and reviewed by expert researchers. But that foundation is starting to wobble.

Shortly after ChatGPT was released, it became clear that it was beginning to affect scholarly research. Published papers became much more likely to meticulously delve into intricate questions, and to do so with great enthusiasm, in ways they never had before (Stokel-Walker 2024). Distinctive quirks of large language model (LLM) writing such as these began to explode in popular usage, first in certain fields such as computer science or engineering, before spreading to other disciplines. Some researchers estimate that in 2024, 13.5 percent of all papers in PubMed-indexed journals had been processed using LLMs, representing around 200,000 articles that year (Kobak 2025). In preprints—papers posted online as unreviewed drafts—the rates increased even faster, with more than 20 percent of computer science preprints showing signs of LLM involvement by late 2024 (Liang 2025).

In retrospect, this was not surprising. For many researchers, forced by the conventions of academia to publish in a second language, a tool that could help with fluent translation is a blessing. And across the world, researchers have been under strong pressure to publish more papers for decades; a tool which could speed up the process of writing was always going to be attractive. And it does speed it up; researchers who have used LLMs in their writing produce around a third more preprints than their colleagues (Kusumegi et al 2025).

But it can be tempting to use it too much. Some researchers have fallen into the trap of simply getting the LLM to generate large portions of papers for them, or to rewrite a draft so extensively that it might unintentionally change the meaning (Conroy 2023). What emerges is something that looks superficially like research, written fluently, convincingly, and confidently, but which may turn out to be so much smoke and mirrors. In extreme cases, LLMs can generate entire papers based on research that simply never took place. It is no surprise that researchers have found that identifiably LLM-edited papers are retracted twice as often as average (Kousha & Thelwall 2025).

To a reader, though, LLM-copyedited papers are hard to distinguish from LLM-generated ones. One can sometimes tell that the tools were used, but not how much they were used in any given paper. When surveyed, 28 percent of researchers said they had used LLMs for copyediting and 8 percent for generating new text, but half or more of both groups didn’t disclose it in the paper (Kwon 2025).

Alongside this reluctance to disclose LLM use, many researchers appear keen to disguise it. When some of the distinctive markers of AI writing in research papers were first reported, they suddenly became less popular in newer publications, but the use of the less-publicized markers continued to grow (Geng & Trotta 2025). Together, this strongly implies that many authors just don’t want it known that they are using these tools.

And it is not just in writing the papers where people are trying to cut corners with AI. Most research papers are peer-reviewed by other researchers, giving a degree of confidence that the research is robust and legitimate. This can be a time-consuming and thankless task, and—unsurprisingly—is one where LLMs have begun to creep in. Most publishers now have explicit warnings against reviewing papers with LLMs, but it almost certainly still happens. Some less scrupulous authors have even been discovered leaving invisible comments in drafts, instructing the LLM that they expect will review it to skip straight to approval (Sugiyama & Eguchi 2025). If nothing else, this technology has invented new kinds of research integrity problems!

These tools are also beginning to affect how we find research. The major scholarly databases are all beginning to offer “AI-assisted search” in some form or another, using LLMs to interpret a user question and find results—either as a list of recommended papers, or as a summary and analysis of the results. When this works well, it can be very convincing. It may return six useful and interesting papers. But will it give you what you want: the right six papers, or the best six? We just don’t know.

And here lies a big risk. LLMs are often described as black boxes; any oddities in the way they work, or biases they encode, will be baked into the results, with no easy ways to spot them. There is no reason to think that any of the scholarly databases are intentionally skewing their results, but biases or censorship can easily arise unintentionally, especially in such a complex system as these (Tay 2025).

The most prominent and accessible of these databases, for non-academics, is Google Scholar. Unlike traditional databases, which work from a selective list of publications, Google Scholar indexes anything found by Google that looks broadly like a research paper. It is more expansive, indexing things like preprints and working papers as well as published research. But this has made it more vulnerable to disruption or manipulation by LLMs (Haider et al 2024). Because it includes a wider range of material, it already indexes a higher proportion of the unreviewed types of items that are more likely to involve LLM text. Because it is entirely automated, it does not have the manual screening which could keep out some of the lowest-value junk.

That automated approach causes other problems. Google Scholar identifies papers it does not otherwise know about by looking in the references list of the ones it indexes. This means it can report a reference to them even if no digital copy exists, which can be very useful for more obscure material. But one of the more dramatic failures of LLMs is that they often hallucinate citations—works that do not exist, plausible sounding mirages, often in journals that themselves do not exist. Google Scholar does not have any way to distinguish between real and false references—understandably, its developers never expected that anyone would be including false references—so it reports that they exist. People trying to validate what another LLM tells them look up the paper, find it indexed in Google Scholar, and, well, surely it must be real! It’s in the database.

Most researchers would never admit to citing a paper they have not read… but one can imagine that it is tempting, especially when it seems to perfectly address the question in hand, and you seem to have a fair summary of it but just can’t track it down however hard you try. And so those fictional citations creep out into real papers. Entire fictional journals may be conjured into a shadowy existence this way (Klee 2025).
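One partial defense, not suggested in the article itself but straightforward to apply, is to check whether a suspect reference resolves in a citation registry before trusting it. Below is a sketch against Crossref’s public REST API (a real service; the free-text query shown is just one way to use it). A miss does not prove fabrication, since coverage gaps exist, and a hit does not prove the paper says what the citer claims.

```python
# Heuristic citation check against Crossref's public REST API.
import requests

def crossref_lookup(citation: str, rows: int = 3) -> list[dict]:
    """Return the top candidate works Crossref finds for a free-text citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [{"doi": it.get("DOI"), "title": (it.get("title") or [""])[0]} for it in items]

# If nothing plausible comes back, treat the reference with suspicion.
for hit in crossref_lookup("Kobak 2025 Delving into LLM-assisted writing excess vocabulary"):
    print(hit["doi"], "-", hit["title"])
```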

This is a perfect storm brewing for the integrity of scholarly publishing. The volume of significantly AI-generated material is increasing, and it is being masked by a flood of “AI polished” papers, which have the same surface style. It’s no wonder that readers, especially casual readers, cannot confidently distinguish real research from fictional, or tell how much of a paper might be hallucinated.

At the same time, the system is stumbling under the extra burdens placed on it by the use of LLMs; it has become easier to produce papers, without becoming easier to assess or peer review them. In late 2025, the preprint server arXiv reported that it would tighten its rules and no longer accept the submission of computer science review articles; the volume of them was simply too large for their moderators to cope with (Castelvecchi 2025). As the system creaks under strain, more and more venues will be faced with an unpleasant choice: Restrict submissions, and add yet more work to their volunteer reviewers? Or loosen standards and risk problematic material slipping through?

Then we have to consider why those problematic papers are out there. At the moment, most of the primarily AI-generated papers appear to be from academics trying to bolster their own publication lists. They are unlikely to be deliberately malicious, though they may fit into more traditional patterns of scientific fraud (Richardson et al 2025). But they are still cluttering up the databases, filled with information that may or may not be valid, conclusions and recommendations that may or may not be true, citations pointing to other non-existent literature. These AI-edited papers will place a burden on every future researcher to try to make sense of them, even if that’s not the intention.

But not all examples might be so innocent. Scientific papers—and all the prestige, reliability, and authority that they carry—are a prime target for intentional misinformation campaigns (Bergstrom & West 2023, Haider 2024). Should someone wish to publish a large number of deliberately skewed papers to bolster a certain position—that a new drug is remarkably effective; that an industrial process is perfectly safe; that a particular policy decision has made us all happier and wealthier—then they have found themselves a new tool to help produce them quickly and easily, at the same time that the system is less resilient at keeping them out. It is difficult to say for sure whether this is yet happening, but it is clear that doing so has become easier, cheaper, and more achievable.

The ways in which we access research are also changing. The move towards LLM-based information retrieval means that an opaque system is being inserted between readers and the information they are looking for, opening up the opportunity for third parties to control access to research in ways that may not be obvious, or even intentional.

And to cap it all off, anyone who is motivated to reject the validity of research which does not fit their preconceptions now has a perfect pretext to do so, regardless of its quality: “Oh, you can’t trust that anyway, don’t you know it’s all AI rubbish now?”

A compelling analogy here, suggested by the historian Kevin Baker, is to think of the publishing system as an immune system for science: It rejects things that might harm the system, perhaps not perfectly, but reliably enough to keep everything ticking along and reasonably healthy. But when our immune system is stressed, we can succumb more easily to a minor infection that we would normally brush off (Baker 2025).

The scholarly publishing system is, undeniably, not in the best of health. It is beset by a whole range of pressures. It carries on, but it is limping. The well-meaning use of AI to help speed things up might, in this analogy, be the fever that ends up sending the whole thing to its sickbed, opening the door for much more damaging illnesses—in the form of intentional and malicious disinformation—to take root and do real harm.

Article link: https://thebulletin.org/premium/2026-03/how-ai-use-in-scholarly-publishing-threatens-research-integrity-lessens-trust-and-invites-misinformation/

References

Baker, K. 2025. “Context Widows.” December 12. Artificial Bureaucracy – Substack. https://artificialbureaucracy.substack.com/p/context-widows

Bergstrom, C, & West, J. 2023. “How publishers can fight misinformation in and about science and medicine.” July 7. Nature Medicine. https://www.nature.com/articles/s41591-023-02411-7

Castelvecchi, D. 2025. “Preprint site arXiv is banning computer-science reviews: here’s why.” November 7. Nature. https://www.nature.com/articles/d41586-025-03664-7

Conroy, G. 2023. “Scientific sleuths spot dishonest ChatGPT use in papers.” September 8. Nature. https://www.nature.com/articles/d41586-023-02477-w

Geng, M, & Trotta, R. 2025. “Human-LLM coevolution: evidence from academic writing.” February 17. arXiv. https://arxiv.org/abs/2502.09606

Haider, J, et al. 2024. “GPT-fabricated scientific papers on Google Scholar.” September 3. Misinformation Review. https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/

Klee, M. 2025. “AI is inventing academic papers that don’t exist – and they’re being cited in real journals.” December 17. Rolling Stone. https://www.rollingstone.com/culture/culture-features/ai-chatbot-journal-research-fake-citations-1235485484/

Kobak, D, et al. 2025. “Delving into LLM-assisted writing in biomedical publications through excess vocabulary.” July 2. Science Advances 11(27). https://www.science.org/doi/10.1126/sciadv.adt3813

Kousha, K, & Thelwall, M. 2025. “How much are LLMs changing the language of academic papers after ChatGPT? A multi-database and full text analysis.” September 11. arXiv. https://arxiv.org/abs/2509.09596

Kusumegi, K, et al. 2025. “Scientific production in the era of large language models.” December 18. Science 390(6779). https://www.science.org/doi/10.1126/science.adw3000

Kwon. 2025. “Is it OK for AI to write science papers? Nature survey shows researchers are split.” May 14. Nature. https://www.nature.com/articles/d41586-025-01463-8

Liang, W, et al. 2025. “Quantifying large language model usage in scientific papers.” August 4. Nature Human Behaviour 9. https://www.nature.com/articles/s41562-025-02273-8

Richardson, R, et al. 2025. “The entities enabling scientific fraud at scale are large, resilient, and growing rapidly.” August 4. Proceedings of the National Academy of Sciences 122(32). https://www.pnas.org/doi/10.1073/pnas.2420092122

Stokel-Walker, C. 2024. “AI Chatbots Have Thoroughly Infiltrated Scientific Publishing.” May 1. Scientific American. https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/.

Sugiyama, S, & Eguchi, R. 2025. “‘Positive review only’: Researchers hide AI prompts in papers.” July 1. Nikkei Asia. https://asia.nikkei.com/business/technology/artificial-intelligence/positive-review-only-researchers-hide-ai-prompts-in-papers

Tay, A. 2025. “The AI powered Library Search That Refused to Search.” July 28. Musings about Librarianship – Substack. https://aarontay.substack.com/p/the-ai-powered-library-search-that

VA Prepares April Relaunch of EHR Program – GovCIO

Posted by timmreardon on 03/19/2026
Posted in: Uncategorized.

WED, 03/18/2026 

VA will restart its EHR rollout in April, scaling to 13 sites in 2026 as leadership focuses on stability, interoperability and streamlined governance.

Written by Henry Kenyon

The Department of Veterans Affairs is preparing to resume deployment of its electronic health record modernization effort, with new facilities scheduled to go live beginning in April.

VA Deputy Secretary and acting CIO Paul Lawrence said in a March 17 statement the EHR program — launched during the first Trump administration and since beset by a series of delays — is now back on track, with 13 sites slated for deployment in 2026. The rollout will begin with four sites in April, followed by four in June, three in August and two in October.

Lawrence credited changes championed by VA Secretary Douglas Collins that streamlined decision making, created a strategic plan for the rollout and established strict accountability measures for vendors.

Deployments Expand

The 13 new sites will build on six facilities already operating the modernized EHR system. Those sites support more than 13,000 users delivering care to roughly 188,000 veterans.

Oracle Health operates and maintains the system under service-level agreements that Lawrence said are driving improvements in performance and reliability.

According to VA data, the system has operated without outages for 27 of 31 months between June 2023 and December 2025. Oracle Health also met 100% of ticket management targets for 30 consecutive months and recorded no major incidents from March 2024 through December 2025.

Lawrence said these benchmarks reflect a more stable system, reducing disruptions and supporting uninterrupted clinical workflows.

“The bottom line is that, this time, the Federal EHR is working, stable and reliable,” he said.

Driving Interoperability

The VA aims to deliver a single, longitudinal health record that follows service members from active duty through veteran care.

By integrating data across the War Department, VA and community providers, the system is designed to reduce duplicative tests and improve care coordination. Lawrence said greater visibility into patient records will also enhance safety and clinical decision-making. Lawrence added the transition should be largely seamless for veterans, with the primary impact being improved provider efficiency and more time for patient care.

“The only thing [veterans] will notice is that their doctors and nurses have more time for meaningful conversations with them,” Lawrence said.

Ongoing Restructuring

The EHR rollout aligns with the broader effort to modernize VA operations and standardize care delivery. The department is restructuring VHA governance to streamline management and reduce fragmentation. This includes consolidating planning and oversight functions to enable more consistent clinical and business operations.

VA officials said the effort also addresses longstanding challenges with inconsistent technology adoption. The department is working to standardize systems and processes to accelerate deployment of new capabilities and improve enterprise integration.

Article link: https://govciomedia.com/va-prepares-april-relaunch-of-ehr-program/

Strong call for universal healthcare from Pope Leo today – FAN

Posted by timmreardon on 03/18/2026
Posted in: Uncategorized.

EHR fragmentation offers an opportunity to enhance care coordination and experience

Posted by timmreardon on 03/16/2026
Posted in: Uncategorized.

Harmonizing electronic health record platforms and their legacy data tames complexity and enables easier patient access to information and greater patient trust in the healthcare system, says NewYork-Presbyterian’s EHR manager.

By Bill Siwicki , Managing Editor | March 16, 2026 | 12:35 PM

Electronic health record fragmentation across hospitals and providers presents a powerful opportunity to improve coordination and patient experience: because healthcare organizations use different EHR vendors, seamless data exchange across the care continuum is essential, said Shruti Nayar, program manager of information technology for electronic health records and clinical IT health services at NewYork-Presbyterian.

“Interoperability standards like HL7 and FHIR are accelerating progress; however, there still are challenges to support real-time data exchange, causing potential inconsistency in patient data,” she explained. “By working to defragment the patient records, clinicians gain a fuller view of a patient’s health.

“EHR consolidation also enhances security by streamlining access points and standardizing vendor oversight,” she added. “Ultimately, harmonizing EHR platforms transforms this complexity into a driver of better coordination, easier patient access to information and greater patient trust in the healthcare system.”
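For readers unfamiliar with FHIR, it exposes clinical data as typed resources over plain REST. A minimal sketch of reading a single Patient resource follows; the base URL and patient ID are placeholders, and a real deployment would sit behind authorization such as SMART on FHIR rather than open HTTP.

```python
# Minimal FHIR read: fetch one Patient resource as JSON.
# BASE_URL and the patient ID are placeholders; real access goes through
# authorization (e.g., SMART on FHIR / OAuth 2.0) and consent controls.
import requests

BASE_URL = "https://fhir.example-hospital.org/R4"  # hypothetical endpoint

def get_patient(patient_id: str) -> dict:
    resp = requests.get(
        f"{BASE_URL}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

patient = get_patient("12345")
name = patient["name"][0]  # FHIR Patient.name is a list of HumanName entries
print(name.get("family"), " ".join(name.get("given", [])))
```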

Consolidating EHRs

NewYork-Presbyterian recognized the opportunity it had to consolidate multiple EHR systems across its health system.

“We worked to consolidate to one system, while archiving legacy data to another and providing seamless integration from our EHR to legacy system data,” Nayar recalled. “This was possible by creating an enterprise master patient index for each patient across fragmented systems.

“A user can click a link in the patient’s electronic chart and, through single sign-on from within the EHR, access that patient’s records in context from multiple legacy systems,” she continued. “This decision was a strategic enabler of compliance readiness and operational efficiency across the organization.”

This enables teams to have a longitudinal view of patient records and support them throughout their continuum of care. Staff ensured that along with saving their data in an archiving system, they also stored a copy in a data lake for easy access for reporting and research.

NewYork-Presbyterian is affiliated with two medical schools, and this health IT process also allowed staff to provide years’ worth of data across the health system for research purposes. In the past six years, staff have archived more than 120 applications into one system. That amounts to more than 175 terabytes of data and millions of patient records. This has helped the organization achieve “one patient, one record,” as staff say.
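Conceptually, the enterprise master patient index described above is a mapping from each source system’s local identifier to one enterprise identifier. A toy sketch follows; real EMPIs establish the links with probabilistic matching on demographics, whereas here they are assumed already known.

```python
# Toy enterprise master patient index (EMPI): (source system, local ID)
# pairs resolve to one enterprise ID, so archived legacy records and the
# live EHR all point at the same patient. System names and IDs are invented.
empi = {
    ("legacy_lab", "L-889"):   "NYP-0001",
    ("legacy_ehr", "E-4521"):  "NYP-0001",
    ("current_ehr", "C-7310"): "NYP-0001",
}

def enterprise_id(system: str, local_id: str) -> str | None:
    """Resolve a local identifier to the enterprise identifier, if linked."""
    return empi.get((system, local_id))

def linked_records(eid: str) -> list[tuple[str, str]]:
    """Every (system, local ID) pair that belongs to one enterprise identity."""
    return [key for key, value in empi.items() if value == eid]

eid = enterprise_id("legacy_lab", "L-889")
print(eid, linked_records(eid))  # one patient, one record across three systems
```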

Path for improvement

EHR fragmentation can create challenges for quality, efficiency and security – but it also offers a clear path for improvement, Nayar observed. Streamlining systems can reduce unnecessary testing, lighten clinician workload and strengthen care coordination, she said.

“An enterprise-wide governance group – bringing together operations, analytics, security and clinical leaders – can help guide standards and integration strategy,” she explained. “This team can assess where consolidating redundant EHRs or standardizing ancillary systems makes sense.

“A unified patient record – supported by an enterprise master patient index and a longitudinal data repository – forms the backbone of any defragmentation effort,” she continued. “Centralizing data in a shared environment ensures patient information can be reliably matched and accessed across systems.”

Leaders can treat fragmentation as a strategic priority and track progress with clear metrics, such as the completeness of cross-system patient data, the number of clinical systems per site and cybersecurity exposure tied to system sprawl, she concluded.

Follow Bill’s health IT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Article link: https://www.healthcareitnews.com/news/ehr-fragmentation-offers-opportunity-enhance-care-coordination-and-experience

When AI Governance Fails

Posted by timmreardon on 03/15/2026
Posted in: Uncategorized.

AI governance fails when legal, risk, engineering, and business are all looking at the same system and calling it different things.

Treasury just made an uncomfortable point many AI programs are still ignoring: to build workable AI governance, we first need to solve the Tower of Babel problem inside enterprise AI.

One team says “copilot.”
Another says “agent.”
Another says “automation.”
Another says “pilot.”

This is not semantics. It is governance.

Each label implies a different approval path, different control level, different documentation burden, and different escalation route.

So Treasury did not start with a model benchmark.
It started with something more foundational: a shared AI Lexicon and a Financial Services AI Risk Management Framework.

Treasury’s stated concern was inconsistent terminology and uneven risk-management practices across the sector. 
The response was organizational before technical: align language first, then align treatment.

This is why the move matters far beyond financial services.

You cannot govern what functions cannot classify consistently.
And you cannot classify consistently if every department uses different words for the same capability, risk, or use case.

Shared language is what lets you decide (a toy sketch follows this list):
👉 who owns the use case inventory,
👉 who classifies the system,
👉 who approves deployment into higher-risk workflows,
👉 which monitoring thresholds apply,
👉 and who has stop or escalation authority when teams disagree.
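To make that concrete, here is a toy sketch of “align language first, then align treatment”: each team’s local label maps to one canonical category, and the approval path is derived from the category alone. The categories, synonyms, and approval steps are invented for illustration, not taken from Treasury’s lexicon.

```python
# Toy shared AI lexicon: local labels map to one canonical category, and
# governance treatment derives from the category alone. Categories,
# synonyms, and approval paths are invented, not Treasury's.
from enum import Enum

class AISystemType(Enum):
    ASSISTIVE = "assistive"    # a human reviews every output
    AUTONOMOUS = "autonomous"  # acts without per-action human review

SYNONYMS = {
    "copilot": AISystemType.ASSISTIVE,
    "pilot": AISystemType.ASSISTIVE,
    "agent": AISystemType.AUTONOMOUS,
    "automation": AISystemType.AUTONOMOUS,
}

APPROVAL_PATH = {
    AISystemType.ASSISTIVE: ["model risk review"],
    AISystemType.AUTONOMOUS: ["model risk review", "legal sign-off", "CRO approval"],
}

def governance_for(local_label: str) -> list[str]:
    """Classify by the shared lexicon, then derive the approval path from it."""
    canonical = SYNONYMS[local_label.lower()]  # KeyError means: unclassified system
    return APPROVAL_PATH[canonical]

print(governance_for("copilot"))  # ['model risk review']
print(governance_for("agent"))    # adds legal sign-off and CRO approval
```

The design point: "copilot" and "pilot" get the same treatment no matter which team named them, and an unmapped label fails loudly instead of slipping through ungoverned.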

Most firms still think AI governance begins with model testing.

Treasury is signaling that real AI governance begins earlier: with semantic alignment, a common maturity lens, and a framework the whole institution can actually operate.

When your AI system crosses functions, who decides what it is, and therefore how it must be governed?

🔔 𝗙𝗼𝗹𝗹𝗼𝘄 𝗺𝗲 if you want the next two breakdowns: I’ll unpack the actual AI Lexicon first, then the Treasury

Article link: https://www.linkedin.com/posts/michelevaccaro_ai-governance-fails-when-legal-risk-engineering-activity-7436787749581606913-2mKh?

Introduction: Disinformation as a multiplier of existential threat – Bulletin of the Atomic Scientists

Posted by timmreardon on 03/12/2026
Posted in: Uncategorized.

By Dan Drollette Jr | March 12, 2026

Deception, disinformation, and fakery are nothing new in the world.

Long before the current era (BCE), the ancient Greeks used deceptive tactics against their enemies during the Trojan War, when they constructed a gigantic, hollow wooden statue of a horse with a small, select team of soldiers hidden inside. Sometime in the 12th or 13th century BCE, they left the horse—with its hidden cargo—immediately outside the gates of Troy, their enemy’s capital city, and pretended to sail away. The city’s defenders then hauled the horse inside the city walls as a victory trophy—and later that night, the hidden soldiers crept out of the horse and opened the gates of the city to the rest of the Greek army (which had returned under the cover of night), allowing them to enter and utterly destroy the city.

They were so successful, in fact, that the phrase “Trojan horse” entered the lexicon, to describe any strategy that tricks a target into letting an enemy enter a protected inner sanctum.

Thousands of years later, that phrase is still used; in the world of computing, a “Trojan horse attack” describes how a certain type of malicious computer program is designed to disguise itself as a harmless, legitimate piece of software—and trick users into willingly letting it into a secure system where it can then steal data, create backdoors, install other malware, or spy on user activity. In the cyber world, Trojan horse attacks have likely been around since at least 1971, which is when they were mentioned in passing in one of the first Unix software manuals.

But while trickery is old, what is new is the very high level at which realistic-looking and -sounding “deepfake” photos and videos, synthetic feeds, and fabricated accounts can now be made—and the sheer volume that can be produced, on relatively short notice. With the rapid advance of artificial intelligence, or AI, the situation is likely to only get worse, overwhelming any timely evidence-based effort to sort out what is real and what is not in the information ecosystem. “Consequently, AI brings a significant possibility of elevating nuclear escalation risks by amplifying disinformation, overloading analysts, compressing decision timelines, and exploiting cognitive and institutional vulnerabilities in sociotechnical systems for nuclear command and control,” writes analyst (and former Bulletin Science and Security Board member) Herb Lin in his essay for this issue of the magazine.

Lin lays out three hypothetical scenarios where AI could have a role in nuclear escalation and act as a threat multiplier—by shaping perceptions, contaminating intelligence, and destabilizing the nuclear “signaling” that nuclear-armed countries use to indicate their intentions to one another. His article, “AI in the information ecosystem and its impact on nuclear escalation,” is chilling in its graphic, specific, concrete detail, showing how this technology could cause events to rapidly spiral out of control.

And while the scenarios Lin portrays may seem implausible at first, recent history shows otherwise—as can be seen in “At the brink: How Moscow’s ‘dirty bomb’ disinformation campaign risked a NATO-Russia war in October 2022.” The author, Polina Sinovets, argues that Russian president Vladimir Putin used deepfakes and other disinformation to promote phony allegations that Ukraine was going to detonate a “dirty bomb” on the battlefield in the autumn of 2022, in order to justify in advance his own possible use of a Russian tactical nuclear weapon. His goal was to intimidate the Ukrainians and their allies, so that Russian forces would not be wiped out at a particularly critical juncture of the war, when Russia was attempting—and initially bungling—the withdrawal of 20,000 to 30,000 of its troops from a large part of southern Kherson and across the Dnipro River to safety.

And the role of modern disinformation is not just confined to warfare. The deepfake zeitgeist is percolating throughout society, leading to a general distrust of evidence and expertise—which seriously imperils just about everything, from healthcare to climate change to journalism and democracy. In such an environment, conspiracy theories flourish, even when they are unsupported by any hard facts. And without a basic shared reality, it is hard to get much accomplished: “The US Department of Health and Human Services is now run by conspiracy theorists who believe that the American public health system is hiding key data on vaccine safety and who spend their days spreading health misinformation,” as Lisa Fazio notes in her article “How to counter health misinformation when it’s coming from the top.”

But all is not lost. Disinformation and misinformation may be a complex problem with no simple solutions—made particularly difficult when it is spread by people in power (and at a time when social media companies seem to be abandoning any effort at fact-checking). But by targeting the supply, demand, distribution, and uptake of misinformation, it is possible to improve the information environment and help people make informed decisions.

And sometimes, the act of improving the information environment means calling out misinformation, disinformation, and conspiracy-mongering—even when it comes from one’s nearest and dearest. It can feel awkward but still must be done, says Joseph Uscinski, a political science professor at the University of Miami who organized the first international conference on conspiracy theories more than a decade ago and has written two books on the topic: American Conspiracy Theories and Conspiracy Theories and the People Who Believe Them. In his Bulletin interview, Uscinski argues that “Being tolerant and compassionate [about disinformation-riddled conspiracy thinking] isn’t the same as pretending that their behavior isn’t their behavior… I have compassion for them, but I hold them responsible for their beliefs and behaviors.”

Article link: https://thebulletin.org/premium/2026-03/introduction-disinformation-as-a-multiplier-of-existential-threat/

AI is reinventing hiring — with the same old biases. Here’s how to avoid that trap – MIT Sloan

Posted by timmreardon on 03/08/2026
Posted in: Uncategorized.

By Emilio J. Castilla

Dec 15, 2025

In this opinion piece, MIT Sloan professor Emilio J. Castilla argues that:

  • Algorithms promise objectivity, but in hiring, they’re learning human biases all too well.
  • Until we build fairer systems for defining and rewarding talent, algorithms will simply mirror the inequities and unfairness we have yet to correct.
  • The AI hiring revolution doesn’t have to be a story of automated bias. Asking tough questions before automating recruitment and selection can lead to fairer systems.

In my MIT Sloan classroom, I often ask executives and MBAs, “Who here believes AI can eliminate bias and unfairness in hiring?” Most hands go up. Then I show them the data, and their optimism fades.

One example: Amazon was forced to scrap its AI-driven recruitment tool after discovering that it penalized resumes containing the word “women” — as in “women’s chess club captain” or “women’s college.”

Another case: HireVue’s speech recognition algorithms, used by more than 700 companies, including Goldman Sachs and Unilever, were designed to assess candidates’ proficiency in speaking English. But research found that those algorithms disadvantaged non-white and deaf applicants.

Those are not isolated events; they are warnings — especially considering that the market for AI screening tools in hiring is projected to surpass $1 billion by 2027, with an estimated 87% of companies already having deployed these systems. 

The appeal is clear: faster screening, lower costs, and the promise of bias-free hiring decisions. But the reality is more complex — and far more troubling.

The problem: Bad data in

AI tools don’t operate in a vacuum. They learn from existing data — which can be incomplete, poorly coded, or shaped by decades of exclusion and inequality. Feed this data into a machine, and the results aren’t fair. They represent bias and inefficiency at scale.

Some AI tools have downgraded resumes from graduates of historically Black colleges and women’s colleges because those schools haven’t traditionally fed into white-collar pipelines. Others have penalized candidates with gaps in employment, disadvantaging parents — especially mothers — who paused their careers for caregiving. What appears to be an objective evaluation is really a rerun of old prejudices, stereotypes, and other hiring mistakes, now stamped with the authority of data science.

Beware the “aura of neutrality”

This is the paradox of algorithmic meritocracy. Train an AI system on past hiring decisions — who passed the first screening, who got an interview, who was hired, and who was promoted — and it won’t necessarily learn fairness. But it will learn patterns that were likely shaped by flawed human assumptions.

And because these systems are marketed as “data-driven,” their decisions are harder to challenge. A manager’s judgment can be questioned; an algorithm’s ranking arrives with an aura of neutrality. We are teaching AI tools to potentially perpetuate every mistake, every prejudice, every lazy assumption that has shaped generations of bad decisions.

First, check your assumptions

In my 2025 book, “The Meritocracy Paradox,” I argue that organizations invoking meritocracy without addressing structural challenges risk deepening the very gaps they seek to close. The same holds true for AI. Before we let AI automate hiring decisions, we need to carefully examine the data and the assumptions being encoded into these systems.

That means asking tough questions before automating candidate recruitment and selection: What data are we encoding? What processes are these algorithms built on, and are they still relevant to our organization’s needs? Who defines merit? Whose career paths are rewarded — or ignored?

AI won’t fix the problem of bias and inefficiency in hiring, because the problem isn’t technological. It’s human. Until we build fairer systems for defining and rewarding talent, algorithms will simply mirror the inequities and unfairness we have yet to correct.

AI as a turning point

The AI hiring revolution doesn’t have to be a story of automated bias or unfairness. It can be a turning point — a chance to reset how organizations define, measure, and reward talent, with the promise of employment opportunities for all. But that requires humility about what algorithms can — and cannot — do. Instead of using AI to avoid hard questions, we should use it to expose where our assumptions fall short and to locate and target issues in our talent management strategies.

That means engaging in continuous monitoring to catch inequities and inefficiencies, not executing one-time fixes. If we fail to confront these issues, the promise of “bias-free” AI will remain just that — a promise. And yesterday’s biases and stereotypes will quietly shape tomorrow’s workforce — one resume at a time.

Emilio J. Castilla is a professor of work and organization studies at MIT Sloan, co-director of the MIT Institute for Work and Employment Research, and author of “The Meritocracy Paradox: Where Talent Management Strategies Go Wrong and How to Fix Them” (Columbia University Press, 2025). Castilla’s research focuses on the organizational and social aspects of work and employment, with an emphasis on recruitment, hiring, development, and career management, as well as on the impact of teamwork and social relations on organizational performance and innovation. Recent work includes the role of worker voice in successful AI implementations and an examination of the effect of gendered language in job postings.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/ai-reinventing-hiring-same-old-biases-heres-how-to-avoid-trap?

Fiscal Year 2025 Year In Review – PEO DHMS

Posted by timmreardon on 02/26/2026
Posted in: Uncategorized.

TBT to a year of progress, partnership, and purpose.

PEO DHMS is proud to share our Fiscal Year 2025 Year In Review: Partner. Innovate. Optimize. This report highlights the milestones we achieved in support of the Military Health System, including strengthening collaboration, delivering modernized digital health capabilities, and enhancing enterprise performance to better serve Service members and their families.

From advancing interoperability and data integration to supporting operational medicine worldwide, our team remains focused on driving innovation that improves readiness and health outcomes.

Explore the FY2025 Year In Review: https://lnkd.in/gRWQgEjB

“𝗦𝗼𝗰𝗶𝗮𝗹 𝗠𝗲𝗱𝗶𝗮 𝗠𝗮𝗻𝗶𝗽𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝗦𝗮𝗹𝗲” – NATO Strategic Communications COE

Posted by timmreardon on 02/26/2026
Posted in: Uncategorized.

In 2025, we stress-tested the social media ecosystem. As part of the “𝗦𝗼𝗰𝗶𝗮𝗹 𝗠𝗲𝗱𝗶𝗮 𝗠𝗮𝗻𝗶𝗽𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝗦𝗮𝗹𝗲” experiment we examined how easy it is to buy inauthentic engagement and how effectively platforms detect and remove it.

𝗪𝗵𝗮𝘁 𝘄𝗲 𝗱𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝗲𝗱:
🔹 More than 30,000 inauthentic accounts generated over 100,000 fake engagements – all for a relatively small budget.
🔹 While enforcement improved compared to previous years, manipulation remains widely accessible, scalable, and increasingly sophisticated.
🔹 AI-enabled orchestration allowed fully automated content production and distribution.
🔹 Ad manipulation, although more expensive than organic engagement, remains feasible.
🔹 Cryptocurrency infrastructure continues to provide low-visibility payment channels.
🔹 A shift in amplified narratives toward military themes, particularly pro-China content, was observed across multiple platforms.
🔹 Platforms are improving but the manipulation market is adapting just as quickly.

🔎 Main takeaways:
1️⃣ 𝗘𝗻𝗳𝗼𝗿𝗰𝗲𝗺𝗲𝗻𝘁 𝗶𝘀 𝗶𝗺𝗽𝗿𝗼𝘃𝗶𝗻𝗴, 𝗯𝘂𝘁 𝗻𝗼𝘁 𝗳𝗮𝘀𝘁 𝗲𝗻𝗼𝘂𝗴𝗵
Account and engagement removals increased compared to previous years, yet large portions of fake activity still remain online weeks later. Detection remains uneven across platforms.
2️⃣ 𝗔𝗜 𝗵𝗮𝘀 𝗹𝗼𝘄𝗲𝗿𝗲𝗱 𝘁𝗵𝗲 𝗯𝗮𝗿𝗿𝗶𝗲𝗿 𝘁𝗼 𝗶𝗻𝗳𝗹𝘂𝗲𝗻𝗰𝗲 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀
Fully automated workflows can now generate and distribute content across platforms without human intervention. Modern bots no longer rely on spam volume, they embed themselves into real conversations using AI-generated, context-aware content.
3️⃣ 𝗧𝗵𝗲 𝗺𝗮𝗻𝗶𝗽𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝗲𝗰𝗼𝗻𝗼𝗺𝘆 𝗶𝘀 𝗿𝗲𝘀𝗶𝗹𝗶𝗲𝗻𝘁 𝗮𝗻𝗱 𝘀𝗰𝗮𝗹𝗮𝗯𝗹𝗲
Low costs, accessible providers, and crypto-based payments enable sustained commercial manipulation. Some providers processed six-figure USD volumes over short periods, indicating consistent demand and operational scale.

The core challenge is no longer just spam detection. It is behavioural, financial, and cross-platform coordination detection in an environment where influence operations are becoming cheaper, smarter, and harder to attribute.

Read the full report here: https://stratcomcoe.org/publications/social-media-manipulation-for-sale-2025-experiment-on-platform-capabilities-to-detect-and-counter-inauthentic-social-media-engagement/338

#SocialMedia #Disinformation #AI #StratCom
