healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

The Very Uncertain Future of Arms Control – Bulletin of the Atomic Scientists

Posted by timmreardon on 05/13/2026
Posted in: Uncategorized.

Read More: https://thebulletin.org/magazine/2026-05/?

Now Available: Expanded and Enhanced International Health Care System Profiles – Commonwealth Fund

Posted by timmreardon on 05/13/2026
Posted in: Uncategorized.

May 12, 2026

The Commonwealth Fund’s newly updated, enhanced, and expanded International Health Care System Profiles are now available. This rich resource now includes 31 countries across six continents to reflect the variety of approaches to health care around the world. Explore the profiles through our redesigned interactive format, filled with graphics and options to compare health systems side-by-side.

With 11 new countries in the 2026 edition — ranging from Mexico and Spain to Nigeria, Pakistan, and Vietnam — you’ll be able to learn how systems operating under vastly different resource constraints, governance structures, and historical contexts provide health care.

Every profile opens with a snapshot of key statistics and features data visualizations and expert voices throughout. For each country, you’ll learn:

  • Who gets health coverage, and for which services?
  • How do doctors get paid? How does the health system recruit and retain its workforce?
  • Is care affordable, or do people have high out-of-pocket costs?
  • What do health outcomes look like? Are there persistent inequities?
  • What innovations and reforms are countries pursuing to meet patients’ needs?

Check out the new profiles now

A Formal Model of How Artificial Intelligence Erodes Human Agency – RAND

Posted by timmreardon on 05/08/2026
Posted in: Uncategorized.

Alvin Moon, Benjamin Boudreaux

RESEARCH | Published Apr 20, 2026

As artificial intelligence (AI) systems assume more decisionmaking roles in government, the economy, and society, a question emerges: Will humans retain the capacity to shape collective outcomes? Several theories suggest that, once human decisionmaking erodes past a certain threshold, the skills, institutions, and political standing needed to reclaim that decisionmaking capacity may no longer exist. However, no widely accepted metrics exist for tracking this erosion. In this report, the authors draw on social choice theory to develop a formal model of how AI erodes collective human agency; they also model decisionmaking in terms of coalitions and propose quantitative metrics for tracking shifts in the distribution of decisionmaking power to identify the point beyond which those shifts could become irreversible.

Key Findings

  • Agency erosion is measurable across domains. Three metrics—distribution of decisive coalitions, minimal coalition size, and composition of minimal coalitions—are applicable to a wide variety of decisionmaking processes. These metrics provide a framework for tracking human agency impacts across domains, comparing changes, and detecting nonlinear acceleration.
  • Distinct mechanisms drive agency erosion. There are three pathways through which AI reduces human agency: human disenfranchisement (fewer humans in decisionmaking roles), AI enfranchisement (AI entities gaining decisionmaking power and changing the composition of decisive groups), and AI agenda control (AI systems shaping which alternatives reach human decisionmakers to consolidate power in unintended ways).
  • A terminal state exists. The mathematical structure of the model identifies a formal end state of agency erosion: a single minimal coalition that is decisive for all choices. This provides a target for monitoring how far the trajectory is from this point of irreversibility.
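
The report's three metrics can be made concrete with a toy sketch (the weighted-voting setup, player names, and weights below are illustrative assumptions, not taken from the RAND model itself): enumerate the minimal decisive coalitions of a decision process, then track how many there are, how small they can be, and what share of their members is human.

```python
from itertools import combinations

def minimal_decisive_coalitions(weights, quota):
    """Enumerate minimal decisive coalitions of a weighted-majority game.

    A coalition is decisive if its combined weight meets the quota; it is
    minimal if removing any single member would make it fall short.
    """
    players = list(weights)
    minimal = []
    for size in range(1, len(players) + 1):
        for coalition in combinations(players, size):
            total = sum(weights[p] for p in coalition)
            if total >= quota and all(total - weights[p] < quota for p in coalition):
                minimal.append(set(coalition))
    return minimal

# Hypothetical decision process: three humans with weight 1, one AI
# delegate with weight 2, and a simple-majority quota of 3.
weights = {"h1": 1, "h2": 1, "h3": 1, "ai": 2}
coalitions = minimal_decisive_coalitions(weights, quota=3)

# The three metrics: distribution of minimal decisive coalitions,
# minimal coalition size, and human composition of each coalition.
n_coalitions = len(coalitions)
min_size = min(len(c) for c in coalitions)
human_share = [len(c - {"ai"}) / len(c) for c in coalitions]
```

In this toy game the AI participant sits in three of the four minimal decisive coalitions, which is exactly the kind of compositional shift the proposed metrics are designed to surface.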

Recommendations

  • Develop agency evaluations. Existing AI evaluations assess capabilities, safety, and alignment, but they do not assess structural effects on human decisionmaking. Researchers should design benchmarks that measure when AI systems reduce the number of humans in decisive coalitions, influence outcomes, or shape which alternatives reach human decisionmakers.
  • Establish human participation thresholds. For high-stakes domains (such as democratic governance, military applications, and critical infrastructure), policymakers should consider minimum requirements for human presence in decisive coalitions. These thresholds should reflect domain-specific requirements for legitimacy and reversibility. Organizations, such as the National Institute of Standards and Technology, could incorporate coalition composition as a measurable dimension of AI risk alongside existing metrics for reliability and security.
  • Monitor coalition composition longitudinally. The danger of gradual disempowerment is that no single change appears catastrophic. Organizations and governments should track the human composition of decisive coalitions across domains over time.
  • Benchmark reversibility capacity. Organizations should assess whether they could restore human decisionmaking if AI-driven agency loss accelerates. Doing so would require maintaining the human expertise, institutional knowledge, and deliberative infrastructure needed to reverse course.

Article link: https://www.rand.org/pubs/research_reports/RRA4817-1.html?

Changes in Nonprofit Hospitals’ Finances, Operations, and Quality of Care After Using Management Consultants – JAMA

Posted by timmreardon on 05/05/2026
Posted in: Uncategorized.
Joseph Dov Bruch, PhD; Cal Chengqi Fang, MPP; Yan Bo Zeng, BA, BS; et al

JAMA

Published Online: May 4, 2026

doi: 10.1001/jama.2026.

Key Points

Question  How much do nonprofit hospitals spend on management consultants, and following the initiation of a management consulting contract, are there changes in hospital finances, operations, or quality of care?

Findings  Nonprofit hospitals in the US (n = 2343) collectively spent more than $7.8 billion on management consulting services from 2009 to 2023. A stacked difference-in-differences design comparing 306 US nonprofit hospitals that used a management consulting firm for the first time with 513 matched hospitals that did not use a management consulting firm during the study period found little evidence of substantial, statistically significant, or systematic changes attributable to management consulting engagements.

Meaning  These findings raise questions about the net value that nonprofit hospitals receive from management consulting services and call for careful examination of these contracts.

Abstract

Importance  The presence of management consultants in the US health care industry has increased dramatically in recent decades and is now higher than in most other sectors of the US economy. Hospitals hire management consultants to provide external expertise and advise on strategic planning, organizational change, cost cutting, and revenue-enhancement activities. Despite the prominence and influence of these firms, there is no empirical evidence in the US documenting either how much is spent on these services or whether that spending leads to measurable improvements.

Objective  To quantify nonprofit hospitals’ spending on management consultants and to evaluate changes in finances, operations, and quality of care following the initiation of a contract with a management consulting firm.

Design, Setting, and Population  Observational study using a stacked difference-in-differences design to compare 306 US nonprofit hospitals that used a management consulting firm for the first time in 2010-2022 with 513 matched hospitals that did not use management consultants during 2009-2023.

Exposure  First use of a management consulting firm.

Main Outcomes and Measures  Financial performance (eg, revenues, expenses, margins, cash reserves, and fixed assets), operational measures (eg, inpatient utilization, staffing, executive and worker compensation, charity care, and community benefits), and quality-of-care measures (eg, claims-based 30-day mortality and readmission for acute myocardial infarction, pneumonia, and stroke).

Results  More than 20% of nonprofit hospitals hired management consultants during the study period. Nonprofit hospitals that hired management consultants paid an average of $15.7 million for their services, and nonprofit hospitals collectively spent more than $7.8 billion on these services from 2009 to 2023. Despite this substantial investment, analyses of hospitals’ financial performance, operational decisions, and claims-based patient outcomes revealed little evidence of substantial, statistically significant, or systematic improvements attributable to consulting engagements. Relative changes were estimated for financial measures, such as net patient revenue (−2.22%; 95% CI, −5.11% to 0.76%; P = .14), operating expenses (−1.07%; 95% CI, −3.56% to 1.49%; P = .41), fixed assets (2.05%; 95% CI, −6.54% to 11.42%; P = .65), bad debt (−6.31%; 95% CI, −19.82% to 9.48%; P = .41), days’ cash on hand (−8.56%; 95% CI, −28.00% to 16.13%; P = .46), total margin (−0.19 [95% CI, −1.20 to 0.82] percentage points; P = .71), and operating margin (0.15 [95% CI, −0.94 to 1.23] percentage points; P = .79). Relative changes were estimated for operational measures, such as inpatient length of stay (1.71%; 95% CI, −0.34% to 3.81%; P = .10) and total inpatient days (0.29%; 95% CI, −2.57% to 3.23%; P = .85). Relative changes for quality-of-care outcomes were also generally not significant. The sole exception was 30-day readmission for patients with stroke (1.37 [95% CI, 0.14 to 2.61] percentage points; P = .03), which was not robust to alternative specifications.

Conclusions and Relevance  Nonprofit hospitals expend substantial resources on management consultants, but there was no evidence of meaningful changes in hospital finances, operations, or quality of care. These findings raise questions about the net value that nonprofit hospitals receive from management consulting services and suggest the need to carefully examine the widespread use of management consultants by hospitals and other organizations across the health care industry.

Introduction

The use of management consultants in the US health care industry has grown dramatically in recent years.1 Management consultants now play a larger role in health care than in most other sectors of the US economy.2 While some parts of the health care industry have long used management consultants, including pharmaceutical companies, health insurers, and other financially motivated health care organizations, management consultants are also used by nonprofit health care organizations, government-run agencies (eg, Veterans Health Administration, Centers for Disease Control and Prevention), and local public health departments.3 The growing role of management consultants in the health care sector has raised public concern, with several recent news stories linking major public health failures to management consulting firms.4–6 On one hand, these services are expensive and may have little value or even cause harm if they sacrifice care in the service of improving financial or operating performance measures. On the other hand, management consultants may provide health care organizations with valuable expertise on how to improve operational efficiency, increase revenues, set strategic direction, and generally enhance management practices.7 If they do improve management practices, the literature suggests this could ultimately improve health care delivery and quality of care for patients.8–10

Despite the increasing role of management consultants in health care, there is almost no empirical research examining how management consultants affect health care organizations. The notable exceptions are studies focused on the UK National Health Service.11–13 Even outside of health care, little is known about the effect of management consultants owing to the difficulty in observing contracts and measuring their consequences. The only research we could identify was an experimental study based in India and a contemporaneous study using Belgian tax data.14,15

We leveraged detailed tax filings for all nonprofit hospitals in the US to provide conservative estimates of spending on management consultants by nonprofit hospitals. We characterized the nature of these management consulting contracts and leveraged a difference-in-differences approach to quantify their impact on hospital finances, operations, and quality of care. This study provides the first empirical analysis of the impact that management consultants have on the US health care industry. To our knowledge, it also provides the first systematic empirical analysis of management consultants in any US industry.

Methods

This research was approved by the institutional review board at the University of Chicago. Results are reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for observational studies.

Data Sources

IRS Form 990

The Internal Revenue Service (IRS) requires all nonprofit entities to report detailed financial information annually, including income statements, balance sheets, and spending on key line items, such as salaries and various community benefits. Importantly, Form 990 also requires nonprofits to detail their 5 largest external contracts, which may include contracts with management consultants. Form 990 data from 2009 to 2023 were obtained from a commercial vendor, Candid.16

Medicare Cost Reports

Hospitals accepting Medicare submit cost reports to Medicare annually. These filings are publicly available and include detailed financial and operational data, such as information on utilization, discharges, revenues, unreimbursed costs, charges, and expenses. This study used Medicare cost report data from 2006 to 2023, compiled by RAND.17

Hospital Consumer Assessment of Healthcare Providers and Systems

This study used data from the 2009-2023 Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS), which surveys patients about their experience during an inpatient stay.18

Medicare Claims

This study used hospital claims for 100% of Medicare fee-for-service beneficiaries between 2009 and 2019 from the Medicare Provider Analysis and Review (MedPAR) files and patient-level enrollment and demographic information from the Medicare Beneficiary Summary Files. We also identified each beneficiary’s comorbidities from the MedPAR files.

S&P Capital IQ Dataset

The S&P Capital IQ dataset was used to collect financial data and market intelligence for businesses from 2009 to 2023.

Outcome Measures

We extracted the following annual financial and operational measures from Medicare cost reports: net patient revenue, operating expenses, total margin, operating margin, cash flow margin, current ratio (current assets/current liabilities), days’ cash on hand, fixed assets (land, buildings, and equipment), average inpatient length of stay, total inpatient days, Medicaid inpatient days, average worker salary, and number of employees on payroll.

We extracted the following operational outcomes from IRS Form 990 data: chief executive officer (CEO) salary, average salary of the top 5 executives, bad debt, total charity care, and total community benefits. We extracted 2 operational outcomes from the HCAHPS data: overall patient experience score and patient recommendation score. Finally, we used Medicare claims to construct 30-day mortality and readmission outcomes for patients admitted into the study hospitals with a primary diagnosis of acute myocardial infarction, stroke, or pneumonia.

Identifying Management Consulting Contracts

IRS Form 990 data were identified for 2343 nonprofit hospitals in the US (eAppendix 1 in Supplement 1). Each Form 990 includes information on the 5 highest compensated external contractors that received more than $100 000, including the amount paid and a description of the services provided. Nearly 80% of hospital-years in the sample indicate 5 or fewer contracts exceeding the $100 000 threshold. For these, the Form 990 captures all contracts above the threshold. For the remainder, there may be some management consulting contracts we did not observe because they are outside the 5 largest.

We then used information from S&P Capital IQ to determine which contractors were management consultants (eAppendix 2 in Supplement 1). The IRS Form 990 also includes a short description of each contract, which was used to exclude contracts with management consultants that were definitively not for management consulting services. For example, contracts with descriptions focused entirely on accounting and auditing services were excluded. Data were collected on other types of consultants, such as human resources consultants.

Drawing on broader management theory for why firms hire management consultants (eTable 1 in Supplement 1), we aimed to characterize the specific objective of these contracts into 1 or more of 5 categories: improving quality of care, reorganizing staffing, enhancing financial performance, integrating or advancing technological capabilities, and assisting with a merger or acquisition (eTable 2 in Supplement 1). However, as IRS Form 990 data include only a short description of each contract, we looked to news stories, public relations material, and publicly available hospital records to characterize the purpose of each contract.

Specifically, 1 analyst conducted structured web searches using both traditional search engines and artificial intelligence search tools (deep research) to identify publicly available information on the services provided (eAppendix 3 in Supplement 1). The analyst verified all information obtained by deep research against original sources. A second analyst then validated all categorizations. For contracts that could not be characterized through this process, a second analyst manually classified contract service descriptions using IRS Form 990. Classifications were independently cross-validated by a third analyst.

Statistical Analysis

We used a stacked difference-in-differences estimator to assess the impact of management consultants on hospital financial, operational, and quality-of-care outcomes (eAppendix 4 in Supplement 1). This approach relies on the parallel-trends assumption that the average trends followed by matched hospitals not using management consultants approximate the average trends that hospitals using management consultants would have followed had they not used management consultants. The treatment event was defined as the first year in which a hospital was observed to contract with a management consulting firm.

We excluded treatment events in 2009 to ensure that each treatment event has at least 1 pre-event year without a management consulting contract. We also excluded treatment events in 2023 to ensure that each treatment event had at least 1 post–event year observation. We included hospitals’ observations from 4 years prior to each treatment event until 4 years after the treatment event (eAppendix 5 in Supplement 1).
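
As a rough illustration of the design, the stacked approach can be sketched on simulated data (the simulated outcome, the simplified fixed-effects specification, and all variable names below are assumptions for illustration; the paper's actual estimator is richer):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical stacked panel: one "stack" per treatment event, each holding
# the treated hospital and 2 matched controls over event time -4..+4,
# with event time 0 (the treatment year) excluded.
rows = []
for stack in range(20):
    for hosp in range(3):                      # hosp 0 is the treated unit
        treated = int(hosp == 0)
        for t in (-4, -3, -2, -1, 1, 2, 3, 4):
            y = 10 + 0.1 * t + rng.normal(scale=0.5)   # no true effect
            rows.append(dict(stack=stack, hospital=f"{stack}-{hosp}",
                             event_time=t, treated=treated,
                             post=int(t > 0), y=y))
df = pd.DataFrame(rows)

# Simplified stacked DiD: stack fixed effects plus treated, post, and
# their interaction; standard errors clustered by hospital. The pooled
# effect is the coefficient on treated:post.
fit = smf.ols("y ~ treated * post + C(stack)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hospital"]})
effect = fit.params["treated:post"]            # near zero by construction
```

Because the simulated data contain no treatment effect, the estimated interaction should be close to zero, mirroring the null pattern the study reports.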

To construct a valid control group, a set of matched hospitals was identified that did not use management consultants during the study period. Specifically, we identified up to 2 nonprofit hospitals not using management consultants for each hospital using management consultants by matching on total beds, total inpatient discharges, and operating margin at the start of the event window (event time −4) (eAppendix 6 in Supplement 1).
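
The matching step might be sketched as nearest-neighbor selection on standardized covariates (a hypothetical implementation; the paper does not specify its exact matching algorithm or distance metric):

```python
import numpy as np

rng = np.random.default_rng(1)

def match_controls(treated, controls, k=2):
    """For each treated hospital, return indices of the k nearest control
    hospitals by Euclidean distance on standardized covariates."""
    pooled = np.vstack([treated, controls])
    mu, sd = pooled.mean(axis=0), pooled.std(axis=0)
    t = (treated - mu) / sd
    c = (controls - mu) / sd
    dists = np.linalg.norm(t[:, None, :] - c[None, :, :], axis=2)
    return np.argsort(dists, axis=1)[:, :k]

# Hypothetical covariates at event time -4: total beds, inpatient
# discharges, and operating margin for 5 treated and 50 control hospitals.
treated = rng.normal(size=(5, 3))
controls = rng.normal(size=(50, 3))
matches = match_controls(treated, controls)    # shape (5, 2)
```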

We identified 306 nonprofit hospitals with treatment events—ie, instances in which a hospital contracted with management consultants for the first time—and matched these cases to 513 hospital observations (among 404 unique hospitals) that did not use management consultants from 2009 to 2023. Following convention, all financial and operational measures were winsorized at the 2.5th and 97.5th percentiles.19,20
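
Winsorizing at the 2.5th and 97.5th percentiles can be expressed in a few lines (a generic sketch, not the authors' code):

```python
import numpy as np

def winsorize(x, lower=2.5, upper=97.5):
    """Clip values below/above the given percentiles of the sample."""
    lo, hi = np.percentile(x, [lower, upper])
    return np.clip(x, lo, hi)

# A single extreme value gets pulled in to the 97.5th percentile.
x = np.concatenate([np.arange(100.0), [1e9]])
w = winsorize(x)
```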

When modeling quality-of-care outcomes at the patient level, we adjusted for patient sex, age, race, ethnicity, dual eligibility status, and Charlson-Elixhauser Comorbidity Index (eAppendix 7 in Supplement 1).

For the majority of the analysis of financial and operational outcomes, we applied a log transformation to outcomes and report regression estimates as percentage changes. For some outcomes, such as profit margins or other measures with negative values, we did not apply a log transformation, as doing so would be inappropriate.
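
The conversion from a log-scale coefficient to a reported percentage change follows the standard formula 100 × (exp(β) − 1); a small sketch, using the paper's net patient revenue estimate as the example value:

```python
import numpy as np

def coef_to_pct_change(beta):
    """Convert a log-outcome regression coefficient into the implied
    percentage change: 100 * (exp(beta) - 1)."""
    return 100 * np.expm1(beta)

# E.g., the reported -2.22% change in net patient revenue corresponds to
# a log-scale coefficient of log(1 - 0.0222).
beta = np.log1p(-0.0222)
pct = coef_to_pct_change(beta)                 # ≈ -2.22
```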

As a sensitivity analysis, we performed the staggered difference-in-differences analyses using a Callaway and Sant’Anna estimator.21 The analysis was also performed without matching, using matching based on the full pretreatment period rather than only event time −4, and after eliminating observations during the COVID-19 pandemic. We also demonstrated the robustness of the findings to excluding large firms known for providing accounting services in addition to management consulting services, as well as to limiting the sample to short-term general hospitals. We also performed various subgroup analyses, including based on the intensity of the contract (defined as contract expenditure per hospital bed) and by the objective of the contract. Because sample sizes for analyses based on the objective of the contract were particularly small, P values were computed using a permutation test.
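
A permutation test for a small-subgroup contrast can be sketched as follows (simulated data and a simple difference-in-means statistic; the authors' actual test statistic may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

def permutation_pvalue(a, b, n_perm=2000):
    """Two-sided permutation P value for a difference in means between
    two small samples, computed by reshuffling group labels."""
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    n_a = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:n_a].mean() - pooled[n_a:].mean()) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical subgroup outcomes drawn from the same distribution, so
# the resulting P value should be unremarkable.
p = permutation_pvalue(rng.normal(size=12), rng.normal(size=12))
```

Because the null distribution is built from the data themselves, this approach remains valid even when subgroup samples are too small for asymptotic approximations.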

Reported confidence intervals and P values were not adjusted for multiple comparison testing. However, the statistical significance of estimates is separately reported based on the Holm-Bonferroni adjustment for multiple comparisons across 26 tests at α = .05.
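
The Holm-Bonferroni step-down procedure can be sketched directly; with 26 tests, even the smallest reported P value (.03) falls well above the first threshold of .05/26 ≈ .0019:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Return which hypotheses are rejected under the Holm step-down
    procedure: compare the j-th smallest P value to alpha / (m - j + 1)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                    # all larger P values also fail
    return reject

# With 26 outcomes and a smallest P value of .03, nothing survives,
# since .03 > .05 / 26.
pvals = [0.03] + [0.5] * 25
any(holm_bonferroni(pvals))          # False
```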

Results

In total, nonprofit hospitals in the US (n = 2343) spent $732.9 million on management consultants in 2023, up from $273.2 million in 2009 (Figure 1). Approximately 21% of nonprofit hospitals had a management consulting contract during the study time frame, with an average of 77.5 nonprofit hospitals using management consultants each year (eFigure 1 in Supplement 1). On average, these hospitals paid $15.7 million (SD, $46.1 million) for management consulting services, with each contract costing an average of $6.2 million (SD, $14.8 million). Management consulting contracts lasted an average of 1.4 years (SD, 0.8 years), and the median number of management consulting contracts during the study period was 1.

In panel A, dots connected by the line represent the total number of contracts active in each year, including both newly initiated contracts and contracts that remained ongoing from previous years. In panel C, diamonds indicate means, horizontal lines indicate medians, box tops and bottoms indicate IQRs, vertical whiskers extend to the largest and smallest observations within 1.5 times the IQRs, and observations beyond that are shown as individual points. In panels B and C, spending values are inflation adjusted to 2023 US dollars using the Consumer Price Index.

Annual spending on consultants other than management consultants also grew over the study period, from $635.4 million in 2009 to almost $1.7 billion in 2023, totaling $17.2 billion across the entire study time frame (eFigure 2 in Supplement 1). The management consulting firms that were paid the most during the study period included Deloitte ($1.2 billion), Accenture ($1.2 billion), Huron Consulting Group ($1.0 billion), PricewaterhouseCoopers ($0.8 billion), Premier ($0.6 billion), and McKinsey & Company ($0.4 billion) (eFigure 3 in Supplement 1).

Hospitals using management consultants were geographically dispersed across the country: Midwest (27.2%), Northeast (31.1%), South (25.6%), and West (16.1%) (Table). On average, these hospitals were similar to the matched sample of hospitals not using management consultants on some dimensions, such as net income ($14.4 million vs $13.7 million, respectively; P = .70) and inpatient length of stay (4.8 vs 4.7 days, respectively; P = .36), but differed on other dimensions, such as operating margin (1.1% vs 2.8%, respectively; P = .004), charity care ($19.3 million vs $14.6 million, respectively; P = .001), and fixed assets ($145.6 million vs $114.2 million, respectively; P = .001).

The relative changes between hospitals using management consultants and those not using management consultants for all financial outcome measures were not statistically significant, including net patient revenue (−2.22%; 95% CI, −5.11% to 0.76%; P = .14), operating expenses (−1.07%; 95% CI, −3.56% to 1.49%; P = .41), fixed assets (2.05%; 95% CI, −6.54% to 11.42%; P = .65), bad debt (−6.31%; 95% CI, −19.82% to 9.48%; P = .41), days’ cash on hand (−8.56%; 95% CI, −28.00% to 16.13%; P = .46), total margin (−0.19 percentage points; 95% CI, −1.20 to 0.82 percentage points; P = .71), operating margin (0.15 percentage points; 95% CI, −0.94 to 1.23 percentage points; P = .79), cash flow margin (−0.78 percentage points; 95% CI, −2.85 to 1.29 percentage points; P = .46), and current ratio (0.02; 95% CI, −0.21 to 0.24; P = .88) (Figure 2).

After Holm-Bonferroni adjustment for multiple comparisons, no difference-in-differences estimate remained statistically significant at α = .05. Coefficients for log-transformed outcomes (panel A) were exponentiated, 1 was subtracted, and values were multiplied by 100, which were interpreted as percentage changes. Coefficients for non–log-transformed outcomes (panel B) were interpreted as estimated relative percentage point or unit changes. Median (IQR) values are reported for hospitals using consultants (treated) at relative time −1 to contextualize the magnitude of the estimated effects. Event time 0 (the treatment year) was excluded from pooled estimates to avoid partial exposure. Net patient revenue indicates total revenue from patient care services after contractual adjustments. Operating expenses reflect total costs of hospital operations. Fixed assets include land, buildings, and equipment (net of depreciation). Bad debt reflects unpaid patient care charges deemed uncollectible. Days’ cash on hand reflects the number of days a hospital can operate using available cash reserves. Inpatient length of stay represents the number of days patients remain hospitalized per admission. Total inpatient days represent aggregate days of inpatient care; Medicaid inpatient days reflect the subset attributable to Medicaid patients. Worker salary equals total salary expense divided by number of employees. Data on chief executive officer (CEO) salaries and average top 5 executive salaries were obtained from Internal Revenue Service Form 990. Total charity care reflects free or discounted care provided to eligible patients; total community benefits reflect total spending on community-oriented programs including charity care. Total margin represents overall net income divided by total revenue, and operating margin represents income from patient care operations divided by operating revenue. Cash flow margin reflects operating cash flow relative to revenue. Current ratio equals current assets divided by current liabilities and measures short-term liquidity.

Relative changes for operational outcomes were not statistically significant for almost all outcomes, including inpatient length of stay (1.71%; 95% CI, −0.34% to 3.81%; P = .10), total inpatient days (0.29%; 95% CI, −2.57% to 3.23%; P = .85), Medicaid inpatient days (−1.06%; 95% CI, −10.43% to 9.28%; P = .83), number of employees on payroll (−0.24%; 95% CI, −2.77% to 2.36%; P = .86), average CEO salary (9.25%; 95% CI, −1.28% to 20.90%; P = .09), average top 5 executive salary (−3.18%; 95% CI, −10.76% to 5.04%; P = .44), charity care (1.85%; 95% CI, −13.39% to 19.77%; P = .82), community benefit (3.10%; 95% CI, −8.64% to 16.36%; P = .62), overall patient experience score (0.00; 95% CI, −0.01 to 0.01; P = .48), and patient recommendation score (0.00; 95% CI, −0.01 to 0.01; P = .61). The only statistically significant change was a small decline in average worker salary (−1.03%; 95% CI, −2.03% to −0.01%; P = .047) (Figure 2).

The relative changes in most patient health outcomes were not statistically significant, including 30-day mortality (−0.29 percentage points; 95% CI, −0.90 to 0.33 percentage points; P = .36) and readmission (0.82 percentage points; 95% CI, −0.05 to 1.68 percentage points; P = .07) for patients with acute myocardial infarction, 30-day mortality (−0.04 percentage points; 95% CI, −0.58 to 0.50 percentage points; P = .88) and readmission (−0.25 percentage points; 95% CI, −0.88 to 0.38 percentage points; P = .44) for patients with pneumonia, and 30-day mortality (0.47 percentage points; 95% CI, −0.22 to 1.15 percentage points; P = .18) for patients with stroke (Figure 3). The only notable exception was a relative increase in 30-day readmission for patients with stroke (1.37 percentage points; 95% CI, 0.14 to 2.61 percentage points; P = .03) (Figure 3).

After Holm-Bonferroni adjustment for multiple comparisons, no difference-in-differences estimate remained statistically significant at α = .05. Estimates represent absolute percentage point changes. Mean percentages are reported for hospitals using management consultants (treated) at relative time −1 to contextualize the magnitude of the estimated effects. Event time 0 (the treatment year) was excluded from pooled estimates to avoid partial exposure.

All financial performance, operational measures, and quality-of-care measures were not statistically significant after applying the Holm-Bonferroni method for 26 outcomes at the significance level of α = .05.

We also estimated event studies for more than 2 dozen financial, operational, and quality-of-care measures and did not find evidence of systematic divergence in trends prior to treatment (eFigures 4 and 5 in Supplement 1). This lends credence to the parallel-trends assumption underlying the difference-in-differences estimator. We also did not find evidence of preceding shocks that might bias estimates, such as CEO or board turnover or mergers/acquisitions (eFigure 4 in Supplement 1).

Estimates were also qualitatively similar when using a Callaway and Sant’Anna estimator (eFigures 6 and 7 in Supplement 1), estimating without matching (eFigures 8 and 9 in Supplement 1), matching on the average of all preperiods (event times −1 to −4) (eFigures 10 and 11 in Supplement 1), eliminating observations during the COVID-19 pandemic (eFigure 12 in Supplement 1), excluding management consulting firms known for providing accounting services (eFigures 13 and 14 in Supplement 1), restricting the sample to only short-term general hospitals (eFigures 15 and 16 in Supplement 1), and stratifying by the size of the contract (eFigures 17-20 in Supplement 1).

We were able to identify the nature of 108 management consulting contracts. Of these contracts, 64.8% were aimed at enhancing financial performance, 27.8% at integrating or advancing technological capabilities, 24.1% at improving quality of care, 13.9% at reorganizing staffing, and 13.0% at assisting in a merger or acquisition. (These objectives were not mutually exclusive.) Estimates were mostly null regardless of objective (eFigures 21 and 22 in Supplement 1).

Discussion

Hospitals’ increasing use of management consultants is emblematic of the health care industry’s trend toward corporatized behavior.22 This study shows that these practices are common even among nonprofit hospitals, at least 21.3% of which have held paid contracts with management consultants in recent years. Between 2009 and 2023, these nonprofit hospitals spent $7.8 billion on management consulting services, with many contracts aimed at improving financial performance. Such large outlays have real opportunity costs—for example, among nonprofit hospitals that hired management consulting firms, the average expenditure of $15.7 million could alternatively fund the annual salaries of approximately 46 hospitalists or 167 registered nurses.23,24

Given these hospitals’ nonprofit status and their importance to communities, there is a strong public interest in evaluating the effects of their spending on management consultants. We did not find evidence of clear, statistically discernible benefits or harms to hospitals’ financial, operational, or quality-of-care outcomes from these major expenditures. It may be that changes induced by management consultants are simply too small to detect, that they affect dimensions of performance that we did not observe, or that the activities of these contracts are sufficiently varied as to make effects on any individual dimension difficult to discern. Alternatively, it may be that management consultants are primarily validating decisions or otherwise making recommendations that are very similar to what the hospital would have done without the contract.

Future research may build on this study to deepen understanding of the role of management consultants in the health care industry. First, qualitative research is required to understand the precise nature of these contracts and to determine appropriate measures of benefit. Second, while large effects were ruled out in this study, future research might aim to rule out small effects as well. Third, future research might aim to study the use of management consultants by for-profit hospitals or in other parts of the health care industry, such as by local, state, and federal health agencies. Finally, this study examines only management consulting contracts, which represent just 31% of the nearly $2 billion that nonprofit hospitals spend on consultants each year. Examining other consulting contracts may be an important area for future research.

There are several limitations in this study. First, hospitals using management consultants may be on a different outcomes trajectory than hospitals that do not use management consultants. While event-study estimates do not exhibit concerning pretrends, sharp contemporaneous shocks could bias the estimates. We did not find evidence that mergers or acquisitions, CEO turnover, or board changes in the preperiod drove the results, but unobserved shocks cannot be ruled out. Notably, however, for any large bias-inducing shocks to rationalize the estimated null effects, they would need to be systematically correlated with management consultant use in a way that exactly offsets true effects.

Second, we examined only a subset of the outcomes that management consultants might plausibly affect. While this study analyzed extensive outcomes, the possibility of effects on outcomes not measured or analyzed cannot be ruled out.

Third, we studied only hospitals’ first observed contracts with management consultants. Some hospitals have multiple contracts that may occur simultaneously or in sequence. As such, the estimates are most precisely interpreted as relative changes associated with nonprofit hospitals starting to use the services of management consultants. Importantly, the results are specific to nonprofit hospitals and may not generalize to other health care settings.

Fourth, the data include only the 5 highest-paid contracts for each hospital. Although nearly 80% of hospital-years in the sample indicate 5 or fewer contracts, the remaining 20% may have some smaller management consulting contracts that were not observed.

Fifth, there was very little visibility into the exact nature of each contract. We were able to obtain some information on the objectives of 108 contracts but did not discern a clear pattern in estimates across objectives. Most notably, even for contracts aimed at improving financial performance—the largest subgroup that included 44 hospitals—we did not observe systematic differential effects, including for financial measures. In general, however, we urge caution in interpreting these subgroup analyses, as many subgroup samples were small.

Sixth, performing an extensive analysis on a modest number of contracts creates some challenges in interpreting estimates. For some outcomes, only large effects could be ruled out. For others, some differences were observed in magnitude, direction, or significance across specifications or subsamples, as is expected when testing many hypotheses and specifications in finite samples. In line with the primary findings, the most consistent pattern across sensitivity and subgroup analyses was a lack of sizeable, clear, or systematic effects.

Conclusions

Nonprofit hospitals spend considerable amounts of money on management consultants, but this study found no clear evidence of meaningful changes in hospital finances, operations, or quality of care. The findings suggest the need for further research and for careful examination of the widespread use of management consultants by health care organizations.

Article Information

Corresponding Author: Joseph Dov Bruch, PhD, Department of Public Health Sciences, University of Chicago, 5841 S Maryland Ave, Room W-250C, Chicago, IL 60637 (jbruch@bsd.uchicago.edu).

Accepted for Publication: March 23, 2026.

Published Online: May 4, 2026. doi:10.1001/jama.2026.5027

Author Contributions: Mr Fang had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Additionally, Mr Zeng had full access to all of the cost report and Form 990 data in the study and takes responsibility for the integrity of these data and the accuracy of the respective data analyses.

Concept and design: Bruch, Fang, Zeng, Gandhi.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Bruch, Zeng, Gandhi.

Critical review of the manuscript for important intellectual content: All authors.

Statistical analysis: All authors.

Obtained funding: Bruch.

Administrative, technical, or material support: Bruch, Zeng, Parthan, Gandhi.

Supervision: Bruch, Gandhi.

Conflict of Interest Disclosures: Dr Bruch reported receipt of personal fees from the Robert Wood Johnson Foundation; receipt of grants from the Robert Wood Johnson Foundation, Rx Foundation, and Commonwealth Fund; receipt of personal fees from Adasina Social Capital through a grant from the Robert Wood Johnson Foundation; and provision of freelance consulting to McKinsey & Company from 2014 to 2017. No other disclosures were reported.

Data Sharing Statement: See Supplement 2.

Additional Information: Perplexity AI and ChatGPT were used during 2025 to identify additional online information on nonprofit hospital contracts. Dr Bruch takes responsibility for the integrity of the content generated.

References

1. Grand View Research. Healthcare Consulting Services Market Size, Share and Trends Analysis Report, 2024-2030. Grand View Research; 2024.

2. Custom Market Insights. Latest global management consulting market size/share worth USD 897,442.21 million by 2034 at a 6.56% CAGR. Yahoo Finance. Published February 25, 2025. Accessed February 20, 2026. https://finance.yahoo.com/news/latest-global-management-consulting-market-083000592.html

3. Bruch JD, Feldman J, Song Z. Use of private management consultants in public health. BMJ. 2021;374:n2145. doi:10.1136/bmj.n2145

4. Alderman L. France hired McKinsey for its Covid vaccine rollout. Then came the questions. New York Times. Published February 22, 2021. Accessed February 20, 2026. https://www.nytimes.com/2021/02/22/business/france-mckinsey-consultants-covid-vaccine.html

5. Forsythe M, Bogdanich W. McKinsey settles for nearly $600 million over role in opioids crisis. New York Times. Updated November 5, 2021. Accessed February 20, 2026. https://www.nytimes.com/2021/02/03/business/mckinsey-opioids-settlement.html

6. Ferguson C. What went wrong with America’s $44 million vaccine data system? MIT Technology Review. Published January 30, 2021. Accessed February 20, 2026. https://www.technologyreview.com/2021/01/30/1017086/cdc-44-million-vaccine-data-vams-problems/

7. Canato A, Giangreco A. Gurus or wizards? A review of the role of management consultants. Eur Manag Rev. 2011;8(4):231-244. doi:10.1111/j.1740-4762.2011.01021.x

8. Tsai TC, Jha AK, Gawande AA, Huckman RS, Bloom N, Sadun R. Hospital board and management practices are strongly related to hospital performance on clinical quality metrics. Health Aff (Millwood). 2015;34(8):1304-1311. doi:10.1377/hlthaff.2014.1282

9. Salehnejad R, Ali M, Proudlove NC. The impact of management practices on relative patient mortality: evidence from public hospitals. Health Serv Manage Res. 2022;35(4):240-250. doi:10.1177/09514848211068627

10. McConnell KJ, Lindrooth RC, Wholey DR, Maddox TM, Bloom N. Management practices and the quality of care in cardiac units. JAMA Intern Med. 2013;173(8):684-692. doi:10.1001/jamainternmed.2013.3577

11. Kirkpatrick I, Sturdy AJ, Reguera Alvarado N, Veronesi G. Beyond hollowing out: public sector managers and the use of external management consultants. Public Adm Rev. 2023;83(3):537-551. doi:10.1111/puar.13612

12. Kirkpatrick I, Sturdy AJ, Reguera Alvarado N, Blanco-Oliver A, Veronesi G. The impact of management consultants on public service efficiency. Policy Politics. 2019;47(1):77-95. doi:10.1332/030557318X15167881150799

13. Sturdy AJ, Kirkpatrick I, Reguera N, Blanco-Oliver A, Veronesi G. The management consultancy effect: demand inflation and its consequences in the sourcing of external knowledge. Public Adm. 2022;100(3):488-506. doi:10.1111/padm.12712

14. Bloom N, Eifert B, Mahajan A, McKenzie D, Roberts J. Does management matter? Evidence from India. Q J Econ. 2013;128(1):1-51. doi:10.1093/qje/qjs044

15. Bijnens G, Jäger S, Schoefer B. What Does Consulting Do? NBER Working Paper 34072. National Bureau of Economic Research; 2025. Accessed February 20, 2026. https://www.nber.org/system/files/working_papers/w34072/w34072.pdf

16. Candid. Look up nonprofits and identify prospects with Candid’s GuideStar. Accessed November 26, 2025. https://www.guidestar.org/search

17. RAND. CMS hospital cost report data made easy. Accessed November 26, 2025. https://www.hospitaldatasets.org/

18. Chandra A, Finkelstein A, Sacarny A, Syverson C. Health care exceptionalism? Performance and allocation in the US health care sector. Am Econ Rev. 2016;106(8):2110-2144. doi:10.1257/aer.20151080

19. Bruch JD, Gondi S, Song Z. Changes in hospital income, use, and quality associated with private equity acquisition. JAMA Intern Med. 2020;180(11):1428-1435. doi:10.1001/jamainternmed.2020.3552

20. Joynt KE, Orav EJ, Jha AK. Association between hospital conversions to for-profit status and clinical and economic outcomes. JAMA. 2014;312(16):1644-1652. doi:10.1001/jama.2014.13336

21. Callaway B, Sant’Anna PH. Difference-in-differences with multiple time periods. J Econom. 2021;225(2):200-230. doi:10.1016/j.jeconom.2020.12.001

22. Fox DM. Policy commercializing nonprofits in health: the history of a paradox from the 19th century to the ACA. Milbank Q. 2015;93(1):179-210. doi:10.1111/1468-0009.12109

23. Today’s Hospitalist. Executive Summary Report of Research Findings From the Today’s Hospitalist 2023 Compensation & Career Survey. Published 2023. Accessed February 20, 2026. https://todayshospitalist.com/wp-content/uploads/2024/06/TH-2023-Adult-Hospitalist-C-C-Exec-Summary.pdf

24. US Bureau of Labor Statistics. Occupational Outlook Handbook: registered nurses. Accessed November 26, 2025. https://www.bls.gov/ooh/healthcare/registered-nurses.htm


Article link: https://jamanetwork.com/journals/jama/fullarticle/2848641

The Price of Advice—Do Management Consultants Deliver Value to Nonprofit Hospitals? – JAMA

Posted by timmreardon on 05/05/2026
Posted in: Uncategorized.

Thomas C. Buchmueller, PhD1,2,3; Helen G. Levy, PhD1,2

  1. Author Affiliations 
  2. Article Information

JAMA

Published Online: May 4, 2026

doi: 10.1001/jama.2026.4362

In a well-publicized case that stoked public outrage, Providence, a nonprofit health system headquartered in the state of Washington, implemented a program in 2018 called Rev Up to increase revenue collection from patients.1 Following enforcement action by the state’s attorney general, Providence was ultimately forced to refund or forgive nearly $160 million worth of payments and outstanding debt,2 but not before throwing the management consultants behind the program squarely under the bus. “The intent of Rev Up, a program developed with the consulting firm McKinsey & Company, was not to target or pressure those in financial distress…We recognize the tone of the training materials developed by McKinsey was not consistent with our values.”3

Was Providence’s experience with McKinsey an outlier? In this issue of JAMA, Bruch and coauthors4 report on their systematic study of the effects associated with management consultants for nonprofit hospitals. The authors constructed an impressive database that combined detailed financial information from hospitals’ Internal Revenue Service Form 990 with measures of financial performance, operations, and outcomes for all nonprofit hospitals in the US spanning the years 2009 through 2023. In the Form 990 data, it is possible to identify most large contracts ($100 000 or more) with outside consultants and, for a subset of the contracts, to characterize the focus of the engagement.

The analysis shows that over this period, nearly one-quarter of nonprofit hospitals engaged a management consulting firm at least once, with an average total payment of about $6.2 million. In a matched comparison with otherwise similar hospitals that did not engage consultants, the analysis finds that hiring a consulting firm had essentially no effect on hospital finances, operations, or patient outcomes.

A superficial reading of these results is that on average, management consultants do not do much for nonprofit hospitals beyond consuming resources (which is, at least, not as bad as the cautionary tale of Providence and McKinsey). But there is likely more to the story.

An obvious concern with the study’s difference-in-differences approach, which the authors acknowledge and do their best to address, is that hospitals do not randomly decide to spend more than $100 000 on consulting. Hospitals that engage a consulting firm to improve financial performance are likely to include ones facing financial difficulties, which can have spillover effects on health outcomes and patient satisfaction. This would mean that the results may understate positive effects that consultants might have on outcomes. The event history analyses, which show generally similar patterns for hospitals that do and do not engage consultants prior to the start of the consulting contract, are reassuring but do not definitively rule out this type of selection bias. One productive direction for future research would be to investigate the factors that predict a hospital’s decision to hire a consulting firm.

In addition, some of the estimates have wide confidence intervals. Although it is accurate to say that the analysis shows no statistically significant improvements in finances, operations, or quality of care, for several outcomes, fairly large effects cannot be ruled out. For example, engagement of consultants may have increased or decreased Medicaid inpatient days by as much as 10%; charity care may have declined by 13% or increased by 20%; and chief executive officer salaries may have increased by as much as 20%.
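The distinction between "no statistically significant effect" and "no effect" comes down to interval width. A toy illustration, with an invented point estimate and standard error, of how a near-zero estimate can still be consistent with the double-digit percentage ranges quoted above:

```python
# Illustration: a statistically null estimate whose 95% CI still admits
# large effects. The estimate and standard error below are invented.

def ci95(estimate, se):
    """Normal-approximation 95% confidence interval."""
    return (estimate - 1.96 * se, estimate + 1.96 * se)

# A log-point change of +0.02 (~2%) estimated with a standard error of 0.08:
lo, hi = ci95(0.02, 0.08)
print(f"95% CI: [{lo:+.3f}, {hi:+.3f}]")  # spans roughly -14% to +18%
```

Because the interval crosses zero, the estimate is "null", yet declines of more than 10% or increases approaching 20% remain compatible with the data, which is precisely the editorialists' point.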

The article also presents results that distinguish among different types of consulting contracts—quality of care, staffing, finances, technology, or mergers and acquisitions—for the approximately one-third of contracts for which it was possible to identify the nature of the engagement. Although the smaller sample size makes these estimates even more imprecise than the main results, drilling down into the substance of the contracts is likely a promising avenue for future research. For example, one-quarter of the contracts for which the purpose could be identified concerned “integrating or advancing technological capabilities.”4 Given the timing of the data, many of these likely involved implementing electronic health records systems in response to the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act.

On one hand, this study’s null results are in line with a research literature that finds that health care information technology has generally not delivered a large payoff in terms of greater efficiency or improved health outcomes.5 On the other hand, information technology implementation involves large initial outlays (including payments for consulting services), while the benefits may take time to be realized. And the benefits may be in specific areas that, while not reflected in aggregate data, are extremely important. For example, a study found that the implementation of electronic medical records led to a reduction in in-hospital infant mortality, driven by a reduction in deaths from conditions requiring careful monitoring.6 As a result, even successful implementation may not be evident based on the data and research design of this study. Qualitative research could be helpful to better understand the reasons that hospitals hire consultants and whether the engagement was successful in achieving specific goals.

Another important extension will be to consider spending on consultants in the context of overall administrative costs. All hospitals need someone to run them. But are consultants being used judiciously by hospitals that otherwise run a lean administrative shop? Or are consultants a symptom of administrative bloat? Again, there is likely to be heterogeneity, and the marginal value of consultant engagement may be highest at the extremes, where in-house administration is either too much or too little.

As one of the first studies of its kind, this article raises as many questions as it answers. We applaud the authors’ success in laying the groundwork for more research that can improve understanding of how nonprofit hospitals balance financial and social objectives.

Article Information

Corresponding Author: Thomas C. Buchmueller, PhD, Ross School of Business, University of Michigan, 701 Tappan St, Ann Arbor, MI 48104 (tbuch@umich.edu).

Published Online: May 4, 2026. doi:10.1001/jama.2026.4362

Conflict of Interest Disclosures: None reported.

References

1. Silver-Greenberg J, Thomas K. They were entitled to free care. Hospitals hounded them to pay. New York Times. Published September 24, 2022. Accessed February 25, 2026. https://www.nytimes.com/2022/09/24/business/nonprofit-hospitals-poor-patients.html

2. Washington State Office of the Attorney General. Providence must provide $157.8 million in refunds and debt relief for unlawful medical charges to low-income Washingtonians. Published February 1, 2024. Accessed February 25, 2026. https://www.atg.wa.gov/news/news-releases/ag-ferguson-providence-must-provide-1578-million-refunds-and-debt-relief

3. Providence. Statement in response to a story published in the New York Times on September 24, 2022. Accessed February 25, 2026. https://blog.providence.org/regional-blog-news/facts

4. Bruch JD, Fang CC, Zeng YB, Parthan A, Gandhi AD. Changes in nonprofit hospitals’ finances, operations, and quality of care after using management consultants. JAMA. Published online May 4, 2026. doi:10.1001/jama.2026.5027

5. Bronsoler A, Doyle J, Van Reenen J. The impact of health information and communication technology on clinical quality, productivity, and workers. Annu Rev Econ. 2022;14(1):23-46. doi:10.1146/annurev-economics-080921-101909

6. Miller AR, Tucker CE. Can health care information technology save babies? J Polit Econ. 2011;119(2):289-324. doi:10.1086/660083


Article link: https://www.linkedin.com/posts/editorial-by-thomas-buchmueller-phd-share-7457086638964490240-R68d?

When Disregard for Population Health Becomes US Policy – JAMA

Posted by timmreardon on 04/18/2026
Posted in: Uncategorized.

JAMA

Published Online: March 16, 2026

2026;335(14):1205-1206. doi:10.1001/jama.2026.1396

Once, the dominant frustration among US population health experts was inaction, the nation’s failure to enact policies to improve health outcomes. Now the problem is action, the government’s adoption of sweeping policies that overtly threaten population health.

For decades, US disease and mortality rates exceeded those in other high-income countries,1 a gap that widened over time. When US life expectancy flatlined after 2010, experts recommended policies to address the leading causes of death and structural factors that systematically put the health of the US population at risk.2 They called for widening access to health care, alleviating economic stresses on low-income and middle-class households, reducing income inequality, strengthening the social safety net, and tightening regulations to protect public health.

Few of these recommendations were implemented. Such policies are politically unpopular in the US and are opposed by powerful special interests. Although the nation made some progress in addressing the drug and obesity epidemics, too little was done to address structural issues or slow the trajectory. Between 2010 and 2019, all-cause mortality at ages 25 to 64 years increased by 19.6%.3

Too little was done during the COVID-19 pandemic. Other countries outperformed the US in controlling viral transmission and vaccinating their populations. US life expectancy losses were greater than in most high-income countries.1 By 2023, 37 countries had higher life expectancy than the US.4 The high US mortality rates produced an enormous death toll. By one estimate, not having achieved the low mortality rates of peer countries cost 13.3 million US lives between 1984 and 2021.5

Actions by the Trump administration could escalate this crisis. A pivot has occurred: the nation’s inaction in addressing the US health disadvantage has been replaced by something worse, government actions that—intentionally or not—endanger population health. Since taking office, the Trump administration has done the opposite of what experts, policy research, and logic recommend to improve population health. Widening access to health care was recommended, but the Trump administration slashed Medicaid funding by more than $1 trillion and allowed Patient Protection and Affordable Care Act premiums to skyrocket.6 Tighter regulations were recommended, but the administration weakened health and safety regulations in what it called the “biggest deregulatory action in US history.”7

Education and income are the most powerful social determinants of health, but the administration began dismantling the Department of Education and adopted economic policies that tightened the vise on all but the wealthiest households. Job and wage growth slowed, prices increased, and social welfare programs were defunded to finance regressive tax cuts, what some consider the largest wealth transfer in US history.8

The administration’s Make America Healthy Again campaign took positive steps, such as working to speed approvals and lower the costs of prescription drugs. The Secretary of Health and Human Services, Robert F. Kennedy Jr, brought welcome attention to food quality. However, these positive steps occurred against the backdrop of countervailing policies that jeopardized health. The administration began dismantling the nation’s premier health agencies, firing thousands of workers, replacing top scientists with ideologues, and terminating vital programs on disease surveillance, tobacco control, chronic diseases, injury prevention, firearms, primary care, mental health, and more. It cut medical research funding by more than $1 billion and banned work on health inequities and other topics disliked by the president.9

Secretary Kennedy took steps to decrease vaccine use, risking the return of preventable infectious diseases. Inexperienced advisors, who replaced vaccine experts on the Advisory Committee on Immunization Practices, began undoing childhood and COVID-19 vaccine recommendations. Kennedy canceled messenger RNA vaccine research, weakening the nation’s capacity to produce vaccines rapidly in future pandemics.6 Kennedy stoked parental worries about vaccine safety and encouraged states to drop school mandates for childhood immunizations. Levels of vaccine coverage and herd immunity waned.6 Measles cases reached record highs.6

It is as if the government’s policy is to no longer concern itself with the health consequences of its choices. Data collection to document the consequences is also ending. Health agencies have idled dozens of databases.10 Along with cutting food assistance, the administration stopped tracking the prevalence of hunger.11 The Environmental Protection Agency stopped considering the cost of human life in cost-benefit analyses.12

This disregard for population health extends overseas. The administration banned global health research, slashed humanitarian assistance in low-income or low-resource countries, and gutted the US Agency for International Development, on which more than 100 countries depended. These actions could claim more than 14 million lives worldwide by 2030.13 Risking planetary health, the administration promoted fossil fuels and opposed climate mitigation.

To be fair, not everyone sees health as their top priority. Strengthening the economy, lowering taxes, satisfying shareholders, or retaining political office often takes precedence for those in power. Some US residents with fervent beliefs are willing to forgo health to preserve personal autonomy, limit government intrusion, or uphold other ideologic principles. The premise that the administration’s policies will compromise health is disputed. Deregulators consider market forces more effective in optimizing outcomes. Vaccine critics like Kennedy see net gain in reducing vaccine exposure; they apply a different risk-benefit calculus, assigning greater risks to vaccines and fewer benefits than conventional science would suggest. Public health has become politicized. Those who distrust data and mainstream scientists may question claims that current policies are harmful.

Evidence on how current policies are affecting health will take years to gather. Mortality data for 2025 and beyond will be unavailable until at least 2027. However, there are reasons to predict adverse health outcomes. The causal pathways are easy to imagine. Policies that do little to help people get an education, find sustainable employment, or earn livable wages diminish the resources they need to protect their health (eg, eat well, exercise, live in healthy homes and neighborhoods), screen for disease, or obtain care when illnesses occur. Reducing safety net assistance at a time of increasing prices, housing costs, health insurance premiums, and medical bills could deepen economic deprivation, forcing struggling families to neglect their health. Economic precarity and stress can heighten depression, smoking, addiction disorders, domestic violence, and self-harm.

Policies have consequences. Rural hospitals close when Medicaid funding declines. Injuries increase when safety regulations are lifted. Respiratory illnesses worsen when smokestacks emit more pollutants. Disease outbreaks widen as immunization levels wane. Deaths occur when lifesaving research is canceled. If current policies increase mortality rates, the gap in life expectancy between the US and other countries will likely widen further. The list of countries with better health statistics will grow. These are grim predictions, but a nation that removes health protections, heightens exposure to infectious diseases and toxins, slows scientific advances, and restricts access to health care should expect bad outcomes.

The degree to which the public supports, or is even tracking, these developments is unclear. Data are lacking to know how public attitudes are distributed across the population or what people understand about the health implications for themselves or their children. Some percentage of the US population is following and concerned about the tumult at health agencies and the policy drift from conventional science. Some percentage is pleased with what they see. Some are unaware, either uninformed or misinformed about recent developments. Some are uninterested, trusting the authorities to make responsible choices.

Regardless of their views, people deserve to know when policies will increase their risk of experiencing diseases, injuries, or an early death, even if they will dismiss the warning. When policies put lives at stake, health professionals and organizations must speak out. They cannot count on news organizations to keep the public informed. The duty to present the data with scientific rigor and to clarify how policy changes could help or hurt individuals falls on the health and scientific communities. Academic and scientific institutions should build coalitions to safeguard vital data and surveillance programs, conduct independent assessments that forecast the health consequences of policy choices, and communicate their concerns to legislatures, town halls, and media outlets. Although speaking out carries risks in the current climate, the duty to warn remains, even if it invites recrimination or will go unheeded. Informed consent matters. US citizens may be content to live shorter lives than people in other countries and to accept policies that further compromise their health, but they should do so knowingly.

Article Information

Corresponding Author: Steven H. Woolf, MD, MPH, Virginia Commonwealth University School of Medicine, Department of Family Medicine and Population Health, 830 E Main St, Ste 5035, Richmond, VA 23298-0212 (steven.woolf@vcuhealth.org).

Published Online: March 16, 2026. doi:10.1001/jama.2026.1396

Conflict of Interest Disclosures: None reported.

Disclaimer: The views expressed are those of the author and do not represent those of his employer.

References

1. Woolf SH. Falling behind: the growing gap in life expectancy between the United States and other countries, 1933-2021. Am J Public Health. 2023;113(9):970-980. doi:10.2105/AJPH.2023.307310

2. National Research Council; Institute of Medicine. US Health in International Perspective: Shorter Lives, Poorer Health. National Academies Press; 2013.

3. Centers for Disease Control and Prevention. About underlying cause of death, 1999-2020. Accessed January 30, 2026. https://wonder.cdc.gov/ucd-icd10.html

4. United Nations. World population prospects 2024. Accessed November 18, 2025. https://population.un.org/wpp/

5. Bor J, Stokes AC, Raifman J, et al. Missing Americans: early death in the United States—1933-2021. Proc Natl Acad Sci U S A Nexus. 2023;2(6):pgad173. doi:10.1093/pnasnexus/pgad173

6. Woolf SH. Evaluating how the Trump administration will affect health outcomes. Lancet. 2025;406(10510):1320-1322. doi:10.1016/S0140-6736(25)01849-5

7. US Environmental Protection Agency. EPA launches biggest deregulatory action in US history. Published March 12, 2025. Accessed February 1, 2026. https://www.epa.gov/newsreleases/epa-launches-biggest-deregulatory-action-us-history

8. Chait J. The largest upward transfer of wealth in American history. Atlantic. Published May 22, 2025. Accessed February 1, 2026. https://www.theatlantic.com/ideas/archive/2025/05/big-beautiful-transfer-of-wealth/682885/

9. Liu M, Kadakia KT, Patel VR, Krumholz HM. Characterization of research grant terminations at the National Institutes of Health. JAMA. 2025;334(6):534-536. doi:10.1001/jama.2025.7707

10. Jacobs JW, Booth GS, Brewer NT, Freilich J. Unexplained pauses in Centers for Disease Control and Prevention surveillance: erosion of the public evidence base for health policy. Ann Intern Med. Published online January 27, 2026. doi:10.7326/ANNALS-25-04022

11. DeParle J. Trump administration to stop measuring food insecurity. New York Times. Published September 20, 2025. Accessed January 30, 2026. https://www.nytimes.com/2025/09/20/us/politics/trump-hunger-report-data.html

12. Joselow M. EPA to stop considering lives saved when setting rules on air pollution. New York Times. Published January 12, 2026. Accessed January 30, 2026. https://www.nytimes.com/2026/01/12/climate/trump-epa-air-pollution.html

13. Cavalcanti DM, de Oliveira Ferreira de Sales L, da Silva AF, et al. Evaluating the impact of two decades of USAID interventions and projecting the effects of defunding on mortality up to 2030: a retrospective impact evaluation and forecasting analysis. Lancet. 2025;406(10500):283-294. doi:10.1016/S0140-6736(25)01186-9

Article link: https://www.linkedin.com/posts/perspective-policies-enacted-by-the-trump-share-7451223004711231488-Rcgl?

Why opinion on AI is so divided – MIT Technology Review

Posted by timmreardon on 04/14/2026
Posted in: Uncategorized.

AI power users are pulling away from everyone else.

By Will Douglas Heaven

April 13, 2026

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In an industry that doesn’t stand still, Stanford’s AI Index, an annual roundup of key results and trends, is a chance to take a breath. (It’s a marathon, not a sprint, after all.)

This year’s report, which dropped today, is full of striking stats. A lot of the value comes from having numbers to back up gut feelings you might already have, such as the sense that the US is gunning harder for AI than everyone else: It hosts 5,427 data centers (and counting). That’s more than 10 times as many as any other country.  

There’s also a reminder that the hardware supply chain the AI industry relies on has some major choke points. Here’s perhaps the most remarkable fact: “A single company, TSMC, fabricates almost every leading AI chip, making the global AI hardware supply chain dependent on one foundry in Taiwan.” One foundry! That’s just wild.

But the main takeaway I have from the 2026 AI Index is that the state of AI right now is shot through with inconsistencies. As my colleague Michelle Kim put it today in her piece about the report: “If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock.” (The Stanford report notes that Google DeepMind’s top reasoning model, Gemini Deep Think, scored a gold medal in the International Mathematical Olympiad but is unable to read analog clocks half the time.)

Michelle does a great job covering the report’s highlights. But I wanted to dwell on a question that I can’t shake. Why is it so hard to know exactly what’s going on in AI right now?  

The widest gap seems to be between experts and non-experts. “AI experts and the general public view the technology’s trajectory very differently,” the authors of the AI Index write. “Assessing AI’s impact on jobs, 73% of U.S. experts are positive, compared with only 23% of the public, a 50 percentage point gap. Similar divides emerge with respect to the economy and medical care.”

That’s a huge gap. What’s going on? What do experts know that the public doesn’t? (“Experts” here means US-based researchers who took part in AI conferences in 2023 and 2024.)

I suspect part of what’s going on is that experts and non-experts base their views on very different experiences. “The degree to which you are awed by AI is perfectly correlated with how much you use AI to code,” a software developer posted on X the other day. Maybe that’s tongue-in-cheek, but there’s definitely something to it.

The latest models from the top labs are now better than ever at producing code. Because technical tasks like coding have right or wrong results, it is easier to train models to do them, compared with tasks that are more open-ended. What’s more, models that can code are proving to be profitable, so model makers are throwing resources at improving them.

This means that people who use those tools for coding or other technical work are experiencing this technology at its best. Outside of those use cases, you get more of a mixed bag. LLMs still make dumb mistakes. This phenomenon has become known as the “jagged frontier”: Models are very good at doing some things and less good at others.

The influential AI researcher Andrej Karpathy also had some thoughts. “Judging by my [timeline] there is a growing gap in understanding of AI capability,” he wrote in reply to that X post. He noted that power users (read: people who use LLMs for coding, math, or research) not only keep up to date with the latest models but will often pay $200 a month for the best versions. “The recent improvements in these domains as of this year have been nothing short of staggering,” he continued.

Because LLMs are still improving fast, someone who pays to use Claude Code will in effect be using a different technology from someone who tried using the free version of Claude to plan a wedding six months ago. Those two groups are speaking past each other.

Where does that leave us? I think there are two realities. Yes, AI is far better than a lot of people realize. And yes, it is still pretty bad at a lot of stuff that a lot of people care about (and it may stay that way). Anyone making bets about the future on either side should bear that in mind.

Article link: https://www.technologyreview.com/2026/04/13/1135720/why-opinion-on-ai-is-so-divided/

The Global Healthcare System Is Broken. Japan Fixed It for $4,100 Per Person.

Posted by timmreardon on 04/10/2026
Posted in: Uncategorized.

Let us begin with the Japanese dentist. Not a specific dentist. The concept. In Japan, you make an appointment, you see the dentist, you pay a modest fee that the national insurance has already negotiated, and you go home. The appointment was on time. The equipment was current. The dentist went to medical school without accumulating the debt load of a medium-sized house. This is not a utopia. Japan has its own healthcare problems, mostly involving an extremely old population and a hospital system that is at capacity. But the dentist was on time.

Now let us describe the British experience of the same dentist. There are not enough NHS dentists. This has been true for a decade and is getting worse. In 2023, 26% of people in the UK skipped dental care because of the cost, compared to just 6% in 2013. The people who wanted NHS dental appointments and could not get them because there were no appointments available had a choice: pay privately, which is expensive enough that a significant number of people did not do it, or fly to Turkey. They flew to Turkey. More than 1.2 million people from Europe visit Turkey annually for surgical procedures. Search volume for “cosmetic surgery Turkey” from UK users grew 500% between 2015 and 2023. Turkey’s health tourism revenue reached a record 10 billion dollars in 2024. The NHS then spent up to 20,000 pounds per patient managing the complications that some of them brought home. The NHS is paying for the sequel to a film it refused to produce.

This essay is about the global healthcare system, which is not a single thing but which has a very clear hierarchy of functioning. At the top are countries where healthcare works. At the bottom are countries where it demonstrably does not. And in the middle is a specific group of English-speaking countries that were once considered among the more enlightened models on earth, that made a series of structural decisions in the 1980s and 1990s, and that are now producing wait times, outcomes, and access failures that belong in a much lower tier than their GDP per capita would suggest. They planted the seeds thirty years ago. They are currently eating the harvest and calling it a crisis, which is what happens when you eat the harvest of seeds you planted and are surprised.

The Countries Where It Works. And the Specific Reason It Works There.

Japan’s healthcare system spends approximately 4,100 dollars per capita per year. Life expectancy is 84.6 years, the highest of any large economy. Infant mortality is 1.9 per 1,000 live births, among the lowest on earth. Out-of-pocket spending is 9% of total health expenditure. The system covers 100% of the population through a universal fee-schedule based insurance model. Administrative waste is kept to under 10% of total costs. Japan does not have a particularly simple system; it has multiple insurers, multiple payers, and complex regional structures. But it has one thing that prevents complexity from becoming corruption: the government sets the fees. Every procedure has a nationally negotiated price. Nobody can charge more. Nobody can charge less. The market forces that would otherwise allow prices to detach from reality have been removed from the equation by design.

This is the thing that makes Japan’s model hard to copy and easy to explain. The price is the price. A hospital cannot charge more for an appendectomy because it is the only hospital in the region. A pharmaceutical company cannot charge 40 times more for a drug in Japan than in France because it needs to fund American marketing campaigns. The price is the price. The system has costs, mostly related to an ageing population stretching a fixed infrastructure, but the fundamental mechanism works because the fundamental mechanism is not permitted to be exploited.

Germany spends 12.7% of GDP on healthcare. Japan spends about 11%. The United States spends about 18%. Japan gets a life expectancy of 84.6 years. The US gets 76.4. Germany gets 81.1. These are not small differences. These are years of life per person that are simply not being lived in the country spending the most money. The US healthcare system is the most expensive in the world and produces outcomes that rank 69th globally. This is the same rank as countries with a fraction of the per capita spending. The graph of money versus outcomes for the United States looks like a child drew it.

Singapore is a slightly different model but produces a comparable result. It blends mandatory savings accounts, called MediSave, with government subsidies and a competitive private market. Life expectancy is 83.5 years. Patient satisfaction is 89%. Out-of-pocket spending averages 10% of total health costs, among the lowest for a high-income economy. Administrative waste is under 5%. Singapore ranks first for healthcare efficiency globally on multiple indices. The specific insight Singapore applied is that the government negotiates drug prices directly, which keeps medication costs low, and that the savings account mechanism means people have a personal financial stake in not overusing the system, without creating the American problem where fear of cost prevents people from using it at all. This balance is genuinely difficult to strike and Singapore mostly struck it.

South Korea, Taiwan, and the Netherlands produce similar results through similar philosophies: universal coverage, centrally negotiated prices, strong primary care as the first line, and administrative systems that are designed to serve the patient rather than to serve the administrative system. The Numbeo Healthcare Index for 2026 places Taiwan first, South Korea second, and the Netherlands third. The United States is not in the top 60. This is the country that spends the most money on healthcare, producing outcomes that rank 69th globally, which would be an extraordinary satirical invention if it were not a publicly verifiable fact.

The countries where healthcare works have a common feature. The price is controlled. Not by market forces, which left to their own devices in healthcare produce the American result. By governments that understood the information asymmetry problem, decided to address it rather than celebrate it, and built pricing mechanisms that prevent the extraction of maximum value from someone who is sick and has no alternative. This is not complicated. It required political will. The countries that applied it are, by outcome measurement, better places to have a body.

The English-Speaking Countries. A Separate Category of Avoidable Failure.

There is a specific phenomenon happening in the United Kingdom, Canada, Australia, and New Zealand that deserves its own section, because it is distinct from both the successful models described above and from the American catastrophe that gets most of the attention. These countries built functional universal healthcare systems in the mid-20th century. They built them well. They worked. And then, beginning in the 1980s and accelerating through the 1990s and 2000s, they made a series of structural decisions that have been slowly dismantling what they built while maintaining the name, the logo, and the political rhetoric.

Canada’s median wait time from referral by a general practitioner to specialist consultation to actual treatment in 2025 was 28.6 weeks, the second-longest ever recorded by the Fraser Institute in 30 years of tracking this data. That is seven months. For non-emergency treatment of conditions that are nonetheless affecting your quality of life, potentially your ability to work, potentially your ability to function. Seven months is what Canada produced with its universal healthcare system in 2025. Canada consistently ranks in the bottom tier of OECD countries for wait times despite spending 12 to 13% of GDP on healthcare, which is broadly comparable to Germany and Japan and Switzerland, all of which produce dramatically shorter waits. The money is going somewhere. It is not going to making treatment faster.

The United Kingdom’s National Health Service is a system that inspires genuine love and genuine despair in roughly equal proportions. In 2023, 61% of people in the UK reported waiting more than 4 weeks for a specialist appointment. That number was 14% in 2013. In one decade, a wait that affected one in seven people came to affect three in five. Nineteen percent waited more than a year for non-emergency surgery. The NHS has 121,000 full-time equivalent staff vacancies. It has a 7.5% nursing vacancy rate. It has a dental crisis so severe that it is functionally creating a market for Turkish dentistry. The NHS was not always like this. The NHS spent decades being one of the more functional health systems on earth. The current situation is the result of specific policy decisions about funding, privatisation, outsourcing, and staffing that were made in the 1980s and 1990s and whose consequences are being harvested now.

Australia’s Medicare system is genuinely good by many measures, and Australians are right to value it. Two problems persist nonetheless. The rural access problem is real and documented. And the private insurance two-tier issue, where those who can afford private insurance access a faster and better-equipped system while those who cannot wait longer for public care, is a structural equity problem that has been growing since the Howard government’s incentive structures pushed people toward private insurance in 1997. It is the same incentive, applied at the same moment, by the same ideology, in multiple countries simultaneously. The English-speaking world had a shared political consensus in the 1990s about what healthcare should look like. That consensus has produced shared results. The results are not good.

New Zealand’s system has particular problems with primary care access. The general practitioner visit costs money in New Zealand in ways that it does not in Japan or Germany, which creates the same deterrence effect that the American system produces at scale, just at a lower level of severity. The deterrence means people do not see a doctor when a small problem is manageable. The small problem becomes a large one. The large one is addressed in an emergency department, which is expensive, which creates the same cost spiral that every system experiences when it prices primary care out of reach.

The English-speaking healthcare systems have a common problem that is distinct from the American problem and the Japanese solution. They built the architecture of universal care and then introduced market mechanics into the wrong parts of it. Fee structures, outsourcing, private participation incentives, workforce decisions driven by cost reduction rather than service capacity. The result is systems that are philosophically universal and operationally fragmented. They have the brand of Japan’s model without the discipline. They have the spend of America’s model without the innovation. They have produced a generation of patients who are technically covered and practically waiting.

Turkey. South Korea. The Countries That Saw a Gap and Built an Industry.

Here is the uncomfortable observation about medical tourism. It is not a story about adventurous individuals taking a calculated risk for a cheaper procedure. It is a story about the systematic failure of healthcare systems in wealthy countries creating demand that other countries are efficiently serving. Turkey did not build a 10 billion dollar health tourism industry by accident. It built it by correctly identifying that the UK could not provide timely dental care, that Canadians were waiting seven months for procedures, that Germans and Dutch and Scandinavians could access faster or cheaper options by flying four hours, and that people with money and a health problem and a long waiting list will spend that money somewhere. Turkey simply made itself that somewhere.

The numbers are striking. More than 1.2 million Europeans visited Turkey annually for surgical procedures as of 2024. Turkey’s health tourism revenue of 10 billion dollars in 2024 was generated at an average spend of 5,000 to 10,000 dollars per visitor. These are not budget travellers picking a destination on price alone. These are people with a medical need and the financial capacity to address it, who have calculated that flying to Istanbul and returning in a week is better than waiting seven months in their home country for a procedure that their home country is nominally obligated to provide.

South Korea has built a comparable industry but at the higher end of the quality spectrum. The K-beauty cultural phenomenon, which positioned Korean aesthetics and Korean medical standards as aspirational globally, has created a medical tourism market for cosmetic surgery and aesthetic medicine that attracts patients from China, Japan, Southeast Asia, the Middle East, and increasingly North America and Europe. South Korean clinics offer rhinoplasty, facial contouring, skin treatments, and complex aesthetic procedures with wait times measured in days and costs measured at 50 to 70% below comparable procedures in Western markets. Seoul’s Gangnam district has a higher density of plastic surgery clinics than any comparable area on earth. This is not incidental. It is a deliberately constructed industry responding to genuine global demand.

Malaysia has emerged as the preferred destination for Middle Eastern and South Asian patients who want Western-standard care at Asian prices, close to home, in a Muslim-majority country with English-speaking medical staff. Kuala Lumpur’s private hospital sector is accredited to international standards, priced at a fraction of Singapore or the Gulf’s premium facilities, and offers procedures from cardiac surgery to fertility treatment to oncology at price points that make flying there economically rational even for patients from wealthy countries. Malaysia’s medical tourism sector processed over a million patients in 2024 and continues to grow.

The logic is identical everywhere. A wealthy country’s healthcare system has a gap: in affordability, in access, in waiting time, in the specific procedure offered. Another country identifies the gap, builds the infrastructure to serve it, prices it appropriately, and captures the spend that the first country’s system was unwilling or unable to retain. This is how markets work. The irony is that the countries being bypassed are the ones that most loudly advocate for market solutions to healthcare problems. The market has spoken. The market says go to Istanbul.

The sequel cost is the part that the receiving healthcare systems prefer not to discuss. Bangor University research found that complications from medical tourism cost the NHS between 1,058 and 19,549 pounds per patient, with an average hospital stay for complications of 17 days, the longest being 45 days. The NHS is not only failing to provide timely dental care. It is paying to manage the consequences of that failure when the people it failed go abroad and return with complications. This is the structural logic of a system eating itself. The failure creates demand that goes elsewhere. The elsewhere creates complications. The complications come back. The complications cost more than the original failure would have cost to prevent.

The Philosophical Question That Is Also a Numbers Question.

The standard defence of the NHS, of Canadian Medicare, of Australian Medicare, of every struggling universal system, is that universal coverage is a moral achievement even if the execution is imperfect. This is true. Universal coverage is genuinely a moral achievement. The question worth asking is what universal coverage means when the waiting time for a specialist is seven months, when dental care is effectively inaccessible to a quarter of the population, when 19% of people wait more than a year for non-emergency surgery.

Japan covers 100% of its population. Singapore covers 100% of its population. Germany covers 100% of its population. Taiwan covers 100% of its population. These countries also have better outcomes, shorter wait times, lower costs per capita, and higher patient satisfaction than the English-speaking universal systems. The moral achievement of universal coverage does not require the operational failures that are being treated as inevitable consequences of providing that coverage. They are not inevitable. They are the result of specific decisions about funding, about pricing, about workforce, about the relationship between public and private provision, and about whether the system’s purpose is to serve patients or to serve the administrative and commercial interests that have grown up around patient care.

The United States is the clearest case of what happens when those commercial interests are given full authority. 18% of GDP. 69th globally in outcomes. A prior authorisation system where insurance companies employ physicians whose entire job is to deny the treatments that other physicians have prescribed. Pharmaceutical companies making 40% of their global revenue from 4% of the world’s population. A hospital consolidation process so advanced that private equity has created local monopolies in enough regions that patients have no alternative but to use the overpriced facility that has captured their geography. The US case is not a cautionary tale. It is the result of a deliberate and sustained policy of allowing commercial interests to dominate a market that does not behave like other markets because the consumer cannot shop around while dying.

The English-speaking universal systems are not the American system. But they have been moving toward it incrementally for thirty years. The outsourcing decisions. The private finance initiative contracts that locked public hospitals into expensive long-term arrangements with private providers. The two-tier incentive structures that pushed people toward private insurance. The workforce decisions that capped medical training places to protect income levels. The lobbying by various professional groups that turned healthcare administration into a growth industry while the number of actual clinicians relative to population fell. These are American decisions applied at a smaller scale. They produce American results at a smaller scale.

The countries that solved healthcare did not solve a medical problem. They solved a political economy problem. They decided who would be allowed to profit from illness, to what extent, and in what parts of the system. Japan and Singapore and Taiwan answered that question clearly and consistently. The English-speaking universal systems answered it less clearly, more inconsistently, in response to electoral cycles and industry lobbying rather than system design. The harvest is being eaten now. Seven months in Canada. Turkey teeth. Nineteen percent waiting a year for surgery. It is exactly what was planted.

What the Working Models Tell Us. And Why Nobody Copies Them.

There is a specific frustration in the global healthcare discussion, which is that the working models exist, have existed for decades, produce documented superior outcomes at comparable or lower cost, and are not copied. The reason they are not copied is not that they are too complex to replicate. Japan’s fee schedule model is well documented. Singapore’s 3M system is published. Germany’s regulated multi-payer model is publicly studied. The reason they are not copied is that copying them requires removing the profitability from the parts of the healthcare system that are currently most profitable, which are the parts that do not provide patient care.

The American private insurance industry generates several hundred billion dollars in annual revenue. A significant portion of this revenue is generated by the administrative machinery that exists between the patient and the physician: the prior authorisation system, the claims processing system, the appeals system, the legal system that defends denied claims, the consulting industry that advises on navigating the system. This machinery employs hundreds of thousands of people. It contributes to GDP. It lobbies Congress with approximately 750 million dollars per year to maintain the conditions that justify its existence. Removing it would require replacing it with something simpler, which would be better for patients and worse for the people currently employed to make the system complicated.

The NHS has a different but structurally analogous problem. The market mechanisms introduced into the NHS over thirty years have created internal markets, commissioning groups, procurement systems, outsourcing contracts, and quality monitoring frameworks that employ a large number of people who are not clinicians but whose existence depends on the NHS remaining complex enough to require their services. Simplifying the NHS would require acknowledging that complexity, and the employment it generates, is itself a cost with no patient care return. This is politically painful in a different way from the American political pain but produces the same result: the system that works for patients cannot be built because too many people are employed by the system that does not.

Japan avoided this by building the system before the administrative class had an opportunity to colonise it. Singapore avoided this by building the system with explicit constraints on what private profit could be extracted from it. South Korea avoided it through a combination of strong state architecture and a cultural consensus that healthcare outcomes mattered more than healthcare industry profits. The English-speaking countries built their systems in an era when these protections seemed unnecessary, watched the administrative class arrive and expand, and are now in the position of trying to remove it while it employs enough people to vote.

This is not a counsel of despair. It is a description of the mechanism. The mechanism can be changed. Simple antitrust enforcement in hospital consolidation. Transparent price-setting for common procedures. Workforce expansion to match population growth rather than to protect existing income levels. Primary care investment that reduces emergency department overload. These are not radical interventions. They are the normal maintenance of a system that stopped being maintained. The Japanese health ministry performs this maintenance continuously. The NHS maintenance budget was cut in the name of efficiency. The efficiency delivered 19% waiting more than a year for surgery and a medical tourism market worth 10 billion dollars in Turkey.

The final word belongs to the patient. Not the patient as a statistical unit in a healthcare quality index. The patient who checked their bank account before making the appointment. The Canadian who waited 28.6 weeks. The British person who flew to Istanbul for the teeth the NHS could not treat for months. The Australian in a rural postcode whose nearest specialist is four hours away. The American who did not go to the doctor because the deductible was the rent. These people are not data points in a policy debate. They are the people the system was built for. The gap between what the system says it does and what it delivers to these specific people is where the cynicism earns its place. And where the accountability for the decisions of the 1990s should also live.

Article link: https://www.linkedin.com/pulse/global-healthcare-system-broken-japan-fixed-4100-per-person-alimov-n7xac?

When Not to Use AI – MIT Sloan

Posted by timmreardon on 04/01/2026
Posted in: Uncategorized.

Don’t outsource your human judgment on communication or decisions involving values, relationships, or trust.

Benjamin Laker March 30, 2026

SUMMARY: 

AI may accelerate work, but it can’t lead people. MIT SMR columnist Benjamin Laker suggests leaving the mechanical tasks to artificial intelligence so that you can focus on the meaningful work — the work that requires the management skills at which humans excel. Rely on your own judgment to deliver messages and make decisions involving values, relationships, or trust, he advises. That’s your job as a manager, and a responsibility that should not be outsourced to AI.

AI PROMISES TO MAKE MANAGERS more productive and give them access to more information more quickly. It can draft plans, summarize reports, and even coach you on how to deliver feedback. Yet the same technology that accelerates decision-making can also erode your judgment, if you let it. Rely on artificial intelligence too little, and you miss its advantages. Rely on it too much, and you risk delegating your thinking instead of sharpening it.

Leading well in the age of AI is about balance. You need to know when to let algorithms lighten the load — and when to carry the weight on your own shoulders so your judgment stays strong.

Where AI Helps You Think Faster

AI excels at compressing time. It can scan vast quantities of information, synthesize key points, and produce first drafts of documents or presentations in seconds. Used wisely, AI accelerates the slowest parts of managerial work: gathering data, preparing materials, and finding patterns.

When time is tight, use AI to handle the groundwork so that you can focus on sensemaking. Let it outline a report so that you can spend your energy on the real managerial work: deciding what findings matter, what signals to prioritize, and what the implications are for strategy or next steps. Have it summarize team feedback so that you can concentrate on what action to take. Use it to prepare talking points for a performance review, and then spend your time planning your tone and delivery. This keeps you in the driver’s seat of decisions rather than buried in prep work.

The key is to treat AI’s output as raw material, not finished work. Skim it, shape it, and then make it yours. If you publish or present it exactly as generated, you are not accelerating your thinking — you are bypassing it. The goal is speed with discernment, not speed alone.

Where AI Can Quiet Your Judgment

The danger comes when speed begins to replace scrutiny. AI makes suggestions confidently, even when they are shallow or wrong. It can lull you into skipping the second look you would normally take, which will dull your judgment over time.

The risk is highest when you are making decisions that depend on values, nuance, or relationships — precisely the work that defines good management. AI cannot sense the emotional weight of a change announcement, the politics around a promotion, or the fragility of a struggling employee’s confidence. It will give you an answer with no sense of the human context.

In hiring, for example, AI can short-list resumes in seconds, but it cannot gauge a candidate’s resilience based on how they talk about a setback during an interview. When it comes to strategy development, AI can surface competitive trends, but it cannot sense how your team will emotionally react to a bold new direction. In these moments, your presence matters more than your productivity.

If you notice yourself accepting AI’s outputs without editing them, slow down. Ask yourself: Would I stand by this recommendation if my name were on it alone? Would I say it out loud to someone I respect? Those questions reinsert accountability — and accountability sharpens judgment.

Putting AI in Its Place

You have the opportunity to make deliberate choices about how and when artificial intelligence can best serve you and your team. Here are three ways to make the most of AI — and your own skills.

Automate Tasks, Not Trust

A practical way to stay balanced is to divide your work into tasks and trust. Tasks are the repeatable processes that benefit from speed. Trust is the human currency of management — the beliefs, emotions, and loyalties that bind a team together.

Use AI on tasks. Let it draft timelines, crunch numbers, or generate slides. Do not use it where trust is paramount. Deliver feedback yourself. Write the opening paragraph of a promotion announcement in your own voice. Decide when to change a goal or approve a hire with your own mind engaged, not on autopilot.

This distinction keeps AI working as your tool, not your proxy. It does the mechanical work while you do the meaningful work.

Consider your weekly team meeting. AI can help you build the agenda, surface metrics, and compile questions from your team’s project boards. But the tone of that meeting — whether people feel heard, valued, and motivated — is yours alone to create. No algorithm can do that for you. When trust is at stake, resist the urge to outsource.

Use AI to Widen Perspective, Not Narrow It

Another trap is using AI only to confirm what you already believe. Because these tools are designed to be agreeable, they will happily produce arguments that support your instincts. This can make you feel more decisive while actually limiting the options you consider.

When trust is at stake, resist the urge to outsource.

To avoid getting stuck in your own ideas, occasionally instruct AI to make a counterargument to your preferred option. If you are leaning toward reorganizing a team, ask for reasons not to. If you are ready to approve a budget, ask for the strongest case to reject it. This will force you to confront counterarguments before you commit — and it protects you from becoming overly certain about a decision simply because a machine echoed your view.

The best managers use AI to challenge their thinking, not to cushion it. They treat it as a sparring partner, not a cheerleader.

Build a Personal Guardrail

Even experienced managers can slip from using AI wisely to leaning on it too heavily. The shift is subtle — and it often feels like efficiency. To prevent that, build a simple guardrail: Track how much of your day involves thinking that you could not delegate. Ask yourself: Did I use AI to enhance my thinking or replace it? Did I exercise my judgment critically, or did I accept recommendations more automatically? These questions force you to notice the slope before you slide.

Some leaders set time blocks for “AI-free thinking” each week — no prompts, no tools, just unstructured reflection. Others limit AI use to specific tasks and keep a manual list of decisions where they want to feel the full weight of responsibility. Whatever method you choose, the point is to keep drawing on your own judgment and critical thinking.


Thriving in the AI era does not mean adopting it fastest but remaining unmistakably human while using it. AI can accelerate your work, but it cannot care. It can generate options, but it cannot hold responsibility. That is your job — and the more AI can do for you, the more deliberate you must be about what you still do yourself. Let the machine do the lifting, not the leading.

Article link: https://sloanreview.mit.edu/article/when-not-to-use-ai/

There are more AI health tools than ever—but how well do they work? – MIT Technology Review

Posted by timmreardon on 03/30/2026
Posted in: Uncategorized.

Specialized chatbots might make a difference for people with limited health-care access. Without more testing, we don’t know if they’ll help or harm.

By Grace Huckins

March 30, 2026

Earlier this month, Microsoft launched Copilot Health, a new space within its Copilot app where users will be able to connect their medical records and ask specific questions about their health. A couple of days earlier, Amazon had announced that Health AI, an LLM-based tool previously restricted to members of its One Medical service, would now be widely available. These products join the ranks of ChatGPT Health, which OpenAI released back in January, and Anthropic’s Claude, which can access user health records if granted permission. Health AI for the masses is officially a trend.

There’s a clear demand for chatbots that provide health advice, given how hard it is for many people to access it through existing medical systems. And some research suggests that current LLMs are capable of making safe and useful recommendations. But researchers say that these tools should be more rigorously evaluated by independent experts, ideally before they are widely released. 

In a high-stakes area like health, trusting companies to evaluate their own products could prove unwise, especially if those evaluations aren’t made available for external expert review. And even if the companies are doing quality, rigorous research—which some, including OpenAI, do seem to be—they might still have blind spots that the broader research community could help to fill.

“To the extent that you always are going to need more health care, I think we should definitely be chasing every route that works,” says Andrew Bean, a doctoral candidate at the Oxford Internet Institute. “It’s entirely plausible to me that these models have reached a point where they’re actually worth rolling out.”

“But,” he adds, “the evidence base really needs to be there.”

Tipping points 

To hear developers tell it, these health products are now being released because large language models have indeed reached a point where they can effectively provide medical advice. Dominic King, the vice president of health at Microsoft AI and a former surgeon, cites AI advancement as a core reason why the company’s health team was formed, and why Copilot Health now exists. “We’ve seen this enormous progress in the capabilities of generative AI to be able to answer health questions and give good responses,” he says.

But that’s only half the story, according to King. The other key factor is demand. Shortly before Copilot Health was launched, Microsoft published a report, and an accompanying blog post, detailing how people used Copilot for health advice. The company says it receives 50 million health questions each day, and health is the most popular discussion topic on the Copilot mobile app.

Other AI companies have noticed, and responded to, this trend. “Even before our health products, we were seeing just a rapid, rapid increase in the rate of people using ChatGPT for health-related questions,” says Karan Singhal, who leads OpenAI’s Health AI team. (OpenAI and Microsoft have a long-standing partnership, and Copilot is powered by OpenAI’s models.)

It’s possible that people simply prefer posing their health problems to a nonjudgmental bot that’s available to them 24-7. But many experts interpret this pattern in light of the current state of the health-care system. “There is a reason that these tools exist and they have a position in the overall landscape,” says Girish Nadkarni, chief AI officer at the Mount Sinai Health System. “That’s because access to health care is hard, and it’s particularly hard for certain populations.”

The virtuous vision of consumer-facing LLM health chatbots hinges on the possibility that they could improve user health while reducing pressure on the health-care system. That might involve helping users decide whether or not they need medical attention, a task known as triage. If chatbot triage works, then patients who need emergency care might seek it out earlier than they would have otherwise, and patients with more mild concerns might feel comfortable managing their symptoms at home with the chatbot’s advice rather than unnecessarily busying emergency rooms and doctor’s offices.

But a recent, widely discussed study from Nadkarni and other researchers at Mount Sinai found that ChatGPT Health sometimes recommends too much care for mild conditions and fails to identify emergencies. Though Singhal and some other experts have suggested that its methodology might not provide a complete picture of ChatGPT Health’s capabilities, the study has surfaced concerns about how little external evaluation these tools see before being released to the public.

Most of the academic experts interviewed for this piece agreed that LLM health chatbots could have real upsides, given how little access to health care some people have. But all six of them expressed concerns that these tools are being launched without testing from independent researchers to assess whether they are safe. While some advertised uses of these tools, such as recommending exercise plans or suggesting questions that a user might ask a doctor, are relatively harmless, others carry clear risks. Triage is one; another is asking a chatbot to provide a diagnosis or a treatment plan. 

The ChatGPT Health interface includes a prominent disclaimer stating that it is not intended for diagnosis or treatment, and the announcements for Copilot Health and Amazon’s Health AI include similar warnings. But those warnings are easy to ignore. “We all know that people are going to use it for diagnosis and management,” says Adam Rodman, an internal medicine physician and researcher at Beth Israel Deaconess Medical Center and a visiting researcher at Google.

Medical testing

Companies say they are testing the chatbots to ensure that they provide safe responses the vast majority of the time. OpenAI has designed and released HealthBench, a benchmark that scores LLMs on how they respond in realistic health-related conversations—though the conversations themselves are LLM-generated. When GPT-5, which powers both ChatGPT Health and Copilot Health, was released last year, OpenAI reported the model’s HealthBench scores: It did substantially better than previous OpenAI models, though its overall performance was far from perfect. 

But evaluations like HealthBench have limitations. In a study published last month, Bean—the Oxford doctoral candidate—and his colleagues found that even if an LLM can accurately identify a medical condition from a fictional written scenario on its own, a non-expert user who is given the scenario and asked to determine the condition with LLM assistance might figure it out only a third of the time. If they lack medical expertise, users might not know which parts of a scenario—or their real-life experience—are important to include in their prompt, or they might misinterpret the information that an LLM gives them.

Bean says that this performance gap could be significant for OpenAI’s models. In the original HealthBench study, the company reported that its models performed relatively poorly in conversations that required them to seek more information from the user. If that’s the case, then users who don’t have enough medical knowledge to provide a health chatbot with the information that it needs from the get-go might get unhelpful or inaccurate advice.

Singhal, the OpenAI health lead, notes that the company’s current GPT-5 series of models, which had not yet been released when the original HealthBench study was conducted, do a much better job of soliciting additional information than their predecessors. However, OpenAI has reported that GPT-5.4, the current flagship, is actually worse at seeking context than GPT-5.2, an earlier version.

Ideally, Bean says, health chatbots would be subjected to controlled tests with human users, as they were in his study, before being released to the public. That might be a heavy lift, particularly given how fast the AI world moves and how long human studies can take. Bean’s own study used GPT-4o, which came out almost a year ago and is now outdated. 

Earlier this month, Google released a study that meets Bean’s standards. In the study, patients discussed medical concerns with the company’s Articulate Medical Intelligence Explorer (AMIE), a medical LLM chatbot that is not yet available to the public, before meeting with a human physician. Overall, AMIE’s diagnoses were just as accurate as physicians’, and none of the conversations raised major safety concerns for researchers. 

Despite the encouraging results, Google isn’t planning to release AMIE anytime soon. “While the research has advanced, there are significant limitations that must be addressed before real-world translation of systems for diagnosis and treatment, including further research into equity, fairness, and safety testing,” wrote Alan Karthikesalingam, a research scientist at Google DeepMind, in an email. Google did recently reveal that Health100, a health platform it is building in partnership with CVS, will include an AI assistant powered by its flagship Gemini models, though that tool will presumably not be intended for diagnosis or treatment.

Rodman, who led the AMIE study with Karthikesalingam, doesn’t think such extensive, multiyear studies are necessarily the right approach for chatbots like ChatGPT Health and Copilot Health. “There’s lots of reasons that the clinical trial paradigm doesn’t always work in generative AI,” he says. “And that’s where this benchmarking conversation comes in. Are there benchmarks [from] a trusted third party that we can agree are meaningful, that the labs can hold themselves to?”

The key there is “third party.” No matter how extensively companies evaluate their own products, it’s tough to trust their conclusions completely. Not only does a third-party evaluation bring impartiality, but if there are many third parties involved, it also helps protect against blind spots.

OpenAI’s Singhal says he’s strongly in favor of external evaluation. “We try our best to support the community,” he says. “Part of why we put out HealthBench was actually to give the community and other model developers an example of what a very good evaluation looks like.” 

Given how expensive it is to produce a high-quality evaluation, he says, he’s skeptical that any individual academic laboratory would be able to produce what he calls “the one evaluation to rule them all.” But he does speak highly of efforts that academic groups have made to bring preexisting and novel evaluations together into comprehensive evaluation suites — such as Stanford’s MedHELM framework, which tests models on a wide variety of medical tasks. Currently, OpenAI’s GPT-5 holds the highest MedHELM score.

Nigam Shah, a professor of medicine at Stanford University who led the MedHELM project, says it has limitations. In particular, it evaluates only individual chatbot responses, but someone who’s seeking medical advice from a chatbot tool might engage it in a multi-turn, back-and-forth conversation. He says that he and some collaborators are gearing up to build an evaluation that can score those complex conversations, but that it will take time and money. “You and I have zero ability to stop these companies from releasing [health-oriented products], so they’re going to do whatever they damn please,” he says. “The only thing people like us can do is find a way to fund the benchmark.”

No one interviewed for this article argued that health LLMs need to perform perfectly on third-party evaluations in order to be released. Doctors themselves make mistakes—and for someone who has only occasional access to a doctor, a consistently accessible LLM that sometimes messes up could still be a huge improvement over the status quo, as long as its errors aren’t too grave. 

With the current state of the evidence, however, it’s impossible to know for sure whether the currently available tools do in fact constitute an improvement, or whether their risks outweigh their benefits.

Article link: https://www.technologyreview.com/2026/03/30/1134795/there-are-more-ai-health-tools-than-ever-but-how-well-do-they-work/
