healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

The Future of EHR: Oracle Health vs. Epic Systems – A 10-Year Forecast (2025-2035)

Posted by timmreardon on 09/14/2025
Posted in: Uncategorized.

The electronic health record (EHR) market is poised for transformative change over the next decade, driven by artificial intelligence (AI) integration, cloud migration, and interoperability demands. Oracle Health (formerly Cerner) and Epic Systems, the two dominant players, are pursuing divergent strategies that will reshape healthcare technology. Oracle leverages its cloud infrastructure, AI capabilities, and enterprise expertise to create an integrated health ecosystem, while Epic relies on its deep EHR experience, market dominance, and standardized workflows. By 2035, Oracle Health is predicted to surpass Epic in market influence and technological leadership due to its vast resources, cloud-native architecture, and visionary approach to healthcare transformation. The global EHR market, valued at $35.2 billion in 2024, is expected to reach $62.7 billion by 2035, growing at a CAGR of 5.4% .

Current Market Position (2025 Baseline)

Epic’s Dominance: Epic currently holds 42.3% of the U.S. acute-care hospital EHR market and supports 54.9% of beds. It serves 190 million patients through its MyChart portal and has installations in 3,700 hospitals and 45,000 clinics globally . Oracle Health holds 23.4% of the acute-care hospital market, with 108 million patient records. It added 66 new hospitals from 2022-23 and 9,279 beds from 2023-24 . Epic’s 2023 revenue was $4.9 billion (6.5% YoY growth), while Oracle Health contributed $5.9 billion to Oracle’s total revenue, with expected growth of 4-6%.

Technology and Innovation Trajectory

Oracle’s AI Advantage: Oracle is building a voice-first, AI-native EHR with ambient listening, semantic knowledge graphs, and live data integration. Its AI agents automate prior authorizations, documentation, and coding, reducing physician documentation time by ~30% . Oracle’s AI is “built-in, not bolted-on,” leveraging its cloud infrastructure and semantic database for real-time adjustments without model retraining . Epic’s AI Approach: Epic has 160-200 AI projects underway, including ambient documentation and AI assistants.

Oracle Cloud Infrastructure (OCI) provides a scalable, secure foundation for EHR deployment. Oracle’s cloud-based ambulatory EHR allows clinics to go live within hours without formal training . By 2030, 80% of Oracle Health installations will be cloud-native. Epic partnered with Google Cloud in 2020 but remains primarily on-premises with cloud options. Its legacy architecture may hinder rapid innovation and scalability . Market Shift: Cloud-based EHRs will dominate by 2030 due to cost-effectiveness, scalability, and easier maintenance .

Oracle emphasizes open APIs and interoperability, connecting 130 EHRs, 120 payer sources, and 345 data systems. It is in the final stages of becoming a Qualified Health Information Network (QHIN) . Epic’s Care Everywhere network facilitates 220 million patient record exchanges monthly, but it struggles with third-party integrations. Government mandates (e.g., TEFCA) will force broader data sharing, benefiting Oracle’s approach.

Market Expansion and Growth Areas

Epic currently serves 16 countries, while Oracle Health is focusing on international growth through cloud-based deployments. By 2035, Asia-Pacific and Latin America will be key growth regions due to rising healthcare digitization . Oracle’s $16 billion VA contract provides a stable foundation for federal expansion, while Epic dominates academic medical centers . Oracle’s new ambulatory EHR targets primary care and pediatrics, with acute-care functionality expected by mid-2026 . Its modularity appeals to small and mid-sized practices. Epic excels in large health systems and academic medical centers, with comprehensive specialty modules (e.g., oncology, cardiology) . Oracle’s $513 billion market cap and massive R&D budget (likely exceeding Epic’s annual revenue) enable long-term bets on AI and cloud infrastructure . Epic’s private status limits its capital access despite steady revenue growth. Epic’s implementations are costlier ($16 million for a mid-sized hospital) and take 12-24 months. Oracle’s cloud-based solutions offer faster deployment and lower upfront costs . By 2030, Oracle’s cloud model will reduce TCO for most providers, while Epic’s standardized approach may remain preferable for large systems with resources for customization .

Regulatory and Security Challenges

Data Privacy: Both vendors must navigate evolving regulations (GDPR, HIPAA). Oracle’s enterprise security expertise gives it an edge in global compliance . Oracle supports government-driven interoperability initiatives and cloud security infrastructure will be critical as cyber threats increase .

Predictions for 2035

Oracle Health will surpass Epic in acute-care market share by 2030, reaching ~40% by 2035, driven by cloud migration and federal contracts. Epic will maintain stronghold in academic medical centers but lose ground in community hospitals and ambulatory settings to Oracle’s cost-effective solutions.

AI-Driven Healthcare: Oracle’s AI agents will automate ~50% of administrative tasks (e.g., prior auth, documentation), reducing clinician burnout. Oracle will lead a national health data network connecting EHRs, payers, labs, and patients, while Epic’s ecosystem remains more closed. Beyond EHR: Oracle will expand into pharmaceutical research, supply chain management, and value-based care administration, addressing the entire $16 trillion healthcare industry .

Consumerization of Health: Patients will expect Amazon-like experiences through AI-powered portals (e.g., Oracle’s conversational AI for patient education). Oracle’s open APIs will drive international data exchange standards, while Epic focuses on US-centric optimization. Tech giants (e.g., Google, Amazon) may enter the EHR space, but Oracle’s healthcare-specific expertise will provide a defensive moat.

Strategic Risks and Challenges

Oracle’s Integration Challenges: Successful integration of Cerner’s culture and technology remains critical. Oracle must avoid alienating existing Cerner customers during transitions. Epic’s resistance to open interoperability and slower cloud adoption may erode its long-term position. However, its deep EHR expertise and customer loyalty provide resilience .

Regulatory Uncertainty: Changing healthcare policies could disrupt both vendors’ strategies, particularly regarding data sharing and privacy.

The EHR landscape by 2035 is poised for significant transformation, driven by cloud, AI, and interoperability. In this evolving market, Oracle Health is positioned to become a dominant force, challenging the current status quo through several key advantages:

  A Modern, Integrated Stack: Its cloud-native, AI-driven architecture is built for continuous innovation and scalability.

Strategic Investment Capacity: Oracle’s vast financial resources allow for aggressive R&D and strategic acquisitions to accelerate capabilities.

· Alignment with Regulatory Trends: Its open interoperability approach and standards-based framework align with federal directives promoting data exchange.

· A Comprehensive Ecosystem Vision: Oracle aims to address the entire healthcare continuum, from the EHR to backend financial and data operations.

While Epic will undoubtedly remain a powerful and trusted competitor—particularly for large, complex health systems—it faces the challenge of evolving its historically more closed ecosystem and on-premises heritage to avoid being perceived as a legacy platform.

The future will likely be a platform-based ecosystem, where Oracle’s underlying cloud infrastructure and enterprise integration expertise could provide a compelling alternative for organizations prioritizing open data and AI agility.

Strategic Insight for Healthcare Organizations

This prediction isn’t about one vendor “winning,” but about recognizing a fundamental shift in what defines a competitive EHR. The choice is evolving from evaluating features to selecting a strategic technology partner.

Here’s how organizations should evaluate their long-term strategy:

 1. Interoperability is Non-Negotiable: The future is open data. Prioritize partners whose architecture is built on FHIR and open APIs as a core principle, not an add-on. This is your key to future-proofing and innovation.

Evaluate Total Cost of Ownership (TCO), Not Just Initial Cost: A cloud-native solution may offer a different financial model (operational expense vs. capital expense) with savings on hardware, maintenance, and internal IT support. Model this over a 10-year horizon.

Assess Your Own AI & Innovation Strategy: Your EHR platform is the foundation for your AI future. Do you want a closed, curated app store (Epic’s App Orchard) or an open platform where you can more easily integrate best-in-breed and proprietary AI tools? Your answer will guide your choice.

Understand the “Platform Play”: Oracle isn’t just selling an EHR; it’s selling an integrated stack (database, cloud infrastructure, analytics, EHR). The potential efficiency of this integration is their core argument. Evaluate if this bundled approach offers more value than assembling best-of-breed solutions yourself.

Culture is a Feature: Consider the vendor’s culture. Do you prefer a client-driven, controlled evolution (Epic) or a fast-moving, aggressive, acquisition-driven approach (Oracle)? Each has strengths and risks.

 The Bottom Line: The market is expanding beyond a single choice. Epic may remain the safe, superior choice for large AMCs seeking a proven, integrated clinical record. Oracle may become the preferred partner for those betting on a fully integrated data and AI stack or operating in government and new care models. The “right” platform will depend entirely on an organization’s specific resources, technical strategy, and vision for the future of care delivery.

Article link: https://www.linkedin.com/pulse/future-ehr-oracle-health-vs-epic-systems-10-year-forecast-meneses-pp1cc?

The ‘godfather of AI’ says it will create ‘massive’ unemployment, make the rich richer, and rob people of their dignity – Business Insider

Posted by timmreardon on 09/08/2025
Posted in: Uncategorized.

By Thibault Spirlet

Sep 8, 2025, 5:49 AM ET

  • Geoffrey Hinton warns AI will cause “massive unemployment” and make the rich even richer.
  • Sam Altman and Elon Musk have backed a universal basic income as a cushion against job losses.
  • But Hinton says UBI won’t solve the deeper problem of the dignity and worth people get from their jobs.

Geoffrey Hinton helped invent the technology behind ChatGPT. Now he’s warning it could destroy the very jobs it was meant to enhance.

“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton, who is often referred to as the “godfather of AI,” told the Financial Times last Friday.

“It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer.”

Hinton, who won the Nobel Prize for his pioneering work on neural networks and spent a decade at Google before leaving in 2023, said the disruption is less about the technology itself than the system it operates within.

“That’s not AI’s fault,” he said, instead blaming the “capitalist system.”

The 77-year-old researcher also dismissed ideas like a universal basic income as a solution, arguing that a cash stipend wouldn’t address the loss of dignity people derive from their jobs.

Universal basic income “won’t deal with human dignity,” he said, adding that people get their worth from their jobs.

Not everyone is so pessimistic

Not all tech leaders share Hinton’s bleak view on the future.

OpenAI CEO Sam Altman has long pitched a universal basic income as a cushion against job losses, even funding one of the largest UBI trials in the US.

Elon Musk has echoed those calls, telling an audience at VivaTech last year that in a benign AI future, “probably none of us will have a job” — but universal income could let humans pursue meaning while machines handle work.

The investor Vinod Khosla has gone further, predicting that AI will perform 80% of the work in 80% of jobs. That, he argues, will slash the value of human labor and make UBI “crucial” to prevent a surge in inequality.

Anthropic CEO Dario Amodei, meanwhile, has called UBI just “a small part” of the solution, warning that society will need to invent entirely new systems to manage the shift.

Hinton isn’t convinced. While he’s previously advised the UK government to explore UBI, he now says cash payments won’t replace the sense of dignity people derive from their work.

Having lost two wives to cancer, he still hopes AI delivers breakthroughs in healthcare and education.

But beyond that, he believes the technology is more likely to erode livelihoods than uplift them.

“We are at a point in history where something amazing is happening,” he said, “and it may be amazingly good, and it may be amazingly bad.”

Article link: https://www.businessinsider.com/geoffrey-hinton-warns-ai-will-cause-mass-unemployment-and-inequality-2025-9

Prayer for Our Nation

Posted by timmreardon on 08/26/2025
Posted in: Uncategorized.

Grant us Grace and Guide us through these perilous times.

The 15 Diseases of Leadership, According of to Pope Francis

https://hbr.org/2015/04/the-15-diseases-of-leadership-according-to-pope-francis

Why South Korea’s AI rollback in classrooms is a cautionary tale for the US

Posted by timmreardon on 08/22/2025
Posted in: Uncategorized.

By Ayelet Sheffey

Aug 22, 2025, 3:43 AM ET

  • South Korea rolled back an initiative to use AI textbooks in classrooms.
  • It followed pushback from teachers, who said they didn’t have sufficient preparation.
  • While there’s a similar AI push in the US, evidence is lacking on whether it best supports student outcomes.

Humans have revolted against the machine in South Korea — and, in this battle, they’ve won.

Following pushback from teachersand parents, South Korea’s National Assembly on August 4 passed an amendment to an education bill that stripped previously sanctioned AI textbooks of their legal status as official classroom textbooks, and reclassified them as supplementary educational materials.

The Korean Federation of Teachers’ Associations said that while teachers “are not opposed to digital education innovation,” rolling out the textbooks without proper preparation and evaluation actually increased some teachers’ workloads.

The US should take note, said Alex Kotran, the founder and CEO of the AI Education Project, a nonprofit aimed at advancing AI literacy. He said the rollback of AI textbooks and the fact that teachers were involved in the pushback were “totally unsurprising.”

“Research shows that you’re going to get the best outcomes in teacher-centered classrooms, and anything that’s trying to move too quickly, focus on just the technology, without the adequate support for professional learning and development risks undermining that,” Kotran said.

The debate comes as US schools experiment with how best to use AI to fulfill their promise of more personalized learning. The Trump administration supports a public-private approach to increasing the use of the tech in education, but critics maintain that schools should be careful, given the minimal evidence on AI and student achievement, and that teacher training is key.

That’s not to say that there isn’t a place for AI, Kotran said — helping students learn AI skills will equip them for the workforce, where AI is being increasingly used in some fields. But there isn’t extensive evidence that having students learn solely from AI is the best approach.

“The bigger question is, how do you make sure the students are ready to add economic value in the labor market? And it’s not just using AI, it’s actually durable skills like the ability to communicate, problem solve, it’s critical thinking,” Kotran said. “And to build those skills, these are teacher-centered endeavors.”

The role of AI in US education

A survey released by the Korean Federation of Teachers’ Associations in July found that 87.4% of teachers reported a lack of preparation and support for using the textbook materials. The majority of respondents said that they should be allowed to choose how to use the AI textbooks to best suit their needs.

The association added in a press release that it supports efforts to advance AI usage in classrooms, but “we must not be absorbed in introducing technology while ignoring the voices of teachers.”

Some US teachers are concerned. In April, President Donald Trump signed an executive order to establish an AI task force that will establish “public-private partnerships” with AI industry organizations to promote AI literacy in K-12 classrooms. The order also called for government agencies to look into redirecting funding toward AI efforts.

Randi Weingarten, president of the American Federation of Teachers, said in a statement that the order “should be rejected in favor of what the research says works best: investing in classrooms and instruction designed by educators who work directly with students and who have the knowledge and expertise to meet their needs.

Amid concerns about AI adoption, however, some teachers have experienced positive outcomes with incorporating the technology. In an April survey of over 2,000 teachers, Gallup and the Walton Family Foundation found that among the teachers who use AI tools, 64% of respondents said that AI led to higher-quality modifications to student materials, and 61% said it helped them generate better insights on student learning and achievement.

Still, the report said that “no clear consensus exists on whether AI tools should be used in K-12 schools.”

Without comprehensive data on student outcomes using AI, it’s important to approach the topic with a focus on teacher training, not removing teachers from the equation, Kotran said. He added that, at the same time, educators and policymakers need to consider “the freight train that is barreling towards us in terms of job displacement.”

A JPMorgan analyst said there’s an increased risk that AI could replace white-collar jobs, potentially resulting in a “jobless recovery” in which that group is at higher risk of unemployment. Tech leaders are already warning of white-collar job cuts due to AI, and Kotran said the US should take this into account as Gen Zers continue to pursue those careers.

“When it comes to education, the AI just isn’t good enough to replace teachers yet,” Kotran said. “And it’s a bad bet as a school, you’re basically saying, ‘Well, we assume the technology is going to get better and we’re going to somehow be able to get past all of the downside risks of overrelying on AI.’ These are unknown things. It’s a huge, huge risk to take. And if you’re a parent, do you really want to experiment on your kid?”

Article link: https://www.businessinsider.com/ai-in-school-south-korea-textbook-rollback-jobs-education-2025-8

China built hundreds of AI data centers to catch the AI boom. Now many stand unused – MIT Technology Review

Posted by timmreardon on 08/21/2025
Posted in: Uncategorized.


The country poured billions into AI infrastructure, but the data center gold rush is unraveling as speculative investments collide with weak demand and DeepSeek shifts AI trends.

By Caiwei Chen

March 26, 2025

A year or so ago, Xiao Li was seeing floods of Nvidia chip deals on WeChat. A real estate contractor turned data center project manager, he had pivoted to AI infrastructure in 2023, drawn by the promise of China’s AI craze. 

At that time, traders in his circle bragged about securing shipments of high-performing Nvidia GPUs that were subject to US export restrictions. Many were smuggled through overseas channels to Shenzhen. At the height of the demand, a single Nvidia H100 chip, a kind that is essential to training AI models, could sell for up to 200,000 yuan ($28,000) on the black market. 

Now, his WeChat feed and industry group chats tell a different story. Traders are more discreet in their dealings, and prices have come back down to earth. Meanwhile, two data center projects Li is familiar with are struggling to secure further funding from investors who anticipate poor returns, forcing project leads to sell off surplus GPUs. “It seems like everyone is selling, but few are buying,” he says.

Just months ago, a boom in data center construction was at its height, fueled by both government and private investors. However, many newly built facilities are now sitting empty. According to people on the ground who spoke to MIT Technology Review—including contractors, an executive at a GPU server company, and project managers—most of the companies running these data centers are struggling to stay afloat. The local Chinese outlets Jiazi Guangnianand 36Kr report that up to 80% of China’s newly built computing resources remain unused.

Renting out GPUs to companies that need them for training AI models—the main business model for the new wave of data centers—was once seen as a sure bet. But with the rise of DeepSeek and a sudden change in the economics around AI, the industry is faltering.

“The growing pain China’s AI industry is going through is largely a result of inexperienced players—corporations and local governments—jumping on the hype train, building facilities that aren’t optimal for today’s need,” says Jimmy Goodrich, senior advisor for technology to the RAND Corporation. 

The upshot is that projects are failing, energy is being wasted, and data centers have become “distressed assets” whose investors are keen to unload them at below-market rates. The situation may eventually prompt government intervention, he says: “The Chinese government is likely to step in, take over, and hand them off to more capable operators.”

A chaotic building boom

When ChatGPT exploded onto the scene in late 2022, the response in China was swift. The central government designated AI infrastructure as a national priority, urging local governments to accelerate the development of so-called smart computing centers—a term coined to describe AI-focused data centers.

In 2023 and 2024, over 500 new data center projects were announced everywhere from Inner Mongolia to Guangdong, according to KZ Consulting, a market research firm. According to the China Communications Industry Association Data Center Committee, a state-affiliated industry association, at least 150 of the newly built data centers were finished and running by the end of 2024. State-owned enterprises, publicly traded firms, and state-affiliated funds lined up to invest in them, hoping to position themselves as AI front-runners. Local governments heavily promoted them in the hope they’d stimulate the economy and establish their region as a key AI hub. 

However, as these costly construction projects continue, the Chinese frenzy over large language models is losing momentum. In 2024 alone, over 144 companies registered with the Cyberspace Administration of China—the country’s central internet regulator—to develop their own LLMs. Yet according to the Economic Observer, a Chinese publication, only about 10% of those companies were still actively investing in large-scale model training by the end of the year.

China’s political system is highly centralized, with local government officials typically moving up the ranks through regional appointments. As a result, many local leaders prioritize short-term economic projects that demonstrate quick results—often to gain favor with higher-ups—rather than long-term development. Large, high-profile infrastructure projects have long been a tool for local officials to boost their political careers.

The post-pandemic economic downturn only intensified this dynamic. With China’s real estate sector—once the backbone of local economies—slumping for the first time in decades, officials scrambled to find alternative growth drivers. In the meantime, the country’s once high-flying internet industry was also entering a period of stagnation. In this vacuum, AI infrastructure became the new stimulus of choice.

“AI felt like a shot of adrenaline,” says Li. “A lot of money that used to flow into real estate is now going into AI data centers.”

By 2023, major corporations—many of them with little prior experience in AI—began partnering with local governments to capitalize on the trend. Some saw AI infrastructure as a way to justify business expansion or boost stock prices, says Fang Cunbao, a data center project manager based in Beijing. Among them were companies like Lotus, an MSG manufacturer, and Jinlun Technology, a textile firm—hardly the names one would associate with cutting-edge AI technology.

This gold-rush approach meant that the push to build AI data centers was largely driven from the top down, often with little regard for actual demand or technical feasibility, say Fang, Li, and multiple on-the-ground sources, who asked to speak anonymously for fear of political repercussions. Many projects were led by executives and investors with limited expertise in AI infrastructure, they say. In the rush to keep up, many were constructed hastily and fell short of industry standards. 

“Putting all these large clusters of chips together is a very difficult exercise, and there are very few companies or individuals who know how to do it at scale,” says Goodrich. “This is all really state-of-the-art computer engineering. I’d be surprised if most of these smaller players know how to do it. A lot of the freshly built data centers are quickly strung together and don’t offer the stability that a company like DeepSeek would want.”

To make matters worse, project leaders often relied on middlemen and brokers—some of whom exaggerated demand forecasts or manipulated procurement processes to pocket government subsidies, sources say. 

By the end of 2024, the excitement that once surrounded China’s data center boom was  curdling into disappointment. The reason is simple: GPU rental is no longer a particularly  lucrative business.

The DeepSeek reckoning

The business model of data centers is in theory straightforward: They make money by renting out GPU clusters to companies that need computing capacity for AI training. In reality, however, securing clients is proving difficult. Only a few top tech companies in China are now drawing heavily on computing power to train their AI models. Many smaller players have been giving up on pretraining their models or otherwise shifting their strategy since the rise of DeepSeek, which broke the internet with R1, its open-source reasoning model that matches the performance of ChatGPT o1 but was built at a fraction of its cost. 

“DeepSeek is a moment of reckoning for the Chinese AI industry. The burning question shifted from ‘Who can make the best large language model?’ to ‘Who can use them better?’” says Hancheng Cao, an assistant professor of information systems at Emory University. 

The rise of reasoning models like DeepSeek’s R1 and OpenAI’s ChatGPT o1 and o3 has also changed what businesses want from a data center. With this technology, most of the computing needs come from conducting step-by-step logical deductions in response to users’ queries, not from the process of training and creating the model in the first place. This reasoning process often yields better results but takes significantly more time. As a result, hardware with low latency (the time it takes for data to pass from one point on a network to another) is paramount. Data centers need to be located near major tech hubs to minimize transmission delays and ensure access to highly skilled operations and maintenance staff. 

This change means many data centers built in central, western, and rural China—where electricity and land are cheaper—are losing their allure to AI companies. In Zhengzhou, a city in Li’s home province of Henan, a newly built data center is even distributing free computing vouchers to local tech firms but still struggles to attract clients. 

Additionally, a lot of the new data centers that have sprung up in recent years were optimized for pretraining workloads—large, sustained computations run on massive data sets—rather than for inference, the process of running trained reasoning models to respond to user inputs in real time. Inference-friendly hardware differs from what’s traditionally used for large-scale AI training. 

GPUs like Nvidia H100 and A100 are designed for massive data processing, prioritizing speed and memory capacity. But as AI moves toward real-time reasoning, the industry seeks chips that are more efficient, responsive, and cost-effective. Even a minor miscalculation in infrastructure needs can render a data center suboptimal for the tasks clients require.

In these circumstances, the GPU rental price has dropped to an all-time low. A recent report from the Chinese media outlet Zhineng Yongxian said that an Nvidia H100 server configured with eight GPUs now rents for 75,000 yuan per month, down from highs of around 180,000. Some data centers would rather leave their facilities sitting empty than run the risk of losing even more money because they are so costly to run, says Fan: “The revenue from having a tiny part of the data center running simply wouldn’t cover the electricity and maintenance cost.”

“It’s paradoxical—China faces the highest acquisition costs for Nvidia chips, yet GPU leasing prices are extraordinarily low,” Li says. There’s an oversupply of computational power, especially in central and west China, but at the same time, there’s a shortage of cutting-edge chips. 

However, not all brokers were looking to make money from data centers in the first place. Instead, many were interested in gaming government benefits all along. Some operators exploit the sector for subsidized green electricity, obtaining permits to generate and sell power, according to Fang and some Chinese media reports. Instead of using the energy for AI workloads, they resell it back to the grid at a premium. In other cases, companies acquire land for data center development to qualify for state-backed loans and credits, leaving facilities unused while still benefiting from state funding, according to the local media outlet Jiazi Guangnian.

“Towards the end of 2024, no clear-headed contractor and broker in the market would still go into the business expecting direct profitability,” says Fang. “Everyone I met is leveraging the data center deal for something else the government could offer.”

A necessary evil

Despite the underutilization of data centers, China’s central government is still throwing its weight behind a push for AI infrastructure. In early 2025, it convened an AI industry symposium, emphasizing the importance of self-reliance in this technology. 

Major Chinese tech companies are taking note, making investments aligning with this national priority. Alibaba Group announced plans to invest over $50 billion in cloud computing and AI hardware infrastructure over the next three years, while ByteDance plans to invest around $20 billion in GPUs and data centers.

In the meantime, companies in the US are doing likewise. Major tech firms including OpenAI, Softbank, and Oracle have teamed up to commit to the Stargate initiative, which plans to invest up to $500 billion over the next four years to build advanced data centers and computing infrastructure. ​Given the AI competition between the two countries, experts say that China is unlikely to scale back its efforts. “If generative AI is going to be the killer technology, infrastructure is going to be the determinant of success,”  says Goodrich, the tech policy advisor to RAND.

“The Chinese central government will likely see [underused data centers] as a necessary evil to develop an important capability, a growing pain of sorts. You have the failed projects and distressed assets, and the state will consolidate and clean it up. They see the end, not the means,” Goodrich says.

Demand remains strong for Nvidia chips, and especially the H20 chip, which was custom-designed for the Chinese market. One industry source, who requested not to be identified under his company policy, confirmed that the H20, a lighter, faster model optimized for AI inference, is currently the most popular Nvidia chip, followed by the H100, which continues to flow steadily into China even though sales are officially restricted by US sanctions. Some of the new demand is driven by companies deploying their own versions of DeepSeek’s open-source models.

For now, many data centers in China sit in limbo—built for a future that has yet to arrive. Whether they will find a second life remains uncertain. For Fang Cunbao, DeepSeek’s success has become a moment of reckoning, casting doubt on the assumption that an endless expansion of AI infrastructure guarantees progress. 

That’s just a myth, he now realizes. At the start of this year, Fang decided to quit the data center industry altogether. “The market is too chaotic. The early adopters profited, but now it’s just people chasing policy loopholes,” he says. He’s decided to go into AI education next. 

“What stands between now and a future where AI is actually everywhere,” he says, “is not infrastructure anymore, but solid plans to deploy the technology.”

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2025/03/26/1113802/china-ai-data-centers-unused/amp/

2025 Scorecard on State Health System Performance – Commonwealth Fund

Posted by timmreardon on 08/16/2025
Posted in: Uncategorized.

Fragile Progress, Continuing Disparities

AUTHORS

David C. Radley, Kristen Kolb,Sara R. CollinsDOWNLOADS

  • Appendices ↓
  • State Profiles (zip) ↓
  • News Release ↓

RELATED CONTENT

  • 2023 Scorecard on State Health System Performance
  • 2022 Scorecard on State Health System Performance
  • 2024 State Scorecard on Women’s Health and Reproductive Care
  • Advancing Racial Equity in U.S. Health Care: The Commonwealth Fund 2024 State Health Disparities Report

Scorecard Highlights

  • Topping the 2025 Scorecard’s overall health system rankings are Massachusetts, Hawaii, New Hampshire, Rhode Island, and the District of Columbia, based on 50 measures of health care access and affordability, prevention and treatment, avoidable hospital use and costs, health outcomes and healthy behaviors, income disparity, and equity.
  • The lowest-ranked states are Mississippi, Texas, Oklahoma, Arkansas, and West Virginia.
  • Uninsured rates fell to record lows in all states by 2023, and differences in health coverage and access to care narrowed between states. These improvements were in all likelihood due to the Affordable Care Act’s coverage expansions, recent state expansions of Medicaid eligibility, and more affordable marketplace plan premiums.
  • The number of children receiving all doses of seven recommended early childhood vaccines fell in most states between 2019 and 2023. In five states, including Nebraska and Minnesota, the decline exceeded 10 percent.
  • The infant mortality rate (deaths within the first year of life) worsened in 20 states between 2018 and 2022, with considerable variation across states.
  • Premature, avoidable deaths vary considerably across states — the rate in West Virginia is more than twice as high as the rate in Massachusetts. Not only are avoidable mortality rates higher in the United States than in other high-income countries, but they are also on the rise, even as they fall elsewhere.
  • Wide racial disparities in premature deaths are the norm in most states. In 42 states and D.C., avoidable mortality for Black people is at least two times the rate for the group with the lowest rate.
  • When it comes to having affordable health coverage, good-quality care, and the opportunity to live a healthy life, where you live matters in the U.S. Targeted, coordinated federal and state policies are needed to raise health system performance across the nation.

Read the 2025 Full Report: https://www.commonwealthfund.org/publications/scorecard/2025/jun/2025-scorecard-state-health-system-performance?

GSA to unveil USAi, a new tool for federal agencies to experiment with AI models 

Posted by timmreardon on 08/14/2025
Posted in: Uncategorized.

Models from four major AI firms will be available for immediate testing upon launch. Notably, Elon Musk’s xAI Grok chatbot will not be one of these four. 

BYMIRANDA NAZZARO

AUGUST 14, 2025

The General Services Administration will roll out a new governmentwide tool Thursday that gives federal agencies the ability to test major artificial intelligence models, a continuation of Trump administration efforts to ramp up government use of automation. 

The AI evaluation suite, titled USAi.gov, will launch later Thursday morning and allow federal agencies to test various AI models, including those from Anthropic, OpenAI, Google and Meta to start, two senior GSA officials told FedScoop. 

The launch of USAi underscores the Trump administration’s increasing appetite for AI integration into federal government workspaces. The GSA has described these tools as a way to help federal workers with time-consuming tasks, like document summaries, and give government officials access to some of the country’s leading AI firms. 

The GSA, according to one of the officials, will act as a “curator of sorts” for determining which models will be available for testing on USAi. The official noted that additional models are being considered for the platform, with input from GSA’s industry and federal partners, and that American-made models are the primary focus. 

Grok, the chatbot made by Elon Musk’s xAI firm, is notably not included on the platform for its launch Thursday. xAI introduced a Grok for Government product last month, days after FedScoop reportedon the GSA’s interest in the chatbot for government use. 

FedScoop reported last month that GSA recently registered the domain usai.gov. 

How USAi.gov will work

The USAi tool builds upon GSA’s internal chatbot, GSAi, which was rolled out internally in March to give GSA employees access to different enterprise AI models. Zach Whitman, GSA’s chief AI officer and data officer, hinted last month that the GSA was exploring how it could implement its internal AI chatbot in other agencies. 

Once an agency tests the model on USAi, it has the option to procure it from the normal federal marketplace, one of the officials said. In other cases, an agency may stay on the USAi platform in the wake of changing market dynamics but can still access the model for testing, the official added. 

The platform appears to directly coincide with the GSA’s ongoing rehaul of the federal procurement process, which is focused on consolidating the government’s purchasing of goods and services. 

“What we don’t want is to get into this situation where we buy a few licenses for something here and a few licenses for something there, so being able to blanket our entire workforce with the same market-leading capabilities was hugely valuable to us, right off the bat,” Whitman said in an interview with FedScoop about the USAi launch. 

GSA has announced a number of new collaborations this month with firms like OpenAI, Anthropic and Box, to offer their products at a significantly discounted price to federal agencies. And FedScoop reported this week that the GSA is considering prioritizing the review of AI technologies in the procurement process. 

The USAi launch comes on the heels of the White House’s AI Action Plan, which calls on the GSA to establish an “AI procurement toolbox” to encourage “uniformity across the federal enterprise.” The plan, released last month, mandates that federal agencies guarantee any employee “whose work could benefit” from frontier language models has access to them. 

Building trust with models

Whitman said GSA is hopeful federal users will have more trust to work with a platform like USAi, noting public tools on their own can prompt fears around working with sensitive materials. 

Dave Shive, GSA’s chief information officer, said in an interview with FedScoop that the agency is “not just prototyping technology.” 

“We’re also prototyping new ways to do business and it made a bunch of sense for us to build … a ‘model garden’ — a portfolio of models that our users can choose from, because they all have different strengths and weaknesses,” Shive said. “And those are models across a variety of vendors, because they’re trying to think of new, creative ways to do 21st-century business at GSA. 

“And if they have that full suite of models, instead of being limited to just one vendor, it allows them to do that business level, business architecture, prototyping, the very things that we’re all expecting AI can help with,” he added. 

In addition to the chatbot and API testing features on USAi, agency administrators can also view GSA’s data-based evaluations for the models to determine which are best for their specific use cases, one of the officials said. 

“You can define ‘best’ in any number of ways, from cost implications, from speed implications, from usability implications to bias sensitivity implications,” the other official said, adding that “we have all this kind of decision criteria across a vast number of domains that go into them.” 

The GSA said it is offering USAi to all civilian federal agencies, along with the Defense Department. A person familiar with the matter said that as of late Wednesday afternoon, chief AI officers had not yet been briefed about the launch of the USAI.gov platform.

Three evaluations take place prior to a model being available for testing on USAi, one of the officials explained. The first focuses on safety, such as looking at whether a model outputs hate speech, while the second is based on performance at answering questions and the third involves red-teaming, or testing of durability. 

The safety teams reviewing the report are specific to USAi, the official noted, emphasizing that this process is not intended to “overstep the role or function of a USAi platform” that welcomes agency input.

Rebecca Heilweil contributed reporting. 

Article link: https://fedscoop.com/usai-general-services-administration-artificial-intelligence-google-meta-anthropic-claude/?

Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System – Commonwealth Fund

Posted by timmreardon on 08/12/2025
Posted in: Uncategorized.

Comparing Performance in 10 Nations

AUTHORS

David Blumenthal, Evan D. Gumas,Arnav Shah, Munira Z. Gunja,Reginald D. Williams IIDOWNLOADS

  • Fund Report ↓
  • Chartpack (pdf) ↓
  • Chartpack (ppt) ↓
  • News Release ↓

RELATED CONTENT

  • International Health Care System Profiles
  • High U.S. Health Care Spending: Where Is It All Going?
  • U.S. Health Care from a Global Perspective, 2022: Accelerating Spending, Worsening Outcomes
  • The Cost of Not Getting Care: Income Disparities in the Affordability of Health Services Across High-Income Countries
  • Mirror, Mirror 2021: Reflecting Poorly
  • Americans, No Matter the State They Live In, Die Younger Than People in Many Other Countries

Abstract

  • Goal: Compare health system performance in 10 countries, including the United States, to glean insights for U.S. improvement.
  • Methods: Analysis of 70 health system performance measures in five areas: access to care, care process, administrative efficiency, equity, and health outcomes.
  • Key Findings: The top three countries are Australia, the Netherlands, and the United Kingdom, although differences in overall performance between most countries are relatively small. The only clear outlier is the U.S., where health system performance is dramatically lower.
  • Conclusion: The U.S. continues to be in a class by itself in the underperformance of its health care sector. While the other nine countries differ in the details of their systems and in their performance on domains, unlike the U.S., they all have found a way to meet their residents’ most basic health care needs, including universal coverage.

SECTIONS

  • 01Performance Overview
  • 02Access to Care
  • 03Care Process
  • 04Administrative Efficiency
  • 05Equity
  • 06Health Outcomes
  • 07What the U.S. Can Do to Improve
  • 08How We Conducted This Study
  • 09How We Measured Performance

Introduction

Mirror, Mirror 2024 is the Commonwealth Fund’s eighth report comparing the performance of health systems in selected countries. Since the first edition in 2004, our goal has remained the same: to highlight lessons from the experiences of these nations, with special attention to how they might inform health system improvement in the United States.

While each country’s health system is unique — evolving over decades, sometimes centuries, in tandem with shifts in political culture, history, and resources — comparisons can offer rich insights to inform policy thinking. Perhaps above all, they can demonstrate the profound impact of national policy choices on a country’s health and well-being.

In this edition of Mirror, Mirror, we compare the health systems of 10 countries: Australia, Canada, France, Germany, the Netherlands, New Zealand, Sweden, Switzerland, the United Kingdom, and the United States. We examine five key domains of health system performance: access to care, care process, administrative efficiency, equity, and health outcomes (each is defined below).

Despite their overall rankings, all the countries have strengths and weaknesses, ranking high on some dimensions and lower on others. No country is at the top or bottom on all areas of performance. Even the top-ranked country — Australia — does less well, for example, on measures of access to care and care process. And even the U.S., with the lowest-ranked health system, ranks second in the care process domain.

Nevertheless, in the aggregate, the nine nations we examined are more alike than different with respect to their higher and lower performance in various domains. But there is one glaring exception — the U.S. (see “How We Conducted This Study”). Especially concerning is the U.S. record on health outcomes, particularly in relation to how much the U.S. spends on health care. The ability to keep people healthy is a critical indicator of a nation’s capacity to achieve equitable growth. In fulfilling this fundamental obligation, the U.S. continues to fail.

PREVIOUS EDITIONS OF MIRROR, MIRROR

Illustration of the earth reflected in floating mirrors

IMPROVING HEALTH CARE QUALITY

Mirror, Mirror 2021: Reflecting Poorly

FUND REPORTS / AUG 04, 2021

IMPROVING HEALTH CARE QUALITY

Mirror, Mirror 2017: International Comparison Reflects Flaws and Opportunities for Better U.S. Health Care

FUND REPORTS / JUL 14, 2017

IMPROVING HEALTH CARE QUALITY

Mirror, Mirror on the Wall, 2014 Update: How the U.S. Health Care System Compares Internationally

FUND REPORTS / JUN 16, 2014

How We Measured Performance

Our approach to assessing nations’ health systems mostly resembles recent editions of Mirror, Mirror, involving 70 unique measures in five performance domains. The data sources for our assessments are rich and varied. First, we rely on the unique data collected from international surveys that the Commonwealth Fund conducts in close collaboration with participating countries.1 On a three-year rotating basis, the Fund and its partners survey older adults (age 65 and older), primary care physicians, and the general population (age 18 and older) in each nation. The 2024 edition relies on surveys from 2021, 2022, and 2023.

We also rely on published and unpublished data from cross-national organizations including the World Health Organization (WHO), the Organisation for Economic Co-operation and Development (OECD), and Our World in Data, as well as national data registries and the research literature.

Mirror, Mirror 2024 differs from past reports in certain respects:

  • It covers 10 countries instead of the previous 11, after Norway exited the Commonwealth Fund’s international surveys. Norway was the top-ranked country in the 2021 edition of Mirror, Mirror.
  • It accounts for the impact of COVID-19 on health system performance, as we are able to use data collected since the onset of the pandemic and do not use data pre-2020.
  • It investigates several dimensions of equity. In addition to comparisons between residents with above-average and below-average income, this edition examines health system performance differences based on gender (limited to male and female because of insufficient sample size to include additional gender identities) and location (rural and nonrural) as well as patients’ experiences of discrimination, as reported by physicians. Comparisons of performance with respect to race and ethnicity were not possible because of data limitations: many countries do not collect information on these variables and the constructs of identity vary from country to country. To allow for continuity and comparison with previous editions, we present separate analyses for those based only on income and those based on income, gender, and geography combined. Only the analysis based on income was included in our overall rankings. For further detail, see “How We Conducted This Study.”
Article link: https://www.commonwealthfund.org/publications/fund-reports/2024/sep/mirror-mirror-2024

These protocols will help AI agents navigate our messy lives – MIT Technology Review

Posted by timmreardon on 08/11/2025
Posted in: Uncategorized.

Anthropic, Google, and others are developing better ways for agents to interact with our programs and each other, but there’s still more work to be done.

By Peter Hallarchive page

August 4, 2025

A growing number of companies are launching AI agents that can do things on your behalf—actions like sending an email, making a document, or editing a database. Initial reviews for these agents have been mixed at best, though, because they struggle to interact with all the different components of our digital lives.

Part of the problem is that we are still building the necessary infrastructure to help agents navigate the world. If we want agents to complete tasks for us, we need to give them the necessary tools while also making sure they use that power responsibly.

Anthropic and Google are among the companies and groups working on exactly that. Over the past year, they have both introduced protocols that try to define how AI agents should interact with each other and the world around them. These protocols could make it easier for agents to control other programs like email clients and note-taking apps. 

The reason has to do with application programming interfaces, the connections between computers or programs that govern much of our online world. APIs currently reply to “pings” with standardized information. But AI models aren’t made to work exactly the same every time. The very randomness that helps them come across as conversational and expressive also makes it difficult for them to both call an API and understand the response. 

“Models speak a natural language,” says Theo Chu, a project manager at Anthropic. “For [a model] to get context and do something with that context, there is a translation layer that has to happen for it to make sense to the model.” Chu works on one such translation technique, the Model Context Protocol (MCP), which Anthropic introduced at the end of last year. 

Related Story

three identical agents with notepads and faces obscured by a digital pattern

What are AI agents? 

The next big thing is AI tools that can do more complex tasks. Here’s how they will work.

MCP attempts to standardize how AI agents interact with the world via various programs, and it’s already very popular. One web aggregator for MCP servers (essentially, the portals for different programs or tools that agents can access) lists over 15,000 servers already. 

Working out how to govern how AI agents interact with each other is arguably an even steeper challenge, and it’s one the Agent2Agent protocol (A2A), introduced by Google in April, tries to take on. Whereas MCP translates requests between words and code, A2A tries to moderate exchanges between agents, which is an “essential next step for the industry to move beyond single-purpose agents,” Rao Surapaneni, who works with A2A at Google Cloud, wrote in an email to MIT Technology Review. 

Google says 150 companies have already partnered with it to develop and adopt A2A, including Adobe and Salesforce. At a high level, both MCP and A2A tell an AI agent what it absolutely needs to do, what it should do, and what it should not do to ensure a safe interaction with other services. In a way, they are complementary—each agent in an A2A interaction could individually be using MCP to fetch information the other asks for. 

However, Chu stresses that it is “definitely still early days” for MCP, and the A2A road map lists plenty of tasks still to be done. We’ve identified the three main areas of growth for MCP, A2A, and other agent protocols: security, openness, and efficiency.

What should these protocols say about security?

Researchers and developers still don’t really understand how AI models work, and new vulnerabilities are being discovered all the time. For chatbot-style AI applications, malicious attacks can cause models to do all sorts of bad things, including regurgitating training data and spouting slurs. But for AI agents, which interact with the world on someone’s behalf, the possibilities are far riskier. 

For example, one AI agent, made to read and send emails for someone, has already been shown to be vulnerable to what’s known as an indirect prompt injection attack. Essentially, an email could be written in a way that hijacksthe AI model and causes it to malfunction. Then, if that agent has access to the user’s files, it could be instructed to send private documents to the attacker. 

Some researchers believe that protocols like MCP should prevent agents from carrying out harmful actions like this. However, it does not at the moment. “Basically, it does not have any security design,” says Zhaorun Chen, a  University of Chicago PhD student who works on AI agent security and uses MCP servers. 

Bruce Schneier, a security researcher and activist, is skeptical that protocols like MCP will be able to do much to reduce the inherent risks that come with AI and is concerned that giving such technology more power will just give it more ability to cause harm in the real, physical world. “We just don’t have good answers on how to secure this stuff,” says Schneier. “It’s going to be a security cesspool really fast.” 

Others are more hopeful. Security design could be added to MCP and A2A similar to the way it is for internet protocols like HTTPS (though the nature of attacks on AI systems is very different). And Chen and Anthropic believe that standardizing protocols like MCP and A2A can help make it easier to catch and resolve security issues even as is. Chen uses MCP in his research to test the roles different programs can play in attacks to better understand vulnerabilities. Chu at Anthropic believes that these tools could let cybersecurity companies more easily deal with attacks against agents, because it will be easier to unpack who sent what. 

How open should these protocols be?

Although MCP and A2A are two of the most popular agent protocols available today, there are plenty of others in the works. Large companies like Cisco and IBM are working on their own protocols, and other groups have put forth different designs like Agora, designed by researchers at the University of Oxford, which upgrades an agent-service communication from human language to structured data in real time.

Many developers hope there could eventually be a registry of safe, trusted systems to navigate the proliferation of agents and tools. Others, including Chen, want users to be able to rate different services in something like a Yelp for AI agent tools. Some more niche protocols have even built blockchains on top of MCP and A2A so that servers can show they are not just spam. 

Both MCP and A2A are open-source, which is common for would-be standards as it lets others work on building them. This can help protocols develop faster and more transparently. 

“If we go build something together, we spend less time overall, because we’re not having to each reinvent the wheel,” says David Nalley, who leads developer experience at Amazon Web Services and works with a lot of open-source systems, including A2A and MCP. 

Google donated A2A to the Linux Foundation, a nonprofit organization that guides open-source projects, back in June, and Amazon Web Services is now one of the collaborators on the project. With the foundation’s stewardship, the developers who work on A2A (including employees at Google and many others) all get a say in how it should evolve. MCP, on the other hand, is owned by Anthropic and licensed for free. That is a sticking point for some open-source advocates, who want others to have a say in how the code base itself is developed. 

“There’s admittedly some increased risk around a single person or a single entity being in absolute control,” says Nalley. He says most people would prefer multiple groups to have a “seat at the table” to make sure that these protocols are serving everyone’s best interests. 

However, Nalley does believe Anthropic is acting in good faith—its license, he says, is incredibly permissive, allowing other groups to create their own modified versions of the code (a process known as “forking”). 

“Someone could fork it if they needed to, if something went completely off the rails,” says Nalley. IBM’s Agent Communication Protocol was actually spun off of MCP. 

Anthropic is still deciding exactly how to develop MCP. For now, it works with a steering committee of outside companies that help make decisions on MCP’s development, but Anthropic seems open to changing this approach. “We are looking to evolve how we think about both ownership and governance in the future,” says Chu.

Is natural language fast enough?

MCP and A2A work on the agents’ terms—they use words and phrases (termed natural language in AI), just as AI models do when they are responding to a person. This is part of the selling point for these protocols, because it means the model doesn’t have to be trained to talk in a way that is unnatural to it. “Allowing a natural-language interface to be used between agents and not just with humans unlocks sharing the intelligence that is built into these agents,” says Surapaneni.

But this choice does come with drawbacks. Natural-language interfaces lack the precision of APIs, and that could result in incorrect responses. And it creates inefficiencies. 

Related Story

virtual head between strata of screens

Are we ready to hand AI agents the keys?

We’re starting to give AI agents real autonomy, and we’re not prepared for what could happen next.

Usually, an AI model reads and responds to text by splitting words into tokens. The AI model will read a prompt, split it into input tokens, generate a response in the form of output tokens, and then put these tokens into words to send back. These tokens define in some sense how much work the AI model has to do—that’s why most AI platforms charge users according to the number of tokens used. 

But the whole point of working in tokens is so that people can understand the output—it’s usually faster and more efficient for machine-to-machine communication to just work over code. MCP and A2A both work in natural language, so they require the model to spend tokens as the agent talks to other machines, like tools and other agents. The user never even sees these exchanges—all the effort of making everything human-readable doesn’t ever get read by a human. “You waste a lot of tokens if you want to use MCP,” says Chen. 

Chen describes this process as potentially very costly. For example, suppose the user wants the agent to read a document and summarize it. If the agent uses another program to summarize here, it needs to read the document, write the document to the program, read back the summary, and write it back to the user. Since the agent needed to read and write everything, both the document and the summary get doubled up. According to Chen, “It’s actually a lot of tokens.”

As with so many aspects of MCP and A2A’s designs, their benefits also create new challenges. “There’s a long way to go if we want to scale up and actually make them useful,” says Chen.

Correction: This story was updated to clarify Nalley’s involvement with A2A. 

Article link: https://www.technologyreview.com/2025/08/04/1120996/protocols-help-agents-navigate-lives-mcp-a2a?

America Should Assume the Worst About AI – Foreign Affairs

Posted by timmreardon on 07/23/2025
Posted in: Uncategorized.
How to Plan for a Tech-Driven Geopolitical Crisis
Matan Chorev and Joel Predd

July 22, 2025

National security leaders rarely get to choose what to care about and how much to care about it. They are more often subjects of circumstances beyond their control. The September 11 attacks reversed the George W. Bush administration’s plan to reduce the United States’ global commitments and responsibilities. Revolutions across the Arab world pushed President Barack Obama back into the Middle East just as he was trying to pull the United States out. And Russia’s invasion of Ukraine upended the Biden administration’s goal of establishing “stable and predictable” relations with Moscow so that it could focus on strategic competition with China.

Policymakers could foresee many of the underlying forces and trends driving these agenda-shaping events. Yet for the most part, they failed to plan for the most challenging manifestations of where these forces would lead. They had to scramble to reconceptualize and recalibrate their strategies to respond to unfolding events.

The rapid advance of artificial intelligence—and the possible emergence of artificial general intelligence—promises to present policymakers with even greater disruption. Indicators of a coming powerful change are everywhere. Beijing and Washington have made global AI leadership a strategic imperative, and leading U.S. and Chinese companies are racing to achieve AGI. News coverage features near-daily announcements of technical breakthroughs, discussions of AI-driven job loss, and fears of catastrophic global risks such as the AI-enabled engineering of a deadly pandemic.

There is no way of knowing with certainty the exact trajectory along which AI will develop or precisely how it will transform national security. Policymakers should therefore assess and debate the merits of competing AI strategies with humility and caution. Whether one is bullish or bearish about AI’s prospects, though, national security leaders need to be ready to adapt their strategic plans to respond to events that could impose themselves on decision-makers this decade, if not during this presidential term. Washington must prepare for potential policy tradeoffs and geopolitical shifts, and identify practical steps it can take today to mitigate risks and turbocharge U.S. competitiveness. Some ideas and initiatives that today may seem infeasible or unnecessary will seem urgent and self-evident with the benefit of hindsight.

THINKING OUTSIDE THE BOX

There is no standard, shared definition of AGI or consensus on whether, when, or how it might emerge. Today’s frontier AI models are already increasingly capable of performing a greater number and complexity of cognitive tasks than the most skilled and best resourced humans. Since ChatGPT launched in 2022, the power of AI has increased by leaps and bounds. It is reasonable to assume that these models will become more powerful, autonomous, and diffuse in the coming years.

Nevertheless, the AGI era is not likely to announce itself with an earth-shattering moment as the nuclear era did with the first nuclear weapons test. Nor are the economic and technological circumstances as favorable to U.S. planners as they were in the past. In the nuclear era, for example, the U.S. government controlled the new technology, and planners had two decades to develop policy frameworks before a nuclear rival emerged. Planners today, by contrast, have less agency and time to adapt. China is already a near peer in technology, a handful of private companies are steering development, and AI is a general-purpose technology that is spreading to nearly every part of the economy and society.

In this rapidly changing environment, national security leaders should dedicate scarce planning resources to plausible but acutely challenging events. These types of events are not merely disruptions to the status quo but also signposts of alternative futures.

Say, for instance, that a U.S. company claims to have made the transformative technological leap to AGI. Leaders must decide how the U.S. government should respond if the company requests to be treated as a “national security asset.” This designation would grant the company public support that could allow it to secure its facilities, access sensitive or proprietary data, acquire more advanced chips, and avoid certain regulations. Alternatively, a Chinese firm may declare that it has achieved AGI before any of its U.S. rivals.

Planning for AGI cannot be delegated to futurists sent to a far-off bunker.

Policymakers grappling with these scenarios will have to balance competing and sometimes contradictory assessments, which will lead to different judgments about how much risk to accept and which concerns to prioritize. Without robust, independent analytic capabilities, the U.S. government may struggle to determine whether the firms’ claims are credible. National security leaders will also have to consider whether the new technological advance could provide China with a strategic advantage. If they fear AGI could give Beijing the ability to identify and exploit vulnerabilities in U.S. critical infrastructure faster than cyberdefenses can patch them, for example, they may prescribe actions—such as trying to slow or sabotage China’s AI development—that could escalate the risk of geopolitical conflict. On the other hand, if national security leaders are more concerned that nonstate actors or terrorists could use this new technology to create catastrophic bioweapons, they may prefer to try to cooperate with Beijing to prevent proliferation of a larger global threat.

Enhancing preparedness for AGI scenarios requires better understanding of the AI ecosystem at home and abroad. Government agencies need to keep up with how AI is developing to identify where new advances are most likely to emerge. This will reduce the risk of strategic surprise and help inform policy choices on which bottlenecks to prioritize and which vulnerabilities to exploit to potentially slow China’s progress.

Policymakers also need to explore ways to work with the private sector and with other countries. A scalable, dynamic, and two-way private-public partnership is crucial for a strategic response to the current challenges that AI presents, and this will be even more the case in an AGI world. Mutual suspicion between government and the private sector could cripple any crisis response. Meanwhile, leaders will need to develop policies to share sensitive, proprietary information on developments in frontier AI with partners and allies. Without such policies, it will be challenging to build the international coalition needed to respond to an AI-induced crisis, reduce global risk, and hold countries and companies accountable for irresponsible behavior.

ADVERSARIAL INTELLIGENCE

Artificial general intelligence will not only complicate existing geopolitical dynamics; it will also present novel national security challenges. Imagine an unprecedented AI-enabled cyberattack that wreaks havoc on financial institutions, private corporations, and government agencies and shuts down physical systems ranging from critical infrastructure to industrial robotics. In today’s world, determining who is responsible for cyberwarfare is already a challenging and time-intensive task. Any number of state and nonstate actors possess both the means and motivations to carry out destabilizing attacks. In a world with increasingly advanced AI, however, the situation would be even more complex. Policymakers would have to contemplate not only the possibility that an operation of this scale might be the prelude to a military campaign but also that it might be the work of an autonomous, self-replicating AI agent.

Planning for this scenario requires evaluating how today’s capabilities can handle tomorrow’s challenges. Governments cannot rely on present-day tools and techniques to quickly and confidently assess a threat, let alone apply relevant countermeasures. Given AI systems’ proven capacity to deceive and dissemble, current systems may be unable to determine whether an AI agent is operating on its own or at the behest of an adversary. Planners need to find new ways to assess its motivations and how to deter escalation.

Preparing for the worst requires reevaluating “attribution agnostic” steps to harden cyberdefenses, isolate potentially compromised data centers, and prevent the incapacitation of drones or connected vehicles. Planners need to assess whether current military and continuity of operations protocols can handle threats from adversarial AI. Public distrust of the government and technology companies will make it even more difficult to reassure a worried populace in the event of artificial intelligence–fueled misinformation. Given that an autonomous AI agent is not likely to respect national boundaries, adequate preparations would involve setting up channels with partners and adversaries alike to coordinate an effective international response.

How leaders diagnose the external impacts of an impending threat will shape how they react. In the event of a cyberattack, policymakers will have to make a real-time decision about whether to pursue targeted shutdowns of vulnerable cyber-physical systems and compromised data centers or—fearing the potential for rapid replication—impose a more comprehensive shutdown, which could prevent escalation but inhibit the functioning of the digital economy and systems on which airports and power plants rely. This loss-of-control scenario highlights the importance of clarifying legal authority and developing incident-response plans. More broadly, it reinforces the urgency of creating policies and technical strategies to address how advanced models are inclined to misbehave.

At minimum, planning should involve four types of actions. First, it should establish “no regret” actions that policymakers and private-sector players can take today to respond to events from a position of strength. Second, it should create “break glass” playbooks for future emergencies that can be continually updated as new threats, opportunities, and concepts emerge. Third, it should invest in capabilities that seem crucial across multiple scenarios. Finally, it should prioritize early indicators and warnings of strategic failure and create conditions for course corrections.

NO COUNTRY FOR OLD HABITS

Planning for the impacts of AGI on national security needs to start now. In an increasingly competitive and combustible world, and with an economically fragile and politically polarized domestic environment, the United States cannot afford being caught by surprise.

Although it is possible that AI will ultimately prove to be a “normal technology”—a technology, like the Internet or electricity, that transforms the world but whose pace of adoption has natural limits that governments and societies can control—it would be foolish to assume that preparing for major disruption would be a mistake. Planning for more difficult challenges can help leaders identify core strategic issues and build response tools that will be equally useful in less severe circumstances. It would also be unwise to presume that such planning will generate policy instincts and pathways that exacerbate risks or slow AI advances. In the nuclear era, for example, planning for potential nuclear terrorism inspired global initiatives to secure the fissile material needed to make nuclear weapons that ultimately made the world safer.

It would also be dangerous to treat the possibility of AGI like any “normal scenario” in the national security world. Technological expertise and fluency across the government is limited and uneven, and the institutional players that would be involved in responding to any scenario extend far beyond traditional national security agencies. Most scenarios are likely to occur abroad and at home simultaneously. Any response will rely heavily on the choices and decisions of actors outside government, including companies and civil society organizations, that do not have a seat in the White House Situation Room and may not prioritize national security. Likewise, planning cannot be delegated to futurists and technical experts sent to a far-off bunker to spend months crafting detailed plans in isolation. Preparing for a future with AGI must continuously inform today’s strategic debates.

There is an active debate about the merits of various strategies to win the competition for AI while avoiding catastrophe, but there has been less discussion about how AGI might reshape the international landscape, the distribution of global power, and geopolitical alliances. In an increasingly multipolar world, emerging players see advanced AI—and how the United States and China diffuse AI technology and its underlying digital architecture—as key to their national aspirations. Early planning, tabletop exercises with allies and partners, and sustained dialogue with countries that want to hedge their diplomatic bets will ensure that strategic choices are mutually beneficial. Any AI strategy that fails to account for a multipolar world and a more distributed global technology ecosystem will fail. And any national security strategy that fails to grapple with the potentially transformative effects of AGI will become irrelevant.

National security leaders don’t get to choose their crises. They do, however, get to choose what to plan for and where to allocate resources to prepare for future challenges. Planning for AGI is not an indulgence in science fiction or a distraction from existing problems and opportunities. It is a responsible way to prepare for the very real possibility of a new set of national security challenges in a radically transformed world.

Article link: https://www.foreignaffairs.com/united-states/artificial-intelligence-geopolitics-worst-about-ai

Posts navigation

← Older Entries
Newer Entries →
  • Search site

  • Follow healthcarereimagined on WordPress.com
  • Recent Posts

    • WHAT A QUBIT IS AND WHAT IT IS NOT. 01/25/2026
    • Governance Before Crisis We still have time to get this right. 01/21/2026
    • On the Eve of Davos: We’re Just Arguing About the Wrong Thing 01/18/2026
    • Are AI Companies Actually Ready to Play God? – RAND 01/17/2026
    • ChatGPT Health Is a Terrible Idea 01/09/2026
    • Choose the human path for AI – MIT Sloan 01/09/2026
    • Why AI predictions are so hard – MIT Technology Review 01/07/2026
    • Will AI make us crazy? – Bulletin of the Atomic Scientists 01/04/2026
    • Decisions about AI will last decades. Researchers need better frameworks – Bulletin of the Atomic Scientists 12/29/2025
    • Quantum computing reality check: What business needs to know now – MIT Sloan 12/29/2025
  • Categories

    • Accountable Care Organizations
    • ACOs
    • AHRQ
    • American Board of Internal Medicine
    • Big Data
    • Blue Button
    • Board Certification
    • Cancer Treatment
    • Data Science
    • Digital Services Playbook
    • DoD
    • EHR Interoperability
    • EHR Usability
    • Emergency Medicine
    • FDA
    • FDASIA
    • GAO Reports
    • Genetic Data
    • Genetic Research
    • Genomic Data
    • Global Standards
    • Health Care Costs
    • Health Care Economics
    • Health IT adoption
    • Health Outcomes
    • Healthcare Delivery
    • Healthcare Informatics
    • Healthcare Outcomes
    • Healthcare Security
    • Helathcare Delivery
    • HHS
    • HIPAA
    • ICD-10
    • Innovation
    • Integrated Electronic Health Records
    • IT Acquisition
    • JASONS
    • Lab Report Access
    • Military Health System Reform
    • Mobile Health
    • Mobile Healthcare
    • National Health IT System
    • NSF
    • ONC Reports to Congress
    • Oncology
    • Open Data
    • Patient Centered Medical Home
    • Patient Portals
    • PCMH
    • Precision Medicine
    • Primary Care
    • Public Health
    • Quadruple Aim
    • Quality Measures
    • Rehab Medicine
    • TechFAR Handbook
    • Triple Aim
    • U.S. Air Force Medicine
    • U.S. Army
    • U.S. Army Medicine
    • U.S. Navy Medicine
    • U.S. Surgeon General
    • Uncategorized
    • Value-based Care
    • Veterans Affairs
    • Warrior Transistion Units
    • XPRIZE
  • Archives

    • January 2026 (8)
    • December 2025 (11)
    • November 2025 (9)
    • October 2025 (10)
    • September 2025 (4)
    • August 2025 (7)
    • July 2025 (2)
    • June 2025 (9)
    • May 2025 (4)
    • April 2025 (11)
    • March 2025 (11)
    • February 2025 (10)
    • January 2025 (12)
    • December 2024 (12)
    • November 2024 (7)
    • October 2024 (5)
    • September 2024 (9)
    • August 2024 (10)
    • July 2024 (13)
    • June 2024 (18)
    • May 2024 (10)
    • April 2024 (19)
    • March 2024 (35)
    • February 2024 (23)
    • January 2024 (16)
    • December 2023 (22)
    • November 2023 (38)
    • October 2023 (24)
    • September 2023 (24)
    • August 2023 (34)
    • July 2023 (33)
    • June 2023 (30)
    • May 2023 (35)
    • April 2023 (30)
    • March 2023 (30)
    • February 2023 (15)
    • January 2023 (17)
    • December 2022 (10)
    • November 2022 (7)
    • October 2022 (22)
    • September 2022 (16)
    • August 2022 (33)
    • July 2022 (28)
    • June 2022 (42)
    • May 2022 (53)
    • April 2022 (35)
    • March 2022 (37)
    • February 2022 (21)
    • January 2022 (28)
    • December 2021 (23)
    • November 2021 (12)
    • October 2021 (10)
    • September 2021 (4)
    • August 2021 (4)
    • July 2021 (4)
    • May 2021 (3)
    • April 2021 (1)
    • March 2021 (2)
    • February 2021 (1)
    • January 2021 (4)
    • December 2020 (7)
    • November 2020 (2)
    • October 2020 (4)
    • September 2020 (7)
    • August 2020 (11)
    • July 2020 (3)
    • June 2020 (5)
    • April 2020 (3)
    • March 2020 (1)
    • February 2020 (1)
    • January 2020 (2)
    • December 2019 (2)
    • November 2019 (1)
    • September 2019 (4)
    • August 2019 (3)
    • July 2019 (5)
    • June 2019 (10)
    • May 2019 (8)
    • April 2019 (6)
    • March 2019 (7)
    • February 2019 (17)
    • January 2019 (14)
    • December 2018 (10)
    • November 2018 (20)
    • October 2018 (14)
    • September 2018 (27)
    • August 2018 (19)
    • July 2018 (16)
    • June 2018 (18)
    • May 2018 (28)
    • April 2018 (3)
    • March 2018 (11)
    • February 2018 (5)
    • January 2018 (10)
    • December 2017 (20)
    • November 2017 (30)
    • October 2017 (33)
    • September 2017 (11)
    • August 2017 (13)
    • July 2017 (9)
    • June 2017 (8)
    • May 2017 (9)
    • April 2017 (4)
    • March 2017 (12)
    • December 2016 (3)
    • September 2016 (4)
    • August 2016 (1)
    • July 2016 (7)
    • June 2016 (7)
    • April 2016 (4)
    • March 2016 (7)
    • February 2016 (1)
    • January 2016 (3)
    • November 2015 (3)
    • October 2015 (2)
    • September 2015 (9)
    • August 2015 (6)
    • June 2015 (5)
    • May 2015 (6)
    • April 2015 (3)
    • March 2015 (16)
    • February 2015 (10)
    • January 2015 (16)
    • December 2014 (9)
    • November 2014 (7)
    • October 2014 (21)
    • September 2014 (8)
    • August 2014 (9)
    • July 2014 (7)
    • June 2014 (5)
    • May 2014 (8)
    • April 2014 (19)
    • March 2014 (8)
    • February 2014 (9)
    • January 2014 (31)
    • December 2013 (23)
    • November 2013 (48)
    • October 2013 (25)
  • Tags

    Business Defense Department Department of Veterans Affairs EHealth EHR Electronic health record Food and Drug Administration Health Health informatics Health Information Exchange Health information technology Health system HIE Hospital IBM Mayo Clinic Medicare Medicine Military Health System Patient Patient portal Patient Protection and Affordable Care Act United States United States Department of Defense United States Department of Veterans Affairs
  • Upcoming Events

Blog at WordPress.com.
healthcarereimagined
Blog at WordPress.com.
  • Subscribe Subscribed
    • healthcarereimagined
    • Join 153 other subscribers
    • Already have a WordPress.com account? Log in now.
    • healthcarereimagined
    • Subscribe Subscribed
    • Sign up
    • Log in
    • Report this content
    • View site in Reader
    • Manage subscriptions
    • Collapse this bar
 

Loading Comments...