healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

The ‘godfather of AI’ says it will create ‘massive’ unemployment, make the rich richer, and rob people of their dignity – Business Insider

Posted by timmreardon on 09/08/2025
Posted in: Uncategorized.

By Thibault Spirlet

Sep 8, 2025, 5:49 AM ET

  • Geoffrey Hinton warns AI will cause “massive unemployment” and make the rich even richer.
  • Sam Altman and Elon Musk have backed a universal basic income as a cushion against job losses.
  • But Hinton says UBI won’t solve the deeper problem of the dignity and worth people get from their jobs.

Geoffrey Hinton helped invent the technology behind ChatGPT. Now he’s warning it could destroy the very jobs it was meant to enhance.

“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton, who is often referred to as the “godfather of AI,” told the Financial Times last Friday.

“It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer.”

Hinton, who won the Nobel Prize for his pioneering work on neural networks and spent a decade at Google before leaving in 2023, said the disruption is less about the technology itself than the system it operates within.

“That’s not AI’s fault,” he said, instead blaming the “capitalist system.”

The 77-year-old researcher also dismissed ideas like a universal basic income as a solution, arguing that a cash stipend wouldn’t address the loss of dignity people derive from their jobs.

Universal basic income “won’t deal with human dignity,” he said, adding that people get their worth from their jobs.

Not everyone is so pessimistic

Not all tech leaders share Hinton’s bleak view on the future.

OpenAI CEO Sam Altman has long pitched a universal basic income as a cushion against job losses, even funding one of the largest UBI trials in the US.

Elon Musk has echoed those calls, telling an audience at VivaTech last year that in a benign AI future, “probably none of us will have a job” — but universal income could let humans pursue meaning while machines handle work.

The investor Vinod Khosla has gone further, predicting that AI will perform 80% of the work in 80% of jobs. That, he argues, will slash the value of human labor and make UBI “crucial” to prevent a surge in inequality.

Anthropic CEO Dario Amodei, meanwhile, has called UBI just “a small part” of the solution, warning that society will need to invent entirely new systems to manage the shift.

Hinton isn’t convinced. While he’s previously advised the UK government to explore UBI, he now says cash payments won’t replace the sense of dignity people derive from their work.

Having lost two wives to cancer, he still hopes AI delivers breakthroughs in healthcare and education.

But beyond that, he believes the technology is more likely to erode livelihoods than uplift them.

“We are at a point in history where something amazing is happening,” he said, “and it may be amazingly good, and it may be amazingly bad.”

Article link: https://www.businessinsider.com/geoffrey-hinton-warns-ai-will-cause-mass-unemployment-and-inequality-2025-9

Prayer for Our Nation

Posted by timmreardon on 08/26/2025
Posted in: Uncategorized.

Grant us Grace and Guide us through these perilous times.

The 15 Diseases of Leadership, According of to Pope Francis

https://hbr.org/2015/04/the-15-diseases-of-leadership-according-to-pope-francis

Why South Korea’s AI rollback in classrooms is a cautionary tale for the US

Posted by timmreardon on 08/22/2025
Posted in: Uncategorized.

By Ayelet Sheffey

Aug 22, 2025, 3:43 AM ET

  • South Korea rolled back an initiative to use AI textbooks in classrooms.
  • It followed pushback from teachers, who said they didn’t have sufficient preparation.
  • While there’s a similar AI push in the US, evidence is lacking on whether it best supports student outcomes.

Humans have revolted against the machine in South Korea — and, in this battle, they’ve won.

Following pushback from teachersand parents, South Korea’s National Assembly on August 4 passed an amendment to an education bill that stripped previously sanctioned AI textbooks of their legal status as official classroom textbooks, and reclassified them as supplementary educational materials.

The Korean Federation of Teachers’ Associations said that while teachers “are not opposed to digital education innovation,” rolling out the textbooks without proper preparation and evaluation actually increased some teachers’ workloads.

The US should take note, said Alex Kotran, the founder and CEO of the AI Education Project, a nonprofit aimed at advancing AI literacy. He said the rollback of AI textbooks and the fact that teachers were involved in the pushback were “totally unsurprising.”

“Research shows that you’re going to get the best outcomes in teacher-centered classrooms, and anything that’s trying to move too quickly, focus on just the technology, without the adequate support for professional learning and development risks undermining that,” Kotran said.

The debate comes as US schools experiment with how best to use AI to fulfill their promise of more personalized learning. The Trump administration supports a public-private approach to increasing the use of the tech in education, but critics maintain that schools should be careful, given the minimal evidence on AI and student achievement, and that teacher training is key.

That’s not to say that there isn’t a place for AI, Kotran said — helping students learn AI skills will equip them for the workforce, where AI is being increasingly used in some fields. But there isn’t extensive evidence that having students learn solely from AI is the best approach.

“The bigger question is, how do you make sure the students are ready to add economic value in the labor market? And it’s not just using AI, it’s actually durable skills like the ability to communicate, problem solve, it’s critical thinking,” Kotran said. “And to build those skills, these are teacher-centered endeavors.”

The role of AI in US education

A survey released by the Korean Federation of Teachers’ Associations in July found that 87.4% of teachers reported a lack of preparation and support for using the textbook materials. The majority of respondents said that they should be allowed to choose how to use the AI textbooks to best suit their needs.

The association added in a press release that it supports efforts to advance AI usage in classrooms, but “we must not be absorbed in introducing technology while ignoring the voices of teachers.”

Some US teachers are concerned. In April, President Donald Trump signed an executive order to establish an AI task force that will establish “public-private partnerships” with AI industry organizations to promote AI literacy in K-12 classrooms. The order also called for government agencies to look into redirecting funding toward AI efforts.

Randi Weingarten, president of the American Federation of Teachers, said in a statement that the order “should be rejected in favor of what the research says works best: investing in classrooms and instruction designed by educators who work directly with students and who have the knowledge and expertise to meet their needs.

Amid concerns about AI adoption, however, some teachers have experienced positive outcomes with incorporating the technology. In an April survey of over 2,000 teachers, Gallup and the Walton Family Foundation found that among the teachers who use AI tools, 64% of respondents said that AI led to higher-quality modifications to student materials, and 61% said it helped them generate better insights on student learning and achievement.

Still, the report said that “no clear consensus exists on whether AI tools should be used in K-12 schools.”

Without comprehensive data on student outcomes using AI, it’s important to approach the topic with a focus on teacher training, not removing teachers from the equation, Kotran said. He added that, at the same time, educators and policymakers need to consider “the freight train that is barreling towards us in terms of job displacement.”

A JPMorgan analyst said there’s an increased risk that AI could replace white-collar jobs, potentially resulting in a “jobless recovery” in which that group is at higher risk of unemployment. Tech leaders are already warning of white-collar job cuts due to AI, and Kotran said the US should take this into account as Gen Zers continue to pursue those careers.

“When it comes to education, the AI just isn’t good enough to replace teachers yet,” Kotran said. “And it’s a bad bet as a school, you’re basically saying, ‘Well, we assume the technology is going to get better and we’re going to somehow be able to get past all of the downside risks of overrelying on AI.’ These are unknown things. It’s a huge, huge risk to take. And if you’re a parent, do you really want to experiment on your kid?”

Article link: https://www.businessinsider.com/ai-in-school-south-korea-textbook-rollback-jobs-education-2025-8

China built hundreds of AI data centers to catch the AI boom. Now many stand unused – MIT Technology Review

Posted by timmreardon on 08/21/2025
Posted in: Uncategorized.


The country poured billions into AI infrastructure, but the data center gold rush is unraveling as speculative investments collide with weak demand and DeepSeek shifts AI trends.

By Caiwei Chen

March 26, 2025

A year or so ago, Xiao Li was seeing floods of Nvidia chip deals on WeChat. A real estate contractor turned data center project manager, he had pivoted to AI infrastructure in 2023, drawn by the promise of China’s AI craze. 

At that time, traders in his circle bragged about securing shipments of high-performing Nvidia GPUs that were subject to US export restrictions. Many were smuggled through overseas channels to Shenzhen. At the height of the demand, a single Nvidia H100 chip, a kind that is essential to training AI models, could sell for up to 200,000 yuan ($28,000) on the black market. 

Now, his WeChat feed and industry group chats tell a different story. Traders are more discreet in their dealings, and prices have come back down to earth. Meanwhile, two data center projects Li is familiar with are struggling to secure further funding from investors who anticipate poor returns, forcing project leads to sell off surplus GPUs. “It seems like everyone is selling, but few are buying,” he says.

Just months ago, a boom in data center construction was at its height, fueled by both government and private investors. However, many newly built facilities are now sitting empty. According to people on the ground who spoke to MIT Technology Review—including contractors, an executive at a GPU server company, and project managers—most of the companies running these data centers are struggling to stay afloat. The local Chinese outlets Jiazi Guangnianand 36Kr report that up to 80% of China’s newly built computing resources remain unused.

Renting out GPUs to companies that need them for training AI models—the main business model for the new wave of data centers—was once seen as a sure bet. But with the rise of DeepSeek and a sudden change in the economics around AI, the industry is faltering.

“The growing pain China’s AI industry is going through is largely a result of inexperienced players—corporations and local governments—jumping on the hype train, building facilities that aren’t optimal for today’s need,” says Jimmy Goodrich, senior advisor for technology to the RAND Corporation. 

The upshot is that projects are failing, energy is being wasted, and data centers have become “distressed assets” whose investors are keen to unload them at below-market rates. The situation may eventually prompt government intervention, he says: “The Chinese government is likely to step in, take over, and hand them off to more capable operators.”

A chaotic building boom

When ChatGPT exploded onto the scene in late 2022, the response in China was swift. The central government designated AI infrastructure as a national priority, urging local governments to accelerate the development of so-called smart computing centers—a term coined to describe AI-focused data centers.

In 2023 and 2024, over 500 new data center projects were announced everywhere from Inner Mongolia to Guangdong, according to KZ Consulting, a market research firm. According to the China Communications Industry Association Data Center Committee, a state-affiliated industry association, at least 150 of the newly built data centers were finished and running by the end of 2024. State-owned enterprises, publicly traded firms, and state-affiliated funds lined up to invest in them, hoping to position themselves as AI front-runners. Local governments heavily promoted them in the hope they’d stimulate the economy and establish their region as a key AI hub. 

However, as these costly construction projects continue, the Chinese frenzy over large language models is losing momentum. In 2024 alone, over 144 companies registered with the Cyberspace Administration of China—the country’s central internet regulator—to develop their own LLMs. Yet according to the Economic Observer, a Chinese publication, only about 10% of those companies were still actively investing in large-scale model training by the end of the year.

China’s political system is highly centralized, with local government officials typically moving up the ranks through regional appointments. As a result, many local leaders prioritize short-term economic projects that demonstrate quick results—often to gain favor with higher-ups—rather than long-term development. Large, high-profile infrastructure projects have long been a tool for local officials to boost their political careers.

The post-pandemic economic downturn only intensified this dynamic. With China’s real estate sector—once the backbone of local economies—slumping for the first time in decades, officials scrambled to find alternative growth drivers. In the meantime, the country’s once high-flying internet industry was also entering a period of stagnation. In this vacuum, AI infrastructure became the new stimulus of choice.

“AI felt like a shot of adrenaline,” says Li. “A lot of money that used to flow into real estate is now going into AI data centers.”

By 2023, major corporations—many of them with little prior experience in AI—began partnering with local governments to capitalize on the trend. Some saw AI infrastructure as a way to justify business expansion or boost stock prices, says Fang Cunbao, a data center project manager based in Beijing. Among them were companies like Lotus, an MSG manufacturer, and Jinlun Technology, a textile firm—hardly the names one would associate with cutting-edge AI technology.

This gold-rush approach meant that the push to build AI data centers was largely driven from the top down, often with little regard for actual demand or technical feasibility, say Fang, Li, and multiple on-the-ground sources, who asked to speak anonymously for fear of political repercussions. Many projects were led by executives and investors with limited expertise in AI infrastructure, they say. In the rush to keep up, many were constructed hastily and fell short of industry standards. 

“Putting all these large clusters of chips together is a very difficult exercise, and there are very few companies or individuals who know how to do it at scale,” says Goodrich. “This is all really state-of-the-art computer engineering. I’d be surprised if most of these smaller players know how to do it. A lot of the freshly built data centers are quickly strung together and don’t offer the stability that a company like DeepSeek would want.”

To make matters worse, project leaders often relied on middlemen and brokers—some of whom exaggerated demand forecasts or manipulated procurement processes to pocket government subsidies, sources say. 

By the end of 2024, the excitement that once surrounded China’s data center boom was  curdling into disappointment. The reason is simple: GPU rental is no longer a particularly  lucrative business.

The DeepSeek reckoning

The business model of data centers is in theory straightforward: They make money by renting out GPU clusters to companies that need computing capacity for AI training. In reality, however, securing clients is proving difficult. Only a few top tech companies in China are now drawing heavily on computing power to train their AI models. Many smaller players have been giving up on pretraining their models or otherwise shifting their strategy since the rise of DeepSeek, which broke the internet with R1, its open-source reasoning model that matches the performance of ChatGPT o1 but was built at a fraction of its cost. 

“DeepSeek is a moment of reckoning for the Chinese AI industry. The burning question shifted from ‘Who can make the best large language model?’ to ‘Who can use them better?’” says Hancheng Cao, an assistant professor of information systems at Emory University. 

The rise of reasoning models like DeepSeek’s R1 and OpenAI’s ChatGPT o1 and o3 has also changed what businesses want from a data center. With this technology, most of the computing needs come from conducting step-by-step logical deductions in response to users’ queries, not from the process of training and creating the model in the first place. This reasoning process often yields better results but takes significantly more time. As a result, hardware with low latency (the time it takes for data to pass from one point on a network to another) is paramount. Data centers need to be located near major tech hubs to minimize transmission delays and ensure access to highly skilled operations and maintenance staff. 

This change means many data centers built in central, western, and rural China—where electricity and land are cheaper—are losing their allure to AI companies. In Zhengzhou, a city in Li’s home province of Henan, a newly built data center is even distributing free computing vouchers to local tech firms but still struggles to attract clients. 

Additionally, a lot of the new data centers that have sprung up in recent years were optimized for pretraining workloads—large, sustained computations run on massive data sets—rather than for inference, the process of running trained reasoning models to respond to user inputs in real time. Inference-friendly hardware differs from what’s traditionally used for large-scale AI training. 

GPUs like Nvidia H100 and A100 are designed for massive data processing, prioritizing speed and memory capacity. But as AI moves toward real-time reasoning, the industry seeks chips that are more efficient, responsive, and cost-effective. Even a minor miscalculation in infrastructure needs can render a data center suboptimal for the tasks clients require.

In these circumstances, the GPU rental price has dropped to an all-time low. A recent report from the Chinese media outlet Zhineng Yongxian said that an Nvidia H100 server configured with eight GPUs now rents for 75,000 yuan per month, down from highs of around 180,000. Some data centers would rather leave their facilities sitting empty than run the risk of losing even more money because they are so costly to run, says Fan: “The revenue from having a tiny part of the data center running simply wouldn’t cover the electricity and maintenance cost.”

“It’s paradoxical—China faces the highest acquisition costs for Nvidia chips, yet GPU leasing prices are extraordinarily low,” Li says. There’s an oversupply of computational power, especially in central and west China, but at the same time, there’s a shortage of cutting-edge chips. 

However, not all brokers were looking to make money from data centers in the first place. Instead, many were interested in gaming government benefits all along. Some operators exploit the sector for subsidized green electricity, obtaining permits to generate and sell power, according to Fang and some Chinese media reports. Instead of using the energy for AI workloads, they resell it back to the grid at a premium. In other cases, companies acquire land for data center development to qualify for state-backed loans and credits, leaving facilities unused while still benefiting from state funding, according to the local media outlet Jiazi Guangnian.

“Towards the end of 2024, no clear-headed contractor and broker in the market would still go into the business expecting direct profitability,” says Fang. “Everyone I met is leveraging the data center deal for something else the government could offer.”

A necessary evil

Despite the underutilization of data centers, China’s central government is still throwing its weight behind a push for AI infrastructure. In early 2025, it convened an AI industry symposium, emphasizing the importance of self-reliance in this technology. 

Major Chinese tech companies are taking note, making investments aligning with this national priority. Alibaba Group announced plans to invest over $50 billion in cloud computing and AI hardware infrastructure over the next three years, while ByteDance plans to invest around $20 billion in GPUs and data centers.

In the meantime, companies in the US are doing likewise. Major tech firms including OpenAI, Softbank, and Oracle have teamed up to commit to the Stargate initiative, which plans to invest up to $500 billion over the next four years to build advanced data centers and computing infrastructure. ​Given the AI competition between the two countries, experts say that China is unlikely to scale back its efforts. “If generative AI is going to be the killer technology, infrastructure is going to be the determinant of success,”  says Goodrich, the tech policy advisor to RAND.

“The Chinese central government will likely see [underused data centers] as a necessary evil to develop an important capability, a growing pain of sorts. You have the failed projects and distressed assets, and the state will consolidate and clean it up. They see the end, not the means,” Goodrich says.

Demand remains strong for Nvidia chips, and especially the H20 chip, which was custom-designed for the Chinese market. One industry source, who requested not to be identified under his company policy, confirmed that the H20, a lighter, faster model optimized for AI inference, is currently the most popular Nvidia chip, followed by the H100, which continues to flow steadily into China even though sales are officially restricted by US sanctions. Some of the new demand is driven by companies deploying their own versions of DeepSeek’s open-source models.

For now, many data centers in China sit in limbo—built for a future that has yet to arrive. Whether they will find a second life remains uncertain. For Fang Cunbao, DeepSeek’s success has become a moment of reckoning, casting doubt on the assumption that an endless expansion of AI infrastructure guarantees progress. 

That’s just a myth, he now realizes. At the start of this year, Fang decided to quit the data center industry altogether. “The market is too chaotic. The early adopters profited, but now it’s just people chasing policy loopholes,” he says. He’s decided to go into AI education next. 

“What stands between now and a future where AI is actually everywhere,” he says, “is not infrastructure anymore, but solid plans to deploy the technology.”

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2025/03/26/1113802/china-ai-data-centers-unused/amp/

2025 Scorecard on State Health System Performance – Commonwealth Fund

Posted by timmreardon on 08/16/2025
Posted in: Uncategorized.

Fragile Progress, Continuing Disparities

AUTHORS

David C. Radley, Kristen Kolb,Sara R. CollinsDOWNLOADS

  • Appendices ↓
  • State Profiles (zip) ↓
  • News Release ↓

RELATED CONTENT

  • 2023 Scorecard on State Health System Performance
  • 2022 Scorecard on State Health System Performance
  • 2024 State Scorecard on Women’s Health and Reproductive Care
  • Advancing Racial Equity in U.S. Health Care: The Commonwealth Fund 2024 State Health Disparities Report

Scorecard Highlights

  • Topping the 2025 Scorecard’s overall health system rankings are Massachusetts, Hawaii, New Hampshire, Rhode Island, and the District of Columbia, based on 50 measures of health care access and affordability, prevention and treatment, avoidable hospital use and costs, health outcomes and healthy behaviors, income disparity, and equity.
  • The lowest-ranked states are Mississippi, Texas, Oklahoma, Arkansas, and West Virginia.
  • Uninsured rates fell to record lows in all states by 2023, and differences in health coverage and access to care narrowed between states. These improvements were in all likelihood due to the Affordable Care Act’s coverage expansions, recent state expansions of Medicaid eligibility, and more affordable marketplace plan premiums.
  • The number of children receiving all doses of seven recommended early childhood vaccines fell in most states between 2019 and 2023. In five states, including Nebraska and Minnesota, the decline exceeded 10 percent.
  • The infant mortality rate (deaths within the first year of life) worsened in 20 states between 2018 and 2022, with considerable variation across states.
  • Premature, avoidable deaths vary considerably across states — the rate in West Virginia is more than twice as high as the rate in Massachusetts. Not only are avoidable mortality rates higher in the United States than in other high-income countries, but they are also on the rise, even as they fall elsewhere.
  • Wide racial disparities in premature deaths are the norm in most states. In 42 states and D.C., avoidable mortality for Black people is at least two times the rate for the group with the lowest rate.
  • When it comes to having affordable health coverage, good-quality care, and the opportunity to live a healthy life, where you live matters in the U.S. Targeted, coordinated federal and state policies are needed to raise health system performance across the nation.

Read the 2025 Full Report: https://www.commonwealthfund.org/publications/scorecard/2025/jun/2025-scorecard-state-health-system-performance?

GSA to unveil USAi, a new tool for federal agencies to experiment with AI models 

Posted by timmreardon on 08/14/2025
Posted in: Uncategorized.

Models from four major AI firms will be available for immediate testing upon launch. Notably, Elon Musk’s xAI Grok chatbot will not be one of these four. 

BYMIRANDA NAZZARO

AUGUST 14, 2025

The General Services Administration will roll out a new governmentwide tool Thursday that gives federal agencies the ability to test major artificial intelligence models, a continuation of Trump administration efforts to ramp up government use of automation. 

The AI evaluation suite, titled USAi.gov, will launch later Thursday morning and allow federal agencies to test various AI models, including those from Anthropic, OpenAI, Google and Meta to start, two senior GSA officials told FedScoop. 

The launch of USAi underscores the Trump administration’s increasing appetite for AI integration into federal government workspaces. The GSA has described these tools as a way to help federal workers with time-consuming tasks, like document summaries, and give government officials access to some of the country’s leading AI firms. 

The GSA, according to one of the officials, will act as a “curator of sorts” for determining which models will be available for testing on USAi. The official noted that additional models are being considered for the platform, with input from GSA’s industry and federal partners, and that American-made models are the primary focus. 

Grok, the chatbot made by Elon Musk’s xAI firm, is notably not included on the platform for its launch Thursday. xAI introduced a Grok for Government product last month, days after FedScoop reportedon the GSA’s interest in the chatbot for government use. 

FedScoop reported last month that GSA recently registered the domain usai.gov. 

How USAi.gov will work

The USAi tool builds upon GSA’s internal chatbot, GSAi, which was rolled out internally in March to give GSA employees access to different enterprise AI models. Zach Whitman, GSA’s chief AI officer and data officer, hinted last month that the GSA was exploring how it could implement its internal AI chatbot in other agencies. 

Once an agency tests the model on USAi, it has the option to procure it from the normal federal marketplace, one of the officials said. In other cases, an agency may stay on the USAi platform in the wake of changing market dynamics but can still access the model for testing, the official added. 

The platform appears to directly coincide with the GSA’s ongoing rehaul of the federal procurement process, which is focused on consolidating the government’s purchasing of goods and services. 

“What we don’t want is to get into this situation where we buy a few licenses for something here and a few licenses for something there, so being able to blanket our entire workforce with the same market-leading capabilities was hugely valuable to us, right off the bat,” Whitman said in an interview with FedScoop about the USAi launch. 

GSA has announced a number of new collaborations this month with firms like OpenAI, Anthropic and Box, to offer their products at a significantly discounted price to federal agencies. And FedScoop reported this week that the GSA is considering prioritizing the review of AI technologies in the procurement process. 

The USAi launch comes on the heels of the White House’s AI Action Plan, which calls on the GSA to establish an “AI procurement toolbox” to encourage “uniformity across the federal enterprise.” The plan, released last month, mandates that federal agencies guarantee any employee “whose work could benefit” from frontier language models has access to them. 

Building trust with models

Whitman said GSA is hopeful federal users will have more trust to work with a platform like USAi, noting public tools on their own can prompt fears around working with sensitive materials. 

Dave Shive, GSA’s chief information officer, said in an interview with FedScoop that the agency is “not just prototyping technology.” 

“We’re also prototyping new ways to do business and it made a bunch of sense for us to build … a ‘model garden’ — a portfolio of models that our users can choose from, because they all have different strengths and weaknesses,” Shive said. “And those are models across a variety of vendors, because they’re trying to think of new, creative ways to do 21st-century business at GSA. 

“And if they have that full suite of models, instead of being limited to just one vendor, it allows them to do that business level, business architecture, prototyping, the very things that we’re all expecting AI can help with,” he added. 

In addition to the chatbot and API testing features on USAi, agency administrators can also view GSA’s data-based evaluations for the models to determine which are best for their specific use cases, one of the officials said. 

“You can define ‘best’ in any number of ways, from cost implications, from speed implications, from usability implications to bias sensitivity implications,” the other official said, adding that “we have all this kind of decision criteria across a vast number of domains that go into them.” 

The GSA said it is offering USAi to all civilian federal agencies, along with the Defense Department. A person familiar with the matter said that as of late Wednesday afternoon, chief AI officers had not yet been briefed about the launch of the USAI.gov platform.

Three evaluations take place prior to a model being available for testing on USAi, one of the officials explained. The first focuses on safety, such as looking at whether a model outputs hate speech, while the second is based on performance at answering questions and the third involves red-teaming, or testing of durability. 

The safety teams reviewing the report are specific to USAi, the official noted, emphasizing that this process is not intended to “overstep the role or function of a USAi platform” that welcomes agency input.

Rebecca Heilweil contributed reporting. 

Article link: https://fedscoop.com/usai-general-services-administration-artificial-intelligence-google-meta-anthropic-claude/?

Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System – Commonwealth Fund

Posted by timmreardon on 08/12/2025
Posted in: Uncategorized.

Comparing Performance in 10 Nations

AUTHORS

David Blumenthal, Evan D. Gumas,Arnav Shah, Munira Z. Gunja,Reginald D. Williams IIDOWNLOADS

  • Fund Report ↓
  • Chartpack (pdf) ↓
  • Chartpack (ppt) ↓
  • News Release ↓

RELATED CONTENT

  • International Health Care System Profiles
  • High U.S. Health Care Spending: Where Is It All Going?
  • U.S. Health Care from a Global Perspective, 2022: Accelerating Spending, Worsening Outcomes
  • The Cost of Not Getting Care: Income Disparities in the Affordability of Health Services Across High-Income Countries
  • Mirror, Mirror 2021: Reflecting Poorly
  • Americans, No Matter the State They Live In, Die Younger Than People in Many Other Countries

Abstract

  • Goal: Compare health system performance in 10 countries, including the United States, to glean insights for U.S. improvement.
  • Methods: Analysis of 70 health system performance measures in five areas: access to care, care process, administrative efficiency, equity, and health outcomes.
  • Key Findings: The top three countries are Australia, the Netherlands, and the United Kingdom, although differences in overall performance between most countries are relatively small. The only clear outlier is the U.S., where health system performance is dramatically lower.
  • Conclusion: The U.S. continues to be in a class by itself in the underperformance of its health care sector. While the other nine countries differ in the details of their systems and in their performance on domains, unlike the U.S., they all have found a way to meet their residents’ most basic health care needs, including universal coverage.

SECTIONS

  • 01Performance Overview
  • 02Access to Care
  • 03Care Process
  • 04Administrative Efficiency
  • 05Equity
  • 06Health Outcomes
  • 07What the U.S. Can Do to Improve
  • 08How We Conducted This Study
  • 09How We Measured Performance

Introduction

Mirror, Mirror 2024 is the Commonwealth Fund’s eighth report comparing the performance of health systems in selected countries. Since the first edition in 2004, our goal has remained the same: to highlight lessons from the experiences of these nations, with special attention to how they might inform health system improvement in the United States.

While each country’s health system is unique — evolving over decades, sometimes centuries, in tandem with shifts in political culture, history, and resources — comparisons can offer rich insights to inform policy thinking. Perhaps above all, they can demonstrate the profound impact of national policy choices on a country’s health and well-being.

In this edition of Mirror, Mirror, we compare the health systems of 10 countries: Australia, Canada, France, Germany, the Netherlands, New Zealand, Sweden, Switzerland, the United Kingdom, and the United States. We examine five key domains of health system performance: access to care, care process, administrative efficiency, equity, and health outcomes (each is defined below).

Despite their overall rankings, all the countries have strengths and weaknesses, ranking high on some dimensions and lower on others. No country is at the top or bottom on all areas of performance. Even the top-ranked country — Australia — does less well, for example, on measures of access to care and care process. And even the U.S., with the lowest-ranked health system, ranks second in the care process domain.

Nevertheless, in the aggregate, the nine nations we examined are more alike than different with respect to their higher and lower performance in various domains. But there is one glaring exception — the U.S. (see “How We Conducted This Study”). Especially concerning is the U.S. record on health outcomes, particularly in relation to how much the U.S. spends on health care. The ability to keep people healthy is a critical indicator of a nation’s capacity to achieve equitable growth. In fulfilling this fundamental obligation, the U.S. continues to fail.

PREVIOUS EDITIONS OF MIRROR, MIRROR

Illustration of the earth reflected in floating mirrors

IMPROVING HEALTH CARE QUALITY

Mirror, Mirror 2021: Reflecting Poorly

FUND REPORTS / AUG 04, 2021

IMPROVING HEALTH CARE QUALITY

Mirror, Mirror 2017: International Comparison Reflects Flaws and Opportunities for Better U.S. Health Care

FUND REPORTS / JUL 14, 2017

IMPROVING HEALTH CARE QUALITY

Mirror, Mirror on the Wall, 2014 Update: How the U.S. Health Care System Compares Internationally

FUND REPORTS / JUN 16, 2014

How We Measured Performance

Our approach to assessing nations’ health systems mostly resembles recent editions of Mirror, Mirror, involving 70 unique measures in five performance domains. The data sources for our assessments are rich and varied. First, we rely on the unique data collected from international surveys that the Commonwealth Fund conducts in close collaboration with participating countries.1 On a three-year rotating basis, the Fund and its partners survey older adults (age 65 and older), primary care physicians, and the general population (age 18 and older) in each nation. The 2024 edition relies on surveys from 2021, 2022, and 2023.

We also rely on published and unpublished data from cross-national organizations including the World Health Organization (WHO), the Organisation for Economic Co-operation and Development (OECD), and Our World in Data, as well as national data registries and the research literature.

Mirror, Mirror 2024 differs from past reports in certain respects:

  • It covers 10 countries instead of the previous 11, after Norway exited the Commonwealth Fund’s international surveys. Norway was the top-ranked country in the 2021 edition of Mirror, Mirror.
  • It accounts for the impact of COVID-19 on health system performance, as we are able to use data collected since the onset of the pandemic and do not use data pre-2020.
  • It investigates several dimensions of equity. In addition to comparisons between residents with above-average and below-average income, this edition examines health system performance differences based on gender (limited to male and female because of insufficient sample size to include additional gender identities) and location (rural and nonrural) as well as patients’ experiences of discrimination, as reported by physicians. Comparisons of performance with respect to race and ethnicity were not possible because of data limitations: many countries do not collect information on these variables and the constructs of identity vary from country to country. To allow for continuity and comparison with previous editions, we present separate analyses for those based only on income and those based on income, gender, and geography combined. Only the analysis based on income was included in our overall rankings. For further detail, see “How We Conducted This Study.”
Article link: https://www.commonwealthfund.org/publications/fund-reports/2024/sep/mirror-mirror-2024

These protocols will help AI agents navigate our messy lives – MIT Technology Review

Posted by timmreardon on 08/11/2025
Posted in: Uncategorized.

Anthropic, Google, and others are developing better ways for agents to interact with our programs and each other, but there’s still more work to be done.

By Peter Hallarchive page

August 4, 2025

A growing number of companies are launching AI agents that can do things on your behalf—actions like sending an email, making a document, or editing a database. Initial reviews for these agents have been mixed at best, though, because they struggle to interact with all the different components of our digital lives.

Part of the problem is that we are still building the necessary infrastructure to help agents navigate the world. If we want agents to complete tasks for us, we need to give them the necessary tools while also making sure they use that power responsibly.

Anthropic and Google are among the companies and groups working on exactly that. Over the past year, they have both introduced protocols that try to define how AI agents should interact with each other and the world around them. These protocols could make it easier for agents to control other programs like email clients and note-taking apps. 

The reason has to do with application programming interfaces, the connections between computers or programs that govern much of our online world. APIs currently reply to “pings” with standardized information. But AI models aren’t made to work exactly the same every time. The very randomness that helps them come across as conversational and expressive also makes it difficult for them to both call an API and understand the response. 

“Models speak a natural language,” says Theo Chu, a project manager at Anthropic. “For [a model] to get context and do something with that context, there is a translation layer that has to happen for it to make sense to the model.” Chu works on one such translation technique, the Model Context Protocol (MCP), which Anthropic introduced at the end of last year. 

Related Story

three identical agents with notepads and faces obscured by a digital pattern

What are AI agents? 

The next big thing is AI tools that can do more complex tasks. Here’s how they will work.

MCP attempts to standardize how AI agents interact with the world via various programs, and it’s already very popular. One web aggregator for MCP servers (essentially, the portals for different programs or tools that agents can access) lists over 15,000 servers already. 

Working out how to govern how AI agents interact with each other is arguably an even steeper challenge, and it’s one the Agent2Agent protocol (A2A), introduced by Google in April, tries to take on. Whereas MCP translates requests between words and code, A2A tries to moderate exchanges between agents, which is an “essential next step for the industry to move beyond single-purpose agents,” Rao Surapaneni, who works with A2A at Google Cloud, wrote in an email to MIT Technology Review. 

Google says 150 companies have already partnered with it to develop and adopt A2A, including Adobe and Salesforce. At a high level, both MCP and A2A tell an AI agent what it absolutely needs to do, what it should do, and what it should not do to ensure a safe interaction with other services. In a way, they are complementary—each agent in an A2A interaction could individually be using MCP to fetch information the other asks for. 

However, Chu stresses that it is “definitely still early days” for MCP, and the A2A road map lists plenty of tasks still to be done. We’ve identified the three main areas of growth for MCP, A2A, and other agent protocols: security, openness, and efficiency.

What should these protocols say about security?

Researchers and developers still don’t really understand how AI models work, and new vulnerabilities are being discovered all the time. For chatbot-style AI applications, malicious attacks can cause models to do all sorts of bad things, including regurgitating training data and spouting slurs. But for AI agents, which interact with the world on someone’s behalf, the possibilities are far riskier. 

For example, one AI agent, made to read and send emails for someone, has already been shown to be vulnerable to what’s known as an indirect prompt injection attack. Essentially, an email could be written in a way that hijacksthe AI model and causes it to malfunction. Then, if that agent has access to the user’s files, it could be instructed to send private documents to the attacker. 

Some researchers believe that protocols like MCP should prevent agents from carrying out harmful actions like this. However, it does not at the moment. “Basically, it does not have any security design,” says Zhaorun Chen, a  University of Chicago PhD student who works on AI agent security and uses MCP servers. 

Bruce Schneier, a security researcher and activist, is skeptical that protocols like MCP will be able to do much to reduce the inherent risks that come with AI and is concerned that giving such technology more power will just give it more ability to cause harm in the real, physical world. “We just don’t have good answers on how to secure this stuff,” says Schneier. “It’s going to be a security cesspool really fast.” 

Others are more hopeful. Security design could be added to MCP and A2A similar to the way it is for internet protocols like HTTPS (though the nature of attacks on AI systems is very different). And Chen and Anthropic believe that standardizing protocols like MCP and A2A can help make it easier to catch and resolve security issues even as is. Chen uses MCP in his research to test the roles different programs can play in attacks to better understand vulnerabilities. Chu at Anthropic believes that these tools could let cybersecurity companies more easily deal with attacks against agents, because it will be easier to unpack who sent what. 

How open should these protocols be?

Although MCP and A2A are two of the most popular agent protocols available today, there are plenty of others in the works. Large companies like Cisco and IBM are working on their own protocols, and other groups have put forth different designs like Agora, designed by researchers at the University of Oxford, which upgrades an agent-service communication from human language to structured data in real time.

Many developers hope there could eventually be a registry of safe, trusted systems to navigate the proliferation of agents and tools. Others, including Chen, want users to be able to rate different services in something like a Yelp for AI agent tools. Some more niche protocols have even built blockchains on top of MCP and A2A so that servers can show they are not just spam. 

Both MCP and A2A are open-source, which is common for would-be standards as it lets others work on building them. This can help protocols develop faster and more transparently. 

“If we go build something together, we spend less time overall, because we’re not having to each reinvent the wheel,” says David Nalley, who leads developer experience at Amazon Web Services and works with a lot of open-source systems, including A2A and MCP. 

Google donated A2A to the Linux Foundation, a nonprofit organization that guides open-source projects, back in June, and Amazon Web Services is now one of the collaborators on the project. With the foundation’s stewardship, the developers who work on A2A (including employees at Google and many others) all get a say in how it should evolve. MCP, on the other hand, is owned by Anthropic and licensed for free. That is a sticking point for some open-source advocates, who want others to have a say in how the code base itself is developed. 

“There’s admittedly some increased risk around a single person or a single entity being in absolute control,” says Nalley. He says most people would prefer multiple groups to have a “seat at the table” to make sure that these protocols are serving everyone’s best interests. 

However, Nalley does believe Anthropic is acting in good faith—its license, he says, is incredibly permissive, allowing other groups to create their own modified versions of the code (a process known as “forking”). 

“Someone could fork it if they needed to, if something went completely off the rails,” says Nalley. IBM’s Agent Communication Protocol was actually spun off of MCP. 

Anthropic is still deciding exactly how to develop MCP. For now, it works with a steering committee of outside companies that help make decisions on MCP’s development, but Anthropic seems open to changing this approach. “We are looking to evolve how we think about both ownership and governance in the future,” says Chu.

Is natural language fast enough?

MCP and A2A work on the agents’ terms—they use words and phrases (termed natural language in AI), just as AI models do when they are responding to a person. This is part of the selling point for these protocols, because it means the model doesn’t have to be trained to talk in a way that is unnatural to it. “Allowing a natural-language interface to be used between agents and not just with humans unlocks sharing the intelligence that is built into these agents,” says Surapaneni.

But this choice does come with drawbacks. Natural-language interfaces lack the precision of APIs, and that could result in incorrect responses. And it creates inefficiencies. 

Related Story

virtual head between strata of screens

Are we ready to hand AI agents the keys?

We’re starting to give AI agents real autonomy, and we’re not prepared for what could happen next.

Usually, an AI model reads and responds to text by splitting words into tokens. The AI model will read a prompt, split it into input tokens, generate a response in the form of output tokens, and then put these tokens into words to send back. These tokens define in some sense how much work the AI model has to do—that’s why most AI platforms charge users according to the number of tokens used. 

But the whole point of working in tokens is so that people can understand the output—it’s usually faster and more efficient for machine-to-machine communication to just work over code. MCP and A2A both work in natural language, so they require the model to spend tokens as the agent talks to other machines, like tools and other agents. The user never even sees these exchanges—all the effort of making everything human-readable doesn’t ever get read by a human. “You waste a lot of tokens if you want to use MCP,” says Chen. 

Chen describes this process as potentially very costly. For example, suppose the user wants the agent to read a document and summarize it. If the agent uses another program to summarize here, it needs to read the document, write the document to the program, read back the summary, and write it back to the user. Since the agent needed to read and write everything, both the document and the summary get doubled up. According to Chen, “It’s actually a lot of tokens.”

As with so many aspects of MCP and A2A’s designs, their benefits also create new challenges. “There’s a long way to go if we want to scale up and actually make them useful,” says Chen.

Correction: This story was updated to clarify Nalley’s involvement with A2A. 

Article link: https://www.technologyreview.com/2025/08/04/1120996/protocols-help-agents-navigate-lives-mcp-a2a?

America Should Assume the Worst About AI – Foreign Affairs

Posted by timmreardon on 07/23/2025
Posted in: Uncategorized.
How to Plan for a Tech-Driven Geopolitical Crisis
Matan Chorev and Joel Predd

July 22, 2025

National security leaders rarely get to choose what to care about and how much to care about it. They are more often subjects of circumstances beyond their control. The September 11 attacks reversed the George W. Bush administration’s plan to reduce the United States’ global commitments and responsibilities. Revolutions across the Arab world pushed President Barack Obama back into the Middle East just as he was trying to pull the United States out. And Russia’s invasion of Ukraine upended the Biden administration’s goal of establishing “stable and predictable” relations with Moscow so that it could focus on strategic competition with China.

Policymakers could foresee many of the underlying forces and trends driving these agenda-shaping events. Yet for the most part, they failed to plan for the most challenging manifestations of where these forces would lead. They had to scramble to reconceptualize and recalibrate their strategies to respond to unfolding events.

The rapid advance of artificial intelligence—and the possible emergence of artificial general intelligence—promises to present policymakers with even greater disruption. Indicators of a coming powerful change are everywhere. Beijing and Washington have made global AI leadership a strategic imperative, and leading U.S. and Chinese companies are racing to achieve AGI. News coverage features near-daily announcements of technical breakthroughs, discussions of AI-driven job loss, and fears of catastrophic global risks such as the AI-enabled engineering of a deadly pandemic.

There is no way of knowing with certainty the exact trajectory along which AI will develop or precisely how it will transform national security. Policymakers should therefore assess and debate the merits of competing AI strategies with humility and caution. Whether one is bullish or bearish about AI’s prospects, though, national security leaders need to be ready to adapt their strategic plans to respond to events that could impose themselves on decision-makers this decade, if not during this presidential term. Washington must prepare for potential policy tradeoffs and geopolitical shifts, and identify practical steps it can take today to mitigate risks and turbocharge U.S. competitiveness. Some ideas and initiatives that today may seem infeasible or unnecessary will seem urgent and self-evident with the benefit of hindsight.

THINKING OUTSIDE THE BOX

There is no standard, shared definition of AGI or consensus on whether, when, or how it might emerge. Today’s frontier AI models are already increasingly capable of performing a greater number and complexity of cognitive tasks than the most skilled and best resourced humans. Since ChatGPT launched in 2022, the power of AI has increased by leaps and bounds. It is reasonable to assume that these models will become more powerful, autonomous, and diffuse in the coming years.

Nevertheless, the AGI era is not likely to announce itself with an earth-shattering moment as the nuclear era did with the first nuclear weapons test. Nor are the economic and technological circumstances as favorable to U.S. planners as they were in the past. In the nuclear era, for example, the U.S. government controlled the new technology, and planners had two decades to develop policy frameworks before a nuclear rival emerged. Planners today, by contrast, have less agency and time to adapt. China is already a near peer in technology, a handful of private companies are steering development, and AI is a general-purpose technology that is spreading to nearly every part of the economy and society.

In this rapidly changing environment, national security leaders should dedicate scarce planning resources to plausible but acutely challenging events. These types of events are not merely disruptions to the status quo but also signposts of alternative futures.

Say, for instance, that a U.S. company claims to have made the transformative technological leap to AGI. Leaders must decide how the U.S. government should respond if the company requests to be treated as a “national security asset.” This designation would grant the company public support that could allow it to secure its facilities, access sensitive or proprietary data, acquire more advanced chips, and avoid certain regulations. Alternatively, a Chinese firm may declare that it has achieved AGI before any of its U.S. rivals.

Planning for AGI cannot be delegated to futurists sent to a far-off bunker.

Policymakers grappling with these scenarios will have to balance competing and sometimes contradictory assessments, which will lead to different judgments about how much risk to accept and which concerns to prioritize. Without robust, independent analytic capabilities, the U.S. government may struggle to determine whether the firms’ claims are credible. National security leaders will also have to consider whether the new technological advance could provide China with a strategic advantage. If they fear AGI could give Beijing the ability to identify and exploit vulnerabilities in U.S. critical infrastructure faster than cyberdefenses can patch them, for example, they may prescribe actions—such as trying to slow or sabotage China’s AI development—that could escalate the risk of geopolitical conflict. On the other hand, if national security leaders are more concerned that nonstate actors or terrorists could use this new technology to create catastrophic bioweapons, they may prefer to try to cooperate with Beijing to prevent proliferation of a larger global threat.

Enhancing preparedness for AGI scenarios requires better understanding of the AI ecosystem at home and abroad. Government agencies need to keep up with how AI is developing to identify where new advances are most likely to emerge. This will reduce the risk of strategic surprise and help inform policy choices on which bottlenecks to prioritize and which vulnerabilities to exploit to potentially slow China’s progress.

Policymakers also need to explore ways to work with the private sector and with other countries. A scalable, dynamic, and two-way private-public partnership is crucial for a strategic response to the current challenges that AI presents, and this will be even more the case in an AGI world. Mutual suspicion between government and the private sector could cripple any crisis response. Meanwhile, leaders will need to develop policies to share sensitive, proprietary information on developments in frontier AI with partners and allies. Without such policies, it will be challenging to build the international coalition needed to respond to an AI-induced crisis, reduce global risk, and hold countries and companies accountable for irresponsible behavior.

ADVERSARIAL INTELLIGENCE

Artificial general intelligence will not only complicate existing geopolitical dynamics; it will also present novel national security challenges. Imagine an unprecedented AI-enabled cyberattack that wreaks havoc on financial institutions, private corporations, and government agencies and shuts down physical systems ranging from critical infrastructure to industrial robotics. In today’s world, determining who is responsible for cyberwarfare is already a challenging and time-intensive task. Any number of state and nonstate actors possess both the means and motivations to carry out destabilizing attacks. In a world with increasingly advanced AI, however, the situation would be even more complex. Policymakers would have to contemplate not only the possibility that an operation of this scale might be the prelude to a military campaign but also that it might be the work of an autonomous, self-replicating AI agent.

Planning for this scenario requires evaluating how today’s capabilities can handle tomorrow’s challenges. Governments cannot rely on present-day tools and techniques to quickly and confidently assess a threat, let alone apply relevant countermeasures. Given AI systems’ proven capacity to deceive and dissemble, current systems may be unable to determine whether an AI agent is operating on its own or at the behest of an adversary. Planners need to find new ways to assess its motivations and how to deter escalation.

Preparing for the worst requires reevaluating “attribution agnostic” steps to harden cyberdefenses, isolate potentially compromised data centers, and prevent the incapacitation of drones or connected vehicles. Planners need to assess whether current military and continuity of operations protocols can handle threats from adversarial AI. Public distrust of the government and technology companies will make it even more difficult to reassure a worried populace in the event of artificial intelligence–fueled misinformation. Given that an autonomous AI agent is not likely to respect national boundaries, adequate preparations would involve setting up channels with partners and adversaries alike to coordinate an effective international response.

How leaders diagnose the external impacts of an impending threat will shape how they react. In the event of a cyberattack, policymakers will have to make a real-time decision about whether to pursue targeted shutdowns of vulnerable cyber-physical systems and compromised data centers or—fearing the potential for rapid replication—impose a more comprehensive shutdown, which could prevent escalation but inhibit the functioning of the digital economy and systems on which airports and power plants rely. This loss-of-control scenario highlights the importance of clarifying legal authority and developing incident-response plans. More broadly, it reinforces the urgency of creating policies and technical strategies to address how advanced models are inclined to misbehave.

At minimum, planning should involve four types of actions. First, it should establish “no regret” actions that policymakers and private-sector players can take today to respond to events from a position of strength. Second, it should create “break glass” playbooks for future emergencies that can be continually updated as new threats, opportunities, and concepts emerge. Third, it should invest in capabilities that seem crucial across multiple scenarios. Finally, it should prioritize early indicators and warnings of strategic failure and create conditions for course corrections.

NO COUNTRY FOR OLD HABITS

Planning for the impacts of AGI on national security needs to start now. In an increasingly competitive and combustible world, and with an economically fragile and politically polarized domestic environment, the United States cannot afford being caught by surprise.

Although it is possible that AI will ultimately prove to be a “normal technology”—a technology, like the Internet or electricity, that transforms the world but whose pace of adoption has natural limits that governments and societies can control—it would be foolish to assume that preparing for major disruption would be a mistake. Planning for more difficult challenges can help leaders identify core strategic issues and build response tools that will be equally useful in less severe circumstances. It would also be unwise to presume that such planning will generate policy instincts and pathways that exacerbate risks or slow AI advances. In the nuclear era, for example, planning for potential nuclear terrorism inspired global initiatives to secure the fissile material needed to make nuclear weapons that ultimately made the world safer.

It would also be dangerous to treat the possibility of AGI like any “normal scenario” in the national security world. Technological expertise and fluency across the government is limited and uneven, and the institutional players that would be involved in responding to any scenario extend far beyond traditional national security agencies. Most scenarios are likely to occur abroad and at home simultaneously. Any response will rely heavily on the choices and decisions of actors outside government, including companies and civil society organizations, that do not have a seat in the White House Situation Room and may not prioritize national security. Likewise, planning cannot be delegated to futurists and technical experts sent to a far-off bunker to spend months crafting detailed plans in isolation. Preparing for a future with AGI must continuously inform today’s strategic debates.

There is an active debate about the merits of various strategies to win the competition for AI while avoiding catastrophe, but there has been less discussion about how AGI might reshape the international landscape, the distribution of global power, and geopolitical alliances. In an increasingly multipolar world, emerging players see advanced AI—and how the United States and China diffuse AI technology and its underlying digital architecture—as key to their national aspirations. Early planning, tabletop exercises with allies and partners, and sustained dialogue with countries that want to hedge their diplomatic bets will ensure that strategic choices are mutually beneficial. Any AI strategy that fails to account for a multipolar world and a more distributed global technology ecosystem will fail. And any national security strategy that fails to grapple with the potentially transformative effects of AGI will become irrelevant.

National security leaders don’t get to choose their crises. They do, however, get to choose what to plan for and where to allocate resources to prepare for future challenges. Planning for AGI is not an indulgence in science fiction or a distraction from existing problems and opportunities. It is a responsible way to prepare for the very real possibility of a new set of national security challenges in a radically transformed world.

Article link: https://www.foreignaffairs.com/united-states/artificial-intelligence-geopolitics-worst-about-ai

China’s Evolving Industrial Policy for AI – RAND

Posted by timmreardon on 07/20/2025
Posted in: Uncategorized.

Kyle Chan, Gregory Smith, Jimmy Goodrich, Gerard DiPippo, Konstantin F. Pilz

EXPERT INSIGHTSPublished Jun 26, 2025

Note: This publication was revised on June 27, 2025, to update the example organizations in Figure 1 following recommendations from subject-matter experts.

China wants to become the global leader in artificial intelligence (AI) by 2030.[1]To achieve this goal, Beijing is deploying industrial policy tools across the full AI technology stack, from chips to applications. This expansion of AI industrial policy leads to two questions: What is Beijing doing to support its AI industry, and will it work?

China’s AI industrial policy will likely accelerate the country’s rapid progress in AI, particularly through support for research, talent, subsidized compute, and applications. Chinese AI models are closing the performance gap with top U.S. models, and AI adoption in China is growing quickly across sectors, from electric vehicles and robotics to health care and biotechnology.[2] Although most of this growth is driven by innovation at China’s private tech firms, state support has helped enhance the competitiveness of China’s AI industry.

However, some aspects of China’s AI industrial policy are wasteful, such as the inefficient allocation of AI chips to companies.[3] Other bottlenecks are hard to overcome, even with massive state support: U.S.-led export controls on AI chips and the semiconductor manufacturing equipment needed to produce such chips are limiting the compute available to Chinese AI developers.[4]Limited access to compute forces Chinese companies to make trade-offs between investing in near-term progress in model development and building longer-term resilience to sanctions.

Ultimately, despite some waste and conflicting priorities, China’s AI industrial policy will help Chinese companies compete with U.S. AI firms by providing talent and capital to an already strong sector. China’s AI development will likely remain at least a close second place behind that of the United States, as such development benefits from both private market competition and the Chinese government’s investments.

Beijing’s AI Policy Goals and Tools

The policy goals and discourse surrounding AI are different in China than in the United States. Chinese leaders want AI to advance the country’s economic development and military capabilities. In Washington, the AI policy discourse is sometimes framed as a “race to AGI [artificial general intelligence].”[5] In contrast, in Beijing, the AI discourse is less abstract and focuses on economic and industrial applications that can support Beijing’s overall economic objectives.

By 2030, Beijing is aiming for AI to become a $100 billion industry and to create more than $1 trillion of additional value in other industries.[6]This goal includes leveraging AI to upgrade traditional sectors, such as health care, manufacturing, and agriculture. It also includes harnessing AI to power emerging industries, particularly hard techsectors with physical applications, such as robotics, autonomous vehicles, and unmanned systems.

Beijing is using a wide variety of policy tools (see Figure 1). State-led AI investment funds are pouring capital into the development of AI models and applications, including an $8.2 billion AI fund for start-ups.[7] China is building a National Integrated Computing Network to pool computing resources across public and private data centers.[8]Local governments from Shanghai to Shenzhen have set up state-backed AI labs and AI pilot zones to accelerate AI research and talent development.[9] All of this state support comes on top of tens of billions of dollars in private AI investment from Chinese tech companies, such as Alibaba and ByteDance. Still, such investment trails private investments in the United States, such as OpenAI’s Stargate Project investment of $100–500 billion.

U.S. Export Controls to Constrain China’s Compute

Intensifying geopolitical tensions, particularly with the United States, have reshaped China’s AI industrial policy—along with its broader techno-industrial policies—to focus more on self-reliance and strategic competition. Export controls have cut off China’s access to advanced computing chips that are critical to AI development and deployment.[12]Chinese AI firms, such as ByteDance and Baidu, already complain about being compute constrained; as the demand for compute for AI development and deployment grows, the lack of access to advanced chips could significantly limit the growth of China’s AI industry.[13] In addition, export controls on semiconductor manufacturing equipment that date back to 2018 have cut off China’s access to advanced semiconductor manufacturing equipment, delaying Chinese efforts to mass-produce domestic AI chips by years.[14]

The United States enjoys a large lead in total compute capacity, partly because of export controls.[15]Circumventing or mitigating the impact of U.S.-led export restrictions on advanced semiconductors has become a focus of Beijing’s AI policy efforts. At an April 2025 Politburo meeting on AI, Chinese President Xi Jinping emphasized “self-reliance” and the creation of an “autonomously controllable” AI hardware and software ecosystem.[16]

In terms of AI chips, Beijing is supporting the development of domestic alternatives to Nvidia graphics processing units (GPUs), such as Huawei’s Ascend series, which lag behind in performance and production volume.[17] Relying on fewer and less powerful chips forces companies to ration their computing power, reducing the number and size of training and model deployment workloads they can conduct at any one time, and fewer than ten models have been trained on Huawei hardware.[18]

In addition, Chinese AI firms are pursuing other strategies to bypass export controls and access banned Nvidia GPUs, including chip stockpiling, chip smuggling, and building data centers around the world, from Mexico to Malaysia.[19]Therefore, although export controls are important for the U.S. goal of slowing China’s AI development, they are unlikely to halt China’s AI progress altogether and likely will bolster aspects of China’s chip industry.[20]

Another issue that Chinese AI developers are facing is a lack of mature alternatives to U.S. software. To overcome this limitation and promote self-reliance, Beijing is funding Denglin Technology and Moore Threads to develop alternatives to Nvidia’s CUDA software.[21] For AI frameworks, Beijing is supporting the adoption of Huawei’s MindSpore and Baidu’s PaddlePaddle as alternatives to Meta’s PyTorch and Google’s TensorFlow.[22] However, these frameworks still lag behind U.S. ones in terms of adoption, receiving much less attention on GitHub compared with U.S. repositories.[23]

Although China’s domestic platform alternatives lag behind their international counterparts in adoption and capabilities, such software alternatives could reduce the cost of switching from a superior U.S. hardware stack to less mature Chinese AI chips. For now, however, the Chinese alternatives to the Western AI software stack appear to be too immature to fully substitute for Western frameworks. This may change, however, should such alternatives mature and establish themselves as a true alternative ecosystem. This dynamic is reflective of the overall state of Chinese measures to build resilience against U.S. export controls: Such measures are not yet sufficient to overcome the significant limitations that export controls have imposed but have the potential to provide alternatives to the Western semiconductor and software stack.

Will China’s AI Policies Work?

Will China’s state support allow its AI ecosystem to catch up to or even surpass that of the United States and its allies? It is too early in the industry’s development to confidently answer. Overall, however, the state support probably will not hurt, as the policies that China is prioritizing appear, on net, to be targeted to the key needs of the AI industry as a whole.

China’s state support will be essential for its AI progress, particularly in addressing three critical bottlenecks. First, as discussed above, developing domestic AI chips and a sanction-resistant semiconductor supply chain is make-or-break for competing against U.S.-led export controls. Second, despite strong AI research rankings, China’s AI leaders identify talent shortages as a key constraint.[24] Success in these areas will determine whether state support can help enable China’s goal of global AI leadership. Third, China must rapidly scale energy production to meet a projected threefold increase in data center demand by 2030, although China is able to build new power plants much faster than the United States and is therefore likely to be able to meet this challenge.[25]

At the same time, China’s AI industrial policy could be counterproductive in several ways. First, pressure on Chinese AI companies to use less advanced, homegrown alternatives to global platforms will likely slow their progress in developing frontier models, at least over the next several years.[26] iFlytek, which claims to have the only public AI model fully trained with Chinese-made compute hardware aside from Huawei’s models, said that the switch from Nvidia to Huawei chips (including the Ascend 910B) caused a three-month delay in development time.[27]Second, if scarce AI chips are not allocated efficiently, resources could be diverted from more-productive users, such as private tech companies.

Third, Chinese AI firms that receive state support may come under greater scrutiny by the United States and other countries, prompting restrictions that might limit the ability of those firms to access critical resources, such as advanced chips, or to enter international markets. For example, DeepSeek’s sudden rise to prominence has prompted U.S. officials and institutions to restrict its access to U.S. technology, limiting its use.[28]DeepSeek has already been banned on government devices by such states as Texas, New York, and Virginia and by federal bodies, such as the Department of Defense, Department of Commerce, and NASA.[29]

AI is fundamentally different from other sectors in which China has used industrial policy, such as shipbuilding and electric vehicles, partly because of AI development’s reliance on fast-changing, wide-ranging innovation. Frequent paradigm shifts, such as the emergence of reasoning models, and a lack of consensus about AI’s trajectory make it difficult to carry out long-term state planning. Unlike many traditional sectors, the AI industry relies heavily on intangible inputs, such as talent and data, which are less responsive to capital subsidies and harder for the state to control. Although state support can help in some areas, such as capital-intensive computing infrastructure, other areas (such as progress on foundation models and applications) will primarily be driven by the private sector.

The fact that the United States is competitive in AI without any meaningful state support (at least financially) and instead based on private-sector investment and research suggests that industrial policy may not be an essential ingredient for AI competitiveness, unlike other industries. AI has a large and growing private market that can draw in companies and investors and that is already valued at $750 billion and forecast to continue to grow.[30]Furthermore, China’s private-sector companies, such as DeepSeek, have led the development of AI rather than state firms, suggesting that the private sector may have the advantage in driving innovation in this sector.

China’s progress on AI is likely to continue to be driven by its innovative private tech firms and start-ups. Insofar as China’s industrial policy synergizes with or supports that private ecosystem, such policy is likely to help private AI development succeed and therefore “work” from Beijing’s perspective. Where such industrial policy does not clearly link to the private AI ecosystem’s needs and challenges, it is more likely to be wasted. And even with massive state subsidies, Chinese AI developers will have to attract substantially more private investment if they want to close the AI investment gap: Currently, U.S. AI companies receive more than ten times as much private investment as their Chinese counterparts, according to one estimate.[31]

Whether Chinese AI “surpasses” Western providers will also depend on the innovations of the private sector. Even if Chinese AI does not surpass Western offerings, it is likely to remain a close competitor because of the vibrant mixture of private innovation and public support that is already in place.

China’s Layered State Support for AI

China’s AI industrial policy is multilayered, including initiatives across much of the AI stack and efforts that are not explicitly in support of AI but nonetheless are helpful to the Chinese AI industry. Although a major area of Chinese state support is in alternatives to semiconductors and other export-controlled components, state support also stretches into such areas as energy and data center construction, which are necessary for AI success. In this appendix, we take a deeper look at these policies across the AI tech stack.

Energy

China’s AI industry enjoys an energy advantage for data centers, driven by aggressive state-backed power infrastructure expansion and the strategic deployment of renewables at large-scale computing hubs.[32]China’s ability to quickly build and connect new power plants removes a key bottleneck for data center expansion that the United States is grappling with.[33] Moreover, China’s energy abundance allows Chinese AI firms to use less-energy-efficient, homegrown AI hardware, such as Huawei’s CloudMatrix 384 cluster.[34]

In 2021, China’s State Grid Corporation estimated that its data center electricity demand would double from more than 38 gigawatts (GW) in 2020 to more than 76 GW, making up 3.7 percent of its total electricity demand.[35] Beijing has made renewable energy expansion and energy efficiency a central focus of its data center expansion strategy, although coal still made up 58 percent of China’s overall power generation mix in 2024.[36] China’s data center build-out benefits from the country’s broader ability to rapidly add grid capacity at scale. In 2024 alone, China added 429 GW of net new power generation capacity overall, more than 15 times the net capacity added in the United States during the same period.[37]

China’s historic success in developing new energy generation and its continued investments in this space suggest that China will be able to meet the increased power demands of deploying AI and could provide subsidized electricity to AI developers and deployers, which could reduce the operating costs associated with AI.

Chips

As discussed above, China is pursuing a large-scale industrial policy effort aimed at developing a self-reliant semiconductor supply chain. Although this effort was not originally targeted at AI, it has become critical to China’s AI industry as demand for compute skyrockets and U.S.-led export controls limit China’s access to AI chips and the equipment needed to produce them.[38]

Beijing is supporting the development of domestic AI chips, such as Huawei’s Ascend series, as alternatives to AI chips from Nvidia and AMD. Beijing is also pushing Chinese AI companies to switch to domestic AI chips.[39] DeepSeek is experimenting with Huawei Ascend 910C chips for inference, while ByteDance and Ant Group are using Huawei Ascend 910B chips for model training.[40] However, Chinese AI chips have yet to find adoption for AI training workloads. Among Epoch AI’s 321 notable AI models with known hardware types, 319 have been trained on U.S. AI chips, and only two have been trained on Chinese hardware.[41] Even DeepSeek’s recent AI training run still used Nvidia’s GPUs, highlighting that Chinese hardware is not yet mature enough for large-scale AI model training, though it has been used for inference on trained models.[42]

Attempting to close the gap in AI chip manufacturing, Beijing is supporting research and development in chipmaking technology to overcome U.S.-led export controls on semiconductor manufacturing equipment, such as extreme ultraviolet (EUV) lithography machines from the Dutch firm ASML. This includes research on EUV lithography, multi-patterning, and advanced packaging technology.[43]Beijing has backed these efforts with large-scale public funding programs, such as the National Integrated Circuit Industry Investment Fund (also known as the “Big Fund”), with the latest round reaching $47 billion.[44] Huawei plays a central role in this effort by recruiting industry talent, partnering with national labs, and sending task forces to support domestic firms.[45] Although China has made progress in pushing the limits of older manufacturing techniques, China’s chipmaking capabilities remain years behind industry leaders, such as the Taiwan Semiconductor Manufacturing Company (TSMC).

Computing Infrastructure

The rapid expansion of computing infrastructure is also a top priority for Chinese policymakers and could provide Chinese tech companies (particularly start-ups, as well as small and medium-sized firms) with much-needed access to scarce compute resources. Beijing is developing a National Integrated Computing Network that will integrate private and public cloud computing resources into a single nationwide platform that can optimize the allocation of compute resources.[46] Beijing launched the “Eastern Data, Western Computing” initiative in 2022 as part of this effort, aimed at building eight “national computing hubs,” particularly in western provinces with abundant clean energy resources.[47]

By June 2024, China had 246 EFLOP/s of total compute capacity—including both public and commercial data centers—and aims to reach 300 EFLOP/s by 2025, according to the 2023 Action Plan for the High-Quality Development of Computing Power Infrastructure.[48]However, not all of this compute is intended for or well suited to supporting AI workloads. Other research suggests that China controls about 15 percent of total AI compute, while the United States controls about 75 percent of that total.[49] This demonstrates the significant deficit in computing infrastructure that China’s AI industry faces and that state support might attempt to alleviate as China begins to scale the deployment of its models.

Research and Talent

Beijing’s support for basic research and talent development is a key enabler for China’s AI industry. Beijing provides funding for fundamental AI research at universities and state-backed AI labs through several channels, including grants from China’s National Natural Science Foundation and its National Key Research and Development Programs.[50] This public AI research funding has helped turn China’s universities and research labs into world-class AI research centers. Chinese-affiliated authors made up the second-largest share of highly cited AI researchers as of 2024.[51]

Chinese universities and AI firms work closely together, sharing breakthroughs and forming a broader AI research community. One of DeepSeek’s seminal research papers on mixture-of-experts models was co-authored with researchers at Tsinghua University, Peking University, and Nanjing University.[52]More than half of DeepSeek’s AI researchers were trained exclusively at Chinese universities, including founder Liang Wenfeng, who graduated from Zhejiang University.[53] China has been expanding AI education and training across the board, from primary schools to universities.[54] Some of these efforts are more symbolic than substantive, such as AI classes for six-year-olds and the proliferation of university courses on DeepSeek.[55] But Beijing’s efforts to cultivate a deep, highly integrated network of top-tier AI researchers across universities, AI labs, and tech firms directly underpins the ability of China’s AI industry to operate at the global frontier.

State-Backed AI Labs

China’s state-backed AI labs play a critical role in carrying out fundamental research, coordinating common industry standards, developing road maps, and fostering talent.[56] Beijing supports AI research at State Key Laboratories, such as the State Key Laboratory of Intelligent Technology and Systems at Tsinghua University.[57]

As an example, Zhejiang Lab in Hangzhou is one of China’s premier state-backed AI labs and conducts research in a wide variety of fields, from quantum sensing to industrial AI.[58] It was established in 2017 by the Zhejiang Provincial Government in partnership with Zhejiang University and Alibaba. The Shanghai AI Lab is another prominent AI lab that has developed widely used AI benchmarks, such as MVBench, as well as a world-class reasoning model called InternLM3.[59] Peng Cheng Lab, a state-backed AI lab in Shenzhen, has played an important role in supporting the development of frontier AI models by Baidu and Huawei.[60] These labs blur the line between private- and public-sector AI development in China, with state-backed labs supporting both Chinese government programs and private-sector AI development.

Beijing also has two major AI labs created by China’s Ministry of Science and Technology and the Beijing Municipal Government. The Beijing Academy of Artificial Intelligence (BAAI), also called the Zhiyuan Institute, is known for its work on AI safety and standards, foundational theory, and the development of open-source frontier models, such as WuDao and Emu3.[61]The Beijing Institute for General Artificial Intelligence is unique in explicitly focusing on AGI through an alternative approach based on human cognition.[62] Both Beijing labs work closely with Peking University and Tsinghua University and offer talent development programs.

The exact impact of China’s state-backed AI labs is difficult to estimate; China’s most advanced and most widely adopted AI models were developed primarily by private companies. However, Chinese AI labs also provide incubators for talent that can later support China’s private-sector AI growth and support government priorities across the tech sector.

AI-Specific Funding

Beijing is also increasing public funding for China’s AI industry through specialized industry funds, bank loan programs, and local government funding. Although there likely will be significant waste in the process, public funding will help support a growing AI start-up ecosystem, particularly for applications. In January 2025, China launched an $8.2 billion National AI Industry Investment Fund.[63] China’s broader $138 billion National Venture Capital Guidance Fund will target several AI-related fields, such as robotics and “embodied intelligence.”[64] Local governments, such as Hangzhou and Beijing, have followed suit with their own state-led AI investment funds.[65]

Major banks have also launched AI industry lending programs, most notably including the Bank of China’s five-year, $138 billion financing program for AI-related industries.[66]Other banks, such as the People’s Bank of China and the Industrial and Commercial Bank of China (ICBC), have launched financing programs for the tech industry, which will likely include funding for AI specifically.[67] Many of these AI and tech funds were launched this year.

Local Government Support

Local governments have also taken a role in promoting AI within China. Although most efforts to transform inland cities into AI hubs are unlikely to succeed, efforts in such cities as Shenzhen and Hangzhou to build on their existing strengths as global tech hubs will significantly enhance China’s national AI capabilities. Shanghai was singled out by Xi during an April 2025 visit, when he called on the city to take the lead on AI development and promoted the Shanghai Foundation Model Innovation Center (an AI start-up incubator) and the city’s ability to attract foreign talent.[68]

China is also developing AI pilot zones across 20 cities, where AI companies can receive special financing and operate in a favorable regulatory environment.[69] Local governments often provide funding for start-ups through public investment funds and “computing vouchers” that offer subsidized access to computing resources.[70]Cities such as Beijing and Ningxia have set up computing exchange platforms to more effectively allocate compute resources across regions and data centers.[71]

Following Beijing’s lead, many Chinese cities have launched AI and “AI+” action plans aimed at supporting local start-ups and promoting AI adoption in other sectors. The Beijing city government’s AI+ action plan aims to integrate AI into government services and build a shared computing platform for training large language models (LLMs).[72] Shenzhen has launched an AI action plan aimed at building a 4,000 PFLOP/s intelligent computing center (the equivalent of about 4,000 Nvidia H100s).[73]

Promoting Open Source

Beijing promotes open-source AI platforms, datasets, and models, which it views as a way to accelerate industry progress and circumvent potential export controls on proprietary technology. This open-source approach also allows China to potentially shape AI industry standards abroad through the adoption of its low-cost, open-source offerings.[74] China has been promoting its open-source AI collaboration platform called OpenI, in which participants can share AI models and datasets and access computing resources, though it is in its infancy in comparison with Western platforms, such as Hugging Face.[75]

Beijing has also been encouraging greater use of a Chinese alternative to Microsoft-owned GitHub called Gitee, which claims to have more than 13.5 million registered users, compared with GitHub’s more than 100 million users.[76] In addition to providing a domestic platform that is safe from U.S. policy action, Gitee allows Beijing to enforce greater censorship control.[77] However, subjecting code to a political review process on Gitee slows software development and makes the platform much less attractive to non-Chinese users.[78] Lastly, commercial players are also embracing open-source AI models after the success of DeepSeek’s R1 model.[79] Although an open-source approach spurs greater adoption and increases opportunities for commercialization, there are questions as to whether Beijing will continue to tolerate the corresponding more-limited censorship and state control that come with open-source models.[80]

Data

Beijing is also aiming to turn data into a strategic resource to give China an edge in AI, although efforts to date have been mixed.[81] Beijing wants to turn data into a new “factor of production” and has modified accounting rules to allow firms to classify data as intangible assets.[82]Local governments have established data marketplaces, such as the Shenzhen Data Exchange, to allow data to be traded by private firms, state-owned enterprises, and state agencies. China’s National Data Administration is preparing to launch a National Public Data Resource Platform to facilitate data trading on a national scale.[83] However, although Beijing has been pushing organizations to share data on these public exchanges, private firms are often reluctant to share their data because of concerns related to control risks and compliance with data protection laws.[84]

Instead, Beijing’s support for open data-sharing platforms is likely to play a greater role in advancing China’s AI industry by increasing general access to large training sets without the ownership complexities of a data trading exchange. State support for open data-sharing include open data platforms, such as OpenI, as well as the creation of open datasets, such as FlagData, BAAI’s Chinese multimodal dataset.[85]Beijing is particularly focused on promoting data-sharing for robotics through such institutions as the Beijing Embodied Artificial Intelligence Robotics Innovation Center and the National Local Joint Humanoid Robot Innovation Center in Shanghai.[86] Several leading Chinese robotics companies, such as AgiBot and Fourier, also have released open training datasets, augmenting the country’s broader pool of robotics training data.[87]

Applications

Finally, Beijing has begun directly promoting the adoption of AI applications across all sectors of society as part of its AI industrial policy. In an April 2025 Politburo meeting on AI, Xi argued that China’s AI industry should be “strongly oriented toward applications.”[88]National AI plans, such as the 2017 AI development plan, as well as local government AI+ action plans, focus heavily on AI integration into public services and government operations.[89] China’s State-owned Assets Supervision and Administration Commission of the State Council, the parent organization that controls China’s most powerful central state firms, is also pushing AI integration across its member state-owned enterprises.[90]

Beijing is seeking to integrate AI into a wide variety of sectors in addition to government services. These include traditional sectors, from manufacturing and agriculture to education and health care, as well as emerging fields. In particular, Beijing is prioritizing AI development in robotics and “embodied intelligence.”[91] China released the 14th Five-Year Plan for the Development of the Robot Industry in 2021, followed by the Robot+ Application Action Plan in 2023 aimed at spurring the development and adoption of robots.[92]

Article link: https://www.rand.org/pubs/perspectives/PEA4012-1.html?

Posts navigation

← Older Entries
Newer Entries →
  • Search site

  • Follow healthcarereimagined on WordPress.com
  • Recent Posts

    • When Disregard for Population Health Becomes US Policy – JAMA 04/18/2026
    • Why opinion on AI is so divided – MIT Technology Review 04/14/2026
    • The Global Healthcare System Is Broken. Japan Fixed It for $4,100 Per Person. 04/10/2026
    • When Not to Use AI – MIT Sloan 04/01/2026
    • There are more AI health tools than ever—but how well do they work? – MIT Technology Review 03/30/2026
    • Are AI Tools Ready to Answer Patients’ Questions About Their Medical Care? – JAMA 03/27/2026
    • How AI use in scholarly publishing threatens research integrity, lessens trust, and invites misinformation – Bulletin of the Atomic Scientists 03/25/2026
    • VA Prepares April Relaunch of EHR Program – GovCIO 03/19/2026
    • Strong call for universal healthcare from Pope Leo today – FAN 03/18/2026
    • EHR fragmentation offers an opportunity to enhance care coordination and experience 03/16/2026
  • Categories

    • Accountable Care Organizations
    • ACOs
    • AHRQ
    • American Board of Internal Medicine
    • Big Data
    • Blue Button
    • Board Certification
    • Cancer Treatment
    • Data Science
    • Digital Services Playbook
    • DoD
    • EHR Interoperability
    • EHR Usability
    • Emergency Medicine
    • FDA
    • FDASIA
    • GAO Reports
    • Genetic Data
    • Genetic Research
    • Genomic Data
    • Global Standards
    • Health Care Costs
    • Health Care Economics
    • Health IT adoption
    • Health Outcomes
    • Healthcare Delivery
    • Healthcare Informatics
    • Healthcare Outcomes
    • Healthcare Security
    • Helathcare Delivery
    • HHS
    • HIPAA
    • ICD-10
    • Innovation
    • Integrated Electronic Health Records
    • IT Acquisition
    • JASONS
    • Lab Report Access
    • Military Health System Reform
    • Mobile Health
    • Mobile Healthcare
    • National Health IT System
    • NSF
    • ONC Reports to Congress
    • Oncology
    • Open Data
    • Patient Centered Medical Home
    • Patient Portals
    • PCMH
    • Precision Medicine
    • Primary Care
    • Public Health
    • Quadruple Aim
    • Quality Measures
    • Rehab Medicine
    • TechFAR Handbook
    • Triple Aim
    • U.S. Air Force Medicine
    • U.S. Army
    • U.S. Army Medicine
    • U.S. Navy Medicine
    • U.S. Surgeon General
    • Uncategorized
    • Value-based Care
    • Veterans Affairs
    • Warrior Transistion Units
    • XPRIZE
  • Archives

    • April 2026 (4)
    • March 2026 (9)
    • February 2026 (6)
    • January 2026 (8)
    • December 2025 (11)
    • November 2025 (9)
    • October 2025 (10)
    • September 2025 (4)
    • August 2025 (7)
    • July 2025 (2)
    • June 2025 (9)
    • May 2025 (4)
    • April 2025 (11)
    • March 2025 (11)
    • February 2025 (10)
    • January 2025 (12)
    • December 2024 (12)
    • November 2024 (7)
    • October 2024 (5)
    • September 2024 (9)
    • August 2024 (10)
    • July 2024 (13)
    • June 2024 (18)
    • May 2024 (10)
    • April 2024 (19)
    • March 2024 (35)
    • February 2024 (23)
    • January 2024 (16)
    • December 2023 (22)
    • November 2023 (38)
    • October 2023 (24)
    • September 2023 (24)
    • August 2023 (34)
    • July 2023 (33)
    • June 2023 (30)
    • May 2023 (35)
    • April 2023 (30)
    • March 2023 (30)
    • February 2023 (15)
    • January 2023 (17)
    • December 2022 (10)
    • November 2022 (7)
    • October 2022 (22)
    • September 2022 (16)
    • August 2022 (33)
    • July 2022 (28)
    • June 2022 (42)
    • May 2022 (53)
    • April 2022 (35)
    • March 2022 (37)
    • February 2022 (21)
    • January 2022 (28)
    • December 2021 (23)
    • November 2021 (12)
    • October 2021 (10)
    • September 2021 (4)
    • August 2021 (4)
    • July 2021 (4)
    • May 2021 (3)
    • April 2021 (1)
    • March 2021 (2)
    • February 2021 (1)
    • January 2021 (4)
    • December 2020 (7)
    • November 2020 (2)
    • October 2020 (4)
    • September 2020 (7)
    • August 2020 (11)
    • July 2020 (3)
    • June 2020 (5)
    • April 2020 (3)
    • March 2020 (1)
    • February 2020 (1)
    • January 2020 (2)
    • December 2019 (2)
    • November 2019 (1)
    • September 2019 (4)
    • August 2019 (3)
    • July 2019 (5)
    • June 2019 (10)
    • May 2019 (8)
    • April 2019 (6)
    • March 2019 (7)
    • February 2019 (17)
    • January 2019 (14)
    • December 2018 (10)
    • November 2018 (20)
    • October 2018 (14)
    • September 2018 (27)
    • August 2018 (19)
    • July 2018 (16)
    • June 2018 (18)
    • May 2018 (28)
    • April 2018 (3)
    • March 2018 (11)
    • February 2018 (5)
    • January 2018 (10)
    • December 2017 (20)
    • November 2017 (30)
    • October 2017 (33)
    • September 2017 (11)
    • August 2017 (13)
    • July 2017 (9)
    • June 2017 (8)
    • May 2017 (9)
    • April 2017 (4)
    • March 2017 (12)
    • December 2016 (3)
    • September 2016 (4)
    • August 2016 (1)
    • July 2016 (7)
    • June 2016 (7)
    • April 2016 (4)
    • March 2016 (7)
    • February 2016 (1)
    • January 2016 (3)
    • November 2015 (3)
    • October 2015 (2)
    • September 2015 (9)
    • August 2015 (6)
    • June 2015 (5)
    • May 2015 (6)
    • April 2015 (3)
    • March 2015 (16)
    • February 2015 (10)
    • January 2015 (16)
    • December 2014 (9)
    • November 2014 (7)
    • October 2014 (21)
    • September 2014 (8)
    • August 2014 (9)
    • July 2014 (7)
    • June 2014 (5)
    • May 2014 (8)
    • April 2014 (19)
    • March 2014 (8)
    • February 2014 (9)
    • January 2014 (31)
    • December 2013 (23)
    • November 2013 (48)
    • October 2013 (25)
  • Tags

    Business Defense Department Department of Veterans Affairs EHealth EHR Electronic health record Food and Drug Administration Health Health informatics Health Information Exchange Health information technology Health system HIE Hospital IBM Mayo Clinic Medicare Medicine Military Health System Patient Patient portal Patient Protection and Affordable Care Act United States United States Department of Defense United States Department of Veterans Affairs
  • Upcoming Events

Blog at WordPress.com.
healthcarereimagined
Blog at WordPress.com.
  • Subscribe Subscribed
    • healthcarereimagined
    • Join 153 other subscribers
    • Already have a WordPress.com account? Log in now.
    • healthcarereimagined
    • Subscribe Subscribed
    • Sign up
    • Log in
    • Report this content
    • View site in Reader
    • Manage subscriptions
    • Collapse this bar
 

Loading Comments...