healthcarereimagined

Envisioning healthcare for the 21st century


China’s Evolving Industrial Policy for AI – RAND

Posted by timmreardon on 07/20/2025
Posted in: Uncategorized.

Kyle Chan, Gregory Smith, Jimmy Goodrich, Gerard DiPippo, Konstantin F. Pilz

EXPERT INSIGHTS | Published Jun 26, 2025

Note: This publication was revised on June 27, 2025, to update the example organizations in Figure 1 following recommendations from subject-matter experts.

China wants to become the global leader in artificial intelligence (AI) by 2030.[1] To achieve this goal, Beijing is deploying industrial policy tools across the full AI technology stack, from chips to applications. This expansion of AI industrial policy leads to two questions: What is Beijing doing to support its AI industry, and will it work?

China’s AI industrial policy will likely accelerate the country’s rapid progress in AI, particularly through support for research, talent, subsidized compute, and applications. Chinese AI models are closing the performance gap with top U.S. models, and AI adoption in China is growing quickly across sectors, from electric vehicles and robotics to health care and biotechnology.[2] Although most of this growth is driven by innovation at China’s private tech firms, state support has helped enhance the competitiveness of China’s AI industry.

However, some aspects of China’s AI industrial policy are wasteful, such as the inefficient allocation of AI chips to companies.[3] Other bottlenecks are hard to overcome, even with massive state support: U.S.-led export controls on AI chips and the semiconductor manufacturing equipment needed to produce such chips are limiting the compute available to Chinese AI developers.[4] Limited access to compute forces Chinese companies to make trade-offs between investing in near-term progress in model development and building longer-term resilience to sanctions.

Ultimately, despite some waste and conflicting priorities, China’s AI industrial policy will help Chinese companies compete with U.S. AI firms by providing talent and capital to an already strong sector. China’s AI development will likely remain at least a close second place behind that of the United States, as such development benefits from both private market competition and the Chinese government’s investments.

Beijing’s AI Policy Goals and Tools

The policy goals and discourse surrounding AI are different in China than in the United States. Chinese leaders want AI to advance the country’s economic development and military capabilities. In Washington, the AI policy discourse is sometimes framed as a “race to AGI [artificial general intelligence].”[5] In contrast, in Beijing, the AI discourse is less abstract and focuses on economic and industrial applications that can support Beijing’s overall economic objectives.

By 2030, Beijing is aiming for AI to become a $100 billion industry and to create more than $1 trillion of additional value in other industries.[6] This goal includes leveraging AI to upgrade traditional sectors, such as health care, manufacturing, and agriculture. It also includes harnessing AI to power emerging industries, particularly hard tech sectors with physical applications, such as robotics, autonomous vehicles, and unmanned systems.

Beijing is using a wide variety of policy tools (see Figure 1). State-led AI investment funds are pouring capital into the development of AI models and applications, including an $8.2 billion AI fund for start-ups.[7] China is building a National Integrated Computing Network to pool computing resources across public and private data centers.[8] Local governments from Shanghai to Shenzhen have set up state-backed AI labs and AI pilot zones to accelerate AI research and talent development.[9] All of this state support comes on top of tens of billions of dollars in private AI investment from Chinese tech companies, such as Alibaba and ByteDance. Still, such investment trails private investments in the United States, such as OpenAI’s Stargate Project investment of $100–500 billion.

U.S. Export Controls to Constrain China’s Compute

Intensifying geopolitical tensions, particularly with the United States, have reshaped China’s AI industrial policy—along with its broader techno-industrial policies—to focus more on self-reliance and strategic competition. Export controls have cut off China’s access to advanced computing chips that are critical to AI development and deployment.[12] Chinese AI firms, such as ByteDance and Baidu, already complain about being compute constrained; as the demand for compute for AI development and deployment grows, the lack of access to advanced chips could significantly limit the growth of China’s AI industry.[13] In addition, export controls on semiconductor manufacturing equipment that date back to 2018 have cut off China’s access to advanced semiconductor manufacturing equipment, delaying Chinese efforts to mass-produce domestic AI chips by years.[14]

The United States enjoys a large lead in total compute capacity, partly because of export controls.[15] Circumventing or mitigating the impact of U.S.-led export restrictions on advanced semiconductors has become a focus of Beijing’s AI policy efforts. At an April 2025 Politburo meeting on AI, Chinese President Xi Jinping emphasized “self-reliance” and the creation of an “autonomously controllable” AI hardware and software ecosystem.[16]

In terms of AI chips, Beijing is supporting the development of domestic alternatives to Nvidia graphics processing units (GPUs), such as Huawei’s Ascend series, which lag behind in performance and production volume.[17] Relying on fewer and less powerful chips forces companies to ration their computing power, reducing the number and size of training and model deployment workloads they can run at any one time; to date, fewer than ten models have been trained on Huawei hardware.[18]

In addition, Chinese AI firms are pursuing other strategies to bypass export controls and access banned Nvidia GPUs, including chip stockpiling, chip smuggling, and building data centers around the world, from Mexico to Malaysia.[19] Therefore, although export controls are important for the U.S. goal of slowing China’s AI development, they are unlikely to halt China’s AI progress altogether and likely will bolster aspects of China’s chip industry.[20]

Another issue that Chinese AI developers are facing is a lack of mature alternatives to U.S. software. To overcome this limitation and promote self-reliance, Beijing is funding Denglin Technology and Moore Threads to develop alternatives to Nvidia’s CUDA software.[21] For AI frameworks, Beijing is supporting the adoption of Huawei’s MindSpore and Baidu’s PaddlePaddle as alternatives to Meta’s PyTorch and Google’s TensorFlow.[22] However, these frameworks still lag behind U.S. ones in terms of adoption, receiving much less attention on GitHub compared with U.S. repositories.[23]
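
To make the framework comparison concrete, here is a toy, illustrative sketch (mine, based on the two frameworks’ public APIs, not on anything in the source report) of the same two-layer network declared in PyTorch and in Huawei’s MindSpore. The structural similarity suggests the adoption gap is less about syntax than about ecosystem maturity, tooling, and community.

```python
# Illustrative only: the same toy classifier in PyTorch and MindSpore.

import torch.nn as tnn          # PyTorch
import mindspore.nn as mnn      # Huawei's MindSpore

class TorchNet(tnn.Module):     # PyTorch models subclass nn.Module
    def __init__(self):
        super().__init__()
        self.fc1 = tnn.Linear(784, 128)
        self.fc2 = tnn.Linear(128, 10)

    def forward(self, x):       # forward pass lives in forward()
        return self.fc2(self.fc1(x).relu())

class MindSporeNet(mnn.Cell):   # MindSpore models subclass nn.Cell
    def __init__(self):
        super().__init__()
        self.fc1 = mnn.Dense(784, 128)   # Dense is MindSpore's Linear
        self.fc2 = mnn.Dense(128, 10)
        self.relu = mnn.ReLU()

    def construct(self, x):     # forward pass lives in construct()
        return self.fc2(self.relu(self.fc1(x)))
```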

Although China’s domestic platform alternatives lag behind their international counterparts in adoption and capabilities, such software alternatives could reduce the cost of switching from a superior U.S. hardware stack to less mature Chinese AI chips. For now, the Chinese alternatives to the Western AI software stack appear too immature to fully substitute for Western frameworks, though this could change if they mature and establish themselves as a true alternative ecosystem. This dynamic reflects the overall state of Chinese measures to build resilience against U.S. export controls: Such measures are not yet sufficient to overcome the significant limitations that export controls have imposed, but they have the potential to provide alternatives to the Western semiconductor and software stack.

Will China’s AI Policies Work?

Will China’s state support allow its AI ecosystem to catch up to or even surpass that of the United States and its allies? It is too early in the industry’s development to confidently answer. Overall, however, the state support probably will not hurt, as the policies that China is prioritizing appear, on net, to be targeted to the key needs of the AI industry as a whole.

China’s state support will be essential for its AI progress, particularly in addressing three critical bottlenecks. First, as discussed above, developing domestic AI chips and a sanction-resistant semiconductor supply chain is make-or-break for competing against U.S.-led export controls. Second, despite strong AI research rankings, China’s AI leaders identify talent shortages as a key constraint.[24] Third, China must rapidly scale energy production to meet a projected threefold increase in data center demand by 2030, although China can build new power plants much faster than the United States and is therefore likely to meet this challenge.[25] Success in these three areas will determine whether state support can help enable China’s goal of global AI leadership.

At the same time, China’s AI industrial policy could be counterproductive in several ways. First, pressure on Chinese AI companies to use less advanced, homegrown alternatives to global platforms will likely slow their progress in developing frontier models, at least over the next several years.[26] iFlytek, which claims to have the only public AI model fully trained with Chinese-made compute hardware aside from Huawei’s models, said that the switch from Nvidia to Huawei chips (including the Ascend 910B) caused a three-month delay in development time.[27] Second, if scarce AI chips are not allocated efficiently, resources could be diverted from more-productive users, such as private tech companies.

Third, Chinese AI firms that receive state support may come under greater scrutiny by the United States and other countries, prompting restrictions that might limit the ability of those firms to access critical resources, such as advanced chips, or to enter international markets. For example, DeepSeek’s sudden rise to prominence has prompted U.S. officials and institutions to restrict its access to U.S. technology, limiting its use.[28] DeepSeek has already been banned on government devices by such states as Texas, New York, and Virginia and by federal bodies, such as the Department of Defense, Department of Commerce, and NASA.[29]

AI is fundamentally different from other sectors in which China has used industrial policy, such as shipbuilding and electric vehicles, partly because of AI development’s reliance on fast-changing, wide-ranging innovation. Frequent paradigm shifts, such as the emergence of reasoning models, and a lack of consensus about AI’s trajectory make it difficult to carry out long-term state planning. Unlike many traditional sectors, the AI industry relies heavily on intangible inputs, such as talent and data, which are less responsive to capital subsidies and harder for the state to control. Although state support can help in some areas, such as capital-intensive computing infrastructure, other areas (such as progress on foundation models and applications) will primarily be driven by the private sector.

The fact that the United States is competitive in AI without any meaningful state support (at least financially) and instead based on private-sector investment and research suggests that industrial policy may not be an essential ingredient for AI competitiveness, unlike other industries. AI has a large and growing private market that can draw in companies and investors and that is already valued at $750 billion and forecast to continue to grow.[30] Furthermore, China’s private-sector companies, such as DeepSeek, have led the development of AI rather than state firms, suggesting that the private sector may have the advantage in driving innovation in this sector.

China’s progress on AI is likely to continue to be driven by its innovative private tech firms and start-ups. Insofar as China’s industrial policy synergizes with or supports that private ecosystem, such policy is likely to help private AI development succeed and therefore “work” from Beijing’s perspective. Where such industrial policy does not clearly link to the private AI ecosystem’s needs and challenges, it is more likely to be wasted. And even with massive state subsidies, Chinese AI developers will have to attract substantially more private investment if they want to close the AI investment gap: Currently, U.S. AI companies receive more than ten times as much private investment as their Chinese counterparts, according to one estimate.[31]

Whether Chinese AI “surpasses” Western providers will also depend on the innovations of the private sector. Even if Chinese AI does not surpass Western offerings, it is likely to remain a close competitor because of the vibrant mixture of private innovation and public support that is already in place.

China’s Layered State Support for AI

China’s AI industrial policy is multilayered, including initiatives across much of the AI stack and efforts that are not explicitly in support of AI but nonetheless are helpful to the Chinese AI industry. Although a major area of Chinese state support is in alternatives to semiconductors and other export-controlled components, state support also stretches into such areas as energy and data center construction, which are necessary for AI success. In this appendix, we take a deeper look at these policies across the AI tech stack.

Energy

China’s AI industry enjoys an energy advantage for data centers, driven by aggressive state-backed power infrastructure expansion and the strategic deployment of renewables at large-scale computing hubs.[32] China’s ability to quickly build and connect new power plants removes a key bottleneck for data center expansion that the United States is grappling with.[33] Moreover, China’s energy abundance allows Chinese AI firms to use less-energy-efficient, homegrown AI hardware, such as Huawei’s CloudMatrix 384 cluster.[34]

In 2021, China’s State Grid Corporation estimated that its data center electricity demand would double from more than 38 gigawatts (GW) in 2020 to more than 76 GW, making up 3.7 percent of its total electricity demand.[35] Beijing has made renewable energy expansion and energy efficiency a central focus of its data center expansion strategy, although coal still made up 58 percent of China’s overall power generation mix in 2024.[36] China’s data center build-out benefits from the country’s broader ability to rapidly add grid capacity at scale. In 2024 alone, China added 429 GW of net new power generation capacity overall, more than 15 times the net capacity added in the United States during the same period.[37]

China’s historic success in developing new energy generation and its continued investments in this space suggest that China will be able to meet the increased power demands of deploying AI and could provide subsidized electricity to AI developers and deployers, which could reduce the operating costs associated with AI.

Chips

As discussed above, China is pursuing a large-scale industrial policy effort aimed at developing a self-reliant semiconductor supply chain. Although this effort was not originally targeted at AI, it has become critical to China’s AI industry as demand for compute skyrockets and U.S.-led export controls limit China’s access to AI chips and the equipment needed to produce them.[38]

Beijing is supporting the development of domestic AI chips, such as Huawei’s Ascend series, as alternatives to AI chips from Nvidia and AMD. Beijing is also pushing Chinese AI companies to switch to domestic AI chips.[39] DeepSeek is experimenting with Huawei Ascend 910C chips for inference, while ByteDance and Ant Group are using Huawei Ascend 910B chips for model training.[40] However, Chinese AI chips have yet to find adoption for AI training workloads. Among Epoch AI’s 321 notable AI models with known hardware types, 319 have been trained on U.S. AI chips, and only two have been trained on Chinese hardware.[41] Even DeepSeek’s recent AI training run still used Nvidia’s GPUs, highlighting that Chinese hardware is not yet mature enough for large-scale AI model training, though it has been used for inference on trained models.[42]

Attempting to close the gap in AI chip manufacturing, Beijing is supporting research and development in chipmaking technology to overcome U.S.-led export controls on semiconductor manufacturing equipment, such as extreme ultraviolet (EUV) lithography machines from the Dutch firm ASML. This includes research on EUV lithography, multi-patterning, and advanced packaging technology.[43] Beijing has backed these efforts with large-scale public funding programs, such as the National Integrated Circuit Industry Investment Fund (also known as the “Big Fund”), with the latest round reaching $47 billion.[44] Huawei plays a central role in this effort by recruiting industry talent, partnering with national labs, and sending task forces to support domestic firms.[45] Although China has made progress in pushing the limits of older manufacturing techniques, China’s chipmaking capabilities remain years behind industry leaders, such as the Taiwan Semiconductor Manufacturing Company (TSMC).

Computing Infrastructure

The rapid expansion of computing infrastructure is also a top priority for Chinese policymakers and could provide Chinese tech companies (particularly start-ups, as well as small and medium-sized firms) with much-needed access to scarce compute resources. Beijing is developing a National Integrated Computing Network that will integrate private and public cloud computing resources into a single nationwide platform that can optimize the allocation of compute resources.[46] Beijing launched the “Eastern Data, Western Computing” initiative in 2022 as part of this effort, aimed at building eight “national computing hubs,” particularly in western provinces with abundant clean energy resources.[47]

By June 2024, China had 246 EFLOP/s of total compute capacity—including both public and commercial data centers—and aims to reach 300 EFLOP/s by 2025, according to the 2023 Action Plan for the High-Quality Development of Computing Power Infrastructure.[48] However, not all of this compute is intended for or well suited to supporting AI workloads. Other research suggests that China controls about 15 percent of total AI compute, while the United States controls about 75 percent of that total.[49] This demonstrates the significant deficit in computing infrastructure that China’s AI industry faces and that state support might attempt to alleviate as China begins to scale the deployment of its models.
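
Some quick arithmetic on these figures (the inputs are from the text above; the derived numbers are my own back-of-the-envelope calculation):

```python
# Compute-gap arithmetic using the figures cited above.
china_capacity_2024 = 246    # EFLOP/s, mid-2024, public + commercial data centers
china_target_2025 = 300      # EFLOP/s goal from the 2023 Action Plan
china_ai_share, us_ai_share = 0.15, 0.75   # estimated shares of global AI compute

print(china_target_2025 - china_capacity_2024)  # 54 EFLOP/s still to build
print(us_ai_share / china_ai_share)             # -> 5.0, a roughly 5x U.S. lead
```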

Research and Talent

Beijing’s support for basic research and talent development is a key enabler for China’s AI industry. Beijing provides funding for fundamental AI research at universities and state-backed AI labs through several channels, including grants from China’s National Natural Science Foundation and its National Key Research and Development Programs.[50] This public AI research funding has helped turn China’s universities and research labs into world-class AI research centers. Chinese-affiliated authors made up the second-largest share of highly cited AI researchers as of 2024.[51]

Chinese universities and AI firms work closely together, sharing breakthroughs and forming a broader AI research community. One of DeepSeek’s seminal research papers on mixture-of-experts models was co-authored with researchers at Tsinghua University, Peking University, and Nanjing University.[52] More than half of DeepSeek’s AI researchers were trained exclusively at Chinese universities, including founder Liang Wenfeng, who graduated from Zhejiang University.[53] China has been expanding AI education and training across the board, from primary schools to universities.[54] Some of these efforts are more symbolic than substantive, such as AI classes for six-year-olds and the proliferation of university courses on DeepSeek.[55] But Beijing’s efforts to cultivate a deep, highly integrated network of top-tier AI researchers across universities, AI labs, and tech firms directly underpins the ability of China’s AI industry to operate at the global frontier.

State-Backed AI Labs

China’s state-backed AI labs play a critical role in carrying out fundamental research, coordinating common industry standards, developing road maps, and fostering talent.[56] Beijing supports AI research at State Key Laboratories, such as the State Key Laboratory of Intelligent Technology and Systems at Tsinghua University.[57]

As an example, Zhejiang Lab in Hangzhou is one of China’s premier state-backed AI labs and conducts research in a wide variety of fields, from quantum sensing to industrial AI.[58] It was established in 2017 by the Zhejiang Provincial Government in partnership with Zhejiang University and Alibaba. The Shanghai AI Lab is another prominent AI lab that has developed widely used AI benchmarks, such as MVBench, as well as a world-class reasoning model called InternLM3.[59] Peng Cheng Lab, a state-backed AI lab in Shenzhen, has played an important role in supporting the development of frontier AI models by Baidu and Huawei.[60] These labs blur the line between private- and public-sector AI development in China, with state-backed labs supporting both Chinese government programs and private-sector AI development.

Beijing also has two major AI labs created by China’s Ministry of Science and Technology and the Beijing Municipal Government. The Beijing Academy of Artificial Intelligence (BAAI), also called the Zhiyuan Institute, is known for its work on AI safety and standards, foundational theory, and the development of open-source frontier models, such as WuDao and Emu3.[61] The Beijing Institute for General Artificial Intelligence is unique in explicitly focusing on AGI through an alternative approach based on human cognition.[62] Both Beijing labs work closely with Peking University and Tsinghua University and offer talent development programs.

The exact impact of China’s state-backed AI labs is difficult to estimate; China’s most advanced and most widely adopted AI models were developed primarily by private companies. However, Chinese AI labs also provide incubators for talent that can later support China’s private-sector AI growth and support government priorities across the tech sector.

AI-Specific Funding

Beijing is also increasing public funding for China’s AI industry through specialized industry funds, bank loan programs, and local government funding. Although there likely will be significant waste in the process, public funding will help support a growing AI start-up ecosystem, particularly for applications. In January 2025, China launched an $8.2 billion National AI Industry Investment Fund.[63] China’s broader $138 billion National Venture Capital Guidance Fund will target several AI-related fields, such as robotics and “embodied intelligence.”[64] Local governments, such as Hangzhou and Beijing, have followed suit with their own state-led AI investment funds.[65]

Major banks have also launched AI industry lending programs, most notably the Bank of China’s five-year, $138 billion financing program for AI-related industries.[66] Other banks, such as the People’s Bank of China and the Industrial and Commercial Bank of China (ICBC), have launched financing programs for the tech industry, which will likely include funding for AI specifically.[67] Many of these AI and tech funds were launched this year.

Local Government Support

Local governments have also taken a role in promoting AI within China. Although most efforts to transform inland cities into AI hubs are unlikely to succeed, efforts in such cities as Shenzhen and Hangzhou to build on their existing strengths as global tech hubs will significantly enhance China’s national AI capabilities. Shanghai was singled out by Xi during an April 2025 visit, when he called on the city to take the lead on AI development and promoted the Shanghai Foundation Model Innovation Center (an AI start-up incubator) and the city’s ability to attract foreign talent.[68]

China is also developing AI pilot zones across 20 cities, where AI companies can receive special financing and operate in a favorable regulatory environment.[69] Local governments often provide funding for start-ups through public investment funds and “computing vouchers” that offer subsidized access to computing resources.[70] Cities such as Beijing and Ningxia have set up computing exchange platforms to more effectively allocate compute resources across regions and data centers.[71]

Following Beijing’s lead, many Chinese cities have launched AI and “AI+” action plans aimed at supporting local start-ups and promoting AI adoption in other sectors. The Beijing city government’s AI+ action plan aims to integrate AI into government services and build a shared computing platform for training large language models (LLMs).[72] Shenzhen has launched an AI action plan aimed at building a 4,000 PFLOP/s intelligent computing center (the equivalent of about 4,000 Nvidia H100s).[73]
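
As a back-of-the-envelope check on that parenthetical (the per-GPU figure is my assumption, not the report’s: one Nvidia H100 delivers roughly 1 PFLOP/s of dense FP16 throughput):

```python
# Rough conversion of Shenzhen's target to GPU-equivalents.
center_pflops = 4_000        # planned intelligent computing center
h100_pflops_each = 1.0       # assumed approximate per-H100 throughput
print(center_pflops / h100_pflops_each)  # -> 4000.0, i.e. ~4,000 H100s
```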

Promoting Open Source

Beijing promotes open-source AI platforms, datasets, and models, which it views as a way to accelerate industry progress and circumvent potential export controls on proprietary technology. This open-source approach also allows China to potentially shape AI industry standards abroad through the adoption of its low-cost, open-source offerings.[74] China has been promoting its open-source AI collaboration platform called OpenI, in which participants can share AI models and datasets and access computing resources, though it is in its infancy in comparison with Western platforms, such as Hugging Face.[75]

Beijing has also been encouraging greater use of a Chinese alternative to Microsoft-owned GitHub called Gitee, which claims to have more than 13.5 million registered users, compared with GitHub’s more than 100 million users.[76] In addition to providing a domestic platform that is safe from U.S. policy action, Gitee allows Beijing to enforce greater censorship control.[77] However, subjecting code to a political review process on Gitee slows software development and makes the platform much less attractive to non-Chinese users.[78] Lastly, commercial players are also embracing open-source AI models after the success of DeepSeek’s R1 model.[79] Although an open-source approach spurs greater adoption and increases opportunities for commercialization, there are questions as to whether Beijing will continue to tolerate the corresponding more-limited censorship and state control that come with open-source models.[80]

Data

Beijing is also aiming to turn data into a strategic resource to give China an edge in AI, although efforts to date have been mixed.[81] Beijing wants to turn data into a new “factor of production” and has modified accounting rules to allow firms to classify data as intangible assets.[82] Local governments have established data marketplaces, such as the Shenzhen Data Exchange, to allow data to be traded by private firms, state-owned enterprises, and state agencies. China’s National Data Administration is preparing to launch a National Public Data Resource Platform to facilitate data trading on a national scale.[83] However, although Beijing has been pushing organizations to share data on these public exchanges, private firms are often reluctant to share their data because of concerns related to control risks and compliance with data protection laws.[84]

Instead, Beijing’s support for open data-sharing platforms is likely to play a greater role in advancing China’s AI industry by increasing general access to large training sets without the ownership complexities of a data trading exchange. State support for open data-sharing includes open data platforms, such as OpenI, as well as the creation of open datasets, such as FlagData, BAAI’s Chinese multimodal dataset.[85] Beijing is particularly focused on promoting data-sharing for robotics through such institutions as the Beijing Embodied Artificial Intelligence Robotics Innovation Center and the National Local Joint Humanoid Robot Innovation Center in Shanghai.[86] Several leading Chinese robotics companies, such as AgiBot and Fourier, also have released open training datasets, augmenting the country’s broader pool of robotics training data.[87]

Applications

Finally, Beijing has begun directly promoting the adoption of AI applications across all sectors of society as part of its AI industrial policy. In an April 2025 Politburo meeting on AI, Xi argued that China’s AI industry should be “strongly oriented toward applications.”[88] National AI plans, such as the 2017 AI development plan, as well as local government AI+ action plans, focus heavily on AI integration into public services and government operations.[89] China’s State-owned Assets Supervision and Administration Commission of the State Council, the parent organization that controls China’s most powerful central state firms, is also pushing AI integration across its member state-owned enterprises.[90]

Beijing is seeking to integrate AI into a wide variety of sectors in addition to government services. These include traditional sectors, from manufacturing and agriculture to education and health care, as well as emerging fields. In particular, Beijing is prioritizing AI development in robotics and “embodied intelligence.”[91] China released the 14th Five-Year Plan for the Development of the Robot Industry in 2021, followed by the Robot+ Application Action Plan in 2023 aimed at spurring the development and adoption of robots.[92]

Article link: https://www.rand.org/pubs/perspectives/PEA4012-1.html?

The AI Backlash Keeps Growing Stronger – Wired

Posted by timmreardon on 06/29/2025
Posted in: Uncategorized.

As generative artificial intelligence tools continue to proliferate, pushback against the technology and its negative impacts grows stronger.

BEFORE DUOLINGO WIPED its videos from TikTok and Instagram in mid-May, social media engagement was one of the language-learning app’s most recognizable qualities. Its green owl mascot had gone viral multiple times and was well known to younger users—a success story other marketers envied.

But, when news got out that Duolingo was making the switch to become an “AI-first” company, planning to replace contractors who work on tasks generative AI could automate, public perception of the brand soured.

Young people started posting on social media about how they were outraged at Duolingo as they performatively deleted the app—even if it meant losing the precious streak awards they earned through continued, daily usage. The comments on Duolingo’s TikTok posts in the days after the announcement were filled with rage, primarily focused on a single aspect: workers being replaced with automation.



The negative response online is indicative of a larger trend: Right now, though a growing number of Americans use ChatGPT, many people are sick of AI’s encroachment into their lives and are ready to fight back.

When reached for comment, Duolingo spokesperson Sam Dalsimer stressed that “AI isn’t replacing our staff” and said all AI-generated content on the platform would be created “under the direction and guidance of our learning experts.” The company’s plan is still to reduce its use of non-staff contractors for tasks that can be automated using generative AI.

Duolingo’s embrace of workplace automation is part of a broad shift within the tech industry. Leaders at Klarna, a buy now, pay later service, and Salesforce, a software company, have also made sweeping statements about AI reducing the need for new hires in roles like customer service and engineering. These decisions were made at the same time as developers sold “agents,” which are designed to automate software tasks, as a way to reduce the number of workers needed to complete certain tasks.

Still, the potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI. Add that to the error-ridden outputs, the environmental damage, the potential mental health impacts for users, and the concerns about copyright violations when AI tools are trained on existing works.

Many people were initially in awe of ChatGPT and other generative AI tools when they first arrived in late 2022. You could make a cartoon of a duck riding a motorcycle! But soon artists started speaking out, noting that their visual and textual works were being scraped to train these systems. The pushback from the creative community ramped up during the 2023 Hollywood writers’ strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios.

Right now, the general vibe aligns even more with the side of impacted workers. “I think there is a new sort of ambient animosity towards the AI systems,” says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. “AI companies have speedrun the Silicon Valley trajectory.”

Before ChatGPT’s release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since.

Ethical AI researchers have long warned about the potential negative impacts of this technology. The amplification of harmful stereotypes, increased environmental pollution, and potential displacement of workers are all widely researched and reported. Previously, these concerns were largely confined to academic discourse and online leftists paying attention to labor issues.

As AI outputs continued to proliferate, so did the cutting jokes. Alex Hanna, coauthor of The AI Con and director of research at the Distributed AI Research Institute, mentions how people have been “trolling” in the comment sections of YouTube Shorts and Instagram Reels whenever they see AI-generated content in their feeds. “I’ve seen this on the web for a while,” she says.

This generalized animosity towards AI has not abated over time. Rather, it’s metastasized. LinkedIn users have complained about being constantly prompted with AI-generated questions. Spotify listeners have been frustrated to hear AI-generated podcasts recapping their top-listened songs. Reddit posters have been upset to see AI-generated images on their microwavable noodles at the grocery store.

Tensions are so high that even the suspicion of AI usage is now enough to draw criticism. I wouldn’t be surprised if social media users screenshotted the em dashes in this piece—a supposed giveaway of AI-generated text outputs—and cast suspicions about whether I used a chatbot to spin up sections of the article.

A few days after I first contacted Duolingo for comment, the company hid all of its social media videos on TikTok and Instagram. But, soon the green owl was back online with a satirical post about conspiracy theories. “I’ve had it with the CEOs and those in power. It’s time we show them who’s in charge,” said a person wearing a three-eyed Duolingo mask. The video uploaded right afterwards was a direct message from the company’s CEO attempting to explain how humans would still be working at Duolingo, but AI could help them produce more language learning courses.

While the videos got millions of views on TikTok, the top comments continued to criticize Duolingo for AI-enabled automation: “Keep in mind they are still using AI for their lessons, this doesn’t change anything.”

This frustration over AI’s steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child’s mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate.

Unlike the dawn of the internet, when democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially for recent college graduates looking to find entry-level work.

“Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible,” says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. “Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources.”

Not only are the rich getting richer during the AI era, but many of the technology’s harms are falling on people of color and other marginalized communities. “Data centers are being located in these really poor areas that tend to be more heavily Black and brown,” Hanna says. She points out how locals have not just been fighting back online, but have also been organizing even more in-person to protect their communities from environmental pollution. We saw this in Memphis, Tennessee, recently, where Elon Musk’s artificial intelligence company xAI is building a large data center with over 30 methane-gas-powered generators that are spewing harmful exhaust.

The impacts of generative AI on the workforce are another core issue that critics are organizing around. “Workers are more intuitive than a lot of the pundit class gives them credit for,” says Merchant. “They know this has been a naked attempt to get rid of people.” The next major shift in public opinion will likely follow previous patterns, occurring when broad swaths of workers feel further threatened and organize in response. And this time, the in-person protests may be just as big as the online backlash.

Article link: https://www.wired.com/story/generative-ai-backlash/?

I’ve watched 3 “revolutionary” healthcare technologies fail spectacularly.

Posted by timmreardon on 06/28/2025
Posted in: Uncategorized.

Each time, the technology was perfect.

The implementation was disastrous.

Google Health (shut down twice). Microsoft HealthVault (lasted 12 years, then folded). IBM Watson for Oncology (massively overpromised).

Billions invested. Solid technology. Total failure.

Not because the vision was wrong, but because healthcare adoption follows different rules than consumer tech.

Here’s what I learned building healthcare tech for 15 years:
1/ Healthcare moves at the speed of trust, not innovation
↳ Lives are at stake, so skepticism is protective
↳ Regulatory approval takes years, usually for good reason
↳ Doctors need extensive validation before adoption
↳ Patients want proven solutions, not beta testing

2/ Integration trumps innovation every time
↳ The best tool that no one uses is worthless
↳ Workflow integration matters more than features
↳ EMR compatibility determines adoption rates
↳ Training time is always underestimated

3/ The “cool factor” doesn’t predict success
↳ Flashy demos rarely translate to daily use
↳ Simple solutions often outperform complex ones
↳ User interface design beats artificial intelligence
↳ Reliability matters more than cutting-edge features

4/ Reimbursement determines everything
↳ No CPT code = no sustainable business model
↳ Insurance coverage drives provider adoption
↳ Value-based care is changing this slowly
↳ Free trials don’t create lasting change

5/ Clinical champions make or break technology
↳ One enthusiastic doctor can drive adoption
↳ Early adopters must see immediate benefits
↳ Word-of-mouth beats marketing every time
↳ Resistance from key stakeholders kills innovations

The pattern I’ve seen: companies build technology for the healthcare system they wish existed, not the one that actually exists.

They optimize for TechCrunch headlines instead of clinic workflows.

They design for Silicon Valley investors instead of 65-year-old physicians.

A successful healthcare technology I’ve implemented?

A simple visit summarization app that saved me time and let me focus on the patient.

No fancy interface, very lightweight, integrated into my clinical workflow, effortless to use.

Just solved a problem that users had.

Healthcare doesn’t need more revolutionary technology.

It needs evolutionary technology that works within existing systems.

⁉️ What’s the simplest technology that’s made the biggest difference in your healthcare experience? Sometimes basic beats brilliant.
♻️ Repost if you believe implementation beats innovation in healthcare
👉 Follow me (Reza Hosseini Ghomi, MD, MSE) for realistic perspectives on healthcare technology

Article link: https://www.linkedin.com/posts/rezahg_ive-watched-3-revolutionary-healthcare-activity-7342178230193295360-XWK_?

This AI Model Never Stops Learning – Wired

Posted by timmreardon on 06/21/2025
Posted in: Uncategorized.

Scientists at Massachusetts Institute of Technology have devised a way for large language models to keep learning on the fly—a step toward building AI that continually improves itself.

MODERN LARGE LANGUAGE models (LLMs) might write beautiful sonnets and elegant code, but they lack even a rudimentary ability to learn from experience.

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information, including a user’s interests and preferences.

The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.

“The initial idea was to explore if tokens [units of text fed to LLMs and generated by them] could cause a powerful update to a model,” says Jyothish Pari, a PhD student at MIT involved with developing SEAL. Pari says the idea was to see if a model’s output could be used to train it.

Adam Zweiger, an MIT undergraduate researcher involved with building SEAL, adds that although newer models can “reason” their way to better solutions by performing more complex inference, the model itself does not benefit from this reasoning over the long term.

SEAL, by contrast, generates new insights and then folds them into its own weights or parameters. Given a statement about the challenges faced by the Apollo space program, for instance, the model generated new passages that try to describe the implications of the statement. The researchers compared this to the way a human student writes and reviews notes in order to aid their learning.

The system then updates the model using this data and tests how well the new model is able to answer a set of questions. Finally, this provides a reinforcement learning signal that helps guide the model toward updates that improve its overall abilities and help it carry on learning.
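
Conceptually, the loop the researchers describe can be sketched as follows. This is a simplified reconstruction from the article’s description only; every function here is an illustrative stub, not the actual SEAL code.

```python
# A minimal sketch of a SEAL-style loop: generate -> fine-tune -> evaluate
# -> reinforce. All helpers are stubs standing in for real implementations.

def generate_self_edit(model, passage):
    # Stub: the model writes synthetic training text about the passage
    # (restatements, implications), like a student rewriting notes.
    return [f"One implication of the passage: {passage}"]

def finetune(model, texts):
    # Stub: return a candidate model whose weights were updated on texts.
    return model

def evaluate(candidate, questions):
    # Stub: score the candidate on held-out questions about the passage.
    return 0.0

def reinforce(model, reward):
    # Stub: RL update that makes useful self-edits more likely next time.
    return model

def seal_loop(model, passages, questions_for, rounds=3):
    for _ in range(rounds):
        for passage in passages:
            edits = generate_self_edit(model, passage)            # 1. self-generated data
            candidate = finetune(model, edits)                    # 2. weight update
            reward = evaluate(candidate, questions_for[passage])  # 3. downstream test
            model = reinforce(candidate, reward)                  # 4. RL signal
    return model
```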

The researchers tested their approach on small and medium-size versions of two open source models, Meta’s Llama and Alibaba’s Qwen. They say that the approach ought to work for much larger frontier models too.

The researchers tested the SEAL approach on text as well as a benchmark called ARC that gauges an AI model’s ability to solve abstract reasoning problems. In both cases they saw that SEAL allowed the models to continue learning well beyond their initial training.

Pulkit Agrawal, a professor at MIT who oversaw the work, says that the SEAL project touches on important themes in AI, including how to get AI to figure out for itself what it should try to learn. He says it could well be used to help make AI models more personalized. “LLMs are powerful but we don’t want their knowledge to stop,” he says.

SEAL is not yet a way for AI to improve indefinitely. For one thing, as Agrawal notes, the LLMs tested suffer from what’s known as “catastrophic forgetting,” a troubling effect seen when ingesting new information causes older knowledge to simply disappear. This may point to a fundamental difference between artificial neural networks and biological ones. Pari and Zweiger also note that SEAL is computationally intensive, and it isn’t yet clear how to most effectively schedule new periods of learning. One fun idea, Zweiger mentions, is that, like humans, perhaps LLMs could experience periods of “sleep” where new information is consolidated.

Still, for all its limitations, SEAL is an exciting new path for further AI research—and it may well be something that finds its way into future frontier AI models.

What do you think about AI that is able to keep on learning? Send an email to hello@wired.com to let me know.

Article link: https://www.wired.com/story/this-ai-model-never-stops-learning/?

FEHRM CTO Targets Two-Year Cloud Migration for Federal EHR

Posted by timmreardon on 06/20/2025
Posted in: Uncategorized.

WED, 06/18/2025 

Lance Scott touts new EHR tech advancements, including cloud migration, expanded data exchange and AI integration to improve care delivery.

The Federal Electronic Health Record Modernization Office is targeting new tech advancements for the federal EHR, including moving to the cloud, boosting interoperability through new information exchange programs and integrating AI, the office’s CTO Lance Scott explained earlier this month during the 2025 ACT-IAC Health Innovation Conference in Reston, Virginia.

Moving the Federal EHR to the Cloud

Federal EHR agencies will transition the EHR to the cloud in tranches, according to Scott, and the deployment could take nearly two years to complete as agencies develop “flexible scalability.”

“We want to take advantage of the inherent native cloud services that we’ve got. It’s no small feat. It’s going to take better part of 18 months to two years to do,” Scott said.

Scott said that his team is working to ensure that the transition is as seamless for the user as possible as the EHR continues to be developed and moved to the cloud. Ideally, the user would not recognize a significant change as the system switches over.

“We’re trying to keep as much functionality turmoil out of the mix as possible to make sure that we don’t impact the users too much now. However, what we’re doing is we’re setting the stage,” Scott said during the conference.

Despite the potential promise of the EHR, Scott said he still has lingering concerns about cost increases of the modernization effort as it moves to the cloud, specifically hidden costs that have yet to materialize.

“I think the biggest thing that I’m worried about is functionality that’s going to be enabled by us going to the cloud that we haven’t looked at yet, that will cost extra money,” Scott said. “As far as the general move to the cloud, I don’t think I’ve seen any use case that says that it makes more sense to stay on prem.”

Expanding the Seamless Exchange Program

Scott said the Department of Veterans Affairs’ Seamless Exchange program is “finally reaching fruition.” VA first piloted the program last year in Walla Walla, Washington. The pilot was successful enough that VA plans to launch the program on a wider scale in November of this year. The program offers new opportunities for interoperability between the Defense Department and VA, and DOD intends to roll out its own Seamless Exchange capability following the success of the VA program.

“The reason why it’s so exciting is years ago, my focus was to get more data, get more partners, do as much as we can to bring in data. Now we’ve got 96% of the U.S. market that we exchange data with. Now we’ve got another problem. The problem is information overflow,” Scott said.

The seamless data exchange is built upon three foundational pillars: data de-duplication, which Scott said has a huge impact on performance and cost; data provenance, because data shared over and over between partners loses its origin; and auto-ingestion, which brings in data from hundreds or even thousands of partners that must then be analyzed by clinicians to drive the best outcomes.
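
To illustrate the first pillar, here is a minimal sketch of content-hash de-duplication (a common generic approach; the article does not say how the federal EHR actually implements it, so the details below are assumptions):

```python
# Keep only the first copy of each clinically identical record.
import hashlib
import json

def dedupe(records):
    seen, unique = set(), []
    for rec in records:
        # Hash a canonical form so field order can't create a false "new" record.
        digest = hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(rec)
    return unique

labs = [{"patient": "A", "test": "A1C", "value": 5.6},
        {"patient": "A", "test": "A1C", "value": 5.6},  # duplicate from a second partner
        {"patient": "A", "test": "A1C", "value": 5.9}]
assert len(dedupe(labs)) == 2  # the duplicate result is dropped
```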

According to Scott, lessons learned from these pilots will directly affect the deployment of the EHR and lead to better outcomes overall. The VA is currently on track to deploy the EHR at 13 new sites in fiscal year 2026 following a nearly three-year deployment pause.

AI’s Role in the Future Federal EHR

Within the DOD, Scott pointed to U.S. Military Entrance Processing Command, which uses data gathered by the EHR to filter candidates looking to join the military. The influx of data has allowed employees to sift through candidates at a much more efficient pace and approve or decline candidates based on a number of factors, such as medical history or drug use.

In the future, Scott says the next generation of the EHR will be AI-enabled, with new technologies augmenting the ability of clinicians to provide quality care.

“They’re going to have digital assistants. They’re going to have ambient listening. There’s going to be agents listening into what the doctor and patient talk back and forth about,” Scott said. “They actually will draft up diagnoses and notes for the clinician to look at and finalize and sign.”

Article link: https://govciomedia.com/fehrm-cto-targets-two-year-cloud-migration-for-federal-ehr/

The American Sense of Fair Play

Posted by timmreardon on 06/20/2025
Posted in: Uncategorized.

Somehow we seem to have lost the American sense of fair play. It’s the intuitive sense that people, regardless of status, can have the opportunity to pursue their goals and interests without interference from the government. Depriving immigrants of that opportunity, when they are well situated and contributing to the American economy, paying taxes, peaceful, committing no crimes, and just trying to survive on what can best be described as a meager existence in menial jobs, is morally and ethically wrong and anti-Christian in nature. Those who wish to disrupt their peaceful pursuit of a peaceful life are monsters of chaos and misinformation, and hardship for the poor and disenfranchised is their condemnation. The American sense of fair play requires that they be given an equal opportunity to prosper, regardless of station in life. Fair play is not tax cuts for those who don’t need them at the expense of those who are barely surviving on the edge of life. Where is our collective humanity?

Our brain is quietly paying a price for using ChatGPT…

Posted by timmreardon on 06/17/2025
Posted in: Uncategorized.

A recent study from MIT researchers (12 June) explored what happens when people rely on AI tools like ChatGPT for tasks like essay writing.

One of the key findings (probably not very surprising):
—— 𝐔𝐬𝐢𝐧𝐠 𝐂𝐡𝐚𝐭𝐆𝐏𝐓 𝐫𝐞𝐝𝐮𝐜𝐞𝐝 𝐧𝐞𝐮𝐫𝐚𝐥 𝐜𝐨𝐧𝐧𝐞𝐜𝐭𝐢𝐯𝐢𝐭𝐲 𝐚𝐧𝐝 𝐜𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 𝐞𝐧𝐠𝐚𝐠𝐞𝐦𝐞𝐧𝐭 (compared to other groups)

So here’s what they did:
54 participants were split into three groups:

  • One group used ChatGPT
  • One used a search engine
  • One worked without any digital assistance

They wrote essays while their brain activity was tracked (using EEG), their writing was analyzed, and they were interviewed about the experience.

Other interesting findings:
—— People relying on ChatGPT felt less ownership of their work and struggled more to recall or quote it.

—— When switching tools, those moving from ChatGPT to Brain-only found it harder to work unaided, while those moving the other way adapted quickly; some said it felt like gaining a superpower.

The study warns of cognitive debt: offloading too much thinking to AI could quietly erode critical thinking and deeper engagement over time.

The paper is quite long, full of rich details (I haven’t gone through it all yet!).

I’ll drop the link in the comments if you’re curious to explore it. Also, check page 5 first – it’s the “How to read this paper” guide from the authors, which is super helpful especially if you don’t have time for 200+ pages!

📍Btw, if you want to keep the brain active and build something great with AI, join our 𝐋𝐞𝐚𝐝𝐖𝐢𝐭𝐡𝐀𝐈𝐀𝐠𝐞𝐧𝐭𝐬 Hackathon (July 11–14)

Article link: https://www.linkedin.com/posts/alexwang2911_ai-cognitivescience-chatgpt-activity-7340798998154223618-8rvr?utm_source=share&utm_medium=member_ios&rcm=ACoAAAMNLzwBx4gZFYdrkprBeSa7F0HmSkFdYwU

VA official touts progress on EHR modernization project – Nextgov

Posted by timmreardon on 06/17/2025
Posted in: Uncategorized.

By EDWARD GRAHAM | JUNE 16, 2025 04:38 PM ET

The agency is undertaking “a no-fail mission to deliver a Federal EHR at every VA medical center by 2031,” VA Deputy Secretary Paul Lawrence said.

The Department of Veterans Affairs is making “real progress” in its push to deploy its new electronic health record at 13 VA medical facilities next year, according to a top agency official. 

VA Deputy Secretary Paul Lawrence said in a Monday LinkedIn post that the agency is “currently up and running with deployment activities at 11 sites across Michigan, Southern Ohio, and Indiana going live in 2026,” and is also planning to begin activities at two additional medical facilities in Cleveland and Anchorage later this month.

VA initially signed a $10 billion contract — which was later revised to over $16 billion — with Cerner in May 2018 to modernize its legacy health record system and make it interoperable with the Pentagon’s new health record, which was also provided by Cerner. Oracle later acquired Cerner in 2022.

The agency paused most deployments of its modernized EHR system in April 2023, however, to address patient safety concerns, technical issues and usability challenges at the sites where the new software had been deployed. 

VA announced in December that it was moving out of its operational pause and was looking to deploy the new EHR system at four Michigan-based medical sites in mid-2026. VA Secretary Doug Collins subsequently announced in March that the agency was planning to implement the modernized software at nine additional medical facilities next year, bringing the total to 13 sites.

As of this month, the new EHR system has been fully deployed at just six of VA’s 170 medical centers. During a congressional hearing last month, however, Collins told lawmakers he was optimistic about efforts to speed up rollouts of the new software, saying that “once you get momentum, you can add more sites as you go.” 

The Trump administration’s fiscal year 2026 budget proposal, which was released in May, also included a roughly $2.2 billion boost for the rollout of the new EHR system.

Lawrence said he is holding regular bi-weekly working sessions with the team at Oracle Health about the modernization project, and that VA is making a concerted push to complete the deployment.

“We’re rolling up our sleeves to tackle the tough issues head-on, from pharmacy modules to referrals, and eliminating outdated processes that are holding us back,” Lawrence said. “This is a no-fail mission to deliver a Federal EHR at every VA medical center by 2031.”

Even as VA works to ramp up its activities ahead of next year’s planned deployments, congressional lawmakers are still looking to shore up the modernization project. Last week, Republican lawmakers on the House Veterans’ Affairs Committee put forward a discussion draft of legislation that seeks to improve oversight and governance of the EHR software’s rollout. 

Article link: https://www.nextgov.com/modernization/2025/06/va-official-touts-progress-ehr-modernization-project/406113/

New database details AI risks – MIT

Posted by timmreardon on 06/12/2025
Posted in: Uncategorized.

by Beth Stackpole

Nov 26, 2024

Why It Matters

The AI Risk Repository aims to provide industry, policymakers, and academics with a shared framework for monitoring and maintaining AI risk oversight.

As artificial intelligence sees unprecedented growth and industry use cases soar, concerns mount about the technology’s risks, including bias, data breaches, job loss, and misuse. 

According to research firm Arize AI, the number of Fortune 500 companies citing AI as a risk in their annual financial reports hit 281 this year. That represents a 473.5% increase from 2022, when just 49 companies flagged the technology as a risk factor.

Given the scope and seriousness of the risk climate, a team of researchers that included MIT Sloan research scientist Neil Thompson has created the AI Risk Repository, a living database of over 700 risks posed by AI, categorized by cause and risk domain. The project aims to provide industry, policymakers, academics, and risk evaluators with a shared framework for monitoring and maintaining oversight of AI risks. The repository can also aid organizations with their internal risk assessments, risk mitigation strategies, and research and training development. 

The AI Risk Database details 777 different risks cited in AI literature to date.

While other entities have attempted to classify AI risks, existing classifications have generally been focused on only a small part of the overall AI risk landscape.

“The risks posed by AI systems are becoming increasingly significant as AI adoption accelerates across industry and society,” said Peter Slattery, a researcher at MIT FutureTech and the project lead. “However, these risks are often discussed in fragmented ways, across different industries and academic fields, without a shared vocabulary or consistent framework.”

Creating a unified risk view

To create the risk repository, the researchers searched academic databases and consulted other resources to review existing taxonomies and structured classifications of AI risk. They found that two types of classification systems were common in existing literature: high-level categorizations of causes of AI risks, such as when and why risks from AI occur; and midlevel categorizations of hazards and harms from AI, such as using AI to develop weapons or training AI systems on limited data.

Both types of classification systems are used in the AI Risk Repository, which has three components:

  • The AI Risk Database captures 777 different risks from 43 documents, with quotes and page numbers included. It will be updated as new risks emerge.
  • The Causal Taxonomy of AI Risks classifies how, when, and why such risks occur, based on their root causes. Causes are broken into three categories: the entity responsible (human or AI), the intentionality behind the risk (intentional or unintentional), and the timing of the risk (pre-deployment or post-deployment).
  • The Domain Taxonomy of AI Risks segments risks by the domain in which they occur, such as privacy, misinformation, or AI system safety. It comprises seven domains and 23 subdomains.

The two taxonomies can be used separately to filter the database for specific risks and domains, or in tandem to see how each causal factor relates to each risk domain. For example, applying both filters can distinguish discrimination and toxicity risks in which an AI system was deliberately trained on toxic content from the outset from those in which a deployed system inadvertently causes harm by displaying toxic content.
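To make that tandem filtering concrete, here is a minimal sketch in Python of how such a combined query might look. The field names and sample records are hypothetical stand-ins for illustration, not the repository's actual schema.

    # Minimal sketch of querying a risk database with both taxonomies.
    # Field names and sample records are hypothetical, not the repository's schema.

    RISKS = [
        {"id": 1, "domain": "Discrimination & toxicity",
         "entity": "human", "intent": "intentional", "timing": "pre-deployment"},
        {"id": 2, "domain": "Discrimination & toxicity",
         "entity": "AI", "intent": "unintentional", "timing": "post-deployment"},
        {"id": 3, "domain": "Privacy & security",
         "entity": "AI", "intent": "unintentional", "timing": "post-deployment"},
    ]

    def filter_risks(risks, **criteria):
        """Return entries matching every supplied taxonomy attribute."""
        return [r for r in risks
                if all(r.get(key) == value for key, value in criteria.items())]

    # Deliberate toxicity built in before deployment...
    trained_toxic = filter_risks(RISKS, domain="Discrimination & toxicity",
                                 intent="intentional", timing="pre-deployment")
    # ...versus toxic output that only surfaces after deployment.
    emergent_toxic = filter_risks(RISKS, domain="Discrimination & toxicity",
                                  intent="unintentional", timing="post-deployment")

Combining a causal filter (intent, timing) with a domain filter in this way is what lets a user separate the two toxicity scenarios described above.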

As part of the exercise, the researchers uncovered some interesting insights about the current literature. Among them:

  • Most risks were attributed to AI systems rather than to humans (51% versus 34%).
  • Most of the risks discussed occurred after an AI model had been trained and deployed (65%) rather than before (10%).
  • Nearly an equal number of intentional (35%) and unintentional (37%) risks were identified.

Putting the AI Risk Repository to work 

The MIT AI Risk Repository will have different uses for different audiences.

Policymakers. The repository can serve as a guide for developing and enacting regulations on AI systems. For example, it can be used to identify the type and nature of risks and their sources as AI developers aim to comply with regulations like the EU AI Act. The tool also creates a common language and set of criteria for discussing AI risks at a global scale.

Auditors. The repository provides a shared understanding of risks from AI systems that can guide those in charge of evaluating and auditing AI risks. While some AI risk management frameworks have already been developed, they are far less comprehensive.

Academics. The taxonomy can be used to synthesize information about AI risks across studies and sources. It can also help identify gaps in current knowledge so efforts can be directed toward those areas. The AI Risk Repository can also play a role in education and training, acclimating students and professionals to the inner workings of the AI risk landscape.

Industry. The AI Risk Repository can be a critical tool for safe and responsible AI application development as organizations build new systems. The AI Risk Database can also help identify specific behaviors that mitigate risk exposure.

“The risks of AI are poised to become increasingly common and pressing,” the MIT researchers write. “Efforts to understand and address these risks must be able to keep pace with the advancements in deployment of AI systems. We hope our living, common frame of reference will help these endeavors to be more accessible, incremental, and successful.”

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/new-database-details-ai-risks

Algorithms are everywhere – MIT Technology Review

Posted by timmreardon on 06/07/2025
Posted in: Uncategorized.


Three new books warn against turning into the person the algorithm thinks you are.

By Bryan Gardiner

February 27, 2024

Like a lot of Netflix subscribers, I find that my personal feed tends to be hit or miss. Usually more miss. The movies and shows the algorithms recommend often seem less predicated on my viewing history and ratings, and more geared toward promoting whatever’s newly available. Still, when a superhero movie starring one of the world’s most famous actresses appeared in my “Top Picks” list, I dutifully did what 78 million other households did and clicked.

As I watched the movie, something dawned on me: recommendation algorithms like the ones Netflix pioneered weren’t just serving me what they thought I’d like—they were also shaping what gets made. And not in a good way. 

The movie in question wasn’t bad, necessarily. The acting was serviceable, and it had high production values and a discernible plot (at least for a superhero movie). What struck me, though, was a vague sense of déjà vu—as if I’d watched this movie before, even though I hadn’t. When it ended, I promptly forgot all about it. 

That is, until I started reading Kyle Chayka’s recent book, Filterworld: How Algorithms Flattened Culture. A staff writer for the New Yorker, Chayka is an astute observer of the ways the internet and social media affect culture. “Filterworld” is his coinage for “the vast, interlocking … network of algorithms” that influence both our daily lives and the “way culture is distributed and consumed.” 

Music, film, the visual arts, literature, fashion, journalism, food—Chayka argues that algorithmic recommendations have fundamentally altered all these cultural products, not just influencing what gets seen or ignored but creating a kind of self-reinforcing blandness we are all contending with now.

That superhero movie I watched is a prime example. Despite my general ambivalence toward the genre, Netflix’s algorithm placed the film at the very top of my feed, where I was far more likely to click on it. And click I did. That “choice” was then recorded by the algorithms, which probably surmised that I liked the movie and then recommended it to even more viewers. Watch, wince, repeat.  

“Filterworld culture is ultimately homogenous,” writes Chayka, “marked by a pervasive sense of sameness even when its artifacts aren’t literally the same.” We may all see different things in our feeds, he says, but they are increasingly the same kind of different. Through these milquetoast feedback loops, what’s popular becomes more popular, what’s obscure quickly disappears, and the lowest-common-denominator forms of entertainment inevitably rise to the top again and again.
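The mechanics of that loop are simple enough to simulate. The toy Python model below is my own illustration, not anything drawn from Chayka's book or a real platform's ranking system: items are shown in proportion to their past clicks, and a few early leaders end up with a share of attention far beyond the 15 percent that three of twenty items would earn under uniform exposure.

    import random

    # Toy rich-get-richer simulation of an engagement-ranked feed.
    # Entirely illustrative -- not a model of any real platform's algorithm.

    random.seed(42)
    clicks = [1] * 20                      # 20 items, one seed click each

    for _ in range(10_000):                # each round, one user clicks one item
        # Exposure, and therefore the next click, is proportional to past clicks.
        winner = random.choices(range(len(clicks)), weights=clicks)[0]
        clicks[winner] += 1

    top3_share = sum(sorted(clicks, reverse=True)[:3]) / sum(clicks)
    print(f"Top 3 of 20 items captured {top3_share:.0%} of all clicks")

Run it with different seeds and the leaders change, but the concentration does not; that arbitrariness combined with concentration is the homogenizing dynamic Chayka describes.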

This is actually the opposite of the personalization Netflix promises, Chayka notes. Algorithmic recommendations reduce taste—traditionally, a nuanced and evolving opinion we form about aesthetic and artistic matters—into a few easily quantifiable data points. That oversimplification subsequently forces the creators of movies, books, and music to adapt to the logic and pressures of the algorithmic system. Go viral or die. Engage. Appeal to as many people as possible. Be popular.  

A joke posted on X by a Google engineer sums up the problem: “A machine learning algorithm walks into a bar. The bartender asks, ‘What’ll you have?’ The algorithm says, ‘What’s everyone else having?’” “In algorithmic culture, the right choice is always what the majority of other people have already chosen,” writes Chayka. 

One challenge for someone writing a book like Filterworld—or really any book dealing with matters of cultural import—is the danger of (intentionally or not) coming across as a would-be arbiter of taste or, worse, an outright snob. As one might ask, what’s wrong with a little mindless entertainment? (Many asked just that in response to Martin Scorsese’s controversial Harper’s essay in 2021, which decried Marvel movies and the current state of cinema.)

Chayka addresses these questions head on. He argues that we’ve really only traded one set of gatekeepers (magazine editors, radio DJs, museum curators) for another (Google, Facebook, TikTok, Spotify). Created and controlled by a handful of unfathomably rich and powerful companies (which are usually led by a rich and powerful white man), today’s algorithms don’t even attempt to reward or amplify quality, which of course is subjective and hard to quantify. Instead, they focus on the one metric that has come to dominate all things on the internet: engagement.

There may be nothing inherently wrong (or new) about paint-by-numbers entertainment designed for mass appeal. But what algorithmic recommendations do is supercharge the incentives for creating only that kind of content, to the point that we risk not being exposed to anything else.

“Culture isn’t a toaster that you can rate out of five stars,” writes Chayka, “though the website Goodreads, now owned by Amazon, tries to apply those ratings to books. There are plenty of experiences I like—a plotless novel like Rachel Cusk’s Outline, for example—that others would doubtless give a bad grade. But those are the rules that Filterworld now enforces for everything.”

Chayka argues that cultivating our own personal taste is important, not because one form of culture is demonstrably better than another, but because that slow and deliberate process is part of how we develop our own identity and sense of self. Take that away, and you really do become the person the algorithm thinks you are. 

Algorithmic omnipresence

As Chayka points out in Filterworld, algorithms “can feel like a force that only began to exist … in the era of social networks” when in fact they have “a history and legacy that has slowly formed over centuries, long before the Internet existed.” So how exactly did we arrive at this moment of algorithmic omnipresence? How did these recommendation machines come to dominate and shape nearly every aspect of our online and (increasingly) our offline lives? Even more important, how did we ourselves become the data that fuels them?

These are some of the questions Chris Wiggins and Matthew L. Jones set out to answer in How Data Happened: A History from the Age of Reason to the Age of Algorithms. Wiggins is a professor of applied mathematics and systems biology at Columbia University. He’s also the New York Times’ chief data scientist. Jones is now a professor of history at Princeton. Until recently, they both taught an undergrad course at Columbia, which served as the basis for the book.

They begin their historical investigation at a moment they argue is crucial to understanding our current predicament: the birth of statistics in the late 18th and early 19th century. It was a period of conflict and political upheaval in Europe. It was also a time when nations were beginning to acquire both the means and the motivation to track and measure their populations at an unprecedented scale.

“War required money; money required taxes; taxes required growing bureaucracies; and these bureaucracies needed data,” they write. “Statistics” may have originally described “knowledge of the state and its resources, without any particularly quantitative bent or aspirations at insights,” but that quickly began to change as new mathematical tools for examining and manipulating data emerged.

One of the people wielding these tools was the 19th-century Belgian astronomer Adolphe Quetelet. Famous for, among other things, developing the highly problematic body mass index (BMI), Quetelet had the audacious idea of taking the statistical techniques his fellow astronomers had developed to study the position of stars and using them to better understand society and its people. This new “social physics,” based on data about phenomena like crime and human physical characteristics, could in turn reveal hidden truths about humanity, he argued.

“Quetelet’s flash of genius—whatever its lack of rigor—was to treat averages about human beings as if they were real quantities out there that we were discovering,” write Wiggins and Jones. “He acted as if the average height of a population was a real thing, just like the position of a star.” 

From Quetelet and his “average man” to Francis Galton’s eugenics to Karl Pearson and Charles Spearman’s “general intelligence,” Wiggins and Jones chart a depressing progression of attempts—many of them successful—to use data as a scientific basis for racial and social hierarchies. Data added “a scientific veneer to the creation of an entire apparatus of discrimination and disenfranchisement,” they write. It’s a legacy we’re still contending with today. 

Another misconception that persists? The notion that data about people are somehow objective measures of truth. “Raw data is an oxymoron,” observed the media historian Lisa Gitelman a number of years ago. Indeed, all data collection is the result of human choice, from what to collect to how to classify it to who’s included and excluded. 

Whether it’s poverty, prosperity, intelligence, or creditworthiness, these aren’t real things that can be measured directly, note Wiggins and Jones. To quantify them, you need to choose an easily measured proxy. This “reification” (“literally, making a thing out of an abstraction about real things”) may be necessary in many cases, but such choices are never neutral or unproblematic. “Data is made, not found,” they write, “whether in 1600 or 1780 or 2022.”
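That reification step can be made concrete in a couple of lines of Python. The function below is purely hypothetical; the field names, ratio, and 0.95 cutoff are invented for illustration, not drawn from any real scoring system.

    # Hypothetical reification: "creditworthiness" cannot be measured directly,
    # so an easily measured proxy is defined and then treated as the thing itself.
    def creditworthy(on_time_payments: int, total_payments: int) -> bool:
        return total_payments > 0 and on_time_payments / total_payments >= 0.95

    # Every choice here -- the proxy, the ratio, the cutoff -- is made, not found.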

Perhaps the most impressive feat Wiggins and Jones pull off in the book as they continue to chart data’s evolution throughout the 20th century and the present day is dismantling the idea that there is something inevitable about the way technology progresses. 

For Quetelet and his ilk, turning to numbers to better understand humans and society was not an obvious choice. Indeed, from the beginning, everyone from artists to anthropologists understood the inherent limitations of data and quantification, making some of the same critiques of statisticians that Chayka makes of today’s algorithmic systems (“Such statisticians ‘see quality not at all, but only quantity’”).

Whether they’re talking about the machine-learning techniques that underpin today’s AI efforts or an internet built to harvest our personal data and sell us stuff, Wiggins and Jones recount many moments in history when things could have just as likely gone a different way.

“The present is not a prison sentence, but merely our current snapshot,” they write. “We don’t have to use unethical or opaque algorithmic decision systems, even in contexts where their use may be technically feasible. Ads based on mass surveillance are not necessary elements of our society. We don’t need to build systems that learn the stratifications of the past and present and reinforce them in the future. Privacy is not dead because of technology; it’s not true that the only way to support journalism or book writing or any craft that matters to you is spying on you to service ads. There are alternatives.” 

A pressing need for regulation

If Wiggins and Jones’s goal was to reveal the intellectual tradition that underlies today’s algorithmic systems, including “the persistent role of data in rearranging power,” Josh Simons is more interested in how algorithmic power is exercised in a democracy and, more specifically, how we might go about regulating the corporations and institutions that wield it.

Currently a research fellow in political theory at Harvard, Simons has a unique background. Not only did he work for four years at Facebook, where he was a founding member of what became the Responsible AI team, but he previously served as a policy advisor for the Labour Party in the UK Parliament. 

In Algorithms for the People: Democracy in the Age of AI, Simons builds on the seminal work of authors like Cathy O’Neil, Safiya Noble, and Shoshana Zuboff to argue that algorithmic prediction is inherently political. “My aim is to explore how to make democracy work in the coming age of machine learning,” he writes. “Our future will be determined not by the nature of machine learning itself—machine learning models simply do what we tell them to do—but by our commitment to regulation that ensures that machine learning strengthens the foundations of democracy.”

Much of the first half of the book is dedicated to revealing all the ways we continue to misunderstand the nature of machine learning, and how its use can profoundly undermine democracy. And what if a “thriving democracy”—a term Simons uses throughout the book but never defines—isn’t always compatible with algorithmic governance? Well, it’s a question he never really addresses. 

Whether these are blind spots or Simons simply believes that algorithmic prediction is, and will remain, an inevitable part of our lives, the lack of clarity doesn’t do the book any favors. While he’s on much firmer ground when explaining how machine learning works and deconstructing the systems behind Google’s PageRank and Facebook’s Feed, there remain omissions that don’t inspire confidence. For instance, it takes an uncomfortably long time for Simons to even acknowledge one of the key motivations behind the design of the PageRank and Feed algorithms: profit. Not something to overlook if you want to develop an effective regulatory framework. 

Much of what’s discussed in the latter half of the book will be familiar to anyone following the news around platform and internet regulation (hint: that we should be treating providers more like public utilities). And while Simons has some creative and intelligent ideas, I suspect even the most ardent policy wonks will come away feeling a bit demoralized given the current state of politics in the United States. 

In the end, the most hopeful message these books offer is embedded in the nature of algorithms themselves. In Filterworld, Chayka includes a quote from the late, great anthropologist David Graeber: “The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.” It’s a sentiment echoed in all three books—maybe minus the “easily” bit. 

Algorithms may entrench our biases, homogenize and flatten culture, and exploit and suppress the vulnerable and marginalized. But these aren’t completely inscrutable systems or inevitable outcomes. They can do the opposite, too. Look closely at any machine-learning algorithm and you’ll inevitably find people—people making choices about which data to gather and how to weigh it, choices about design and target variables. And, yes, even choices about whether to use them at all. As long as algorithms are something humans make, we can also choose to make them differently. 

Bryan Gardiner is a writer based in Oakland, California.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/02/27/1088164/algorithms-book-reviews-kyle-chayka-chris-wiggins-matthew-l-jones-josh-simons/amp/
