healthcarereimagined

Envisioning healthcare for the 21st century

America Should Assume the Worst About AI – Foreign Affairs

Posted by timmreardon on 07/23/2025
Posted in: Uncategorized.
How to Plan for a Tech-Driven Geopolitical Crisis
Matan Chorev and Joel Predd

July 22, 2025

National security leaders rarely get to choose what to care about and how much to care about it. They are more often subject to circumstances beyond their control. The September 11 attacks reversed the George W. Bush administration’s plan to reduce the United States’ global commitments and responsibilities. Revolutions across the Arab world pushed President Barack Obama back into the Middle East just as he was trying to pull the United States out. And Russia’s invasion of Ukraine upended the Biden administration’s goal of establishing “stable and predictable” relations with Moscow so that it could focus on strategic competition with China.

Policymakers could foresee many of the underlying forces and trends driving these agenda-shaping events. Yet for the most part, they failed to plan for the most challenging ways those forces could play out. They had to scramble to reconceptualize and recalibrate their strategies to respond to unfolding events.

The rapid advance of artificial intelligence—and the possible emergence of artificial general intelligence—promises to present policymakers with even greater disruption. Indicators of a coming powerful change are everywhere. Beijing and Washington have made global AI leadership a strategic imperative, and leading U.S. and Chinese companies are racing to achieve AGI. News coverage features near-daily announcements of technical breakthroughs, discussions of AI-driven job loss, and fears of catastrophic global risks such as the AI-enabled engineering of a deadly pandemic.

There is no way of knowing with certainty the exact trajectory along which AI will develop or precisely how it will transform national security. Policymakers should therefore assess and debate the merits of competing AI strategies with humility and caution. Whether one is bullish or bearish about AI’s prospects, though, national security leaders need to be ready to adapt their strategic plans to respond to events that could impose themselves on decision-makers this decade, if not during this presidential term. Washington must prepare for potential policy tradeoffs and geopolitical shifts, and identify practical steps it can take today to mitigate risks and turbocharge U.S. competitiveness. Some ideas and initiatives that today may seem infeasible or unnecessary will seem urgent and self-evident with the benefit of hindsight.

THINKING OUTSIDE THE BOX

There is no standard, shared definition of AGI or consensus on whether, when, or how it might emerge. Today’s frontier AI models are already capable of performing a growing number of increasingly complex cognitive tasks as well as, or better than, the most skilled and best-resourced humans. Since ChatGPT launched in 2022, the power of AI has increased by leaps and bounds. It is reasonable to assume that these models will become more powerful, autonomous, and diffuse in the coming years.

Nevertheless, the AGI era is not likely to announce itself with an earth-shattering moment as the nuclear era did with the first nuclear weapons test. Nor are the economic and technological circumstances as favorable to U.S. planners as they were in the past. In the nuclear era, for example, the U.S. government controlled the new technology, and planners had two decades to develop policy frameworks before a nuclear rival emerged. Planners today, by contrast, have less agency and time to adapt. China is already a near peer in technology, a handful of private companies are steering development, and AI is a general-purpose technology that is spreading to nearly every part of the economy and society.

In this rapidly changing environment, national security leaders should dedicate scarce planning resources to plausible but acutely challenging events. These types of events are not merely disruptions to the status quo but also signposts of alternative futures.

Say, for instance, that a U.S. company claims to have made the transformative technological leap to AGI. Leaders must decide how the U.S. government should respond if the company requests to be treated as a “national security asset.” This designation would grant the company public support that could allow it to secure its facilities, access sensitive or proprietary data, acquire more advanced chips, and avoid certain regulations. Alternatively, a Chinese firm may declare that it has achieved AGI before any of its U.S. rivals.

Policymakers grappling with these scenarios will have to balance competing and sometimes contradictory assessments, which will lead to different judgments about how much risk to accept and which concerns to prioritize. Without robust, independent analytic capabilities, the U.S. government may struggle to determine whether the firms’ claims are credible. National security leaders will also have to consider whether the new technological advance could provide China with a strategic advantage. If they fear AGI could give Beijing the ability to identify and exploit vulnerabilities in U.S. critical infrastructure faster than cyberdefenses can patch them, for example, they may prescribe actions—such as trying to slow or sabotage China’s AI development—that could escalate the risk of geopolitical conflict. On the other hand, if national security leaders are more concerned that nonstate actors or terrorists could use this new technology to create catastrophic bioweapons, they may prefer to try to cooperate with Beijing to prevent proliferation of a larger global threat.

Enhancing preparedness for AGI scenarios requires better understanding of the AI ecosystem at home and abroad. Government agencies need to keep up with how AI is developing to identify where new advances are most likely to emerge. This will reduce the risk of strategic surprise and help inform policy choices on which bottlenecks to prioritize and which vulnerabilities to exploit to potentially slow China’s progress.

Policymakers also need to explore ways to work with the private sector and with other countries. A scalable, dynamic, and two-way public-private partnership is crucial for a strategic response to the current challenges that AI presents, and this will be even more the case in an AGI world. Mutual suspicion between government and the private sector could cripple any crisis response. Meanwhile, leaders will need to develop policies to share sensitive, proprietary information on developments in frontier AI with partners and allies. Without such policies, it will be challenging to build the international coalition needed to respond to an AI-induced crisis, reduce global risk, and hold countries and companies accountable for irresponsible behavior.

ADVERSARIAL INTELLIGENCE

Artificial general intelligence will not only complicate existing geopolitical dynamics; it will also present novel national security challenges. Imagine an unprecedented AI-enabled cyberattack that wreaks havoc on financial institutions, private corporations, and government agencies and shuts down physical systems ranging from critical infrastructure to industrial robotics. In today’s world, determining who is responsible for cyberwarfare is already a challenging and time-intensive task. Any number of state and nonstate actors possess both the means and motivations to carry out destabilizing attacks. In a world with increasingly advanced AI, however, the situation would be even more complex. Policymakers would have to contemplate not only the possibility that an operation of this scale might be the prelude to a military campaign but also that it might be the work of an autonomous, self-replicating AI agent.

Planning for this scenario requires evaluating how today’s capabilities can handle tomorrow’s challenges. Governments cannot rely on present-day tools and techniques to quickly and confidently assess a threat, let alone apply relevant countermeasures. Given AI systems’ proven capacity to deceive and dissemble, current systems may be unable to determine whether an AI agent is operating on its own or at the behest of an adversary. Planners need to find new ways to assess such an agent’s motivations and to deter escalation.

Preparing for the worst requires reevaluating “attribution-agnostic” steps to harden cyberdefenses, isolate potentially compromised data centers, and prevent the incapacitation of drones or connected vehicles. Planners need to assess whether current military and continuity-of-operations protocols can handle threats from adversarial AI. Public distrust of the government and technology companies will make it even more difficult to reassure a worried populace in the event of artificial intelligence–fueled misinformation. Given that an autonomous AI agent is not likely to respect national boundaries, adequate preparations would involve setting up channels with partners and adversaries alike to coordinate an effective international response.

How leaders diagnose the external impacts of an impending threat will shape how they react. In the event of a cyberattack, policymakers will have to make a real-time decision about whether to pursue targeted shutdowns of vulnerable cyber-physical systems and compromised data centers or—fearing the potential for rapid replication—impose a more comprehensive shutdown, which could prevent escalation but inhibit the functioning of the digital economy and systems on which airports and power plants rely. This loss-of-control scenario highlights the importance of clarifying legal authority and developing incident-response plans. More broadly, it reinforces the urgency of creating policies and technical strategies to address how advanced models are inclined to misbehave.

At minimum, planning should involve four types of actions. First, it should establish “no regret” actions that policymakers and private-sector players can take today to respond to events from a position of strength. Second, it should create “break glass” playbooks for future emergencies that can be continually updated as new threats, opportunities, and concepts emerge. Third, it should invest in capabilities that seem crucial across multiple scenarios. Finally, it should prioritize early indicators and warnings of strategic failure and create conditions for course corrections.

NO COUNTRY FOR OLD HABITS

Planning for the impacts of AGI on national security needs to start now. In an increasingly competitive and combustible world, and with an economically fragile and politically polarized domestic environment, the United States cannot afford to be caught by surprise.

Although it is possible that AI will ultimately prove to be a “normal technology”—a technology, like the Internet or electricity, that transforms the world but whose pace of adoption has natural limits that governments and societies can control—it would be foolish to assume that preparing for major disruption would be a mistake. Planning for more difficult challenges can help leaders identify core strategic issues and build response tools that will be equally useful in less severe circumstances. It would also be unwise to presume that such planning will generate policy instincts and pathways that exacerbate risks or slow AI advances. In the nuclear era, for example, planning for potential nuclear terrorism inspired global initiatives to secure the fissile material needed to make nuclear weapons, an effort that ultimately made the world safer.

It would also be dangerous to treat the possibility of AGI like any “normal scenario” in the national security world. Technological expertise and fluency across the government are limited and uneven, and the institutional players that would be involved in responding to any scenario extend far beyond traditional national security agencies. Most scenarios are likely to occur abroad and at home simultaneously. Any response will rely heavily on the choices and decisions of actors outside government, including companies and civil society organizations, that do not have a seat in the White House Situation Room and may not prioritize national security. Likewise, planning cannot be delegated to futurists and technical experts sent to a far-off bunker to spend months crafting detailed plans in isolation. Preparing for a future with AGI must continuously inform today’s strategic debates.

There is an active debate about the merits of various strategies to win the competition for AI while avoiding catastrophe, but there has been less discussion about how AGI might reshape the international landscape, the distribution of global power, and geopolitical alliances. In an increasingly multipolar world, emerging players see advanced AI—and how the United States and China diffuse AI technology and its underlying digital architecture—as key to their national aspirations. Early planning, tabletop exercises with allies and partners, and sustained dialogue with countries that want to hedge their diplomatic bets will ensure that strategic choices are mutually beneficial. Any AI strategy that fails to account for a multipolar world and a more distributed global technology ecosystem will fail. And any national security strategy that fails to grapple with the potentially transformative effects of AGI will become irrelevant.

National security leaders don’t get to choose their crises. They do, however, get to choose what to plan for and where to allocate resources to prepare for future challenges. Planning for AGI is not an indulgence in science fiction or a distraction from existing problems and opportunities. It is a responsible way to prepare for the very real possibility of a new set of national security challenges in a radically transformed world.

Article link: https://www.foreignaffairs.com/united-states/artificial-intelligence-geopolitics-worst-about-ai
