healthcarereimagined

Envisioning healthcare for the 21st century

Is AI an Existential Risk? Q&A with RAND Experts

Posted by timmreardon on 03/15/2024
Posted in: Uncategorized.

March 11, 2024

What are the potential risks associated with artificial intelligence? Might any of these be catastrophic or even existential? And as momentum builds toward boundless applications of this technology, how might humanity reduce AI risk and navigate an uncertain future?

At a recent RAND event, a panel of five experts explored these emerging questions. While the gathering highlighted RAND’s diversity in academic disciplines and perspectives, the panelists were unsurprisingly unanimous that independent, high-quality research will play a pivotal role in exploring AI’s short- and long-term risks—as well as the implications for public policy.

What do you view as the biggest risks posed by AI?

BENJAMIN BOUDREAUX AI could pose a significant risk to our quality of life and the institutions we need to flourish. The risk I’m concerned about isn’t a sudden, immediate event. It’s a set of incremental harms that worsen over time. Like climate change, AI might be a slow-moving catastrophe, one that diminishes the institutions and agency we need to live meaningful lives.

This risk doesn’t require superintelligence or artificial general intelligence or AI sentience. Rather, it’s a continuation and worsening of effects that are already happening. For instance, there’s significant evidence that social media, a form of AI, has serious effects on institutions and mental well-being.

AI seems to promote mistrust that fractures shared identities and a shared sense of reality. There’s already evidence that AI has undermined the credibility and legitimacy of our election system. And there’s significant evidence that AI has exacerbated inequity and bias. There’s evidence that AI has impacted industries like journalism, producing cascading effects across society. And as AI becomes a driver of international competition, it could become harder to respond to other catastrophes, like another pandemic or climate change. As AI gets more capable, as AI companies become more powerful, and as we become dependent on AI, I worry that all those existing risks and harms will get worse.

JONATHAN WELBURN It’s important to note that we’ve had previous periods in history with revolutionary technological shocks: electricity, the printing press, the internet. This moment is similar to those. AI will lead to a series of technological innovations, some of which we might be able to imagine now, but many that we won’t be able to imagine.

AI might exacerbate existing risks and create new ones. I think about inequity and inequality. AI bias might undermine social and economic mobility. Racial and gender biases might be baked into models. Things like deepfakes might undermine our trust in institutions.

But I’m looking at more of a total system collapse as a worst-case scenario. The world in 2023 already had high levels of inequality. And so, building from that foundation, where there’s already a high concentration of wealth and power—that’s where the potential worst-case scenario is for me. Capital owners hold all the newest technology that’s about to be created, all the wealth, and all the decisionmaking power. And this is a system that undermines many democratic norms.

But I don’t see this as a permanent state, necessarily. I see it as a temporary state that could have generations of harm. We would transition through this AI shock. But it’s still up to humanity to create the policy solutions that prevent the worst-case scenario.

JEFF ALSTOTT There’s a real diversity in the risks we’re all studying at RAND. And that doesn’t mean that any of these risks are invalid or necessarily more or less important than the others. So, I agree with everything that Ben and Jon have said.

One of the risks that keeps me up at night is the resurrection of smallpox. The story here is formerly exquisite technical knowledge being used by bad actors. Bioweapons happen to be one example where, historically, the barriers have been information and knowledge. You no longer need much in the way of specialized matériel or expensive equipment to achieve devastating effects, such as launching a pandemic. AI could close the knowledge gap. And bio is just one example. The same story repeats with AI and chemical weapons, nuclear weapons, cyber weapons.

Then, eventually, there’s the issue of not just bad people doing bad things but AIs themselves running off and doing bad things, plus anything else they want to do—AI run amok.

NIDHI KALRA To me, AI is gas on the fire. I’m less concerned with the risk of AI than with the fires themselves—the literal fires of climate change and potential nuclear war, and the figurative fires of rising income inequality and racial animus. Those are the realities of the world. We’ve been living those for generations. And so, I’m not any more awake at night because of AI than I was already because of the wildfires in Canada, for instance.

But I do have a concern: What does the world look like when we, even more than is already the case today, can’t distinguish fact from fiction? What if we can’t distinguish a human being from something else? I don’t know what that does to the kind of humanity that we’ve lived with for the entire existence of our species. I worry about a future in which we don’t know who we are. We can’t recognize each other. That vague foreboding of a loss of humanity is what, if anything, keeps me up. Otherwise, I think we’ll be just as fine as we were yesterday.

EDWARD GEIST AI threatens to be an amplifier for human stupidity. That characterization captures the types of harms that are already occurring, like what Ben was discussing, but also more speculative types of harms. So, for instance, the idea of machines that do what you ask for—rather than what you wanted or should have asked for—or machines that make the same kind of mistakes that humans make, only faster and in larger quantities.

Some of you have addressed this implicitly, but let’s tackle the question head on, as briefly as you can: With the caveat that there’s still much uncertainty surrounding AI, do you think it poses an existential risk?

WELBURN No. I don’t think AI poses an irreversible harm to humanity. I think it can worsen our lives. I think it can have long-lasting harm. But I think it’s ultimately something that we can recover from.

KALRA I second Jon: No. Looking back over millions of years, we are an incredibly resilient species. I think that’s not to be taken lightly.

BOUDREAUX Yes. An existential risk is an unrecoverable harm to humanity’s potential. One way that could happen is that humans die. But the other way that can happen is if we no longer engage in meaningful human activity, if we no longer have embodied experience, if we’re no longer connected to our fellow humans. That, I think, is the existential risk of AI.

ALSTOTT Yes.

GEIST I’m not sure, and here’s why: I’m a nuclear strategist, and I’ve learned through my studies just how hard it can be to tell the difference.

For example, is the hydrogen bomb an existential risk? I think most laypeople would probably say, “Yes, of course.” But even using a straightforward definition of existential risk (humans go extinct), the answer isn’t obvious.

The current scientific understanding is that the nuclear weapons that exist today could probably not be used in a way that would result in human extinction. That scenario would require more and bigger bombs. That said, a nuclear arsenal that could plausibly cause human extinction via fallout and other effects does appear to be something a country could build if it really wanted to.

Are there AI policies that reasonable people could consider and potentially agree on, despite thinking differently about the question of whether AI poses an existential risk?

KALRA Perhaps ensuring that we’re not moving too quickly in integrating AI into our critical infrastructure systems.

BOUDREAUX Transparency or oversight, so we can actually audit tech companies for the claims they’re making. They extol all these benefits of AI, so they should show us the data. Let us look behind the scenes, as much as we can, to see whether those claims are true. And this isn’t just a role for researchers. For instance, the Federal Trade Commission might need greater funding to ensure that it can prohibit unfair and deceptive trade practices. I think just holding companies to the standards that they themselves have set could be a good step.

What else are you thinking about when it comes to the role that research can play as we move into this new era?

KALRA I want to see questions about AI policy problems asked in a very RAND-like way: What would we have to believe about our world and about the costs of various actions to prefer Action A over Action B? What’s the quantity of evidence you need for this to be a good decision? Do we have anything resembling that level of evidence? What are the trade-offs if we’re right? What are the trade-offs if we’re wrong? That’s the kind of framing I’d like to see in discussions of AI policy.
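To make that framing concrete, here is a minimal sketch in Python of the kind of break-even analysis Kalra describes. Every number and name in it (HARM_COST, COST_A, COST_B, RISK_REDUCTION) is a hypothetical placeholder, not a RAND estimate; the point is only that writing down costs and beliefs explicitly lets you solve for the belief at which Action A becomes preferable to Action B.

# A minimal, purely illustrative sketch of the "what would we have to
# believe" framing described above. All inputs are hypothetical, not RAND
# estimates. We compare two actions by expected cost and solve for the
# belief (probability of harm) at which the preference flips.

def expected_cost(action_cost: float, harm_prob: float, harm_cost: float) -> float:
    """Direct cost of an action plus the probability-weighted harm it leaves."""
    return action_cost + harm_prob * harm_cost

HARM_COST = 1_000.0   # hypothetical loss if the bad outcome occurs
COST_A = 50.0         # Action A: act now (e.g., slow deployment); costly up front
COST_B = 5.0          # Action B: wait and see; cheap, but reduces no risk
RISK_REDUCTION = 0.8  # fraction of the harm probability Action A eliminates

def preferred_action(harm_prob: float) -> str:
    cost_a = expected_cost(COST_A, harm_prob * (1 - RISK_REDUCTION), HARM_COST)
    cost_b = expected_cost(COST_B, harm_prob, HARM_COST)
    return "A" if cost_a < cost_b else "B"

# Prefer A once p * RISK_REDUCTION * HARM_COST > COST_A - COST_B.
break_even = (COST_A - COST_B) / (RISK_REDUCTION * HARM_COST)
print(f"Prefer Action A when P(harm) exceeds {break_even:.3f}")
for p in (0.01, 0.05, 0.10, 0.25):
    print(f"P(harm) = {p:.2f} -> prefer Action {preferred_action(p)}")

Under these made-up inputs, Action A wins once the probability of harm exceeds roughly 5.6 percent. The value of the exercise is that disagreement shifts from the conclusion to the inputs, which is exactly the kind of evidence question Kalra raises.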

WELBURN The lack of diversity in the AI world is a huge concern. That leads to racial and gender biases in AI models themselves.

I think RAND can make diversity a strong part of our AI research agenda. That can come in part from bringing together a lot of different stakeholders and not just people with computer science degrees. How can we bring people like sociologists into this conversation, too?

GEIST I’d like to see RAND play the kind of vital role in these discussions about AI policy that we played in shaping policy that mitigated the threat of thermonuclear war back in the 1950s and 1960s. In fact, our predecessors invented methodologies back then that could either serve as the inspiration for or perhaps even be directly adapted to AI-related policy problems today.

Zooming out, I think humanity needs to lay out a research agenda that will get us to answer the right questions. Because until very recently, AI as an academic exercise has been pursued in a very ad hoc way. There hasn’t been a systematic research agenda designed to answer some very concrete questions. It may be that there’s more low-hanging fruit than is obvious if we frame the questions in very practical terms, especially now that there are so many more eyeballs on AI. The number of people trying to work on these problems has just exploded in the last few years.

ALSTOTT As Ed alludes to, it’s long-standing practice here at RAND to be looking forward multiple decades to contemplate different tech that could exist in the future, so we can understand the potential implications and identify evidence-based actions that might need to be taken now to mitigate future threats. We have started doing this with the AI of today and tomorrow and need to do much more.

We also need a lot more of the science of AI threat assessment. RAND is starting to be known as the place that’s doing that kind of analysis today. We just need to keep going. But we probably have at least a decade’s worth of work and analysis ahead of us. If we don’t have it all sorted out ahead of time, whatever the threat is may land before we’re ready, and then it’s too late.

BOUDREAUX Researchers have a special responsibility to look at the harms and the risks. This isn’t just looking at the technology itself but also at the context—looking into how AI is being integrated into criminal justice and education and employment systems, so we can see the interaction between AI and human well-being. More could also be done to engage affected stakeholders before AI systems are deployed in schools, health care, and so on. We also need to take a systemic approach, where we’re looking at the relationship between AI and all the other societal challenges we face.

It’s also worth thinking about building communities that are resilient to this broad range of crises. AI might play a role by fostering more human connection or providing a framework for deliberation on really challenging issues. But I don’t think there’s a technical fix alone. We need to have a much broader view of how we build resilient communities that can deal with societal challenges.


Benjamin Boudreaux is a policy researcher who studies the intersection of ethics, emerging technology, and security.

Jonathan Welburn is a senior researcher who studies emerging systemic risks, cyber deterrence, and market failures.

Jeff Alstott is a senior information scientist and directs the RAND Center for Technology and Security Policy.

Nidhi Kalra is a senior information scientist whose research examines climate change mitigation, adaptation, and decarbonization planning, as well as decisionmaking amid deep uncertainty.

Edward Geist is a policy researcher whose interests include Russia, civil defense, AI, and the potential effect of emerging technologies on nuclear strategy.

Special thanks to Anu Narayanan, associate director of the RAND National Security Research Division, who moderated the discussion, and to Gary Briggs, who organized this event. Excerpts presented here were edited for length and clarity.

Article link: https://www.rand.org/pubs/commentary/2024/03/is-ai-an-existential-risk-qa-with-rand-experts.html
