healthcarereimagined

Envisioning healthcare for the 21st century


The AI Apocalypse: A Scorecard – How worried are top AI experts about the threat posed by large language models like GPT-4? – IEEE Spectrum

Posted by timmreardon on 07/06/2023
Posted in: Uncategorized.

What should we make of OpenAI’s GPT-4, anyway? Is the large language model a major step on the way to an artificial general intelligence (AGI)—the insider’s term for an AI system with a flexible human-level intellect? And if we do create an AGI, might it be so different from human intelligence that it doesn’t see the point of keeping Homo sapiens around?

If you query the world’s best minds on basic questions like these, you won’t get anything like a consensus. Consider the question of GPT-4’s implications for the creation of an AGI. Among AI specialists, convictions range from Eliezer Yudkowsky’s view that GPT-4 is a clear sign of the imminence of AGI, to Rodney Brooks’s assertion that we’re absolutely no closer to an AGI than we were 30 years ago.

On the topic of the potential of GPT-4 and its successors to wreak civilizational havoc, there’s similar disunity. One of the earliest doomsayers was Nick Bostrom; long before GPT-4, he argued that once an AGI far exceeds our capabilities, it will likely find ways to escape the digital world and methodically destroy human civilization. On the other end are people like Yann LeCun, who reject such scenarios as sci-fi twaddle.

In between are researchers who worry about the abilities of GPT-4 and future instances of generative AI to cause major disruptions in employment, to exacerbate the biases in today’s society, and to generate propaganda, misinformation, and deep fakery on a massive scale. Worrisome? Yes, extremely so. Apocalyptic? No.

Many worried AI experts signed an open letter in March asking all AI labs to immediately pause “giant AI experiments” for six months. While the letter didn’t succeed in pausing anything, it did catch the attention of the general public and suddenly made AI safety a water-cooler conversation. Then, at the end of May, an overlapping set of experts—academics and executives—signed a one-sentence statement urging the world to take seriously the risk of “extinction from AI.”

Below, we’ve put together a kind of scorecard. IEEE Spectrum has distilled the published thoughts and pronouncements of 22 AI luminaries on large language models, the likelihood of an AGI, and the risk of civilizational havoc. We scoured news articles, social media feeds, and books to find public statements by these experts, then used our best judgment to summarize their beliefs and to assign them yes/no/maybe positions below. If you’re one of the luminaries and you’re annoyed because we got something wrong about your perspective, please let us know. We’ll fix it.

And if we’ve left out your favorite AI pundit, our apologies. Let us know in the comments section below whom we should have included, and why. And feel free to add your own pronouncements, too.
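For readers who want to poke at the numbers, the scorecard below is easy to treat as data. Here is a minimal sketch in Python: the names and yes/no/maybe positions are transcribed from the profiles that follow, while the dictionary layout and tally code are purely illustrative (nothing here is IEEE Spectrum's own tooling).

from collections import Counter

# Expert -> (Is an AGI likely?, Is civilizational disaster likely?)
# Positions transcribed from the scorecard below.
scorecard = {
    "Sam Altman":         ("yes",   "maybe"),
    "Jacob Andreas":      ("no",    "no"),
    "Emily M. Bender":    ("no",    "no"),
    "Yoshua Bengio":      ("maybe", "maybe"),
    "Nick Bostrom":       ("yes",   "yes"),
    "Rodney Brooks":      ("no",    "no"),
    "Sébastien Bubeck":   ("yes",   "maybe"),
    "Joy Buolamwini":     ("no",    "no"),
    "Timnit Gebru":       ("no",    "no"),
    "Alison Gopnik":      ("no",    "no"),
    "Dan Hendrycks":      ("yes",   "yes"),
    "Geoffrey Hinton":    ("yes",   "maybe"),
    "Christof Koch":      ("yes",   "maybe"),
    "Jaron Lanier":       ("no",    "no"),
    "Yann LeCun":         ("no",    "no"),
    "Gary Marcus":        ("no",    "maybe"),
    "Margaret Mitchell":  ("no",    "no"),
    "Melanie Mitchell":   ("no",    "no"),
    "Andrew Ng":          ("no",    "no"),
    "Max Tegmark":        ("yes",   "yes"),
    "Meredith Whittaker": ("no",    "no"),
    "Eliezer Yudkowsky":  ("yes",   "yes"),
}

# Tally each question's answers across all 22 experts.
agi_tally = Counter(agi for agi, _ in scorecard.values())
risk_tally = Counter(risk for _, risk in scorecard.values())

print("Is an AGI likely?        ", dict(agi_tally))   # {'yes': 8, 'no': 13, 'maybe': 1}
print("Civilizational disaster? ", dict(risk_tally))  # {'maybe': 6, 'no': 12, 'yes': 4}

The counts make the spread plain: a majority answer no to both questions, and only a handful answer an unqualified yes to civilizational disaster.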

SAM ALTMAN

CEO, OpenAI

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

LLMs are an important step toward artificial general intelligence, but it will be a “slow takeoff” in which we’ll have time to address the real risks that the technology brings.

Quote

“Future versions of AI will solve some of our most pressing problems, really increase the standard of life, and also figure out much better uses for human will and creativity.”

Photo: OpenAI

JACOB ANDREAS

Professor, MIT

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Large language models such as GPT-4 offer “a massive space of possibilities,” but even the best models today are not reliable or trustworthy, and a lot of work will be required to fix these problems.

Quote

“The thing that I’m most scared about has to do with… truthfulness and coherence issues.”

Photo: Gretchen Ertl/MIT

EMILY M. BENDER

Professor, University of Washington

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Powerful corporations’ hype about LLMs and their progress toward AGI creates a narrative that distracts from the need for regulations that protect people from these corporations.

Quote

“We can imagine other futures, but to do so, we have to maintain independence from the narrative being pushed by those who believe that AGI is desirable and that LLMs are the path to it.”

Photo: Emily M. Bender

YOSHUA BENGIO

Professor, University of Montreal

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Maybe

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

We are not prepared for the proliferation of powerful AI tools. The danger will not come from these tools becoming autonomous but rather from military applications and from misuse of the tools that sows disinformation and discrimination.

Quote

“We have passed a critical threshold: Machines can now converse with us and pretend to be human beings. This power can be misused for political purposes at the expense of democracy.”

Photo: Camille Gladu-Drouin

NICK BOSTROM

Professor, University of Oxford

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Yes

Core belief about the future of large language models

Sentience is a matter of degrees, and today’s LLMs could be considered to have some small degree of sentience. Future versions could have more.

Quote

“Variations of these AIs may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.”

Photo: University of Oxford

RODNEY BROOKS

Professor emeritus, MIT, and cofounder, RobustAI

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

The rise of large language models, and the furor accompanying it, is not much different from many previous upheavals in technology in general and AI in particular.

Quote

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”

Photo: Christopher Michel

SÉBASTIEN BUBECK

Research manager, Microsoft Research

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

GPT-4 has a type of generalized intelligence and exhibits a flexible understanding of concepts. But it lacks certain fundamental building blocks of human intelligence, such as memory and the ability to learn.

Quote

“All of the things I thought [GPT-4] wouldn’t be able to do? It was certainly able to do many of them—if not most of them.”

Photo: Microsoft Research

JOY BUOLAMWINI

Founder, Algorithmic Justice League

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

While LLMs are not likely to lead to a civilization-crushing AGI, people’s fear of that hypothetical can be used to slow down the progress of AI and address pressing near-term concerns.

Quote

“Honest question: If you believe you are unleashing the end of the world, why continue your current path?”

Photo: Getty Images

TIMNIT GEBRU

Founder, Distributed AI Research Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Discussion of AGI is a deliberate distraction from the real and pressing risks of today’s LLMs. The companies responsible need regulations to increase transparency and accountability and to end exploitative labor practices.

Quote

“Congress needs to focus on regulating corporations and their practices, rather than playing into their hype of ‘powerful digital minds.’”

Photo: Timnit Gebru

ALISON GOPNIK

Professor, UC Berkeley

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Human intelligence evolved from our interactions with the natural world. It’s unlikely that LLMs will become intelligent through text alone.

Quote

“What [LLMs] let us do is take all the words, all the text that people have written over all time, and summarize those in a way that is effective and lets us interact. I think what it isn’t is a new kind of intelligence.”

Photo: Gary Doak/Alamy

DAN HENDRYCKS

Director and cofounder, Center for AI Safety

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Yes

Core belief about the future of large language models

Economic and strategic pressures are pushing AI companies to develop powerful AI systems that can’t be reliably controlled. Unless AI safety research becomes a worldwide priority, humanity is in danger of extinction.

Quote

“Whereas AI researchers once spoke of ‘designing’ AIs, they now speak of ‘steering’ them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand.”

Photo: Dan Hendrycks

GEOFFREY HINTON

Professor, University of Toronto

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

LLM technology is advancing too rapidly; it shouldn’t advance any further until scientists are confident that they can control it.

Quote

“Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI. And now I think it may be 20 years or less.”

Photo: University of Toronto

CHRISTOF KOCH

Chief scientist, MindScope Program, Allen Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

GPT-4 and other large language models will exceed human capabilities in some endeavors, and approach AGI capabilities, without human-type understanding or consciousness.

Quote

“What it shows, very clearly, is that there are different routes to intelligence.”

Photo: Allen Institute

JARON LANIER

Computer scientist, entrepreneur, author, artist

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Generative AI and chatbots are an important advance because they offer users choice. By offering a range of responses, these models will erode the illusion of the “monolithic truth” of the Internet and AI.

Quote

“This idea of surpassing human ability is silly because it’s made of human abilities. It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”

Photo: Jaron Lanier

YANN LECUN

Chief AI scientist, Meta

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

GPT-4 itself is dazzling in some ways, but it is not particularly innovative. And GPT-style large language models will never lead to an AI with common sense and real, human-type understanding.

Quote

“The amplification of human intelligence by machine will enable a new renaissance or a new age of enlightenment, propelled by an acceleration of scientific, technical, medical, and social progress thanks to AI.”

Photo: Meta

GARY MARCUS

Professor, NYU

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

Maybe

Core belief about the future of large language models

Making large language models more powerful is necessary but not sufficient to create an AGI.

Quote

“There is still an immense amount of work to be done in making machines that truly can comprehend and reason about the world around them.”

Photo: NYU

MARGARET MITCHELL

Chief ethics scientist, Hugging Face

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Narratives that present AGI as ushering in either a utopia or extinction both contribute to hype and disguise the concentration of power in a handful of corporations.

Quote

“Ignoring active harms right now is a privilege that some of us don’t have.”

Photo: Hugging Face

MELANIE MITCHELL

Professor, Santa Fe Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

Both the abilities and the dangers of LLMs are overhyped. We should be thinking more about LLMs’ immediate risks, such as disinformation and displays of harmful bias.

Quote

“We humans are continually at risk of over-anthropomorphizing and over-trusting these systems, attributing agency to them when none is there.”

Photo: Melanie Mitchell

ANDREW NG

Founder and CEO, Landing AI

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

While the latest LLMs are far from being AGIs, they have some superhuman capabilities that can be harnessed for human advancement.

Quote

“In the past year, I think we’ve made one year of wildly exciting progress in what might be a 50- or 100-year journey.”

Photo: Andrew Ng

MAX TEGMARK

Professor, MIT and president, Future of Life Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Yes

Core belief about the future of large language models

More powerful LLMs and AI systems should be developed only once researchers are confident that their effects will be positive and their risks will be manageable.

Quote

“Our letter mainstreamed pausing; [the statement from the Center for AI Safety] mainstreams extinction. Now a constructive open conversation can finally start.”

Photo: Max Tegmark

MEREDITH WHITTAKER

President, Signal Foundation

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

No

Is an AGI likely to cause civilizational disaster if we do nothing?

No

Core belief about the future of large language models

LLMs are already doing serious damage to society by reflecting all the biases inherent in their training data.

Quote

“Why do we need to create these? What are the collateral consequences of deploying these models in contexts where they’re going to be informing people’s decisions?”

Photo: Signal Foundation

ELIEZER YUDKOWSKY

Cofounder, Machine Intelligence Research Institute

Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?

Yes

Is an AGI likely to cause civilizational disaster if we do nothing?

Yes

Core belief about the future of large language models

LLMs could lead to an AGI, and the first superintelligent AGI will, by default, kill literally everyone on the planet. And we are not on track to solve this problem on the first try.

Quote

“Unaligned operation at a dangerous level of intelligence kills everybody on Earth, and then we don’t get to try again.”

Article link: https://spectrum.ieee.org/artificial-general-intelligence

