
What should we make of OpenAI’s GPT-4, anyway? Is the large language model a major step on the way to an artificial general intelligence (AGI)—the insider’s term for an AI system with a flexible human-level intellect? And if we do create an AGI, might it be so different from human intelligence that it doesn’t see the point of keeping Homo sapiens around?
If you query the world’s best minds on basic questions like these, you won’t get anything like a consensus. Consider the question of GPT-4’s implications for the creation of an AGI. Among AI specialists, convictions range from Eliezer Yudkowsky’s view that GPT-4 is a clear sign of the imminence of AGI, to Rodney Brooks’s assertion that we’re absolutely no closer to an AGI than we were 30 years ago.
On the topic of the potential of GPT-4 and its successors to wreak civilizational havoc, there’s similar disunity. One of the earliest doomsayers was Nick Bostrom; long before GPT-4, he argued that once an AGI far exceeds our capabilities, it will likely find ways to escape the digital world and methodically destroy human civilization. On the other end are people like Yann LeCun, who reject such scenarios as sci-fi twaddle.
In between are researchers who worry about the abilities of GPT-4 and future instances of generative AI to cause major disruptions in employment, to exacerbate the biases in today’s society, and to generate propaganda, misinformation, and deep fakery on a massive scale. Worrisome? Yes, extremely so. Apocalyptic? No.
Many worried AI experts signed an open letter in March asking all AI labs to immediately pause “giant AI experiments” for six months. While the letter didn’t succeed in pausing anything, it did catch the attention of the general public, and suddenly made AI safety a water-cooler conversation. Then, at the end of May, an overlapping set of experts—academics and executives—signed a one-sentence statement urging the world to take seriously the risk of “extinction from AI.”
Below, we’ve put together a kind of scorecard. IEEE Spectrum has distilled the published thoughts and pronouncements of 22 AI luminaries on large language models, the likelihood of an AGI, and the risk of civilizational havoc. We scoured news articles, social media feeds, and books to find public statements by these experts, then used our best judgment to summarize their beliefs and to assign them yes/no/maybe positions below. If you’re one of the luminaries and you’re annoyed because we got something wrong about your perspective, please let us know. We’ll fix it.
And if we’ve left out your favorite AI pundit, our apologies. Let us know in the comments section below whom we should have included, and why. And feel free to add your own pronouncements, too.

CEO, OpenAI
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
LLMs are an important step toward artificial general intelligence, but it will be a “slow takeoff” in which we’ll have time to address the real risks that the technology brings.
Photo: OpenAI

Professor, MIT
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Large language models such as GPT-4 offer “a massive space of possibilities,” but even the best models today are not reliable or trustworthy, and a lot of work will be required to fix these problems.
Quote
“The thing that I’m most scared about has to do with… truthfulness and coherence issues.”
Photo: Gretchen Ertl/MIT

Professor, University of Washington
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Powerful corporations’ hype about LLMs and their progress toward AGI create a narrative that distracts from the need for regulations that protect people from these corporations.
Photo: Emily M. Bender

Professor, University of Montreal
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Maybe
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
We are not prepared for the proliferation of powerful AI tools. The danger will not come from these tools becoming autonomous but rather from military applications and from misuse of the tools that sows disinformation and discrimination.
Photo: Camille Gladu-Drouin

Professor, University of Oxford
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Yes
Core belief about the future of large language models
Sentience is a matter of degrees, and today’s LLMs could be considered to have some small degree of sentience. Future versions could have more.
Photo: University of Oxford

Professor emeritus, MIT, and cofounder, RobustAI
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
The rise of large language models and the accompanying furor is not much different from many previous such upheavals in technology in general and AI in particular.
Photo: Christopher Michel

Research manager, Microsoft Research
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
GPT-4 has a type of generalized intelligence and exhibits a flexible understanding of concepts. But it lacks certain fundamental building blocks of human intelligence, such as memory and the ability to learn.
Photo: Microsoft Research

Founder, Algorithmic Justice League
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
While LLMs are not likely to lead to a civilization-crushing AGI, people’s fear of that hypothetical can be used to slow down the progress of AI and address pressing near-term concerns.
Photo: Getty Images

Founder, Distributed AI Research Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Discussion of AGI is a deliberate distraction from the real and pressing risks of today’s LLMs. The companies responsible need regulations to increase transparency and accountability and to end exploitative labor practices.
Photo: Timnit Gebru

Professor, UC Berkeley
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Human intelligence evolved from our interactions with the natural world. It’s unlikely that LLMs will become intelligent through text alone.
Photo: Gary Doak/Alamy

Director and cofounder, Center for AI Safety
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Yes
Core belief about the future of large language models
Economic and strategic pressures are pushing AI companies to develop powerful AI systems that can’t be reliably controlled. Unless AI safety research becomes a worldwide priority, humanity is in danger of extinction.
Photo: Dan Hendrycks

Professor, University of Toronto
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
LLM technology is advancing too rapidly; it shouldn’t advance any further until scientists are confident that they can control it.
Photo: University of Toronto

Chief scientist, MindScope Program, Allen Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
GPT-4 and other large language models will exceed human capabilities in some endeavors, and approach AGI capabilities, without human-type understanding or consciousness.
Quote
“What it shows, very clearly, is that there are different routes to intelligence.”
Photo: Allen Institute

Computer scientist, entrepreneur, author, artist
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Generative AI and chatbots are an important advance because they offer users choice. By offering a range of responses, these models will erode the illusion of the “monolithic truth” of the Internet and AI.
Photo: Jaron Lanier

Chief AI scientist, Meta
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
GPT-4 itself is dazzling in some ways, but it is not particularly innovative. And GPT-style large language models will never lead to an AI with common sense and real, human-type understanding.
Photo: Meta

Professor, NYU
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
Making large language models more powerful is necessary but not sufficient to create an AGI.
Photo: NYU

Chief ethics scientist, Hugging Face
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Narratives that present AGI as ushering in either a utopia or extinction both contribute to hype and disguise the concentration of power in a handful of corporations.
Quote
“Ignoring active harms right now is a privilege that some of us don’t have.”
Photo: Hugging Face

Professor, Santa Fe Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Both the abilities and the dangers of LLMs are overhyped. We should be thinking more about LLMs’ immediate risks, such as disinformation and displays of harmful bias.
Photo: Melanie Mitchell

Founder and CEO, Landing AI
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
While the latest LLMs are far from being AGIs, they have some superhuman capabilities that can be harnessed for human advancement.
Photo: Andrew Ng

Professor, MIT and president, Future of Life Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Yes
Core belief about the future of large language models
More powerful LLMs and AI systems should be developed only once researchers are confident that their effects will be positive and their risks will be manageable.
Photo: Max Tegmark

President, Signal Foundation
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
LLMs are already doing serious damage to society by reflecting all the biases inherent in their training data.
Photo: Signal Foundation

Cofounder, Machine Intelligence Research Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Yes
Core belief about the future of large language models
LLMs could lead to an AGI, and the first superintelligent AGI will, by default, kill literally everyone on the planet—and we are not on track to solve this problem on the first try.
Article link: https://spectrum.ieee.org/artificial-general-intelligence

