What should we make of OpenAI’s GPT-4, anyway? Is the large language model a major step on the way to an artificial general intelligence (AGI)—the insider’s term for an AI system with a flexible human-level intellect? And if we do create an AGI, might it be so different from human intelligence that it doesn’t see the point of keeping Homo sapiens around?
If you query the world’s best minds on basic questions like these, you won’t get anything like a consensus. Consider the question of GPT-4’s implications for the creation of an AGI. Among AI specialists, convictions range from Eliezer Yudkowsky’s view that GPT-4 is a clear sign of the imminence of AGI, to Rodney Brooks’s assertion that we’re absolutely no closer to an AGI than we were 30 years ago.
On the topic of the potential of GPT-4 and its successors to wreak civilizational havoc, there’s similar disunity. One of the earliest doomsayers was Nick Bostrom; long before GPT-4, he argued that once an AGI far exceeds our capabilities, it will likely find ways to escape the digital world and methodically destroy human civilization. On the other end are people like Yann LeCun, who reject such scenarios as sci-fi twaddle.
In between are researchers who worry about the abilities of GPT-4 and future instances of generative AI to cause major disruptions in employment, to exacerbate the biases in today’s society, and to generate propaganda, misinformation, and deep fakery on a massive scale. Worrisome? Yes, extremely so. Apocalyptic? No.
Many worried AI experts signed an open letter in March asking all AI labs to immediately pause “giant AI experiments” for six months. While the letter didn’t succeed in pausing anything, it did catch the attention of the general public and suddenly made AI safety a water-cooler conversation. Then, at the end of May, an overlapping set of experts—academics and executives—signed a one-sentence statement urging the world to take seriously the risk of “extinction from AI.”
Below, we’ve put together a kind of scorecard. IEEE Spectrum has distilled the published thoughts and pronouncements of 22 AI luminaries on large language models, the likelihood of an AGI, and the risk of civilizational havoc. We scoured news articles, social media feeds, and books to find public statements by these experts, then used our best judgment to summarize their beliefs and to assign them yes/no/maybe positions. If you’re one of the luminaries and you’re annoyed because we got something wrong about your perspective, please let us know. We’ll fix it.
And if we’ve left out your favorite AI pundit, our apologies. Let us know in the comments section below whom we should have included, and why. And feel free to add your own pronouncements, too.
SAM ALTMAN
CEO, OpenAI
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
LLMs are an important step toward artificial general intelligence, but it will be a “slow takeoff” in which we’ll have time to address the real risks that the technology brings.
Quote
“Future versions of AI will solve some of our most pressing problems, really increase the standard of life, and also figure out much better uses for human will and creativity.”
Photo: OpenAI
JACOB ANDREAS
Professor, MIT
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Large language models such as GPT-4 offer “a massive space of possibilities,” but even the best models today are not reliable or trustworthy, and a lot of work will be required to fix these problems.
Quote
“The thing that I’m most scared about has to do with… truthfulness and coherence issues.”
Photo: Gretchen Ertl/MIT
EMILY M. BENDER
Professor, University of Washington
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Powerful corporations’ hype about LLMs and their progress toward AGI creates a narrative that distracts from the need for regulations that protect people from these corporations.
Quote
“We can imagine other futures, but to do so, we have to maintain independence from the narrative being pushed by those who believe that AGI is desirable and that LLMs are the path to it.”
Photo: Emily M. Bender
YOSHUA BENGIO
Professor, University of Montreal
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Maybe
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
We are not prepared for the proliferation of powerful AI tools. The danger will not come from these tools becoming autonomous but rather from military applications and from the misuse of these tools to sow disinformation and discrimination.
Quote
“We have passed a critical threshold: Machines can now converse with us and pretend to be human beings. This power can be misused for political purposes at the expense of democracy.”
Photo: Camille Gladu-Drouin
NICK BOSTROM
Professor, University of Oxford
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Yes
Core belief about the future of large language models
Sentience is a matter of degree, and today’s LLMs could be considered to have some small degree of sentience. Future versions could have more.
Quote
“Variations of these AIs may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.”
Photo: University of Oxford
RODNEY BROOKS
Professor emeritus, MIT, and cofounder, Robust AI
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
The rise of large language models and the accompanying furor are not much different from many previous upheavals in technology in general and AI in particular.
Quote
“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”
Photo: Christopher Michel
SÉBASTIEN BUBECK
Research manager, Microsoft Research
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
GPT-4 has a type of generalized intelligence and exhibits a flexible understanding of concepts. But it lacks certain fundamental building blocks of human intelligence, such as memory and the ability to learn.
Quote
“All of the things I thought [GPT-4] wouldn’t be able to do? It was certainly able to do many of them—if not most of them.”
Photo: Microsoft Research
JOY BUOLAMWINI
Founder, Algorithmic Justice League
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
While LLMs are not likely to lead to a civilization-crushing AGI, people’s fear of that hypothetical can be used to slow down the progress of AI and address pressing near-term concerns.
Quote
“Honest question: If you believe you are unleashing the end of the world, why continue your current path?”
Photo: Getty Images
TIMNIT GEBRU
Founder, Distributed AI Research Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Discussion of AGI is a deliberate distraction from the real and pressing risks of today’s LLMs. The companies responsible need regulations to increase transparency and accountability and to end exploitative labor practices.
Quote
“Congress needs to focus on regulating corporations and their practices, rather than playing into their hype of ‘powerful digital minds.’”
Photo: Timnit Gebru
ALISON GOPNIK
Professor, UC Berkeley
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Human intelligence evolved from our interactions with the natural world. It’s unlikely that LLMs will become intelligent through text alone.
Quote
“What [LLMs] let us do is take all the words, all the text that people have written over all time, and summarize those in a way that is effective and lets us interact. I think what it isn’t is a new kind of intelligence.”
Photo: Gary Doak/Alamy
DAN HENDRYCKS
Director and cofounder, Center for AI Safety
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Yes
Core belief about the future of large language models
Economic and strategic pressures are pushing AI companies to develop powerful AI systems that can’t be reliably controlled. Unless AI safety research becomes a worldwide priority, humanity is in danger of extinction.
Quote
“Whereas AI researchers once spoke of ‘designing’ AIs, they now speak of ‘steering’ them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand.”
Photo: Dan Hendrycks
GEOFFREY HINTON
Professor, University of Toronto
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
LLM technology is advancing too rapidly; it shouldn’t advance any further until scientists are confident that they can control it.
Quote
“Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI. And now I think it may be 20 years or less.”
Photo: University of Toronto
CHRISTOF KOCH
Chief scientist, MindScope Program, Allen Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
GPT-4 and other large language models will exceed human capabilities in some endeavors, and approach AGI capabilities, without human-type understanding or consciousness.
Quote
“What it shows, very clearly, is that there are different routes to intelligence.”
Photo: Allen Institute
JARON LANIER
Computer scientist, entrepreneur, author, artist
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Generative AI and chatbots are an important advance because they offer users choice. By offering a range of responses, these models will erode the illusion of the “monolithic truth” of the Internet and AI.
Quote
“This idea of surpassing human ability is silly because it’s made of human abilities. It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”
Photo: Jaron Lanier
YANN LECUN
Chief AI scientist, Meta
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
GPT-4 itself is dazzling in some ways, but it is not particularly innovative. And GPT-style large language models will never lead to an AI with common sense and real, human-type understanding.
Quote
“The amplification of human intelligence by machine will enable a new renaissance or a new age of enlightenment, propelled by an acceleration of scientific, technical, medical, and social progress thanks to AI.”
Photo: Meta
GARY MARCUS
Professor, NYU
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
Maybe
Core belief about the future of large language models
Making large language models more powerful is necessary but not sufficient to create an AGI.
Quote
“There is still an immense amount of work to be done in making machines that truly can comprehend and reason about the world around them.”
Photo: NYU
MARGARET MITCHELL
Chief ethics scientist, Hugging Face
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Narratives that present AGI as ushering in either a utopia or extinction both contribute to hype and disguise the concentration of power in a handful of corporations.
Quote
“Ignoring active harms right now is a privilege that some of us don’t have.”
Photo: Hugging Face
MELANIE MITCHELL
Professor, Santa Fe Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
Both the abilities and the dangers of LLMs are overhyped. We should be thinking more about LLMs’ immediate risks, such as disinformation and displays of harmful bias.
Quote
“We humans are continually at risk of over-anthropomorphizing and over-trusting these systems, attributing agency to them when none is there.”
Photo: Melanie Mitchell
ANDREW NG
Founder and CEO, Landing AI
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
While the latest LLMs are far from being AGIs, they have some superhuman capabilities that can be harnessed for human advancement.
Quote
“In the past year, I think we’ve made one year of wildly exciting progress in what might be a 50- or 100-year journey.”
Photo: Andrew Ng
MAX TEGMARK
Professor, MIT, and president, Future of Life Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Yes
Core belief about the future of large language models
More powerful LLMs and AI systems should be developed only once researchers are confident that their effects will be positive and their risks will be manageable.
Quote
“Our letter mainstreamed pausing; [the statement from the Center for AI Safety] mainstreams extinction. Now a constructive open conversation can finally start.”
Photo: Max Tegmark
MEREDITH WHITTAKER
President, Signal Foundation
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
No
Is an AGI likely to cause civilizational disaster if we do nothing?
No
Core belief about the future of large language models
LLMs are already doing serious damage to society by reflecting all the biases inherent in their training data.
Quote
“Why do we need to create these? What are the collateral consequences of deploying these models in contexts where they’re going to be informing people’s decisions?”
Photo: Signal Foundation
ELIEZER YUDKOWSKY
Cofounder, Machine Intelligence Research Institute
Is the success of GPT-4 and today’s other large language models a sign that an AGI is likely?
Yes
Is an AGI likely to cause civilizational disaster if we do nothing?
Yes
Core belief about the future of large language models
LLMs could lead to an AGI, and the first superintelligent AGI will, by default, kill literally everyone on the planet. And we are not on track to solve this problem on the first try.
Quote
“Unaligned operation at a dangerous level of intelligence kills everybody on Earth, and then we don’t get to try again.”