What are the potential risks associated with artificial intelligence? Might any of these be catastrophic or even existential? And as momentum builds toward boundless applications of this technology, how might humanity reduce AI risk and navigate an uncertain future?
At a recent RAND event, a panel of five experts explored these emerging questions. While the gathering highlighted RAND’s diversity in academic disciplines and perspectives, the panelists were unsurprisingly unanimous that independent, high-quality research will play a pivotal role in exploring AI’s short- and long-term risks—as well as the implications for public policy.
What do you view as the biggest risks posed by AI?
BENJAMIN BOUDREAUX AI could pose a significant risk to our quality of life and the institutions we need to flourish. The risk I’m concerned about isn’t a sudden, immediate event. It’s a set of incremental harms that worsen over time. Like climate change, AI might be a slow-moving catastrophe, one that diminishes the institutions and agency we need to live meaningful lives.
This risk doesn’t require superintelligence or artificial general intelligence or AI sentience. Rather, it’s a continuation and worsening of effects that are already happening. For instance, there’s significant evidence that social media, a form of AI, has serious effects on institutions and mental well-being.
AI seems to promote mistrust that fractures shared identities and a shared sense of reality. There’s already evidence that AI has undermined the credibility and legitimacy of our election system. And there’s significant evidence that AI has exacerbated inequity and bias. There’s evidence that AI has impacted industries like journalism, producing cascading effects across society. And as AI becomes a driver of international competition, it could become harder to respond to other catastrophes, like another pandemic or climate change. As AI gets more capable, as AI companies become more powerful, and as we become dependent on AI, I worry that all those existing risks and harms will get worse.
JONATHAN WELBURN It’s important to note that we’ve had previous periods in history with revolutionary technological shocks: electricity, the printing press, the internet. This moment is similar to those. AI will lead to a series of technological innovations, some of which we might be able to imagine now, but many that we won’t be able to imagine.
AI might exacerbate existing risks and create new ones. I think about inequity and inequality. AI bias might undermine social and economic mobility. Racial and gender biases might be baked into models. Things like deepfakes might undermine our trust in institutions.
But I’m looking at more of a total system collapse as a worst-case scenario. The world in 2023 already had high levels of inequality. And so, building from that foundation, where there’s already a high concentration of wealth and power—that’s where the potential worst-case scenario is for me. Capital owners hold all the newest technology that’s about to be created, all the wealth, and all the decisionmaking. And this is a system that undermines many democratic norms.
But I don’t see this as a permanent state, necessarily. I see it as a temporary state that could have generations of harm. We would transition through this AI shock. But it’s still up to humanity to create the policy solutions that prevent the worst-case scenario.
JEFF ALSTOTT There’s a real diversity of which risks we’re all studying at RAND. And that doesn’t mean that any of these risks are invalid or necessarily more or less important than the others. So, I agree with everything that Ben and Jon have said.
One of the risks that keeps me up at night is the resurrection of smallpox. The story here is formerly exquisite technical knowledge being used by bad actors. Bioweapons happen to be one example where, historically, the barriers have been information and knowledge. You no longer need much in the way of specialized matériel or expensive equipment to achieve devastating effects, such as launching a pandemic. AI could close the knowledge gap. And bio is just one example. The same story repeats with AI and chemical weapons, nuclear weapons, cyber weapons.
Then, eventually, there’s the issue of not just bad people doing bad things but AIs themselves running off and doing bad things, plus anything else they want to do—AI run amok.
NIDHI KALRA To me, AI is gas on the fire. I’m less concerned with the risk of AI than with the fires themselves—the literal fires of climate change and potential nuclear war, and the figurative fires of rising income inequality and racial animus. Those are the realities of the world. We’ve been living those for generations. And so, I’m not any more awake at night because of AI than I was already because of the wildfires in Canada, for instance.
But I do have a concern: What does the world look like when we, even more than is already the case today, can’t distinguish fact from fiction? What if we can’t distinguish a human being from something else? I don’t know what that does to the kind of humanity that we’ve lived with for the entire existence of our species. I worry about a future in which we don’t know who we are. We can’t recognize each other. That vague foreboding of a loss of humanity is what, if anything, keeps me up. Otherwise, I think we’ll be just as fine as we were yesterday.
EDWARD GEIST AI threatens to be an amplifier for human stupidity. That characterization captures the types of harms that are already occurring, like what Ben was discussing, but also more speculative types of harms. So, for instance, the idea of machines that do what you ask for—rather than what you wanted or should have asked for—or machines that make the same kind of mistakes that humans make, only faster and in larger quantities.
Some of you have addressed this implicitly, but let’s tackle the question head on, as briefly as you can: With the caveat that there’s still much uncertainty surrounding AI, do you think it poses an existential risk?
WELBURN No. I don’t think AI poses an irreversible harm to humanity. I think it can worsen our lives. I think it can have long-lasting harm. But I think it’s ultimately something that we can recover from.
KALRA I second Jon: No. We are an incredibly resilient species, looking back over millions of years. I think that’s not to be taken lightly.
BOUDREAUX Yes. An existential risk is an unrecoverable harm to humanity’s potential. One way that could happen is that humans die. But the other way that can happen is if we no longer engage in meaningful human activity, if we no longer have embodied experience, if we’re no longer connected to our fellow humans. That, I think, is the existential risk of AI.
ALSTOTT Yes.
GEIST I’m not sure, and here’s why: I’m a nuclear strategist, and I’ve learned through my studies just how hard it can be to tell the difference.
For example, is the hydrogen bomb an existential risk? I think most laypeople would probably say, “Yes, of course.” But even using a straightforward definition of existential risk (humans go extinct), the answer isn’t obvious.
The current scientific understanding is that the nuclear weapons that exist today could probably not be used in a way that would result in human extinction. That scenario would require more and bigger bombs. That said, a nuclear arsenal that could plausibly cause human extinction via fallout and other effects does appear to be something a country could build if it really wanted to.
Are there AI policies that reasonable people could consider and potentially agree on, despite thinking differently about the question of whether AI poses an existential risk?
KALRA Perhaps ensuring that we’re not moving too quickly in integrating AI into our critical infrastructure systems.
BOUDREAUX Transparency or oversight, so we can actually audit tech companies for the claims they’re making. They extoll all these benefits of AI, so they should show us the data. Let us look behind the scenes, as much as we can, to see whether those claims are true. And this isn’t just a role for researchers. For instance, the Federal Trade Commission might need greater funding to ensure that it can prohibit unfair and deceptive trade practices. I think just holding companies to the standards that they themselves have set could be a good step.
What else are you thinking about the role that research can play as we move into this new era?
KALRA I want to see questions about AI policy problems asked in a very RAND-like way: What would we have to believe about our world and about the costs of various actions to prefer Action A over Action B? What’s the quantity of evidence you need for this to be a good decision? Do we have anything resembling that level of evidence? What are the trade-offs if we’re right? What are the trade-offs if we’re wrong? That’s the kind of framing I’d like to see in discussions of AI policy.
I think RAND can make diversity a strong part of our AI research agenda. That can come in part from bringing together a lot of different stakeholders and not just people with computer science degrees. How can we bring people like sociologists into this conversation, too?
GEIST I’d like to see RAND play the kind of vital role in these discussions about AI policy that we played in shaping policy that mitigated the threat of thermonuclear war back in the 1950s and 1960s. In fact, our predecessors invented methodologies back then that could either serve as the inspiration for or perhaps even be directly adapted to AI-related policy problems today.
Zooming out, I think humanity needs to lay out a research agenda that will get us to answer the right questions. Because until very recently, AI as an academic exercise has been pursued in a very ad hoc way. There hasn’t been a systematic research agenda designed to answer some very concrete questions. It may be that there’s more low-hanging fruit than is obvious if we frame the questions in very practical terms, especially now that there are so many more eyeballs on AI. The number of people trying to work on these problems has just exploded in the last few years.
ALSTOTT As Ed alludes to, it’s long-standing practice here at RAND to be looking forward multiple decades to contemplate different tech that could exist in the future, so we can understand the potential implications and identify evidence-based actions that might need to be taken now to mitigate future threats. We have started doing this with the AI of today and tomorrow and need to do much more.
We also need a lot more of the science of AI threat assessment. RAND is starting to be known as the place that’s doing that kind of analysis today. We just need to keep going. But we probably have at least a decade’s worth of work and analysis that needs to happen. So if we don’t have it all sorted out ahead of time, it might be that, whatever the threat is, it lands, and then it’s too late.
BOUDREAUX Researchers have a special responsibility to look at the harms and the risks. This isn’t just looking at the technology itself but also at the context—looking into how AI is being integrated into criminal justice and education and employment systems, so we can see the interaction between AI and human well-being. More could also be done to engage affected stakeholders before AI systems are deployed in schools, health care, and so on. We also need to take a systemic approach, where we’re looking at the relationship between AI and all the other societal challenges we face.
It’s also worth thinking about building communities that are resilient to this broad range of crises. AI might play a role by fostering more human connection or providing a framework for deliberation on really challenging issues. But I don’t think there’s a technical fix alone. We need to have a much broader view of how we build resilient communities that can deal with societal challenges.
Benjamin Boudreaux is a policy researcher who studies the intersection of ethics, emerging technology, and security.
Jonathan Welburn is a senior researcher who studies emerging systemic risks, cyber deterrence, and market failures.
Jeff Alstott is a senior information scientist and directs the RAND Center for Technology and Security Policy.
Nidhi Kalra is a senior information scientist whose research examines climate change mitigation, adaptation, and decarbonization planning, as well as decisionmaking amid deep uncertainty.
Edward Geist is a policy researcher whose interests include Russia, civil defense, AI, and the potential effect of emerging technologies on nuclear strategy.
Special thanks to Anu Narayanan, associate director of the RAND National Security Research Division, who moderated the discussion, and to Gary Briggs, who organized this event. Excerpts presented here were edited for length and clarity.
Join in our Next US Federal LCNC Subcommunity of Practice Session on March 27th and Embark on the Next Chapter of Innovation with Us!
US Federal IT Leaders, Low-Code/No-Code Program Managers, and Industry Partners:
We have gathered momentum since our inaugural US Federal LCNC Subcommunity of Practice meeting in December! Following our kickoff, we spent the last couple of months crystallizing our collective vision, solidifying our committee goals, and agreeing on a robust set of roadmap initiatives that we are excited to move forward on. In addition, we’re proud to share that we’re also celebrating multiple quick-wins! Some highlights include:
· Creating an Agency LCNC “Battle-Buddy” Mentorship Program where Agency “Mentees” can benefit from Mentor experiences to accelerate success.
· Establishing a shared artifact repository where Agencies can now contribute and leverage artifacts, including best practices, training materials, product knowledge, and more.
· Developing Cross-Agency relationships. The US Federal LCNC SCoP is successfully facilitating cross-Agency connections to empower members in their LCNC adoption and maturity efforts.
· Expanding Membership & Corporate Knowledge: Our LCNC SCoP is growing, and the expanding membership is increasing our community’s corporate knowledge around LCNC and acting as a force-multiplier toward positive outcomes.
The seeds of big ideas are taking root, and the enthusiasm and efforts of this group are nothing short of phenomenal!
With that said, we’re just two weeks away from our next full session of the U.S. Federal LCNC Subcommunity of Practice, sponsored by the U.S. Federal Chief Information Officers (CIO) Council and taking place on March 27, 2024. This session will feature:
· Through partnership with ATARC (Advanced Technology Academic Research Center), we’ll showcase the Appian Platform (we’re also slating others for future sessions)
· A compelling Agency LCNC implementation vignette presented by the FDA
· An insightful presentation from Gartner on Modernizing Your Application Stack with Low Code and No Code
Help us in our effort to further expand Federal Agency participation by spreading the word! Agency LCNC representatives, Federal IT modernization champions, and Industry partners can register for Subcommittee membership by joining our listserv. To do this, send an email (from your government domain) to LCNC-subscribe-request@listserv.gsa.gov requesting US Federal Low Code No Code Subcommunity of Practice membership.
Let’s continue shaping the future of US Government IT modernization together through LCNC!
WASHINGTON — Today, the U.S. Department of Veterans Affairs, Department of Defense, and Federal Electronic Health Record Modernization office launched the Federal Electronic Health Record (EHR) at the Captain James A. Lovell Federal Health Care Center (Lovell FHCC) in North Chicago, Illinois. This is the first joint deployment of the federal EHR, which DOD calls MHS GENESIS, at a joint VA and Department of Defense (DOD) facility.
As the only fully integrated, jointly run VA and DOD health care system in the country, Lovell FHCC provides health care to approximately 75,000 patients each year, including Veterans, service members and their families, and Navy recruits. The joint deployment ensures that all patients who visit the facility will receive care that is coordinated through a single fully integrated EHR system. The Federal EHR also improves the ability for VA and DOD to coordinate care and share data with each other and the rest of the U.S. health care system.
“The Federal EHR will enhance care for all beneficiaries who walk through our doors, whether they are Veterans, Navy recruits, students, active-duty service members, their dependents, or retirees,” said Dr. Robert Buckley, Lovell FHCC Director. “It enables a continuum of care that will enhance our operations as we work to optimize health outcomes for those we serve.”
“This joint deployment of the Federal EHR at Lovell FHCC will provide a more coordinated experience for patients and the clinicians who care for them,” said Dr. Neil Evans, acting program executive director of the Electronic Health Record Modernization Integration Office. “Additionally, while VA continues with the broader reset of our electronic health record modernization program, we are learning lessons from this deployment to inform our future decisions.”
“The launch of the Federal EHR at Lovell FHCC will help DOD and VA deliver on the promise made to those who serve our country to provide seamless care from their first day of active service to the transition to veteran status,” said HON Lester Martinez-Lopez, Assistant Secretary of Defense for Health Affairs. “A joint electronic health record system demonstrates the power of technology to improve health care delivery, and we look forward to continued collaboration with our VA partners.”
“Our deployment of the Federal EHR at Lovell FHCC will significantly advance interoperability,” said Mr. Bill Tinston, FEHRM Director. “This will not only benefit the patients and staff in North Chicago, but all joint sites that need joint solutions to effectively deliver care.”
The new, modernized federal EHR will meaningfully improve patient health outcomes and benefits decisions. It is particularly critical at Lovell FHCC, so all patients – and clinicians – at the facility can utilize one EHR system. The DOD has now completed installation of the federal EHR at all its garrison hospitals and clinics throughout the world.
VA is moving forward with deployment at Lovell FHCC despite pausing all other deployments under a reset of its Electronic Health Record Modernization (EHRM) program. VA continues to closely examine the issues that clinicians and other end users are experiencing at its sites using the Federal EHR and is developing the success criteria to determine when to exit the program reset and restart deployments at other facilities. As the first deployment at a larger, more complex VA health care facility, the experience at Lovell FHCC will help inform these decisions. Additional deployments will not be scheduled until VA is confident that the new EHR is highly functioning at all current sites and ready to deliver for Veterans and VA clinicians at future sites.
The National Security Agency (NSA) has released a Cybersecurity Information Sheet (CSI) that details curtailing adversarial lateral movement within an organization’s network to access sensitive data and critical systems. The CSI, entitled “Advancing Zero Trust Maturity Throughout the Network and Environment Pillar,” provides guidance on how to strengthen internal network control and contain network intrusions to a segmented portion of the network using Zero Trust principles.
“Organizations need to operate with a mindset that threats exist within the boundaries of their systems,” said NSA Cybersecurity Director Rob Joyce. “This guidance is intended to arm network owners and operators with the processes they need to vigilantly resist, detect, and respond to threats that exploit weaknesses or gaps in their enterprise architecture.”
The network and environment pillar, one of seven pillars that make up the Zero Trust framework, isolates critical resources from unauthorized access by defining network access, controlling network and data flows, segmenting applications and workloads, and using end-to-end encryption, according to the CSI.
The CSI outlines the key capabilities of the network and environment pillar, including data flow mapping, macro and micro segmentation, and software defined networking.
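The CSI itself stays at the level of guidance, but the default-deny logic behind micro segmentation is straightforward to sketch. The following Python toy example is our own illustration, not drawn from the NSA document; the segment names and rules are hypothetical.

```python
# Hypothetical illustration of default-deny micro segmentation; not from the NSA CSI.
from dataclasses import dataclass


@dataclass(frozen=True)
class Flow:
    source_segment: str  # e.g., "web-frontend"
    dest_segment: str    # e.g., "payments-db"
    port: int
    protocol: str


# Explicit allow-list of flows; anything not listed is denied (default deny).
ALLOWED_FLOWS = {
    Flow("web-frontend", "app-tier", 443, "tcp"),
    Flow("app-tier", "payments-db", 5432, "tcp"),
}


def is_permitted(flow: Flow) -> bool:
    """Return True only if the flow matches an explicitly allowed rule."""
    return flow in ALLOWED_FLOWS


# A lateral-movement attempt from the web tier straight to the database is blocked,
# while the expected app-tier-to-database flow is allowed.
print(is_permitted(Flow("web-frontend", "payments-db", 5432, "tcp")))  # False
print(is_permitted(Flow("app-tier", "payments-db", 5432, "tcp")))      # True
```

The point of the sketch is the posture, not the mechanism: every flow is denied unless a policy explicitly permits it, which is what contains an intrusion to a single segment.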
VAST Data introduced a new AI cloud architecture based on Nvidia’s BlueField-3 DPU technology. The architecture is designed to improve performance, security, and efficiency for AI data services. The approach seeks to enhance data center operations and introduce a secure, zero-trust environment by integrating storage and database processing into AI servers.
Nvidia DPUs Remove the Bottleneck
VAST Data is leveraging Nvidia’s BlueField-3 DPU to innovate within its AI cloud solution. A DPU is a specialized processor designed to offload, accelerate, and isolate data center workloads, enabling higher performance, increased security, and more efficient data processing.
VAST disaggregates its resources into Nvidia BlueField-3 DPUs. This means that the DPU takes over certain data processing tasks traditionally handled by the server, such as networking, security, and storage operations. By offloading these functions to the DPU, VAST can reduce the load on the main CPU, allowing it to focus on AI and machine learning computations.
Here’s how it works: using the Nvidia BlueField-3 DPU, VAST creates a parallel system architecture where storage and database processing services are embedded directly into AI servers.
This setup provides a dedicated, stateless container for each GPU server running the VAST parallel services operating system. It promotes true linear scalability of data services across a vast number of GPUs without the bottlenecks typically introduced by traditional x86 hardware and networking layers.
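VAST has not published implementation details, so the following Python toy model is purely a conceptual sketch of the disaggregation pattern described above: storage requests either funnel through a small pool of shared x86 controller nodes, or are handled by a stateless data-services container running on each GPU server’s DPU. All class names and numbers are invented for illustration.

```python
# Toy model of the disaggregation idea only; not VAST or Nvidia software.
class SharedController:
    """Traditional pattern: every GPU server's storage I/O funnels through shared x86 nodes."""

    def __init__(self, num_controllers: int):
        self.queues = [0] * num_controllers

    def handle(self, request_id: int) -> int:
        node = request_id % len(self.queues)  # all servers contend for the same few nodes
        self.queues[node] += 1
        return node


class PerServerDPUService:
    """Disaggregated pattern: each GPU server runs a stateless data-services container on its DPU."""

    def handle(self, server_id: int) -> str:
        return f"dpu-{server_id}"  # work stays local; capacity grows with the number of servers


shared = SharedController(num_controllers=4)
for req in range(8):
    shared.handle(req)
print(shared.queues)  # [2, 2, 2, 2]: contention on the shared nodes grows with the cluster

dpu = PerServerDPUService()
print([dpu.handle(s) for s in range(4)])  # each server's I/O handled by its own DPU container
```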
By removing the dependency on multiple layers of traditional hardware and leveraging the processing power of the DPU, VAST’s network-attached Data Platform infrastructure becomes significantly more efficient. This efficiency translates into what VAST tells us is a 70% reduction in the power usage and data center footprint for VAST infrastructure, contributing to overall energy consumption savings.
The approach also yields a nice benefit for GPU cloud providers with multi-tenant environments. With VAST’s zero-trust security model, the DPU isolates data services and data management from the host operating system. By hosting data services on the DPU and utilizing standard client protocols, VAST minimizes potential attack vectors and ensures that data remains secure.
Analyst’s Take
When Nvidia launched its first BlueField DPU, based on technology from its acquisition of Mellanox, the industry saw it as just another intelligent network adapter. It could offload expensive storage and networking tasks such as deep packet inspection or compression. But Nvidia proved that the accelerator is capable of much more.
Not long after Nvidia launched BlueField, VMware (now called “VMware by Broadcom”) took things a step further. It demonstrated that properly designed infrastructure software could leverage an Nvidia BlueField DPU to significantly boost overall system performance. In its vSphere 8.0 release, VMware moved critical elements of its vSphere Distributed Switch and NSX networking and observability stack to Nvidia’s DPU. VAST Data is now taking a similar approach.
The move towards a disaggregated computing model facilitated by the DPU technology is a significant departure from traditional, monolithic designs. By embedding the entirety of VAST’s operating system natively into an AI cluster, VAST capitalizes on the inherent strengths of Nvidia’s BlueField-3 DPUs and effectively transforms supercomputers into highly specialized AI data engines. This is a significant step towards removing storage bottlenecks in AI and similarly performance-sensitive environments.
Beyond offload, VAST’s zero-trust security model is a critical element. Today, AI training is often a “cloud first” environment, with organizations using GPU cloud providers to train models. VAST Data excels in this market, partnering with top-tier providers like Lambda, CoreWeave, and Core42. Multi-tenant environments like these require a robust and hardware-enforced security model, such as the one VAST Data delivers with its DPU-based architecture.
Large AI clusters are already moving away from traditional storage solutions that struggle to keep up with the increasing scale and performance required for AI workloads. In this market, VAST Data competes with companies like WEKA, which is also finding solid success in the GPU cloud market, and parallel file systems like IBM’s GPFS and the open-source Lustre.
The approach taken by VAST Data and Nvidia is a significant leap forward in optimizing data services for the unique demands of AI. Leveraging DPUs to further remove performance bottlenecks in the data path is a significant differentiator for VAST Data as it competes in this hyper-competitive environment. With this announcement, VAST delivers a compelling and possibly game-changing solution for high-performance data.
Disclosure: Steve McDowell is an industry analyst, and NAND Research is an industry analyst firm that engages in, or has engaged in, research, analysis and advisory services with many technology companies, including those mentioned in this article. Mr. McDowell does not hold any equity positions with any company mentioned in this article.
In “Inviting Millions Into the Era of Quantum Technologies” (Issues, Fall 2023), Sean Dudley and Marisa Brazil convincingly argue that the lack of a qualified workforce is holding back this field from reaching its promising potential. We at IBM Quantum agree. Without intervention, the nation risks developing useful quantum computing alongside a scarcity of practitioners who are capable of using quantum computers. An IBM Institute for Business Value study found that inadequate skills are the top barrier to enterprises adopting quantum computing. The study identified a small subset of quantum-ready organizations that are talent nurturers with a greater understanding of the quantum skills gap, and that are nearly three times more effective than their peers at workforce development.
Quantum-ready organizations are nearly five times more effective at developing internal quantum skills, nearly twice as effective at attracting talented workers in science, technology, engineering, and mathematics, and nearly three times more effective at running internship programs. At IBM Quantum, we have directly trained more than 400 interns at all levels of higher education and have seen over 8 million learner interactions with Qiskit, including a series of online seminars on using the open-source Qiskit tool kit for useful quantum computing. However, quantum-ready organizations represent only a small fraction of the organizations and industries that need to prepare for the growth of their quantum workforce.
As we enter the era of quantum utility, meaning the ability for quantum computers to solve problems at a scale beyond brute-force classical simulation, we need a focused workforce capable of discovering the problems quantum computing is best-suited to solve. As we move even further toward the age of quantum-centric supercomputing, we will need a larger workforce capable of orchestrating quantum and classical computational resources in order to address domain-specific problems.
Looking to academia, we need more quantum-ready institutions that are effective not only at teaching advanced mathematics, quantum physics, and quantum algorithms, but also at teaching domain-specific skills such as machine learning, chemistry, materials, or optimization, along with how to use quantum computing as a tool for scientific discovery.
Critically, it is imperative to invest in talent early on. The data on physics PhDs granted by race and ethnicity in the United States paint a stark picture. Industry cannot wait until students have graduated and are knocking on company doors to begin developing a talent pipeline. IBM Quantum has made a significant investment in the IBM-HBCU Quantum Center, through which we collaborate with more than two dozen historically Black colleges and universities to prepare talent for the quantum future.
Academia needs to become more effective in supporting quantum research (including cultivating student contributions) and partnering with industry, in connecting students into internships and career opportunities, and in attracting students into the field of quantum. Quoting Charles Tahan, director of the National Quantum Coordination Office within the White House Office of Science and Technology Policy: “We need to get quantum computing test beds that students can learn in at a thousand schools, not 20 schools.”
Rensselaer Polytechnic Institute and IBM broke ground on the first IBM Quantum System One on a university campus in October 2023. This presents the RPI community with an unprecedented opportunity to learn and conduct research on a system powered by a utility-scale 127-qubit processor capable of tackling problems beyond the capabilities of classical computers. And as lead organizer of the Quantum Collaborative, Arizona State University—using IBM and other industry quantum computing resources—is working with other academic institutions to provide training and educational pathways across high schools and community colleges through to undergraduate and graduate studies in the field of quantum.
Our hope is that these actions will prove to be only part of a broader effort to build the quantum workforce that science, industry, and the nation will need in years to come.
BRADLEY HOLT
IBM Quantum
Program Director, Global Skills Development
Sean Dudley and Marisa Brazil advocate for mounting a national workforce development effort to address the growing talent gap in the field. This effort, they argue, should include educating and training a range of learners, including K–12 students, community college students, and workers outside of science and technology fields, such as marketers and designers. As the field will require developers, advocates, and regulators—as well as users—with varying levels of quantum knowledge, the authors’ comprehensive and inclusive approach to building a competitive quantum workforce is refreshing and justified.
At Qubit by Qubit, founded by the Coding School and one of the largest quantum education initiatives, we have spent the past four years training over 25,000 K–12 and college students, educators, and members of the workforce in quantum information science and technology (QIST). In collaboration with school districts, community colleges and universities, and companies, we have found great excitement among all these stakeholders for QIST education. However, as Dudley and Brazil note, there is an urgent need for policymakers and funders to act now to turn this collective excitement into action.
The authors posit that the development of a robust quantum workforce will help position the United States as a leader of Quantum 2.0, the next iteration of the quantum revolution. Our work suggests that investing in quantum education will not only benefit the field of QIST, but will result in a much stronger workforce at large. With the interdisciplinary nature of QIST, learners gain exposure and skills in mathematics, computer science, physics, and engineering, among other fields. Thus, even learners who choose not to pursue a career in quantum will have a broad set of highly sought skills that they can apply to another field offering a rewarding future.
With the complexity of quantum technologies, there are a number of challenges in building a diverse quantum workforce. Dudley and Brazil highlight several of these, including the concentration of training programs in highly resourced institutions, and the need to move beyond the current focus on physics and adopt a more interdisciplinary approach. There are several additional challenges that need to be considered and addressed if millions of Americans are to become quantum-literate, including:
Funding efforts have been focused on supporting pilot educational programs instead of scaling already successful programs, meaning that educational opportunities are not accessible widely.
Many educational programs are one-offs that leave students without clear next steps. Because of the complexity of the subject area, learning pathways need to be established for learners to continue developing critical skills.
Diversity, inclusion, and equity efforts have been minimal and will require concerted work between industry, academia, and government.
Historically, the United States has begun conversations around workforce development for emerging and deep technologies too late, and thus has failed to ensure the workforce at large is equipped with the necessary technical knowledge and skills to move these fields forward quickly. We have the opportunity to get it right this time and ensure that the United States is leading the development of responsible quantum technologies.
KIERA PELTZ
Executive Director, Qubit by Qubit
Founder and CEO, The Coding School
To create an exceptional quantum workforce and give all Americans a chance to discover the beauty of quantum information science and technology, to contribute meaningfully to the nation’s economic and national security, and to create much-needed bridges with other like-minded nations across the world as a counterbalance to the balkanization of science, we have to change how we are teaching quantum. Even today, five years after the National Quantum Initiative Act became law, the word “entanglement”—the key to the way quantum particles interact that makes quantum computing possible—does not appear in physics courses at many US universities. And there are perhaps only 10 to 20 schools offering quantum engineering education at any level, from undergraduate to graduate. Imagine the howls if this were the case with computer science.
The imminence of quantum technologies has motivated physicists—at least in some places—to reinvent their teaching, listening to and working with their engineering, computer science, materials science, chemistry, and mathematics colleagues to create a new kind of course. In 2020, these early experiments in retooling led to a convening of 500 quantum scientists and engineers to debate undergraduate quantum education. Building on success stories such as the quantum concepts course at Virginia Tech, we laid out a plan, published in IEEE Transactions on Education in 2022, to bridge the gap between the excitement around quantum computing generated in high school and the kind of advanced graduate research in quantum information that is really so astounding. The good news is that as Virginia Tech showed, quantum information can be taught with pictures and a little algebra to first-year college students. It’s also true at the community college level, which means the massive cohort of diverse engineers who start their careers there have a shot at inventing tomorrow’s quantum technologies.
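As a small illustration of how little machinery that first encounter requires, the sketch below uses the open-source Qiskit toolkit mentioned earlier in these letters to prepare and inspect a two-qubit entangled Bell state. The example is ours, not taken from the Virginia Tech course or any other cited curriculum.

```python
# A minimal sketch (not from any cited course): preparing a Bell state with Qiskit.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

bell = QuantumCircuit(2)
bell.h(0)      # put qubit 0 into an equal superposition
bell.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(bell)
print(state)                  # amplitudes ~0.707 on |00> and |11>, zero on |01> and |10>
print(state.probabilities())  # [0.5, 0.0, 0.0, 0.5]: the two qubits' outcomes are perfectly correlated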
However, there are significant missing pieces. For one, there are almost no community college opportunities to learn quantum anything because such efforts are not funded at any significant level. For another, although we know how to teach the most speculative area of quantum information, namely quantum computing, to engineers, and even to new students, we really don’t know how to do that for quantum sensing, which allows us to do position, navigation, and timing without resorting to our fragile GPS system, and to measure new space-time scales in the brain without MRI, to name two of many applications. It is the most advanced area of quantum information, with successful field tests and products on the market now, yet we are currently implementing quantum engineering courses focused on a quantum computing outcome that may be a decade or more away.
How can we solve the dearth of quantum engineers? First, universities and industry can play a major role by working together—and several such collective efforts are showing the way. Arizona State University’s Quantum Collaborative is one such example. The Quantum consortium in Colorado, New Mexico, and Wyoming recently received a preliminary grant from the US Economic Development Administration to help advance both quantum development and education programs, including at community colleges, in their regions. Such efforts should be funded and expanded and the lessons they provide should be promulgated nationwide. Second, we need to teach engineers what actually works. This means incorporating quantum sensing from the outset in all budding quantum engineering education systems, building on already deployed technologies. And third, we need to recognize that much of the nation’s quantum physics education is badly out of date and start modernizing it, just as we are now modernizing engineering and computer science education with quantum content.
LINCOLN D. CARR
Quantum Engineering Program and Department of Physics
Colorado School of Mines
Preparing a skilled workforce for emerging technologies can be challenging. Training moves at the scale of years while technology development can proceed much faster or slower, creating timing issues. Thus, Sean Dudley and Marisa Brazil deserve credit for addressing the difficult topic of preparing a future quantum workforce.
At the heart of these discussions are the current efforts to move beyond Quantum 1.0 technologies that make use of quantum mechanical properties (e.g., lasers, semiconductors, and magnetic resonance imaging) to Quantum 2.0 technologies that more actively manipulate quantum states and effects (e.g., quantum computers and quantum sensors). With this focus on ramping up a skilled workforce, it is useful to pause and look at the underlying assumption that the quantum workforce requires active management.
In their analysis, Dudley and Brazil cite a report by McKinsey & Company, a global management consulting firm, which found that three quantum technology jobs exist for every qualified candidate. While this seems like a major talent shortage, the statistic is less concerning when presented in absolute numbers. Because the field is still small, the difference is less than 600 workers. And the shortage exists only when considering graduates with explicit Quantum 2.0 degrees as qualified potential employees.
McKinsey recommended closing this gap by upskilling graduates in related disciplines. Considering that 600 workers is about 33% of physics PhDs, 2% of electrical engineers, or 1% of mechanical engineers graduated annually in the United States, this seems a reasonable solution. However, employers tend to be rather conservative in their hiring and often ignore otherwise capable applicants who haven’t already demonstrated proficiency in desired skills. Thus, hiring “close-enough” candidates tends to occur only when employers feel substantial pressure to fill positions. Based on anecdotal quantum computing discussions, this probably isn’t happening yet, which suggests employers can still afford to be selective. As Ron Hira notes in “Is There Really a STEM Workforce Shortage?” (Issues, Summer 2022), shortages are best measured by wage growth. And if such price signals exist, one should expect that students and workers will respond accordingly.
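To make the scale concrete, a quick back-of-the-envelope calculation recovers the annual graduate cohorts implied by those ratios; the figures below are derived from the letter’s own numbers rather than from independent workforce data.

```python
# Back-of-the-envelope check: annual graduate cohorts implied by the stated ratios.
shortage = 600  # approximate gap between quantum job openings and qualified candidates

implied_cohorts = {
    "physics PhDs":         shortage / 0.33,  # roughly 1,800 per year
    "electrical engineers": shortage / 0.02,  # roughly 30,000 per year
    "mechanical engineers": shortage / 0.01,  # roughly 60,000 per year
}

for field, cohort in implied_cohorts.items():
    print(f"{field}: ~{cohort:,.0f} graduates/year to absorb a {shortage}-person shortage")
```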
If the current quantum workforce shortage is uncertain, the future is even more uncertain. The exact size of the needed future quantum workforce depends on how Quantum 2.0 technologies develop. For example, semiconductors and MRI machines are both mature Quantum 1.0 technologies. The global semiconductor industry is a more than $500 billion business (measured in US dollars), while the global MRI business is about 100 times smaller. If Quantum 2.0 technologies follow the specialized, lab-oriented MRI model, then the workforce requirements could be more modest than many projections. More likely is a mix of market potential where technologies such as quantum sensors, which have many applications and are closer to commercialization, have a larger near-term market while quantum computers remain a complex niche technology for many years. The details are difficult to predict but will dictate workforce needs.
When we assume that rapid expansion of the quantum workforce is essential for preventing an innovation bottleneck, we are left with the common call to actively expand diversity and training opportunities outside of elite institutions—a great idea, but maybe the right answer to the wrong question. And misreading technological trends is not without consequences. Overproducing STEM workers benefits industry and academia, but not necessarily the workers themselves. If we prematurely attempt to put quantum computer labs in every high school and college, we may be setting up less-privileged students to pursue jobs that may not develop, equipped with skills that may not be easily transferred to other fields.
DANIEL J. ROZELL
Research Professor
Department of Technology and Society
Stony Brook University
Friday’s announcement that OpenAI CEO Sam Altman will return to the nonprofit’s board locks Silicon Valley’s billionaire class into control of the destiny of society-transforming artificial intelligence.
Why it matters: AI will be shaped by rich men and the markets that made them rich, not by the scientists and engineers who are building it or the governments that will have to deal with its impact.
Catch up quick: OpenAI’s board fired Altman in a crisis that shook the AI world last November, but within a few tumultuous days, Altman was back in charge after most of the company threatened to quit.
At the time, the board members who ousted Altman said they’d lost trust in him. But they never explained why or how.
The latest: Lawyers who conducted an outside investigation into the board fight found no malfeasance, financial impropriety or product safety-related disagreement behind the firing, OpenAI said Friday.
Apparently, it really was all about a breakdown in trust.
But a breakdown in trust between a board and a CEO is a big deal, and we still have little idea what happened in November to the company responsible for ChatGPT.
OpenAI isn’t releasing the full investigation report, only a brief summary. The board members who fired Altman have never shared their story in full detail.
The intrigue: The New York Times reported last week that OpenAI CTO Mira Murati played a “pivotal role” in Altman’s firing.
In a post on X, Murati, who served very briefly as OpenAI’s interim CEO before backing Altman’s return, described the NYT story as “the previous board’s efforts to scapegoat me with anonymous and misleading claims.”
Murati, in a memo to staff that she also posted on X, said she’d given Altman critical feedback directly, then shared the feedback with board members when they asked her about it.
The upshot now is that OpenAI is steaming forward at warp speed with Altman’s original strategy — funding AI development by selling shares in a for-profit subsidiary to Microsoft and other investors.
The old board’s bungled coup is almost certainly the last time anyone will be in a position to challenge Altman’s leadership of OpenAI — or his belief that a nonprofit can fulfill a mission of benefiting humanity by behaving like a for-profit startup pursuing hyper-growth.
The new board is unlikely to block a strategy that the firm’s former board got canned for questioning. And its roster, even with the addition of three accomplished female members announced Friday, no longer includes specialists in AI ethics.
The big picture: Altman became a billionaire himself as a startup investor and leader of Silicon Valley’s marquee startup incubator, Y Combinator.
His world venerates the startup as a kind of artistic canvas for entrepreneurs and as capitalism’s tool for world change.
That’s why even the ostensibly humanitarian projects Altman has pursued — like Worldcoin, which is deploying a crypto token and global identity system — look and feel more like startups than philanthropies.
Elon Musk, one of OpenAI’s cofounders, sued OpenAI and Altman last week, alleging the company has abandoned its original mission.
Musk’s complaint claims that OpenAI no longer prioritizes serving humanity over “maximizing profits for Microsoft.”
Reality check: Musk is the on-again, off-again richest man in the world who is funding his own AI company and who seems to have agreed with Altman’s raise-big, go-big strategy (based on emails OpenAI posted in response to the suit).
Yes, but: There’s some common-sense truth, if not necessarily legal merit or practical value, in Musk’s message.
Efforts to “strengthen governance” at the company could rid the firm of its remaining nonprofit trappings, leaving it even more like a standard-order tech corporation.
OpenAI has said that its new board will consider broader changes to the firm’s structure.
Friction point: There are still plenty of people in the AI field today who believe the technology carries a risk of destroying humanity.
Plenty of others dismiss that “existential risk” — but believe AI is likely to replicate humankind’s worst biases and flaws unless it’s built with caution and care.
What’s next: Two examples of potentially planet-wrecking technologies from the past century lay out alternative paths for AI.
The Manhattan Project brought the U.S. government and research scientists together to build nuclear weapons during wartime.
The destruction of Hiroshima and Nagasaki was a tragedy the U.S. still hasn’t come to terms with — but we can say with certainty that the planet has not been destroyed by nukes, at least not yet.
Climate change shows us the other path — what happens when industry controls the fate of a key technology that could leave the earth uninhabitable.
At the end of the 19th century, the rise of the oil and gas industries — and the start of the profligate burning of fossil fuels that now warm our atmosphere — took place at a moment very similar to ours.
Like now, the U.S. government then chose a mostly hands-off approach — and a generation of unfathomably rich “robber barons” shaped a new century.
Their legacy is a slow-burn planetary disaster that we have yet to reverse.
The bottom line: Markets and tycoons are good at moving fast, breaking things and generating wealth. But humankind seems to manage technological danger better when the government and scientists hold the tiller.
Decisions about powerful automation tools should not be left to a handful of entrepreneurs and engineers, MIT researchers argue. Here’s how to reclaim control.
In 18th century Britain, technical improvements in textile production generated great wealth for factory owners but created horrible working and living conditions for textile workers, who did not see their incomes rise for almost 100 years.
Today, artificial intelligence and other digital technologies mesmerize the business elite while threatening to undermine jobs and democracy through excessive automation, massive data collection, and intrusive surveillance.
In their new book, “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” MIT economists Daron Acemoglu and Simon Johnson decry the economic and social damage caused by the concentrated power of business and show how the tremendous computing advances of the past half century can become empowering and democratizing tools.
In this excerpt, the authors call for the development of a powerful new narrative about shared prosperity and offer four ways to rechart the course of technology so it complements human capabilities.
Our current problems are rooted in the enormous economic, political, and social power of corporations, especially in the tech industry. The concentrated power of business undercuts shared prosperity because it limits the sharing of gains from technological change. But its most pernicious impact is via the direction of technology, which is moving excessively toward automation, surveillance, data collection, and advertising.
To regain shared prosperity, we must redirect technology, and this means activating a version of the same approach that worked more than a century ago for the Progressives.
This can start only by altering the narrative and the norms.
The necessary steps are truly fundamental. Society and its powerful gatekeepers need to stop being mesmerized by tech billionaires and their agenda. Debates on new technology ought to center not just on the brilliance of new products and algorithms but also on whether they are working for the people or against the people. Whether digital technologies should be used for automating work and empowering large companies and nondemocratic governments must not be the sole decision of a handful of entrepreneurs and engineers. One does not need to be an AI expert to have a say about the direction of progress and the future of our society forged by these technologies. One does not need to be a tech investor or venture capitalist to hold tech entrepreneurs and engineers accountable for what their inventions do.
Choices over the direction of technology should be part of the criteria that investors use for evaluating companies and their effects. Large investors can demand transparency on whether new technologies will automate work or create new tasks, whether they will monitor or empower workers, and how they will affect political discourse and other social outcomes.
These are not decisions investors should care about only because of the profits they generate. A two-tiered society with a small elite and a dwindling middle class is not a foundation for prosperity or democracy.
Nevertheless, it is possible to make digital technologies useful to humans and boost productivity so that investing in technologies that help humans can also be good business.
As with the Progressive Era reforms and redirection in the energy sector, a new narrative is critical for building countervailing powers in the digital age. Such a narrative and public pressure can trigger more responsible behavior among some decision makers.
For example, managers with business-school educations tend to reduce wages and cut labor costs, presumably because of the lingering influence of the Friedman doctrine — the idea that the only purpose and responsibility of business is to make profits.
A powerful new narrative about shared prosperity can be a counterweight, influencing the priorities of some managers and even swaying the prevailing paradigm in business schools. Equally, it can help reshape the thinking of tens of thousands of bright young people wishing to work in the tech sector — even if it is unlikely to have much impact on tech tycoons.
More fundamentally, these efforts must formulate and support specific policies to rechart the course of technology. Digital technologies can complement humans by:
Improving the productivity of workers in their current jobs.
Creating new tasks with the help of machine intelligence augmenting human capabilities.
Providing better, more usable information for human decision-making.
Building new platforms that bring together people with different skills and needs.
For example, digital and AI technologies can increase the effectiveness of classroom instruction by providing new tools and better information to teachers. They can enable personalized instruction by identifying in real time areas of difficulty or strength for each student, thus generating a plethora of new, productive tasks for teachers. They can also build platforms that bring teachers and teaching resources together more effectively. Similar avenues are open in health care, entertainment, and production work.
An approach that complements workers, rather than sidelining and attempting to eliminate them, is more likely when diverse human skills, based on the situational and social aspects of human cognition, are recognized. Yet such diverse objectives for technological change necessitate a plurality of innovation strategies, and they become less likely to be realized when a few tech firms dominate the future of technology.
Diverse innovation strategies are also important because automation is not harmful in and of itself. Technologies that replace tasks performed by people with machines and algorithms are as old as industry itself, and they will continue to be part of our future. Similarly, data collection is not bad per se, but it becomes inconsistent both with shared prosperity and democratic governance when it is centralized in the hands of unaccountable companies and governments that use these data to disempower people.
The problem is an unbalanced portfolio of innovations that excessively prioritize automation and surveillance, failing to create new tasks and opportunities for workers. Redirecting technology need not involve the blocking of automation or banning data collection; it can instead encourage the development of technologies that complement and help human capabilities.
Society and government must work together to achieve this objective. Pressure from civil society, as in the case of successful major reforms of the past, is key. Government regulation and incentives are critical too, as they were in the case of energy.
However, the government cannot be the nerve center of innovation, and bureaucrats are not going to design algorithms or come up with new products. What is needed is the right institutional framework and incentives shaped by government policies, bolstered by a constructive narrative, to induce the private sector to move away from excessive automation and surveillance and toward more worker-friendly technologies.
The U.S. Army is overhauling how it develops and adopts software, the lifeblood of high-tech weaponry, vehicles and battlefield information-sharing.
The service on March 9 rolled out a policy, dubbed Enabling Modern Software Development and Acquisition Practices, enshrining the revisions. Officials said the measure brings them closer to private-sector expectations, making business simpler and more inclusive.
“We thought this was important to do this now, and issue this policy now, because of how critical software is to the fight right now,” Margaret Boatner, the deputy assistant secretary of the Army for strategy and acquisition reform, told reporters at the Pentagon. “More than ever before, software is actually a national-security imperative.”
Consequences of the policy include: changing the way requirements are written, favoring high-level needs statements and concision over hyper-specific directions; employing alternative acquisition and contracting strategies; reducing duplicative tests and streamlining cybersecurity processes; embracing a sustainment model that recognizes programs can and should be updated; and establishing expert cohorts, such as the prospective Digital Capabilities Contracting Center of Excellence at Aberdeen Proving Ground, Maryland.
While the policy is effective immediately, the different reforms will take different amounts of time to be realized. The contracting center, for example, has several months to get up and running. No additional appropriations are needed to make the transitions, according to Boatner.
Every year, OpenAI’s employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It’s mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.
In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet’s DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.
Above all, it is lionized for its mission. Its goal is to be the first to create AGI—a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.
The implication is that AGI could easily run amok if the technology’s development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.
OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to “build value for everyone rather than shareholders.” Its charter—a document so sacred that employees’ pay is tied to how well they adhere to it—further declares that OpenAI’s “primary fiduciary duty is to humanity.” Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.
But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.
Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation “Can machines think?” Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.
“It is one of the most fundamental questions of all intellectual history, right?” says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. “It’s like, do we understand the origin of the universe? Do we understand matter?”
The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It’s not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.
But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries—if indeed it’s possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s and again in the late ’80s and early ’90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. “The field felt like a backwater,” says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.
Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn’t the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.
The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAI’s CEO.)
But more than anything, OpenAI’s nonprofit status made a statement. “It’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” the announcement said. “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.” Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.
In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. “It was a beacon of hope,” says Chip Huyen, a machine learning expert who has closely followed the lab’s journey.
At the intersection of 18th and Folsom Streets in San Francisco, OpenAI’s office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters “PIONEER BUILDING”—the remnants of its bygone owner, the Pioneer Truck Factory—wrap around the corner in faded red paint.
Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space I’m restricted to during my visit. I’m forbidden to visit the second and third floors, which house everyone’s desks, several robots, and pretty much everything interesting. When it’s time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.
On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. “We’ve never given someone so much access before,” he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.
Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a “focused, quiet childhood.” He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.
Brockman takes me to lunch to remove me from the office during an all-company meeting. In the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It’s easy to appreciate his charisma as a leader. Recounting memorable passages from the books he’s read, he zeroes in on the Valley’s favorite narrative, America’s race to the moon. (“One story I really love is the story of the janitor,” he says, referencing a famous yet probably apocryphal tale. “Kennedy goes up to him and asks him, ‘What are you doing?’ and he says, ‘Oh, I’m helping put a man on the moon!’”) There’s also the transcontinental railroad (“It was actually the last megaproject done entirely by hand … a project of immense scale that was totally risky”) and Thomas Edison’s incandescent lightbulb (“A committee of distinguished experts said ‘It’s never gonna work,’ and one year later he shipped”).
Brockman is aware of the gamble OpenAI has taken on—and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: People can be skeptical all they want. It’s the price of daring greatly.
Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small—formed through a tight web of connections—and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.
Musk played no small part in building a collective mythology. “The way he presented it to me was ‘Look, I get it. AGI might be far away, but what if it’s not?’” recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. “‘What if it’s even just a 1% or 0.1% chance that it’s happening in the next five to 10 years? Shouldn’t we think about it very carefully?’ That resonated with me,” he says.
But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. Judging from an account published in the New Yorker, it wasn’t clear the team itself knew either. “Our goal right now … is to do the best thing there is to do,” Brockman said. “It’s a little vague.”
Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI’s members. After two years, at Brockman’s request, Daniela joined too. “Imagine—we started with nothing,” Brockman says. “We just had this ideal that we wanted AGI to go well.”
By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that “in order to stay relevant,” Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money—while somehow also staying true to the mission.
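For a rough sense of what that doubling time implies, here is a quick back-of-the-envelope calculation (the arithmetic is mine, not from OpenAI’s internal document): compute that doubles every 3.4 months grows by a factor of

$$2^{12/3.4} \approx 11.5 \ \text{per year}, \qquad \left(2^{12/3.4}\right)^{2} \approx 130 \ \text{over two years}.$$

Merely keeping pace with that curve for a couple of years means multiplying a lab’s compute spending by two orders of magnitude, which is the ramp-up Brockman says OpenAI would have to match or exceed.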
Unbeknownst to the public—and most employees—it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab’s core values but subtly shifted the language to reflect the new reality. Alongside its commitment to “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” it also stressed the need for resources. “We anticipate needing to marshal substantial resources to fulfill our mission,” it said, “but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”
“We spent a long time internally iterating with employees to get the whole company bought into a set of principles,” Brockman says. “Things that had to stay invariant even if we changed our structure.”
That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a “capped profit” arm—a for-profit with a 100-fold limit on investors’ returns, albeit overseen by a board that’s part of a nonprofit entity. Shortly after, it announced Microsoft’s billion-dollar investment (though it didn’t reveal that this was split between cash and credits to Azure, Microsoft’s cloud computing platform).
Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: “Early investors in Google have received a roughly 20x return on their capital,” they wrote. “Your bet is that you’ll have a corporate structure which returns orders of magnitude more than Google … but you don’t want to ‘unduly concentrate power’? How will this work? What exactly is power, if not the concentration of resources?”
The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. “Can I trust OpenAI?” one question asked. “Yes,” began the answer, followed by a paragraph of explanation.
The charter is the backbone of OpenAI. It serves as the springboard for all the lab’s strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company’s existence. (“By the way,” he clarifies halfway through one recitation, “I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It’s not like I was reading this before the meeting.”)
How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? “As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren’t imaginable today.” How will you structure yourself to evenly distribute AGI? “I think a utility is the best analogy for the vision that we have. But again, it’s all subject to the charter.” How do you compete to reach AGI first without compromising safety? “I think there is absolutely this important balancing act, and our best shot at that is what’s in the charter.”
For Brockman, rigid adherence to the document is what makes OpenAI’s structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn’t mind—in fact, he agrees with the mentality. It’s the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.
In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of “effective altruism.” They crack jokes using machine-learning terminology to describe their lives: “What is your life a function of?” “What are you optimizing for?” “Everything is basically a minmax function.” To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)
But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee’s absorption of the mission. Alongside columns like “engineering expertise” and “research direction” in a spreadsheet tab titled “Unified Technical Ladder,” the last column outlines the culture-related expectations for every level. Level 3: “You understand and internalize the OpenAI charter.” Level 5: “You ensure all projects you and your team-mates work on are consistent with the charter.” Level 7: “You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same.”
The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.
But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.
The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? “It seemed like OpenAI was trying to capitalize off of panic around AI,” says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.
By May, OpenAI had revised its stance and announced plans for a “staged release.” Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm’s potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, “no strong evidence of misuse so far.”
Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn’t been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that “safety and security concerns” would gradually oblige the lab to “reduce our traditional publishing in the future.”
This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. “I think that is definitely part of the success-story framing,” said Miles Brundage, a policy research scientist, highlighting something in a Google doc. “The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial.”
But OpenAI’s media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab’s big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm’s length.
This hasn’t stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind’s AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI’s achievement. I was not compensated for this.)
And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab’s influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: “In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI,” says a line under the “Policy” section. “Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message.” Another, under “Strategy,” reads, “Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to.”
There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?
But little did people know this wasn’t the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.
There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it’s just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won’t be enough.
Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.
Brockman and Sutskever deny that this is their sole strategy, but the lab’s tightly guarded research suggests otherwise. A team called “Foresight” runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab’s all-in, compute-driven strategy is the best approach.
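The article doesn’t say how Foresight’s experiments are structured, but the general technique it gestures at is familiar from scaling studies: measure model quality at several compute budgets, fit a power law, and extrapolate. The sketch below is a minimal illustration of that idea with invented numbers; none of the data, constants, or parameter names come from OpenAI.

```python
# Illustrative only -- not OpenAI's code. A scaling experiment of the kind
# described above boils down to measuring test loss at several compute
# budgets and fitting a power law, loss(C) = a * C**(-b), to see how much
# further quality might improve with more compute.
import numpy as np

# Invented measurements: training compute (arbitrary units) vs. test loss.
compute = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
loss = np.array([4.10, 3.55, 3.10, 2.72, 2.40])

# Fit log(loss) = log(a) - b * log(compute) with ordinary least squares.
slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(log_a), -slope  # loss falls as compute grows, so b > 0

print(f"fitted power law: loss ~ {a:.2f} * compute^(-{b:.3f})")
# Extrapolate to a budget 10x larger than anything measured.
print(f"predicted loss at compute = 1000: {a * 1000 ** (-b):.2f}")
```

If the fitted curve keeps holding at larger budgets, that is evidence for the scale-and-assemble bet; if returns flatten, it is evidence against it.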
For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn’t know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.
In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was “sniffing around.”
In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. “We expect that safety and security concerns will reduce our traditional publishing in the future,” the section states, “while increasing the importance of sharing safety, policy, and standards research.” The spokesperson also added: “Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild.”
One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.
The man driving OpenAI’s strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.
Amodei divides the lab’s strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor’s “portfolio of bets.” Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.
As in an investor’s portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it’s important to keep an open mind. “Pure language is a direction that the field and even some of us were somewhat skeptical of,” he says. “But now it’s like, ‘Wow, this is really promising.’”
Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI’s latest top-secret project has supposedly already begun.
The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2’s sentence constructions or a robot’s movements.
Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. “At some point we’re going to build AGI, and by that time I want to feel good about these systems operating in the world,” he says. “Anything where I don’t currently feel good, I create and recruit a team to focus on that thing.”
For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.
“We’re in the awkward position of: we don’t know what AGI looks like,” he says. “We don’t know when it’s going to happen.” Then, with careful self-awareness, he adds: “The mind of any given person is limited. The best thing I’ve found is hiring other safety researchers who often have visions which are different than the natural thing I might’ve thought of. I want that kind of variation and diversity because that’s the only way that you catch everything.”
The thing is, OpenAI actually has little “variation and diversity”—a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk’s startup working on computer-brain interfaces, shares the same building and dining room.
According to a lab spokesperson, of the more than 120 employees, 25% are female or nonbinary. There are also two women on the executive team, and the leadership team is 30% women, she said, though she didn’t specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Of the more than 112 employees I identified on LinkedIn and other sources, the overwhelming majority were white or Asian.)
In fairness, this lack of diversity is typical in AI. Last year a report from the New York–based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. “There is definitely still a lot of work to be done across academia and industry,” OpenAI’s spokesperson said. “Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program.”
Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York–based company, the city just had too little diversity.
But if diversity is a problem for the AI industry in general, it’s something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.
Nor is it at all clear just how OpenAI plans to “distribute the benefits” of AGI to “all of humanity,” as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited “significant unresolved issues regarding … the way in which it would be implemented.”) “This is my biggest problem with OpenAI,” says a former employee, who spoke on condition of anonymity.
“They are using sophisticated technical practices to try to answer social problems with AI,” echoes Britt Paris of Rutgers. “It seems like they don’t really have the capabilities to actually understand the social. They just understand that that’s a sort of a lucrative place to be positioning themselves right now.”
Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. “How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need,” he says. “I don’t think that that strategy is likely to succeed.”
The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to “make sure that we are understanding the ramifications.”
Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn’t functionally change OpenAI’s approach to research. Microsoft was well aligned with the lab’s values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.
For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn’t even know what promises, if any, had been made to Microsoft.
But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. In sharing his 2020 vision for the lab privately with employees, Altman’s message is clear: OpenAI needs to make money in order to do research—not the other way around.
This is a hard but necessary trade-off, the leadership has said—one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.
But the truth is that OpenAI faces this trade-off not only because it’s not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy—not because it’s seen as the only way to AGI, but because it seems like the fastest.
Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there’s still time for it to change.
Near the end of my interview with Rhodes, the former remote scholar, I ask her the one thing about OpenAI that I shouldn’t omit from this profile. “I guess in my opinion, there’s problems,” she begins hesitantly. “Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out.”
“But to me, it feels like they are doing something a little bit right,” she says. “I got a sense that the folks there are earnestly trying.”
Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn’t think it was possible to “bake ethics in… from the very beginning” when developing AI, he intended it to mean that ethical questions couldn’t be solved from the beginning, not that they couldn’t be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not “on a farm,” but “on a hobby farm.” Brockman considers this distinction important.
In addition, we have clarified that while OpenAI did indeed “shed its nonprofit status,” a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We’ve also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).