
By Abi Olvera | December 10, 2025
Data center-related spending is likely to have accounted for nearly half of the United States’ GDP growth in the first six months of this year (Thompson 2025). Additionally, tech companies are projected to spend $400 billion this year on AI-related infrastructure. See Figure 1.
Given the current scale of investment in AI, the world may be in a pre-bubble phase. Alternatively, the world could be at an inflection point where today’s decisions establish default settings that endure for generations.
Both can be true: AI may be in a bit of a bubble at the same time that it is reshaping vital parts of the economy. Even at a slower pace, the tech’s influence will continue to be transformative.
Regardless of whether artificial intelligence investment, growth, and revenues continue to increase at their current exponential rates or at a pace slowed by a recalibration or shock, AI has already been widely adopted. Companies worldwide are using various kinds of generative artificial intelligence applications, from large language models that can communicate in natural language to bio-design tools that can predict protein folding and generate new substances at the molecular level.
While corporate and work-related uses may show up in the numbers for nationwide GDP or sector-specific productivity, artificial intelligence is also affecting societies in ways that are more difficult to measure. For example, a growing number of people use apps powered by large language models as assistants and trusted advisers in both their work and personal lives.
On a bigger scale, this difficulty in measuring impact means that no one is certain how AI might ultimately influence the world. If artificial intelligence amplifies existing polarization or other failures in coordination, climate change and nuclear risks might become harder to solve. If the tech empowers more surveillance, totalitarian regimes could be locked into power. If AI causes persistent unemployment without creating new jobs, as some prior technologies have, then democracies face additional risks from social unrest driven by economic insecurity. Loss of job opportunities could also spark deeper questions about how democracy functions when many people aren’t working or paying taxes, a shift that could weaken the citizen-government relationship.
This uncertainty creates an opening for organizations like the Bulletin of the Atomic Scientists, which has a long history of looking at global, neglected risks through different disciplines. No single field can accurately predict how this transformative technology will reshape society. By connecting AI researchers with economists, political scientists, complexity theorists, scientists, and historians, the Bulletin can continue in its tradition of providing the cross-disciplinary rigor needed to distinguish meaningful information from distractions, anticipate hidden risks, and balance both positive and negative aspects of this technology—at the exact moment when leaders are making foundational decisions.
Why our frameworks matter
Alfred Wegener, who proposed the theory of continental drift, wasn’t a geologist. He was a meteorologist who spent years battling geologists who doubted any force could be powerful enough to move such large landmasses. That makes him a prime example of how specialists from other domains are often needed to fill gaps in a field, especially an evolving one.
Experts studying AI and its impacts will likely suffer from vulnerabilities similar to those of Wegener’s geologist critics.
For example, many researchers who examine the relationship between jobs and artificial intelligence focus on elimination: They try to predict which jobs will disappear, how quickly that will happen, and how many people will be displaced. But these forecasts don’t always reflect the view of historians of technological innovation and diffusion: that new technology often causes explosive growth in jobs. Sixty percent of today’s jobs didn’t even exist in 1940, and 85 percent of employment growth since then has been technology-driven (Goldman Sachs 2025). That’s because technologies create complementary jobs alongside the ones they automate. Commercial aviation largely replaced ocean liner travel but created jobs for pilots, flight attendants, air traffic controllers, airport staff, and an entire global tourism and hospitality industry. And it can be argued that washing machines and other household appliances freed up time that helped millions of women enter the workforce (Greenwood 2019).
Looking back, it’s clear that technology can create new domains of work. The rise of computers, for example, created more opportunities for data scientists, as statistical methods no longer required hours of manual calculation. This made statistical analysis more accessible and affordable across a wider range of sectors.
To be clear, some forms of automation can lead to job losses. Bank teller positions have declined 30 percent since 2010, likely because of the rise of mobile banking (Mousa 2025). When a technology is so efficient that a human is no longer needed, the resulting drop in the price of that service doesn’t generate enough new demand to save those jobs.
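To make that logic concrete, here is a minimal back-of-envelope sketch. The numbers are illustrative assumptions, not figures from the cited sources: total demand for human labor is roughly the labor needed per unit of service times the number of units demanded, so employment only holds up if cheaper service expands demand faster than automation shrinks the labor each unit requires.

    # Illustrative sketch of the automation trade-off described above.
    # The figures are made-up assumptions, not data from the article.
    def human_labor_demand(labor_per_unit, units_demanded):
        """Total human labor = labor per unit of service x units of service demanded."""
        return labor_per_unit * units_demanded

    before = human_labor_demand(labor_per_unit=1.0, units_demanded=100)  # baseline
    # Suppose automation cuts the human labor needed per transaction by 90 percent,
    # and the resulting cheaper service only doubles the number of transactions.
    after = human_labor_demand(labor_per_unit=0.1, units_demanded=200)

    print(f"Human labor demand falls by {100 * (1 - after / before):.0f} percent")  # about 80 percent

If demand had instead grown more than tenfold in this toy example, total employment would rise, which is the pattern historians describe for technologies that create complementary work.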
The impact of AI on job markets, which is less a simple subtraction problem than a complex, adaptive, and evolving process, can already be seen. Workers between the ages of 22 and 25 in AI-exposed fields, such as software development and customer service, saw a 13 percent decline in employment, while roles for workers in less exposed fields stabilized or grew (Brynjolfsson, Chandar, and Chen 2025). However, the ramifications of even these changes are less clear. When digital computers were first rolled out, the humans who had done the calculations (and who were themselves called “computers”) moved into technical roles through other types of mathematical work.
However, if AI continues to improve and takes on a wider range of tasks—particularly long-term projects—traditional models of technology’s impacts on the labor market may become less directly applicable.
This uncertainty has split research into roughly two camps. Some researchers focus on where AI could be in a few months, years, or decades; statements from some AI labs, such as Anthropic’s talk of a “country of geniuses” within two years, add urgency to this perspective. Others prefer to focus on current models, uses, and applications, often with a lean toward pessimism.
Staying balanced to stay accurate
To get the full picture, people need to take note of both the impressive and the underwhelming aspects of AI, and likewise both its positive and negative outcomes. Understanding AI today means holding two ideas at once: The technology is both more capable than most people expected and less reliable than the hype suggests.
AI models have achieved what seemed impossible just a few years ago: writing coherent essays, generating working code, and solving complex problems in ways that keep beating expert predictions. Benchmarks track this progress, showing especially large gains on technical tasks and mathematical reasoning.
These benchmark improvements, however, don’t translate directly into real-world reliability. Artificial intelligence tools are still not robust enough for companies to fully depend on. Even legal research tools that rely on AI summaries, a core strength of large language models, get those summaries wrong, apparently hallucinating in 17 to 33 percent of cases (Magesh, Surani, Dahl, Suzgun, Manning, and Ho 2025). The gap between what current AI can do in tests and what it can dependably do in practice remains wide.
But there are upsides that receive less attention: Data centers now generate 38 percent of local tax revenue in Virginia’s Loudoun County (Styer 2025). Surveys of Replika users found that three percent reported the AI chatbot had “halted” their suicidal ideation (Maples, Cerit, Vishwanath, and Pea 2024). A 2024 study in the journal Science found that talking to AI systems reduced belief in false conspiracy theories by 20 percent, with the effect persisting undiminished for over two months (Costello, Pennycook, and Rand 2024).
As such, reality is messier than the optimistic or pessimistic narratives suggest. But mainstream news coverage tilts heavily negative—because those stories spread faster and capture more attention.
This creates a policy problem. When regulators only see the downsides, they optimize for preventing visible harms rather than maximizing total welfare. Looking at the entire picture of risk is critical.
One example I think about often is that US aviation policy allows babies to fly on a parent’s lap without a separate seat. This decision means approximately one baby dies every 10 years from occurrences such as turbulence when a parent cannot securely hold the infant. But the practice saves far more lives overall: The lower cost of flying (no extra ticket) means more families choose planes over cars, and if those families drove instead, at least 60 babies would die over the same 10 years (Piper 2024).
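As a rough illustration of that trade-off, here is a minimal expected-fatalities comparison using the rounded figures above; the counterfactual that priced-out families would switch to driving is the assumption behind the estimate.

    # Back-of-envelope comparison of the two policies over a 10-year window,
    # using the rounded figures cited above (Piper 2024).
    deaths_with_lap_infant_policy = 1   # roughly one infant death per decade from in-flight incidents
    deaths_if_families_drove = 60       # lower-bound estimate if priced-out families drove instead

    net_lives_saved = deaths_if_families_drove - deaths_with_lap_infant_policy
    print(f"Allowing lap infants saves roughly {net_lives_saved} infant lives per decade, on net")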
The same logic applies to technology policy. Fear of nuclear power plants in the 1970s and ’80s (especially concerns about accidents, nuclear weapons proliferation, and how to safely dispose of radioactive waste) kept the industry from achieving the kinds of cost declines that solar and wind energy later achieved. Today, more than a billion people still live in energy poverty: lacking reliable electricity for refrigeration, lighting, or medical equipment (Min, O’Keeffe, Abidoye, Gaba, Monroe, Stewart, Baugh, Sánchez-Andrade, and Meddeb 2024). In high-income countries like the United States, coal plants kill far more people through air pollution than nuclear energy does (Walsh 2022). The visible risk of nuclear accidents dominated the policy conversation, perhaps with good reason. The invisible costs of energy scarcity and pollution, however, didn’t make headlines.
AI policy faces similar complexities. The research community’s job is to help policymakers see the full picture, not just the headlines. Some risks, such as overreliance on AI decision-making and lower social cohesion from higher unemployment, don’t tend to show up as newsworthy incidents. Other serious but gradual risks that rarely earn ink include declining public trust driven by worsening social and economic conditions and slow democratic backsliding as institutions weaken.
The gaps that matter most
To stay abreast of the rapid changes and headlines on AI, the Bulletin could focus on critical issues that determine artificial intelligence’s impact and trajectory. Here are the areas where outside expertise could provide crucial grounding:
- Coordination problems. Humanity already knows how to solve many of its biggest challenges. Several countries have nearly eliminated road fatalities through infrastructure and city redesigns. The world has the technology to dramatically reduce carbon emissions. But coordination problems keep solutions from spreading. Understanding why coordination succeeds or fails could help design better frameworks for AI governance—and aid in recognizing where artificial intelligence might make existing coordination challenges harder or easier to solve.
- Complexity theory and societal resilience. Societies have become more interconnected. There’s rich knowledge about what happens when complex systems come under stress—how elites capture resources; how coordination mechanisms break down; when small changes cascade into large disruptions. Complexity theorists and historians who study societal change could help forecast which societies are losing resilience, and which are at risk of disruptions from shocks such as those from AI. Complexity theory experts can also monitor AI developments that pose systemic risks versus those that create manageable disruptions.
- Innovation diffusion patterns. Experts who study how new technologies spread through economies consistently find that adoption is slower and more uneven than early predictions suggest. These economists know which historical parallels are useful and which are misleading. They understand the institutional barriers that slow both beneficial and harmful applications of new technology.
- Cybersecurity dynamics. In practice, do AI systems favor cyber offense or defense? The cyber field offers real-time lessons that can help capture trends. Both high-level strategic analyses and granular technical insights are critical. How do authentication challenges, application programming interface (API) integration difficulties, and decision-ownership questions affect cybersecurity efforts? Understanding these practical bottlenecks could inform both security policy and broader predictions about the speed of AI adoption.
- Biosecurity dynamics. How does AI change the risk of someone creating a bioweapon? Devising an effective policy requires understanding exactly which parts of the supply chain AI affects. Artificial intelligence can help with computational tasks like molecular design, but some biosecurity experts note it doesn’t do much for the hands-on laboratory work that’s often the real bottleneck. If they’re right, researchers might need to watch for advances that lower barriers to physical experimentation, not just computational design. Experts can’t know what to look for without systematic research from practitioners who see the actual process.
- Democratic resilience. Political scientists who study the mechanisms behind stable or fragile democracies rarely contribute to AI policy discussions. But their insights matter enormously. Which institutions bend under pressure and which break? How do democratic societies maintain legitimacy during periods of lower levels of public trust? What early warning signs should policymakers watch for?
- Media landscape dynamics. Beyond tracking misinformation supply, research is needed to understand the demand side: Why certain false narratives spread while others don’t. People filter information through existing trust networks and social identities. What determines who people trust? When do trust networks break down, and when do they hold firm? Why do some societies maintain higher levels of public trust in institutions while others see it erode? Experts on belief formation, media psychology, and historical patterns of institutional trust could help understand both when disinformation poses genuine threats and when other factors—like declining public trust itself—might be the deeper problem.
- Global sentiment patterns. Why are some societies more excited about AI than others? China, for instance, isn’t as enthusiastic about AI as many Western observers assume. This matters because global sentiment affects many things from investment flows to regulatory approaches. Is optimism about technology connected to trust in government, social cohesion, or economic expectations? Understanding these patterns could help predict where AI governance will be more or less successful.
What the Bulletin could do
Founded in 1945 by Albert Einstein and former Manhattan Project scientists immediately following the atomic bombings of Hiroshima and Nagasaki, the Bulletin of the Atomic Scientists has a tradition of hosting the broad range of talent that can mitigate such blind spots. The organization could:
- Connect different experts: Bring together AI researchers with innovation economists, cybersecurity experts with political scientists, and complexity theorists with historians of technology.
- Apply nuclear-age lessons: International agreements often fail not because of technical problems but because of institutional and incentive misalignments. What kinds of global coordination mechanisms and tools, like privacy-preserving technology or hardware-enabled security protections, can help?
- Stay empirically grounded: Test assumptions about AI’s impact against real-world evidence. When predictions prove wrong, investigate why. For example, AI-powered deepfakes did not upend the media industry as some headlines forecast; demand for deepfakes did not rise when their supply did.
Why this matters now
Societies are making foundational decisions about AI governance, research priorities, and social adaptation at a moment when the basic institutions for handling emerging challenges are weak. International coordination on existential risks like nuclear proliferation or pandemic preparedness remains threadbare, despite decades of effort. The decisions leaders make today could create path dependencies—self-reinforcing defaults that become nearly impossible to reverse.
For example, companies today collect troves of Americans’ personal data with few limits. In the early 2000s, companies built business models around unrestricted data collection. Two decades later, that choice created trillion-dollar incumbents whose value depends on data collection, making meaningful privacy reform politically difficult in the United States.
But the other path dependencies that worry me involve what comes to be accepted as normal. Societies have normalized living with nuclear weapons, accepting some baseline probability of catastrophic risk as just part of modern life. Meanwhile, pandemic risk sits around two percent per year (Penn 2021). Governments systematically underprepare for these global risks because the risks don’t feel tangible. Even when the impacts are tangible, a sense of normalcy clouds judgment. More than 1,200 children die from malaria every day (Medicines for Malaria Venture 2024). Despite this, the vaccine took 20 years to become available after the first promising trials, owing to a lack of funding and urgency (Undark 2022).
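A quick compounding calculation shows why an annual risk of that size is hard to dismiss. The two percent figure comes from the study cited above; the 75-year horizon is an illustrative assumption, not a number from that study.

    # How a roughly 2 percent annual pandemic risk compounds over a lifetime.
    # The annual figure is from Penn (2021); the 75-year horizon is an illustrative assumption.
    annual_risk = 0.02
    years = 75
    cumulative_risk = 1 - (1 - annual_risk) ** years
    print(f"Chance of at least one such pandemic in {years} years: {cumulative_risk:.0%}")  # roughly 78%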
AI might create similar forks in the road. Path dependency doesn’t require conspiracy or malice. It just requires inattention when defaults are being set.
Humanity is at a crucial moment. Smart institutional design creates positive compounding effects: cooperative frameworks that ease future agreements, flexible governance that can adapt over time, and research norms that promote accuracy and a more complete understanding.
That’s exactly the kind of long-term thinking the Bulletin was created to support. No one will have all the answers about AI, but asking the right questions, and pursuing them, will determine the future humanity gets.
References
Brynjolfsson, E., Chandar, B., and Chen, R. 2025. “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence” August 26. Digital Economy. https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
Costello, T. H., G. Pennycook, and D. G. Rand. 2024. “Durably reducing conspiracy beliefs through dialogues with AI” September 13. Science.
Goldman Sachs. 2025. “How Will AI Affect the Global Workforce?” August 13. Goldman Sachs. https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce
Greenwood, J. 2019. “How the appliance boom moved more women into the workforce” January 30. Penn Today. https://penntoday.upenn.edu/news/how-appliance-boom-moved-more-women-workforce
Kiernan, K. “The Case of the Vanishing Teller: How Banking’s Entry Level Jobs Are Transforming” May 12. The Burning Glass Institute. https://www.burningglassinstitute.org/bginsights/the-case-of-the-vanishing-teller-how-bankings-entry-level-jobs-are-transforming
Magesh, V., F. Surani, M. Dahl, M. Suzgun, C. D. Manning, and D. E. Ho. 2025. “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools” March 14. Journal of Empirical Legal Studies. https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf
Maples, B., M. Cerit, A. Vishwanath, and R. Pea. 2024. “Loneliness and suicide mitigation for students using GPT3-enabled chatbots” January 22. Nature. https://www.nature.com/articles/s44184-023-00047-6
Medicines for Malaria Venture. 2024. “Malaria facts and statistics.” Medicines for Malaria Venture. https://www.mmv.org/malaria/about-malaria/malaria-facts-statistics
Min, B., Z. O’Keeffe, B. Abidoye, K. M. Gaba, T. Monroe, B. Stewart, K. Baugh, B. Sánchez-Andrade, and R. Meddeb. 2024. “Beyond access: 1.18 billion in energy poverty despite rising electricity access” June 12. UNDP. https://data.undp.org/blog/1-18-billion-around-the-world-in-energy-poverty#:~:text=In%20a%20newly%20released%20paper,2020%2C%20according%20to%20official%20data.
Mousa, D. 2025. “When automation means more human workers” October 7. Under Development. https://newsletter.deenamousa.com/p/when-more-automation-means-more-human
Penn, M. 2021. “Statistics Say Large Pandemics Are More Likely Than We Thought” August 23. Duke Global Health Institute. https://globalhealth.duke.edu/news/statistics-say-large-pandemics-are-more-likely-we-thought
Piper, K. 2024. “What the FAA gets right about airplane regulation” January 18. Vox. https://www.vox.com/future-perfect/24041640/federal-aviation-administration-air-travel-boeing-737-max-alaska-airlines-regulation
Styer, N. 2025. “County Staff to Push for Lower Data Center Taxes to Balance Revenues” July 10. Loudoun Now. https://www.loudounnow.com/news/county-staff-to-push-for-lower-data-center-taxes-to-balance-revenues/article_567df6c2-2179-4eba-9cb5-fc78e2938ccb.html
Thompson, D. 2025. “This Is How the AI Bubble Will Pop.” October 2. Derek Thompson. https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop
Undark. 2022. “It Took 35 Years to Get a Malaria Vaccine. Why?” June 9. Undark. https://www.gavi.org/vaccineswork/it-took-35-years-get-malaria-vaccine-why
Walsh, B. 2022. “A needed nuclear option for climate change” July 13. Vox. https://www.vox.com/future-perfect/2022/7/12/23205691/germany-energy-crisis-nuclear-power-coal-climate-change-russia-ukraine
