On this page:
- Military, Defense, and National Security
- Health and Well-Being
- Misinformation, Disinformation, and Media Literacy
- Jobs, Workers, and the Economy
- Technology Governance and Regulation
- Featured Experts
Image by Florence Lo/Reuters; design by Haley Okuley/RAND
Artificial intelligence—from machine learning that’s already widely used today to the possible artificial general intelligence of the future—has the power to transform the way we live, work, and interact.
AI tools are evolving quickly, and decisionmakers are grappling with how to maximize the potential benefits, minimize the short- and long-term risks, and plan for an uncertain future.
RAND’s rigorous and independent research can help. Our experts have been studying a wide range of questions about the effects and uses of AI: Which jobs are likely to be most affected? How might AI tools be used to support military decisionmaking? What is required to ensure that algorithms don’t worsen inequity?
Answers to these and other important questions can help leaders and policymakers better understand AI and make informed decisions about how to balance promoting innovation with safeguarding against potential dangers.
FEATURED INSIGHTS
- Q&A: Is AI an Existential Risk? Q&A with RAND Experts
- REPORT: Using Artificial Intelligence Tools in K–12 Classrooms
- REPORT: Addressing the Challenges of Algorithmic Equity
Military, Defense, and National Security
- REPORT: Does AI Increase the Operational Risk of Biological Attacks? When researchers role-playing as malign nonstate actors were assigned to realistic scenarios and tasked with planning a biological attack, there was no statistically significant difference in the viability of plans generated with or without the assistance of the current generation of large language models. (Jan 25, 2024)
- COMMENTARY: Building a Foundation for Strategic Stability with China on AI (Apr 2, 2024)
- REPORT: Private-Sector Innovation Could Help Defend Taiwan (Mar 5, 2024)
- REPORT: AUKUS Collaboration on Responsible Military AI (Feb 6, 2024)
- REPORT: Can Machine Learning Improve Military Decisionmaking? (Sep 19, 2023)
- REPORT: Artificial Intelligence Systems in Intelligence Analysis (Aug 26, 2021)
- REPORT: Army Analytic Capabilities (Apr 12, 2021)
- COMMENTARY: Bridging Tech and Humanity: The Role of Foundation Models in Reducing Civilian Harm (Oct 17, 2023)
- REPORT: Exploring the Feasibility and Utility of Machine Learning-Assisted Command and Control (Jul 15, 2021)
- REPORT: Technology Innovation and the Future of Air Force Intelligence Analysis: Findings and Recommendations (Jan 27, 2021)
- RESEARCH BRIEF: The Department of Defense’s Posture for Artificial Intelligence: Assessment and Recommendations for Improvement (Jan 26, 2021)
Health and Well-Being
- REPORT: Using AI to Identify Youth Suicide Risk: What Does the Evidence Say? In response to the youth mental health crisis, some schools have begun using artificial intelligence to help identify students at risk for suicide and self-harm. How are these tools being used? Are they accurate? And what risks might they bring? (Dec 5, 2023)
- COMMENTARY: Robots, Drones, and AI, Oh My: Navigating the New Frontier of Military Medicine (Jan 8, 2024)
- COMMENTARY: Nations Must Collaborate on AI and Biotech—or Be Left Behind (Oct 31, 2023)
- REPORT: Machine Learning and Gene Editing at the Helm of a Societal Evolution (Oct 23, 2023)
- COMMENTARY: Progress or Peril? The Brave New World of Self-Driving Science Labs (Sep 18, 2023)
- JOURNAL ARTICLE: Using Claims-Based Algorithms to Predict Activity, Mobility, and Memory Limitations (Aug 16, 2023)
- JOURNAL ARTICLE: Artificial Intelligence (AI) Use in Adult Social Care (Jun 30, 2023)
- JOURNAL ARTICLE: Artificial Intelligence in the COVID-19 Response (Jun 22, 2023)
- ARTICLE: The Internet of Bodies Will Change Everything, for Better or Worse (Oct 29, 2020)
Misinformation, Disinformation, and Media Literacy
- COMMENTARY: U.S. Adversaries Can Use Generative AI for Social Media Manipulation. Using generative artificial intelligence technology, U.S. adversaries can manufacture fake social media accounts that seem real. These accounts can be used to advance narratives that serve the interests of those governments and pose a direct challenge to democracies. U.S. government, technology, and policy communities should act fast to counter this threat. (Sep 7, 2023)
- COMMENTARY: Biden Should Call China’s Bluff on Responsible AI to Safeguard the 2024 Elections (Nov 14, 2023)
- JOURNAL ARTICLE: Can People Identify Deepfakes? (Aug 23, 2023)
- COMMENTARY: The AI Conspiracy Theories Are Coming (Jun 22, 2023)
- COMMENTARY: The Threat of Deepfakes (Jul 6, 2022)
- REPORT: Machine Learning Can Detect Online Conspiracy Theories (Apr 29, 2021)
Jobs, Workers, and the Economy
- VISUALIZATION: Rage Against the Machine? How AI Could Affect the Future of Work. Understanding how technology and artificial intelligence have—and have not—affected jobs in the past can provide insights on the future of the American workforce. What is the relationship between occupational exposure to artificial intelligence and wages and employment? (Oct 11, 2023)
- REPORT: Will We Hold Algorithms Accountable for Bad Decisions? (Oct 12, 2023)
- REPORT: Technological and Economic Threats to the U.S. Financial System (Feb 13, 2024)
- REPORT: Advancing Equitable Decisionmaking for the Department of Defense Through Fairness in Machine Learning (Jun 13, 2023)
- REPORT: Can Artificial Intelligence Help Improve Air Force Talent Management? (Jan 19, 2021)
- COMMENTARY: Money, Markets, and Machine Learning: Unpacking the Risks of Adversarial AI (Aug 31, 2023)
Technology Governance and Regulation
- TESTIMONY: Advancing Trustworthy Artificial Intelligence. The United States can make safety a differentiator for the artificial intelligence (AI) industry, just as it did for the early aviation, automotive, and pharmaceutical industries. Government involvement in safety standards could build consumer trust in AI that strengthens the U.S. position as a market leader. (Jun 22, 2023)
- COMMENTARY: Generative Artificial Intelligence Threats to Information Integrity (Apr 16, 2024)
- COMMENTARY: Policymaking Needs to Get Ahead of Artificial Intelligence (Jan 12, 2024)
- COMMENTARY: The Case for and Against AI Watermarking (Jan 17, 2024)
- COMMENTARY: Philosophical Debates About AI Risks Are a Distraction (Dec 22, 2023)
- TESTIMONY: Preparing the Federal Response to Advanced Technologies (Sep 19, 2023)
- COMMENTARY: A Model for Regulating AI (Aug 16, 2023)
- COMMENTARY: Tackling the Existential Threats from Artificial Intelligence (Jul 11, 2023)
- TESTIMONY: Ensuring That Government Use of Technology Serves the Public (Jun 22, 2023)
- TESTIMONY: Artificial Intelligence: Challenges and Opportunities for the Department of Defense (Apr 19, 2023)
- TESTIMONY: Challenges to U.S. National Security and Competitiveness Posed by Artificial Intelligence (Mar 8, 2023)
RAND Research That Uses AI
Beyond studying the nexus of AI and public policy, RAND experts regularly use AI in their research—often in novel ways. Here’s just a small sample of RAND studies that put AI to work:
- Conflict Projections in U.S. Central Command: Incorporating Climate Change (2023)
- Deception Detection (2022)
- Facts Versus Opinions: How the Style and Language of News Presentation Is Changing in the Digital Age (2019)
- Monitoring Social Media: Lessons for Future Department of Defense Social Media Analysis in Support of Information Operations (2017)
- Examining ISIS Support and Opposition Networks on Twitter (2016)
Featured Experts
Scores of RAND researchers are studying AI from countless angles, providing key insights that can inform the use and regulation of AI tools now and in the future.
- “I don’t think AI poses an irreversible harm to humanity. I think it can worsen our lives. I think it can have long-lasting harm. But I think it’s ultimately something that we can recover from.” Jonathan W. Welburn, Senior Researcher (Source: rand.org)
- “From our recent research, it appears that extremist groups have been testing AI tools, including chatbots, but there seems to be little evidence of large-scale coordinated efforts in this space. However, chatbots are likely to present a risk, as they are capable of recognising and exploiting emotional vulnerabilities and can encourage violent behaviours.” Pauline Paillé, Senior Analyst (Source: Euronews)
- “I don’t think AI is going to breed a population of people who can’t think for themselves. I actually think there’s a lot of promise in what AI can do to facilitate teaching, to facilitate critical thinking, and to teach in ways that previously we had been unable to teach.” Christopher Joseph Doss, Policy Researcher (Source: Education Week)
Article link: https://www.rand.org/latest/artificial-intelligence.html