healthcarereimagined

Envisioning healthcare for the 21st century

AI Companies Say Safety Is a Priority. It’s Not – RAND

Posted by timmreardon on 07/28/2024
Posted in: Uncategorized.

COMMENTARY Jul 9, 2024

By Douglas Yeung

This commentary originally appeared in the San Francisco Chronicle on July 9, 2024.

It could save us or it could kill us.

That’s what many of the top technologists in the world believe about the future of artificial intelligence. This is why companies like OpenAI emphasize their dedication to seemingly conflicting goals: accelerating technological progress as rapidly—but also as safely—as possible.

It’s a laudable intention, but none of these companies seems to be succeeding.

Take OpenAI, for example. The leading AI company in the world believes the best approach to building beneficial technology is to ensure that its employees are “perfectly aligned” with the organization’s mission. That sounds reasonable, but what does it mean in practice?

A lot of groupthink—and that is dangerous.

As social animals, it’s natural for us to form groups or tribes to pursue shared goals. But these groups can grow insular and secretive, distrustful of outsiders and their ideas. Decades of psychological research have shown how groups can stifle dissent by punishing or even casting out dissenters. Before the 1986 Challenger space shuttle explosion, engineers expressed safety concerns about how the rocket boosters would perform in freezing weather. Yet the engineers were overruled by their leadership, who may have felt pressure to avoid delaying the launch.


According to a group of AI insiders, something similar is taking place at OpenAI. In an open letter, nine current and former employees charged that the company uses hardball tactics to stifle dissent from workers about its technology. One of the researchers who signed the letter described the company as “recklessly racing” for dominance in the field.

It’s not just happening at OpenAI. Earlier this year, an engineer at Microsoft grew concerned that the company’s AI tools were generating violent and sexual imagery. He first tried to get the company to pull them off the market, but when that didn’t work, he went public. Then, he said, Microsoft’s legal team demanded he delete the LinkedIn post in which he had raised his concerns.

In 2021, former Facebook product manager Frances Haugen revealed internal research showing that the company knew the algorithms—often referred to as the building blocks of AI—that Instagram used to surface content for young users were exposing teen girls to images harmful to their mental health. When asked in a “60 Minutes” interview why she spoke out, Haugen responded, “Person after person after person has tackled this inside of Facebook and ground themselves to the ground.”

Leaders at AI companies claim they have a laser focus on ensuring that their products are safe. They have, for example, commissioned research, set up “trust and safety” teams, and even started new companies to help achieve these aims. But these claims are undercut when insiders paint a familiar picture of a culture of negligence and secrecy that—far from prioritizing safety—dismisses warnings and hides evidence of unsafe practices, whether to preserve profits, avoid slowing progress, or simply spare the feelings of leaders.

So what can these companies do differently?

As a first step, AI companies could ban nondisparagement or confidentiality clauses. The OpenAI whistleblowers asked for that in their open letter and the company says it has already taken such steps. But removing explicit threats of punishment isn’t enough if an insular workplace culture continues to implicitly discourage concerns that might slow progress.

Rather than simply allowing dissent, tech companies could encourage it, putting more options on the table. This could involve, say, beefing up the “bug bounty” programs that tech companies already use to reward employees and customers who identify flaws in their software. Companies could embed a “devil’s advocate” role inside software or policy teams that would be charged with opposing consensus positions.

AI companies might also learn from how other highly skilled, mission-focused teams avoid groupthink. Military special operations forces prize group cohesion but recognize that cultivating dissent—from anyone, regardless of rank or role—might prove the difference between life and death. For example, Army doctrine—fundamental principles of military organizations—emphasizes that special operations forces must know how to employ small teams and individuals as autonomous actors.

Finally, organizations already working to make AI models more transparent could shed light on their inner workings. Secrecy has been ingrained in how many AI companies operate; rebuilding public trust could require pulling back that curtain by, for example, more clearly explaining safety processes or publicly responding to criticism.


To be sure, group decisionmaking can benefit from pooling information or overcoming individual biases, but too often it results in overconfidence or conformity to group norms. With AI, the stakes of silencing those who don’t toe the company line, instead of viewing them as vital sources of mission-critical information, are too high to ignore.

It’s human nature to form tribes—to want to work with and seek support from a tight group of like-minded people. It’s also admirable, if grandiose, to adopt as one’s mission nothing less than building tools to tackle humanity’s greatest challenges. But AI technologies will likely fall short of that lofty goal—rapid yet responsible technological advancement—if their developers fall prey to a fundamental human flaw: refusing to heed hard truths from those who would know.

Douglas Yeung is a senior behavioral scientist at RAND and a member of the Pardee RAND Graduate School faculty.

Article link: https://www.rand.org/pubs/commentary/2024/07/ai-companies-say-safety-is-a-priority-its-not.html
