healthcarereimagined

Envisioning healthcare for the 21st century

When AI Gets It Wrong, Will It Be Held Accountable? – RAND

Posted by timmreardon on 11/26/2024
Posted in: Uncategorized.

Elina Treyger followed the news with a growing sense of unease. Every day seemed to bring new examples of artificial intelligence making important decisions in people’s lives. What happens, she wondered, when it costs someone a job or flags an innocent person for fraud?

Treyger is a senior political scientist at RAND, but she’s also a lawyer. And what she really wanted to know was: Will people sue an algorithm? Will juries assign blame to something with no motives whatsoever?

She and a small team of researchers at RAND decided to find out. They designed a survey to test whether people are any less likely to challenge decisions delivered with the cold certainty of an AI. The results underscore the important role the American legal system can play in protecting people from algorithmic harm. The people in the survey were perfectly willing to take the computers to court.

“The legal system incentivizes good behavior and exposes bad outcomes through legal accountability,” Treyger said. “There’s been some concern that, if you’re on the receiving end of a bad algorithmic decision, you might not even know whom you could sue. But, as it turned out, at least in our experiment, that didn’t stop people.”

It’s not hard to find examples of bad algorithmic decisions. A few years ago, an automated system wrongly sent thousands of Michigan residents to collections for unemployment fraud. AI systems trained on historically biased data have recommended disproportionate jail time for Black defendants. A system trained on male-dominated employment data learned to penalize resumes from women.

But when the stakes are that high, when someone’s freedom or financial well-being is on the line, what recourse do people actually have when an AI gets it wrong?

The European Union recently gave people the legal right to get an explanation for any AI decisions that go against them, and to contest those decisions in court. Nothing that explicit exists in the United States. Policymakers here have mostly focused on regulating AI systems up front, making sure they cannot cause catastrophic harm before they go online. Addressing bad outcomes afterward has fallen, in part, to individuals willing to take their chances in court.

But will they? Treyger is not the first legal scholar to worry about that. For one thing, it’s not at all clear who the responsible party would be. The developers who wrote the code? The company that used it? It’s also often hard to know why an AI made a particular decision, which makes it tough to prove that the decision was wrong. As a 2021 paper in the Columbia Law Review put it, machine-made decisions are often “technically inscrutable and thus difficult to contest.”

Treyger and her team fielded their survey to provide the first nationally representative look at what people will actually do when faced with an unfair AI outcome. They asked 5,000 respondents to consider two scenarios.

In the first, a very well-qualified candidate applies for a job, makes it through the interviews—but then doesn’t get hired. The second scenario raises the stakes: An unemployed worker applies for benefits, gets rejected—and then gets flagged for potential fraud. For both scenarios, some of the respondents had a human making the decisions, and some had a computer.

In both scenarios, those who got the computer were much more likely to say the process was unfair and produced inaccurate results. They also were roughly 10 percentage points more likely to say it wasn’t transparent enough. The results point to what the researchers described as an “algorithmic penalty.” People seem willing to give human decisionmakers some leeway, even when they disagree with their decisions, but not computers.
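To make a gap like that concrete, here is a minimal Python sketch of how such a penalty could be estimated from a randomized vignette survey: compare the share of respondents who call the process unfair in the computer arm against the human arm. The counts below are hypothetical placeholders chosen to mirror a roughly 10-point gap; they are not the study’s data, and the function is not the RAND team’s actual analysis.

```python
import math

def penalty_with_ci(unfair_algo, n_algo, unfair_human, n_human, z=1.96):
    """Difference in 'unfair' proportions (algorithm arm minus human arm)
    with a Wald 95% confidence interval."""
    p_a = unfair_algo / n_algo
    p_h = unfair_human / n_human
    diff = p_a - p_h
    se = math.sqrt(p_a * (1 - p_a) / n_algo + p_h * (1 - p_h) / n_human)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical arms of 2,500 respondents each (the survey had 5,000 total);
# the counts are invented for illustration, not taken from the study.
diff, (lo, hi) = penalty_with_ci(unfair_algo=1250, n_algo=2500,
                                 unfair_human=1000, n_human=2500)
print(f"algorithmic penalty: {diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
```

With arms this large, even a 10-point difference comes with a narrow confidence interval, which is why randomized splits of a 5,000-person sample can separate an algorithmic penalty from noise.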

The researchers then asked the respondents what they would do if they were the people in the two scenarios.

Even in the unemployment scenario, in which the outcome was not just wrong but harmful, a third of those with a human decisionmaker said they would do nothing. Fewer than a quarter of those with a computer in the mix were willing to let it slide. They were much more likely to say they would appeal, and slightly more likely to say they would sue. Respondents in both scenarios also were much more likely to say they would join a class-action lawsuit when the decisions were made by a computer.

“That’s encouraging,” Treyger said. “It means they’re not exempting algorithms from our general moral judgments. They’re willing to take legal action to redress algorithmic harms. That can be a real mechanism for accountability.”

White respondents tended to penalize the AI more harshly than non-white respondents did on most measures. The one exception was bias, and even there the differences were small. That may seem surprising; studies consistently show that AI systems trained on historical data learn to repeat historical biases, especially against racial and ethnic minorities.

But when the researchers dug into the survey data, they found that non-white respondents didn’t necessarily trust the AI more when it came to questions of bias. They trusted human decisionmakers less. They declined to penalize the AI more harshly on bias because they didn’t expect the humans to make unbiased decisions, either.

The survey results suggest that people will continue to look to the courts to defend their rights even in the era of AI. Algorithmic decisionmakers might not have the intent or state of mind of humans, the researchers wrote—but that won’t prevent legal action when they cause undue harm.

Policymakers working to regulate AI should consider not just problems of bias, but also of accuracy and transparency. And they should consider spelling out a specific legal right for people to contest AI decisions, much like the European Union has.

“In some settings, you would just presume that existing standards cover that,” Treyger said. “We have a lot of antidiscrimination laws, and it seems like those would be just as applicable in an algorithmic context. But it’s not always so clear. And so one implication of our study is, yes, we should establish pretty clearly these rights in the law.”

Michigan just got a hard lesson in how willing people are to go to court when an algorithm upends their lives. Thousands of residents filed a class-action lawsuit when the state’s automated unemployment system wrongly accused them of fraud. Earlier this year, the state finalized $20 million in payments to settle the case.

Article link: https://www.linkedin.com/pulse/when-ai-gets-wrong-held-accountable-rand-corporation-mwh7c?
