healthcarereimagined

Envisioning healthcare for the 21st century


It’s time to talk about the real AI risks – MIT Technology Review

Posted by timmreardon on 06/19/2023
Posted in: Uncategorized.


Experts at RightsCon want us to focus less on existential threats, and more on the harms here and now.

By Tate Ryan-Mosley June 12, 2023

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

This week, I tuned into a bunch of sessions at RightsCon while recovering. The event is the world’s biggest digital rights conference, and after several years of only virtual sessions, the top internet ethicists, activists, and policymakers were back in person in Costa Rica.

Unsurprisingly, everyone was talking about AI and the recent rush to deploy large language models. Ahead of the conference, the United Nations put out a statement encouraging RightsCon attendees to focus on AI oversight and transparency.

I was surprised, however, by how different the conversations about the risks of generative AI were at RightsCon from all the warnings from big Silicon Valley voices that I’ve been reading in the news.

Throughout the last few weeks, tech luminaries like OpenAI CEO Sam Altman, ex-Googler Geoff Hinton, top AI researcher Yoshua Bengio, Elon Musk, and many others have been calling for regulation and urgent action to address the “existential risks”—even including extinction—that AI poses to humanity. 

Certainly, the rapid deployment of large language models without risk assessments, disclosures about training data and processes, or seemingly much attention paid to how the tech could be misused is concerning. But speakers in several sessions at RightsCon reiterated that this AI gold rush is a product of company profit-seeking, not necessarily regulatory ineptitude or technological inevitability.

In the very first session, Gideon Lichfield, the top editor at Wired (and the ex–editor in chief of Tech Review), and Urvashi Aneja, founder of the Digital Futures Lab, went toe to toe with Google’s Kent Walker.

“Satya Nadella of Microsoft said he wanted to make Google dance. And Google danced,” said Lichfield. “We are now, all of us, jumping into the void holding our noses because these two companies are out there trying to beat each other.” Walker, in response, emphasized the social benefits that advances in artificial intelligence could bring in areas like drug discovery, and restated Google’s commitment to human rights. 

The following day, AI researcher Timnit Gebru directly addressed the talk of existential risks posed by AI: “Ascribing agency to a tool is a mistake, and that is a diversion tactic. And if you see who talks like that, it’s literally the same people who have poured billions of dollars into these companies.”

She said, “Just a few months ago, Geoff Hinton was talking about GPT-4 and how it’s the world’s butterfly. Oh, it’s like a caterpillar that takes data and then flies into a beautiful butterfly, and now all of a sudden it’s an existential risk. I mean, why are people taking these people seriously?”

Frustrated with the narratives around AI, experts like Frederike Kaltheuner, Human Rights Watch’s tech and human rights director, suggest grounding ourselves in the risks we already know plague AI rather than speculating about what might come.

And there are some clear, well-documented harms posed by the use of AI. They include:

  • Increased and amplified misinformation. Recommendation algorithms on social media platforms like Instagram, Twitter, and YouTube have been shown to prioritize extreme and emotionally compelling content, regardless of accuracy. LLMs contribute to this problem by producing convincing misinformation known as “hallucinations.” (More on that below.)
  • Biased training data and outputs. AI models tend to be trained on biased data sets, which can lead to biased outputs. That can reinforce existing social inequities, as in the case of algorithms that discriminate when assigning people risk scores for committing welfare fraud, or facial recognition systems known to be less accurate on darker-skinned women than on white men. Instances of ChatGPT spewing racist content have also been documented. (A toy sketch of how label bias flows into outputs follows this list.)
  • Erosion of user privacy. Training AI models requires massive amounts of data, which is often scraped from the web or purchased, raising questions about consent and privacy. Companies that developed large language models like ChatGPT and Bard have not yet released much information about the data sets used to train them, though they certainly contain a lot of data from the internet.
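
To make the bias point concrete, here is a minimal, self-contained sketch in pure Python. Everything in it is hypothetical: the groups, the flag rates, and the “risk score” are invented for illustration, not drawn from any real system. It shows the mechanism, not a real model: when historical labels are skewed against one group, even a trivially simple learner reproduces that skew in its outputs.

```python
# Toy sketch (hypothetical data): how label bias in training data flows
# straight through to a model's outputs. The "model" just memorizes the
# per-group flag frequency -- the kind of shortcut a real system can
# learn when a sensitive attribute correlates with biased labels.
import random

random.seed(0)

# Hypothetical history: group B was flagged for "fraud" twice as often
# as group A for identical behavior (the bias lives in the labels).
def make_row(group):
    flag_rate = 0.10 if group == "A" else 0.20
    return {"group": group, "flagged": random.random() < flag_rate}

train = [make_row("A") for _ in range(5000)] + \
        [make_row("B") for _ in range(5000)]

# "Training": record the observed flag rate per group.
def fit(rows):
    return {
        g: sum(r["flagged"] for r in rows if r["group"] == g)
           / sum(1 for r in rows if r["group"] == g)
        for g in ("A", "B")
    }

model = fit(train)

# The learned scores reproduce the historical skew: group B members get
# roughly double the risk score for the same underlying behavior.
print(f"risk score, group A: {model['A']:.3f}")
print(f"risk score, group B: {model['B']:.3f}")
```

Nothing in this “model” looks at behavior at all; it simply echoes the disparity baked into its training labels, which is the same basic mechanism behind the welfare-fraud and facial-recognition examples above.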

Kaltheuner says she’s especially concerned generative AI chatbots will be deployed in risky contexts such as mental health therapy: “I’m worried about absolutely reckless use cases of generative AI for things that the technology is simply not designed for or fit for purpose.” 

Gebru reiterated concerns about the environmental impacts resulting from the large amounts of computing power required to run sophisticated large language models. (She says she was fired from Google for raising these and other concerns in internal research.) Moderators of ChatGPT, who work for low wages, have also experienced PTSD in their efforts to make model outputs less toxic, she noted. 

Regarding concerns about humanity’s future, Kaltheuner asks, “Whose extinction? Extinction of the entire human race? We are already seeing people who are historically marginalized being harmed at the moment. That’s why I find it a bit cynical.”

What else I’m reading

  • US government agencies are deploying GPT-4, according to an announcement from Microsoft reported by Bloomberg. OpenAI might want regulation for its chatbot, but in the meantime, it also wants to sell it to the US government.
  • ChatGPT’s hallucination problem might not be fixable. According to researchers at MIT, large language models get more accurate when they debate each other, but factual accuracy is not built into how they work, as broken down in this really handy story from the Washington Post. (A structural sketch of that debate loop follows this list.) If hallucinations are unfixable, we may only be able to reliably use tools like ChatGPT in limited situations.
  • According to an investigation by the Wall Street Journal, Stanford University, and the University of Massachusetts, Amherst, Instagram has been hosting large networks of accounts posting child sexual abuse content. The platform responded by forming a task force to investigate the problem. It’s pretty shocking that such a significant problem could go unnoticed by the platform’s content moderators and automated moderation algorithms.
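
For the hallucination item above, here is a minimal, provider-agnostic sketch of the “models debate each other” loop described in the Post story. It is an assumption-laden outline, not anyone’s published implementation: ask_model is a hypothetical placeholder for whatever chat-model API you use, and the prompts are invented. The point is the structure: independent first answers, then rounds of mutual critique and revision.

```python
# Sketch of a multi-agent debate loop for improving factual accuracy.
# `ask_model` is a hypothetical stand-in, not a real library call.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def debate(question: str, n_agents: int = 2, n_rounds: int = 2) -> list[str]:
    # Round 0: each agent answers the question independently.
    answers = [ask_model(f"Question: {question}\nAnswer concisely.")
               for _ in range(n_agents)]
    # Debate rounds: each agent sees the others' answers and revises.
    for _ in range(n_rounds):
        answers = [
            ask_model(
                f"Question: {question}\n"
                f"Other agents said: {[a for j, a in enumerate(answers) if j != i]}\n"
                f"Your previous answer: {answers[i]}\n"
                "Point out any errors above, then give a revised answer."
            )
            for i in range(n_agents)
        ]
    # Per the research described above, the revised answers tend to be
    # more accurate -- but accuracy still is not guaranteed by design.
    return answers
```

Note what the loop does and does not buy you: agreement between agents tends to rise with each round, but the process has no ground truth in it, which is exactly why the researchers caution that factual accuracy is not built in.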

What I learned this week

A new report by the South Korea–based human rights group PSCORE details the days-long application process required to access the internet in North Korea. Just a few dozen families connected to Kim Jong-Un have unrestricted access to the internet, and only a “few thousand” government employees, researchers, and students can access a version that is subject to heavy surveillance. As Matt Burgess reports in Wired, Russia and China likely supply North Korea with its highly controlled web infrastructure.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/06/12/1074449/real-ai-risks/amp/
