healthcarereimagined

Envisioning healthcare for the 21st century

Feds beware: New studies demonstrate key AI shortcomings – Nextgov

Posted by timmreardon on 06/03/2024
Posted in: Uncategorized.

By JOHN BREEDEN II | MAY 14, 2024

Recent studies have started to show that there are serious downsides to generative AI programs’ ability to produce secure code.

It’s no secret that artificial intelligence is almost everywhere these days. And while some groups are worried about potentially devastating consequences if the technology continues to advance too quickly, most government agencies are pretty comfortable adopting AI for more practical purposes, employing it in ways that can help advance agency missions.

And the federal government has plenty of guidelines in place for using AI. For example, the AI Accountability Framework for Federal Agencies provides guidance for agencies that are building, selecting or implementing AI systems. According to GAO and the educational institutions that helped to draft the framework, the most responsible uses of AI in government should be centered around four complementary principles: governance, data, performance and monitoring.

Writing computer code, or monitoring code written by humans to look for vulnerabilities, fits within that framework. And it’s also a core capability that most of the new generative AIs easily demonstrate. For example, when the most popular generative AI program, ChatGPT, upgraded to version 4.0, one of the first things that developer OpenAI did at the unveiling was to have the AI write the code to quickly generate a live webpage.

Given how quickly most generative AIs can code, it’s little wonder that, according to a recent survey by GitHub, more than 90% of developers are already using AI coding tools to help speed up their work. That means that the underlying code for most applications and programs being created today is at least partially written by AI, and that includes code that is written or used by government agencies. However, while the pace at which AI can generate code is impressive, recent studies have started to show that there are serious downsides that come along with that speed, especially when it comes to security.

Trouble in AI coding paradise

The new generative AIs have only been successfully coding for, at most, a couple of years, depending on the model. So it’s little wonder that evaluations of their coding prowess are slow to catch up. But studies are being conducted, and the results don’t bode well for the future of AI coding, especially for mission-critical areas within government, at least not without some serious improvements.

While AIs are generally able to quickly create apps and programs that work, many of those AI-created applications are also riddled with cybersecurity vulnerabilities that could cause huge problems if dropped into a live environment. For example, in a recent study conducted by the University of Quebec, researchers asked ChatGPT to generate 21 different programs and applications in a variety of programming languages. While every one of the applications the AI coded worked as intended, only five of them were secure from a cybersecurity standpoint. The rest had dangerous vulnerabilities that attackers could easily use to compromise anyone who deployed them.
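To make that pattern concrete, here is a minimal sketch, invented for illustration and not taken from the study, of the kind of flaw the researchers describe: a database lookup that works perfectly in testing but is open to SQL injection, alongside the small parameterized fix.

    import sqlite3

    # Illustrative only -- not code from the study; the table and column
    # names are invented. This lookup "works," but building SQL by string
    # formatting invites injection: passing username = "' OR '1'='1" would
    # return every row in the table.
    def find_user_vulnerable(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    # The fix is small: let the database driver bind the value as a parameter.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        return conn.execute(
            "SELECT id, email FROM users WHERE username = ?", (username,)
        ).fetchall()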

And these were not minor security flaws either. They included almost every single vulnerability listed by the Open Web Application Security Project (OWASP), and many others.

In an effort to find out why AI coding was so dangerous from a cybersecurity standpoint, researchers at the University of Maryland, UC Berkeley and Google decided to switch things up a bit and task generative AI not with writing code, but with examining already-assembled programs and applications to look for vulnerabilities. That study used 11 AI models, each of which was fed hundreds of examples of programs in multiple languages. Applications rife with known vulnerabilities were mixed in with other code examples that had been certified as secure by human security experts.

The results of that study were really bad for the AIs. Not only did they fail to detect hidden vulnerabilities, with some AIs missing over 50% of them, but most also flagged secure code as vulnerable when it was not, leading to a high rate of false positives. Those dismal results seem to have surprised even the researchers, who decided to try to correct the problem by training the AIs in better vulnerability detection. They fed the generative AIs thousands of examples of both secure and insecure code, along with explanations whenever a vulnerability was introduced.
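In detection terms, the two failure modes the researchers measured are false negatives (missed vulnerabilities) and false positives (secure code flagged as vulnerable). A minimal sketch of how such an evaluation is tallied, with invented labels and predictions standing in for the study’s actual data:

    # Sketch of scoring a vulnerability-detection evaluation (invented data).
    # "labels" are ground-truth verdicts from human experts; "predictions"
    # are the model's verdicts. True means "vulnerable."
    labels      = [True, True, False, False, True, False]
    predictions = [False, True, True, False, False, False]

    false_negatives = sum(1 for y, p in zip(labels, predictions) if y and not p)
    false_positives = sum(1 for y, p in zip(labels, predictions) if not y and p)

    miss_rate = false_negatives / sum(labels)                # share of vulnerabilities missed
    fp_rate = false_positives / (len(labels) - sum(labels))  # share of secure code misflagged
    print(f"missed {miss_rate:.0%} of vulnerabilities; {fp_rate:.0%} false-positive rate")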

Surprisingly, that intense training did little to improve AI performance. Even when the researchers expanded the large language models the AIs used to look for vulnerable code, the final results were still unacceptably bad, both in terms of false positives and vulnerabilities slipping through undetected. That led the researchers to conclude that no matter how much they tweaked the models, the current generation of AI and “deep learning is still not ready for vulnerability detection.”

Why is AI so bad at secure coding?

All of the studies referenced here are relatively new, so there is not a lot of explanation yet as to why generative AI, which performs well at so many tasks, would be so bad when it comes to spotting vulnerabilities or writing secure code. The experts I talked with said the most likely reason is that generative AIs are trained on thousands or even millions of examples of human-written code from open sources, code libraries and other repositories, and much of that code is heavily flawed. Generative AI may simply be too poisoned by all the bad examples used in its training to be easily redeemed. Even when the researchers behind the University of Maryland and UC Berkeley study tried to correct the models with fresh data, their new examples were just a drop in the bucket, and not nearly enough to improve performance.

One study, conducted by Secure Code Warrior, did try to address this question directly with an experiment that selectively fed generative AIs specific examples of both vulnerable and secure code, tasking them with identifying any security threats. In that study, the differences between the secure and vulnerable code examples presented to the AIs were very subtle, which helped researchers determine which factors were specifically tripping up the AIs when it came to vulnerability detection in code.

According to SCW, one of the biggest reasons that generative AIs struggle with secure coding is a lack of contextual understanding of how the code in question fits into larger projects or the overall infrastructure, and of the security issues that can stem directly from that. SCW gives several examples to prove the point: a snippet of code can be perfectly secure when it is used to trigger a standalone function, yet become vulnerable through business logic flaws, improper permissions or security misconfigurations once it is integrated into a larger system or project. Because generative AIs generally don’t understand the context in which the code they are examining will be used, they often flag secure code as vulnerable, or vulnerable code as safe, as the sketch below illustrates.
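Here is a hypothetical illustration of that point; all names are invented rather than drawn from the SCW report. The helper below is perfectly reasonable in isolation, but wiring it directly to a web route turns it into a broken-access-control flaw, because nothing checks that the requester owns the document.

    import sqlite3

    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "dev-only"  # placeholder so sessions work in this sketch

    def get_db():
        # Stand-in for however the app really manages its connections.
        return sqlite3.connect("app.db")

    # In isolation this helper looks secure: delete one document by id.
    def delete_document(db, doc_id: int) -> None:
        db.execute("DELETE FROM documents WHERE id = ?", (doc_id,))

    # Integrated into a web app, the same snippet becomes a broken-access-
    # control hole: any logged-in user can delete any other user's document.
    @app.route("/documents/<int:doc_id>", methods=["DELETE"])
    def delete_endpoint(doc_id: int):
        delete_document(get_db(), doc_id)  # vulnerable: no ownership check
        return "", 204

    # The secure version depends on context the snippet alone never shows.
    @app.route("/v2/documents/<int:doc_id>", methods=["DELETE"])
    def delete_endpoint_v2(doc_id: int):
        row = get_db().execute(
            "SELECT owner_id FROM documents WHERE id = ?", (doc_id,)
        ).fetchone()
        if row is None or row[0] != session.get("user_id"):
            abort(403)  # only the owner may delete
        delete_document(get_db(), doc_id)
        return "", 204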

In a sense, because an AI does not know the context of how code will be used, it ends up guessing about the code’s vulnerability status, since AIs almost never admit that they don’t know something. The other area where AIs struggled in the SCW study was when a vulnerability came down to something small, like the order of input parameters. Generative AIs may simply not be experienced enough to recognize how such a detail, buried in the middle of a large snippet of code, can lead to security problems.
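To make the parameter-order point concrete, here is an invented example, not one from the SCW study, using Python’s standard hmac module: swapping the first two arguments still runs and still produces a plausible-looking tag, but it keys the MAC with the attacker-visible message instead of the secret, silently destroying the security guarantee.

    import hashlib
    import hmac

    secret_key = b"server-side-secret"       # invented values for illustration
    message = b"amount=100&recipient=alice"

    # Correct: the MAC is keyed with the secret.
    good_tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

    # Swapped arguments: the code runs, and tests that merely round-trip the
    # tag still pass, but the "key" is now the public message, so anyone who
    # can read the message can forge valid-looking tags.
    bad_tag = hmac.new(message, secret_key, hashlib.sha256).hexdigest()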

The study does not offer a solution for fixing an AI’s inability to spot insecure code, but it does say that generative AI could still have a role in coding, provided it is paired tightly with experienced human developers who can keep a watchful eye on their AI companions. For now, without a good technical solution, that may be the best path forward for agencies that need to tap into the speed generative AI offers when coding, but can’t accept the risks that come with unsupervised AI independently creating government applications and programs.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys

Article link: https://www.nextgov.com/artificial-intelligence/2024/05/feds-beware-new-studies-demonstrate-key-ai-shortcomings/396526/?
