healthcarereimagined

Envisioning healthcare for the 21st century

EU AI Act To Target US Open Source Software

Posted by timmreardon on 05/29/2023
Posted in: Uncategorized.

Delos Prime May 13, 2023

In a bold stroke, the EU’s amended AI Act would ban American companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models.  The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe.  While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.

Any model made available in the EU without first passing extensive – and expensive – licensing would subject companies to massive fines: the greater of €20,000,000 or 4% of worldwide revenue.  Open-source developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The EU is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.

If enacted, enforcement would be out of the hands of EU member states. Under the AI Act, third parties could sue national governments to compel fines. The act has extraterritorial jurisdiction. A European government could be compelled by third parties to seek conflict with American developers and businesses.

The Amended AI Act

The PDF of the actual text runs 144 pages, and its provisions follow a different formatting style from American statutes, which makes the thing a complicated pain to read.  I’ve added the page numbers of the relevant sections in the linked PDF of the law.

Here are the main provisions:

Very Broad Jurisdiction:  The act includes “providers and deployers of AI systems that have their place of establishment or are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union.” (pg 68-69).

Mandatory Registration:  You have to register your “high-risk” AI project or foundational model with the government.  Projects will be required to register the anticipated functionality of their systems.  Systems that exceed this functionality may be subject to recall.  This will be a problem for many of the more anarchic open-source projects.  Registration will also require disclosure of data sources used, computing resources (including time spent training), performance benchmarks, and red teaming. (pg 23-29).

Expensive Risk Testing Required.  Apparently, the various EU states will carry out “third party” assessments in each country, on a sliding scale of fees depending on the size of the applying company.  Testing must use benchmarks that have yet to be created.  Post-release monitoring is required (presumably by the government).  Recertification is required if models show unexpected abilities, and again after any substantial training.  (pg 14-15, see provision 4a for clarity that this is government testing).

Risks Very Vaguely Defined:  The list of risks includes risks to such things as the environment, democracy, and the rule of law. What’s a risk to democracy?  Could this act itself be a risk to democracy? (pg 26).

Open Source LLMs Not Exempt:  Open source foundational models are not exempt from the act.  The programmers and distributors of the software have legal liability.  For other forms of open source AI software, liability shifts to the group employing the software or bringing it to market.  (pg 70).

API Essentially Banned.  APIs allow third parties to implement an AI model without running it on their own hardware.  Some implementation examples include AutoGPT and LangChain.  Under these rules, if a third party, using an API, figures out how to get a model to do something new, that third party must then get the new functionality certified.

The original provider is required, under the law, to give the third party what would otherwise be confidential technical information so that the third party can complete the licensing process.  The ability to compel confidential disclosures means that startup businesses and other tinkerers are essentially banned from using an API, even if the tinkerer is in the US: the tinkerer might make their software available in Europe, which would trigger the licensing requirement and the compelled disclosures. (pg 37).
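To make the API point concrete, here is a minimal sketch of what third-party API access typically looks like: the caller never runs the model on their own hardware, they just send a request to the provider’s hosted endpoint and get text back.  The endpoint URL, credential, and JSON fields below are hypothetical placeholders for illustration, not any particular provider’s actual API.

    # Minimal illustration of third-party access to a hosted generative model via an API.
    # The endpoint, key, and JSON fields are hypothetical placeholders;
    # real providers define their own schemas.
    import requests

    API_URL = "https://api.example-provider.com/v1/generate"  # placeholder endpoint
    API_KEY = "sk-placeholder"                                # placeholder credential

    def generate(prompt: str) -> str:
        """Send a prompt to the hosted model and return the generated text."""
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": prompt, "max_tokens": 256},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["text"]

    # The model runs entirely on the provider's hardware; the caller only sees the output.
    # Under the amended act, a new capability discovered through calls like this
    # would itself need to be certified.
    print(generate("Summarize the proposed certification requirements."))

Tools such as AutoGPT and LangChain are essentially orchestration layers over calls like this, which is why a certification requirement for new third-party uses reaches so deep into the ecosystem.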

Open Source Developers Liable.  The act is poorly worded.  It does not cover free and open-source AI components, but foundational models (LLMs) are considered separate from components.  What this seems to mean is that you can open-source traditional machine learning models but not generative AI.

If an American open-source developer placed a model, or code using an API, on GitHub – and the code became available in the EU – the developer would be liable for releasing an unlicensed model.  Further, GitHub would be liable for hosting an unlicensed model.  (pg 37 and 39-40).

LoRA Essentially Banned.  LoRA (low-rank adaptation) is a technique for incrementally adding new information and capabilities to a model at low cost.  Open-source projects use it because they cannot afford billion-dollar computing infrastructure.  Major AI models are also rumored to use it, as training this way is both cheaper and easier to safety-check than a new version of a model that introduces many new features at once.  (pg 14).

If an open-source project could somehow get the required certificates, it would need to recertify every time LoRA was used to expand the model.
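For readers unfamiliar with the technique, here is a minimal sketch of the idea behind LoRA, assuming a PyTorch-style setup: the pretrained weights are frozen and only a pair of small low-rank matrices is trained, which is why it is so much cheaper than full retraining.  The class below is my own illustration, not code from any particular project.

    # Minimal sketch of a LoRA-style linear layer (illustrative only).
    # The pretrained weight matrix is frozen; only the low-rank factors A and B train,
    # so each adaptation touches a tiny fraction of the model's parameters.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = nn.Linear(in_features, out_features)
            self.base.weight.requires_grad_(False)  # freeze the pretrained weights
            self.base.bias.requires_grad_(False)
            self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Output = frozen base projection + scaled low-rank update
            return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

    layer = LoRALinear(768, 768)
    print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])

Each such adaptation changes the model’s behavior, which is exactly what would trigger the recertification requirement described above.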

Deployment Licensing.  Deployers (people or entities using AI systems) are required to undergo a stringent permitting review process before launch.  EU small businesses are exempt from this requirement. (pg 26).

Ability of Third Parties to Litigate.  Concerned third parties have the right to litigate through a country’s AI regulator (established by the act).  This means that the deployment of an AI system can be individually challenged in multiple member states.  Third parties can litigate to force a national AI regulator to impose fines. (pg 71).

Very Large Fines.  Fines for non-compliance range from 2% to 4% of a company’s gross worldwide revenue.  For individuals, fines can reach €20,000,000.  European-based SMEs and startups get a break when it comes to fines. (pg 75).
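As a rough worked example of the “greater of €20,000,000 or 4% of worldwide revenue” formula mentioned above (my own arithmetic, not figures from the act): a company with €1 billion in worldwide revenue would face up to €40,000,000, since 4% of €1 billion exceeds the €20,000,000 floor, while for any company with revenue under €500,000,000 the €20,000,000 floor is the larger number.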

R&D and Clean Energy Systems In The EU Are Exempt.  AI can be used for R&D tasks or clean energy production without complying with this system. (pg 64-65).

AI Act and US Law

The broad grant of extraterritorial jurisdiction is going to be a problem.  The AI Act would let any crank with a grievance about AI – at least if they are an EU citizen – force EU governments to take legal action if unlicensed models were somehow available in the EU.  That goes far beyond simply requiring companies doing business in the EU to comply with EU laws.

The top problem is the API restrictions.  Currently, many American cloud providers do not restrict access to API models, outside of waiting lists which providers are rushing to fill.  A programmer at home, or an inventor in their garage, can access the latest technology at a reasonable price.  Under the AI Act restrictions, API access becomes complicated enough that it would be restricted to enterprise-level customers.

What the EU wants runs contrary to what the FTC is demanding.  For an American company to actually impose such restrictions in the US would raise a host of antitrust problems.  Model training costs already limit availability to highly capitalized actors.  The FTC has been very frank that it does not want to see a repeat of the Amazon situation, where a larger company uses its position to secure the bulk of profits for itself – at the expense of smaller partners.  Acting in the manner the AI Act seeks would raise major antitrust issues for American companies.

Beyond the antitrust issues, the AI Act’s punishment of innovation represents a conflict point.  For American actors, finding a new way to use software to make money is a good thing.  Under the AI Act, finding a new way to use software voids the safety certification and requires a new licensing process.  Disincentives to innovation are likely to cause friction given the statute’s extraterritorial reach.

Finally, the open-source provisions represent a major problem.  The AI Act treats open-source developers working on or with foundational models as bad actors.  Developers and, seemingly, distributors are liable for releasing unlicensed foundation models – or, apparently, code that enhances foundation models.  For all other forms of open-source machine learning, the responsibility for licensing falls to whoever deploys the system.

Trying to sanction parts of the tech ecosystem is a bad idea.  Open-source developers are unlikely to respond well to being told by a government that they can’t program something – especially if the government isn’t their own.  Additionally, what happens if GitHub and the various co-pilots simply decide that Europe is too difficult to deal with and shut down access?  That may have repercussions that have not been thoroughly thought through.

Defects of the Act

To top everything off, the AI Act appears to encourage unsafe AI.  It seeks to encourage narrowly tailored systems, and we know from experience – especially with social media – that such systems can be dangerous.  Infamously, many social media algorithms only look at the engagement value of content; they are structurally incapable of judging the content’s effects.  Large language models can at least be trained that pushing violent content is bad.  From an experience standpoint, the foundational models the EU is afraid of are safer than the narrow systems the act would drive development toward.

This is a deeply corrupt piece of legislation.  If you are afraid of large language models, then you need to be afraid of them in all circumstances.  Giving R&D models a pass shows that you are less than serious about your legislation.  The most likely effect of such a policy is to create a society where the elite have access to R&D models, and nobody else – including small entrepreneurs – does.

I suspect this law will pass, and I suspect the EU will find that they have created many more problems than they anticipated. That’s unfortunate as some of the regulations, especially relating to algorithms used by large social networks, do need addressing.

Article link: https://technomancers.ai/eu-ai-act-to-target-us-open-source-software/
