healthcarereimagined

Envisioning healthcare for the 21st century


State Dept. Cyber Chief Looks to AI to Write Secure Software

Posted by timmreardon on 06/22/2023
Posted in: Uncategorized.

BY: GRACE DILLE

JUN 21, 2023 2:53 PM

As the Federal government works to manage the potential risks that AI-driven systems can present, the head of the State Department’s Bureau of Cyberspace and Digital Policy said on June 21 that one positive AI application he’s excited about is using the technology to write more secure software.

At an event hosted by the Hudson Institute, Nathaniel Fick – the State Department’s inaugural ambassador at large for cyberspace and digital policy – explained how this application of AI tech can help to address “this sort of happy-go-lucky, laissez-faire software developer world” that introduces untrusted code and vulnerabilities.

“One of the good applications of AI that I’m most excited about is using AI to write better software. That’s pretty exciting to see the bug rate go way down,” Fick said.

“It is a road to really realize one of the pillars of the National Cybersecurity Strategy, which is really focused on building better software, and incentivizing that, and creating kind of incentive structures and liability punitive structures to require the developers of software that we all rely upon to build good stuff,” he said.

The White House released its National Cybersecurity Strategy (NCS) in March and is working fast to develop an implementation plan for the strategy, as well as a workforce strategy to build a more resilient future.

One key pillar of the NCS, as Fick mentioned, is to “rebalance” the responsibility to defend cyberspace by shifting the cybersecurity burden away from individuals, small businesses, and local governments, and onto the organizations that are best-positioned to reduce risks for all of us – such as software developers.

However, Fick acknowledged that AI also has a dangerous flip side, and he stressed the importance of developing AI regulations. He explained that there are four U.S. companies right now that have leadership positions in AI technologies: Google, Microsoft, OpenAI, and a smaller company called Anthropic.

“As we think about timelines, how much time do we have, how long is it going to take to develop a fifth model that has that capability – a fifth model that’s either built by a company that’s less trustworthy, or a model that’s open sourced? The best answer I can get is it’s less than a year,” Fick said. “We don’t have a lot of time. If this is 1945, we don’t have until 1957 to put together some sort of a regulatory or governance infrastructure.”

So, what is the Federal government going to do about it? Fick said the first step is to start with those four big companies, which will sign up for “voluntary commitments” around AI guardrails.

“Voluntary commitments by definition will not stifle innovation. They, I think also, are likely to be a starting point but not an ending point. But they have the great benefit of speed,” Fick said. “We’ve got to get something out in the world now. And then we’re going to iterate on it and build on it over time.”

Article link: https://www.meritalk.com/articles/state-dept-cyber-chief-looks-to-ai-to-write-secure-software/

State Department looks to AI for streamlined FOIA workloads – Federal News Network

Posted by timmreardon on 06/22/2023
Posted in: Uncategorized.

Jory Heckman

June 21, 2023 9:24 am

The State Department is looking at artificial intelligence and automation tools to process Freedom of Information Act (FOIA) requests more quickly and improve its level of service to requesters.

Eric Stein, the department’s deputy assistant secretary for the Office of Global Information Services and co-chairman of the Chief FOIA Officers Council’s Technology Committee, said he’s also looking at ways to use these emerging tools to improve FOIA processing governmentwide.

“I think people are afraid of AI, and maybe they should be. Maybe they shouldn’t be, but my take is, we’d like to get people comfortable with the concepts of AI and machine learning,” Stein said.

A few months ago, the Technology Committee went through each of the most recently published Chief FOIA Officer reports governmentwide, to better understand what tools agencies are using and what the committee can do to address their problems.

Agencies generally are seeing an increase in FOIA requests, and are looking for ways to stay on top of this workload. The federal government in fiscal 2022 received a record high of more than 900,000 new FOIA requests.

“What we found is that there are tools in place being pushed to their limits and a lot of assumptions about how records are captured, stored and searched. It’s still very manual,” Stein said.

Federal employees can process most FOIA requests involving unclassified records while working remotely, but Stein said FOIA professionals still need to work in the office to handle classified records.

“Because of that, there’s a balance in how do we recruit and retain people that maybe don’t want to go into the office,” Stein said.

State Department pilots AI for FOIA workloads 

The State Department received nearly 14,000 new FOIA requests in fiscal 2022. Nearly another 21,000 FOIA requests are pending, according to the department’s annual FOIA report for FY 2022.

To stay ahead of this workload, the State Department developed its e-Records Archive, which holds more than 3 billion department records.

Stein said the archive reflects the work of chief FOIA officers over the past decade identifying standards for metadata and capturing records, so that records are optimized for search.

“We developed it in such a way that the data standards are in place, so that we could use that data down the road,” Stein said. “I think a lot of agencies probably have Outlook and different emails tools … but they may not have a central archive they can search across, the way we can here at State.”
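As a rough illustration of why those shared data standards matter: when every record carries the same metadata fields, a single query can run across the entire archive. A minimal sketch with hypothetical field names (the article does not describe State’s actual schema):

```python
# Hypothetical record metadata illustrating how consistent fields make a
# central archive searchable; these field names are illustrative, not
# State's actual schema.
record = {
    "record_id": "2021-0001234",
    "origin_office": "Bureau of Consular Affairs",
    "record_type": "email",
    "date_created": "2021-03-15",
    "classification": "UNCLASSIFIED",
    "subject": "Visa appointment backlog update",
}

def matches(rec: dict, classification: str, created_after: str) -> bool:
    """One filter works across billions of records if the fields are uniform."""
    return (rec["classification"] == classification
            and rec["date_created"] >= created_after)

print(matches(record, "UNCLASSIFIED", "2021-01-01"))  # True
```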

The State Department is also experimenting with automation to improve the process of filing a FOIA request.

Stein said the department, in a pilot to improve the process of declassifying records, trained a machine learning model on years of humans reviewing and declassifying records.

The model is now as accurate as human FOIA professionals about 97-99% of the time. Stein said the pilot so far saved the department about half a year’s worth of work.
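The article does not describe the pilot’s implementation, but this kind of task is commonly framed as supervised text classification trained on past human decisions. A minimal sketch under that assumption, with invented records and labels:

```python
# Hedged sketch of a declassification-review classifier: supervised text
# classification trained on past human decisions. All data, labels, and
# modeling choices here are illustrative; the article gives no details on
# State's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: record text plus the human reviewer's decision.
records = [
    "routine embassy logistics cable about shipping schedules",
    "cable discussing sources and methods for intelligence collection",
    "unclassified press guidance on a trade announcement",
    "memo referencing ongoing covert liaison relationships",
]
decisions = ["declassify", "exempt", "declassify", "exempt"]

# TF-IDF features feeding logistic regression: a common baseline for
# document-review tasks.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(records, decisions)

# Low-confidence predictions would be routed to a human reviewer.
proba = model.predict_proba(["cable about visa processing backlogs"])[0]
print(dict(zip(model.classes_, proba.round(2))))
```

An accuracy figure like the 97-99% reported above would come from comparing the model’s predictions against held-out human decisions.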

The State Department, he added, is also looking at ways AI could help it use information already released under FOIA to support additional incoming requests.

“Right now, it’s more like you do a search and it looks for the key term or this or that. But making more sophisticated connections among information, it might give you what you need, or at least help you to scope your request, in a way, because we do want to get information to people as quickly as possible, [and] a lot of requests are very broad,” he said.

The department is also considering ways to use AI to help it refine records searches.

“If you look for a very specific term, you get a result. But if that term has the same letters in it as other words, you can end up getting like a million potentially responsive records, when really, you wanted a very narrow, specific thing,” Stein said. “Maybe people colloquially use a different term for something. The machine learning tool could help narrow those results to get more timely responses.”
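Stein is describing the difference between literal key-term matching and semantic retrieval. A hedged sketch of the latter using open-source sentence embeddings; the library and model named here are assumptions for illustration, not tools the department says it uses:

```python
# Illustrative semantic search with sentence embeddings, in contrast to the
# key-term matching Stein describes. The sentence-transformers library and
# model name are assumptions for this sketch, not State's actual tooling.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Cable on passport issuance delays at overseas posts",
    "Memo about consular appointment backlogs",
    "Report on agricultural tariff negotiations",
]
query = "slow visa processing"

# Normalized embeddings make the dot product a cosine similarity.
doc_vecs = model.encode(documents, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# Rank by meaning: "consular appointment backlogs" can surface even though
# it shares no keywords with the query.
scores = doc_vecs @ query_vec
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```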

While automation is showing promise in some FOIA pilots, Stein said the federal government still has a lot more work left to do to understand how best to use this emerging technology.

“I think some people are going to fail at some of this work, and that’s OK, in my opinion, as long as we learn from it, and we can save time by sharing here’s what works, what didn’t work,” Stein said. “Which is also why we’re so proud of the machine-learning pilot that did work, because we’re sharing this work with other agencies.”

The State Department also recently soft-launched its new online request platform, which allows users to track the status of their FOIA requests. Stein said the new platform is meant to reduce the number of calls the department’s FOIA office receives, asking for this information.

“We’re rethinking our online experience to help people find information they want,” he said.

The new platform also supports the department’s “release to one, release to all” policy of posting documents released under FOIA to its virtual reading room. In fiscal 2022, the department posted over 6,200 records online after releasing them to requesters.

Stein said FOIA professionals under this policy are trying to post as many documents online as possible each month, but are looking at ways to streamline this workload through automation.

“A lot of it just comes down to that’s still a manual process. How do we automate it? How do we do a better job of getting information out? It seems to be working pretty well, but there’s still a long way to go.”

Demystifying AI for agencies 

One of the barriers to adoption is getting more of the FOIA community to understand the opportunities and limits of what AI can do.

Stein said the Technology Committee held an “AI 101 course” a few years ago, in order for more FOIA professionals to develop a common understanding of this technology.

“People just assume everyone knows what artificial intelligence is. It is, on one hand, complicated, and on the other kind of easy to understand if you break it down,” Stein said. “If you’re coming from a place where you think we just do a Google-like search across all records at every federal agency, that’s just not where we’re at. And that changed the whole discussion.”

The Chief FOIA Officers Council also held a “NextGen FOIA Tech Showcase” in February 2022 to identify technologies that could help agencies process their FOIA requests more easily.

The showcase gave agencies an opportunity to learn more about AI, machine learning and tools to improve the customer experience of filing FOIA requests.

Stein said the showcase helped break down some barriers and helped agencies understand the tools the private sector can offer. But some agencies have unique considerations when it comes to FOIA processing, and AI tools may not be the best way to address those challenges.

“If you’re an agency that gets thousands of requests annually for a specific form, you may not even need AI and machine learning. You may just need a simple tool or an application that could redact certain boxes,” Stein said.

The Technology Committee is looking at collaborative interagency platforms, in an effort to help agencies with limited IT budgets optimize FOIA processing.

“Maybe a couple of agencies could come together, thinking in a different way, in new ways. [It’s] not just one agency working on it, but several agencies working together, especially those that have high volumes of requests. We could really see some efficiencies through a shared platform and ways to manage information well and securely too,” Stein said.

Article link: https://federalnewsnetwork.com/federal-insights/2023/06/state-department-looks-to-ai-for-streamlined-foia-workloads/

Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge – JAMA Network

Posted by timmreardon on 06/21/2023
Posted in: Uncategorized.

Research Letter June 15, 2023

Zahir Kanjee, MD, MPH; Byron Crowe, MD; Adam Rodman, MD, MPH


JAMA. Published online June 15, 2023. doi:10.1001/jama.2023.8288

Recent advances in artificial intelligence (AI) have led to generative models capable of accurate and detailed text-based responses to written prompts (“chats”). These models score highly on standardized medical examinations.1 Less is known about their performance in clinical applications like complex diagnostic reasoning. We assessed the accuracy of one such model (Generative Pre-trained Transformer 4 [GPT-4]) in a series of diagnostically difficult cases.

Methods

We used New England Journal of Medicine clinicopathologic conferences. These conferences are challenging medical cases with a final pathological diagnosis that are used for educational purposes; they have been used to evaluate differential diagnosis generators since the 1950s.2–4

We used the first 7 case conferences from 2023 to iteratively develop a standard chat prompt (eAppendix in Supplement 1) that explained the general conference structure and instructed the model to provide a differential diagnosis ranked by probability. We copied each case published from January 2021 to December 2022, up to but not including the discussant’s initial response and differential diagnosis discussion, and pasted it along with our prompt into the model. We excluded cases that were not diagnostic dilemmas (such as cases on management reasoning), as determined by consensus of Z.K. and A.R., or that were too long to run as a single chat. We chose recent cases because most of the model’s training data ends in September 2021. Each case, including the cases used to develop the prompt, was run in independent chats to prevent the model applying any “learning” to subsequent cases.

Our prespecified primary outcome was whether the model’s top diagnosis matched the final case diagnosis. Prespecified secondary outcomes were the presence of the final diagnosis in the model’s differential, differential length, and differential quality score using a previously published ordinal 5-point rating system based on accuracy and usefulness (in which a score of 5 is given for a differential including the exact diagnosis and a score of 0 is given when no diagnoses are close).2 All cases were independently scored by Z.K. and B.C., with disagreements adjudicated by A.R. Crosstabs and descriptive statistics were generated with Excel (Microsoft); a Cohen κ was calculated to determine interrater reliability using SPSS version 25 (IBM).
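For readers unfamiliar with the interrater statistic, a minimal sketch of the Cohen κ calculation on hypothetical quality scores (the study’s actual rating data are not reproduced here):

```python
# Minimal sketch of the interrater-reliability calculation described above.
# The two raters' 0-5 quality scores below are hypothetical, not the study's data.
from sklearn.metrics import cohen_kappa_score

rater_1 = [5, 3, 0, 5, 4, 2, 5, 1]
rater_2 = [5, 3, 1, 5, 4, 2, 4, 1]

# Kappa corrects raw percent agreement for agreement expected by chance;
# values of roughly 0.41-0.60 are conventionally read as "moderate".
print(round(cohen_kappa_score(rater_1, rater_2), 2))
```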

Results

Of 80 cases, 10 were excluded (4 were not diagnostic dilemmas; 6 were deleted for length). The 2 primary scorers agreed on 66% of scores (46/70; κ = 0.57 [moderate agreement]). The AI model’s top diagnosis agreed with the final diagnosis in 39% (27/70) of cases. In 64% of cases (45/70), the model included the final diagnosis in its differential (Table). Mean differential length was 9.0 (SD, 1.4) diagnoses. When the AI model provided the correct diagnosis in its differential, the mean rank of the diagnosis was 2.5 (SD, 2.5). The median differential quality score was 5 (IQR, 3-5); the mean was 4.2 (SD, 1.3) (Figure).

Discussion

A generative AI model provided the correct diagnosis in its differential in 64% of challenging cases and as its top diagnosis in 39%. The finding compares favorably with existing differential diagnosis generators. A 2022 study evaluating the performance of 2 such models also using New England Journal of Medicine clinicopathological case conferences found that they identified the correct diagnosis in 58% to 68% of cases3; the measure of quality was a simple dichotomy of useful vs not useful. GPT-4 provided a numerically superior mean differential quality score compared with an earlier version of one of these differential diagnosis generators (4.2 vs 3.8).2

Study limitations include some subjectivity in the outcome measure, which was mitigated with a standardized approach used in similar diagnostics literature. In some cases, important diagnostic information was not included in the AI prompt due to protocol limitations, likely leading to an underestimation of the model’s capabilities. Also, the agreement on the quality score between scorers was moderate.

Generative AI is a promising adjunct to human cognition in diagnosis. The model evaluated in this study, similar to some other modern differential diagnosis generators, is a diagnostic “black box”; future research should investigate potential biases and diagnostic blind spots of generative AI models. Clinicopathologic conferences are best understood as diagnostic puzzles; once privacy and confidentiality concerns are addressed, studies should assess performance with data from real-world patient encounters.5

Section Editors: Jody W. Zylke, MD, Deputy Editor; Kristin Walter, MD, Senior Editor.


Article Information

Accepted for Publication: April 28, 2023.

Published Online: June 15, 2023. doi:10.1001/jama.2023.8288

Corresponding Author: Adam Rodman, MD, MPH, Department of Medicine, Beth Israel Deaconess Medical Center, 330 Brookline Ave, W/SPAN-2, Boston, MA 02215 (arodman@bidmc.harvard.edu).

Author Contributions: Drs Kanjee and Rodman had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Article link: https://jamanetwork.com/journals/jama/fullarticle/2806457?

Five big takeaways from Europe’s AI Act – MIT Technology Review

Posted by timmreardon on 06/20/2023
Posted in: Uncategorized.


The AI Act vote passed with an overwhelming majority, but the final version is likely to look a bit different

By Tate Ryan-Mosley June 19, 2023

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

It was a big week in tech policy in Europe with the European Parliament’s vote to approve its draft rules for the AI Act on the same day EU lawmakers filed a new antitrust lawsuit against Google.

The AI Act vote passed with an overwhelming majority, and has been heralded as one of the world’s most important developments in AI regulation. The European Parliament’s president, Roberta Metsola, described it as “legislation that will no doubt be setting the global standard for years to come.” 

Don’t hold your breath for any immediate clarity, though. The European system is a bit complicated. Next, members of the European Parliament will have to thrash out details with the Council of the European Union and the EU’s executive arm, the European Commission, before the draft rules become legislation. The final legislation will be a compromise between three different drafts from the three institutions, which vary a lot. It will likely take around two years before the laws are actually implemented.

What Wednesday’s vote accomplished was to approve the European Parliament’s position in the upcoming final negotiations. Structured similarly to the EU’s Digital Services Act, a legal framework for online platforms, the AI Act takes a “risk-based approach” by introducing restrictions based on how dangerous lawmakers predict an AI application could be. Businesses will also have to submit their own risk assessments about their use of AI. 

Some applications of AI will be banned entirely if lawmakers consider the risk “unacceptable,” while technologies deemed “high risk” will have new limitations on their use and requirements around transparency. 

Here are some of the major implications:

  1. Ban on emotion-recognition AI. The European Parliament’s draft text bans the use of AI that attempts to recognize people’s emotions in policing, schools, and workplaces. Makers of emotion-recognition software claim that AI is able to determine when a student is not understanding certain material, or when a driver of a car might be falling asleep. The use of AI to conduct facial detection and analysis has been criticized for inaccuracy and bias, but it has not been banned in the draft text from the other two institutions, suggesting there’s a political fight to come.
  2. Ban on real-time biometrics and predictive policing in public spaces. This will be a major legislative battle, because the various EU bodies will have to sort out whether, and how, the ban is enforced in law. Policing groups are not in favor of a ban on real-time biometric technologies, which they say are necessary for modern policing. Some countries, like France, are actually planning to increase their use of facial recognition. 
  3. Ban on social scoring. Social scoring by public agencies, or the practice of using data about people’s social behavior to make generalizations and profiles, would be outlawed. That said, the outlook on social scoring, commonly associated with China and other authoritarian governments, isn’t really as simple as it may seem. The practice of using social behavior data to evaluate people is common in doling out mortgages and setting insurance rates, as well as in hiring and advertising. 
  4. New restrictions for gen AI. This draft is the first to propose ways to regulate generative AI, and ban the use of any copyrighted material in the training set of large language models like OpenAI’s GPT-4. OpenAI has already come under the scrutiny of European lawmakers for concerns about data privacy and copyright. The draft bill also requires that AI generated content be labeled as such. That said, the European Parliament now has to sell its policy to the European Commission and individual countries, which are likely to face lobbying pressure from the tech industry.
  5. New restrictions on recommendation algorithms on social media. The new draft assigns recommender systems to a “high risk” category, which is an escalation from the other proposed bills. This means that if it passes, recommender systems on social media platforms will be subject to much more scrutiny about how they work, and tech companies could be more liable for the impact of user-generated content.

The risks of AI as described by Margrethe Vestager, executive vice president of the EU Commission, are widespread. She has emphasized concerns about the future of trust in information, vulnerability to social manipulation by bad actors, and mass surveillance. 

“If we end up in a situation where we believe nothing, then we have undermined our society completely,” Vestager told reporters on Wednesday.

What I am reading this week

  • A Russian soldier surrendered to a Ukrainian assault drone, according to video footage published by the Wall Street Journal. The surrender took place back in May in the eastern city of Bakhmut, Ukraine. Upon seeing the soldier’s plea via video, the drone operator decided to spare his life, in accordance with international law. Drones have been critical in the war, and the surrender is a fascinating look at the future of warfare.
  • Many Redditors are protesting changes to the site’s API that would eliminate or reduce the function of third-party apps and tools many communities use. In protest, those communities have “gone private,” which means that the pages are no longer publicly accessible. Reddit is known for the power it gives to its user base, but the company may now be regretting that, according to Casey Newton’s sharp assessment. 
  • Contract workers who trained Google’s large language model, Bard, say they were fired after raising concerns about their working conditions and safety issues with the AI itself. The contractors say they were forced to meet unreasonable deadlines, which led to concerns about accuracy. Google says the responsibility lies with Appen, the contract agency employing the workers. If history tells us anything, there will be a human cost in the race to dominate generative AI. 

What I learned this week

This week, Human Rights Watch released an in-depth report about an algorithm used to dole out welfare benefits in Jordan. The group found some major issues with the algorithm, which was funded by the World Bank, and says the system was based on incorrect and oversimplified assumptions about poverty. The report’s authors also called out the lack of transparency and cautioned against similar projects run by the World Bank. I wrote a short story about the findings.

Meanwhile, the trend toward using algorithms in government services is growing. Elizabeth Renieris, author of Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse, wrote to me about the report, and emphasized the impact these sort of systems will have going forward: “As the process to access benefits becomes digital by default, these benefits become even less likely to reach those who need them the most and only deepen the digital divide. This is a prime example of how expansive automation can directly and negatively impact people, and is the AI risk conversation that we should be focused on now.”

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/06/19/1075063/five-big-takeaways-from-europes-ai-act/amp/

VA Looks to Implement New Risk Assessment Framework – MeriTalk

Posted by timmreardon on 06/19/2023
Posted in: Uncategorized.

BY: GRACE DILLE JUN 16, 2023 10:15 AM

The Department of Veterans Affairs (VA) is looking to implement a new risk assessment framework that will bring standardization and consistency to authorization decisions.

The move comes in response to a repeat recommendation from the VA Office of Inspector General (OIG). In the VA OIG’s Federal Information Security Modernization Act Audit for Fiscal Year 2022, the OIG made 26 recommendations for the VA to improve its information security program – the same number of recommendations from fiscal year (FY) 2021.

Despite the VA’s efforts to close the recommendations, the OIG said some have been repeated for multiple years.

Nevertheless, Kurt DelBene, VA’s chief information officer (CIO) and assistant secretary for information and technology, pledged his commitment to addressing these recommendations.

Specifically, the OIG recommended that DelBene consistently implement an improved continuous monitoring program in accordance with the National Institute of Standards and Technology (NIST) Risk Management Framework. The OIG called on the CIO to implement an independent security control assessment process to “evaluate the effectiveness of security controls prior to granting authorization decisions.”

“The assistant secretary reported that the Office of Information Security will implement a new assessment framework, which brings standardization and consistency to the Authorizing Official (AO) reviews and aligns with the NIST framework,” the report says. “To improve the tracking process further, the development of enterprise dashboards to bring visibility to executive leadership of those critical systems that are not meeting cyber security standards will continue.”

DelBene explained that the scale of VA systems is quite large – nearly 1,000 VA systems require an Authority to Operate (ATO), which he said would benefit from an independent control assessment.

However, the CIO noted that “the resources and costs to do so for all our systems are a barrier.”

“The varying risks for the more than 1,000 systems also suggests that we take a more balanced approach, leveraging internal resources for lower risk systems,” DelBene said. “VA OIT will implement specific policy changes that incorporate a prioritization model, based on risk, to assess information security controls in a prudent and rational way. Improving our capacity to conduct independent assessments for our highest risk systems will remain a high priority for OIT.”

The VA’s target completion date to complete this recommendation is Sept. 30.

Article link: https://www.meritalk.com/articles/va-looks-to-implement-new-risk-assessment-framework/

It’s time to talk about the real AI risks – MIT Technology Review

Posted by timmreardon on 06/19/2023
Posted in: Uncategorized.


Experts at RightsCon want us to focus less on existential threats, and more on the harms here and now.

By Tate Ryan-Mosley June 12, 2023

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

This week, I tuned into a bunch of sessions at RightsCon while recovering. The event is the world’s biggest digital rights conference, and after several years of only virtual sessions, the top internet ethicists, activists, and policymakers were back in person in Costa Rica.

Unsurprisingly, everyone was talking about AI and the recent rush to deploy large language models. Ahead of the conference, the United Nations put out a statement, encouraging RightsCon attendees to focus on AI oversight and transparency.

I was surprised, however, by how different the conversations about the risks of generative AI were at RightsCon from all the warnings from big Silicon Valley voices that I’ve been reading in the news.

Throughout the last few weeks, tech luminaries like OpenAI CEO Sam Altman, ex-Googler Geoff Hinton, top AI researcher Yoshua Bengio, Elon Musk, and many others have been calling for regulation and urgent action to address the “existential risks”—even including extinction—that AI poses to humanity. 

Certainly, the rapid deployment of large language models without risk assessments, disclosures about training data and processes, or seemingly much attention paid to how the tech could be misused is concerning. But speakers in several sessions at RightsCon reiterated that this AI gold rush is a product of company profit-seeking, not necessarily regulatory ineptitude or technological inevitability.

In the very first session, Gideon Lichfield, the top editor at Wired (and the ex–editor in chief of Tech Review), and Urvashi Aneja, founder of the Digital Futures Lab, went toe to toe with Google’s Kent Walker.

“Satya Nadella of Microsoft said he wanted to make Google dance. And Google danced,” said Lichfield. “We are now, all of us, jumping into the void holding our noses because these two companies are out there trying to beat each other.” Walker, in response, emphasized the social benefits that advances in artificial intelligence could bring in areas like drug discovery, and restated Google’s commitment to human rights. 

The following day, AI researcher Timnit Gebru directly addressed the talk of existential risks posed by AI: “Ascribing agency to a tool is a mistake, and that is a diversion tactic. And if you see who talks like that, it’s literally the same people who have poured billions of dollars into these companies.”

She said, “Just a few months ago, Geoff Hinton was talking about GPT-4 and how it’s the world’s butterfly. Oh, it’s like a caterpillar that takes data and then flies into a beautiful butterfly, and now all of a sudden it’s an existential risk. I mean, why are people taking these people seriously?”

Frustrated with the narratives around AI, experts like Human Rights Watch’s tech and human rights director, Frederike Kaltheuner, suggest grounding ourselves in the risks we already know plague AI rather than speculating about what might come.

And there are some clear, well-documented harms posed by the use of AI. They include:

  • Increased and amplified misinformation. Recommendation algorithms on social media platforms like Instagram, Twitter, and YouTube have been shown to prioritize extreme and emotionally compelling content, regardless of accuracy. LLMs contribute to this problem by producing convincing misinformation known as “hallucinations.” (More on that below.)
  • Biased training data and outputs. AI models tend to be trained on biased data sets, which can lead to biased outputs. That can reinforce existing social inequities, as in the case of algorithms that discriminate when assigning people risk scores for committing welfare fraud, or facial recognition systems known to be less accurate on darker-skinned women than white men. Instances of ChatGPT spewing racist content have also been documented.
  • Erosion of user privacy. Training AI models requires massive amounts of data, which is often scraped from the web or purchased, raising questions about consent and privacy. Companies that developed large language models like ChatGPT and Bard have not yet released much information about the data sets used to train them, though they certainly contain a lot of data from the internet.

Kaltheuner says she’s especially concerned generative AI chatbots will be deployed in risky contexts such as mental health therapy: “I’m worried about absolutely reckless use cases of generative AI for things that the technology is simply not designed for or fit for purpose.” 

Gebru reiterated concerns about the environmental impacts resulting from the large amounts of computing power required to run sophisticated large language models. (She says she was fired from Google for raising these and other concerns in internal research.) Moderators of ChatGPT, who work for low wages, have also experienced PTSD in their efforts to make model outputs less toxic, she noted. 

Regarding concerns about humanity’s future, Kaltheuner asks “Whose extinction? Extinction of the entire human race? We are already seeing people who are historically marginalized being harmed at the moment. That’s why I find it a bit cynical.”

What else I’m reading

  • US government agencies are deploying GPT-4, according to an announcement from Microsoft reported by Bloomberg. OpenAI might want regulation for its chatbot, but in the meantime, it also wants to sell it to the US government.
  • ChatGPT’s hallucination problem might not be fixable. According to researchers at MIT, large language models get more accurate when they debate each other, but factual accuracy is not built into their capacity, as broken down in this really handy story from the Washington Post. If hallucinations are unfixable, we may only be able to reliably use tools like ChatGPT in limited situations. 
  • According to an investigation by the Wall Street Journal, Stanford University, and the University of Massachusetts, Amherst, Instagram has been hosting large networks of accounts posting child sexual abuse content. The platform responded by forming a task force to investigate the problem. It’s pretty shocking that such a significant problem could go unnoticed by the platform’s content moderators and automated moderation algorithms.

What I learned this week

A new report by the South Korea–based human rights group PSCORE details the days-long application process required to access the internet in North Korea. Just a few dozen families connected to Kim Jong-Un have unrestricted access to the internet, and only a “few thousand” government employees, researchers, and students can access a version that is subject to heavy surveillance. As Matt Burgess reports in Wired, Russia and China likely supply North Korea with its highly controlled web infrastructure.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/06/12/1074449/real-ai-risks/amp/

Quantum computers could overtake classical ones within 2 years, IBM ‘benchmark’ experiment shows

Posted by timmreardon on 06/18/2023
Posted in: Uncategorized.

By Tia Ghose

A new experiment by IBM shows that quantum computers could outperform classical digital computers at practical tasks within the next two years.

Quantum computers could beat classical ones at answering practical questions within two years, a new experiment from IBM shows. The demonstration hints that true quantum supremacy, in which quantum computers overtake classical digital ones, could be here surprisingly soon.

“These machines are coming,” Sabrina Maniscalco, CEO of Helsinki-based quantum-computing startup Algorithmiq, told Nature News.

In the new study, described Wednesday (June 14) in the journal Nature, scientists used IBM’s quantum computer, known as Eagle, to simulate the magnetic properties of a real material faster than a classical computer could. It achieved this feat because it used a special error-mitigating process that compensated for noise, a fundamental weakness of quantum computers.

Traditional silicon-chip-based computers rely on “bits” that can take just one of two values: 0 or 1. 

By contrast, quantum computers employ quantum bits, or qubits, that can take on many states at once. Qubits rely on quantum phenomena such as superposition, in which a particle can exist in multiple states simultaneously, and on quantum entanglement, in which the states of distant particles can be linked so that changing one instantaneously changes the other. In theory, this allows qubits to perform in parallel, and much faster, the calculations that digital bits would do slowly and in sequence.
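A toy state-vector calculation makes the contrast concrete. This is the standard textbook Bell-state construction, not anything specific to IBM’s hardware:

```python
# Toy two-qubit demo of superposition and entanglement with plain numpy;
# a textbook construction, not IBM-specific code.
import numpy as np

zero = np.array([1, 0], dtype=complex)  # one qubit is a length-2 complex vector

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard puts qubit 0 into superposition; CNOT entangles it with qubit 1,
# producing the Bell state (|00> + |11>)/sqrt(2).
state = CNOT @ np.kron(H @ zero, zero)

# Measuring one qubit instantly fixes the other: only |00> and |11> have
# nonzero probability.
print(np.round(np.abs(state) ** 2, 2))  # [0.5 0.  0.  0.5]
```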

But historically, quantum computers have had an Achilles’ heel: The quantum states of qubits are incredibly delicate, and even the tiniest disruption from the outside environment can mess with their states — and thereby the information they carry — forever. That makes quantum computers very error-prone or “noisy.”

In the new proof-of-principle experiment, the 127-qubit Eagle quantum computer, which uses qubits built on superconducting circuits, calculated the complete magnetic state of a two-dimensional solid. The researchers then carefully measured the noise produced by each of the qubits. It turned out that certain factors, such as defects in the superconducting material, could reliably predict the noise generated in each qubit. The team then used these predictions to model what the results would have looked like without that noise, Nature News reported.
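The approach described — predicting the noise, then modeling the noise-free result — is a form of zero-noise extrapolation: measure an observable at amplified noise strengths, fit the decay, and evaluate the fit at zero noise. A toy numerical sketch of the extrapolation step, with synthetic numbers that are not data from the IBM experiment:

```python
# Toy zero-noise extrapolation: measure an observable at amplified noise
# strengths, fit the decay, and evaluate the fit at zero noise. The values
# below are synthetic, not data from the IBM experiment.
import numpy as np

noise_levels = np.array([1.0, 1.5, 2.0])  # deliberate noise amplification factors
measured = np.array([0.72, 0.61, 0.52])   # synthetic noisy expectation values

# Assume exponential decay of the signal with noise; fit log-linearly and
# evaluate at noise level 0 to estimate the noiseless value.
slope, intercept = np.polyfit(noise_levels, np.log(measured), 1)
zero_noise_estimate = np.exp(intercept)

print(round(zero_noise_estimate, 3))  # estimated noise-free observable
```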

Claims of quantum supremacy have surfaced before: In 2019, Google scientists claimed that the company’s quantum computer, known as Sycamore, had solved a problem in 200 seconds that an ordinary computer would take 10,000 years to crack. But the problem it solved — essentially spitting out a huge list of random numbers and then checking their accuracy — had no practical use.

By contrast, the new IBM demonstration applies to a real — albeit highly simplified — physical problem. 

“It makes you optimistic that this will work in other systems and more complicated algorithms,” John Martinis, a physicist at the University of California, Santa Barbara, who achieved the 2019 Google result, told Nature News.

Article link: https://www.livescience.com/technology/computing/quantum-computers-could-overtake-classical-ones-within-2-years-ibm-benchmark-experiment-shows

You can read more about the quantum computing milestone at Nature News.

Surgeon General Sounds the Alarm on Social Media Use and Youth Mental Health Crisis – JAMA Network

Posted by timmreardon on 06/18/2023
Posted in: Uncategorized.

June 14, 2023. Jennifer Abbasi



JAMA. Published online June 14, 2023. doi:10.1001/jama.2023.10262


A year and a half after US Surgeon General Vivek Murthy, MD, MBA, called attention to increasing symptoms of depression, anxiety, and suicidal ideation among children and adolescents, the Nation’s Doctor is now sounding the alarm on a likely driver of the youth mental health crisis: social media use.

“The most common question parents ask me is, ‘Is social media safe for my kids?’ The answer is that we don’t have enough evidence to say it’s safe, and in fact, there is growing evidence that social media use is associated with harm to young people’s mental health,” Murthy said in a recent statement announcing a new US Surgeon General’s Advisory.

Such advisories are reserved for major public health issues that warrant more awareness and action. Released in late May, the new advisory highlighted increasing concerns about social media’s effects on the mental health of the nation’s youth. It pointed out that nearly all teenagers report using a social media platform and that more than a third say they use at least 1 platform “almost constantly.” Younger children are also active online. Despite a minimum age requirement of 13 years on most US social media platforms, nearly 40% of 8- to 12-year-olds report social media use.

“Children are exposed to harmful content on social media, ranging from violent and sexual content, to bullying and harassment,” Murthy cautioned in the announcement. “And for too many children, social media use is compromising their sleep and valuable in-person time with family and friends. We are in the middle of a national youth mental health crisis, and I am concerned that social media is an important driver of that crisis—one that we must urgently address.”

Although evidence is growing that problematic social media use can negatively affect youth mental health and well-being, the advisory also emphasized that more research is needed to understand the “full scope and scale” of effects on young people—both negative and positive. In the meantime, the burden of protecting children and adolescents should fall not only to their families, but also to technology companies, policy makers, and researchers, the advisory said.

A Wholesale Shift

Leaders of several major medical organizations, including the American Academy of Pediatrics, the American Academy of Family Physicians, and the American Psychiatric Association, welcomed the advisory. “As physicians, we see firsthand the impact of social media, particularly during adolescence—a critical period of brain development,” Jack Resneck Jr, MD, president of the American Medical Association, the publisher of JAMA, said in the announcement.

In emails with JAMA, child and adolescent mental health experts also agreed with the Surgeon General’s message.

“This advisory maps onto what I am concerned about as a clinician, as a researcher, and as a parent,” said Jeremy Veenstra-VanderWeele, MD, a professor of child and adolescent psychiatry at the Columbia University Irving Medical Center and a JAMA Psychiatry editorial board member.

“We have seen a wholesale shift in how youth are interacting with each other,” he continued. “We see what we think are the impacts of this shift, but we need much more research to understand what we are seeing.” Circumstantially, he said, clinicians have observed “a marked rise in youth anxiety and depression over the same period of time when social media has become so widely used.”

Veenstra-VanderWeele and other experts said the advisory struck the right balance between highlighting the potential harms and benefits of social media for youth. Social media use affects different children in different ways, based on their individual characteristics, as well as on cultural, historical, and socioeconomic factors, the advisory noted. For some young people, social media can offer positive connections and support they may not have in their homes, schools, or neighborhoods. These benefits can be especially important for youth who belong to marginalized groups.

But there can be substantial downsides. “It is staggering how much time youth spend on social media,” Veenstra-VanderWeele said. “It really seems all-consuming for some users, with incredible distress if it is taken away.”

In fact, according to the advisory, teenagers in the University of Michigan’s ongoing Monitoring the Future survey spent an average of 3.5 hours on social media per day in 2021. One in 4 teens reported spending 5 or more hours on the platforms daily. Research published in JAMA Psychiatry found that adolescents who spent more than 3 hours on social media per day had an increased risk of poor mental health outcomes such as depression and anxiety symptoms. And perhaps unsurprisingly, excessive social media use has also been strongly tied to sleep problems in youth.

The hours spent scrolling aren’t the only concern, the advisory noted. There’s also the exposure to harmful messages and behaviors, cyberbullying, and hate-based content. These exposures appear to be taking a toll on the nation’s youth. Nearly half of teenagers—46%—said social media made them feel worse about their body image in a 2022 survey conducted by the Boston Children’s Hospital Digital Wellness Lab. Girls appear to be especially vulnerable to comparing themselves with others on social media, which has been linked with body dissatisfaction, eating disorders, and symptoms of depression.

The advisory acknowledged that the interplay between social media use and youth mental health may be bidirectional and noted that untangling these complex relationships will require more data and transparency than technology companies have been willing to provide so far. It urged these companies to share their data with independent researchers.

Technology companies should also tailor their platforms for children’s developmental capabilities, the American Psychological Association (APA) said in its own health advisory, released in May. Features designed to maximize user engagement—such as displayed “likes,” autoplay content, and infinite scrolling—may not be appropriate for kids.

The Surgeon General’s advisory also outlined actions policy makers, researchers, parents and caregivers, and young people themselves can take immediately. Policy makers can, for example, fund additional research, better protect kids’ privacy, and work to strengthen safety standards, while researchers can prioritize studies that inform those standards. Parents and caregivers, the advisory recommended, can establish technology-free zones in the house to protect sleep and encourage in-person socializing. And young people can aim to adopt healthy practices, such as limiting their social media time.

The Physician’s Role

For Kara Bagot, MD, a New York–based child and adolescent psychiatrist, the advisory is a positive step forward. But she said it falls short in providing a full mitigation plan with resources allocated to address the recommendations.

“This is particularly important as these recommendations are not novel; researchers and clinicians in the field have been urging more action from technology companies, pediatrics and family medicine practitioners, and funding agencies but lack the power or resources to facilitate the changes needed,” said Bagot, who is also an editor at JAMA Psychiatry.

Going forward, “we urgently need to think about testing different approaches to evaluate and potentially mitigate the risks that may be associated with social media use,” Veenstra-VanderWeele said.

Youth should be at the center of addressing the issue of their own social media use, advised Tammy Chang, MD, MPH, an associate professor in the Department of Family Medicine at the University of Michigan and director of the MyVoice National Poll of Youth. The poll has found that many young people already know that social media use has negative impacts and are trying to modulate their use.

“When youth have input and buy-in on initiatives meant to change their behaviors, those initiatives are more likely to succeed,” Chang said, noting that this approach could be critical to the advisory’s success. “Meaningfully partnering with youth now could change the energy around the advisory from something that adults are worried about for youth to a partnership between youth and experts to address something everyone is working on together.”

Pediatricians and family physicians also have an important role. Veenstra-VanderWeele said he believes primary care physicians have a responsibility to discuss safe and balanced use of social media with youth and their parents. These conversations should include setting limits, particularly around sleep.

“In discussions with physicians, youth are often able to describe how they would like to change their use of social media,” he said. “Discussing their desire to change their social media use together with their parents allows collaborative limit-setting, which is an ideal way for youth to align with their parents, rather than generating conflict.”

In their one-on-one discussions with young patients, physicians should also ask about exposure to harmful content or potentially dangerous online interactions, “in the same way that we ask about substance use or sexual activity,” Veenstra-VanderWeele said.

Health professionals can educate families about social media settings that can be adjusted to better customize the platforms for children, such as hiding “like” and “view” counts and restricting time on Instagram, turning off comments and scheduling reminders for “screen time breaks” on TikTok, and toggling off “autoplay” on YouTube.

Physicians can also discuss parents’ screen-related behaviors and how these habits may be affecting their children. “Youth are active observers of their caregivers,” Bagot said. “As such, modeling balanced online behaviors for one’s children is important. Parents need to be educated on [this] and also on what balanced, appropriate behaviors are.”

Ultimately, experts say, efforts to protect youth well-being should not discount the positive interactions that can happen when kids connect on social media. Veenstra-VanderWeele has heard from teens about how important their online community is to them, particularly for those who may not fit in easily with peers at school or in their neighborhood. “We need to figure out how to harness the potential benefits of online social connections while decreasing the potential harms,” he said.

Article link: https://jamanetwork.com/journals/jama/fullarticle/2806277?


Article Information

Published Online: June 14, 2023. doi:10.1001/jama.2023.10262

Conflict of Interest Disclosures: Dr Chang reported serving as a committee member of the Board on Children, Youth, and Families at the National Academies of Sciences, Engineering, and Medicine; directing MyVoice, a national poll of youth that has received internal funding from the University of Michigan. No other disclosures were reported.


Opinion | What the Pentagon Thinks About Artificial Intelligence – Politico

Posted by timmreardon on 06/16/2023
Posted in: Uncategorized.

The U.S. has committed to keeping humans in the chain of command. It’s time for China to do the same.

Opinion by KATHLEEN HICKS

06/15/2023 04:30 AM EDT

Kathleen H. Hicks is the U.S. Deputy Secretary of Defense.

Artificial intelligence may transform many aspects of the human condition, nowhere more than in the military sphere. Although many Americans may only now be focusing on AI’s potential promise and peril, the U.S. Defense Department has worked for over a decade to ensure its responsible use. The challenge now is to convince other nations, including the People’s Republic of China, to join the United States in committing to norms of responsible AI behavior.

The Pentagon first issued a responsible use policy for autonomous systems and AI in 2012. Since that time, we’ve maintained our commitment even as technology has evolved. In recent years, we’ve adopted ethical principles for using AI, and issued a responsible AI strategy and implementation pathway. This January, we also updated our original 2012 directive on autonomy in weapon systems, to help ensure we remain the global leader of not just development and deployment, but also safety.

Where the Defense Department is investing in AI, we’re doing so in areas that provide us with the most strategic benefit and capitalize on our existing advantages. We also draw a bright line when it comes to nuclear weapons. The policy of the United States is to maintain a human “in the loop” for all actions critical to informing and executing decisions by the president to initiate and terminate the use of nuclear weapons.

Although we are swiftly embedding AI in many other aspects of our mission — from battlespace awareness, cyber and reconnaissance, to logistics, force support and other back-office functions — we do so mindful of AI’s potential dangers, which we’re determined to avoid. We don’t use AI to censor, constrain, repress or disempower people. By putting our values first and playing to our strengths, the greatest of which is our people, we’ve taken a responsible approach to AI that will ensure America continues to come out ahead.

Our current level of funding for AI reflects our present needs: the latest U.S. defense budget, for fiscal year 2024, invests $1.8 billion in artificial intelligence and machine learning capabilities, to continue our progress in modernization and innovation. That will change over time as we incorporate the technology effectively into how we operate — while also staying true to the principles that make ours the world’s finest fighting force.

Even as our use of AI reflects our ethics and our democratic values, we don’t seek to control innovation. America’s vibrant innovation ecosystem is second to none because it’s powered by a free and open society of imaginative inventors, doers and problem-solvers. While that makes me choose our free-market system over China’s statist system any day of the week, it doesn’t mean the two systems cannot coexist.

Chinese diplomats have said that the PRC “takes very seriously the need to prevent and manage AI-related risks and challenges,” according to news reports. Those are good words; actions matter more. If China is indeed “ready to step up exchanges and cooperation ‘with all parties,’” the Pentagon would welcome such direct engagement.

Our commitment to values is one reason why the United States and its military have so many capable allies and partners around the world, and growing numbers of commercial technology innovators who want to work with us: because they share our values.

Such values are owned by no country or company; others are welcome to embrace them. For example, if the PRC credibly and verifiably committed to maintaining human involvement for all actions critical to informing and executing sovereign decisions to use nuclear weapons, it might find that commitment warmly received by its neighbors and others in the international community. And rightfully so.

The United States does not seek an AI arms race, or any arms race, with China, just as we do not seek conflict, either. With AI and all our capabilities, we seek only to deter aggression and defend our country, our allies and partners, and our interests.

America and China are competing to shape the future of the 21st century, technologically and otherwise. That competition is one which we intend to win — not in spite of our values, but because of them.

Article link: https://www.politico.com/news/magazine/2023/06/15/pentagon-artificial-intelligence-china-00101751

The world’s regulatory superpower is taking on a regulatory nightmare: artificial intelligence – Atlantic Council

Posted by timmreardon on 06/16/2023
Posted in: Uncategorized.

June 15, 2023 By Atlantic Council experts

The humans are still in charge—for now. The European Parliament, the legislative branch of the European Union (EU), passed a draft law on Wednesday intended to restrict and add transparency requirements to the use of artificial intelligence (AI) in the twenty-seven-member bloc. In the AI Act, lawmakers zeroed in on concerns about biometric surveillance and disclosures for generative AI such as ChatGPT. The legislation is not final. But it could have far-reaching implications since the EU’s large size and single market can affect business decisions for companies based elsewhere—a phenomenon known as “the Brussels effect.”

Below, Atlantic Council experts share their genuine intelligence by answering the pressing questions about what’s in the legislation and what’s next. 

1. What are the most significant aspects of this draft law? 

The European Parliament’s version of the AI Act would prohibit use of the technology within the EU for controversial purposes like real-time remote biometric identification in public places and predictive policing. Member state law enforcement agencies are sure to push back against aspects of these bans, since some of them are already using these technologies for public security reasons. The final version could well be more accommodating of member states’ security interests.

—Kenneth Propp is a nonresident senior fellow with the Atlantic Council’s Europe Center and former legal counselor at the US Mission to the European Union in Brussels.

The most significant aspect of the draft AI Act is that it exists and has been voted on positively by the European Parliament. This is the only serious legislative attempt to date to deal with the rapidly evolving technology of AI and specifically to address some of the anticipated risks, both due to the technology itself and to the ways people use it. For example, a government agency might use AI to identify wrongdoing among welfare recipients, but due to learned bias it misidentifies thousands of people as participating in welfare fraud (this happened in the Netherlands in 2020). Or a fake video showing a political candidate in a compromising position is released just prior to the election. Or a government uses AI to track citizens and determine whether they exhibit “disloyal” behavior.

To address these concerns, EU policymakers have designed a risk-management framework, in which higher-risk applications would receive more scrutiny. A few uses of AI—social scoring, real-time facial recognition surveillance—would be banned, but most companies deploying AI, even the higher-risk cases, would have to file extensive records on training and uses. Above all, this is a law about transparency and redress: humans should know when they are interacting with AI, and if AI makes decisions about them, they should have a right of redress to a fellow human. In the case of generative AI, such as ChatGPT, the act requires that images be marked as coming from AI and the AI developer should list the copyrighted works on which the AI trained.

Of course, the act is not yet finished. Next, there will be negotiations between parliament and the EU member states, and we can expect significant opposition to certain bans from European law enforcement institutions. Implementation will bring other challenges, especially in protecting trade secrets while examining how algorithms might steer users toward extreme views or into the hands of fraudsters. But if expectations hold, by the end of 2023 Europe will have the first substantive law on AI in the world.

—Frances Burwell is a distinguished fellow at the Atlantic Council’s Europe Center and a senior director at McLarty Associates.

There are numerous significant aspects of this law, but two and a half really stand out. The first is establishing a risk-based policy in which lawmakers identify certain uses as presenting unacceptable risk (for example, social scoring, behavioral manipulation of certain groups, and biometric identification by groups including police). Second, generative AI systems would be regulated: providers would be required to disclose any copyrighted data used to train the generative model, and any content an AI outputs would need to carry a notice or label that it was created with AI. It’s also interesting what’s included as guidance for parliament: to “ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.” This gives parliament a wide mandate under which everything from data provenance to data center energy use could be regulated under this draft law.

—Steven Tiell is a nonresident senior fellow with the Atlantic Council’s GeoTech Center. He is a strategy executive with wide technology expertise and particular depth in data ethics and responsible innovation for artificial intelligence.

2. What impact would it have on the industry?

Much as the EU’s General Data Protection Regulation (GDPR) became a globally motivating force in the business community, this law will do the same. For most companies, the burden of maintaining separate infrastructure exclusively for the EU would be far higher than the cost of compliance. And the cost (and range) of noncompliance for companies (and individuals) has risen: prohibited uses, those deemed to carry unacceptable risk, will incur a fine of up to forty million euros or 7 percent of worldwide annual turnover (total global revenue) for the preceding financial year, whichever is greater. Violations of human-rights laws or any type of discrimination perpetrated by an AI will incur fines of up to twenty million euros or 4 percent of worldwide turnover. Other noncompliance offenses, including by foundation models (again, the draft regulation covers generative AI), are subject to fines of up to ten million euros or 2 percent of worldwide annual turnover. And those supplying false, incomplete, or misleading information to regulators can be fined up to five million euros or 1 percent of worldwide annual turnover. These fines are a big stick to encourage compliance.

—Steven Tiell 
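
To make the fine arithmetic above concrete: each tier is the greater of a fixed cap or a share of the preceding year’s worldwide turnover. Below is a minimal illustrative sketch in Python; the tier figures are the ones quoted above, while the tier names and the function itself are hypothetical labels for exposition, not terms defined in the act.

    # Illustrative sketch of the draft AI Act's "whichever is greater" fine
    # tiers, using the figures quoted above. Tier names and this function
    # are our own labels, not terms from the act itself.
    TIERS = {
        "prohibited_use": (40_000_000, 0.07),        # unacceptable-risk violations
        "rights_violation": (20_000_000, 0.04),      # discrimination / human rights
        "other_noncompliance": (10_000_000, 0.02),   # incl. foundation models
        "misleading_regulators": (5_000_000, 0.01),  # false or incomplete filings
    }

    def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
        """Maximum fine in euros: the greater of the fixed cap or the
        percentage of the preceding year's worldwide turnover."""
        fixed_cap, pct = TIERS[tier]
        return max(fixed_cap, pct * worldwide_annual_turnover)

    # Example: for a firm with EUR 2 billion in turnover, a prohibited use
    # is capped at 7% of turnover (EUR 140M), above the EUR 40M floor.
    print(max_fine("prohibited_use", 2_000_000_000))  # 140000000.0

Note that the fixed cap binds only for smaller firms: past roughly 571 million euros in annual turnover, the 7 percent share in the top tier exceeds the forty-million-euro floor.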

As I wrote for Lawfare when the European Commission proposed the AI Act two years ago, the proposed AI regulation is “a direct challenge to Silicon Valley’s common view that law should leave emerging technology alone.” At the same time, though the legislation is lengthy and complex, it is far from the traditional caricature of EU measures as heavy-handed, top-down enactments. Rather, as I wrote then, the proposal “sets out a nuanced regulatory structure that bans some uses of AI, heavily regulates high-risk uses, and lightly regulates less risky AI systems.” The European Parliament has added some onerous requirements, such as a murky human-rights impact assessment of AI systems, but my earlier assessment remains generally true.

It’s also worth noting that other EU laws, such as the GDPR adopted in 2016, will have an important and still-evolving impact on the deployment of AI within EU territory. For example, earlier this week Ireland’s Data Protection Commission delayed Google’s request to deploy Bard, its AI chatbot, because the company had failed to file a data protection impact assessment, as required by the GDPR. Scrutiny of AI products by multiple European regulatory authorities employing precautionary approaches will likely mean that Europe lags in seeing some new AI products.

—Kenneth Propp

3. How might this process shape how the rest of the world regulates AI?

It will have an impact on the rest of the world, but not simply by becoming the foundation for other AI acts. Most significantly, the EU act puts certain restrictions on governmental use of AI in order to protect democracy and citizens’ fundamental rights. Authoritarian regimes will not follow this path. The AI Act is thus likely to become a marker, differentiating governments that value democracy more than technology from those that seek to use technology to control their publics.

—Frances Burwell

Major countries across the globe, from Brazil to South Korea, are in the process of developing their own AI legislation. The US Congress is slowly moving in the same direction, with a forthcoming bill being developed by Senate Majority Leader Chuck Schumer likely to have important influence. If the EU sticks to its timetable of adopting the AI Act by the end of the year, its legislation could shape other countries’ efforts significantly by virtue of being early out of the gate and comprehensive in nature. Countries more concerned with promoting AI innovation, such as the United Kingdom, may stake out a lighter-touch approach than the EU, however.

—Kenneth Propp

The world’s businesses will comply with the EU’s AI Act if they have any meaningful amount of business in the EU, and governments in the rest of the world are aware of this. Compliance with the EU’s AI Act will be table stakes. Many future regulations can be expected to mimic components of the EU’s AI Act, big and small, but where they deviate will be interesting. Expect other regulators, emboldened by these fines, to seek commensurate penalties for violations in their countries. Other countries might extend more of the auditing requirements, for example to retaining outputs from generative models. Consumer protections will also vary more from country to country. And it will be interesting to see whether countries such as the United States and the United Kingdom pivot their legislation toward being more risk-based as opposed to principles-based.

—Steven Tiell 

4. What are the chances of this becoming law, and how long will it take? 

Unlike in the United States, where congressional passage of legislation is typically the decisive step, the European Parliament’s adoption on Wednesday of the AI Act only prepares the way for a negotiation with the EU’s member states to arrive at the final text. Legislative proposals can shift substantially during such closed-door “trilogues” (so named because the European Commission as well as the Council of the European Union also participate). The institutions aim for a final result by the end of 2023, during Spain’s presidency of the Council, but legislation of this complexity and impact easily could take longer to finalize.

—Kenneth Propp

Based on this week’s vote, there are strong signals of overwhelming support for this draft law. The next step is trilogue negotiations among the parliament, the Council of the European Union, and the European Commission, and these negotiations will determine the law’s final form. The odds are strong that they will finish by the end of the year. At that point, the act is expected to take about two years to implement across EU member states, similar to what happened with the GDPR. Also as with the GDPR, it could take at least that long for member states to develop the expertise to assume their role as market regulators.

—Steven Tiell 

5. What are some alternative visions for regulating AI that we may see?

In general, we see principles-based, risk-based, and rights-based legislation. Depending on the government and the significance of the law, different approaches may be applied. The EU’s AI Act is unusual and interesting in that it started life as a principles-based approach but, through its evolution, became primarily risk-based. Draft legislation in the United States and the United Kingdom is principles-based today. Time will tell whether these governments are influenced by the EU’s approach.

—Steven Tiell

Article link: https://www.atlanticcouncil.org/blogs/new-atlanticist/the-worlds-regulatory-superpower-is-taking-on-a-regulatory-nightmare-artificial-intelligence/
