healthcarereimagined

Envisioning healthcare for the 21st century


Six Ways Quantum Computers Could Change the World – NOVA

Posted by timmreardon on 02/25/2024
Posted in: Uncategorized.

Quantum computers operate using the strange laws of quantum physics and could be used to solve complex problems traditional computers aren’t able to tackle today.

https://www.pbs.org/video/six-ways-quantum-computers-could-change-world-xhbavj/

Published: June 24, 2019

Transcript:

Onscreen: A new type of supercomputer is on the horizon. Quantum computers are able to solve complex problems quickly, thanks to quantum physics.

Rob Schoelkopf: A quantum computer is a new device for processing information that employs the unique aspects of the quantum world.

Onscreen: Ordinary computers store information in 0s and 1s, or bits. Quantum computers store information in qubits, which can exist as both 0 and 1 at once.

Rob Schoelkopf: A quantum computer can be uniquely suited for doing certain computational tasks that are otherwise intractable today. Anything where there is a needle-in-a-haystack type problem, where you are searching through a very large number of combinations, a quantum computer, with a properly designed algorithm, can explore them all at the same time.
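
For a rough sense of the scale behind that needle-in-a-haystack claim: a classical search over N unsorted possibilities needs on the order of N checks, while Grover’s quantum search algorithm needs roughly (π/4)·√N queries, a quadratic (not exponential) speedup. A minimal back-of-the-envelope sketch in Python (illustrative arithmetic only, not part of the transcript):

```python
import math

# Oracle-query counts for unstructured search over N candidates:
# classical worst case ~ N checks; Grover's algorithm ~ (pi/4) * sqrt(N).

def classical_queries(n: int) -> int:
    return n

def grover_queries(n: int) -> int:
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}  classical ~{classical_queries(n):>13,}  "
          f"Grover ~{grover_queries(n):>7,}")
```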

Marissa Giustina: I think quantum computing will change the way we do research. It will give us a different tool for asking questions about nature and that’s really exciting.

Onscreen: To outperform traditional computers, a quantum computer needs many qubits to work together. And that’s no easy task.

Marissa Giustina: One qubit does not a quantum computer make. There is a big difference between one qubit and a large array of qubits that all work together and can be controlled coherently. It can be that adding a few more qubits affects your system in more complicated ways that you didn’t anticipate.

Onscreen: Here are six ways quantum computing could change the world.

1. Enhance artificially intelligent systems

Quantum computers have the potential to create really robust AI algorithms.

2. Discover new materials

The search for high-temperature superconductors has been the holy grail of materials science. Quantum computers could help make this a reality, and vastly improve the energy grid and transportation systems.

Rob Schoelkopf: Giving chemists, for example, insight into new ways to synthesize molecules, or materials scientists hints about how they could make a better material for photocells or better batteries – these are the things we are excited about and looking forward to in the next few years.

3. Improve roads and travel

Quantum computers could predict high-traffic hours, leading to advances in navigation applications and traffic signal coordination.

4. Revolutionize cryptography

A method called quantum encryption has the potential to make messages exponentially more secure. But it also might make current methods of encryption obsolete.

5. Better predict weather

Better predictions could mean more time to evacuate ahead of catastrophic weather.

6. Create more effective drugs

Quantum computing could accelerate the rate of medical breakthroughs.

Quantum computing is still a very young field, but it could fundamentally change almost every industry.

Schoelkopf: We’re at the dawn of creating a whole new industry or paradigm for information, so this is very exciting.

Top 10 Emerging Technologies of 2023 – WEF

Posted by timmreardon on 02/25/2024
Posted in: Uncategorized.

Download PDF

The Top 10 Emerging Technologies of 2023 report, now in its 11th year, highlights the technologies set to positively impact society within the next three to five years. This comprehensive report goes beyond listing the top 10 technologies and their associated risks and opportunities. It provides a qualitative assessment of each technology’s potential impact on people, the planet, prosperity, industry and equity.

Emerging Technologies – WEF

Posted by timmreardon on 02/25/2024
Posted in: Uncategorized.

Which new technologies will make the biggest impact over the next few years?

Learn more from our latest Emerging Technologies report: https://ow.ly/7yJC50QHiie

https://www.linkedin.com/posts/world-economic-forum_which-new-technologies-will-make-the-biggest-activity-7167458387205541888-z9-e?

The EHR is a driver of burnout. That’s why IT must be at the table. – AMA

Posted by timmreardon on 02/24/2024
Posted in: Uncategorized.

Taking steps to reduce physician burnout, The Southeast Permanente Medical Group found working with IT made all the difference. Learn more.

By: Sara Berg, MS, News Editor

The Southeast Permanente Medical Group, Inc. (TSPMG) in Atlanta has seen a notable improvement in well-being since the start of burnout measurement in 2021, with the burnout rate dropping from 48% in 2022 to an encouraging 43% in 2023—significantly lower than the nationwide burnout rate of 53% reported by the AMA Organizational Biopsy®. The achievement underscores TSPMG’s commitment to addressing and alleviating the challenges associated with burnout among its clinicians and staff.

A systematic approach employed by TSPMG has not only resulted in tangible positive outcomes but also garnered bronze-level recognition from the AMA Joy in Medicine™ Health System Recognition Program.

The program is designed to guide organizations interested or already engaged in improving physician satisfaction and reducing burnout. In 2023, 72 health systems were honored for their dedication to physician well-being.

But TSPMG—a member of the AMA Health System Program that provides enterprise solutions to equip leadership, physicians and care teams with resources to help drive the future of medicine—is not new to work on addressing physician burnout.

While the medical group had a well-being committee in place for years, a recent change put IT up front in discussions about burnout.

Filling in the gap with IT

Bringing IT to the table really began with TSPMG’s previous work to improve operational inefficiencies with its “Pebbles in the Shoe” campaign. This effort set out to identify and reduce inefficiencies and documentation burdens by offering a three-week challenge period for clinicians and staff to submit ideas – “pebbles” – for improving efficiencies. From there, teams work behind the scenes to address these issues throughout the year.

In the first year of the program, 163 pebbles were submitted, “which we loved from an engagement perspective, but hated from an inefficiency perspective,” said Kerri-Lyn Kelly, senior business consultant of people and culture.

That first year, one “pebble” focused on changing the email and meeting culture. After creating a work culture workgroup, the team implemented Work Wise, which provided education, resources and unified norms around improved email, meeting culture, and efficiency.

Another pebble suggested offering a waiting list for patients to fill appointment slots created when members cancel an appointment within 24 hours. To solve this, the team launched Fast Pass, an automated waitlist that fills open spots with patients who want earlier appointments. It has improved both patient and staff satisfaction.
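
As a purely illustrative sketch, an automated waitlist of this kind can be modeled as a first-come, first-served queue that is drained whenever a slot frees up. This is not TSPMG’s actual Fast Pass implementation; the class and method names are invented:

```python
from collections import deque
from typing import Optional

class Waitlist:
    """Toy first-come, first-served waitlist for freed appointment slots."""

    def __init__(self) -> None:
        self._queue = deque()  # patient IDs waiting for an earlier slot

    def join(self, patient_id: str) -> None:
        # A patient asks to be offered earlier openings.
        self._queue.append(patient_id)

    def fill_cancellation(self, slot: str) -> Optional[str]:
        # Offer a freed slot to the longest-waiting patient, if any.
        if self._queue:
            patient = self._queue.popleft()
            print(f"Offering slot {slot} to {patient}")
            return patient
        return None

wl = Waitlist()
wl.join("patient-001")
wl.join("patient-002")
wl.fill_cancellation("2024-03-01 09:00")  # goes to patient-001
```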

“Having a campaign like Pebbles in your Shoe helps you to recognize problems and identify potential solutions quickly,” Kelly said.

“Many of the pebbles we received were related to IT—things like our EHR,” said Reneathia P. Baker, MD, a TSPMG pediatrician and associate medical director for People and Culture. “That’s when we had an ‘Aha!’ moment that we needed someone on our wellness committee who understood the complexities and could help solve those problems.

“We really needed someone to provide insight and intel into the background of how those things work,” Dr. Baker added. “We were very fortunate to add a member to the committee who has experience in that field. She is helping us understand it and work through the various steps.”

Additionally, “we have a committee member who works on what we call our Accelerated Care Transformation team who has access to IT resources, databases and metrics,” said Kelly. “They were able to help us with our AMA application and access resources we need to drive projects forward.”

How AI could help

Curious how AI can help? Read the full article here.

Reducing burnout is essential to high-quality patient care and a sustainable health system. The AMA measures and responds to physician burnout, helping drive solutions and interventions with the AMA Recovery Plan For America’s Physicians.

Article link: https://www.linkedin.com/pulse/ehr-driver-burnout-thats-why-must-table-9rkrc 

Why the digital transformation is a pillar for rebuilding trust in government – Federal News Network

Posted by timmreardon on 02/20/2024
Posted in: Uncategorized.

Rob Hankey

February 16, 2024 4:28 pm

Trust in government is at an all-time low, according to the 2023 Edelman Trust Barometer – an annual global survey that polls tens of thousands of people every year. The most recent edition revealed that people trust for-profit businesses and non-governmental organizations (NGOs) more than they trust the government.

That’s true for governments around the world. It’s not an isolated finding either. Pew Research found Americans’ trust in government has been trending down for the last 20 years.

Pew also revealed some important nuances. For example, roughly 70% of those surveyed “say that the federal government is doing a very or somewhat good job in responding to natural disasters.” However, on the flip side, fewer than 1 in 10 citizens say it’s responsive to the needs of ordinary Americans when it comes to routine interactions.

This is exactly where new guidance from the Office of Management and Budget suggests technology could fill an important role in rebuilding trust.


Trust as a form of currency 

Last fall, OMB published a memorandum laying out new guidance for advancing the federal government’s digital transformation. The guidance is intended to help agencies implement a bipartisan law passed in 2018: the 21st Century IDEA Act. Progress had stalled due to “a lack of coordination and consensus” among controlling agencies.

OMB’s guidance eliminates the confusion. It details specific action items agencies must take, along with deadlines and metrics that must be reported to OMB.

The document itself is impressive. It reads like it was written in Silicon Valley or Madison Avenue rather than Washington, D.C. More to the point, the 32-page memo references the word trust 20 times – second only to the term “digital experience” (DX). That’s important because trust is a form of currency used to barter attention from prospective customers.

What does digital experience have to do with trust?

Amazon sold its first book online in the late 1990s to early adopters, but it took a while before the general public got comfortable with e-commerce. Although a book is a relatively small purchase, the process of buying online is different.

Historically, you traveled to a physical store, purchased a book from another human being at the cash register, and walked out with it in hand. By contrast, with Amazon, you paid for it now and hoped it would come in the mail later.

Today, Amazon is the most successful online retailer ever. This didn’t happen on its own – change requires management – and Amazon went to great lengths to earn trust. Amazon accomplished this by fostering a DX customers raved about.

Its refund policy was hassle-free. Returns were simple and easy. It kept addresses in your account, so you didn’t have to re-type these every time you made a purchase. Amazon implemented shipment tracking, so you would know exactly where your purchase was and when you might expect it to arrive.

The consumerization of technology 

Amazon isn’t the only private sector institution to improve its DX. Google has instant answers. Uber has instant rides. Apple put a computer in your pocket – with more processing power than was on the Saturn V rocket – and made it easy for anyone to use. Fostering a productive DX is often the difference between success and failure in the technology sector.

These digital improvements also had a profound effect on mainstream expectations, initiating a trend called the “consumerization of technology.” People just expect all technology to work as easily as it does on the phone in their pocket. When it doesn’t, they lose trust and confidence. As such, underperforming websites and apps today breed doubt and, over time, distrust.

Improving digital experience will help restore trust in government 

The U.S. government maintains a sprawling IT infrastructure and hasn’t kept up with the pace of DX innovation. OMB says government websites receive some two billion visits every year. Moreover, Americans spend 10.5 billion hours filling out government forms annually. At 8,760 hours per year, that works out to a staggering 1.2 million years of paperwork.

When people can’t find the information for which they are looking on government websites, it chips away at trust. When they find conflicting or outdated information, it erodes credibility. When the instructions use ominous and difficult-to-understand legalese, the experience destroys the confidence of ordinary people who are just trying to do the right thing.

Trust between a population and its government is complicated and bigger than just the digital experience. However, we also know the interactions citizens have with government websites and apps can either destroy or improve trust. So while the government’s effort at reviving its digital transformation isn’t a panacea, it will go a long way toward rebuilding trust.

Article link: https://federalnewsnetwork.com/commentary/2024/02/why-the-digital-transformation-is-a-pillar-for-rebuilding-trust-in-government/

Rob Hankey is the CEO of Intelliworx which provides FedRAMP-authorized workflow management software solutions to more than 30 federal government departments and agencies. A retired rotary wing pilot for the U.S. Army, he later worked as a government employee before founding Intelliworx. 


Social Media Posts Have Power, and So Do You – RAND

Posted by timmreardon on 02/20/2024
Posted in: Uncategorized.

The spread of false or misleading information online can lead to knowledge that is inaccurate, incomplete, or manipulated. “Prebunking” is the act of exposing misinformation before it can be passed on to other people.

This new resource walks through three practical prebunking strategies.

Stop the Spread of False and Misleading Information During Voting Season

Published Feb 8, 2024

by Alice Huguet, Julia H. Kaufman, Melissa Kay Diliberti

In a healthy democracy, having accurate information is crucial for making informed decisions about voting and civic engagement. False and misleading information can lead to knowledge that is inaccurate, incomplete, or manipulated. Such knowledge can erode trust in democratic institutions and contribute to divisions within society.[1] And, on a personal level, it can be harder to have conversations with friends, family, and neighbors when you do not share the same facts.

Research shows that older generations are well represented online but are not as confident in their technological know-how as are younger users.[2] They are less likely to recognize and prevent the flow of misinformation,[3] which can exacerbate divisions between groups and contribute to polarization. These issues are particularly important given that older generations are more likely than younger people to vote,[4] and, thus, their skills in identifying false and misleading information can affect democratic processes.

Fortunately, the ability to identify and resist false and misleading information is not static, because this ability relies on skills that can be learned.[5] However, many of the tools that help individuals identify false and misleading information are designed for students in school. More interventions tailored to the needs and preferences of older U.S. residents are needed to help them stop the spread of false and misleading information. 

For these reasons, we developed this tool and three brief informational videos with people aged 55 years and older in mind. However, if you are interested in practical strategies for combating the challenges of false and misleading information — regardless of age — this tool is for you. This tool and its accompanying videos walk through three strategies:

  1. reading across sources rather than readily believing information from a single source
  2. resisting emotional manipulation that can lead to sharing and believing information that you otherwise might not agree with
  3. taking personal responsibility to stop the spread of false and misleading information.

We identified these strategies by reviewing existing research and conducting a brief survey in 2022 of nearly 1,000 U.S. adults aged 55 and older.[6] We then designed a set of three animated videos — each focused on one of the three approaches above — to help put these strategies into action.

What Is Prebunking?

This tool and its accompanying videos offer strategies commonly referred to as prebunking techniques. Prebunking is the process of exposing false or misleading information before it can be passed on to other people. Individuals can become skilled at prebunking by learning about common tricks used to spread bad information; by being aware of these tricks, people can better resist false information when they come across it. Research has shown that preventative messaging can be more effective than trying to correct inaccurate information after it has been spread.[7] Share this tool and these videos with your friends and family to help them learn prebunking strategies to resist false and misleading information.

1. Reading Across Sources (Lateral Reading)

What Is the Problem?

It is important for individuals to know the difference between trustworthy and false or misleading information. However, identifying what is trustworthy can be difficult and is often influenced by the way information is presented online. For example, if someone reads about a product using only sources that are paid to promote it, that person might not know about any negative side effects. This practice can lead to one-sided understanding and put the person at risk. The same concept holds true when it comes to information about politics or current events.

In response to our survey, older U.S. adults reported that they are not always confident in identifying whether or not information is based on fact. Older generations, in particular, may feel unsure about how to know which information to trust. But there are practical approaches to teasing out fact-based information from the rest. Lateral reading is one simple strategy that has proven particularly effective for checking the trustworthiness of information.[8]

What Is the Solution?

Lateral reading is when you check whether information you found online is trustworthy by looking for corroborating evidence through other websites or sources. Lateral reading has been shown to be more effective than trying to determine whether something is trustworthy based only on clues from the information itself.[9] Before you share information, either online or in person, you should pause. The small amount of time it takes to verify details across sources through lateral reading may prevent you from contributing to the problem of false information.

After you have identified the information that you want to verify, follow these basic steps:

  1. Open a new tab in your browser (e.g., Safari, Firefox, Google Chrome, Edge) on your computer or smartphone and navigate to a search engine, such as Google.
  2. In the search engine, enter text about the topic in which you are interested. The search engine will bring up news stories and other websites that discuss the same topic.
  3. Skim those additional sources to expand your understanding. Focus on information from organizations that do not seem like they are trying to influence you and news sites that do not aim to shock or excite readers.
  4. Search for the author and the organization from which the information came. Look for answers to such questions as the following: Who paid to produce this work? What kinds of expertise do the author and the organization have? Do other people seem to think they are credible, and why?

Now that you are armed with more context, think critically about whether the information you found is based on facts. Practicing this process — opening a new tab on your browser and searching for additional sources — can help prevent you from accidentally sharing false or misleading information.

2. Resisting Emotional Manipulation

What Is the Problem?

Emotionally charged information travels fast online. Social media can make people feel strong emotions that lead them to share things quickly.[10] This concept is built into the design of social media platforms: Appealing to people’s emotions is one of the most certain ways to capture their time and attention online.[11]

Bad actors understand that people are more likely to interact with emotionally charged content. As a result, some people share things that are meant to make readers feel a certain way — even if details are exaggerated or untrue.[12] People are more likely to believe false information when they are in a heightened emotional state.[13]

Engaging with false and misleading information because of the emotions it evokes can have tremendously damaging effects. For instance, if someone believes false information about a group of people, that information may contribute to treating that group unfairly or even causing direct harm to the group.

What Is the Solution?

Although reading across sources is an important step in stopping the spread of false information, some misleading information cannot be confirmed as true or false. Such information might instead contain an opinion or an unprovable assertion. It may be sensational or accompany a shocking image. This kind of information often relies on emotional manipulation, which is why fact-checking, though helpful, is not always enough to curb the spread of misleading information. To do that, people must be prepared to both fact-check and resist emotionally manipulative content.

To prevent the spread of emotionally manipulative information, the following steps would be helpful:

  1. Before liking, sharing, or commenting on information online, ask yourself why you want to do so — that is, is it because of an emotional reaction or for another reason?
  2. Take a moment to reflect and step back from any emotional response you notice after reading.
  3. Think critically about the information at hand, and what you would accomplish by liking, sharing, or commenting on it.

Studies show that engaging in critical thinking rather than reacting solely based on emotion can reduce the chances of spreading false information.[14] By taking these steps, you can resist emotional manipulation and help prevent the spread of false and misleading content.

3. Taking Personal Responsibility

What Is the Problem?

Our survey showed that most people aged 55 years and older are concerned about false and misleading information spreading in the United States. However, our respondents were more worried about other people spreading bad information than about doing so themselves. Other studies show that, compared with younger people, older generations are less likely to feel that the spread of misinformation and disinformation is their personal responsibility.[15]

Some survey respondents said that they felt that social media companies and government agencies should be responsible for stopping the spread of false and misleading content, at least to an extent. However, people cannot always rely on these groups for protection from misleading posts online. Some information might be manipulative but not break any rules, so it would not be removed by companies or the government. Although some information categorized as “fake news” is filtered out of feeds or flagged by social media companies, their systems cannot filter everything. It is simply not enough to believe that someone else will address the problem.

What Is the Solution?

The solution may be shifting your mindset and sharing what you have learned. By taking the following steps, you can remind yourself of your powerful role in addressing the problem of spreading false or misleading information:

  1. Remember that you play a crucial role in stopping the spread of false and misleading information.
  2. Read across sources before interacting with content.
  3. Reflect on your emotional state before engaging.
  4. Initiate conversations with friends and family about stopping the spread of false and misleading information.

You can multiply your positive impact by discussing these strategies with family and friends, particularly those who you notice may be sharing false or misleading information. Research shows that people are more likely to correct missteps in sharing false information when someone they trust responds with accurate information.[16]

A Call to Action: Limit the Influence of False and Misleading Information

The spread of false and misleading information can lead to a lack of trust in institutions, a breakdown in civil discourse, and increased polarization. By being responsible and using such strategies as checking multiple sources and avoiding emotional manipulation, people can reduce the impact of bad information during the next election cycle. By working together, individuals can create a more-informed society and protect democracy. Take action by using these strategies and discussing them with others.

Think before you share, because posts have power. And so do you.

How This Survey Was Conducted

To inform our identification of three prebunking strategies, we conducted a survey with a nationally representative sample of U.S. adults aged 55 years and older. We administered a nine-minute survey online in September 2022 using the RAND Corporation’s American Life Panel. Invites were sent to 1,286 individuals; we received 936 responses, for a completion rate of 72.7 percent. Our survey asked respondents about their social media use, where they get their news and information, and their confidence in navigating misinformation and disinformation online. Respondents received a $6 incentive to complete the survey.

Notes

  • [1] Stephan Lewandowsky, Ullrich K. H. Ecker, and John Cook, “Beyond Misinformation: Understanding and Coping with the ‘Post-Truth’ Era,” Journal of Applied Research in Memory and Cognition, Vol. 6, No. 4, 2017.
  • [2] Tanya Notley, Simon Chambers, Sora Park, and Michael Dezuanni, Adult Media Literacy in Australia: Attitudes, Experiences and Needs, Western Sydney University, Queensland University of Technology, and University of Canberra, 2021.
  • [3] Andrew Guess, Jonathan Nagler, and Joshua Tucker, “Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook,” Science Advances, Vol. 5, No. 1, 2019; Sander van der Linden, Jon Roozenbeek, and Josh Compton, “Inoculating Against Fake News About COVID-19,” Frontiers in Psychology, Vol. 11, 2020.
  • [4] Jacob Fabina and Zachary Scherer, “Voting and Registration in the Election of November 2020: Population Characteristics,” Current Population Reports, U.S. Census Bureau, P20-585, January 2022.
  • [5] John Cook, Stephan Lewandowsky, and Ullrich K. H. Ecker, “Neutralizing Misinformation Through Inoculation: Exposing Misleading Argumentation Techniques Reduces Their Influence,” PLOS One, Vol. 12, No. 5, 2017; Andrew M. Guess, Michael Lerner, Benjamin Lyons, Jacob M. Montgomery, Brendan Nyhan, Jason Reifler, and Neelanjan Sircar, “A Digital Media Literacy Intervention Increases Discernment Between Mainstream and False News in the United States and India,” Proceedings of the National Academy of Sciences, Vol. 117, No. 27, 2020; Jon Roozenbeek and Sander van der Linden, “The Fake News Game: Actively Inoculating Against the Risk of Misinformation,” Journal of Risk Research, Vol. 22, No. 5, 2019.
  • [6] Information about our survey methods is included in the box entitled “How This Survey Was Conducted.”
  • [7] Toby Bolsen and James N. Druckman, “Counteracting the Politicization of Science,” Journal of Communication, Vol. 65, No. 5, 2015.
  • [8] Sam Wineburg and Sarah McGrew, “Lateral Reading and the Nature of Expertise: Reading Less and Learning More When Evaluating Digital Information,” Teachers College Record, Vol. 121, No. 11, 2019.
  • [9] Wineburg and McGrew, 2019.
  • [10] Adam D. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock, “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks,” Proceedings of the National Academy of Sciences of the United States of America, Vol. 111, No. 24, 2014.
  • [11] Vian Bakir and Andrew McStay, “Fake News and the Economy of Emotions: Problems, Causes, Solutions,” Digital Journalism, Vol. 6, No. 2, 2018; Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of True and False News Online,” Science, Vol. 359, No. 6380, 2018.
  • [12] Bakir and McStay, 2018.
  • [13] Cameron Martel, Gordon Pennycook, and David G. Rand, “Reliance on Emotion Promotes Belief in Fake News,” Cognitive Research: Principles and Implications, Vol. 5, No. 47, 2020.
  • [14] Daniel A. Effron and Medha Raj, “Misinformation and Morality: Encountering Fake-News Headlines Makes Them Seem Less Unethical to Publish and Share,” Psychological Science, Vol. 31, No. 1, 2020.
  • [15] Notley et al., 2021.
  • [16] Leticia Bode and Emily K. Vraga, “See Something, Say Something: Correction of Global Health Misinformation on Social Media,” Health Communication, Vol. 33, No. 9, 2018; Leticia Bode, Emily K. Vraga, and Melissa Tully, “Do the Right Thing: Tone May Not Affect Correction of Misinformation on Social Media,” Harvard Kennedy School Misinformation Review, 2020.


This tool presents strategies to help U.S. adults — in particular, those 55 and older — identify and resist false and misleading information, especially with the 2024 election season underway.

This study was undertaken by RAND Education and Labor, a division of the RAND Corporation that conducts research on early childhood through postsecondary education programs, workforce development, and programs and policies affecting workers, entrepreneurship, and financial literacy and decisionmaking. This tool was sponsored by a gift from the Brothers Brook Foundation. Questions about this tool should be directed to the lead author, Alice Huguet, ahuguet@rand.org, and questions about RAND Education and Labor should be directed to educationandlabor@rand.org.


Article link: https://www.rand.org/pubs/tools/TLA2909-1.html?

Document Details

  • Copyright: RAND Corporation
  • Availability: Web-Only
  • DOI: https://doi.org/10.7249/TLA2909-1
  • Document Number: TL-A2909-1
  • Year: 2024
  • Series: Tools


The role of harmonised standards under the AI Act

Posted by timmreardon on 02/19/2024
Posted in: Uncategorized.

Leon Doorn | Feb 14, 2024

In this blogpost I explore the risk associated with not having harmonised standards in place in time under the AI Act.

Why is this relevant?

Without harmonised standards (or common specifications), the following will require third-party conformity assessment, e.g. by a Notified Body:

1. devices covered by Annex III point 1 (biometric devices used for remote identification, categorisation and emotion recognition); and

2. products of manufacturers covered by Annex II Section A that currently demonstrate compliance by applying harmonised standards alone (the majority of these products). These manufacturers may continue to do so under the AI Act, but only if harmonised standards or common specifications are available to support it.

Without harmonised standards, there will be a significant increase in the demand on the resources of already burdened Notified Bodies.

Harmonised standards

The concept of harmonised standards as included in the AI Act is not new and is widely applied in European legislation. When a standard (for example, an ISO or IEC standard) is considered ‘harmonised’, organisations applying it are presumed compliant with the requirements the standard is linked to (in a so-called ‘Annex Z’ of the standard). The concept is simple: the European Commission drafts a ‘standardisation request’ to standardisation organisations, requesting them to develop standards which can be used to demonstrate compliance. These standardisation requests are publicly available and can be found here. The ‘draft’ standardisation request for the AI Act is M/593, which has been sent to CEN/CENELEC’s JTC 21.

The standards organisation then develops a work programme and the standards themselves, and proposes the final standards to the European Commission. The Commission outsources the review to HAS consultants, who assess the proposed standards (and can reject them) and create the so-called ‘Annex Z’ showing which parts of the regulation are addressed in the standard, after which the standard is published in the Official Journal of the European Union.

An update of the standardisation request is due upon publication of the AI Act in the Official Journal of the European Union.

High-Risk AI requirements & Harmonised standards

Within the ‘draft’ standardisation request for the AI Act, the European Commission has already set out a number of standards to be developed by CEN/CENELEC, and due to the trilogue outcome, additional requests for harmonised standards are expected in the ‘final’ Standardisation Request, e.g. addressing General Purpose AI (GPAI).

This is of relevance to all High-Risk AI, where these AI Systems and their developers will need to demonstrate compliance with Title III (Chapters 2 & 3) of the AI Act. Chapters 2 and 3 document the requirements on risk management (Article 9), data governance (Article 10), record-keeping (Article 12) and quality management (Article 17), to name a few.

In total 10 standards in relation to these requirements in the AI Act have been requested by the European Commission so far.

Conformity assessment & harmonised standards

Conformity assessment of High-Risk AI Systems per Annex III point 1 must be executed per Article 43 by following either:

1. a conformity assessment based on internal control as referred to in Annex VI (e.g. issuing a Declaration of Conformity); or

2. a conformity assessment procedure based on an assessment of the Quality Management System and Technical Documentation by a Notified Body as referred to in Annex VII.

Article 43 further explains that, in the absence of harmonised standards or common specifications, developers will have to follow the Annex VII procedure, thus involving a Notified Body in their assessment.

For providers of High-Risk AI covered by Annex II Section A (e.g. machinery, toys, watercraft, etc.), Article 43.3 (last paragraph) clarifies that devices can opt out of Notified Body assessment under their sectoral legislation if:

• it is acceptable under such legislation to demonstrate compliance through compliance with harmonised standards; and

• they applied available harmonised standards or common specifications for the requirements set out in Chapter 2 of Title III.

In conclusion, these providers will also need to undergo third-party conformity assessment if harmonised standards or common specifications are unavailable.
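
Read as plain decision logic, the routing above can be sketched as follows. This is a minimal sketch under one reading of Article 43 and Article 43.3; the function and parameter names are invented for illustration, not AI Act terminology:

```python
# Minimal sketch of the conformity-assessment routing described above.
# Names and the boolean simplification are illustrative, not AI Act text.

def needs_notified_body(annex_iii_point_1: bool,
                        annex_ii_section_a_using_hs: bool,
                        hs_or_cs_available: bool) -> bool:
    """True if third-party (Notified Body) assessment would be required."""
    if annex_iii_point_1:
        # Internal control per Annex VI is only open when harmonised
        # standards or common specifications exist and are applied.
        return not hs_or_cs_available
    if annex_ii_section_a_using_hs:
        # The sectoral opt-out presumes available harmonised standards.
        return not hs_or_cs_available
    return False  # other cases are out of scope for this sketch

# Example: a biometric (Annex III point 1) system with no harmonised
# standards available would need a Notified Body assessment.
print(needs_notified_body(True, False, False))  # True
```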

Timelines

The AI Act will most likely enter into force mid 2024, with a transition period of 2 years for Annex III devices and 3 years for Annex II devices. As CEN-CENELEC has only recently confirmed a proposed work programme for standards, and existing ISO standards (e.g. the management system standard ISO 42001) may not be sufficient to demonstrate compliance with the requirements of the AI Act, the time left to develop standards is becoming short.

Given that the average timeline to develop a standard (excluding the harmonisation process) is 3 years from first proposal to publication, a full set of harmonised standards is unlikely to be available before the end of the 2-year transition period for biometric systems, and potentially not within the 3 years for devices covered under Annex II Section A applying harmonised standards.

Implications

Without harmonised standards or common specifications to demonstrate compliance with the AI Act’s high-risk requirements, Annex III (point 1) devices and Annex II Section A devices applying harmonised standards will all require Notified Body conformity assessment if they make use of AI.

The window for these devices to become certified will be small, with a 2-year transition timeline for Annex III point 1 devices and a 3-year transition timeline for those covered under Annex II Section A.

Considering that:

1. Notified Bodies will need to be accredited to issue CE certificates against the AI Act for certifying these devices covered by Annex III point 1, and

2. Developers will need to have fulfilled all relevant requirements set out in the AI Act.

Consequently, the pressure on Notified Bodies, which is already intense, will increase; if not managed properly, this can lead to the kinds of consequences previously witnessed in the transitions from the MDD to the MDR and from the IVDD to the IVDR. For those involved in medical devices and in-vitro diagnostics, the frustrations and delays due to a lack of Notified Body resources are unfortunately still ongoing.

Additionally, it is worth asking whether the European Commission should, in the background, start developing Common Specifications to avoid the kind of situation the medical device industry is already familiar with. While Common Specifications can have drastic consequences (e.g. a lack of alignment with international frameworks), the alternative of having no harmonised standards may not be attractive either.

Article link: https://www.linkedin.com/pulse/role-harmonised-standards-under-ai-act-leon-doorn-qqime

Building a DOD Data Economy – DIB

Posted by timmreardon on 02/12/2024
Posted in: Uncategorized.

Defense Innovation Board looks to lock data access in ‘all vendor agreements’ – Nextgov

Posted by timmreardon on 02/12/2024
Posted in: Uncategorized.


A recent study from the advisory group said data access requirements in the Pentagon’s vendor agreements are “fragmented and inconsistent” and called for Congress to take action.

The Pentagon would mandate data access in all vendor contracts under a new legislative requirement recommended by the Defense Innovation Board in its most recent report. 

The report, which examined the Department of Defense’s data economy, said “the current state of data access within DOD vendor agreements is fragmented and inconsistent” and includes suggested legislative text for the FY2025 National Defense Authorization Act that would “enshrine DOD data access and rights in all vendor agreements.”

The DIB — an independent oversight committee that provides technology recommendations to the defense secretary and other senior Pentagon officials — was tasked in October 2023 by David Honey, DOD’s undersecretary of defense for research and engineering, to help the department enhance its use of data. The study was cleared for public release on January 23. 

The report called the Pentagon’s efforts to quickly access and use needed data across the entire department “outdated,” noting that inadequate data access practices are “inhibiting effective interoperability and utilization of data across various platforms” needed to enable the DOD’s Combined Joint All-Domain Command and Control initiative. Known as CJADC2, the ongoing departmentwide effort seeks to streamline information-sharing across disparate military domains into one cohesive network.

The board said the NDAA proposal — which it called an “initial action” to address the department’s broader data access issues — would ensure that all future DOD vendor agreements “incorporate clear language on data rights and interoperability that manages data procured or generated under defense industrial contracts, and that facilitates, safeguards and future-proofs DOD’s access to this data.”

The DIB report also recommended that the proposed requirement “direct the formation of a federated defense industrial data catalog for defense companies and the department, a trusted community of interest for accessing this federated data catalog and an oversight body for this new data marketplace.”

During the DIB’s quarterly public meeting on Jan. 26, Ryan Swann — a board member and the chief data analytics officer at Vanguard — said the recommended NDAA proposal would help the Pentagon “prioritize data rights and data interoperability so that we can get data out of our platforms and systems so they can be shared securely, kind of across the enterprise where we find value, or where we are able to leverage AI to create value.”

While the board said including its recommendation in the next must-pass defense policy bill would not, on its own, be a panacea for all of DOD’s data challenges, its report noted that “enhanced collaboration with commercial vendors will propel DOD’s antiquated approach to data access decades forward in the next 12 to 18 months.”

Article link: https://www.nextgov.com/defense/2024/02/defense-innovation-board-looks-lock-data-access-all-vendor-agreements/393929/?

AI Developers Should Understand the Risks of Deploying Their Clinical Tools, MIT Expert Says – JAMA

Posted by timmreardon on 02/12/2024
Posted in: Uncategorized.

Samantha Anderer; Yulin Hswen, ScD, MPH

Article Information

JAMA. Published online February 7, 2024. doi:10.1001/jama.2023.22981

This conversation is part of a series of interviews in which JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, and expert guests explore issues surrounding the rapidly evolving intersection of artificial intelligence (AI) and medicine.

AI applications for health care should be designed to function well in different settings and across different populations, says Marzyeh Ghassemi, PhD (Video), whose work at the Massachusetts Institute of Technology (MIT) focuses on creating “healthy” machine learning (ML) models that are “robust, private, and fair.” The way AI-generated clinical advice is presented to physicians is also important for reducing harms, according to Ghassemi, who is an assistant professor at MIT’s Department of Electrical Engineering and Computer Science and Institute for Medical Engineering and Science. And, she says, developers should be aware that they have a responsibility to clinicians and patients who could one day be affected by their tools.

Video. AI and Clinical Practice— AI and the Ethics of Developing and Deploying Clinical AI Models

JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, recently spoke with Ghassemi about “ethical machine learning,” the computer scientist’s decision to opt out of AI in her own health care, and more.

The following interview has been edited for clarity and length.

Dr Bibbins-Domingo: You have a research lab, Healthy ML. It specializes in examining biases in artificial intelligence, and you’re specifically interested in its applications in clinical practice. I’d love to hear how you got into this very specific area.

Dr Ghassemi: At the end of my PhD, we found out that [machine learning] models tend not to work as well in all groups. And that really informs what we do here in my lab today, focusing on how we make sure that models that are developed work robustly. And if you think about robustness, that could mean that it works well in a new environment or across different kinds of people.

Dr Bibbins-Domingo: How do you think about the range of reasons why a model might not perform well in one setting vs another or in one group of people vs another?

Dr Ghassemi: I try to think about it within the pipeline that all models are developed in. And this is not just in health care. This is for any machine learning model that might be developed and deployed in any human-facing setting. You choose a problem, collect some data, define a label, develop an algorithm, and then deploy it. In each part of that pipeline, there are reasons that your model might not perform as well. For problem selection, what we choose to fund and what we choose to work on is often biased. We tend to look at problems that are easy to address where there are more data readily available that can be correlated with different metrics of social status, or privilege, or just where funding tends to be allocated to.

For example, diseases that disproportionately affect people who are biologically female at birth tend to be understudied. And if we’re collecting data from these human sources, it’s probably going to have some bias in it just because of the way that humans interact with one another. Just by collecting data from a human process, you’re going to have some potential performance issues. We probably want machine learning models to replicate the very best health care practices that we see now, but if we take a random sample of data from thousands of hospitals and say, “Perform the way that an average doctor is performing on an average day,” we might get some behaviors that we don’t want to extend.

When we define a label, that’s another way that bias can be injected into the learning process. It’s a true-false label. We never contextualize it with the choice that’s being made or the human rule that’s being applied. When you collect labels in this descriptive way but then train a machine learning model, all of those machine learning models become much harsher. They have a much higher false-positive rate.
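
One concrete way to surface the harsher-model effect described here is to audit a model’s false-positive rate separately for each group. A minimal NumPy sketch on synthetic data (the groups, sizes, and noise level are invented purely for illustration):

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """FPR = false positives / all actual negatives."""
    negatives = (y_true == 0)
    return float((y_pred[negatives] == 1).mean())

rng = np.random.default_rng(0)

# Synthetic labels, group membership, and predictions (illustrative only).
y_true = rng.integers(0, 2, size=1_000)
groups = rng.choice(["A", "B"], size=1_000, p=[0.8, 0.2])
y_pred = y_true.copy()

# Inject noisier (harsher) predictions for the minority group B.
flip = (groups == "B") & (rng.random(1_000) < 0.2)
y_pred[flip] = 1 - y_pred[flip]

for g in ("A", "B"):
    mask = groups == g
    print(g, round(false_positive_rate(y_true[mask], y_pred[mask]), 3))
```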

Dr Bibbins-Domingo: You use the term ethical machine learning. I’d love you to define what that term means for you and help us to understand it in the context of medical practice.

Dr Ghassemi: I think for me as a technical person, ethical machine learning means recognizing your responsibility to end users that might potentially be impacted by the models that you’re developing, the technology that you’re releasing. And I think there are many ethical frameworks that professional societies have—for engineers, for doctors, for different kinds of individuals that interact with humans.

And that’s not standard in computer science training. It wasn’t in my computer science curriculum. There wasn’t a specific set of rules, or regulations, or even principles that we went over. And now we’re seeing a lot of programs like the program at MIT step up and recognize that computer science impacts just as many people as many engineering disciplines do. But I think that we’re playing a little bit of catch-up in the field with people starting to recognize that these choices make an impact.

Dr Bibbins-Domingo: So, what does that mean for algorithms designed for use in clinical practice settings? Do you just need to be more aware and understand this ethical machine learning? Do you and I need to talk as you are developing a particular model? What types of processes get us to the point where we really are focused on the end user, in this case patients? And what type of team, and what type of processes, and what types of things get us there?

Dr Ghassemi: I think we need a change in the technical people, the technical societies, and the technical systems. We need to speak with and be informed by the needs of those whom we are collaborating with and not just to understand how data might have been collected but how a model might be deployed and what the risks are for such a deployment.

I think the problem here is not just that we’re using machine learning in health, it’s that we’re using this really powerful tool in a space where technology has been reasonably laxly regulated. We’re adding this extra tool to a setting that doesn’t currently have a lot of regulation, and I think it’s a struggle to catch up. If you’re upset about a machine learning model learning to kill more women than men, performing more poorly on women than men, but it learned that from the data, maybe we should try to address the underlying problem, which is that more women die in this procedure. Rather than saying, “I’m so angry that the model has learned this thing,” let’s use the fact that it learned it to address the underlying issue.


Dr Bibbins-Domingo: You’re speaking about such an important issue, and we are in an environment where this technology is moving at rapid speed, both the capabilities and the enthusiasm for adopting any type of machine learning, AI approach in health care. We also know that these models can be subject to biases. So, in your view, how should we think about regulation once the model’s developed or once it’s deployed?

Dr Ghassemi: I totally agree with you that it seems like the philosophy here is deploy ahead of regulation, which I don’t think is the right way of thinking about the role of technology in the health care setting. What I will say is, I think that the FDA [US Food and Drug Administration] has done really fantastic work toward trying to have systems where audits can be done for machine learning models. I think that there are improvements that could be made, like with any system.

I’m actually a big fan of the multiarm regulatory system that aviation has with different federal agencies that were created decades apart specifically to ensure that there’s safety in airplanes that exist, and there’s training for pilots to use technology, and that there are standards about how different airlines have to communicate, and there are responsibilities that airlines and carriers have to passengers who fly.

I think that we need the same kind of regulation that is well recognized as being not about assigning blame or liability but about ensuring safety and having a space and a culture of safety. And also that there is some amount of oversight where people voluntarily take a certain amount of training in order to be able to work well with technology prior to having it integrated into their setting.

I do want to address the fact that—unlike in aviation, where there were lots of human-computer interaction and end-user studies done to figure out how best to show information to people in a stressful situation who are trying to make decisions—we haven’t done a lot of those studies for machine-learning-plus-doctor or other technology-plus-doctor settings. We don’t actually know how best to give information to doctors, information that might be wrong sometimes by the way, such that they are able to use it well when it’s right and they’re not disproportionately biased by it when it’s wrong. The work that we’ve done so far suggests that the key, or one of the keys, to making sure that doctors aren’t misled by biased information is to make sure that it’s given to them descriptively.

Dr Bibbins-Domingo: And is that because we trust that it’s an AI model, it’s math, and therefore we should do what it says?

Dr Ghassemi: Based on other work by really fantastic researchers and on work my lab has done, I think it is two things coupled together. Number one is automation bias. It has been well documented in clinical settings for a long time that if there's a prefilled default, you're more likely to use it.

And the other is exactly what you're saying: algorithmic overreliance. People assume that the system, whether a robot, an AI, or an algorithm, has access to more information than they do, or is better aware of the risks of making an incorrect decision in that setting.

There have been many other documented settings where clinicians were given incorrect or bad advice, and even when they were made aware that the model could give them incorrect or bad advice, they still exhibited these same automation and overreliance biases. So it's something we need to be really careful about when we consider exactly how we deliver advice.

Dr Bibbins-Domingo: I am so glad you brought up the point that in other sectors with either a much longer history or a much closer coupling of humans and computers, like aviation, a lot of attention has been paid to how information is presented. And it's clear that we need to understand that much more. It reminds me of a study we published in JAMA just a few months ago on whether explaining a model can give the clinician better insight into where it might be wrong. It showed that biased models produced the wrong results and that explainability did not mitigate the degree to which a clinician was led astray.

I think it speaks to what you're saying here, and to how important it is not to assume that explaining how the model was built will keep me from going down the wrong road.

Dr Ghassemi: It's been well established for a while that explainability methods can make a model less fair, because fundamentally they are approximations. How do you make a model explainable? You make it simpler, and so you have to approximate something. What we've found previously is that these approximations tend to impact minority groups more than majority groups, which makes sense: if you need to approximate a complex nonlinear boundary and some group has to be modeled a little less well, it will probably be the group that occupies a smaller part of the space, because that costs you less overall performance.

So not only do explainability methods tend to make models less fair in many of the settings we evaluated; this study in JAMA demonstrates that explainability can sometimes even increase overreliance. A bare number or a plain description doesn't short-circuit the critical thinking you have to do to make the decision. But if you make it easy, you start engaging that overreliance and automation bias: the system is telling you what to do and explaining the reason, and I think that's where these biases become very strong.
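The "simpler approximation" point can be demonstrated numerically. Below is a minimal sketch on synthetic data, with an invented group structure that is not from the JAMA study or Dr Ghassemi's work: a global linear surrogate of a nonlinear model ends up mimicking that model less faithfully for the smaller group, whose decision boundary differs from the majority's.

```python
# Minimal sketch: a global linear surrogate approximates a nonlinear model
# less faithfully for a minority group whose decision boundary differs.
# Data, groups, and models are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = (rng.random(n) < 0.1).astype(int)   # 10% minority group
X = rng.normal(size=(n, 2))
# Majority outcome depends on feature 0; minority outcome on feature 1.
y = np.where(group == 0, X[:, 0] > 0, X[:, 1] > 0).astype(int)

features = np.column_stack([X, group])
complex_model = RandomForestClassifier(n_estimators=100, random_state=0)
complex_model.fit(features, y)

# Surrogate "explanation": a simple linear model trained to mimic the
# complex model's predictions, as post hoc explainability methods do.
surrogate = LogisticRegression().fit(features, complex_model.predict(features))

# Fidelity = how often the surrogate agrees with the model it "explains".
agree = surrogate.predict(features) == complex_model.predict(features)
for g in (0, 1):
    print(f"group {g}: surrogate fidelity = {agree[group == g].mean():.2f}")
# Typically prints markedly lower fidelity for group 1, the 10% minority.
```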

Dr Bibbins-Domingo: It's so interesting. The modeling is complex, but humans and human behavior are also complex.

Dr Ghassemi: I think that's the hardest thing, honestly. It's such a complex system of interactions. I'm making a loose analogy to aviation, but it's not aviation. In aviation, you have a plane with hundreds of passengers, and the outcome for one is the outcome for all: they all land safely. That's not what happens in health care. So there's so much more we need to do, so much more research that needs to be done, and we really lack the backbone to do it, because even before machine learning we had clinical risk scores that do not work for women.

When I give these examples, people sometimes say, "Well, a clinical risk score can't work for every tiny subgroup; it's hard to collect data from minorities." Women are not a minority; we're half the planet, sometimes more. The fact that clinical risk scores have historically not worked for half the planet, no machine learning or AI needed, speaks to the need to understand how to use technology in the health care system, even without machine learning, in a way that doesn't increase inequity.
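The kind of check she is describing, evaluating a score's performance on each half of the population rather than only in aggregate, is straightforward to express. A minimal sketch, assuming a pandas DataFrame with hypothetical `risk_score`, `outcome`, and `sex` columns (all names invented for illustration):

```python
# Minimal sketch: audit an existing clinical risk score by subgroup.
# Column and variable names here are hypothetical, not from the interview.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_by_subgroup(df: pd.DataFrame, group_col: str = "sex") -> pd.DataFrame:
    """Report discrimination (AUROC) of a risk score overall and per subgroup."""
    rows = [{"group": "overall", "n": len(df),
             "auroc": roc_auc_score(df["outcome"], df["risk_score"])}]
    for g, sub in df.groupby(group_col):
        rows.append({"group": g, "n": len(sub),
                     "auroc": roc_auc_score(sub["outcome"], sub["risk_score"])})
    return pd.DataFrame(rows)

# Synthetic demo: a score that tracks outcomes for men but is noise for women.
rng = np.random.default_rng(1)
n = 4_000
sex = rng.choice(["F", "M"], size=n)
outcome = rng.integers(0, 2, size=n)
score = np.where(sex == "M", outcome + rng.normal(0, 0.8, n), rng.normal(0, 1, n))
print(audit_by_subgroup(pd.DataFrame(
    {"sex": sex, "outcome": outcome, "risk_score": score})))
```

On this synthetic data, the overall AUROC looks respectable while the subgroup rows reveal the score is uninformative for women, which is exactly why aggregate metrics alone can hide the failure she describes.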

Dr Bibbins-Domingo: Okay. So, what AI tools do you use?

Dr Ghassemi: I feel like I have to be very clear here, because I have two very different opinions about one fantastic thing. Like many people, when ChatGPT and other versions of GPT were released, I was so impressed with the technical accomplishment. I have also spoken very widely about how unhappy I am that it's being used for certain things in a clinical setting; I don't think that's the best use of it.

But I will say, if you write a grant or have a great research idea, you often have to summarize it 7 different ways: a 100-word abstract for a general audience, a 200-word abstract for a scientific officer, a 300-word…. I love using GPT models to produce summaries of my own work at a specific length for a particular audience.
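That workflow is easy to script. A minimal sketch, assuming the OpenAI Python client; the model name is illustrative, and the audiences and word limits simply echo her example:

```python
# Minimal sketch of the summarize-for-each-audience workflow.
# Assumes the OpenAI Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AUDIENCES = [
    ("a general audience", 100),
    ("a scientific program officer", 200),
]

def summarize(abstract: str, audience: str, words: int) -> str:
    """Ask the model for a summary of a given length for a given audience."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable model would do
        messages=[{
            "role": "user",
            "content": (f"Summarize the following research abstract for "
                        f"{audience} in roughly {words} words:\n\n{abstract}"),
        }],
    )
    return resp.choices[0].message.content

# Example usage, assuming the abstract lives in a local text file:
# for audience, words in AUDIENCES:
#     print(summarize(open("grant_abstract.txt").read(), audience, words))
```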

Dr Bibbins-Domingo: That's a very good example. But let me give you the opportunity to expand on what you would challenge us not to use it for. What AI tools do you avoid, or what would you not use right now?

Dr Ghassemi: I opt out of almost all uses of AI in a health setting, both for myself and for my dependents, because I'm well aware of the research, some of which is my own, showing that these tools are unlikely to work well for a minority female.

Dr Bibbins-Domingo: What do you say when someone says, "Well, we are never going to make models that are designed for people like you because you are not letting us use the data on people like you"?

Dr Ghassemi: I have spoken to minority communities and told them, "Please let me use your data. Without it, my model will not work; it will perform poorly on your population." And that's the reason clinical models are so bad for so many people: sometimes intentionally, only certain groups were studied. What I say is that I am doing research that will be peer reviewed, often brutally, and published in some venue. And if I ever wanted to deploy it, I would hope that any deployer, if not me, would go through a rigorous approval process to ensure the model was robust prior to deployment.

I think there's a fundamental difference between using data for discovery, to understand the limits of machine learning in health, and automating an efficiency metric, a decision, or an output that just needs to be filled in on an electronic health record. I would consent to my data being used in a machine learning paper. But I don't want it used to predict how much care should be allocated to me, which medications I should have access to, or what kind of doctor I might be referred to, because I know all of those decisions will be biased.

Dr Bibbins-Domingo: Your explanation, I think, helps us understand where we are: a landscape of evolving technology that is both very powerful and subject to known limitations and biases.

Article Information

Published Online: February 7, 2024. doi:10.1001/jama.2023.22981

Conflict of Interest Disclosures: Dr Ghassemi reported receiving funding from CIFAR, Quanta Computing, Microsoft Research, Helmsley Trust, Wellcome Trust, J-Clinic, IBM, Moore Foundation, Janssen Research and Development, VW Foundation, and Takeda.

Article link: https://jamanetwork.com/journals/jama/fullarticle/2815046
