healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

Scientists make significant breakthrough in microchip technology that could forever change our electronics: ‘It can open up a new realm’

Posted by timmreardon on 06/20/2024
Posted in: Uncategorized.

Jeremiah Budin

Sat, June 15, 2024 at 5:00 AM EDT

Scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley may have just unlocked the secret to making electronic devices smaller and more efficient, using tiny electronic components called microcapacitors, SciTechDaily reported.

The microcapacitors could allow energy to be stored directly on the devices’ microchips, minimizing the energy losses that occur as energy is transferred between the devices’ different parts.

The Berkeley scientists engineered the microcapacitors with thin films of hafnium oxide and zirconium oxide, achieving record-high energy and power densities. They published their findings in the journal Nature.
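For context on what "energy density" means here: in an ordinary (linear) dielectric capacitor, the energy stored per unit volume grows with the square of the applied electric field. This textbook baseline is only a reference point, not the authors' model; the negative-capacitance films in the paper behave nonlinearly precisely to exceed it.

```latex
% Volumetric energy density of a linear dielectric capacitor, where
% \varepsilon_0 is the vacuum permittivity, \varepsilon_r the relative
% permittivity of the film, and E the applied electric field:
u = \tfrac{1}{2}\,\varepsilon_0\,\varepsilon_r\,E^{2}
```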

As with many scientific discoveries, the researchers behind this one had been working on the problem for years, and they were surprised by how good their results were.

“The energy and power density we got are much higher than we expected,” said Sayeef Salahuddin, the UC Berkeley professor who led the project. “We’ve been developing negative capacitance materials for many years, but these results were quite surprising.”

The discovery could lead directly to smaller and more efficient electronic devices, such as phones, sensors, personal computers, and more.

“With this technology, we can finally start to realize energy storage and power delivery seamlessly integrated on-chip in very small sizes,” said Suraj Cheema, one of the researchers and co-lead author of the paper. “It can open up a new realm of energy technologies for microelectronics.”

Much of the research happening in and around the realm of clean energy is currently focused on improving battery technology — and as we have seen, this problem can be attacked from a wide variety of angles, from making batteries more efficient, to creating them using less environmentally harmful materials, to devising ways to manufacture them more cheaply.

All of these approaches, in their own way, aid the effort to move beyond the dirty energy sources — mainly gas and oil — that contribute massive amounts of air pollution and have led to the overheating of our planet. By making clean energy and battery storage more viable, any one of these breakthroughs could have a huge positive impact.

Article link: https://www.yahoo.com/tech/scientists-significant-breakthrough-microchip-technology-090000848.html

Blog: The Promise Artificial Intelligence Holds for Improving Health Care – FDA

Posted by timmreardon on 06/17/2024
Posted in: Uncategorized.

By: Troy Tazbaz, Director, Digital Health Center of Excellence (DHCoE), Center for Devices and Radiological Health, U.S. Food and Drug Administration

Artificial intelligence (AI) is rapidly changing the health care industry and holds transformative potential. AI could significantly improve patient care and medical professional satisfaction and accelerate and advance research in medical device development and drug discovery.

AI also has the potential to drive operational efficiency by enabling personalized treatments and streamlining health care processes.

At the FDA, we know that appropriate integration of AI across the health care ecosystem will be paramount to achieving its potential while reducing risks and challenges. The FDA is working across medical product centers regarding the development and use of AI as we noted in the paper, Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together.

DHCoE wants to foster responsible AI innovations in health care while ensuring these technologies, when intended for use as medical devices, are safe and effective for the end-users, including patients. Additionally, we seek to foster a collaborative approach and alignment within the health care ecosystem around AI in health care.

AI Development Lifecycle Framework Can Reduce Risk

There are several ways to achieve this. First, agreeing on and adopting standards and best practices at the health care sector level for the AI development lifecycle, as well as risk management frameworks, can help address risks associated with the various phases of an AI model.

This includes, for instance, approaches to ensure that data suitability, collection, and quality match the intent and risk profile of the AI model that is being trained. This could significantly reduce the risks of these models and support their providing appropriate, accurate, and beneficial recommendations.

Additionally, the health care community could agree on common methodologies for providing a diverse range of end users (including patients) with information about how a model was trained, deployed, and managed through robust monitoring tools and operational discipline. This includes proper communication of the model’s reasoning, which will help build the trust and assurance people and organizations need to adopt AI successfully.

Enabling Quality Assurance of AI in Health Care

To positively impact clinical outcomes with the use of AI models that are accurate, reliable, ethical, and equitable, development of a quality assurance practice for AI models should be a priority.

Top of mind for device safety is quality assurance applied across the lifecycle of a model’s development and use in health care. Continuous performance monitoring before, during, and after deployment is one way to accomplish this, along with identifying data quality and performance issues before the model’s performance becomes unsatisfactory.
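The continuous performance monitoring described above can be sketched in a few lines. This is a hypothetical illustration, not an FDA-endorsed method: it tracks model accuracy over a rolling window of recent predictions and raises a flag when accuracy drops below a threshold, so degradation is caught before performance becomes unsatisfactory.

```python
from collections import deque


class PerformanceMonitor:
    """Illustrative sketch: rolling-window accuracy monitor for a deployed model."""

    def __init__(self, window_size=100, threshold=0.9):
        # Keep only the most recent `window_size` outcomes.
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def record(self, prediction, actual):
        # Store True when the model's prediction matched the ground truth.
        self.window.append(prediction == actual)

    def accuracy(self):
        # Accuracy over the current window; trivially 1.0 before any data.
        return sum(self.window) / len(self.window) if self.window else 1.0

    def degraded(self):
        # Only alert once the window is full, to avoid noisy early alarms.
        return len(self.window) == self.window.maxlen and self.accuracy() < self.threshold


# Simulate 7 correct and 3 incorrect predictions in a 10-item window.
monitor = PerformanceMonitor(window_size=10, threshold=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:
    monitor.record(pred, actual)
print(monitor.accuracy())  # 0.7
print(monitor.degraded())  # True
```

A production system would monitor more than accuracy (calibration, subgroup performance, input-data drift), but the pattern — compare a rolling statistic against a pre-agreed threshold — is the same.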

How can we go about achieving our shared goals of assurance, quality, and safety? At the FDA, we’ve discussed several concepts to help promote this process for the use of AI in medical devices. We plan to expand on these concepts through future publications like this one, including discussing the following topics:

  • Standards, best practices, and operational tools
  • Quality assurance laboratories
  • Transparency and accountability
  • Risk management for AI models in health care

Generally, standards, best practices, and tools can help support responsible AI development, and can help provide clinicians, patients, and other end users with quality assurance for the products they need.

Principles such as transparency and accountability can help stakeholders feel comfortable with AI technologies. Quality assurance and risk management, right-sized for health care institutions of all sizes, can help provide confidence that AI models are developed, tested, and evaluated on data that is representative of the population for which they are intended.

Shared Responsibility on AI Quality Assurance is Essential to Success

Efforts around AI quality assurance have sprung up at a grassroots level across the U.S. and are starting to bear fruit. Solution developers, health care organizations, and the U.S. federal government are working to explore and develop best practices for quality assurance of AI in health care settings.

These efforts, combined with FDA activities relating to AI-enabled devices, may lead to a world in which AI in health care settings is safe, clinically useful, and aligned with patient safety and improvement in clinical outcomes.

Our “virtual” doors at the DHCoE are always open, and we welcome your comments and feedback related to AI use in health care. Email us at digitalhealth@fda.hhs.gov, noting “Attn: AI in health care,” in the subject line.

Article link: https://www.linkedin.com/posts/fda_proper-integration-of-ai-across-the-health-activity-7208524345944485888-TfVQ?

AI and Trust – Belfer Center for Science and International Affairs, Harvard Kennedy School

Posted by timmreardon on 06/15/2024
Posted in: Uncategorized.
Bruce Schneier

Nov. 27, 2023

This essay was originally presented at an AI Cyber Lunch on September 20, 2023, at Harvard Kennedy School.


I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew. And thousands of other people at the airport and on the plane, any of whom could have attacked me. And all the people that prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.

Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.

In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.

Okay, so let’s back up and take that all a lot slower. Trust is a complicated concept, and the word is overloaded with many meanings. There’s personal and intimate trust. When we say that we trust a friend, it is less about their specific actions and more about them as a person. It’s a general reliance that they will behave in a trustworthy manner. We trust their intentions, and know that those intentions will inform their actions. Let’s call this “interpersonal trust.”

There’s also the less intimate, less personal trust. We might not know someone personally, or know their motivations—but we can trust their behavior. We don’t know whether or not someone wants to steal, but maybe we can trust that they won’t. It’s really more about reliability and predictability. We’ll call this “social trust.” It’s the ability to trust strangers.

Interpersonal trust and social trust are both essential in society today. This is how it works. We have mechanisms that induce people to behave in a trustworthy manner, both interpersonally and socially. This, in turn, allows others to be trusting. Which enables trust in society. And that keeps society functioning. The system isn’t perfect—there are always going to be untrustworthy people—but most of us being trustworthy most of the time is good enough.

I wrote about this in 2012 in a book called Liars and Outliers. I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

What I didn’t appreciate is how different the first and last two are. Morals and reputation are person to person, based on human connection, mutual vulnerability, respect, integrity, generosity, and a lot of other things besides. These underpin interpersonal trust. Laws and security technologies are systems of trust that force us to act trustworthy. And they’re the basis of social trust.

Taxi driver used to be one of the country’s most dangerous professions. Uber changed that. I don’t know my Uber driver, but the rules and the technology let us both be confident that neither of us will cheat or attack the other. We are both under constant surveillance and are competing for star rankings.

Lots of people write about the difference between living in a high-trust and a low-trust society. How reliability and predictability make everything easier. And what is lost when society doesn’t have those characteristics. Also, how societies move from high-trust to low-trust and vice versa. This is all about social trust.

That literature is important, but for this talk the critical point is that social trust scales better. You used to need a personal relationship with a banker to get a loan. Now it’s all done algorithmically, and you have many more options to choose from.

Social trust scales better, but embeds all sorts of bias and prejudice. That’s because, in order to scale, social trust has to be structured, system- and rule-oriented, and that’s where the bias gets embedded. And the system has to be mostly blinded to context, which removes flexibility.

But that scale is vital. In today’s society we regularly trust—or not—governments, corporations, brands, organizations, groups. It’s not so much that I trusted the particular pilot that flew my airplane, but instead the airline that puts well-trained and well-rested pilots in cockpits on schedule. I don’t trust the cooks and waitstaff at a restaurant, but the system of health codes they work under. I can’t even describe the banking system I trusted when I used an ATM this morning. Again, this confidence is no more than reliability and predictability.

Think of that restaurant again. Imagine that it’s a fast-food restaurant, employing teenagers. The food is almost certainly safe—probably safer than in high-end restaurants—because of the corporate systems of reliability and predictability that are guiding their every behavior.

That’s the difference. You can ask a friend to deliver a package across town. Or you can pay the Post Office to do the same thing. The former is interpersonal trust, based on morals and reputation. You know your friend and how reliable they are. The second is a service, made possible by social trust. And to the extent that it is a reliable and predictable service, it’s primarily based on laws and technologies. Both can get your package delivered, but only the second can become the global package delivery system that is FedEx.

Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability—social trust.

But because we use the same word for both, we regularly confuse them. And when we do that, we are making a category error.

And we do it all the time. With governments. With organizations. With systems of all kinds. And especially with corporations.

We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with.

So corporations regularly take advantage of their customers, mistreat their workers, pollute the environment, and lobby for changes in law so they can do even more of these things.

Both language and the laws make this an easy category error to make. We use the same grammar for people and corporations. We imagine that we have personal relationships with brands. We give corporations some of the same rights as people.

Corporations like that we make this category error—see, I just made it myself—because they profit when we think of them as friends. They use mascots and spokesmodels. They have social media accounts with personalities. They refer to themselves like they are people.

But they are not our friends. Corporations are not capable of having that kind of relationship.

We are about to make the same category error with AI. We’re going to think of them as our friends when they’re not.

A lot has been written about AIs as existential risk. The worry is that they will have a goal, and they will work to achieve it even if it harms humans in the process. You may have read about the “paperclip maximizer”: an AI that has been programmed to make as many paper clips as possible, and ends up destroying the earth to achieve those ends. It’s a weird fear. Science fiction author Ted Chiang writes about it. Instead of solving all of humanity’s problems, or wandering off proving mathematical theorems that no one understands, the AI single-mindedly pursues the goal of maximizing production. Chiang’s point is that this is every corporation’s business plan. And that our fears of AI are basically fears of capitalism. Science fiction writer Charlie Stross takes this one step further, and calls corporations “slow AI.” They are profit maximizing machines. And the most successful ones do whatever they can to achieve that singular goal.

And near-term AIs will be controlled by corporations. Which will use them towards that profit-maximizing goal. They won’t be our friends. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us.

This is nothing new. Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.

Your Google search results lead with URLs that someone paid to show to you. Your Facebook and Instagram feeds are filled with sponsored posts. Amazon searches return pages of products whose sellers paid for placement.

This is how the Internet works. Companies spy on us as we use their products and services. Data brokers buy that surveillance data from the smaller companies, and assemble detailed dossiers on us. Then they sell that information back to those and other companies, who combine it with data they collect in order to manipulate our behavior to serve their interests. At the expense of our own.

We use all of these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.

It’s going to be no different with AI. And the result will be much worse, for two reasons.

The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.

This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s truly the best deal, given your particular set of needs? Or because the AI company got a kickback from those providers? When you asked it to explain a political issue, did it bias that explanation towards the company’s position? Or towards the position of whichever political party gave it the most money? The conversational interface will help hide their agenda.

The second reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.

The natural language interface is critical here. We are primed to think of others who speak our language as people. And we sometimes have trouble thinking of others who speak a different language that way. We make that category error with obvious non-people, like cartoon characters. We will naturally have a “theory of mind” about any AI we talk with.

More specifically, we tend to assume that something’s implementation is the same as its interface. That is, we assume that things are the same on the inside as they are on the surface. Humans are like that: we’re people through and through. A government is systemic and bureaucratic on the inside. You’re not going to mistake it for a person when you interact with it. But this is the category error we make with corporations. We sometimes mistake the organization for its spokesperson. AI has a fully relational interface—it talks like a person—but it has an equally fully systemic implementation. Like a corporation, but much more so. The implementation and interface are more divergent than anything we have encountered to date… by a lot.

And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.

It will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate.

We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.

It’s no accident that these corporate AIs have a human-like interface. There’s nothing inevitable about that. It’s a design choice. It could be designed to be less personal, less human-like, more obviously a service—like a search engine. The companies behind those AIs want you to make the friend/service category error. They will exploit your mistaking them for a friend. And you might not have any choice but to use them.

There is something we haven’t discussed when it comes to trust: power. Sometimes we have no choice but to trust someone or something because they are powerful. We are forced to trust the local police, because they’re the only law enforcement authority in town. We are forced to trust some corporations, because there aren’t viable alternatives. To be more precise, we have no choice but to entrust ourselves to them. We will be in this same position with AI. We will have no choice but to entrust ourselves to their decision-making.

The friend/service confusion will help mask this power differential. We will forget how powerful the corporation behind the AI is, because we will be fixated on the person we think the AI is.

So far, we have been talking about one particular failure that results from overly trusting AI. We can call it something like “hidden exploitation.” There are others. There’s outright fraud, where the AI is actually trying to steal stuff from you. There’s the more prosaic mistaken expertise, where you think the AI is more knowledgeable than it is because it acts confidently. There’s incompetency, where you believe that the AI can do something it can’t. There’s inconsistency, where you mistakenly expect the AI to be able to repeat its behaviors. And there’s illegality, where you mistakenly trust the AI to obey the law. There are probably more ways trusting an AI can fail.

All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.

The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.

It’s government that provides the underlying mechanisms for the social trust essential to society. Think about contract law. Or laws about property, or laws protecting your personal safety. Or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.

The more you can trust that your societal interactions are reliable and predictable, the more you can ignore their details. Places where governments don’t provide these things are not good places to live.

Government can do this with AI. We need AI transparency laws. When it is used. How it is trained. What biases and tendencies it has. We need laws regulating AI—and robotic—safety. When it is permitted to affect the world. We need laws that enforce the trustworthiness of AI. Which means the ability to recognize when those laws are being broken. And penalties sufficiently large to incent trustworthy behavior.

Many countries are contemplating AI safety and security laws—the EU is the furthest along—but I think they are making a critical mistake. They try to regulate the AIs and not the humans behind them.

AIs are not people; they don’t have agency. They are built by, trained by, and controlled by people. Mostly for-profit corporations. Any AI regulations should place restrictions on those people and corporations. Otherwise the regulations are making the same category error I’ve been talking about. At the end of the day, there is always a human responsible for whatever the AI’s behavior is. And it’s the human who needs to be responsible for what they do—and what their companies do. Regardless of whether it was due to humans, or AI, or a combination of both. Maybe that won’t be true forever, but it will be true in the near future. If we want trustworthy AI, we need to require trustworthy AI controllers.

We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.

We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants.

And we need one final thing: public AI models. These are systems built by academia, or non-profit groups, or government itself, that can be owned and run by individuals.

The term “public model” has been thrown around a lot in the AI world, so it’s worth detailing what this means. It’s not a corporate AI model that the public is free to use. It’s not a corporate AI model that the government has licensed. It’s not even an open-source model that the public is free to examine and modify.

A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations. This would be a counter-balance to corporate-owned AI.

We can never make AI into our friends. But we can make them into trustworthy services—agents and not double agents. But only if government mandates it. We can put limits on surveillance capitalism. But only if government mandates it.

Because the point of government is to create social trust. I started this talk by explaining the importance of trust in society, and how interpersonal trust doesn’t scale to larger groups. That other, impersonal kind of trust—social trust, reliability and predictability—is what governments create.

To the extent a government improves the overall trust in society, it succeeds. And to the extent a government doesn’t, it fails.

But they have to. We need government to constrain the behavior of corporations and the AIs they build, deploy, and control. Government needs to enforce both predictability and reliability.

That’s how we can create the social trust that society needs to thrive.

For more information on this publication: Belfer Communications Office

For Academic Citation: Schneier, Bruce. “AI and Trust.” Belfer Center for Science and International Affairs, Harvard Kennedy School, November 27, 2023.

The Pope Heads To G7 For First Time To Talk About AI—After ‘Balenciaga Pope’ Meme – Forbes

Posted by timmreardon on 06/14/2024
Posted in: Uncategorized.

Arianna Johnson, Forbes Staff

Johnson is a reporter on the Forbes news desk.

Pope Francis is set to attend the G7 summit on Friday and is expected to urge world leaders to adopt AI regulations, a subject the Pope has spoken about several times in the past, including after he was the subject of viral AI-generated images that many believed were real.

KEY FACTS

The Vatican announced Pope Francis would attend the Group of 7 conference in Italy on Friday to discuss ethical concerns surrounding artificial intelligence during a session dedicated to AI, becoming the first pope to participate in the summit of leaders.

The Pope fell victim to AI in the past: AI-generated deepfake images of the Pope in a white puffer jacket and bedazzled crucifix—dubbed the “Balenciaga Pope”—went viral last year and racked up millions of views online, causing some people to believe the pictures were real.

He spoke about the fake images during a speech in Vatican City in January, warning about the rise of “images that appear perfectly plausible but false (I too have been an object of this).”

Pope Francis has spoken out about the danger of AI before, and he’s expected to urge world leaders at the G7 conference to work together to create AI regulations.

During the G7 meetings, Italy is expected to advocate for the development of homegrown AI systems in African countries, further work is expected to be done on the Hiroshima Process—a G7 effort to safeguard the use of generative AI—and leaders from places like the U.S. and the U.K. are expected to promote AI regulations introduced in their countries, according to Politico.

Giorgia Meloni, Italy’s prime minister, said in a statement in April the Pope was invited to the G7 conference to help “make a decisive contribution to defining a regulatory, ethical and cultural framework for artificial intelligence.”

The Vatican also announced Pope Francis will have bilateral conversations with leaders of other countries, including President Joe Biden, President Samoei Ruto of Kenya and India’s Prime Minister Narendra Modi.

KEY BACKGROUND

The Pope has been speaking out about the need for artificial intelligence regulation for years. The Vatican has been promoting the “Rome Call for AI Ethics” since 2020, which lays out six principles for AI ethics: transparency, inclusion, impartiality, responsibility, reliability, and security and privacy. As part of the August 2023 announcement for this year’s World Day of Peace of the Catholic Church—which was held on Jan. 1—the Pope warned of the dangers of AI, saying it should be used in the “service of humanity.” He called for “an open dialogue on the meaning of these new technologies, endowed with disruptive possibilities and ambivalent effects.” In December 2023, the Pope called for an international treaty to regulate AI as part of his World Day of Peace message. He urged world leaders to “adopt a binding international treaty” to regulate AI development, adding it shouldn’t just focus on preventing harm but should also encourage “best practices.” The Pope noted that although advancements in technology and science lead to “the betterment of humanity,” they can also give humans “unprecedented control over reality.”

TANGENT

Italy—one of the G7 summit’s rotating hosts—became the first country to temporarily ban AI chatbot ChatGPT in March 2023 after Garante, an Italian data protection regulator, claimed the chatbot violated the European Union’s privacy laws. Garante claimed ChatGPT exposed payment information and messages, and allowed children to access inappropriate information. Other countries that have passed or introduced laws regulating AI include Australia, China, the European Union, the U.S., Japan and the U.K.

Article link: https://www.forbes.com/sites/ariannajohnson/2024/06/13/the-pope-heads-to-g7-for-first-time-to-talk-about-ai-after-balenciaga-pope-meme/

FURTHER READING

Pope Warns Artificial Intelligence Could ‘Fuel Conflicts And Antagonism’ (Forbes)

Pope Francis Calls For Global Treaty To Regulate AI—After Viral Deepfake Of Him Wearing A Puffer Jacket (Forbes)

Health Data Standardization Improves Patient Care

Posted by timmreardon on 06/14/2024
Posted in: Uncategorized.

Jesus Caban, chief data scientist for Enterprise Intelligence and Data Solutions, speaks at the 2024 Healthcare Information and Management Systems Society HIMSS Global Conference & Exhibition, in Orlando, Florida, on March 13, 2024. Caban’s office is leading the effort to standardize data using the Common Data Model. A common vocabulary for data helps beneficiaries get ready, reliable care across the continuum of care. (Photo by Jason Cunningham, Defense Health Agency, Health Information Technology and Training)

To enable providers to make the most informed decisions about a patient’s care and reach the “ready, reliable care” the Military Health System aims for, hundreds of data sources from disparate medical, dental, and readiness systems must be integrated. However, providers need consistent and standardized data to accurately diagnose and treat a patient’s medical condition. That’s not necessarily the case now, said Dr. Jesus Caban, the chief data scientist for Enterprise Intelligence and Data Solutions.

Soon, “if you have been diagnosed with sleep apnea, no matter where you receive care within the MHS, what data system is used to pull the information, or what analytical tools are employed to generate reports, the definition of sleep apnea will always be the same,” Caban said. Currently, different data systems may have different definitions for sleep apnea, and that could potentially affect a patient’s ability to get benefits for that condition as they pass through different organizations, such as moving from active duty to retirement as a veteran, according to Caban.

Common Data Model

The solution to that issue is a Common Data Model, which helps standardize medical vocabulary, Caban said. Standardization is one key component of the Defense Health Agency’s Strategic Plan for fiscal years 2023-2028.

Caban presented how the MHS is adopting a Common Data Model at the 2024 Healthcare Information and Management Systems Society Global Health Conference & Exhibition, in Orlando, Florida, on March 13.

“As part of the MHS stabilization effort, we see standardization of clinical practice guidelines, standardization in the electronic health record, standardization in the clinical workflows,” Caban said, “Now, we need to focus on standardization of data so everyone can count the same way.”

The first step in the process was to understand the vocabulary being used by industry and in academic settings, Caban said.

Among common data models used in health care, the Observational Medical Outcomes Partnership (OMOP) stands out as one of the most widely adopted across industry, academia, and government agencies.

In the early 2000s, the Food and Drug Administration spearheaded a public-private collaboration with pharmaceutical companies and health care providers to establish a common data model to standardize observational studies such as clinical trials. This collaboration led to the establishment of the OMOP common data model.

Stemming from that effort came the Observational Health Data Sciences and Informatics (OHDSI) community. OHDSI is a “multi-stakeholder, interdisciplinary effort that standardizes vocabularies to create uniform analytics,” according to the OHDSI website.

“This open community has been providing guidance, recommendations, directions, mappings, and tools for health care organizations like the MHS to embrace a common data model,” Caban said.
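The core idea of a common data model can be sketched in a few lines: local codes from different systems resolve to one shared standard concept, so a diagnosis like sleep apnea means the same thing regardless of which system recorded it. This is an illustrative sketch only; the concept ID and code lists below are hypothetical, not actual OMOP vocabulary entries.

```python
# Minimal sketch of common-data-model vocabulary mapping: codes from
# different source systems map to a single standard concept. All IDs
# and codes here are illustrative, not real OMOP vocabulary rows.

STANDARD_CONCEPTS = {
    "sleep_apnea": {
        "concept_id": 313459,        # hypothetical standard concept ID
        "name": "Sleep apnea",
        "source_codes": {
            ("ICD10CM", "G47.33"),   # code used by one system
            ("ICD9CM", "327.23"),    # code used by an older system
            ("LOCAL", "SLP-APN-01"), # a site-specific local code
        },
    },
}

def to_standard_concept(vocabulary: str, code: str):
    """Resolve a (vocabulary, code) pair to its standard concept, if mapped."""
    for concept in STANDARD_CONCEPTS.values():
        if (vocabulary, code) in concept["source_codes"]:
            return concept["concept_id"], concept["name"]
    return None  # unmapped codes are flagged for curation, not guessed

if __name__ == "__main__":
    print(to_standard_concept("ICD10CM", "G47.33"))
    print(to_standard_concept("LOCAL", "SLP-APN-01"))
```

In a real OMOP deployment this lookup is a join against curated vocabulary tables rather than an in-memory dictionary, but the principle is the same: every analytic query runs against standard concepts, never raw source codes.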

OHDSI members include the Department of Defense, the Department of Veterans Affairs, FDA, and the National Institutes of Health. It has more than 2,000 collaborators across 74 countries and health records for about 810 million unique patients.

Research is another significant area where standardization will help. The DOD has numerous research efforts, many of which involve international collaboration. The MHS CDM will help streamline research by enabling faster integration of data across international partners and mapping of health data from diverse languages.

MHS GENESIS

The Program Executive Office Defense Healthcare Management Systems works to create interoperability and modernization of the DOD federal electronic health record, MHS GENESIS.

On March 9, 2024, the DOD completed the deployment phase of MHS GENESIS across the global network of military hospitals and clinics. MHS GENESIS is the definitive and portable inpatient and outpatient medical record for service members, veterans, and their families across the continuum of care.

While the deployment phase of MHS GENESIS is complete, data gatherers still “face many challenges because there are inconsistencies” in medical and dental care reporting that necessitate ongoing optimization and enhancements, said Caban.

The sources EIDS PMO integrates include data from direct care (military hospitals and clinics, also known as “military treatment facilities” or “MTFs”), inpatient care, outpatient care, TRICARE, operational medicine, and ancillary applications, to name a few, he said. Added to that firehose of information are patient data from legacy medical records systems, and data from personnel and readiness systems.

For service members transitioning into or out of the military today, providers may want to look back 10 years in their medical and dental care records, according to Caban. But 10 years ago, the military was using the legacy records system called AHLTA.

These changes over time to the medical record pose challenges to a provider looking for whether a patient may have had a medical or dental condition in the past, Caban explained.

“Once you look at operational medicine, the systems run by our colleagues at JOMIS [the DHMS Joint Operational Medicine Information Systems], you have the added complexity of service members who may be treated by allied forces … for a short period of time and then transferred to U.S. military care,” Caban said.

Caban said EIDS plans to have 100% of MHS GENESIS and TRICARE data mapped by the end of the summer. That includes TRICARE inpatient and TRICARE outpatient data.

Benefits to Patients

Accelerating research and reducing the time from a clinical question to the answer is “one of the most important” benefits to patients, Caban said. Other benefits are interoperability and data scalability.

“As we work with other agencies, such as the VA, FDA, [and the Centers for Disease Control and Prevention], being able to have a common data model that we share … [produces] interoperability with those other federal agencies, and a key benefit is scalability as we bring more and more data sources forward. We can continue to scale up by adding more and more different datasets and databases and making sure they follow the common vocabulary,” Caban said.

What’s Next

The next phase in EIDS’ common data model effort is to work on its adoption and raise awareness throughout the MHS.

“Then, we will start doing a lot of user engagement sessions, training sessions to showcase the benefits of this, while at the same time adding other data sources because basically we started with two or three key data sources,” said Caban.

“Next year we’ll be working with our JOMIS colleagues to make sure the operational medicine data are included,” Caban said, because “there’s some uniqueness in the DOD data—for example, deployments.” The OHDSI community is not yet very familiar with operational medicine terminology.

The VA is going through a similar initiative using the Common Data Model. “We have been working very closely with the VA to make sure we know how they’re mapping the data; they know how we’re mapping the data; and we are mapping the data the same way or very similar way,” said Caban.

Article link: https://health.mil/News/Dvids-Articles/2024/06/13/news473728

Cleveland Clinic and Purdue Seek to Revolutionize Intensive Care Through AI

Posted by timmreardon on 06/13/2024
Posted in: Uncategorized.

Investigators are developing a deep learning model to predict health outcomes in ICUs.

Cleveland Clinic and Purdue University investigators are collaborating on a deep learning model that aims to significantly improve patient care and outcomes in intensive care units (ICUs).

Xiaofeng Wang, PhD, Staff in Quantitative Health Sciences, Abhijit Duggal, MD, Vice Chair, Department of Critical Medicine, and Faming Liang, PhD, Purdue University, are combining their clinical and computational expertise to create a data model that accurately reflects the challenges of treating patients in the ICU.

The ability to predict potential outcomes for individual patients will help ICU clinicians make more informed decisions based on real-time, patient-specific data. Current health prediction models rely on static time points and variables to guide monitoring and treatment, but those conditions can change in a matter of seconds. A patient could respond poorly to a specific medicine or start to develop a condition like sepsis.

A timely, accurate decision is critical to securing a positive outcome, says Dr. Wang.

“Our system seeks to adapt to the ever-changing nature of critical illness,” he says. “Our goal is to transform how we understand and manage critical illness, paving the way for more effective and personalized interventions tailored to each patient’s unique needs and circumstances.”

Those working in critical care need to navigate the dynamic interplay between a patient’s medical condition and their response to treatment. Deep learning models are a promising method for quickly synthesizing large amounts of data using sophisticated algorithms. To address these complexities, the team will pioneer a Stochastic Neural Network (StoNet), a type of deep learning model that is designed to process data in a method similar to the human brain.

Researchers can train the StoNet to adapt to the unique demands of ICU Electronic Health Record (EHR) datasets. The model will be powered by an innovative adaptive stochastic gradient Markov chain Monte Carlo (MCMC) algorithm. A National Institutes of Health grant will support the team in developing their StoNet, leveraging de-identified, real-life EHR data.
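To give a flavor of the algorithm family named above, here is a minimal sketch of stochastic gradient Langevin dynamics (SGLD), the textbook member of stochastic-gradient MCMC. The team's adaptive variant is not public here; this toy example only shows the core idea: gradient steps on minibatches plus injected Gaussian noise, which turns optimization into posterior sampling. The toy model (inferring a Gaussian mean) and all constants are illustrative assumptions.

```python
import math
import random

# SGLD sketch: theta <- theta + (eps/2) * grad_log_posterior + N(0, eps).
# Toy problem: infer the mean of Gaussian data with known noise (sigma = 0.5).
random.seed(0)
data = [2.0 + random.gauss(0, 0.5) for _ in range(200)]  # true mean = 2.0

def grad_log_posterior(theta, batch, n_total, prior_var=10.0):
    """Minibatch gradient of the log posterior, rescaled to the full dataset."""
    lik = (n_total / len(batch)) * sum((x - theta) / 0.25 for x in batch)
    prior = -theta / prior_var  # zero-mean Gaussian prior on theta
    return lik + prior

theta, eps = 0.0, 1e-4
samples = []
for step in range(2000):
    batch = random.sample(data, 20)          # stochastic minibatch
    noise = random.gauss(0, math.sqrt(eps))  # injected Langevin noise
    theta += 0.5 * eps * grad_log_posterior(theta, batch, len(data)) + noise
    if step >= 1000:                         # discard burn-in
        samples.append(theta)

posterior_mean = sum(samples) / len(samples)
print(round(posterior_mean, 2))  # close to the true mean of 2.0
```

The appeal for ICU-scale EHR data is that each update touches only a small minibatch, yet the chain still yields samples from the posterior, giving uncertainty estimates rather than a single point prediction.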

“The complexity of caring for critically ill patients is driven by rapid changes in clinical and laboratory data from hour to hour,” says Dr. Duggal. “The ability to leverage real-time data will be a powerful tool to not only provide timely and appropriate interventions, but it will also have the potential to forecast disease trajectories and help clinicians decide the next steps for the treatment.”

Article link: https://consultqd.clevelandclinic.org/cleveland-clinic-and-purdue-seek-to-revolutionize-intensive-care-through-ai?

The EU Is Taking on Big Tech. It May Be Outmatched – Wired

Posted by timmreardon on 06/12/2024
Posted in: Uncategorized.

From the Digital Services Act to the AI Act, in five years Europe has created a lot of rules for the digital world. Implementing them, however, isn’t always easy.

The latest in a series of duels announced by the European Commission is with Bing, Microsoft’s search engine. Brussels suspects that the giant based in Redmond, Washington, has failed to properly moderate content produced by the generative AI systems on Bing, Copilot, and Image Creator, and that as a result, it may have violated the Digital Services Act (DSA), one of Europe’s latest digital regulations.

On May 17, the European Commission requested company documents to understand how Microsoft handled the spread of hallucinations (inaccurate or nonsensical answers produced by AI), deepfakes, and attempts to improperly influence the upcoming European Parliament elections. At the beginning of June, voters in the 27 states of the European Union will choose their representatives to the European Parliament, in a campaign over which looms the ominous shadow of technology with its potential to manipulate the outcome. The commission has given Microsoft until May 27 to respond, only days before voters go to the polls. If there is a need to correct course, it will likely be too late.

Europe’s Strategy

Over the past few months, the European Commission has started to bang its fists on the table when dealing with the big digital giants, almost all of them based in the US or China. This isn’t the first time. In 2022, the European Union hit Google with a fine of €4.1 billion because of its market dominance thanks to its Android system, marking the end of an investigation that started in 2015. In 2023, it sanctioned Meta with a fine of €1.2 billion for violating the GDPR, the EU’s data protection regulations. And in March it presented Apple with a sanction of €1.8 billion.

Recently, however, there appears to have been a change in strategy. Sanctions continue to be available as a last resort when Big Tech companies don’t bend to the wishes of Brussels, but now the European Commission is aiming to take a closer look at Big Tech, find out how it operates, and modify it as needed, before imposing fines. Take, for example, Europe’s Digital Services Act, which attempts to impose transparency in areas like algorithms and advertising, fight online harassment and disinformation, protect minors, stop user profiling, and eliminate dark patterns (design features intended to manipulate our choices on the web).

In 2023, Brussels identified 22 multinationals that, due to their size, would be the focus of its initial efforts: Google with its four major services (search, shopping, maps, and play), YouTube, Meta with Instagram and Facebook, Bing, X (formerly Twitter), Snapchat, Pinterest, LinkedIn, Amazon, Booking, Wikipedia, Apple’s App Store, TikTok, Alibaba, Zalando, and the porn sites Pornhub, XVideos, and Stripchat. Since then, it has been putting the pressure on these companies to cooperate with its regulatory regime.

The day before the Bing investigation was announced, the commission also opened one into Meta to determine what the multinational is doing to protect minors on Facebook and Instagram and counter the “rabbit hole” effect—that is, the seamless flood of content that demands users’ attention, and which can be especially appealing to younger people. That same concern led it to block the launch of TikTok Lite in Europe, deeming its system for rewarding social engagement dangerous and a means of encouraging addictive behavior. It has asked X to increase its content moderation, LinkedIn to explain how its ad system works, and AliExpress to defend its refund and complaint processes.

A Mountain of Laws …

On one hand, the message appears to be that no one will escape the reach of Brussels. On the other, the European Commission, led by President Ursula von der Leyen, has to demonstrate that the many digital laws and regulations that are in place actually produce positive results. In addition to the DSA, there is the Digital Markets Act (DMA), intended to counterbalance the dominance of Big Tech in online markets; the AI Act, Europe’s flagship legislation on artificial intelligence; and the Data Governance Act (DGA) and the Data Act, which address data protection and the use of data in the public and private sectors. Also to be added to the list are the updated cybersecurity package, NIS2 (Network and Information Security); the Digital Operational Resilience Act, focused on finance and insurance; and the digital identity package within eIDAS 2. Still in the draft stage are regulations on health data spaces and much-debated chat measures which would authorize law enforcement agencies and platforms to scan citizens’ private messages, looking for child pornography.

Brussels has deployed its heavy artillery against the digital flagships of the United States and China, and a few successful blows have landed, such as ByteDance’s suspension of the gamification feature on TikTok Lite following its release in France and Spain. But the future is uncertain and complicated. While investigations attract media interest, the EU’s digital bureaucracy is a large and complex machine to run.

On February 17, the DSA became law for all online service operators (cloud and hosting providers, search engines, e-commerce, and online services) but the European Commission doesn’t and can’t control everything. That is why it asked states to appoint a local authority to serve as a coordinator of digital services. Five months later, Brussels had to send a formal notice to six states (Cyprus, Czechia, Estonia, Poland, Portugal, and Slovakia) to urge them to designate and fully empower their digital services coordinators. Those countries now have two months to comply before Brussels will intervene. But there are others who are also not in the clear. For example, Italy’s digital services coordinator, the Communications Regulatory Authority (abbreviated AGCOM, for Autorità per le Garanzie nelle Comunicazioni, in Italian), needs to recruit 23 new employees to replenish its staff. The department told WIRED Italy that it expects to have filled all of its appointments by mid-June.

The DSA also introduced “trusted flaggers.” These are individuals or entities, such as universities, associations, and fact-checkers, committed to combating online hatred, internet harassment, illegal content, and the spread of scams and fake news. Their reports are, one hopes, trustworthy. The selection of trusted flaggers is up to local authorities but, to date, only Finland has formalized the appointment of one, specifically Tekijänoikeuden tiedotus- ja valvontakeskus ry (in English, the Copyright Information and Anti-Piracy Center). Its executive director, Jaana Pihkala, explained to WIRED Italy that their task is “to produce reports on copyright infringements,” a subject on which the association has 40 years of experience. Since its appointment as a trusted flagger, the center’s two lawyers, who perform all of its functions, have sent 816 alerts to protect films, TV series, and books on behalf of Finnish copyright holders.

… and a Mountain of Data

To assure that the new rules are respected by the 27 states, the commission set up the DSA surveillance system as quickly as possible, but the bureaucrats in Brussels still have a formidable amount of research to do. On the one hand, there is the anonymous reporting platform with which the commission hopes to build dossiers on the operations of different platforms directly from internal sources. The biggest scandals that have shaken Meta have been thanks to former employees, like Christopher Wylie, the analyst who revealed how Cambridge Analytica attempted to influence the US elections, and Frances Haugen, who shared documents about the impacts of Instagram and Facebook on children’s health. The DSA, however, intends to empower and fund the commission so that it can have its own people capable of sifting through documents and data, analyzing the content, and deciding whether to act.

The commission boasts that the DSA will force platforms to be transparent. And indeed it can point to some successes already, for example, by revealing the absurdly inadequate numbers of moderators employed by platforms. According to the latest data released last November, they don’t even cover all the languages spoken in the European Union. X reported that it had only two people to check content in Italian, the language of 9.1 million users. There were no moderators for Greek, Finnish, or Romanian even though each language has more than 2 million subscribers. AliExpress moderates everything in English while, for other languages, it makes do with automatic translators. LinkedIn moderates content in 12 languages of the European bloc—that is, just half of the official languages.

At the same time, the commission has forced large platforms to standardize their reports of moderation interventions to feed a large database, which, at the time of writing this article, contains more than 18.2 billion records. Of these cases, 69 percent were handled automatically. But, perhaps surprisingly, 92 percent concerned Google Shopping. This is because the platform uses various parameters to determine whether a product can be featured: the risk that it is counterfeited, possible violations of site standards, prohibited goods, dangerous materials, and others. It can thus be the case that several alerts are triggered for the same product and the DSA database counts each one separately, multiplying the shopping numbers exponentially. So now the EU has a mass of data that further complicates its goal of being fully transparent.
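The multiplying effect described above is easy to see in miniature: if every triggered check produces its own record, one product can appear in the database several times. This sketch uses invented SKU identifiers and check names; only the counting logic is the point.

```python
from collections import Counter

# Hypothetical moderation alerts: each (product, check) pair becomes its
# own database record, so one product tripping several automated checks
# inflates the record total well past the number of distinct products.
alerts = [
    ("sku-001", "possible counterfeit"),
    ("sku-001", "site standards"),
    ("sku-001", "prohibited goods"),
    ("sku-002", "dangerous materials"),
    ("sku-003", "possible counterfeit"),
    ("sku-003", "site standards"),
]

total_records = len(alerts)                        # what the database reports
distinct_products = len({pid for pid, _ in alerts})
per_product = Counter(pid for pid, _ in alerts)

print(total_records)      # 6 records...
print(distinct_products)  # ...covering only 3 products
print(per_product["sku-001"])  # one product alone accounts for 3 records
```

Deduplicating by product before aggregating would give a truer picture of moderation volume, which is exactly the kind of normalization the raw DSA database does not perform.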

Zalando’s Numbers

And then there’s the Big Tech companies’ legal battle against the fee they have to pay to the commission to help underwrite its supervisory bodies. Meta, TikTok, and Zalando have challenged the fee (though paid it). Zalando is also the only European company on the commission’s list of large platforms, a designation Zalando has always contested because it does not believe it meets the criteria used by Brussels. One example: The platforms on the list must have at least 45 million monthly users in Europe. The commission argues that Zalando has 83 million users, though that number, for example, includes visits from Portugal, where the platform is not marketed, and Zalando argues those users should be deducted from its total count. According to its calculations, the activities subject to the DSA reach only 31 million users, under the threshold. When Zalando was assessed its fee, it discovered that the commission had based it on a figure of 47.5 million users, far below the initial 83 million. The company has now taken the commission to court in an attempt to assure a transparent process.

And this is just one piece of legislation, the DSA. The commission has also deployed the Digital Markets Act (DMA), a package of regulations to counterbalance Big Tech’s market dominance, requiring that certain services be interoperable with those of other companies, that apps that come loaded on a device by default can be uninstalled, and that data collected on large platforms be shared with small- and medium-size companies. Again, the push to impose these mandates starts with the giants: Alphabet, Amazon, Apple, Meta, ByteDance, and Microsoft. In May, Booking was added to the list.

Big Tech Responds

Platforms have started to respond to EU requests, with lukewarm results. WhatsApp, for instance, has been redesigned to allow chatting with other apps without compromising its end-to-end encryption that protects the privacy and security of users, but it is still unclear who will agree to connect to it. WIRED US reached out to 10 messaging companies, including Google, Telegram, Viber, and Signal, to ask whether they intend to look at interoperability and whether they had worked with WhatsApp on its plans. The majority didn’t respond to the request for comment. Those that did, Snap and Discord, said they had nothing to add. Apple had to accept sideloading—i.e., the possibility of installing and updating iPhone or iPad applications from stores outside the official one. However, the first alternative that emerged, AltStore, offers very few apps at this time. And it has suffered some negative publicity after refusing to accept the latest version of its archenemy Spotify’s app, despite the fact that the audio platform had removed the link to its website for subscriptions.

The DMA is a regulation that has the potential to break the dominant positions of Big Tech companies, but that outcome is not a given. Take the issue of surveillance: The commission has funds to pay the salaries of 80 employees, compared to the 120 requested by Internal Market Commissioner Thierry Breton and the 220 requested by the European Parliament, as summarized by Bruegel in 2022. And on the website of the Center for European Policy Analysis (CEPA), Adam Kovacevich, founder and CEO of Chamber of Progress, a politically left-wing tech industry coalition (all of the digital giants, which also fund CEPA, are members), stated that the DMA, “instead of helping consumers, aims to help competitors. The DMA is making large tech firms’ services less useful, less secure, and less family-friendly. Europeans’ experience of large tech firms’ services is about to get worse compared to the experience of Americans and other non-Europeans.”

Kovacevich represents an association financed by some of those same companies that the DMA is focused on, and there is a shared fear that the DMA will complicate the market and, in the end, benefit only a few companies—not necessarily those most at risk because of the dominance of Silicon Valley. It is not only lawsuits and fines, but also the perceptions of citizens and businesses that will help to determine whether EU regulations are successful. The results may come more slowly than desired by Brussels as new legislation is rarely positively received at first.

Learning From GDPR and Gaia-X

Another regulatory act, the General Data Protection Regulation (GDPR), has become the global industry standard, forcing online operators to change the way they handle our data. But if you ask the typical person on the street, they’ll likely tell you it’s just a simple cookie wall that you have to approve before continuing on to a webpage. Or it’s viewed as a law that has required the retention of dedicated external consultants on the part of companies. It is rarely described as the ultimate online privacy law, which is exactly what it is. That said, while the act has reshaped the privacy landscape, there have been challenges, as the digital rights association Noyb has explained. The privacy commissioners of Ireland and Luxembourg, where many web giants are based for tax purposes, have had bottlenecks in investigating violations. According to the latest figures from Ireland’s Data Protection Commission (DPC), 19,581 complaints have been submitted in the past five years, but the body has made only 37 formal decisions and only eight of those began with complaints. Noyb recently conducted a survey of 1,000 data protection officers; 74 percent were convinced that if privacy officers investigated the typical European company, they would find at least one GDPR violation.

The GDPR was also the impetus for another unsuccessful operation: separating the European cloud from the US cloud in order to shelter the data of EU citizens from Washington’s Cloud Act. In 2019, France and Germany announced with great fanfare a federation, Gaia-X, that would defend the continent and provide a response to the cloud market, which has been split between the United States and China. Five years later, the project has become bogged down in the process of establishing standards, after the entry of the giants it was supposed to counter, such as Microsoft, Amazon, Google, Huawei, and Alibaba, as well as the controversial American company Palantir (which analyses data for defense purposes). This led some of the founders, such as the French cloud operator Scaleway, to flee, and that then turned the spotlight on the European Parliament, which led the commission to launch an alternative, the European Alliance for Industrial Data, Edge and Cloud, which counts among its 49 members 26 participants from Gaia-X (everyone except for the non-EU giants) and enjoys EU financial support.

In the meantime, the Big Tech giants have found a solution that satisfies European wishes, investing en masse to establish data centers on EU soil. According to a study by consultancy firm Roland Berger, 34 data center transactions were finalized in 2023, growing at an average annual rate of 29.7 percent since 2019. According to Mordor Intelligence, another market analysis company, the sector in Europe will grow from €35.4 billion in 2024 to an estimated €57.7 billion in 2029. In recent weeks, Amazon Web Services announced €7.8 billion in investments in Germany. WIRED Italy has reported on Amazon’s interest in joining the list of accredited operators to host critical public administration data in Italy, which already includes Microsoft, Google, and Oracle. Notwithstanding its proclamations about sovereignty, Brussels has had to capitulate: The cloud is in the hands of the giants from the United States who have found themselves way ahead of their Chinese competitors after diplomatic relations between Beijing and Brussels cooled.

The AI Challenge

The newest front in this digital battle is artificial intelligence. Here, too, the European Union has been the first to come up with some rules under its AI Act, the first legislation to address the different applications of this technology and establish permitted and prohibited uses based on risk assessments. The commission does not want to repeat the mistakes of the past. Mindful of the launch of the GDPR, which in 2018 caused companies to scramble to assure they were compliant, it wants to lead organizations through a period of voluntary adjustment. Already 400 companies have declared their interest in joining the effort, including IBM.

In the meantime, Brussels must build a number of structures to make the AI Act work. First is the AI Council. It will have one representative from each country and will be divided into two subgroups, one dedicated to market development and the other to public sector uses of AI. In addition, it will be joined by a committee of technical advisers and an independent committee of scientists and experts, along the lines of the UN Climate Committee. Secondly, the AI Office, which sits within Directorate-General Connect (the department in charge of digital technology), will take care of administrative aspects of the AI Act. The office will assure that the act is applied uniformly, investigate alleged violations, establish codes of conduct, and classify artificial intelligence models that pose a systemic risk. Once the rules are established, research on new technologies can proceed. After it is fully operational, the office will employ 100 people, some of them redeployed from Directorate-General Connect while others will be new hires. At the moment, the office is looking to hire six administrative staff and an unknown number of tech experts.

On May 29, the first round of bids in support of the regulation expired. These included the AI Innovation Accelerator, a center that provides training, technical standards, and software and tools to promote research, support startups and small- and medium-sized enterprises, and assist public authorities that have to supervise AI. A total of €6 million is on the table. Another €2 million will finance management and €1.5 million will go to the EU’s AI testing facilities, which will, on behalf of countries’ antitrust authorities, analyze artificial intelligence models and products on the market to assure that they comply with EU rules.

Follow the Money

Finally, a total of €54 million is designated for a number of business initiatives. The EU knows it is lagging behind. According to an April report by the European Parliament’s research service, which provides data and intelligence to support legislative activities, the global AI market, which in 2023 was estimated at €130 billion, will reach close to €1.9 trillion in 2030. The lion’s share is in the United States, with €44 billion of private investment in 2022, followed by China with €12 billion. Overall, the European Union and the United Kingdom attracted €10.2 billion in the same year. According to Eurochamber researchers, between 2018 and the third quarter of 2023, US AI companies received €120 billion in investment, compared to €32.5 billion for European ones.

Europe wants to counter the advance of the new AI giants with an open source model, and it has also made its network of supercomputers available to startups and universities to train algorithms. First, however, it had to adapt to the needs of the sector, investing almost €400 million in graphics cards, which, given the current boom in demand, will not arrive anytime soon.

Among other projects to support the European AI market, the commission wants to use €24 million to launch a Language Technology Alliance that would bring together companies from different states to develop a generative AI to compete with ChatGPT and similar tools. It’s an initiative that closely resembles Gaia-X. Another €25 million is earmarked for the creation of a large open source language model, available to European companies to develop new services and research projects. The commission intends to fund several models and ultimately choose the one best suited to Europe’s needs. Overall, during the period from 2021 to 2027, the Digital Europe Program plans to spend €2.1 billion on AI. That figure may sound impressive, but it pales in comparison to the €10 billion that a single company, Microsoft, invested in OpenAI.

The €25 million being spent on the European large language model effort, if distributed across many smaller projects, risks not even counterbalancing the €15 million that Microsoft has spent bringing France’s Mistral, Europe’s most talked-about AI startup, into its orbit. The big AI players will make their presence felt in Brussels as soon as the AI Act, now finally approved, comes into full force. In short, the commission is making it clear in every way it can that a new sheriff is in town. But will the bureaucrats of Brussels be adequately armed to take on Big Tech? Only one thing is certain: it’s not going to be an easy task.

Article link: https://www.wired.com/story/european-commission-big-tech-regulation-outlook/?

VA Technology Modernization Oversight Hearing “U.S. Department of Veterans Affairs Office of Information and Technology Budget Request for Fiscal Year 2025”

Posted by timmreardon on 06/12/2024
Posted in: Uncategorized.

Study finds 268% higher failure rates for Agile software projects

Posted by timmreardon on 06/07/2024
Posted in: Uncategorized.
In praise of knowing the requirements before you start cranking out code

Richard Speed, Wed 5 Jun 2024 // 09:25 UTC

A study has found that software projects adopting Agile practices are 268 percent more likely to fail than those that do not.

Even though the research, commissioned by consultancy Engprax, could be seen as a thinly veiled plug for the Impact Engineering methodology, it feeds into the suspicion that the Agile Manifesto might not be all it’s cracked up to be.

The study’s fieldwork was conducted between May 3 and May 7 with 600 software engineers (250 in the UK and 350 in the US) participating. One standout statistic was that projects with clear requirements documented before development started were 97 percent more likely to succeed. In comparison, one of the four pillars of the Agile Manifesto is “Working Software over Comprehensive Documentation.”

According to the study, putting a specification in place before development begins can result in a 50 percent increase in success, and making sure the requirements are accurate to the real-world problem can lead to a 57 percent increase.
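Figures like “268 percent more likely to fail” are relative rates, not absolute ones. A minimal sketch, using hypothetical failure rates chosen only for illustration (the study’s underlying counts are not given in the article), shows how such a figure is computed:

```python
# Hypothetical failure rates, invented for illustration; the study's
# raw project counts are not published in the article.
def pct_more_likely(rate_a: float, rate_b: float) -> float:
    """How much more likely (in percent) an event is under rate_a than rate_b."""
    return (rate_a / rate_b - 1) * 100

agile_failure = 0.65    # share of Agile projects delivered late (per the study)
other_failure = 0.177   # assumed rate for non-Agile projects (illustrative)

print(round(pct_more_likely(agile_failure, other_failure)))  # ≈ 267, close to the quoted 268
```

Note how sensitive the headline number is to the baseline: the same 65 percent Agile failure rate against a slightly different non-Agile baseline would shift the ratio considerably.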

Dr Junade Ali, author of Impact Engineering, said: “With 65 percent of projects adopting Agile practices failing to be delivered on time, it’s time to question Agile’s cult following.

“Our research has shown that what matters when it comes to delivering high-quality software on time and within budget is a robust requirements engineering process and having the psychological safety to discuss and solve problems when they emerge, whilst taking steps to prevent developer burnout.”

The Agile Manifesto has been criticized over the years. The infamous UK Post Office Horizon IT system was an early large-scale project to use the methodology, although blaming an Agile approach for the system’s design flaws seems a bit of a stretch.


It is also easy to forget that other methodologies have their own flaws. Waterfall, for example, uses a succession of documented phases, of which coding is only a part. While simple to understand and manage, Waterfall can also be slow and costly, with changes challenging to implement.

Hence, there is a tendency for teams to look for alternatives.

Projects where engineers felt they had the freedom to discuss and address problems were 87 percent more likely to succeed. Worryingly, workers in the UK were 13 percent less likely to feel they could discuss problems than those in the US, according to the study.

Many sins of today’s tech world tend to be attributed to the Agile Manifesto: a never-ending stream of patches suggests that quality might not be what it once was, and code turning up in an unfinished or ill-considered state has likewise been blamed on Agile practices.

One Agile developer criticized the daily stand-up element, describing it to The Register as “a feast of regurgitation.”

However, while the Agile Manifesto might have its problems, those stem more from its implementation rather than the principles themselves. “We don’t need a test team because we’re Agile” is a cost-saving abdication of responsibility.

In highlighting the need to understand the requirements before development begins, the research charts a path between Agile purists and Waterfall advocates.

Article link: https://www-theregister-com.cdn.ampproject.org/c/s/www.theregister.com/AMP/2024/06/05/agile_failure_rates/

Scaling national e-health: Best practices from around the world – McKinsey

Posted by timmreardon on 06/06/2024
Posted in: Uncategorized.

We examined best practices from around the world to identify proven ways to implement e-health successfully. These insights could help other countries unlock their own e-health potential.


Implementing new national e-health1 solutions is inherently complex, and the challenges extend well beyond the technology domain. Facilitating adoption and regular use of e-health solutions by patients and healthcare professionals (HCPs) can prove especially fraught, yet widespread adoption is an important component of realizing the full potential of these solutions.

Expectations for e-health solutions can be high, with patients assuming their interactions with e-health solutions will mirror their digital shopping, banking, and takeout and delivery experiences when it comes to ease of use.2 HCPs likely hope to streamline their processes and integrate solutions easily into their existing systems, but reality often diverges from this ideal3:

  • Patients frequently find their e-health experiences are not intuitive for many reasons, including poor app design and patients’ lack of understanding of medical terminology.
  • Workflows for e-health applications such as electronic health records (EHRs) and e-prescriptions are based on clinical processes rather than a user’s perspective.
  • Data privacy and IT security protections, while necessary and important, can frustrate users’ attempts to log in to apps.
  • The practical value of e-health solutions may be unclear to patients (and HCPs).
  • HCPs may find e-health solutions difficult to integrate with their existing systems.

Despite these challenges, e-health solutions have demonstrated their potential to improve care quality, cost, and patient and HCP experience. 

Indeed, a McKinsey assessment shows that the potential value of fully utilizing 26 digital healthcare technologies in 14 countries is equal to 8 to 12 percent of total healthcare spending within each country.4 Of this estimated value, about 70 percent could be captured by hospitals and HCPs and 30 percent by health insurers. 

This article shares strategic actions that successfully spurred adoption and regular use of e-health solutions in countries around the world and lays out how other countries could combine and implement these strategic actions in three phases: setup, scale-up, and enhancing benefits. The analysis conducted for this article and real-world examples illustrate how these strategic actions, along with a holistic change-management program, could be used to scale e-health solutions in other countries and health systems.

Global e-health’s considerable potential

Digitalizing healthcare systems and implementing e-health solutions at the national level could make information more readily accessible and improve data sharing. Furthermore, using analytics-based e-healthcare solutions could improve treatments, avoid unnecessary interventions, and reduce medical and prescription errors. For example, clinicians could use their time during initial appointments more efficiently and avoid follow-up appointments with ready access to patient information. Easier intrahospital communications and reduced admissions enabled by remote monitoring could further boost efficiencies. And telemedicine, which facilitates timely treatment even for patients in remote and rural areas, could help close gaps in healthcare provisioning if combined with affordable, high-speed internet access.5

Governments and public institutions have been investing in e-health solutions for decades, prompted by the potential of these technologies. The United States launched one of the first EHR projects in the early 1970s,6 and in an early form of telemedicine, Norway relied on videoconferences for long-distance outpatient medical consultations and treatment in 1996.7 Since then, e-health solutions have been deployed around the world, with investments increasing markedly during the COVID-19 pandemic. In 2017, the size of the global e-health market8—including investments in e-health apps, devices, online pharmacies, and online consultations—was estimated at $19 billion; by 2022, it had grown to $64 billion.9 The market is projected to achieve a CAGR of 11.31 percent and reach $109 billion by 2027.10
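The quoted projection is easy to sanity-check: compounding the 2022 market size at the stated CAGR for five years lands on the 2027 estimate. A quick check, using only the figures cited above:

```python
# Sanity check of the market projection quoted above (figures from the article).
start_2022 = 64e9   # global e-health market in 2022, USD
cagr = 0.1131       # projected compound annual growth rate
years = 5           # 2022 -> 2027

projected_2027 = start_2022 * (1 + cagr) ** years
print(round(projected_2027 / 1e9))  # ≈ 109 (billion USD), matching the cited forecast
```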

According to McKinsey analysis, some investments in national e-health solutions are not yielding the desired adoption rates. In Germany, for example, less than 1.5 percent of patient records have been captured digitally since the country implemented its EHR system in 2021.11 Further impeding adoption, of the 30 to 40 percent of HCPs that have installed EHR applications, only 2 to 3 percent use them.12

Although elegant user design can contribute to widespread adoption, our analysis of national e-health systems13 reveals that certain strategic actions to introduce technologies and encourage solution use are more likely to facilitate adoption and successful implementation.

Scaling solution adoption: Best practices in action

Countries around the globe have used various combinations of strategic actions to achieve widespread adoption and use of their national e-health solutions. Numerous adoption-boosting actions—when applied at the right time and in the right sequence—have yielded favorable results in the countries reviewed as part of this analysis; the exhibit shows the strategic actions national e-health programs take most frequently. 

June 3, 2024 | Article

Exhibit

These strategic actions for scaling e-health adoption can be grouped into three deployment phases: setup, scale-up, and enhancing benefits. The phases can be viewed as a continuous transition with a shifting focus rather than as a linear process in which phases are completed consecutively or in a specific order.

The analysis for this article shows that a systematic approach—implementing the right actions at the right time—helps navigate the phases of scaling adoption and usage. Establishing overarching change and communication plans at the outset is an important contributor to adoption. Once a plan is in place, a potentially effective set of strategic actions could be selected and tailored to the e-health solution and the specific needs of the targeted users.

Setup phase: Attracting users

The primary objective of the setup phase is to attract a critical mass of patients and HCPs to new e-health solutions. Supporting strategic actions demonstrate to users how a new e-health solution addresses a specific problem, and they clearly state the solution’s benefits. Additionally, regulatory measures could be explored along with other actions to lower barriers to entry, such as establishing trust in e-health solutions and streamlining registration to simplify solution onboarding for patients and HCPs. Furthermore, compelling incentives such as financial compensation or reimbursements can be evaluated to determine their potential for encouraging proper usage.

These questions could help leaders focus efforts during the setup phase:

  • What is the best use case to begin with, for patients and HCPs?
  • Could regulatory measures be evaluated to ascertain their potential impact on achieving adoption?
  • What actions could help cultivate trust?
  • How can solutions be integrated smoothly with existing IT systems while ensuring interoperability?

Compelling use cases. Compelling use cases convey how a new e-health solution addresses an unmet need for patients or HCPs and show that the solution is easy to use and adds real value in practice. Simple dimensions to assess the potential of new e-health use cases are the number of users (patients and HCPs) affected, transaction volumes, and the potential impact of the e-health solution per transaction.

During the COVID-19 pandemic, some use cases demonstrated effective ways to attract new and lapsed users to e-health solutions and help scale those solutions. The benefits of the COVID-19-related solutions were clearly communicated, and the solutions were easy to use and encouraged frequent use (for example, to show proof of vaccination or COVID-19 test results multiple times daily). Consequently, the solutions attracted many users.

Use of Austria’s e-health system rose during the COVID-19 pandemic. The country introduced an electronic vaccination record in 2020 as a new feature of ELGA, its national electronic patient record system. In addition, free COVID-19 antigen tests in drugstores were available only via ELGA’s e-medication function. These strategic actions helped to further strengthen the system’s value proposition and encouraged the reactivation of a sizable number of ELGA user accounts.14

Management of adoption, use, privacy, and security. Some measures may help encourage proper adoption of e-health solutions on a national level. McKinsey research indicates that both clearly defined consent management policies and measures that mandate high data privacy and cybersecurity standards could potentially cultivate user trust in e-health solutions (a prerequisite for scaling new use cases). Other measures that could help facilitate adoption could also be explored, such as requiring adoption by HCPs or requiring users to actively opt out of digital systems versus actively sign up for them. However, such measures can also have unintended consequences, especially if they lack good design. Further, ensuring compliance with a complex web of regulations requires investment in legal, security, and compliance functions, which can make entering the market challenging for small health systems and innovators.

In Denmark, patient health data is automatically integrated in the national health portal, sundhed.dk. The government system, which contains each resident’s health status, medical history, prescriptions, and other relevant health information, is now mandatory for all HCPs in the country, and patients may not opt out of it. Notably, the system is highly transparent about how patients’ data is used, and strict rules and regulations limit access to and use of the EHR system to authorized HCPs for legitimate medical purposes.15 Patients can also access an activity log for their EHR that displays all data requests from HCPs.16 In addition to other levers, the combination of regulatory measures and other strategic actions increased the number of frequent users of sundhed.dk to about 45 percent of the Danish population by around 2010.17

Denmark also offered incentives for HCPs, gradually building HCP adoption of e-health solutions. For example, physicians received €1,500 per year to spend on e-health and were reimbursed more quickly if they were connected to the national healthcare infrastructure. Eventually Denmark mandated that EHRs integrate with the national e-health infrastructure. Once that legal requirement was established, 100 percent of HCPs adopted the national system.18

Interoperability and integration. A lack of integration and interoperability of IT systems can be one of the biggest barriers to scaling new e-health solutions. The IT systems for different stakeholders (general practitioners, hospitals, national databases, and so on) are generally heterogeneous—not connected or standardized in terms of their data or interfaces. Designing e-health solutions and supporting technical capabilities to make integrating them with existing IT systems as simple as possible fosters adoption among HCPs. Terminology services are one example of technical capabilities to ensure data interoperability. These services act as intermediaries, effectively mapping different coding and exchange standards to enable seamless communication. Financial incentives to reduce the burden for HCPs and offset initial investment costs can also be considered. 
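As a rough illustration of what such a terminology service does, the sketch below maps codes from one vocabulary to another. The “local” codes and the mapping table are invented for this example; only the ICD-10 codes are real.

```python
# Hypothetical terminology-service sketch. The local codes and the mapping are
# invented; E11 and I10 are real ICD-10 codes (type 2 diabetes, hypertension).
ICD10_TO_LOCAL = {
    "E11": "DM2",   # type 2 diabetes mellitus
    "I10": "HTN",   # essential (primary) hypertension
}

def translate(code, mapping):
    """Map a code between vocabularies; return None when no mapping exists."""
    return mapping.get(code)

print(translate("E11", ICD10_TO_LOCAL))  # DM2
print(translate("Z99", ICD10_TO_LOCAL))  # None (unmapped code)
```

A production terminology service would also handle versioning of code systems and one-to-many mappings, but the core role is the same: sit between heterogeneous systems and translate.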

Iceland partnered with the vendors that created Saga, the country’s e-healthcare system, to automate the integration of that system with individual HCPs’ existing IT systems.19 This integration allowed HCPs to access the patient information in Saga directly from their own systems, eliminating the need to switch between different applications or manually enter data. Today, the EHR is used by more than 90 percent of HCPs in Iceland.20

Scale-up phase: Fostering frequent usage

Once a critical mass of users has been reached, strategic actions in the scale-up phase center on encouraging users to become active, frequent users of e-health solutions. The quality of customers’ early experience with e-health solutions often determines whether they make using them a habit. Use rates can also be increased by effectively integrating different solutions—boosting usage across all solutions—as well as by further optimizing user experience, which enhances the value proposition of the solution.

These questions could help leaders focus efforts during the scale-up phase:

  • What is the best approach to making the e-health solution part of users’ daily routines?
  • Is the solution design user-centric?
  • Which use cases can boost regular usage?

Solution integration. The integration of different e-health solutions provides the dual benefits of improving user experiences and boosting the usage rates for all connected solutions. Patients prefer a simple, consistent e-health experience with uninterrupted user journeys.21 Thus, making it easier and more convenient to transition from one solution to another increases the likelihood that users will engage with all solutions more often. When facilitated by use of a central identity, central access, and consent management solutions, integrated e-health solutions help create this holistic patient experience.
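A consent-management layer of the kind described above can be pictured as a lookup that gates what each integrated solution may read. The data model below is hypothetical, invented purely to illustrate the idea:

```python
# Hypothetical consent store: (patient, application) -> data scopes consented to.
consents = {
    ("patient-123", "pharmacy-app"): {"prescriptions"},
    ("patient-123", "booking-app"): {"appointments"},
}

def may_access(patient: str, app: str, scope: str) -> bool:
    """An integrated solution may read a data scope only with recorded consent."""
    return scope in consents.get((patient, app), set())

print(may_access("patient-123", "pharmacy-app", "prescriptions"))  # True
print(may_access("patient-123", "booking-app", "prescriptions"))   # False
```

Centralizing this check behind one identity is what lets users move between solutions without repeated logins or repeated consent prompts.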

Singpass, Singapore’s national management system for verifying citizens’ identities, is an important enabler for integrating various e-health solutions. Singpass is subject to regular, rigorous review and was introduced as part of a much larger identity verification initiative—Singapore’s mandated use of facial recognition—and privacy framework.22 Singpass allows users to easily share data between apps, and (with users’ consent) apps can obtain information from Singpass to verify users’ identity.23 During the COVID-19 pandemic, citizens were required to submit health declarations online. Singpass enabled the integration of data from these declarations with other systems, such as immigration records and COVID-19 testing results. This helped authorities in Singapore identify potential cases of COVID-19 and greatly aided contact tracing.24 Singpass was also used to incorporate vaccination records from various sources—including hospitals, clinics, and vaccination centers—into a centralized system, allowing citizens to access their vaccination records online and authorities to track vaccination coverage.

User centricity. Across industries, digitalization has proved to be most beneficial when it is highly focused on end users.25 When new e-health solutions are designed and developed with insufficient focus on user centricity, the result is often a poor user experience and limited value in practice. This drives down use rates for e-health solutions and impedes scaling. End users of e-health solutions represent a range of populations, including patients, medical professionals, and pharmacies. Patient-focused solutions are built around patient journeys and integrate those different journeys wherever possible.

Solutions focused on HCPs, on the other hand, emphasize easy incorporation into daily processes and seamless integration into IT systems. Involving a broad range of stakeholders in the development process for e-health solutions can be invaluable in helping facilitate user centricity across stakeholders. Involvement can vary widely, including conducting research to understand preferences of different user groups; bringing together user group stakeholders to generate ideas, codesign solutions, and provide feedback on prototypes; and conducting user testing that evaluates the usability, feasibility, and acceptability of e-health solutions.

Via the Kanta Services platform, Finnish citizens can access their health data and prescriptions, HCPs can access EHRs and write prescriptions, and pharmacies can access a comprehensive pharmaceutical database, among other capabilities. The platform’s development involved patients, healthcare professionals, and IT experts, and it focused on making the platform user-friendly, secure, and practical for daily use. Consequently, Finland was able to streamline physicians’ workflow and ease their administrative burden by integrating access to services into a single platform.26 Prior to Kanta Services, patients could not access their own records, and it was not uncommon for physicians to have difficulty accessing patient records from other units; physicians would need to log in and out of multiple systems throughout the day to access records as well as perform daily tasks such as viewing medical images and ordering laboratory tests.27

High-volume use cases. Digital healthcare applications with high transaction volumes, such as e-prescriptions, could help to further drive adoption at scale of other solutions. Integrating these high-transaction-volume solutions with other e-health solutions fosters greater engagement and increased use across solutions.

Since the introduction of e-prescriptions in Estonia and the integration of prescription data into the country’s EHR system in 2010, the country has seen a substantial increase in the use of both solutions. Today, nearly 100 percent of prescriptions issued in Estonia are digital, and 99 percent of all healthcare data has been captured in the EHR system. Patients can access a full overview of their medications and a data log for each prescription via the patient portal. Additionally, a drug interaction alert service checks e-prescription data for medical interactions and displays notifications to HCPs to alert them to any potential concerns. Today, all Estonian physicians use the drug alert service as part of their daily routines.28
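The drug interaction alert service described for Estonia can be sketched as a lookup over known interacting pairs. The two interactions below are well-known clinical examples, but the code and data model are a hypothetical sketch, not Estonia’s actual implementation:

```python
# Toy interaction checker. The two pairs are well-known interactions, but this
# is an illustrative sketch, not the Estonian system's real knowledge base.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased myopathy risk",
}

def check_new_prescription(current_meds, new_med):
    """Return alerts for known interactions between new_med and current meds."""
    alerts = []
    for med in current_meds:
        note = INTERACTIONS.get(frozenset({med, new_med}))
        if note:
            alerts.append(f"{med} + {new_med}: {note}")
    return alerts

print(check_new_prescription(["warfarin", "metformin"], "aspirin"))
# ['warfarin + aspirin: increased bleeding risk']
```

The point of integrating such a check into the e-prescription flow is that the alert fires at the moment of prescribing, when the physician can still choose an alternative.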

Enhancing benefits: Encouraging further usage and enabling ecosystem growth 

Once a critical mass of users is frequently using e-health solutions or an entire ecosystem of solutions, a variety of innovative strategic actions can help maximize the benefits of the digital ecosystem. Developing new partnerships with HCPs and organizations, for example, may expand the range of services that are available via the e-health solution and add value for users. By consistently delivering value-adding offerings, the e-health ecosystem could deepen user engagement and foster loyalty, increasing adoption and use of the solution and ultimately leading to the realization of greater overall value. 

These questions could help leaders focus efforts during the enhancing-benefits phase:

  • How can e-health adoption lead to valuable innovations and insights?
  • What is the key to creating a thriving e-health ecosystem and using data effectively?
  • How can the full potential of e-health solutions be unlocked?

Data-driven innovations. A central national e-health infrastructure, especially a national data hub, may improve data standards as well as quality, availability, and accessibility of data. These improvements could, in turn, create opportunities for data-driven, value-adding analytics solutions from third parties.

Israel’s national database, the National Health Information Exchange (NHIE), contains health data from various sources—such as hospitals, clinics, and health insurers—and is managed by the Israeli Ministry of Health (IMH). While the NHIE is not publicly available, approved researchers, start-ups, and organizations can access and share the data at no cost with permission from the IMH. To ensure the privacy and security of the data, access to the NHIE is tightly regulated, and users must adhere to strict guidelines and regulations.29 One start-up that has leveraged Israel’s NHIE uses AI-powered medical-scan readers to detect early signs of cancer. The software is designed to analyze medical images such as computerized tomography (CT) scans and X-rays and identify potential anomalies that could indicate the presence of cancer or other diseases. By accessing Israel’s population health data, the solution can train its algorithms on a large and diverse data set, improving the accuracy and reliability of its scans.

A rich ecosystem. Opening a national e-health infrastructure to third parties helped to scale adoption and add value in some of the countries included in the analysis conducted for this article. Indeed, third-party solutions could help accelerate innovation and access to additional value pools. Implementing robust protections for personal health information and ensuring respect for patients’ rights before opening e-health infrastructure to third parties can foster trust in the system. 

As discussed above, Israel’s NHIE allows health organizations as well as start-ups to not only access available health data but also easily connect to the e-health infrastructure using uniform interoperability standards. This standardization helps promote competition and innovation by lowering barriers to entry for companies new to the market. An open API–based platform allows third-party developers to build services that can easily connect to it. Collaboration and innovation are also enabled by this openness, which makes the platform’s data and services available for the development of new solutions. The data-sharing agreements Israel has established govern the secure and controlled sharing of its health data, allowing that data to be used to improve health outcomes while safeguarding patient privacy. A broader ecosystem was thus created via an open digital infrastructure that facilitated competition among healthcare players.30
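The value of uniform interoperability standards described above can be made concrete with a short sketch. The snippet below shows, in the style of a FHIR-based open health API, how a third-party developer might build a standards-compliant patient search and parse the kind of bundle such an API returns. The base URL, endpoint, and sample data are invented for illustration; the NHIE's actual interfaces are not public.

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL for an open, FHIR-style national health API.
# (Illustrative only; the NHIE's real endpoints are not public.)
BASE_URL = "https://api.example-health.gov/fhir"

def build_patient_search(params: dict) -> str:
    """Build a standards-based search URL for Patient resources."""
    return f"{BASE_URL}/Patient?{urlencode(params)}"

def extract_patient_ids(bundle: dict) -> list:
    """Pull resource IDs out of a FHIR-style searchset bundle."""
    return [e["resource"]["id"] for e in bundle.get("entry", [])]

# A sample response bundle, shaped the way a uniform standard would
# return it to any approved third party.
sample_bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [
        {"resource": {"resourceType": "Patient", "id": "pt-001"}},
        {"resource": {"resourceType": "Patient", "id": "pt-002"}},
    ],
}

url = build_patient_search({"birthdate": "ge1960-01-01", "_count": 50})
print(url)
print(extract_patient_ids(sample_bundle))
```

Because every connected system speaks the same resource format, the same two small functions work against any participant in the ecosystem, which is precisely how standardization lowers barriers to entry for new market entrants.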

Population health management and informed decisions and policy making. Digitalized healthcare data could be used for central analytics such as population health management. This data could aid understanding of the overall health and care needs of a population as well as the availability of healthcare within a system, possibly improving care coordination and resource allocation.

Chicago’s Smart Data Project showcases how predictive analytics can support effective public health initiatives. The Chicago Department of Public Health employs predictive models that identify households at high risk for lead poisoning and monitor food establishments to detect safety violations. Using publicly available data, these models enable proactive interventions to help protect public health.31
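A minimal sketch can illustrate how a predictive model of this kind turns public data into an inspection priority list. The features, weights, and households below are invented for the example and are not Chicago's actual model; the point is the pattern: score each household with a logistic model, then rank so inspectors visit the highest-risk addresses first.

```python
import math

# Illustrative risk scorer in the spirit of Chicago's lead-poisoning model.
# Feature names and weights are invented for this sketch.
WEIGHTS = {
    "building_age_pre_1978": 1.4,   # lead paint was banned in the US in 1978
    "prior_violation_nearby": 0.9,
    "vacant_units_rate": 0.6,
}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Logistic score in (0, 1): higher means inspect sooner."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

households = [
    {"id": "A", "building_age_pre_1978": 1,
     "prior_violation_nearby": 1, "vacant_units_rate": 0.3},
    {"id": "B", "building_age_pre_1978": 0,
     "prior_violation_nearby": 0, "vacant_units_rate": 0.1},
]

# Rank households so limited inspection resources go to the highest risk first.
ranked = sorted(households, key=risk_score, reverse=True)
print([h["id"] for h in ranked])
```

In a real deployment the weights would be fitted to historical inspection outcomes rather than set by hand, but the proactive-intervention logic, score then prioritize, is the same.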

A tailored approach to encouraging adoption at scale: Fundamental steps

The adoption of e-health solutions cannot be approached with a one-size-fits-all mindset. It is important to recognize that the diverse nature of healthcare systems, technological infrastructures, cultural contexts, and regulatory environments necessitates tailored approaches for each setting. What may work seamlessly in one country may encounter considerable challenges or inefficiencies in another.

The strategic actions provided in this article are instructive, offering insights into successful approaches to achieving e-health solution adoption at scale. They should be regarded not as rigid templates but rather as flexible blueprints that can be customized and optimized for a country’s specific circumstances. These actions can serve as guideposts as leaders engage in three potential preliminary steps to develop a customized approach to scaling e-health solutions: defining baselines, developing a target design, and creating a detailed action plan.

Defining baselines

When successfully implemented, tailored e-health solutions typically begin with a clear understanding of a country or region’s baseline for three critical factors: the digital affinity of user groups, the requirements of local healthcare systems, and the digital maturity of infrastructure. 

Assessing user groups’ digital affinity entails understanding their familiarity and comfort with technology. This helps identify potential barriers that may affect adoption and enables targeted interventions to increase acceptance. Local healthcare system requirements could be evaluated by analyzing existing structures, policies, and processes to ensure solutions align with the context and integrate effectively. And analyzing technical capabilities, interoperability, and data standards of existing infrastructure provides insight into its digital maturity. This assessment can determine the feasibility of deploying solutions and identify necessary upgrades for successful implementation and scalability.

Developing a target design

Thoughtful analysis of solutions that would be most beneficial as well as consideration of target user groups and stakeholders can facilitate an effective target design for an adoption strategy. This process involves strategic design choices, including determining the optimal sequencing of solution rollout and ensuring seamless interconnection and interoperability among different offerings. The target design provides a comprehensive vision for structuring solutions and their interactions within the broader e-health ecosystem.

Creating a detailed action plan

Achieving scalability in e-health solutions necessitates an action plan that extends beyond individual solutions and encompasses the entire ecosystem of e-health offerings. Identifying the strategic actions that yield the greatest impact for each specific solution, as well as those that influence the overall offering, is central to effectively scaling adoption. This understanding can inform the appropriate timing and sequencing of actions, which support optimal implementation and adoption. Above all, change management is an integral component of any action plan because it plays a pivotal role in bolstering acceptance and adoption among stakeholders from the outset.

An action plan in action. The Qatar Ministry of Public Health developed a comprehensive action plan to establish its e-health ecosystem.32 The ecosystem’s foundation is a central platform that provides a national health information exchange and a repository that facilitates the secure exchange and storage of health information. The platform will enable communication between public and private healthcare players, including hospitals, health centers, clinics, and pharmacies, and will support patient empowerment.

As part of the program, the ministry will launch multiple solutions, including a health information viewer for clinicians, a national e-prescription system, national disease registries and management, traditional AI capabilities, and a patient-centric mobile app.

The ministry started implementing a strategy built on target-segment-driven adoption and change management; the strategy comprises multiple steps to help ensure adoption and wide use across all national solutions, including the following:

  • implementing a user-centric design approach involving a wide group of end users (clinicians, patients, and technology experts)
  • implementing an easy-to-use patient consent system integrated with the patient application and the health information viewer
  • ensuring solution capabilities to streamline adoption, including embedding solutions into clinical workflows
  • planning multiple training initiatives—for example, training programs tailored to solutions and healthcare systems
  • implementing a communication plan that regularly engages all stakeholders

As the analysis conducted for this article demonstrates, customization, adaptability, and localized approaches in scaling e-health solutions are of utmost importance. Indeed, embracing this mindset and leveraging the wealth of proven strategic actions available can accelerate the adoption of e-health solutions. And, more broadly, tailoring solutions and strategies to meet patients and HCPs where they are advances e-health’s immense potential to enhance healthcare delivery and improve patient outcomes on a global scale.

ABOUT THE AUTHOR(S)

Ali Ustun is a partner in McKinsey’s Doha office, where Osman Ozturk is an associate partner; Florian Niedermann is a senior partner in the Stuttgart office; Matthias Redlich is a partner in the Frankfurt office, where Katharina Sickmüller is an associate partner; Panco Georgiev is a senior partner in the Dubai office, and Andreas Faber is an associate partner in the Cologne office.

Article link: https://www.mckinsey.com/industries/healthcare/our-insights/scaling-national-e-health-best-practices-from-around-the-world
