healthcarereimagined

Envisioning healthcare for the 21st century

The ethics of artificial intelligence – McKinsey

Posted by timmreardon on 02/02/2019
Posted in: Uncategorized.

In this episode of the McKinsey Podcast, Simon London speaks with MGI partner Michael Chui and McKinsey partner Chris Wigley about how companies can ethically deploy artificial intelligence.

Download Podcast https://www.mckinsey.com/assets/dotcom/the-mckinsey-podcast/MP-The-ethics-of-artificial-intelligence.mp3

0:15

Hello, and welcome to this edition of the McKinsey Podcast, with me, Simon London. Today we’re going to be talking about the ethics of artificial intelligence. At the highest level, is it ethical to use AI to enable, say, mass surveillance or autonomous weapons? On the flip side, how can AI be used for good, to tackle pressing societal challenges? And in day-to-day business, how can companies deploy AI in ways that ensure fairness, transparency, and safety?

0:50

To discuss these issues, I sat down with Michael Chui and Chris Wigley. Michael is a partner with the McKinsey Global Institute and has led multiple research projects on the impact of AI on business and society. Chris is both a McKinsey partner and chief operating officer at QuantumBlack, a London-based analytics company that uses AI extensively in its work with clients. Chris and Michael, welcome to the podcast.

1:21

Great to be here.

1:22

Terrific to join you.

1:23

This is a big, hairy topic. Why don’t we start with the broadest of broad-brush questions, which is, “Are we right to be concerned?” Is the ethics of AI something—whether you’re a general manager or a member of the public—that we should be concerned about?

1:41

Yes, I think the simple answer to this is that the concerns are justified. We are right to worry about the ethical implications of AI. Equally, I think we need to celebrate some of the benefits of AI. The high-level question is, “How do we get the balance right between those benefits and the risks that go along with them?”

2:00

On the benefit side, we can already see hundreds of millions, even billions, of people using and benefiting from AI today. It’s important we don’t forget that. Across our daily use of search, maps, health technology, and assistants like Siri and Alexa, we’re all benefiting a lot from the convenience and the enhanced decision-making powers that AI brings us.

2:25

But on the flip side, there are justifiable concerns: around the jobs affected by the automation of roles that AI enables; around topics like autonomous weapons; around the impact that some AI-enabled spaces and forums can have on the democratic process; and even around emerging phenomena like deepfakes, which are videos created via AI that look and sound like a president, a presidential candidate, a prime minister, or some other public figure saying things they never said. All of those are risks we need to manage. But at the same time we need to think about how we can enable the benefits to come through.

3:11

To add to what Chris was saying, you can think about ethics in two ways. One is this is an incredibly powerful tool. It’s a general-purpose technology—people have called it—and one question is, “For what purposes do you want to use it?” Do you want to use it for good or for ill?

3:28

There’s a question about what the ethics of that are. But again, you can use this tool for doing good things, for improving people’s health. You can also use it to hurt people in various ways. That’s one level of questions.

3:42

I think there’s a separate level of questions which are equally important. Once you’ve decided perhaps I’m going to use it for a good purpose, I’m going to try to improve people’s health, the other ethical question is, “In the execution of trying to use it for good, are you also doing the right ethical things?”

4:00

Sometimes you could have unintended consequences. You can inadvertently introduce bias in various ways despite your intention to use it for good. You need to think about both levels of ethical questions.

4:14

Michael, I know you just completed some research into the use of AI for good. Give us an overview. What did you find when you looked at that?

4:22

One of the things we were looking at was how you could direct this incredibly powerful set of tools toward improving social good. We looked at 160 individual potential use cases of AI to improve social good, everything from improving healthcare and public health around the world, to improving disaster recovery, to improving financial inclusion.

4:49

For pretty much every one of the UN’s Sustainable Development Goals, there are a set of use cases where AI can actually help improve some of our progress towards reaching those Sustainable Development Goals.

5:06

Give us some examples. What are a couple of things? Bring it to life.

5:08

Some of the things that AI is particularly good at—or the new generations of AI are particularly good at—are analyzing images, for instance. That has broad applicability. Take, for example, diagnosing skin cancer. One thing you could imagine doing is taking a mobile phone and uploading an image and training an AI system to say, “Is this likely to be skin cancer or not?”

5:32

There aren’t dermatologists everywhere in the world where you might want to diagnose skin cancer. The technology is not perfect yet, but being able to do that could genuinely improve access to healthcare.
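
As an illustrative aside: below is a minimal sketch of how such an image classifier is often built today, by fine-tuning a pretrained network on labeled phone photos. This is not the system discussed in the podcast; the directory layout, class names, and model choice are all assumptions for the example.

```python
# A minimal, illustrative sketch (not a medical device): fine-tune a
# pretrained image model to flag possibly cancerous skin lesions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse pretrained visual features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(lesion is suspicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical folder of labeled phone photos: images/benign, images/suspicious
train = tf.keras.utils.image_dataset_from_directory(
    "images/", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train, epochs=3)
```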

5:45

On a very different scale, we have huge amounts of satellite imagery. The entire world’s land mass is imaged, in some cases several times a day. In a disaster situation, it can be very difficult, when searching for survivors, to identify which buildings are still standing, which healthcare facilities are still intact, and where there are and aren’t passable roads.

6:11

We’ve seen artificial-intelligence technology, particularly deep learning, identify these features on satellite imagery very quickly, much more quickly than a small team of human beings could, and then help divert or allocate emergency resources, whether healthcare workers or infrastructure construction workers, more quickly in a disaster situation.

6:40

So disaster response, broadly speaking—there’s a whole set of cases around that.

6:44

Absolutely. It’s a place where speed is of the essence. When these automated machines using AI are able to accelerate our ability to deploy resources, it can be incredibly impactful.

6:58

One of the things that I find most exciting about this is linking it to our day-to-day work as well. We’ve had a QuantumBlack team, for example, working over the last few months with a city recovering from a major gas explosion on its outskirts. Using a mix of the imagery techniques Michael has spoken about, that work has helped accelerate the recovery of the city’s infrastructure, helped the families affected, and helped facilities like schools.

7:30

There are also commuting patterns: communications data that you can aggregate to look at how people travel around the city, and so on, to optimize the work of the teams doing the disaster recovery.

7:45

We’ve also deployed these kinds of machine-learning techniques on questions ranging from “What are the root causes of people getting addicted to opioids, and what might be some of the most effective treatments?” to epidemiology, looking at the spread of diseases like measles in Croatia. Those are all things that we’ve been a part of in the last 12 months, often on a pro bono basis, bringing these technologies to life to solve concrete societal problems.

8:18

The other thing that strikes me in the research is that very often you are dealing with more vulnerable populations when you’re dealing with some of these societal-good issues. So yes, there are many ways in which you can point AI at these societal issues, but the risks in implementation are potentially higher because the people involved are in some sense vulnerable.

8:40

I think we find that to be the case. Sometimes AI can improve social good by identifying vulnerable populations. But in some cases that might hurt the people that you’re trying to help the most. Because when you’re identifying vulnerable populations, then sometimes bad things can happen to them, whether it’s discrimination or acts of malicious intent.

9:03

To that second level that we talked about before, how you actually implement AI within a specific use case also brings to mind a set of ethical questions about how that should be done. That’s as true in for-profit cases as it is in not-for-profit cases. That’s as true in commercial cases as it is in AI for social good.

9:24

Let’s dive deeper on those risks then, whether you’re in a for-profit or a not-for-profit environment. What are the main risks and ethical issues related to the deployment of AI in action?

9:34

One of the first we should touch on is around bias and fairness. We find it helpful to think about this in three levels, the first being bias itself. We might think about this where a data set that we’re drawing on to build a model doesn’t reflect the population that the model will be applied to or used for.

9:55

There have been various controversies around facial-recognition software not working as well for women, for people of color, because it’s been trained on a biased data set which has too many white guys in it. There are various projects afoot to try and address that kind of issue. That’s the first level, which is bias. Does the data set reflect the population that you’re trying to model?

10:20

You then get into fairness which is a second level. Saying, “Look, even if the data set that we’re drawing on to build this model accurately reflects history, what if that history was by its nature unfair?” An example domain here is around predictive policing. Even if the data set accurately reflects a historical reality or a population, are the decisions that we make on top of that fair?

10:50

Then the final one is [about whether the use of data is] unethical. Are there data sets and models that we could build and deploy which could just be turned to not just unfair but unethical ends? We’ve seen debates on this between often the very switched-on employees of some of the big tech firms and some of the work that those tech firms are looking at doing.

11:16

Different groups’ definitions of unethical will be different. But thinking about it at those three levels is a helpful way of separating some of those issues. One: bias. Does the data reflect the population? Two: fairness. Even if it does, does that mean we should continue that in perpetuity? And three: unethical. Are there things that these technologies can do which we should just never do?
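
As a toy illustration of the first level, a team can compare the composition of its training data against the population the model will serve. The groups and reference shares below are invented for the example, not drawn from any project mentioned here.

```python
# Toy check of the first level: does the training data reflect the
# population the model will be applied to? Groups and shares are invented.
import pandas as pd

train = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 200 + ["C"] * 100})
reference = pd.Series({"A": 0.50, "B": 0.30, "C": 0.20})  # assumed population shares

observed = train["group"].value_counts(normalize=True)
report = pd.DataFrame({"observed": observed, "expected": reference})
report["gap"] = report["observed"] - report["expected"]
print(report.round(2))  # large gaps flag over- or under-represented groups
```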

11:42

I think Chris brings up a really important point. We often hear the term algorithmic bias, which suggests that software engineers embed their latent or blatant biases into the rules of a computer program. While that is something to guard against, the more insidious, and perhaps more common, problem for this type of technology is the bias that might be latent within the data sets, as Chris was mentioning.

12:10

Some of that comes about sometimes because it’s the behavior of people who are biased and therefore you see it. Arrest records being biased against certain racial groups would be an example. Sometimes it just comes about because of the way that we’ve collected the data.

12:27

That type of subtlety is really important. It’s not just about making sure that the software engineer isn’t biased. You really need to understand the data deeply if you’re going to understand whether there’s bias there.

12:38

Yes, there’s the famous example of potholes in Boston, which I think used the accelerometers in smartphones to identify when people driving went over potholes. The problem, at the time this data was collected, was that many of the more disadvantaged populations didn’t have smartphones. So there was more data on potholes in rich neighborhoods. [The Street Bump program is not in active use by the city of Boston.]

13:00

There’s a bunch of other risks that we also need to take into account. If bias and fairness give us an ethical basis for thinking about this, we also face very practical challenges and risks in this technology. For example, at QuantumBlack, we do a lot of work in the pharmaceutical industry, and we’ve worked on topics like patient safety in clinical trials. Once we’re building these technologies into the workflows of people making patient-safety decisions in clinical trials, we have to be really, really thoughtful about the resilience of those models in operation: how the models inform the decision making of human beings without replacing it, so we keep a human in the loop; how we ensure that the data sources feeding the model continue to reflect the reality on the ground; and how those models get retrained over time.

13:54

In those kinds of safety critical or security critical applications, this becomes absolutely essential. We might add to this areas like critical infrastructure, like electricity networks and smart grids, airplanes. There are all sorts of areas where there is a vital need to ensure the operational resilience of these kinds of technologies as well.

14:19

This topic of the safety of AI is a very hot one right now, particularly as you’re starting to see it applied in places like self-driving cars. You’re seeing it in healthcare, where the potential impact on a person’s safety is very large.

14:38

In some cases we have a history of understanding how to try to ensure higher levels of safety in those fields. Now we need to apply them to these AI technologies because many of the engineers in these fields don’t understand that technology yet, although they’re growing in that area. That’s an important place to look in terms of the intersection of safety and AI.

15:00

And the way that some people have phrased that, which I like is, “What is the building code equivalent for AI?” I was renovating an apartment last year. The guy comes around from the local council and says, “Well, if you want to put a glass pane in here, because it’s next to a kitchen, it has to be 45-minutes fire resistant.” That’s evolved through 150, 200 years of various governments trying to do the right thing and ensure that people are building buildings which are safe for human beings to inhabit and minimize things like fire risk.

15:30

We’re still right at the beginning of that learning curve with AI. But it’s really important that we start to shape out some of those building code equivalents for bias, for fairness, for explainability, for some of the other topics that we’ll touch on.

15:44

Chris, you just mentioned explainability. Just riff on that a little bit more. What’s the set of issues there?

15:51

Historically some of the most advanced machine learning and deep-learning models have been what we might call a black box. We know what the inputs into them are. We know that they usefully solve an output question like a classification question. Here’s an image of a banana or of a tree.

16:10

But we don’t know what is happening on the inside of those models. When you get into highly regulated environments like the pharmaceutical industry and also the banking industry and others, understanding how those models are making those decisions, which features are most important, becomes very important.

16:31

To take an example from the banking industry, in the UK the banks have recently been fined over 30 billion pounds, and that’s billion with a B, for mis-selling of [payment] protection insurance. When we’re talking to some of the banking leaders here, they say, “Well, you know, as far as we understand it, AI is very good at responding to incentives.” We know that some of the historic problems were around sales teams that were given overly aggressive incentives. What if we incentivize the AI in the wrong way? How do we know what the AI is doing? How can we have that conversation with the regulator?

17:05

We’ve been doing a lot of work recently around, “How can we use AI to explain what AI is doing?” Here’s how that works in practice. We’ve just done a test of this with a big bank in Europe, in a safe area: how the relationship managers talk to their corporate clients, and what they talk to them about.

17:26

The first model is a deep-learning model, which we call a propensity model. What is the propensity of a customer to do something, to buy a product, to stop using the service? We then have a second machine-learning model, which is querying the first model millions of times to try and unearth why it’s made that decision.

17:48

It’s deriving what the features are that are most important. Is it because of the size of the company? Is it because of the products they already hold? Is it because of any of hundreds of other features?

17:58

We then have a third machine-learning model, which is then translating the insights of the second model back into plain English for human beings to understand. If I’m the relationship manager in that situation, I don’t need to understand all of that complexity. But suddenly I get three or four bullet points written in plain English that say, “Not just here is the recommendation of what to do, but also here’s why.” It’s likely because of the size of that company, of the length of the relationship we’ve had with that customer, whatever it is, that actually A) explains what’s going on in the model and B) allows them to have a much richer conversation with their customer.

18:36

Just to close that loop, the relationship manager can then feed back into the model, “Yes, this was right. This was a useful conversation, or no, it wasn’t.” So we continue to learn. Using AI to explain AI starts to help us to deal with some of these issues around the lack of transparency that we’ve had historically.
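
The production system Chris describes isn’t public, but the three-model pattern can be sketched with off-the-shelf tools: a black-box propensity model, a surrogate that queries it many times to learn which features drive it, and a step that renders the top drivers as plain-English sentences. The feature names below are invented, and a real deployment would more likely use attribution tooling such as SHAP or LIME than a single global surrogate.

```python
# A simplified sketch of the "AI explains AI" pattern: model 1 is a
# black-box propensity model; model 2 queries it many times and learns
# an interpretable approximation; step 3 turns the top features into
# plain-English sentences. Feature names are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier  # stand-in black box
from sklearn.tree import DecisionTreeClassifier          # interpretable surrogate

X, y = make_classification(n_samples=5000, n_features=6, random_state=0)
features = ["company_size", "products_held", "relationship_years",
            "recent_logins", "sector_growth", "credit_utilization"]

black_box = GradientBoostingClassifier().fit(X, y)

# Model 2: probe model 1 with many synthetic inputs and mimic its outputs.
X_probe = np.random.RandomState(1).normal(size=(20_000, 6))
surrogate = DecisionTreeClassifier(max_depth=3).fit(
    X_probe, black_box.predict(X_probe))

# Step 3: translate the surrogate's feature importances into sentences.
top = sorted(zip(features, surrogate.feature_importances_),
             key=lambda t: -t[1])[:3]
for name, weight in top:
    print(f"The recommendation is driven mainly by {name} "
          f"(relative importance {weight:.0%}).")
```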

18:56

You could think about the ethical problem being, “What if we have a system that seems to work better than another one, but it’s so complex that we can’t explain why it works?” These deep-learning systems have millions of simulated neurons. Again trying to explain how that works is really, really difficult.

19:14

In some cases, as Chris was saying, the regulator requires you to explain what happened. Take, for example, the intersection with safety. If a self-driving car makes a left turn instead of hitting the brakes and it causes property damage or hurts somebody, a regulator might say, “Well, why did it do that?”

19:30

And it does call into question, “How do you provide a license?” In some cases what you want to do is examine the system and be able to understand and somehow guarantee that the technical system is working well. Others have said, “You should just give a self-driving car a driving test and then figure it out.” Some of these questions are very real ones as we try to understand how to use and regulate these systems.

19:54

And there’s a very interesting trade-off, often, between performance and transparency. Maybe at some point in the future there won’t be a trade-off, but at the moment there is. For a bank that’s thinking about giving someone a consumer loan, we could have a black-box model that gets us a certain level of accuracy, let’s say 96, 97 percent, in predicting whether this person will repay. But we don’t know why. And so we struggle to explain, either to that person or to a regulator, why we have or haven’t given them a loan.

20:29

But there’s maybe a different type of model, more explainable, which gets us to a 92, 93 percent level of accuracy. We’re prepared to trade off that performance in order to have the transparency.
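
A toy demonstration of that trade-off on synthetic lending data: a black-box ensemble typically edges out a logistic regression on accuracy, while the logistic model exposes coefficients a regulator or customer can read. The data and any resulting gap are illustrative, not the bank figures quoted above.

```python
# Illustrative performance-vs-transparency comparison on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_features=10,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=300).fit(X_tr, y_tr)
glass_box = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("black box accuracy :", black_box.score(X_te, y_te))
print("glass box accuracy :", glass_box.score(X_te, y_te))
print("glass box weights  :", glass_box.coef_.round(2))  # readable drivers
```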

20:41

If we put that in human terms, let’s say we’re going in for treatment, and there is a model that can accurately predict whether a tumor is cancerous, or make some other medical determination. To some extent, as human beings, if we’re reassured that this model is right and has been proven right in thousands of cases, we don’t actually care why it knows, as long as it’s making a good prediction that a surgeon can act on to improve our health.

21:13

We’re constantly trying to make these trade-offs between the situations where explainability is important and the situations where performance and accuracy are more important.

21:23

Then for explainability it’s partly an ethical question. Sometimes it has to do with just achieving the benefits. We’ve looked at some companies where they’ve made the trade-off that Chris suggested, where they’ve gone to a slightly less performant system because they knew the explainability was important in order for people to accept the system and therefore actually start to use it.

21:47

Change management is one of the biggest challenges in achieving benefits from AI and other technologies, and so explainability can make a difference. But as Chris also said, that can change over time. For instance, I use a car with [anti-lock] braking systems, and the truth is I don’t know how that works. Maybe earlier in that history people were worried: “You’re going to let the car brake for itself?”

22:10

But now we’ve achieved a level of comfort because we’ve discovered this stuff works almost all the time. We start to see that comfort change on an individual basis as well.

22:24

I’m going to ask an almost embarrassingly nerdy management question now. Stepping away from the technology, what’s our advice to clients about how to address some of these issues? Some of this feels like it’s around risk management. As you think about deploying AI, how do you manage these ethical risks and compliance risks, however you choose to phrase them? What’s the generalizable advice?

22:54

Let me start with one piece of advice. Just as we expect executives to learn about every part of their business (if you’re going to be a general manager, you need to know something about supply chain, HR strategy, operations, sales and marketing), it is becoming incumbent on every executive to learn more about technology now.

23:14

To the extent to which they need to learn about AI, they’re going to need to learn more about what it means to deploy AI in an effective way. We can bring some of the historical practices—you mentioned risk management. Understanding risk is something that we’ve learned how to do in other fields.

23:32

We can bring some of those tools to bear here when we couple that with the technical knowledge as well. One thing we know about risk management: understand what all the risks are. I think bringing that framework to the idea of AI and its ethics carries over pretty well.

23:47

Right. So it’s not just understanding the technology, but it’s also at a certain level understanding the ethics of the technology. At least get in your head what are the ethical or the regulatory or the risk implications of deploying the technology.

24:02

That’s exactly right. Take, for example, bias. Many legal traditions around the world recognize a set of protected classes, or characteristics, that we don’t want technology or other systems to use as a basis for discrimination.

24:20

That understanding allows you to say, “Okay, we need to test our AI system to make sure it’s not creating disparate impact for these populations of people.” That’s a concept that we can take over. We might need to use other techniques in order to test our systems. But that’s something we can bring over from our management practices previously.

24:40

As a leader thinking about how to manage the risks in this area, dedicating a bit of head space to thinking about it is a really important first step. The second element of this is bring someone in who really understands it. In 2015, so three years ago now, we hired someone into QuantumBlack who is our chief trust officer.

25:04

No one at the time really knew what that title meant. But we knew that we had to have someone who was thinking about this full time as their job because trust is existential to us. What is the equivalent if you’re a leader leading an organization? What are the big questions for you in this area? How can you bring people into the organization or dedicate someone in the organization who has that kind of mind-set or capabilities to really think about this full time?

25:31

To build on that, I think you need to have the right leaders in place. As a leadership team, you need to understand this. But the other important thing is to cascade this through the rest of the organization, understanding that change management is important as well.

25:45

Take the initiatives people had to do in order to comply with GDPR. That’s something that again I’m not saying that if you’re GDPR compliant, you’re ethical, but think about all the processes that you had to cascade not only for the leaders to understand but all of your people and your processes to make sure that they incorporate an understanding of GDPR.

26:06

I think the same thing is true in terms of AI and ethics as well. You think about everyone needs to understand a little bit about AI, and they have to understand, “How can we deploy this technology in a way that’s ethical, in a way that’s compliant with regulations?” That’s true for the entire organization. It might start at the top, but it needs to cascade through the rest of the organization.

26:25

We also have to factor in the risk of not innovating in this space, the risk of not embracing these technologies, which is huge. I think there’s this relationship between risk and innovation that is really important and a relationship between ethics and innovation.

26:43

We need an ethical framework and an ethical set of practices that can enable innovation. If we get that relationship right, it should become a flywheel of positive impact where we have an ethical framework which enables us to innovate, which enables us to keep informing our ethical framework, which enables us to keep innovating. That positive momentum is the flip side of this. There’s a risk of not doing this as much as there are many risks in how we do it.

27:13

Let’s talk a little bit more about this issue of algorithmic bias, whether it’s in the data set or actually in the system design. Again very practically, how do you guard against it?

27:25

We really see the answer to the bias question as being one of diversity. We can think about that in four areas. One is diversity of background of the people on a team. There’s this whole phenomenon around group think that people have blamed for all sorts of disasters. We see that as being very real.

27:45

We have 61 different nationalities across QuantumBlack. We have as many or more academic backgrounds. Our youngest person is in their early 20s. Our oldest person in the company is in their late 60s. All of those elements of diversity of background come through very strongly. We were at one point over 50 percent women in our technical roles. We’ve dropped a bit below that as we’ve scaled. But we’re keen to get back. Diversity of people is one big area.

28:10

The second is diversity of data. We touched on this topic of bias in the data sets not reflecting the populations that the model is looking at. We can start to understand and address those issues of data bias through diversity of data sets, triangulating one data set against another, augmenting one data set with another, continuing to add more and more different data perspectives onto the question that we’re addressing.

28:35

The third element of diversity is diversity of modeling. We very rarely just build a single model to address a question or to capture an opportunity. We’re almost always developing what we call ensemble models that might be a combination of different modeling techniques that complement each other and get us to an aggregate answer that is better than any of the individual models.

28:56

The final element of diversity we think about is diversity of mind-set. That can be diversity along dimensions like the Myers-Briggs Type Indicator or all of these other types of personality tests.

29:09

But we also, as a leadership team, challenge ourselves in much simpler terms around diversity. We sometimes nominate who’s going to play the Eeyore role and who’s going to play the Tigger role when we’re discussing a decision. Framing it even in those simple Winnie the Pooh terms can help us to bring that diversity into the conversation. Diversity of background, diversity of data, diversity of modeling techniques, and diversity of mind-sets. We find all of those massively important to counter bias.
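
On the diversity-of-modeling point specifically, here is a minimal sketch of an ensemble that blends different model families and is often at least as accurate as its typical member. The estimators and synthetic data are illustrative assumptions, not QuantumBlack’s actual stack.

```python
# A small sketch of "diversity of modeling": a soft-voting ensemble
# that blends three different model families on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=4000, n_features=12, random_state=0)

members = [("lr", LogisticRegression(max_iter=1000)),
           ("rf", RandomForestClassifier(n_estimators=200)),
           ("nb", GaussianNB())]
ensemble = VotingClassifier(members, voting="soft")

for name, model in members + [("ensemble", ensemble)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:8s} accuracy: {score:.3f}")
```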

29:37

So adding to the diversity points that Chris made, there are some process things that are important to do as well. One thing you can do as you start to validate the models that you’ve created is have them externally validated. Have someone else who has a different set of incentives check to make sure that in fact you’ve understood whether there’s bias there and understood whether there’s unintended bias there.

30:02

Some of the other things that you want to do is test the model either yourself or externally for specific types of bias. Depending on where you are, there might be classes of individuals or populations that you are not permitted to have disparate impact on. One of the important things to understand there is not only is race or sex or one of these protected characteristics—

30:27

And a protected characteristic is a very specific legal category, right? And it will vary by jurisdiction?

30:33

I’m not a lawyer. But, yes, depending on which jurisdiction you’re in, in some cases the law states, “You may not discriminate or have disparate impact against certain people with a certain characteristic.” And ensuring that you’re not discriminating or having disparate impact takes more than just not having gender as one of the fields in your database.

30:55

Because sometimes what happens is you have these, to get geeky, these co-correlates, these other things which are highly correlated with an indicator of a protected class. And so understanding that and being able to test for disparate impact is a core competency to make sure that you’re managing for biases.
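
A minimal sketch of such a test: compare outcome rates across a protected group, apply the common four-fifths rule of thumb for disparate impact, and scan the remaining features for co-correlates. The data, the protected flag, and the zip_risk_score proxy below are all synthetic assumptions for illustration.

```python
# Toy disparate-impact test. The protected flag and zip_risk_score proxy
# are synthetic; the threshold follows the common "80 percent rule" heuristic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "protected": rng.integers(0, 2, 10_000),      # 1 = member of protected class
    "zip_risk_score": rng.normal(size=10_000),    # seemingly neutral feature
})
df["zip_risk_score"] += 0.8 * df["protected"]     # ...but it's a partial proxy
df["approved"] = rng.normal(size=10_000) - 0.3 * df["zip_risk_score"] > 0

rates = df.groupby("protected")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {ratio:.2f}",
      "(below 0.8 flags potential disparate impact)" if ratio < 0.8 else "")

# Co-correlate scan: features highly correlated with the protected flag
# can reintroduce discrimination even if the flag itself is dropped.
print(df.corr(numeric_only=True)["protected"].round(2))
```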

31:15

One of the big issues, once the model is up and running, is, “Having tested it as it was being developed, how can we ensure that it maintains both accuracy and freedom from bias in operation?” As an industry, we’re in the reasonably early stages of ensuring resilience and ethical performance in production.

31:41

But there are some simple steps, like having a process check that asks, “When was the last time this model was validated?” It sounds super simple, but people have very busy lives, and without that check models can just get overlooked. It runs from those simple process steps all the way through to the more complicated, technology-driven elements of this.

32:03

We can actually have a second model checking the first model to see if it’s suffering from model drift, for example, and then translate that into a very simple red, amber, green dashboard of model performance. But a lot of this still relies on having switched-on human beings who may be alerted or helped by technology, but who engage their brains on the question, “Are these models, once they’re up and running, actually still performant?”

32:31

All sorts of things can trip them up. A data source gets combined upstream and suddenly the data feed that’s coming into the model is different from how it used to be. The underlying population in a given area may change as people move around. The technologies themselves change very rapidly. And so that question of how do we create resilient AI, which is stable and robust in production, is absolutely critical, particularly as we introduce AI into more and more critical safety and security and infrastructure systems.
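
One simple version of that monitoring, sketched below, is a population stability index (PSI) computed per input feature and mapped to the red, amber, green status mentioned above. The thresholds are common industry rules of thumb rather than a standard, and the data is synthetic.

```python
# Minimal drift check: PSI on one feature, mapped to red/amber/green.
# Thresholds (0.1, 0.25) are conventional rules of thumb, not a standard.
import numpy as np

def psi(expected, actual, bins=10):
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

train_feature = np.random.default_rng(0).normal(0.0, 1, 50_000)  # at training time
live_feature = np.random.default_rng(1).normal(0.3, 1, 5_000)    # in production

score = psi(train_feature, live_feature)
status = "green" if score < 0.1 else "amber" if score < 0.25 else "red"
print(f"PSI={score:.3f} -> {status}")
```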

33:04

And the need to update models is a more general problem than just making sure you don’t have bias. It’s made even more interesting by adversarial cases. Say, for instance, you have a system designed to detect fraud. People committing fraud obviously don’t want to be detected, so they might change their behavior as they learn what the model is starting to detect.

33:24

And so again, you really need to understand when you need to update the model whether it’s to make sure that you’re not introducing bias or just in general to make sure that it’s performing.

33:33

There’s an interesting situation in the UK where the UK government has set up a new independent body called the Centre for Data Ethics and Innovation that is really working on balancing these things out. How can you maximize the benefits of AI to society within an ethical framework?

33:50

And the Centre for Data Ethics and Innovation, or CDEI, is not itself a regulatory body but is advising the various regulatory bodies in the UK like the FCA, which regulates the financial industry and so on. I suspect we’ll start to see more and more thinking at a government and inter-government level on these topics. It’ll be a very interesting area over the next couple of years.

34:14

So AI policy broadly speaking is coming into focus and coming to the fore and becoming much more important over time.

34:21

It is indeed becoming more important. But I also think that it’s interesting within individual regulatory jurisdictions, whether it’s in healthcare or in aviation, whether it’s what happens on roads, the degree to which our existing practices can be brought to bear.

34:38

So again, as I said, are driving tests the way we’ll be able to tell whether autonomous vehicles should be allowed on the roads? There are traditions around medical licensure, and questions about how those are implicated by the AI systems we might want to bring to bear. Understanding that tradition and seeing what can already be applied to AI is really important.

35:04

So what is the standard to which we hold AI? And how does that compare to the standard to which we hold humans?

35:10

Indeed.

35:12

Absolutely. In the context of something like autonomous vehicles, that’s a really interesting question. We know that a human population of a certain size that drives a certain amount is likely to have a certain number of accidents a year. Is the right threshold for allowing autonomous vehicles when they’re better than that level, or better than it by a factor of ten?

35:37

Or do we only allow it when we get to a perfect level? And is that ever possible? I don’t think that anyone knows the answer to that question at the moment. But I think that as we start to flesh out these kinds of ethics frameworks around machine learning and AI and so on, we need to deploy them to answer questions like that in a way which various stakeholders in society really buy into.

36:00

A lot of the answers to fleshing out these ethical questions have to come from engaging with stakeholder groups and engaging with society more broadly, which is in and of itself an entire process and entire skill set that we need more of as we do more AI policy making.

36:19

Well, thank you, Chris. And thank you, Michael, for a fascinating discussion.

36:24

Thank you.

36:25

It’s been great.

Transformation underway across the Military Health System – Stripes

Posted by timmreardon on 02/01/2019
Posted in: Uncategorized.
by Tom McCaffery
Principal Deputy Assistant Secretary of Defense for Health Affairs
January 29, 2019

FALLS CHURCH, Va. — The Military Health System is one of America’s largest and most complex health-care delivery systems, and the world’s preeminent military-medical enterprise. Saving lives on the battlefield and caring for 9.5 million beneficiaries in one of the nation’s largest health-benefit plans, the Military Health System (MHS) is embarking on a new chapter, ushering in unprecedented reform to military medicine. This transformation marks a new way of doing business – from military treatment facility (MTF) management, to electronic health record (EHR) deployment, to TRICARE benefit enhancements – and we are working hard to provide medical readiness and health-care delivery that is more integrated and effective than ever before.

Organization Changes in the Military Health System

October 1, 2018, was a landmark day for the Department of Defense (DoD) and military-health care. Jumpstarting one of the largest organizational changes in decades, the Army, Navy and Air Force began the process of transferring the administration and management of their military MTFs to the Defense Health Agency (DHA). Part of a larger effort to implement reforms across the MHS, this historic change was mandated by the National Defense Authorization Act for fiscal year 2017. The law requires all MTFs to adhere to DHA-established standardized policies, procedures and clinical and business processes. In addition, through a phased approach, the DHA will assume direct responsibility for all MTFs across the MHS.

As such, the DHA will be responsible for MTF budgetary matters; information technology; health-care administration and management; administrative policies and procedures; and military-medical construction. We began the first phase on October 1 with the hospitals and clinics at Fort Bragg, Pope Field and Seymour Johnson Air Force Base, North Carolina; Naval Air Station Jacksonville, Florida; Keesler Air Force Base, Mississippi; and Joint Base Charleston, South Carolina. These facilities are in addition to the DHA’s existing management of Walter Reed National Military Medical Center, Fort Belvoir Community Hospital and their associated clinics. Subsequent phases of the MHS transition plan will include more than 50 percent of all hospitals and clinics in the continental U.S. coming under DHA control by October 2019 (phase 2), the remaining hospitals and clinics in the U.S. moving to DHA control by October 2020 (phase 3) and overseas hospitals and clinics by October 2021 (phase 4). Once complete, this transition will enable the MHS to better support the DoD’s medical-readiness requirements; provide a more consistent and higher quality experience for our patients; and deliver a more integrated military-health enterprise that reduces the costs required to operate the system, freeing up resources to invest in additional priorities.

Our highest priority is ensuring our medical forces are ready to support combat forces in the field around the globe, while building and sustaining a world-class health-care system geared toward ensuring a medically ready force. The reforms underway create new opportunities for our providers, both in our MTFs and through civilian-sector partnerships, to build and maintain clinical skills – part and parcel of delivering on our readiness mission to support warfighters, their families and retirees.

We will also be able to deliver a more integrated and consistent experience for our patients, whether they are active duty, retired or family members. For the first time, all of the department’s health-delivery functions will be under one roof. The DHA will be responsible for both Purchased Care – what our beneficiaries receive from the civilian sector – and Direct Care – what our beneficiaries receive at our MTFs. This consolidation will produce a better experience for our patients when we implement improvements such as standardizing appointment scheduling systems and streamlining referral processes.

Deploying a New Electronic Health Record

As the MHS embarks on unprecedented reforms, we are utilizing new tools to position us for a successful future. We continue to deploy MHS GENESIS, the new EHR for the military, which will provide enhanced, secure technology to manage health—connecting medical and dental information across the continuum of care, from point of injury to the MTF. MHS GENESIS will replace our legacy systems, which lack the capability to support the delivery of modern, integrated health care. We are scheduled to roll out the next wave in the fall of 2019, with the system-wide completion targeted for calendar year 2023.

The DoD purposefully deployed MHS GENESIS in four initial sites to identify and address lessons learned from initial implementation and utilize those experiences and best practices to inform the next wave of MHS GENESIS sites. We are seeing MHS GENESIS enable easier monitoring and response to patient health through an enhanced set of tools: data reporting and tracking capabilities, improved analytics, computer-aided decision support and a user-friendly patient portal. We’ve seen significant improvements in the four initial MHS GENESIS sites: a substantial decrease in the percentage of emergency-department patients who left without being seen; patient risk-alert systems leading to enhanced clinical decision making; and an avoidance of tens of thousands of duplicate lab tests. Like our broader transformation plans, at the heart of these efforts is a concerted push toward standardization, integration and readiness – and we are moving in the right direction.

TRICARE Enhancements

What do these major organizational changes mean for our beneficiaries? Our ultimate goal is to enhance the quality of care and improve access to health care for all our beneficiaries – making an already strong MHS even better. Alongside the MHS transformation come a number of ongoing enhancements to the TRICARE Health Plan. Over the last year alone, we have kicked off new TRICARE contracts for managed care through our civilian networks; these make coverage more convenient for our beneficiaries when they move, reduce administrative costs, and require our managed-care support contractors to provide broader access to primary and specialty-care networks. We also rolled out the new TRICARE Select benefit and implemented a series of enhancements for TRICARE beneficiaries, including expanded access to preventive care, urgent care and mental-health services.

From November 12 through December 10, 2018, TRICARE held its first Open Season enrollment period, the annual period when beneficiaries can make changes to their plan for the following calendar year. Also, we replaced the TRICARE Retiree Dental Program effective December 31, 2018, and now offer our 3.3 million retirees dental coverage through the Federal Employee Dental and Vision Insurance Program, or FEDVIP. Most beneficiaries are also now eligible for vision coverage – something DoD has never offered before. With 10 dental and four vision carriers, FEDVIP provides greater choice and scalability for 4.1 million eligible beneficiaries. 

Bringing it all together: what we seek to achieve

The major initiatives underway within the MHS are important steps in answering the call of DoD Secretary Jim Mattis to focus on three lines of effort to execute the National Defense Strategy: enhancing lethality, expanding alliances and partnerships and reforming the way we do business. Secretary Mattis’s call for business reforms is aimed, in his words, at “greater performance and accountability.” Our MHS reforms and the deployment of MHS GENESIS are setting us up to better support medical-readiness requirements and health-care delivery through integration and efficiency. These efforts help lower our costs, working to ensure the department has the resources to sustain the health benefits on which our Service members, retirees and their families depend.

Ultimately, all of these changes – the Military Health System transformation, MHS GENESIS, TRICARE enhancements – are aimed at taking the DoD’s health enterprise to the next level. Amidst these changes, we remain steadfast in our commitment to support readiness, both for our combat forces and for medical personnel. We are committed to meeting the evolving needs of today’s warfighter, and we will continue to deliver the highest quality health care for our 9.5 million active duty, retiree and family members who play such a critical role in keeping our country safe and secure. Our Service members and beneficiaries deserve nothing less.

Article link: https://japan.stripes.com/community-news/transformation-underway-across-military-health-system


Blockchain’s Occam problem – McKinsey

Posted by timmreardon on 01/29/2019
Posted in: Uncategorized.

Blockchain has yet to become the game-changer some expected. A key to finding the value is to apply the technology only when it is the simplest solution available.

January 2019 | Matt Higginson, Marie-Claude Nadeau, and Kausik Rajgopal

Blockchain over recent years has been extolled as a revolution in business technology. In the nine years since its launch, companies, regulators, and financial technologists have spent countless hours exploring its potential. The resulting innovations have started to reshape business processes, particularly in accounting and transactions.

Amid intense experimentation, industries from financial services to healthcare and the arts have identified more than 100 blockchain use cases. These range from new land registries, to KYC applications and smart contracts that enable actions from product processing to share trading. The most impressive results have seen blockchains used to store information, cut out intermediaries, and enable greater coordination between companies, for example in relation to data standards.

One sign of blockchain’s perceived potential is the large investments being made. Venture-capital funding for blockchain startups reached $1 billion in 2017. IBM has invested more than $200 million in a blockchain-powered data-sharing solution for the Internet of Things, and Google has reportedly been working with blockchains since 2016. The financial industry spends around $1.7 billion annually on experimentation.

There is a clear sense that blockchain is a potential game-changer. However, there are also emerging doubts. A particular concern, given the amount of money and time spent, is that little of substance has been achieved. Of the many use cases, a large number are still at the idea stage, while others are in development but with no output. The bottom line is that despite billions of dollars of investment, and nearly as many headlines, evidence for a practical scalable use for blockchain is thin on the ground.

Infant technology

From an economic-theory perspective, the stuttering blockchain development path is not entirely surprising. It is an infant technology that is relatively unstable, expensive, and complex. It is also unregulated and selectively distrusted. Classic lifecycle theory suggests the evolution of any industry or product can be divided into four stages: pioneering, growth, maturity, and decline (exhibit). Stage 1 is when the industry is getting started, or a particular product is brought to market. This is ahead of proven demand and often before the technology has been fully tested. Sales tend to be low and return on investment is negative. Stage 2 is when demand begins to accelerate, the market expands, and the industry or product “takes off.”

[Exhibit: the classic industry lifecycle curve, from pioneering through growth and maturity to decline.]

Across its many applications, blockchain arguably remains stuck at stage 1 in the lifecycle (with a few exceptions). The vast majority of proofs of concept (POCs) are in pioneering mode (or being wound up) and many projects have failed to get to Series C funding rounds.

One reason for the lack of progress is the emergence of competing technologies. In payments, for example, it makes sense that a shared ledger could replace the current highly intermediated system. However, blockchains are not the only game in town. Numerous fintechs are disrupting the value chain. Of nearly $12 billion invested in US fintechs last year, 60 percent was focused on payments and lending. SWIFT’s global payments innovation initiative (GPI), meanwhile, is addressing initial pain points through higher transaction speeds and increased transparency, building on bank collaboration.

Blockchain players in the payments segment, such as Ripple, are increasingly partnering with nonbank payments providers, the businesses of which may be a better fit for blockchain technology. These companies may also be willing to move forward more rapidly with integration.

In addition, the payments industry faces a classic innovator’s dilemma: incumbents understand that investing in disruption, and the likely resulting rise in customer expectations for faster, easier, and cheaper services, may lead to cannibalization of their own revenues.

Given the range of alternative payments solutions and the disincentives to investment by incumbents, the question is not whether blockchain technology can provide an alternative, but whether it needs to. Occam’s razor is the problem-solving principle that the simplest solution tends to be the best. On that basis, blockchain’s payments use cases may be the wrong answer.

Industry caution

Some sense of this dilemma is starting to feed through to industry. Early blockchain development was led by financial services, which from 2012 to 2015 assigned big resources where it was felt processes could be streamlined. Banks and others saw activities such as trade finance, derivatives netting and processing, and compliance (alongside payments) as prime candidates. Numerous companies set up innovation labs, hired blockchain gurus, and invested in start-ups and joint ventures. A leading industry consortium attracted more than 200 financial institutions to its ecosystem, conceived to deliver the next generation of blockchain technology in finance.

As financial services led, others followed. Insurers saw the chance for contract and guarantee efficiencies and the potential to share intelligence on underwriting and fraud. The public sector looked at how it could update its sprawling networks, creating more transparent and accessible public records. Automakers envisaged smart contracts sitting on top of the blockchain to automate leasing and hire agreements. Others spotted a chance to modernize accounting, contracting, and fractional ownership and to create efficiencies in data management and supply chains.

By the end of 2016, blockchain’s future looked bright. Investment was soaring and some of the structural challenges to the industry appeared to be fading. Technical glitches were being resolved and new, more private versions of the ledger were launched to cater to business demands. Regulators appeared to be more sanguine than previously, focusing on communication, adaptation, and debate rather than impediment.

From an industry lifecycle perspective, however, a more complex dynamic was emerging. Just as the financial services industry’s blockchain investments were reaching the end of Stage 1—theoretically the moment when they should be gearing up for growth—they appeared to falter.

Emerging doubts

McKinsey’s work with financial services leaders over the past two years suggests those at the blockchain “coalface” have begun to have doubts. In fact, as other industries have geared up, the mood music at some levels of financial services has increasingly been one of caution (even as senior executives have made confident pronouncements to the contrary). The fact was that billions of dollars had been sunk, but hardly any use cases made technological, commercial, and strategic sense or could be delivered at scale.

By late 2017, many people working at financial companies felt blockchain technology was either too immature, not ready for enterprise level application, or was unnecessary. Many POCs added little benefit, for example beyond cloud solutions, and in some cases led to more questions than answers. There were also doubts about commercial viability, with little sign of material cost savings or incremental revenues.

Another concern was the requirement for a dedicated network. The logic of blockchain is that information is shared, which requires cooperation between companies and heavy lifting to standardize data and systems. The coopetition paradox applied; few companies had the appetite to lead development of a utility that would benefit the entire industry. In addition, many banks have been distracted by broader IT transformations, leaving little headspace to champion a blockchain revolution.

The key question now is whether those doubts are still justified. Or whether it is just that progress in blockchain development has been slower than expected.

Over recent months some financial institutions have begun to recalibrate their blockchain strategies. They have put POCs under more intense scrutiny and adopted a more targeted approach to development funding. Many have narrowed their focus from tens of use cases to one or two and have doubled down on oversight of governance and compliance, data standards, and network adoption. Some consortia have shrunk their proof of concept rosters from tens in 2016 to just a handful today.

The emergence of cryptocurrencies, and in particular Bitcoin, as potential mainstream financial instruments prompted financial services to move first on blockchain experimentation, placing them 18 to 24 months ahead of other industries on the industry lifecycle. Given that gap, it is not surprising that the earlier concerns in banking are now emerging elsewhere, with initial enthusiasm being eroded by a growing sense of underachievement.

The reality is that rather than following the classic upward curve of the industry lifecycle, blockchain appears to be stalled in the bottom left-hand corner of the X-Y graph. For many, stage 2 isn’t happening. As we enter 2019, blockchain’s practical value is mainly located in three specific areas:

  • Niche applications: There are specific use cases for which blockchain is particularly well-suited. They include elements of data integration for tracking asset ownership and asset status. Examples are found in insurance, supply chains, and capital markets, in which distributed ledgers can tackle pain points including inefficiency, process opacity, and fraud.
  • Modernization value: Blockchain appeals to industries that are strategically oriented toward modernization. These see blockchain as a tool to support their ambitions to pursue digitization, process simplification, and collaboration. In particular, global shipping contracts, trade finance, and payments applications have received renewed attention under the blockchain banner. However, in many cases blockchain technology is a small part of the solution and may not involve a true distributed ledger. In certain instances, renewed energy, investment, and industry collaboration are resolving challenges agnostic of the technology involved.
  • Reputational value: A growing number of companies are pursuing blockchain pilots for reputational value, demonstrating to shareholders and competitors their ability to innovate, but with little or no intention of creating a commercial-scale application. Arguably, blockchains focused on customer loyalty, IoT networking, and voting fall into this category. In this context, claims of being “blockchain enabled” sound hollow.

A future for blockchain?

Given the lack of convincing at-scale use cases and the industry’s seemingly becalmed position in the industry lifecycle, there are reasonable questions to ask about blockchain’s future. Is it really going to revolutionize transaction processing and lead to material cost reductions and efficiency gains? Are there benefits to be accrued that justify the changes required in market infrastructure and data governance? Or is a secure distributed ledger primarily just one option when contemplating possible replacements for legacy infrastructure?

Certainly, there is a growing sense that blockchain is a poorly understood (and somewhat clunky) solution in search of a problem. The perspective is exacerbated by short-term expense pressures, cultural resistance in some quarters (blockchains may threaten jobs), and concern over disruption to healthy revenue streams. There are challenges in respect of governance—making decisions in a decentralized environment is never easy, especially when accountability is equally decentralized. And there are technical impediments, for example in respect to blockchains’ data storage capacity.

It’s estimated there will be over 20 billion connected devices by 2020, all of which will require management, storage, and retrieval of data. However, today’s blockchains are ineffective data receptacles: every node on a typical network must process every transaction and maintain a copy of the entire state. The result is that network throughput cannot exceed the capacity of any single node, and blockchains become less responsive as more nodes are added, due to latency.
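
That constraint can be made concrete with a toy model. The sketch below is illustrative only, not any production blockchain: because every node validates and stores every transaction, network throughput is capped by the slowest node, and adding nodes cannot raise it.

```python
# Toy model of full replication: every node keeps the whole ledger and
# validates every transaction, so network throughput is bounded by the
# capacity of a single (the slowest) node.
from dataclasses import dataclass, field

@dataclass
class Node:
    tx_per_sec: float                           # validation capacity
    ledger: list = field(default_factory=list)  # full copy of all transactions

def network_throughput(nodes: list[Node]) -> float:
    # Every transaction must be processed by every node, so the network
    # as a whole moves only as fast as its slowest validator.
    return min(n.tx_per_sec for n in nodes)

nodes = [Node(900.0), Node(1200.0), Node(750.0)]
print(network_throughput(nodes))  # 750.0 -- adding more nodes never raises this
```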

Finally, there are security concerns. In smaller networks where validation relies on a majority vote, there is manifest potential for fraud (the so-called “51 percent problem”). Another potential security challenge arises from advances in quantum computing. Google said in 2016 that its quantum prototype was 10 million times faster than any computer in its lab. That raises the possibility that quantum computers will eventually be able to break the codes used to authorize cryptocurrency transactions, a particularly troubling threat for a network that claims to be fraud resistant.
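
The 51 percent intuition can be quantified with the catch-up formula from the Bitcoin whitepaper: an attacker holding a share q of the network’s validation power overtakes an honest chain that is z blocks ahead with probability (q/p)^z, where p = 1 - q, and with certainty once q reaches a majority. A minimal sketch:

```python
# Attacker catch-up probability, following the analysis in the Bitcoin
# whitepaper: with attack share q < 0.5 the chance of erasing a z-block
# deficit is (q/p)^z; at q >= 0.5 the attacker always wins eventually.
def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q
    if q >= p:
        return 1.0  # the "51 percent problem": a majority always prevails
    return (q / p) ** z

for q in (0.10, 0.30, 0.45, 0.51):
    print(f"q={q:.2f}  P(catch up from 6 blocks back)={catch_up_probability(q, 6):.4f}")
```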

Still, all is not lost. It’s likely that many of the validation protocols used today will be upgraded or replaced in the next two to three years, and innovators are already finding solutions. Cardano, for example, is a so-called third-generation technology and the industry’s first platform built on peer-reviewed, open-source code. The protocol is designed to be resistant to quantum computing. Private blockchains, meanwhile, are being built to give network members control over who can read the ledger and how nodes are connected.
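
The permissioning idea is simple to sketch. The hypothetical fragment below (none of these names come from the article) leaves the ledger itself unchanged but gates reads behind a membership list that network operators control:

```python
# Hypothetical permissioned ledger: contents behave like any append-only
# ledger, but reads are restricted to an authorized membership list.
class PrivateLedger:
    def __init__(self, authorized_readers):
        self._entries = []
        self._readers = set(authorized_readers)

    def append(self, entry: dict) -> None:
        self._entries.append(entry)

    def read(self, member_id: str) -> list:
        if member_id not in self._readers:
            raise PermissionError(f"{member_id} may not read this ledger")
        return list(self._entries)

ledger = PrivateLedger(authorized_readers={"bank-a", "bank-b"})
ledger.append({"tx": "settlement", "amount": 100})
print(ledger.read("bank-a"))        # allowed
# ledger.read("outsider") would raise PermissionError
```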

In addition, there have been some promising advances in use cases, particularly outside the financial industry. Recent experiments in supply chains, identity management, and the sharing of public records have been positive. We have seen grocery stores target customers with blockchain-enabled products and services, and shipping executives launch a new real-time registry of containers underpinned by blockchain.

An emerging perspective is that the application of blockchain can be most valuable when it democratizes data access, enables collaboration, and solves specific pain points. Certainly, it brings benefits where it shifts ownership from corporations to consumers, shares “proof” of supply-chain provenance more vertically, and enables transparency and automation. Our suspicion is that it will be these kinds of use cases, rather than those in financial services, that eventually demonstrate the most value.

Moving through the cycle: Three key principles

There is no guarantee that any blockchain application will make a sustained move to the second stage in the industry lifecycle. To do so will require a strong rationale, significant capital, and increased standardization. Fintech leaders will need to take a more nuanced view of their target industries and hire the right talent. However, where there is potential to address pain points at scale, the opportunity remains in place.

To get there we see three key principles as minimum conditions for progress:

  • Organizations must start with a problem. Unless there is a valid problem or pain point, blockchain likely won’t be a practical solution. Also, Occam’s razor applies—it must be the simplest solution available. Firms must honestly evaluate their risk-reward appetite, level of education, and potential gain. They should also assess the potential impact of any project and supporting business case.
  • There must be a clear business case and target ROI. Organizations must identify a rationale for investment that reflects their market position and which is supported at board level and by employees, without fear of cannibalization. Companies should pragmatically consider their power to shape ecosystems, establish standards, and address regulatory hurdles, all of which will inform their strategic approach. Blockchain’s value comes from its network effects, so a majority of stakeholders must be aligned. There must be a governance agreement covering participation, ownership, maintenance, compliance, and data standards. Finance arrangements must be agreed in advance so that sufficient funding through to commercial launch is guaranteed.
  • Companies must agree to a mandate and commit to a path to adoption. Once a use case is selected, companies must assess their ability to deliver. Sufficient economic and technological support is essential. If they pass those hurdles, the next stage is to launch a design process and gather elements including the core blockchain platform and hardware. They must then set performance targets (transaction volume and velocity). In parallel, companies should put in place the necessary organizational frameworks, including working groups and communications protocols, so that development, configuration, integration, production, and marketing (to drive adoption at scale) are sufficiently supported.

Conceptually, blockchain has the potential to revolutionize business processes in industries from banking and insurance to shipping and healthcare. Still, the technology has not yet seen a significant application at scale, and it faces structural challenges, including resolving the innovator’s dilemma. Some industries are already downgrading their expectations (vendors have a role to play there), and we expect further “doses of realism” as experimentation continues.

Companies set on taking blockchain forward must adapt their strategic playbooks, honestly review the advantages over more conventional solutions, and embrace a more hard-headed commercial approach. They should be quick to abandon applications where there is no incremental value. In many industries, the necessary collaboration may best be undertaken with reference to the ecosystems starting to reshape digital commerce. If they can do all that, and be patient, blockchain may still emerge as Occam’s right answer.

About the authors

Matt Higginson is a partner in McKinsey’s Boston office, Marie-Claude Nadeau is a partner in the San Francisco office, and Kausik Rajgopal is a senior partner in the Silicon Valley office.

Article link: https://www.mckinsey.com/industries/financial-services/our-insights/blockchains-occam-problem

The Deadly Consequences Of Financial Incentives In Healthcare – Forbes

Posted by timmreardon on 01/29/2019
Posted in: Uncategorized. Leave a comment

Robert Pearl, M.D. Contributor

Imagine you’re a CEO in charge of a healthcare organization with thousands of physicians and 19 hospitals.

Overall, the quality of care delivered is good. Prices and costs are low. But there is a problem: Patients rate your service below average. Making matters worse, a swarm of low-priced competitors has moved in, challenging your market share. You’re going to need to improve patient satisfaction to survive. What do you do?

About 20 years ago, this was my predecessor’s problem at Kaiser Permanente. Under enormous pressure, the CEO came up with what he thought was a great solution: a patient-satisfaction incentive program.

It could not have been a bigger disaster.

Hundreds of thousands of KP members mailed in their satisfaction surveys, revealing that 16 of the 19 medical centers hit their stretch targets. The winning facilities got huge payouts, but inside the three that missed out, resentment erupted and persisted, not just that year but for decades to come.

It wasn’t laziness or indifference that caused the three medical centers to fail. On the contrary, those facilities had experienced unprecedented spikes in membership that year. As any physician will tell you, patient relationships don’t just blossom overnight. They take time to develop, thus the lower overall scores.

Things weren’t any better inside the 16 facilities that won. The following year, with financial challenges mounting at KP, leaders scratched the incentive program altogether. Suddenly, without any extrinsic motivation to make patients happier, doctors returned to their previous pattern of behavior. Service took a hit and brand perception crashed.

“Money Often Costs Too Much” – Ralph Waldo Emerson 

For decades, elite business schools have touted the benefits of financial incentives to motivate sales teams, factory workers and rising executives. The results have been mixed.

In medicine, financial incentives rarely achieve their intended goals. It’s not because they don’t work. As I tell students at the Stanford Graduate School of Business, it’s because they work too well.

Monetary rewards always change doctor behavior, but rarely achieve the outcomes desired—something my predecessor learned the hard way when he tried to increase patient satisfaction.

But what happens when a group of well-intended leaders tries using money to improve clinical quality? As a study recently published in JAMA demonstrates, the consequences can be fatal.

I’ll let the researchers explain the backstory, as they did in this New York Times op-ed:

A decade ago, one in five Medicare patients who were hospitalized for common conditions ended up back in the hospital within 30 days. Because roughly half of those cases were thought to be preventable, reducing hospital readmissions was seen by policymakers as a rare opportunity to improve the quality of care while reducing costs. In 2010, the federal agency that oversees Medicare … established the Hospital Readmissions Reduction Program (HRRP) under the Affordable Care Act. Two years later, the program began imposing financial penalties on hospitals with high rates of readmission within 30 days of a hospitalization for pneumonia, heart attack or heart failure.

Like most financial-incentive programs, HRRP was created for the right reasons. Logic dictates that if you pay doctors to prevent readmissions (or in this case, penalize them when they fail), they’ll make doubly sure patients are healthy enough to go home before releasing them from the hospital.

At first, it seemed to work. Five years into HRRP, readmission rates were down and Medicare had saved $10 billion. But as calls to expand the program grew louder, researchers shed light on the program’s unintended consequences.

“Rule No. 1: Never Lose Money” – Warren Buffett

Remember, CMS began doling out financial penalties within two years of HRRP’s launch to punish hospitals for readmitting patients who had experienced either (1) pneumonia, (2) heart attack or (3) heart failure. To make sure hospital administrators took the program seriously, CMS made the financial penalties for readmissions stiff: 10 times greater than the penalties for higher death rates.

True to the program’s intention, researchers found no increase in readmissions or mortality for patients with acute heart attacks. This makes sense. Patients who survive a myocardial infarction (MI) rarely suffer another cardiac event within 30 days.

However, data concerning the other two “penalized” diagnoses (pneumonia and heart failure) raised huge red flags about the program.

“Post-discharge deaths increased by 0.25% for patients hospitalized with heart failure and by 0.40% for patients with pneumonia since implementation of the 30-day readmission rules,” according to the study’s lead author, Dr. Rishi K. Wadhera of the Harvard Medical School.

These fractional increases don’t look all that significant until you consider that, during the period of this study, there were 3.2 million Medicare members admitted for heart failure and 3 million more for pneumonia. Phrased differently, the program’s financial penalties caused 8,000 unnecessary deaths from heart failure and 12,000 from pneumonia. Because HRRP used money to discourage doctors from readmitting patients who otherwise would have benefited from hospital treatment, 20,000 people died unnecessarily within the span of five years.
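
The arithmetic behind those headline numbers is easy to verify:

```python
# Check of the study's headline estimates: excess deaths = admissions
# times the reported increase in post-discharge mortality.
heart_failure_admissions = 3_200_000
pneumonia_admissions = 3_000_000

excess_hf = heart_failure_admissions * 0.0025     # +0.25 percentage points
excess_pneumonia = pneumonia_admissions * 0.0040  # +0.40 percentage points

print(round(excess_hf), round(excess_pneumonia), round(excess_hf + excess_pneumonia))
# 8000 12000 20000
```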

What happened here? How do we explain people dying from a program designed to improve clinical outcomes for hospitalized patients? The answer is two-fold.

The first part has to do with the diseases measured. Patients recovering from pneumonia and heart failure often relapse, not because of poor medical care or premature discharge, but due to the nature of their underlying heart and lung problems.

When the very sickest of these patients relapse, they’re almost always readmitted. In these situations, attending physicians accept the HRRP penalties as unavoidable. And because the sickest patients get the follow-up care they need, death rates among this group didn’t increase, according to the research.

The same can’t be said for patients who were on the “borderline.” Instead of readmitting them to the hospital, doctors often refer them to an outpatient observation unit or keep them in the ER for prolonged periods of time. For the hospital, neither action counts as a “readmission” and, therefore, no penalty is incurred. Among these patients, however, death rates rose significantly compared to the period before HRRP’s implementation.

This brings us to the second part of the explanation, which has to do with how the human mind responds to financial incentives. Numerous brain-scanning studies have demonstrated that large rewards and penalties cause our minds to undergo a “shift” in perception, leading to all sorts of questionable behaviors.

Second, we created educational programs to help physicians improve their satisfaction scores through personal coaching and development opportunities. Finally, we made it easier for doctors to satisfy patients by offering convenient access to care through technology. Allowing busy people to email with their doctors or participate in video visits had a positive and powerful impact on the satisfaction scores.

As a result, KP earned the highest ranking for “health plan member satisfaction” from J.D. Power and stayed at the top year after year. We used the same type of approach to improve quality: transparent performance data, broad education and advanced technology. The result was a sharp reduction in hospital utilization and a major decrease in mortality.

Personal financial incentives do create change, but rarely the kind of change patients want or deserve. Physicians are intrinsically motivated to do their best for patients. With the right combination of leadership, resources and a mission-driven spirit, they can and they will.

Dr. Robert Pearl is a former healthcare CEO who teaches at Stanford. More than 10,000 people subscribe to his “Monthly Musings on American Healthcare.” Follow him @RobertPearlMD

Article link: https://www.forbes.com/sites/robertpearl/2019/01/28/financial-incentives/amp/

Six lessons on how to embrace the next-generation operating model – McKinsey

Posted by timmreardon on 01/28/2019
Posted in: Uncategorized. Leave a comment

Realities on the ground highlight what’s really needed to pull off the transformation.
Companies that hope to compete in the digital world are coming to see that it requires a fundamentally new way of working. On the customer-experience side, digital natives have raised the bar considerably; for example, banks today benchmark their websites and apps against companies such as Amazon and Uber. Internally, despite big investments in digitization, process redesign, and automation, the efficiency ratio at most large companies has stalled. Their improvement initiatives reside in different pockets, such as a digital factory or automation center of excellence, and are seldom integrated.

A next-generation operating model (NGOM) is needed to give companies the ability to move quickly and adapt to changing circumstances. The rewards for making the leap to the NGOM are significant: step-change improvements that produce 30 to 50 percent productivity gains, up to 80 percent reduction in turnaround time, up to 10 percent enhancement of customer experience, and 20 to 25 percent growth.

Last year, we identified the two key shifts that are necessary for companies to build the NGOM:

  • From uncoordinated improvement efforts within siloes . . . To an integrated transformation program organized around customer journeys (the interactions a customer has with a company) and internal journeys (end-to-end processes inside the company).
  • From using individual technologies and capabilities in a piecemeal way inside siloes . . . To applying them to journeys in combination and in the right sequence, thereby achieving compound impact.

Over the past couple of years, as we’ve worked with companies to develop their NGOM, six important lessons have emerged.

Lesson #1: Start by working on a high-impact end-to-end journey

Some companies start their digital operations transformation with small pilots that don’t generate significant benefits. Others spend a lot of time analyzing which journey to tackle first. But there’s no single right way to get started. The key is to identify a journey that’s important and begin there.

There are two primary approaches for deciding where to begin:

  • If a “burning platform” at the company is already in mind—an issue with potential to have a big impact on customer experience, new-customer acquisition, customer service, and/or cost and productivity—simply start there. Alternatively, identify no-regret areas (every company has a few) and pick one. Set up a cross-functional, agile team to tackle the chosen area.
  • If there are several burning platforms, evaluate the potential of the next-gen levers across the most important customer journeys at the enterprise level. This will help prioritize and sequence journeys for the next two to three years after embarking on the transformation.

Whichever path is chosen, it’s important to get started quickly in order to demonstrate the from-to path for the next-gen transformation and win over skeptics by showing the value the model can generate. We have found that it’s generally better to take on customer-facing journeys before internal ones. If it’s hard to get the buy-in needed to begin with a whole journey, it’s possible to start smaller—inside a single business unit or geographic site—and later extend the effort to include the entire journey from end to end.

Companies have started with a range of high-impact journeys. A North American bank began with home-mortgage origination on an end-to-end basis. For a global property-and-casualty insurer, the starting point was policy services; for a credit-card issuer, it was customer acquisition; for a life insurer, it was new-business origination; and for an airline, it was the ticket and ancillary sales journey. Companies in other industries have also applied the NGOM, starting with journeys such as production of steel or restocking of store shelves. Despite beginning in quite different places, all of these companies experienced comparable results along key dimensions that drive costs and revenue growth.

Lesson #2: Be systematic when prioritizing and sequencing improvement levers

The NGOM integrates multiple improvement levers—process redesign, digitization, automation, analytics, and outsourcing/offshoring—to achieve step-change improvements. Yet there can be complex interdependencies between levers. In some settings, for example, applying robotic process automation (RPA) before redesigning the process can be a waste of time. It’s critical to understand the interdependencies and to be systematic in selecting the mix and sequence of levers.

Use a structured process to understand the potential of the key levers and the dependencies between them. These can vary depending on such factors as the journey being addressed or location (exhibit).
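
One simple way to make that structure explicit is to treat the levers as a dependency graph and derive a valid sequence from it. The sketch below is hypothetical; the particular dependencies echo the leasing-company example that follows, not any prescribed ordering:

```python
# Hypothetical lever-sequencing sketch: model "lever B needs lever A
# first" as a graph and compute a valid order with a topological sort.
from graphlib import TopologicalSorter

dependencies = {                      # lever -> levers that must come first
    "process redesign": set(),
    "centralization": {"process redesign"},
    "automation (RPA)": {"centralization"},
    "advanced analytics": {"process redesign"},
}

print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['process redesign', 'centralization', 'advanced analytics', 'automation (RPA)']
```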

A vehicle-leasing company recognized there were significant differences in the level of sophistication of its operations in different countries, which meant the applicability of levers varied widely. In one country, where capabilities are less mature, it plans to implement a major process-redesign effort and then centralize key activities, all as a prelude to introducing automation. In another country, by contrast, where the processes are more mature, centralization will be the first step, followed by automation. Another strategy the company is keeping in mind is to favor early initiatives that can generate short-term return on investment.

Lesson #3: Apply the next-generation operating model across all steps of core journeys to get the most value

Often, companies start by applying the NGOM in a focused way—for example, to a customer-facing journey in the organization’s front end. But achieving step-change improvements from the NGOM requires applying it from the point where the customer interacts with the brand all the way through to the back-end systems that support and deliver on that interaction. At most companies, 20 end-to-end journeys account for more than 70 percent of the costs and more than 80 percent of the customer experience. Transforming core journeys touches all parts of an organization and requires spanning “horizontally” to cut across silos and deliver step-change, multidimensional improvements.

We have observed two approaches that can extend the NGOM horizontally:

  • The most common is to start at the front end, with the customer-facing aspects of a journey, and later extend the effort to internal processes housed in the back office.
  • A more aggressive approach is to begin working horizontally from the outset, by assigning responsibility for all aspects of customer-facing journeys, and the key internal processes that support them, to cross-functional agile squads. In this approach, organizational change happens before journey transformation.

One other important way to realize true impact from end-to-end journey transformation is by creating a new role: the journey owner, who has the authority to call on resources from the multiple groups that must contribute to delivering on that journey. In a bank, these could include the front, middle, and back office, as well as IT, risk, and compliance. Because the end-to-end journey owner must engage with many parts of the organization, it is a challenging role and should be filled by a highly capable executive. The end-to-end journey owner should report to a C-level executive: chief digital officer, COO, or even the CEO.

A leading global financial-services company wanted to improve its customer-exception journey. To break down silos within the business, it started by standing up a cross-functional team with process redesign, agile, digital, and automation expertise and representation from compliance, legal, risk, and privacy. The team then focused on improving the entire customer journey. To enhance the customer experience, it tackled the front end by digitizing its customer self-service portal and improving existing interactive voice response (IVR). In parallel, it redesigned processes to drive efficiency in the back office, enabling future RPA opportunities. This “horizontal” approach, spanning multiple parts of the organization, is on pace to deliver close to a 20 percent reduction in cost of service delivery, a more than 35 percent improvement in work efficiency and quality of service, and a more than 20 percent gain in people-engagement scores.

Lesson #4: Start on the talent challenge immediately

Most companies do not fully grasp their talent challenge. Not only do they need new skilled people, but they also need to reskill or redeploy existing staff, since certain improvement levers, such as RPA, will replace some jobs and transform many others. Because these challenges don’t appear immediately, companies tend not to focus on them until it’s too late, at which point talent management becomes a bottleneck on their path toward the NGOM.

Organizations need to start preparing right away to get the talent needed for the future workforce. The first step is to assess future skill demand in key areas such as data science and agile, diagnose the current supply, and launch initiatives to fill the gap. This requires HR to work closely with the executives in charge of implementing the NGOM to gain a complete understanding of what the real skill needs are.

Reskilling and redeploying staff is a critical step in monetizing the value generated by the next-generation operating model. For example, a data-entry analyst whose former work is automated could be upskilled to oversee RPA systems. It’s worth noting that not everyone can be reskilled to the roles in greatest demand; a call-center rep can’t be trained in a few months to become a data scientist or an agile scrum master.
A global bank set up a digital factory in a new facility in its home country’s leading tech region, plus satellite digital factories in the tech centers of four countries where it had subsidiaries. It built out the offices so they resembled start-ups and established partnerships with university research centers and early-stage fintechs to create a collaborative environment that leveraged the new offices’ locations inside larger tech ecosystems. Its recruiting materials emphasized that positions in the digital factory were anything but typical banking jobs.

Lesson #5: Don’t let technology debt scare you off

Many companies are in technology debt and believe they cannot achieve the full potential of the NGOM without fully revamping their IT architecture and systems. But no matter what the condition of their current infrastructure, companies can take advantage of the NGOM by making incremental tech upgrades along their transformation journey and so accrue significant benefits without needing to wait for a full system upgrade.
Companies have followed two paths in addressing their technology debt:
The first involves building individual databases and applications, driven by specific use cases, separate from the existing legacy infrastructure. This creates new capabilities, which can then connect back with legacy systems on a case-by-case basis, using application programming interfaces (APIs) and microservices.
The second takes advantage of cloud and open-source technologies, which now make it affordable to build entirely new infrastructure, on a limited scale, to meet the emerging needs of the NGOM, while continuing to run existing legacy systems in parallel. As this new, clean stack grows over time, it can increasingly be used in lieu of the legacy infrastructure.
A large financial-services firm launched an analytics initiative that relied on machine learning, along with associated digital tools and process redesign. When data from its digital customer-acquisition channels were fed into its existing infrastructure, the legacy systems effectively broke. In response, the company stood up new databases and analytics engines by deploying inexpensive cloud technologies and open-source tools. These systems worked well, and the company soon wanted to apply the new analytic models to its old data, which was housed in its legacy core. To make this work, the company wired the new systems and legacy core together with APIs and microservices. The company is now continuing along this path on a case-by-case basis, using the gains generated by earlier activities to pay for the next round of work.
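
A minimal sketch of that wiring pattern, under stated assumptions: the route and the fetch_from_legacy_core stub below are invented for illustration, not taken from the article. The idea is that a thin API service fronts the legacy core, so new cloud databases and analytics engines never touch it directly.

```python
# Hypothetical API wrapper over a legacy core, in the spirit of the
# case above: new systems call this microservice; only the service
# talks to the legacy system of record.
from typing import Optional
from fastapi import FastAPI, HTTPException

app = FastAPI()

def fetch_from_legacy_core(customer_id: str) -> Optional[dict]:
    # Stand-in for a mainframe query or stored-procedure call.
    legacy_records = {"c-001": {"id": "c-001", "segment": "retail"}}
    return legacy_records.get(customer_id)

@app.get("/customers/{customer_id}")
def get_customer(customer_id: str) -> dict:
    record = fetch_from_legacy_core(customer_id)
    if record is None:
        raise HTTPException(status_code=404, detail="customer not found")
    return record
```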

Lesson #6: Keep evolving and adapting with continuous improvement

Our experience from transformation efforts has shown that the 80/20 rule applies. The initial big effort generates the majority of the value—the 80 percent. But the remaining 20 percent is still significant. Rather than stretch to “do it all” in one shot, find the 80/20 balance, and commit to continuous improvement to adapt and optimize the model.

To achieve the last 20 percent of value from the NGOM, a continuous-improvement mind-set should become the new “steady state.” That means making agile practices a way of life rather than a project methodology: continually generating new ideas, prototyping them quickly, testing them to obtain feedback, and then iterating based on that input. Even after primary customer-facing journeys have been reconfigured, sizable productivity gains can still be achieved by tackling legacy operations and corporate support functions such as finance and HR. The NGOM is not static. It must continually evolve and adapt.

At regular intervals—every three years or so—it’s also worth taking out a clean sheet of paper and reinventing with zero-based design.

Conclusion: Embrace the journey

Adopting the next-generation operating model is a journey, and insights on how best to embrace it are evolving. We expect to learn more as the journey continues. But so far, these six lessons have consistently made a difference for the companies that applied them.

About the author(s)

Tod Camara is a consultant in McKinsey’s Chicago office, where Alex Singla is a senior partner; Adele Hu is a specialist in the Toronto office, where Rohit Sood is a senior partner; and Jasper van Ouwerkerk is a senior partner in the Amsterdam office.

The authors wish to thank Matthew Craddy, Somesh Khanna, Eric Lamarre, Elixabete Larrea, Chris McShea, and Debasish Patnaik for their contributions to this article.

Article link: https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/six-lessons-on-how-to-embrace-the-next-generation-operating-model

The next-generation operating model for the digital world – McKinsey

Posted by timmreardon on 01/27/2019
Posted in: Uncategorized. Leave a comment

By Albert Bollard, Elixabete Larrea, Alex Singla, and Rohit Sood

Companies need to increase revenues, lower costs, and delight customers. Doing that requires reinventing the operating model.

Companies know where they want to go. They want to be more agile, quicker to react, and more effective. They want to deliver great customer experiences, take advantage of new technologies to cut costs, improve quality and transparency, and build value.

The problem is that while most companies are trying to get better, the results tend to fall short: one-off initiatives in separate units that don’t have a big enterprise-wide impact; adoption of the improvement method of the day, which almost invariably yields disappointing results; and programs that provide temporary gains but aren’t sustainable.

We have found that for companies to build value and provide compelling customer experiences at lower cost, they need to commit to a next-generation operating model. This operating model is a new way of running the organization that combines digital technologies and operations capabilities in an integrated, well-sequenced way to achieve step-change improvements in revenue, customer experience, and cost.

A simple way to visualize this operating model is to think of it as having two parts, each requiring companies to adopt major changes in the way they work:

  • The first part involves a shift from running uncoordinated efforts within siloes to launching an integrated operational-improvement program organized around customer journeys (the set of interactions a customer has with a company when making a purchase or receiving services) as well as the internal journeys (end-to-end processes inside the company). Examples of customer journeys include a homeowner filing an insurance claim, a cable-TV subscriber signing up for a premium channel, or a shopper looking to buy a gift online. Examples of internal-process journeys include Order-to-Cash or Record-to-Report.
  • The second part is a shift from using individual technologies, operations capabilities, and approaches in a piecemeal manner inside siloes to applying them to journeys in combination and in the right sequence to achieve compound impact.

Let’s look at each element of the model and the necessary shifts in more detail:

Shift #1: From running uncoordinated efforts within siloes to launching an integrated operational-improvement program organized around journeys

Many organizations have multiple independent initiatives underway to improve performance, usually housed within separate organizational groups (e.g., front and back office). This can make it easier to deliver incremental gains within individual units, but the overall impact is most often underwhelming and hard to sustain. Tangible benefits to customers—in the form of faster turnaround or better service—can get lost due to hand-offs between units. These become black holes in the process, often involving multiple back-and-forth steps and long lag times. As a result, it’s common to see individual functions reporting that they’ve achieved notable operational improvements, even as customer satisfaction and overall costs remain unchanged.

Instead of working on separate initiatives inside organizational units, companies have to think holistically about how their operations can contribute to delivering a distinctive customer experience. The best way to do this is to focus on customer journeys and the internal processes that support them. These naturally cut across organizational siloes—for example, you need marketing, operations, credit, and IT to support a customer opening a bank account. Journeys—both customer-facing and end-to-end internal processes—are therefore the preferred organizing principle.

Transitioning to the next-generation operating model starts with classifying and mapping key journeys. At a bank, for example, customer-facing journeys can typically be divided into seven categories: signing up for a new account; setting up the account and getting it running; adding a new product or account; using the account; receiving and managing statements; making changes to accounts; and resolving problems. Journeys can vary by product/service line and customer segment. In our experience, targeting about 15–20 top journeys can unlock the most value in the shortest possible time.

We often find that companies fall into the trap of simply trying to improve existing processes. Instead, they should focus on entirely reimagining the customer experience, which often reveals opportunities to simplify and streamline journeys and processes that unlock massive value. Concepts from behavioral economics can inform the redesign process in ingenious ways. Examples include astute use of default settings on forms, limiting choice to keep customers from feeling overwhelmed, and paying special attention to the final touchpoint in a series, since that’s the one that will be remembered the most.

In 2014, a major European bank announced a multiyear plan to revamp its operating model to improve customer satisfaction and reduce overall costs by up to 35 percent. The bank targeted the ten most important journeys, including the mortgage process, onboarding of new business and personal customers, and retirement planning. Eighteen months in, operating costs are lower, the number of online customers is up nearly 20 percent, and the number using its mobile app has risen more than 50 percent. (For more on reinventing customer journeys, see “Putting customer experience at the heart of next-generation operating models,” forthcoming on McKinsey.com.)

Shift #2: From applying individual approaches or capabilities in a piecemeal manner to adopting multiple levers in sequence to achieve compound impact

Organizations typically use five key capabilities or approaches (we’ll call them “levers” from now on) to improve operations that underlie journeys (Exhibit 1):

  • Digitization is the process of using tools and technology to improve journeys. Digital tools have the capacity to transform customer-facing journeys in powerful ways, often by creating the potential for self-service. Digital can also reshape time-consuming transactional and manual tasks that are part of internal journeys, especially when multiple systems are involved.
  • Advanced analytics is the autonomous processing of data using sophisticated tools to discover insights and make recommendations. It provides intelligence to improve decision making and can especially enhance journeys where nonlinear thinking is required. For example, insurers with the right data and capabilities in place are massively accelerating processes in areas such as smart claims triage, fraud management, and pricing.
  • Intelligent process automation (IPA) is an emerging set of new technologies that combines fundamental process redesign with robotic process automation and machine learning. IPA can replace human effort in processes that involve aggregating data from multiple systems or taking a piece of information from a written document and entering it as a standardized data input. There are also automation approaches that can take on higher-level tasks. Examples include smart workflows (to track the status of the end-to-end process in real time, manage handoffs between different groups, and provide statistical data on bottlenecks), machine learning (to make predictions based on inputs and provide insights on recognized patterns), and cognitive agents (technologies that combine machine learning and natural-language generation to build a virtual workforce capable of executing more sophisticated tasks). To learn more about this, see “Intelligent Process Automation: The engine at the core of the next generation operating model.” A minimal illustration of the document-to-data step appears after this list.
  • Business process outsourcing (BPO) uses resources outside of the main business to complete specific tasks or functions. It often uses labor arbitrage to improve cost efficiency. This approach typically works best for processes that are manual, are not primarily customer facing, and do not influence or reflect key strategic choices or value propositions. The most common example is back-office processing of documents and correspondence.
  • Lean process redesign helps companies streamline processes, eliminate waste, and foster a culture of continuous improvement. This versatile methodology applies well to short-cycle as well as long-cycle processes, transactional as well as judgment-based processes, client-facing as well as internal processes.
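
As referenced above, here is a minimal, hypothetical illustration of the document-to-data step that IPA automates. A real IPA stack would combine process redesign, RPA connectors, and machine-learning models; the regular expressions here merely stand in for that extraction layer, and the field names are invented:

```python
# Toy document-to-data extraction: pull fields from free text and emit
# a standardized record, the kind of manual step IPA replaces.
import re
from datetime import date

def extract_claim(text: str) -> dict:
    claim_id = re.search(r"claim\s+#?(\d+)", text, re.IGNORECASE)
    amount = re.search(r"\$([\d,]+(?:\.\d{2})?)", text)
    return {
        "claim_id": claim_id.group(1) if claim_id else None,
        "amount_usd": float(amount.group(1).replace(",", "")) if amount else None,
        "processed_on": date.today().isoformat(),
    }

print(extract_claim("Re: claim #48213, approved payout of $1,250.00"))
# {'claim_id': '48213', 'amount_usd': 1250.0, 'processed_on': '2019-...'}
```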

Guidelines for implementing these levers

In considering which levers to use and how to apply them, it’s important to think in a holistic way, keeping the entire journey in mind. Three design guidelines are crucial:

1. Organizations need to ensure that each lever is used to maximum effect. Many companies believe they’re applying the capabilities to the fullest, but they’re actually not getting as much out of them as they could. Some companies, for example, apply a few predictive models and think they’re really pushing the envelope with analytics—but in fact, they’re only capturing a small fraction of the potential value. This often breeds a false complacency: because an effort is “already under way” or “has been tried,” the organization is insulated from the learning that would otherwise drive it to higher performance. Having something under way is a truism; everyone has something under way in these domains. It is the companies that press each lever to its limit that reap the rewards. Executives need to be vigilant, challenge their people, and resist the easy answer.

Article link: https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/the-next-generation-operating-model-for-the-digital-world

A winning operating model for digital strategy – McKinsey

Posted by timmreardon on 01/26/2019
Posted in: Uncategorized. Leave a comment

Digital is driving major changes in how companies set and execute strategy. New survey results point to four elements that top performers include in their digital-strategy operating model.

For many companies, the process of building and executing strategy in the digital age seems to generate more questions than answers. Despite digital’s dramatic effects on global business—the disruptions that have upended industries and the radically increasing speed at which business is done—the latest McKinsey Global Survey on the topic suggests that companies are making little progress in their efforts to digitalize the business model. Respondents who participated in this year’s and last year’s surveys report a roughly equal degree of digitalization as they did one year ago, suggesting that companies are getting stuck in their efforts to digitally transform their business.

The need for an agile digital strategy is clear, yet it eludes many—and there are plenty of pitfalls that we know result in failure. We have looked at how some companies are reinventing themselves in response to digital, not only to avoid failure but also to thrive. In this survey, we explored which specific practices organizations must have in place to shape a winning strategy for digital—in essence, what the operating model looks like for a successful digital strategy of reinvention. Based on the responses, there are four areas of marked difference in how companies with the best economic performance approach digital strategy, compared with all others:

  • The best performers have increased the agility of their digital-strategy practices, which enables first-mover opportunities.
  • They have taken advantage of digital platforms to access broader ecosystems and to innovate new digital products and business models.
  • They have used M&A to build new digital capabilities and digital businesses.
  • They have invested ahead of their peers in digital talent.

Increase the agility of creating, executing, and adjusting strategy

One of the biggest factors that differentiate the top economic performers from others is how quick and adaptable they are in setting, executing, and adjusting their digital strategies—in other words, the velocity and adaptability of their operating models for digital strategy. Both are necessary for companies to achieve first-mover (or very-fast-follower) status, which we know to be a source of significant economic advantage. So how do they do it? We looked at the frequency with which companies follow 11 operational practices of digital strategy. With the exception of M&A—which typically requires a much longer time frame than the other ten, often due to regulatory reasons—respondents in the top revenue decile say their companies carry out each one more frequently than their peers (Exhibit 1). The link between frequency and performance also holds up when looking at earnings before interest and taxes (EBIT).

That speed in strategy links with financial outperformance is not surprising and is consistent with our other work on strategy planning. As the pace of digital-related change continues to accelerate, companies are required to make larger bets and to reallocate capital and people more quickly. These tactical changes to the creation, execution, and continuous modification of digital strategy enable companies to apply a “fail fast” mentality and become better at both spotting emerging opportunities and cutting their losses in obsolescent ones, which enables greater profitability and higher revenue growth.

Invest in ecosystems, digital products, and operating models

The companies that outperform on revenue and EBIT also differ from the rest in their embrace of the economic changes that digital technologies have wrought. Based on the results, they have done so in three specific ways: taking advantage of new digital ecosystems, focusing product-development efforts on brand-new digital offerings, and innovating the business model. We know that digital platforms have enabled the creation of new marketplaces, the sharing of data, and the benefits of network effects at a scale that was impossible just a few years ago. As these factors have converged, the digital ecosystems created by these platforms are blurring industry boundaries and changing the ways that companies evaluate the economics of their business models, their customers’ needs, and who their competitors—and partners—are.

The top EBIT performers are taking better advantage of these ecosystem-based dynamics than other companies—namely, by using digital platforms much more often to access new partners and customers. Respondents at these companies are 39 percent more likely than others are to say they do so. And while the share of global sales that move through these ecosystems is still less than 10 percent, other McKinsey research predicts that this share will grow to nearly 30 percent by 2025, making platforms an ever more critical element of digital strategy.

The needs of customers become broader and more integrated in an ecosystem-based world, and the companies that are already active in their respective ecosystems are better positioned to understand these needs and meet them (either on their own or with partners) before their peers do. It makes sense, then, that the top performers seem to be developing much more innovative offerings than their peers. On average, companies’ digital innovations most often involve adjustments to existing products. Yet respondents at the top-performing companies say they focus on creating brand-new digital offerings (Exhibit 2). What’s more, these respondents are about 60 percent more likely than others are to agree that they are more advanced than peers in adopting digital technologies to help them do so. This result is consistent with our previous findings that first movers and early adopters of digital technologies and innovations also outperform their peers.

Last, innovation of the business model is more common at the top-performing companies. In our past survey, only 8 percent of respondents said their companies’ current business models would remain economically viable without making any further digital-based changes. In the newest survey, we see that the companies that have embraced digital are well ahead of their peers in their preparation for digital’s new economic realities. At the top performers, respondents say they have invested more of their digital capital in new digital businesses, compared with all other respondents (Exhibit 3). Our research also shows that companies overall invested a greater share in new digital businesses as the overall digital maturity of their sectors increased. The more successful companies appear to be the ones that made these moves earlier than their peers, rather than being forced into making such investments late in the game.

Use M&A to build digital capabilities and businesses

According to the results, M&A is another differentiator between the top-performing companies and everyone else. Not only are they spending more than others on M&A, but they are also investing in different types of M&A activities (Exhibit 4). At the winners, respondents report spending more than twice as much on M&A, as a share of annual revenue, as their counterparts elsewhere. The same is true of respondents reporting top-decile EBIT growth, relative to respondents at other organizations.

Given the pace of digital-related changes and the challenges companies face to match that speed through organic growth alone, this isn’t so surprising. What is surprising, however, is that top economic performers take a different approach to their M&A activities. While top performers and their peers have used some part of their overall digital investments to acquire new digital businesses in recent years, the top performers are investing more in acquiring both new digital businesses and new capabilities. By contrast, other respondents say their companies focus most of their M&A spending on nondigital ventures—an area where lower-performing companies seem to be doubling down.

Invest ahead of peers in digital talent

From earlier work, we know that getting the right digital talent is a key enabler for digital success—a point that our latest findings only reinforce. Talent is also a major pain point: qualified digital talent is a scarce commodity, as the pace of digital still outstrips the supply of people who can deliver it. But the top economic performers are making a greater effort to solve this problem. Compared with others, these respondents say their companies are dedicating much more of their workforce to digital initiatives (Exhibit 5). It’s not just the degree of investment that distinguishes top performers, though. They are also much nimbler in their use of digital talent, reallocating these employees across the organization nearly twice as frequently as their peers do. This agility enables more rapid movement of resources to the highest-value digital efforts—or to clearing out a backlog of digital work—and a better alignment between resources and strategies.

Looking ahead

  • Make your strategy process more dynamic. By definition, a digital strategy must adapt to the digital-driven changes happening outside the company, as well as within it. Given the breakneck pace of these changes, such a strategy must keep up with the pace of digital and enable first-mover opportunities by being revisited, iterated upon, and adjusted much more frequently than strategies have been in the past. Companies need their digital strategies to act as a road map for ongoing transformation—a living organism that evolves along with the business landscape. In other work, we laid out the four main fights that companies must win to build truly dynamic digital strategies. Organizations must educate their business leaders on digital and foster an attacker’s perspective, so people are more likely to look at their business, industry, and the role of digital through the eyes of new competitors. They must galvanize senior executives to action by building top-team-effectiveness programs. Organizations also must leverage data-driven insights to test and learn—and correct course—quickly. And they must fight the diffusion of their efforts and resources—a constant challenge, given the simultaneous need to digitalize their core business and innovate with new business models. These steps will put companies in a better position to move first in delivering new products and meeting customers’ and partners’ evolving needs in the new ecosystems that platforms are creating.
  • Invest in talent and capabilities early and aggressively. Talent is already known as one of the hardest issues to solve as companies transform themselves in their pursuit of digitalization. The results confirm that companies need to embrace this reality and then look at how they can solve it best, whether through smarter, more dynamic allocation of these resources or the use of M&A to accelerate the building of new digital capabilities. Digital is driving an ever-faster pace of innovation, and companies can take advantage of the potential benefits only if they have the capabilities to harness it. For the survey’s top performers, one way forward is leveraging M&A to help build their digital capabilities, rather than trying to build them through a slower, organic approach. These companies are also getting the most from their digital capabilities and investments by deploying them in much more agile ways and creating a more flexible, responsive operating model.
  • Redefine how you measure success. The digital era requires that companies move nimbly in order to succeed. Yet many are still measuring performance with the same metrics they used previously—which were designed for a slower pace of business and a rigid strategy-setting process. Companies must move away from old metrics (market share, for example) that are no longer meaningful indicators of economic success. With markets becoming ill-defined due to shifts in industry boundaries and shrinking economic pies within a given sector, market share is no longer a gold-standard metric or even relevant. Companies need to hold themselves to new standards that will indicate whether or not they are truly leading the pack on innovation, productivity, and the adoption of digital technologies. In our experience, outcomes such as being first to market with innovations, leading on productivity, and working with other businesses in the ecosystem (that is, moving from an “us versus them” mind-set on digital to one of partnership) are better indicators of future digital success.

About the author(s)

The survey content and analysis were developed by Jacques Bughin, a director of the McKinsey Global Institute and senior partner in McKinsey’s Brussels office; Tanguy Catlin, a senior partner in the Boston office; and Laura LaBerge, a senior expert in the Stamford office.

They wish to thank Soyoko Umeno for her contributions to this work.

Article link: https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/a-winning-operating-model-for-digital-strategy

The value of digital ID for the global economy and society – McKinsey

Posted by timmreardon on 01/25/2019
Posted in: Uncategorized. Leave a comment

“Good” use of digital identification is a new frontier in value creation for individuals, companies, and countries. A panel of leading voices discusses how to unlock the benefits while mitigating the risks.

On January 23 at 5:00 pm CET, global experts from business, government, and the social sector will discuss how digital identification can provide both economic and societal benefit, as well as the keys to success in implementing digital ID. They will also discuss how to guard against the challenges of digital ID as a potential “dual-use technology” that can be used to the benefit of society but can also be used for undesirable purposes.
This panel discussion will include Mike Kubzansky, managing partner of Omidyar Network; James Manyika, chairman and director of the McKinsey Global Institute; Nandan Nilekani, co-founder of Infosys and founding chairman of UIDAI; Jüri Ratas, prime minister of Estonia; and Mary Snapp, corporate vice president of philanthropies at Microsoft. It will be streamed live from the 2019 World Economic Forum Annual Meeting in Davos and available as a recording after the live event has concluded.

About digital identification

In an era of rapid technological change, digital identification provides a significant opportunity for value creation for individuals and institutions.

Nearly one billion people globally lack a legally recognized form of identification, according to the World Bank ID4D database. The remaining 6.6 billion people either have some form of identification but limited access to services that increasingly are provided online, or they are active online but struggle to keep track of their digital footprint securely and efficiently.

Individuals can use digital identification, or “digital ID,” to be verified unambiguously through a digital channel, unlocking access to banking, government benefits, education, and many other critical services.

Programs employing this relatively new technology have had mixed success to date—many have failed to attain even modest levels of usage, while a few have achieved large-scale implementation. Yet well-designed digital ID not only enables civic and social empowerment, but also makes possible real and inclusive economic gains—a less well understood aspect of the technology.
Digital identification: A key to inclusive growth

In new research, the McKinsey Global Institute (MGI) develops a framework to understand the potential economic impact of digital ID, informed by an analysis of nearly 100 ways in which digital ID can be used, with deep dives into seven diverse economies. The potential for economic value creation could be significant, as “good” digital ID increases inclusion, formalization, and transparency while promoting digitization; however, the risks need to be addressed.

Download the summary of findings, “Digital identification: A key to inclusive growth” (PDF, 0.99MB).

In the seven focus countries of Brazil, China, Ethiopia, India, Nigeria, the United Kingdom, and the United States, the report examines how digital identification can create significant economic value for individuals, businesses, and governments around the world:

  • Good use of digital ID could unlock economic value equivalent to 3 percent of GDP for a typical mature economy and 6 percent for a typical emerging economy by 2030.
  • Roughly half the potential could accrue to individuals, who benefit as consumers (including of financial services) and as taxpayers and beneficiaries who gain access to efficient government e-services.
  • Institutions could gain from higher productivity, cost savings, and fraud reduction, for example by reducing customer-onboarding costs and streamlining employee verification.
  • Responsible use and high adoption of digital ID programs are not automatic and require the right principles and policies. All stakeholders – government, business, and civil society – can take steps toward this.

Article link: https://www.mckinsey.com/featured-insights/innovation-and-growth/the-value-of-digital-id-for-the-global-economy-and-society

American Hospital Association, other groups call for widespread effort to accelerate interoperability – Healthcare IT News

Posted by timmreardon on 01/25/2019
Posted in: Uncategorized. Leave a comment

By Diana Manos

January 22, 2019

More and more healthcare organizations are urging a broader swath of the industry to get involved in driving interoperability. The latest is a Jan. 22 report from seven leading national hospital associations urging “all stakeholders” to take part.

The report, titled “Sharing Data, Saving Lives: The Hospital Agenda for Interoperability,” outlines some pathways to get the job done.

The report’s contributors hold that getting interoperability where it finally needs to be in healthcare will take work with policymakers and other stakeholders on five fronts: data security, enhanced infrastructure, standards that work, connectivity beyond EHRs, and shared best practices.

WHY IT MATTERS

According to Chip Kahn, president and CEO of the Federation of American Hospitals, data is everything when it comes to quality care. Having the right information at the right time is critical for clinicians and patients alike.

THE BIGGER TREND

Hospitals and health systems are making progress in sharing health information, with 93 percent offering patients online access to their records and 88 percent sharing records with ambulatory care providers outside their system. “They have worked to create the most interoperable systems possible given the tools available to them, but at great cost and effort,” the groups added.

That said, the federal government is also making moves to bridge the interoperability gap. Last week, the Office of the National Coordinator for Health IT released the Interoperability Standards Advisory Reference 2019, its latest version of standards to promote successful interoperability. The reference manual is traditionally how ONC coordinates the identification, assessment, and public awareness of interoperability standards and implementation specifications, and it encourages all stakeholders, clinical and research alike, to use them. ONC also encourages pilot testing of the standards.

ON THE RECORD

“We see interoperability in action all around us,” said AHA President and CEO Rick Pollack. “Mobile phones can call each other regardless of make, model, or operating system. The hospital field has made good headway, but it’s time to complete the job. We are united in calling for a truly interoperable system that allows all providers and patients to benefit from shared health records and data, leading to fully informed care decisions.”

The report was written by America’s Essential Hospitals, the American Hospital Association, the Association of American Medical Colleges, the Catholic Health Association of the United States, the Children’s Hospital Association, the Federation of American Hospitals, and the National Association for Behavioral Healthcare. It also names the benefits of fully interoperable data for patients and providers, lists the challenges to getting there, and describes how interoperable records can improve outcomes.

Diana Manos is a Washington, D.C.-area freelance writer specializing in healthcare, wellness and technology. 

Article link: https://www.healthcareitnews.com/news/american-hospital-association-other-groups-call-widespread-effort-accelerate-interoperability

Electronic Health Records Were Supposed To Cut Medical Costs. They Haven’t – Forbes

Posted by timmreardon on 01/22/2019
Posted in: Uncategorized. Leave a comment

By Roberta Holland

Despite the promise that electronic health records would cut billing costs, savings have yet to materialize, according to a major new study by researchers at Harvard Business School and Duke University.

“The theory was that part of having electronic records was to lower the cost. We didn’t find much evidence for that,” says coauthor Robert S. Kaplan, senior fellow and Marvin Bower Professor of Leadership Development, Emeritus, at Harvard Business School.

The research, conducted at Duke University Medical Center in 2016 and 2017, found that generating a single bill cost anywhere from $20 to $215 depending on the type of visit. That’s despite the fact that Duke has an established electronic health record (EHR) system and an efficient, centralized billing department, Kaplan says.

Administrative costs account for at least a quarter of health care spending in the United States. That is twice the administrative overhead found in Canada and significantly higher than in most other high-resource countries.

Adoption of certified EHR systems was seen as a potential antidote. The theory was that digitizing patient records and building billing workflows around them would create efficiencies that drive down processing costs.

The problem: Automation may help on some record-keeping tasks, but it also imposes its own costs. “In fact, more costs were shifted over to doctors in that they had to enter more codes into the so-called automated system,” Kaplan says. “Turns out that that gets them annoyed, and it distracts them from dealing with the patient.”

The study, published in the February 20, 2018, issue of the Journal of the American Medical Association, looked at five types of visits: primary care visits, ER visits resulting in a patient discharge, general medicine hospital stays, outpatient surgical procedures, and inpatient surgeries.

Findings included:

  • A primary care visit required 13 minutes of billing and insurance-related activity, costing $20. The time and cost ramped up to 100 minutes and $215 for an inpatient surgery.
  • Just the physicians’ portion of the time and cost spent on billing amounted to 3 minutes and about $6 for a primary care visit, up to 15 minutes and $51 for surgery.
  • Physicians, who cost between $3 and $8 per minute, are doing administrative tasks that a scribe costing 50 cents a minute could do better, Kaplan says.

To calculate the costs, the study used time-driven activity-based costing, a method Kaplan co-developed. “We tracked how much time each person spent dealing with that bill, we determined the cost per minute of each person, and we just multiplied the two numbers and added them up across all the personnel involved in preparing a bill,” Kaplan says.
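A minimal sketch of that arithmetic is below; the staff mix, minutes, and per-minute rates are invented for illustration (only the physician rate echoes the $3 to $8 range reported above), but the method, multiplying each person's minutes by their cost per minute and summing, is time-driven activity-based costing as Kaplan describes it.

```python
# Time-driven activity-based costing for one bill: for each person who
# touches the bill, multiply minutes spent by that person's cost per
# minute, then sum across all personnel. Staff mix, minutes, and rates
# below are illustrative, not study data.

billing_activity = [
    # (role, minutes spent on this bill, cost per minute in USD)
    ("physician",     3, 6.00),  # study: physicians cost $3-$8 per minute
    ("billing clerk", 8, 0.80),
    ("coder",         2, 1.00),
]

bill_cost = sum(minutes * rate for _, minutes, rate in billing_activity)
total_minutes = sum(minutes for _, minutes, _ in billing_activity)

print(f"{total_minutes} minutes, ${bill_cost:.2f} per bill")
# -> 13 minutes, $26.40 per bill (the study's primary care figure was
#    13 minutes and $20; the dollar amount depends on the staff mix)
```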

Working with Kaplan were Phillip Tseng, Duke University School of Medicine; Barak D. Richman, Duke Law School and the Duke-Margolis Center for Health Policy; Mahek A. Shah, Institute for Strategy and Competitiveness at HBS; and Kevin A. Schulman, the Duke Clinical Research Institute, Duke University School of Medicine, and HBS. The researchers plan to replicate the study internationally to compare costs in Switzerland and other countries.

It’s complicated
As for why administrative costs are so high in the United States and why EHR systems haven’t cut costs more, the answer is complicated, Kaplan says.

One reason is the multiplicity of payers, including Medicare, Medicaid, and commercial insurance companies, with reams of contracts differing in coverage and procedure prices. The different contracts don’t even share a common structure. (In Canada, by contrast, national health care is a single-payer system.)

In other industries, he says, complexity that doesn’t add value is whittled away through competition. “In health care there’s not a lot of competition that extinguishes bad practice, so bad practice can persist for a long period of time.”

Other cost drivers include the nature of health care transactions with third-party involvement, the somewhat clunky EHR systems themselves that combine billing with health records, and fee-for-service payment systems. “Every time the patient moves, there’s another charge we hit them with.”
Bundled payments would help
Kaplan advocates a bundled payment system, where there is one negotiated price for a specific condition, covering everything from the patient’s copay to any medication needed during the procedure. Medicare and large private employers General Electric and Walmart are leading the drive. In General Electric’s case, its employees and retirees are sent to New England Baptist Hospital for joint replacement surgery. Because the provider is getting paid one price, that leads to greater accountability for patient outcomes, Kaplan says.

“When we buy a car, we don’t send a separate check to the tire manufacturer or the fuel pump producer. We make one payment and we get everything in there,” Kaplan says. “But anyone who’s visited a doctor or a hospital recently knows you get a sequence of charges over the next six months.”

Bundled payments would be so much simpler.  “We can run it off an Excel spreadsheet,” says Kaplan.
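To see the simplification, compare an itemized fee-for-service charge stream with a single negotiated episode price; every line item and price in this sketch is hypothetical.

```python
# Fee-for-service: every step of a joint replacement generates its own
# charge, each coded, billed, and collected separately over months.
fee_for_service_charges = {
    "surgeon": 4500, "anesthesia": 1200, "facility": 18000,
    "implant": 6000, "medication": 400, "physical_therapy": 1900,
}

# Bundled payment: one negotiated price covers the whole episode of
# care, from copay to post-op medication. All prices are hypothetical.
BUNDLED_PRICE = 30000

print(len(fee_for_service_charges), "separate charges totaling",
      f"${sum(fee_for_service_charges.values()):,}")
print("versus 1 bundled bill for", f"${BUNDLED_PRICE:,}")
# -> 6 separate charges totaling $32,000
#    versus 1 bundled bill for $30,000
```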

Article link:  https://www.forbes.com/sites/hbsworkingknowledge/2018/03/07/electronic-health-records-were-supposed-to-cut-medical-costs-they-havent/#7e504d9a5060
