healthcarereimagined

Envisioning healthcare for the 21st century


Moving Past the EHR Interoperability Blame Game – NEJM

Posted by timmreardon on 03/26/2017
Posted in: Uncategorized.

Article · February 22, 2017

Julia Adler-Milstein, PhD

University of Michigan

NEJM


As a researcher who studies electronic health records (EHRs), I’ve lost count of the times I’ve been asked “Why can’t the systems talk to each other?” or, in more technical terms, “Why don’t we have interoperability?” The substantial increase in electronic health record adoption across the nation has not led to health data that can easily follow a patient across care settings. Still today, essential pieces of information are often missing or cumbersome to access. Patients are frustrated, and clinicians can’t make informed decisions. When our banks talk to each other seamlessly and online ads show us things we’ve already been shopping for, it is hard to understand why hospitals and doctors’ offices still depend on their fax machines.

A big part of the reason is that interoperability of health information is hard. If it were easy, we would have it, or at least have more of it, by now. Though it’s a technological issue, it’s not just a technological issue. As we have seen in other industries, interoperability requires all parties to adopt certain governance and trust principles, and to create business agreements and highly detailed guides for implementing standards. The unique confidentiality issues surrounding health data also require the involvement of lawmakers and regulators. Tackling these issues requires multi-stakeholder coordinated action, and that action will only occur if strong incentives promote it.

Though billions in monetary incentives fueled EHR adoption itself, they only weakly targeted interoperability. I have come to believe that we would be substantially farther along if several key stakeholders had publicly acknowledged this reality and had made a few critical decisions differently. While it’s too late for “do-overs,” understanding initial missteps can guide us to a better path. Here is how those key stakeholders, intentionally or not, have slowed interoperability.

Policymakers

The 2009 Health Information Technology for Economic and Clinical Health (HITECH) legislation contained two basic components: a certification program to make sure that EHRs had certain common capabilities, and the “Meaningful Use” program, divided into three progressively more complex stages, that gave providers incentive payments for using EHRs. The legislation specified “health information exchange” (HIE) as one of the required capabilities of certified EHR systems. However, the Centers for Medicare and Medicaid Services (CMS) and the Office of the National Coordinator for Health IT (ONC) had substantial latitude to decide how to define health information exchange as well as how to include it in the Meaningful Use program and the accompanying EHR certification criteria.

CMS and ONC decided to defer the initial HIE criterion — electronically transmitting a summary-of-care record following a patient transition — to Stage 2 of the Meaningful Use program. While many of the capabilities required for providers to meet this criterion would have been challenging to develop on the Stage 1 Meaningful Use timeline, deferring HIE to Stage 2 allowed EHR systems to be designed and adopted in ways that did not take HIE into account, and there were no market forces to fill the void. When providers and vendors started working toward Stage 2, which included the HIE criterion, they had to change established systems and workflows, creating a heavier lift than if HIE had been included from the start.

While the stimulus program needed to get money out the door quickly, it might have been worth prioritizing HIE over other capabilities in Stage 1 or even delaying Stage 1 Meaningful Use to include HIE from the start. At a minimum, this strategy would have revealed interoperability challenges earlier and given us more time to address them.

A larger problem is that HITECH’s overall design pushed interoperability in a fairly limited way, rather than creating market demand for robust interoperability. If interoperability were a “stay-in-business” issue for either vendors or their customers, we would already have it, but overall, the opposite is true. Vendors can keep clients they might otherwise lose if they make it difficult to move data to another vendor’s system. Providers can keep patients they might otherwise lose if they make it cumbersome and expensive to transfer a medical record to a new provider.

As a result, the weak regulatory incentives pushing interoperability (in the form of a single, fairly limited HIE Meaningful Use criterion), even in combination with additional federal and state policy efforts supporting HIE progress, could not offset market incentives slowing it. Without strong incentives that would have created market demand for robust interoperability from the start, we now must retrofit interoperability, rather than having it be a core attribute of our health IT ecosystem. And, if there had been stronger incentives from the start, we would not now need to address information blocking: the knowing and intentional interference with interoperability by vendors or providers.

Another criticism levied at policymakers is that they should have certified only a small number of EHR systems, because narrowing the field of certified systems would have at least limited the scope of interoperability problems. A few even advocated that Congress mandate a single system such as the VA’s VISTA. While such a mandate would have helped solve the interoperability issue, it would have violated the traditional U.S. commitment to market-based approaches. Moreover, the United Kingdom failed very visibly with a heavily centralized approach, and in a health care system the size of the United States, an attempt to legislate IT choices in a similar manner could backfire catastrophically.

EHR Vendors

Most observers assign EHR vendors the majority of blame for the lack of interoperability, but I believe this share is overstated. As noted above, by avoiding or simply not prioritizing interoperability, they are acting exactly in line with their incentives and maximizing profit.

Normally, the United States glorifies companies that behave this way. When neither policymakers nor providers were demanding interoperability, vendors risked harming their bottom lines by prioritizing it, and they cannot be blamed for acting in their economic best interest in wholly legal ways.

Nevertheless, senior leaders at EHR vendors should have been more willing to come forward and explain why their economic best interest was at odds with existing regulations. Instead, they often claim to have robust interoperability solutions when they do not, and similarly claim that interoperability is a top priority when it is not. This gap between rhetoric and reality makes it harder for providers and policymakers to demand greater interoperability.

Providers

As noted above, providers may not have a strong business case to prioritize interoperability. However, providers have professional norms and mission statements that should motivate them to pursue interoperability (or at least not actively interfere with it) to benefit their patients. As a result of these conflicting motivations, some provider organizations let competitive pressures drive interoperability decisions (including not demanding robust interoperability from their vendors), while others have chosen to pursue interoperability because it is best for their patients, even if this decision incurs competitive disadvantage. More providers may tip toward the former simply because today’s interoperability solutions are complex and costly. It is hard to justify investing in a complicated, expensive capability that also poses a strategic risk — a double whammy.

The emergence and rapid growth of Epic’s Care Everywhere platform (which connects providers using Epic) suggests that even in highly competitive markets, providers may easily tip the other way when the cost and complexity of interoperability are reduced. Therefore, any efforts that successfully reduce cost and complexity are highly valuable, though not a substitute for stronger incentives for providers (and vendors) to engage in interoperability.

As with vendors, we cannot fault providers for behaving in ways that are aligned with their incentives, but we can argue that their patient care mission requires, at a minimum, more public disclosure about the business rationales behind their interoperability decisions.

The point of the blame game is not to punish the players. It is to understand the dynamics at play and plot a path forward. Of the stakeholders, only policymakers have a clear, strong interest in promoting interoperability. Therefore, it is up to them to ensure that robust, cross-vendor interoperability is a stay-in-business issue for EHR vendors and providers. Once the business case for interoperability unambiguously outweighs the business case against it, both vendors and providers can pursue it without undermining their best interests.


Julia Adler-Milstein, PhD

Associate Professor, School of Information, School of Public Health, University of Michigan

Article link: http://catalyst.nejm.org/ehr-interoperability-blame-game/

Why policy must drive interoperability efforts – HIE Watch

Posted by timmreardon on 03/26/2017
Posted in: Uncategorized.

Chris Nerney, Contributing Writer | March 16, 2017
@chrisnerney
HIE Watch
Attaining full interoperability of electronic healthcare systems is a challenge that transcends technology, argues Dr. Julia Adler-Milstein, associate professor at the University of Michigan’s School of Information, School of Public Health.

Writing in NEJM Catalyst, an online publication of the New England Journal of Medicine, Adler-Milstein acknowledges that “interoperability of health information is hard. If it were easy, we would have it, or at least have more of it, by now.”

Compounding the inevitable technology issues, she says, is the need to unite stakeholders and create an organizational framework that supports interoperability goals.

“As we have seen in other industries, interoperability requires all parties to adopt certain governance and trust principles, and to create business agreements and highly detailed guides for implementing standards,” Adler-Milstein writes. “The unique confidentiality issues surrounding health data also require the involvement of lawmakers and regulators.”

Though her article is titled “Moving Past the EHR Interoperability Blame Game,” Adler-Milstein cites in some detail how policymakers, EHR vendors, and providers each have impeded progress toward widespread healthcare interoperability.

The root of the problem is that policymakers failed to create regulatory incentives to promote interoperability that were strong enough to offset market incentives, such as vendors wanting to lock in customers by engaging in information blocking. EHR vendors thus lacked any incentives to prioritize interoperability, according to Adler-Milstein.

And while many providers have opted to pursue interoperability because they believe it will help them deliver better care to patients, Adler-Milstein writes that “some provider organizations let competitive pressures drive interoperability decisions (including not demanding robust interoperability from their vendors).”

“It is hard to justify investing in a complicated, expensive capability that also poses a strategic risk — a double whammy,” she says.

Adler-Milstein concludes that the onus is on policymakers to move the interoperability needle.

“Only policymakers have a clear, strong interest in promoting interoperability,” she says. “Therefore, it is up to them to ensure that robust, cross-vendor interoperability is a stay-in-business issue for EHR vendors and providers.”

You can read the entire article here.

Moving Past the EHR Interoperability Blame Game

http://catalyst.nejm.org/ehr-interoperability-blame-game/

Keeping transformations on target – McKinsey

Posted by timmreardon on 03/26/2017
Posted in: Uncategorized.

 

McKinsey
Analysis of high-stakes transformations reveals a few pragmatic lessons that increase the odds of meeting the organization’s objectives.

Any transformation worth the name starts with ambitious goals. Setting them is hard enough, especially for organizations long used to the risk-averse pattern of underpromising and overdelivering. But the real work starts once the organization sets out to turn the leaders’ targets into initiatives that everyone else designs and implements from the bottom up.


Keeping hundreds or thousands of initiatives on track is a monumental task, one that too few organizations around the world do well. Recent research reconfirms earlier findings that only 30 percent of transformations deliver their intended benefits and meet the targets committed to during the program-planning stage.

These odds are simply unacceptable, especially when the stakes are high. We therefore reviewed 18 transformations at 13 organizations that were in the most critical circumstances. While some were facing significant financial and operational challenges, including rapidly deteriorating performance or liquidity concerns, others were simply seeking a substantial step up in their performance.

Our focus was on the practical lessons these organizations learned in making their ambitions real. Our analysis was enabled by McKinsey’s proprietary program-management platform, Wave, which generates detailed reports tracking the financial and operational impact of individual initiatives.

Wave’s data repository allowed for a comprehensive analysis of the factors contributing to initiative success—ranging from how impact targets were determined, and how quickly initiatives progressed through the various stage-gate reviews, to the structures and timelines of the programs the initiatives supported. We then supplemented our findings with in-depth interviews of executives at representative companies included in the data set.

All the organizations were located in Asia–Pacific: that focus ensured greater consistency in the value-tracking approach and data structure, letting us make more nuanced comparisons. Nevertheless, the organizations varied in industry, size, and program impact. The sectors represented included construction, consumer goods, electric power, mining, natural resources, oil and gas, and retail banking. Annual revenues ranged from $2 billion to $28 billion, and total transformation impact ranged from $450 million to $4 billion (all amounts in US dollars, converted as of the study’s conclusion in May 2016) — but we found no direct correlation between the size of the company and the impact of its transformation program.

Our detailed analysis of these materials has allowed us to draw three main insights that can serve as potential guiding principles when structuring a large-scale transformation program:

Be relentless. From the beginning, organizations should assume that most initiatives will be worth a lot less than they think. Most of the companies in our sample fell short of their initial goals and needed an additional round of back-to-the-well idea generation. They also had to be careful about allocating management time so that smaller initiatives got their due: these accounted for about half of the program’s value, but they could get lost in a focus on only the biggest projects.

Focus your resources. Organizations must resist the temptation to spread their most effective leaders too thin. Three initiatives were the typical burden a leader could shoulder at once. Engaging more of the organization as potential initiative owners allows each initiative to get the support it needs without overburdening a few high performers. Reporting must be prioritized as well. Too many milestones in initiative plans can create unnecessary burdens; most programs try to capture too many metrics—and usually fewer than 30 percent end up actually being used.

Plan and adapt. Most initiatives were at least somewhat delayed in implementation. But organizations could reduce delays with judicious planning of milestones, supplemented by weekly actions that initiative owners would report on between milestones.

Follow the pipeline

The transformations we examined all followed a similar pipeline approach for tracking initiatives.

The pipeline’s stage gates begin at level zero, or “L0,” with the collection of as many ideas as possible, regardless of feasibility or size (Exhibit 1). Our analysis then begins at L1, once the initiatives have been identified as worth pursuing. During that stage, initiative owners set out to validate and refine their early value assumptions with data from other stakeholders and additional analysis. Once a solid business case has been built, the initiative is approved (usually by the finance function) and passes into L2. At L3, the initiative owner defines a robust set of milestones to execute the initiative and provides a monthly schedule of the value expected to reach the bottom line. Many initiatives sit at L3 during implementation, moving to L4 only once all milestones to realize value are completed. At that point, the finance function assesses the initiative to ensure that it will deliver value — ideally at the target amount set at L2, though that amount is usually adjusted as the initiative progresses from stage to stage. Finally, once the actual value appears in the business’s cash flows and appears reasonably certain to remain, the initiative passes to the last stage: L5.

Exhibit 1
Initiatives go through a stage-gate review process from level L0 (idea) through L5 (realized)
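The stage-gate progression described above can be sketched as a simple state machine. This is only an illustration of the process as the article describes it; the class and field names are hypothetical and do not reflect the Wave platform’s actual data model.

```python
# Minimal sketch of the L0-to-L5 stage-gate pipeline (hypothetical names).
from enum import IntEnum


class Stage(IntEnum):
    L0 = 0  # raw idea collected, regardless of feasibility or size
    L1 = 1  # identified as worth pursuing; value assumptions being validated
    L2 = 2  # business case approved (usually by the finance function)
    L3 = 3  # milestones defined; implementation under way
    L4 = 4  # all milestones complete; finance assesses delivered value
    L5 = 5  # value visible in cash flows and reasonably certain to remain


class Initiative:
    def __init__(self, name: str, estimated_value: float):
        self.name = name
        self.estimated_value = estimated_value
        self.stage = Stage.L0

    def advance(self) -> Stage:
        """Pass the next stage gate; stages cannot be skipped."""
        if self.stage < Stage.L5:
            self.stage = Stage(self.stage + 1)
        return self.stage


# Walk a sample initiative up to the implementation stage.
init = Initiative("truck maintenance tweak", 2.5e6)
while init.stage < Stage.L3:
    init.advance()
print(init.stage.name)  # prints L3
```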

Throughout the process, a transformation office (TO)—typically headed by a chief transformation officer—sets an aggressive pace of weekly reviews to monitor initiatives’ progress against their milestones, record the value they capture, and provide support when initiatives run into trouble. The TO’s independence and role in capturing data allows it to drive action, especially through rigorous problem-solving sessions and questioning of self-imposed limits.

We reviewed each organization’s experience across stages L1 to L5 to find out where problems were most likely to arise and how organizations worked around them.

Be relentless

The earliest challenge for any organization transforming itself is to find sources of value. That means fighting attrition, conducting back-to-the-well exercises as needed, and allocating leadership attention with care.

Fight attrition

The top concern for business and program leaders is for the transformation to meet its impact target. Yet any executive will recognize that most initial impact estimates are optimistic. Pressed to meet program goals that are usually aggressive both in timing and in total value, initiative owners naturally tend to overestimate their initiatives’ worth. But just how optimistic are they? And how much will leaders need to compensate for leakage of impact over the course of the transformation?

Our examination of the data confirms that program managers should expect substantial impact leakage. Just between stages L1 and L2, the initial impact estimate falls by an average of about 45 percent (Exhibit 2). From L2 to L3, the now-smaller impact estimate falls another 13 percent, with further drops of 28 percent between L3 and L4 and 9 percent between L4 and L5. Cumulatively, the result is that L1 estimations usually fall by about 70 percent by the time they reach L5. Organizations will therefore need a pipeline with a total value that’s more than three times the initial target.

Exhibit 2
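The compounding arithmetic behind those figures can be checked with a short sketch. The average drop rates are the ones cited above; the code itself is purely illustrative.

```python
# Cumulative impact attrition across stage gates, using the article's
# average per-transition drops (fraction of the estimate lost).
stage_drops = {
    "L1->L2": 0.45,
    "L2->L3": 0.13,
    "L3->L4": 0.28,
    "L4->L5": 0.09,
}

surviving = 1.0
for transition, drop in stage_drops.items():
    surviving *= 1.0 - drop

total_attrition = 1.0 - surviving    # roughly 69%, i.e. "about 70 percent"
pipeline_multiple = 1.0 / surviving  # pipeline needed per dollar of target

print(f"Share of L1 estimate surviving to L5: {surviving:.0%}")
print(f"Cumulative attrition: {total_attrition:.0%}")
print(f"Required pipeline multiple: {pipeline_multiple:.1f}x")
```

Multiplying the survival rates out confirms the article’s rule of thumb: only about 31 percent of the original estimate survives, so the pipeline must hold a bit more than three times the final target.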

Find a productive spring—and return to the well later

Part of the solution is simply to generate more ideas early in the process. At this stage, time may not allow for canvassing all employees, but program managers can nevertheless hold broad, disciplined ideation workshops with frontline team leaders and representatives. By setting out rules that encourage full participation, openness to all ideas (even the most “unrealistic”), and creativity, organizations can quickly generate valuable insights from the people whose day-to-day work gives them a unique perspective on real opportunities to improve the business.

In practice, though, even these preparations may not be enough, as demonstrated by a consumer-products manufacturer and an energy company. In each case, with less than three months remaining before publicly announced deadlines, teams were running well short of their targets—by tens of millions of dollars at the energy company and hundreds of millions at the consumer player. But both companies found that going back to the well and asking their people for more ideas allowed them to make up the difference. Each made its target, which provided an essential morale boost that made further improvement possible after the target was met.

Finding these later opportunities requires more effort than the first round of idea generation and typically produces somewhat less in total returns. Our data analysis found that by the second month, about two-thirds of a program’s value had already been discovered, leaving less to find in later efforts. Still, additional pockets of potential almost always remain. One option that several organizations used took advantage of data from the program-management tool to review how much actual value each initiative was generating. Conducting a root-cause analysis on those that were cancelled, delayed, or that under delivered helped uncover important lessons that led to new value.

Impact: Big slope versus long tail

By opening up idea submission to a much larger cross section of the organization, back-to-the-well exercises illustrate a related lesson as well: the long tail of smaller initiatives matters. At one mining company, for example, a mechanic came up with an idea that reduced maintenance time for each truck by more than 30 minutes. Once applied to the regular monthly service schedule across the entire fleet, this idea added several thousand truck working hours per year and was worth millions of dollars.

Some of the organizations we reviewed expanded the idea-capture process to vendors and business partners as well. And these small initiatives add up to big impact. We divided initiatives into three groups. The first, “boulders,” consisted of initiatives that each represented at least 5.0 percent of the total program’s value. “Pebbles” were those representing between 0.5 and 5.0 percent of total value, and everything smaller than 0.5 percent was “sand.”

Our data indicate that on average, 50 percent of the total program value typically comes from sand (Exhibit 3). That means focusing only on the boulders, the biggest (and often highest-profile) initiatives, is risky. Moreover, sand initiatives are often easier and quicker to execute: their small size involves fewer layers of approval and less coordination. And they are often led by frontline analysts and managers, giving those employees more of a stake in the transformation’s success.

Exhibit 3
On average, about half of a transformation’s total value comes from small initiatives
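Under the thresholds above, sorting initiatives into the three buckets is a one-line share comparison. The initiative names and values below are invented for illustration.

```python
# Classify initiatives as "boulders", "pebbles", or "sand" by their
# share of total program value, per the thresholds in the article.

def classify(value: float, total_program_value: float) -> str:
    """Return the size bucket for an initiative's share of program value."""
    share = value / total_program_value
    if share >= 0.05:    # at least 5.0 percent of total value
        return "boulder"
    if share >= 0.005:   # between 0.5 and 5.0 percent
        return "pebble"
    return "sand"        # under 0.5 percent

# Hypothetical initiatives, in arbitrary but consistent units.
initiatives = {
    "contract renegotiation": 60.0,
    "route optimization": 4.0,
    "truck maintenance tweak": 0.3,
}
total = 100.0  # total program value

buckets = {name: classify(v, total) for name, v in initiatives.items()}
print(buckets)
```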

Focus your resources

With time of the essence, executives overseeing transformations must be especially careful in allocating time and effort at every level of the organization. The basic reality is that every moment an initiative owner spends on work that isn’t productive is a moment taken away from helping generate more impact.

Make it easier on initiative owners

How much is reasonable to ask of initiative owners? For comparison among transformations, we defined “initiative owner” as “the most senior person who actually does the day-to-day work.” On average, we found that initiative owners manage three initiatives each. As one leader working with a consumer-products company explained, “It’s a rare exception for an owner to successfully manage more than three initiatives. They have to be really good at delegating the underlying milestones to others and following up on their progress.”

Our evidence shows that about 80 percent of total impact is managed by 20 percent of initiative owners (Exhibit 4). That’s often because ownership of big-value initiatives (such as major contract renegotiations) is concentrated in the hands of a few very senior or high-potential individuals.

Exhibit 4

But it comes at a cost: the potential for burnout. One of our interviewees said that his organization lost several leaders because they simply couldn’t keep up with the demands of overseeing too many initiatives at once.

By contrast, small-value initiatives are often more limited in scope and are owned by frontline analysts and managers who don’t have the time or capacity for a larger set of initiatives. The involvement of a larger number of people not only relieves the owners of higher-profile initiatives but also helps build momentum and buy-in for the program as a whole. These considerations typically outweigh the disadvantage of the added complexity of having to manage a high number of owners.


Keep reporting manageable

But complexity can quickly rear its head in the reporting of initiatives’ status. The ideal initiative execution plan contains all the milestones necessary to carry out the initiative, while avoiding detail so fine that the milestones distract initiative owners and add negligible value.

Either extreme creates problems. For the consumer-goods company, “high plan granularity came with a lot of pushback from initiative owners, who felt micromanaged and worried about the time required to update or create milestones,” one executive told us. On the other hand, milestones spaced too far apart in time reduced the program leaders’ ability to identify delayed or at-risk deliverables until it was too late for effective course corrections. One leader noted, for example, that several of his company’s execution plans ended up in avoidable delays when the milestones that initiative owners scheduled failed to align with important stakeholder approvals, such as for compliance reviews or proxy votes.

Our data show that an average of four milestones was typically the right balance—enough to provide early warning about potential problems, but not so many as to get in the way of implementation.

Make metrics meaningful

The decisions on which metrics to track are typically made during the planning phase of the program, as leaders decide what is in and out of scope, what types of spending should be targeted for savings, and so on. Most commonly, financial metrics are used for transformation programs because of their strategic importance, the availability of the required data, and the ease of tracking them compared with nonfinancial metrics.

But even a relatively straightforward set of metrics can quickly become complicated if additional layers are added. A finance department may ask for the metrics to mirror individual accounting line items, slicing and dicing the data into dozens of submetrics. Adding further permutations, such as distinguishing between recurring and one-time impact or between hard savings and cost avoidance, compounds the complexity for initiative owners. And that’s before measuring and tracking nonfinancial metrics, such as head-count redeployment for different personnel types.

Evidence from our data shows that only 29 percent of the metrics organizations claim to follow are actively used during the length of the project (Exhibit 5). The rest become statistical noise and a source of confusion for initiative owners trying to decide where to allocate the savings from their initiatives.

Exhibit 5

Accordingly, organizations must strike a balance between ensuring the finance function can report at an acceptable level of detail while also enabling initiative owners to allocate impact easily. A rule of thumb that several organizations used successfully was to eliminate any metric that was likely to carry less than 0.01 percent of total program impact and fold it into other metrics instead.
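That rule of thumb amounts to a simple filter over the expected-impact table. The metric names and figures below are invented for illustration.

```python
# Fold any metric expected to carry less than 0.01 percent of total
# program impact into a catch-all bucket, per the article's rule of thumb.

def prune_metrics(expected_impact: dict[str, float],
                  threshold: float = 0.0001) -> dict[str, float]:
    """Keep metrics at or above the share threshold; fold the rest into 'other'."""
    total = sum(expected_impact.values())
    kept = {name: v for name, v in expected_impact.items()
            if v / total >= threshold}
    folded = total - sum(kept.values())
    if folded > 0:
        kept["other"] = kept.get("other", 0.0) + folded
    return kept

# Hypothetical expected impact per metric, in millions.
metrics = {
    "procurement savings": 500.0,
    "head-count redeployment": 120.0,
    "travel policy": 0.03,
    "stationery": 0.01,
}
pruned = prune_metrics(metrics)
print(pruned)
```

The two tiny metrics fall below 0.01 percent of the 620.04 total and are folded into a single “other” line, leaving initiative owners with far fewer places to allocate their savings.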

Plan and adapt

Once the program is under way, the organization will need to adapt quickly and nimbly to inevitable unforeseen obstacles. Careful planning and well-structured review cycles helped the executives we interviewed intervene where needed to keep initiatives—and whole programs—on track.

Plan for delays

Much as the initial value estimate of an initiative tends to be optimistic, so too is the promised timing. Our data show that on average, approximately 31 percent of initiatives will have their execution end date (the date at which stage L3 ends) changed at least once throughout their life cycle. About 28 percent will see it happen twice, and 19 percent three times.

The impact of date changes can be mitigated if they are made early in an initiative’s life cycle, with sound reasoning and the approval of the TO. However, our data show that despite the high frequency of due-date changes, 56 percent of initiatives still miss their planned L3 date (the date at which the plan is approved) by more than a week, and about half miss their L4 date (the date at which execution is complete) by more than a week. On average, initiatives start L3 two-and-a-half weeks later than planned, and they are fully executed approximately four weeks after the set deadline (Exhibit 6).

Exhibit 6


What can organizations do? The amount of time that initiatives spend in the implementation stage will inevitably depend on a range of factors, including the overall agility of the company, the urgency of the transformation program, and the level of approval required to move an initiative from one stage gate to the next. But ultimately, helping owners meet their deadlines is the role of the TO, whose discipline is essential in ensuring performance. A chief transformation officer who comes from outside the organization can often be in a better position to break through cultural norms and other constraints that can impede an initiative’s progress.

Commit to weekly actions

Even when delays are unavoidable, initiative owners can reduce the impact by ensuring that each initiative moves forward every week, regardless of whether there’s a milestone or not. By asking for brief updates on these actions during the regular cadence of meetings and offering support, leaders can encourage owners to report on potential issues early so that they can be solved with minimal effort.

As a rule of thumb, leaders should expect 80 percent of the initiatives across a program to be updated with specific actions every week. While that may seem high, we have found that with five minutes of planning, almost every initiative can be improved or accelerated each and every week.


For high-stakes transformations, this analysis underscores the importance of balancing high aspirations against a pragmatic understanding of what individuals and organizations can achieve. By keeping a few basic constraints in mind, transformation leaders can mitigate some of the inevitable problems that arise when people are trying to achieve dramatic performance improvement in a very short time. By minimizing avoidable waste in the transformation process itself, the organization is much more likely to meet (or even exceed) its goals—and build a foundation for it to keep improving once the program is complete.

About the author(s)

Michael Bucy is a partner in McKinsey’s Charlotte office, Tony Fagan is the Asia–Pacific general manager for Wave and is based in the Singapore office, Benoît Maraite is Wave’s head of client service and is based in the Louvain-la-Neuve office, and Cornelia Piaia is a Wave business and strategy manager in the Paris office. The authors wish to thank Barr Blanton, Simon Herrmann, Matteo Iachino, Nicolas Maya Medina, and Erwin Sie for their contributions to this article.
Article link: https://www.mckinsey.com/business-functions/rts/our-insights/keeping-transformations-on-target

HHS Should Assess the Effectiveness of Its Efforts to Enhance Patient Access to and Use of Electronic Health Information – GAO

Posted by timmreardon on 03/26/2017
Posted in: Uncategorized. 1 Comment

GAO

DOD’s Next-Gen Electronic Health Records System Is Now Live at One Site – Nextgov

Posted by timmreardon on 03/26/2017
Posted in: Uncategorized. Leave a comment

By Mohana Ravindranath

February 15, 2017


A small piece of a years-long effort to revamp military health records is finally live, after a new IT system was deployed last week at an Air Force base in Spokane, Washington.

The electronic health record platform, Military Health System GENESIS, or MHS GENESIS, is now in use at the Fairchild Air Force base. The Defense Department plans to observe how it functions at that site first before installing it at other sites in the Pacific Northwest—possibly as soon as June.

The new system is intended to let patient records flow seamlessly between health care providers covering military beneficiaries and to allow patients to view their own records and test results online. Interoperability with both the Veterans Affairs Department and private health care providers is also a key goal.

For now, providers can use MHS GENESIS to track down records for patients at other sites, even if those sites haven’t yet upgraded, by using a “legacy viewer” feature, officials told reporters Wednesday.

At Fairchild, MHS GENESIS replaces the existing patient care online portals RelayHealth and MiCare.

Eventually, MHS GENESIS will house the records of more than 9.4 million DOD beneficiaries, and the Defense Healthcare Management Systems office is aiming for total deployment by 2022, Program Executive Officer Stacy Cummings said.

The initial deployment is part of a larger contract under the DOD Healthcare Management System Modernization office. In 2015, the Pentagon awarded the $4.3 billion project, intended to revamp medical records across the department, to Leidos, Cerner and Accenture.

In its first week, MHS GENESIS encountered a few minor challenges, including the need to retrain employees to use the new interface, Col. Margaret Carey, commander at the 92nd Medical Group at Fairchild, told reporters. Though user training began in September, some users had since forgotten how to navigate to certain features, such as the scheduler. In future sites, the team may move the training closer to deployment.

Ideally, the online system would let disparate health care providers work together on one patient, Air Force Surgeon General Lt. Gen. Mark Ediger told reporters. Providers could aggregate input about an individual from nutritionists, disease management specialists and others to provide holistic treatment.

The Pentagon started its EHR effort by deploying the system in the Pacific Northwest because patients from across the services are represented, Cummings said.



http://www.nextgov.com/health/2017/02/dods-next-gen-electronic-health-records-system-now-live-one-site/135474/

VA Secretary Promises Decision on Whether to Go to Commercial Health Records – Nextgov

Posted by timmreardon on 03/26/2017
Posted in: Uncategorized. Leave a comment

By Frank Konkel

March 21, 2017

Newly confirmed Veterans Affairs Secretary David Shulkin promised Tuesday his department would decide by July whether to move to a commercial health records system.

It’s a decision Shulkin said should have been made years ago.

“From my perspective, I wish this decision had been made by a prior secretary,” said Shulkin, speaking at an event hosted by Politico. “It’s been kicked down the road. If I could back up time, I believe it’s a mistake to not have made a decision with [the Defense Department] at the time. I’m not saying who made the mistake.”

But, Shulkin said, for VA to not have decided on its health records future when DOD opted to pursue a health records platform from a commercial provider in 2014 “was a mistake.”

For years, the two agencies wasted over $1 billion attempting to make health record data from soldiers and veterans interoperable. VA’s system, called the Veterans Health Information Systems and Technology Architecture, and DOD’s legacy health records system, eventually managed workarounds to achieve a semblance of interoperability, but not until after DOD pursued its own system.

In 2014, DOD awarded Leidos and EHR developer Cerner its Defense Healthcare Management Systems Modernization contract worth up to $9 billion, and pilots of the new system began rolling out early this year.

“The VA knows who its customers are, and its customers come from one source: the Defense Department,” Shulkin said. “To not have an integrated system was a mistake.”

VA is carefully weighing its options, he said. Its VISTA system, which revolutionized health care decades ago, is outdated. Either VA will stick with VISTA and make potentially significant investments to improve it, or it will begin the process of moving to a commercial product this summer, Shulkin said.

VA is conducting cost reviews to gauge what a modernization of VISTA might run, and Shulkin declined to say whether estimates ranging from $8 billion to $16 billion for a commercial offering to service its 9 million-plus beneficiaries were accurate.

“I’m not approaching this decision-making this summer purely or largely from the financial point of view,” he said. “My No. 1 goal is to come up with a strategic direction that makes sense for VA, and then worry about the financial cost. If we run a strict management process and take change management seriously, we’ll be able to do it for a lot less.”

In any case, Shulkin clarified, the July decision will only be the beginning of “a lengthy process,” whichever direction VA takes. Once the decision is made, the acquisition portion of the process will come into play, and federal law requires that it be an open, competitive process.

“[Deciding which way to go] is not the same as deciding what system we’ll use,” Shulkin said when asked whether VA ought to pursue the same EHR as the Pentagon. “We’re not going to rush into things. I’m not going to be backed into a time frame.”

Yet, VA is watching DOD’s EHR rollout intently. Shulkin said VA has “a lot to learn from what they’re doing in change management, let alone technology.”



http://www.nextgov.com/health/2017/03/va-secretary-promises-decision-whether-go-commercial-health-records/136337/

A 40-year ‘conspiracy’ at the VA – Politico

Posted by timmreardon on 03/26/2017
Posted in: Uncategorized. Leave a comment

A 40-year ‘conspiracy’ at the VA: The Department of Veterans Affairs built perhaps the most important medical computer system in history. Now it’s about to spend billions to throw it away.

By ARTHUR ALLEN | 03/19/17 07:56 AM EDT

Four decades ago, in 1977, a conspiracy began bubbling up from the basements of the vast network of hospitals belonging to the Veterans Administration. Across the country, software geeks and doctors were puzzling out how they could make medical care better with these new devices called personal computers. Working sometimes at night or in their spare time, they started to cobble together a system that helped doctors organize their prescriptions, their CAT scans and patient notes, and to share their experiences electronically to help improve care for veterans.

Within a few years, this band of altruistic docs and nerds—they called themselves “The Hardhats,” and sometimes “the conspiracy”—had built something totally new, a system that would transform medicine. Today, the medical-data revolution is taken for granted, and electronic health records are a multibillion-dollar industry. Back then, the whole idea was a novelty, even a threat. The VA pioneers were years ahead of their time. Their project was innovative, entrepreneurial and public-spirited—all those things the government wasn’t supposed to be.

Of course, the government tried to kill it.

Though the system has survived for decades, even topping the lists of the most effective and popular medical records systems, it’s now on the verge of being eliminated: The secretary of what is now the Department of Veterans Affairs has already said he wants the agency to switch over to a commercial system. An official decision is scheduled for July 1. Throwing it out and starting over will cost $16 billion, according to one estimate.

What happened? The story of the VA’s unique computer system—how the government actually managed to build a pioneering and effective medical data network, and then managed to neglect it to the point of irreparability—is emblematic of how politics can lead to the bungling of a vital, complex technology. As recently as last August, a Medscape survey of 15,000 physicians found that the VA system, called VistA, ranked as the most usable and useful medical records system, above hundreds of other commercial versions marketed by hotshot tech companies with powerful Washington lobbyists. Back in 2009, some of the architects of the Affordable Care Act saw VistA as a model for the transformation of American medical records and even floated giving it away to every doctor in America.

Today, VistA is a whipping boy for Congress; the VA’s senior IT leadership and its overseers in the House and Senate are all sharpening their knives for the system, which they caricature as a scruffy old nag that fails the veterans riding on it. Big commercial companies are circling, each one putting forward its own proprietary technology as the answer to the VA’s woes. The VA leadership seems to agree with them. “We need to move towards commercially tested products,” VA Secretary David Shulkin told a congressional committee on March 7. “If somebody could explain to me why veterans benefit from VA being a good software developer, then maybe I’d change my mind.”

You’d have to be a very brave VA administrator, and perhaps a foolhardy one, to keep VistA in 2017: The system’s homegrown structure creates security and maintenance challenges; a huge amount of talent has fled the agency, and many Congress members are leery of it. Because it serves nearly 9 million veterans at 167 hospitals and 1,700 sites of care, however, the wrangling over VistA concerns much more than just another computer software system. The men and women who created and shaped VistA over the decades were pathfinders in efforts to use data to reshape the multi-trillion-dollar U.S. health care system. Much of what they’ve done continues to serve veterans well; it’s an open question whether the Beltway solution to replacing VistA, and the billions that will be spent pursuing it, will result in a system that serves the VA—and the nation—as well in the long run.

What’s clear, though, is that the whole story of how VistA was born, grew and slid into disrepair illustrates just how difficult it can be for the government to handle innovation in its midst.

YOU COULD SAY that VistA—which stands for the Veterans Information Systems and Technology Architecture—began as a giant hack.

Its birth occurred in 1977, far back in the era of paper medical records, with a pair of computer nerds from the National Bureau of Standards. Ted O’Neill and Marty Johnson had helped standardize a computer language, originally developed at Massachusetts General Hospital, called MUMPS, and the two men were hired by the VA to see whether MUMPS could be the basis of a new computer system connecting the VA’s hospitals. Computerizing the one-on-one art of medical care seemed like a sacrilege at the time, but the VA, struggling with casualties of the Vietnam War, was underfunded, disorganized and needed all the help it could get.

O’Neill and Johnson began recruiting other techies to the effort, some of whom were already working in VA hospitals in places such as St. Petersburg, Florida; Lexington, Kentucky; and San Francisco. Though they were on an official mission, their approach—highly decentralized, with different teams trying things in various hospitals—ran against the grain of a big bureaucracy and aroused the suspicions of the central office. The project soon had the feeling of a conspiracy, something that nonconformists did in secret. They gave themselves an internal nickname—the Hardhats. People who followed the project recall being struck by just how idealistic it was. “This will sound a bit hokey, but they saw a way to improve health care at less cost than was being proposed in the central office,” says Nancy Tomich, a writer who was covering VA health care at the time. As bureaucratic battles mounted, she says, “I remember how impressed I was by these dedicated people who put their personal welfare on the line.”

In 1978, with personal computers just starting to appear in the homes of nerdy hobbyists, the Hardhats bought thousands of personal data processors and distributed them throughout the VA. Software geeks and physicians were soon exploring how patient care could be improved with these new devices. A scheduling system was built in Oklahoma City, while technicians in Columbia, Missouri, built a radiology program, and the Washington, D.C., VA’s Hardhats worked on a cardiology program. In Silicon Valley, Steve Wozniak was building a computer in his garage that would overturn an industry; at the VA, these unsung rebels were doing something that was equally disruptive in its own way—and threatening to the VA’s central computer office, which had a staff and budget hundreds of times greater and planned to service the data-processing needs of the VA hospitals and clinics by means of leased lines to regional mainframe centers. While the bureaucrats in the central office had their own empire, Tomich recalled, the Hardhats—some of them straight-looking guys with burr haircuts and pocket pen protectors, some scruffy, bearded dudes in T-shirts—were “in the field planting seeds, raising crops and things were blossoming,’’ she says.

The Hardhats’ key insight—and the reason VistA still has such dedicated fans today—was that the system would work well only if they brought doctors into the loop as they built their new tools. In fact, it would be best if doctors actually helped build them. Pre-specified computer design might work for an airplane or a ship, but a hospital had hundreds of thousands of variable processes. You needed a “co-evolutionary loop between those using the system and the system you provide them,” says one of the early converts, mathematician Tom Munnecke, a polymathic entrepreneur and philanthropist who joined the VA hospital in Loma Linda, California, in 1978.

So rather than sitting in an office writing code and having the bureaucracy implement it, the computer scientists fanned out to doctors’ offices to figure out what they needed. Doctors with a feel for technology jumped into the fray. “I got involved because it solved my problems,” says Ross Fletcher, a cardiologist at the Washington, D.C., VA—where he is now chief of staff—since 1972. Working in close consultation with their clinical partners, sometimes coding at home at night or in their spare time, the computer experts built software that enabled doctors to legibly organize their prescriptions, CAT scans and patient notes, and to share their experiences electronically. Fletcher, who had studied a little computer science in college, worked with a software developer to help create an electronic EKG record. “The technical staff was embedded with clinical staff. I had lunch with the doctors, and in the parking lot in the morning we’d report what we’d done the night before,” says Munnecke.

Munnecke, a leading Hardhat, remembers it as an exhilarating time. He used a PDP-11/34 computer with 32 kilobytes of memory, and stored his programs, development work and his hospital’s database on a 5-megabyte disk the size of a personal pizza. One day, Munnecke and a colleague, George Timson, sat in a restaurant and sketched out a circular diagram on a paper place mat, a design for what initially would be called the Decentralized Hospital Computer Program, and later VistA. The MUMPS computer language was at the center of the diagram, surrounded by a kernel of programs used by everyone at the VA, with applications floating around the fringes like electrons in an atom. MUMPS was a ludicrously simple coding language that could run with limited memory and great speed on a low-powered computer. The architecture of VistA was open, modular and decentralized. All around the edges, the apps flourished through the cooperation of computer scientists and doctors.

“We didn’t call it ‘agile development,’ but it was agile,” says Howard Hayes, another VA IT veteran who served as CIO for the Indian Health Service, which adopted VistA. “Tight relationships between user and programmer, and sometimes they were one and the same.” Instead of top-down goals and project sign-offs, teams of techies and doctors kept working to improve the system. “The developer did something, the user tried it, called him up or walked down the hall and says ‘It really needs to do this.’ The next day they had another build,” says Hayes.

The VA’s centralized computer department, which relied on contractors, was not amused. Its leadership wanted control, and they believed, with a position remarkably similar to current-day criticisms of the VA’s IT work, that it made more sense to let the outside experts move the ball than have “garages” full of unconventional nerds and upstart doctors. The Hardhats were sharing records among doctors and hospitals. They were digitizing X-ray images. They were doing everything much less expensively and more successfully than the central office. They had to be stopped. In 1979, Ted O’Neill was fired (he drove a cab for a while, and later became a real estate agent). The main Hardhats office was shut down, and “pretty much everybody in the Washington part of the organization headed for the hills,” says Munnecke.

But, remarkably, the project didn’t die. There were still Hardhats dispersed throughout the VA system who had been hired locally and couldn’t be sacked, and they carried on. A regular Monday morning telephone tree, which Munnecke created by patching together six people at a time from different parts of the country, each of whom would patch in others, kept them moving together.

The project took on the feeling of an insurgency, and the establishment began to retaliate. In Columbia, Missouri, Hardhat Bob Wickizer got back to his computer room from lunch one day to discover that his new desktop had been unplugged and put in a crate. In Washington, D.C., paper medical records were used to start a fire in the computer room that housed the code developed by Hardhats. “There were fires in the computer rooms. Sand in gas tanks. Not a pleasant fight,” says Fletcher. “Fascinatingly enough, it occurred within a government institution.”

Munnecke and his colleagues fretted about being fired for developing something that was far superior to anything that existed at the time. The participants called themselves “the conspiracy,” and they developed kernels of essential software, which they shared via 300-baud modems (which transmit about 300 words a minute) at 3 a.m. on Sunday mornings, or via disk packs the size of laundry baskets. “They went through a period of persecution,” says Phillip Longman, who wrote about it in his book on the VA health system, The Best Care Anywhere. “And that was good, because it created bonding experiences.”

Then one day in 1981, VA Chief Medical Director Donald Custis visited the Washington VA medical center and found administrators and practitioners using the unauthorized software. Astonished, he blurted out, “It looks like we have an underground railroad here,” according to Munnecke, who was so tickled by the phrase that he had membership cards printed up with a microchip glued onto a train engine, and a title, “VA MUMPS Underground Railroad.” Munnecke handed the cards to friends, colleagues and higher-ups he was trying to woo. And eventually, the rebels won over some leaders, including VA Administrator Robert Nimmo, who gave them a budget. The money allowed the applications developed piecemeal at hospitals across the country to be shared, recalls George Timson. By 1985, most hospitals were being updated regularly through a database management system Timson developed called FileMan. They all used the same structure of application building, which meant they were integrated; the mainframers had advocated a more traditional approach of buying commercial modules and lashing them together through interfaces.

“The combination of cheap computers and an efficient language allowed the VA to leapfrog over the existing technology,” says Stephan Fihn, the VA’s current director of analytics and business intelligence. “We competed to make it better,” recalls Fletcher. “They’d holler at us because we were late on some project. … Late for what? There were no other options.” And after beating back the mainframe people, Munnecke could crow with his foot planted on the chest of the enemy. “Every one of their systems is totally dependent on a specific vendor, incompatible with every other system they have developed,” he said in a 1982 speech that sounds strangely familiar to students of current, commercial EHR systems, notorious for being “walled gardens” that have trouble sharing data with one another. “Every one of our systems is vendor-independent and compatible with every other of our systems.”

New features could be added as quickly as they were developed in far-flung basements. Responding to doctors’ requests, developers created tools that were good at organizing care for the millions of veterans with chronic illnesses. Doctors could find a patient’s records easily and compare outcomes in patients who’d undergone different treatments, finding the best treatments in different settings and leading to improved guidelines for population-wide care. As it grew, VistA genuinely changed medicine: The Hardhats created databases that allowed researchers to aggregate clinical cases, which led to discoveries linking blood pressure to stroke and the arthritis medicine Vioxx to heart attacks, among others. They helped make VA clinics the best places in the world for a diabetic to avoid amputation. They also gave the world the patient wristband, whose bar codes assured that the right patients got the right dose at the right time. VA patients in New Orleans were the only ones in the city who emerged from Hurricane Katrina in 2005 with their records—including medication lists—intact.

All that customization was a great joy to the doctors who helped create it, and who used it. But it also held the seeds of a monstrous problem—all those thousands of pieces of code, like all code, would need to be updated and integrated with new computer technology. Sometimes the original coder had moved on, and trying to update his or her work was like editing a book written decades before, in another language, by a dead author.

But such problems would take years to become acute. In the meantime, the system’s popularity began to spread beyond the VA. In the 1980s, the Finnish health system and some German hospitals adopted VistA, and, later, hospitals in Jordan, India, Australia and Japan. In the U.S., the Indian Health Service essentially cloned and then adapted the system. Over the years there would be many suggestions that the Pentagon also use VistA—the DOD runs its own massive health care system, unconnected to the VA, for active-duty service members—but it always resisted. Munnecke says he tried to wire the Pentagon with VistA at least three times. “Each time, I was technically correct and politically incorrect,” he says. The problem, in his view, was that the Pentagon didn’t want to run on the same system as the VA. If it did, Congress would start asking why it had to pay for redundancies. “Why do you need two hospitals? Someone’s going to get riffed. The person who cooperates most gets the least turf. It’s all turf. If I sound bitter, it’s because I’ve been beating my head against the wall so long.” The conflict between VA and DOD, it turned out, would end up becoming VistA’s fatal flaw.

WHEN KEN KIZER, who had run California’s Medicaid program, took over the VA health care system in 1994, it was under widespread attack for poor access and quality of care, to the extent that some GOP leaders were calling for its privatization. The IT system also had problems—most of the programming had been focused on developing applications but not modernizing the core, Munnecke would say later. Kizer, who knew nothing about electronic health records, funded a feasibility study to install a commercial system but discovered that the VA’s system was clearly superior to anything available. So he went in a different direction: He hired one of the original Hardhats, the brilliant Rob Kolodner, as his chief health informatics officer, and, armed with a $400 million annual budget, Kolodner oversaw the implementation of a bright new user interface and record-tracking software. Kizer also implemented quality performance measures and started coordinated care projects well ahead of the rest of health care, and his staff built a web service, which for the first time allowed providers to see a veteran’s electronic records from anywhere in the country. By 1999, Kizer testified in Congress that the new system he’d built—it was rechristened VistA from Decentralized Hospital Computer Program—wasn’t just working out for the VA, it could be the basis of a national commercial health IT system.

But the 2001 administration change was not kind to VistA. Kizer’s creation cost a lot to support, and several members of Congress, concerned by IT spending at the VA, sought to enforce a law requiring that the IT system in each federal agency be run by a single CIO. In other words, they wanted VistA’s work to be handled from a central office, a recurring theme in VistA’s development. VistA’s innovative approach, indeed its value to doctors, resulted from being brewed in small, decentralized units throughout the sprawling VA system. But whenever projects missed their deadlines, critics of the system would say it lacked accountability. “In the federal government, and in VA in particular, there are these cycles of decentralization and centralization, and you have to figure out where you are dropped into the cycle,” says Gary Christopherson, who held various IT positions at the Veterans Health Administration, including CIO from 2000 to 2002. “It’s not because there is necessarily a rationale for one or the other. You’d hear, ‘They’re out of control, we need to centralize.’ Or, ‘Centralization isn’t working—we need to put power out to the field.’ When I got there, we were in decentralization phase. There are always people in both neighborhoods.”

Then too, there were few people in positions of power, particularly in Congress, who could really grasp the complex issues at play. “You have a subject that’s vital to the fate of health care, the fate of the nation, but few people have the skill set to navigate it intelligently,” says Longman. In 2000, Christopherson convened a White House discussion bringing together federal IT officials, doctors groups and private industry to standardize data so that different health systems would be able to communicate with one another. Everyone agreed it was a good idea. It was scrapped when the new administration took office and, in the way of new administrations, was determined to leave its own mark. “The discussion you hear today about standards, interoperability, information exchange, all those things, was agreed to back in 2000-2002 but still hasn’t occurred,” says Christopherson, who is now a sculptor in Wisconsin. “It’s very sad, very tragic, very stupid.”

In 2004, the Pentagon hired outside contractors to revise its medical-records system, implementing a clunky version of VistA known as AHLTA. The Pentagon’s approach was strictly a top-down IT job, unlike the collaborative approach used with such success in building VistA, and it lacked finesse. Many military doctors considered AHLTA a disaster—unreliable, with frequent crashes and inaccessible data. Its initials, some said, stood for “Aw hell, let’s try again.” The system ranked at the bottom of physician preference lists. At the top of the list? VistA. (The Pentagon system was like “running on sand,” residents told Ross Fletcher, while VistA was “running on asphalt.”)

Meanwhile, the enemies of VistA in the bureaucracy and industry were moving in. A 2005 Gartner study said the VA could save $345 million a year by consolidating its health IT efforts within the central office. The agency was reluctant to fully centralize, but when a burglar stole a computer containing 26.5 million patient records from the home of a VistA engineer, the knives came out at a series of House Veterans Affairs Committee hearings. The security of veterans’ records became an issue. Steve Buyer, the Indiana Republican who chaired the committee, was determined to recentralize. At one hearing, he called decentralization proponents “the gargoyles that defend bureaucracy and the old way of doing business.”

In 2006, two important things happened to VistA: It won the Harvard Kennedy School’s coveted Innovations in American Government Award—given each year to promote creativity in the public sector—and its budget disappeared. Congress ordered that all IT work at VA report to the agency’s chief information officer, creating new bureaucratic barriers to getting things done and putting power in the hands of an office whose priorities were different from those of IT workers in the field. “We became the only health care system in the world where the health care CIO didn’t report to the CEO of health care,” recalled a senior official. As funding for health IT development dried up, hundreds of VistA experts left to join the private sector, taking their coding memory, their Fingerspitzengefühl (“fingertip feel”), with them. The collaborative relationships between doctors and developers were over. Several blue-ribbon panels since have said that the reorganization, aimed at efficiency, smothered VistA innovation. “Modern management techniques killed it,” says the former official. “We always wondered whether it was a plot to help the private vendors. But whether it was or not, it had that effect.”

When Buyer left Congress in 2011, he took an assignment for the giant contractor McKesson—lobbying Congress for the company, a major producer of commercial EHR systems—on health IT and Veterans Affairs issues.

VISTA CONTINUED TO serve the VA, and well enough to continue to receive high user ratings in surveys. Of course, the world around it had changed a lot by 2009. Health IT had blossomed into an industry—big companies like Epic, Cerner, GE and Siemens were selling to big hospital systems, which appreciated their strengths at handling billing—an area where VistA lacked good applications, since the VA was both the provider of care and the agency that paid for it. But VistA was still well ahead of the industry standard for clinical care. It had things like computerized physician order entry, in which the physician electronically requests things like drug prescriptions and radiological scans. Its population health programs had produced a wealth of recommendations for the best treatment of chronic conditions. (MUMPS, VistA’s language, was old but is also still the basis for major commercial EHRs, such as those produced by Epic and McKesson.) VistA was so far ahead of the pack, in fact, that in the year President Barack Obama was elected, Rep. Pete Stark (D-Calif.), the chairman of the Ways and Means Committee, had introduced a bill that would provide VistA for free to every hospital and doctor in the United States. Geoff Gerhardt, a member of Stark’s staff, felt the idea needed tweaking and got the congressman to support an alternative that would give the Centers for Medicare and Medicaid Services money to incentivize doctors who “meaningfully used” EHRs—thereby giving birth to the term that defined Obama’s EHR subsidy program, the HITECH Act.
The Obama administration included the HITECH subsidies in its $831 billion stimulus program. But Stark was the only big proponent of open-source software at the table, and his proposal to give away VistA to doctors en masse ran into trouble. Republicans and Democrats like Anna Eshoo, who represents Silicon Valley, were not particular fans. “There weren’t a lot of members who felt strongly [in favor],” Gerhardt recalls, “and when you have Microsoft opposing it …” Instead of directing the administration to distribute VistA to doctors, the final language included a sort of “public option,” whereby HHS would make available a version of the open-source software to doctors. It never did. Senior administration officials felt the government lacked the expertise to provide software directly.

And so, when $35 billion was teed up for doctors, VistA was not available to them. A few startups, like MedSphere, had adapted the software and used it to wire some hospitals and other medical practices, at a price that was pennies on the dollar compared with the bills for the big EHR systems, which had started out as administrative billing software. So nearly all of that money was funneled into purchases of commercial EHRs.

BACK AT THE VA, VistA was running into different kinds of problems. Roger Baker, who became the VA’s CIO in 2009, was an Obama transition team member with a background in corporate IT. To deal with the perception of disorganization and cost overruns at VistA, Baker set up the Project Management Accountability System, which required each IT project, such as modifications of VistA’s lab and prescription software, to be up and running within six months and capped each project at two years. The federal government loved the accountability system, declaring it a model government program. It used lots of recordkeeping, monitoring and strictly enforced business rules to keep projects from spinning out of control and burning up truckloads of money. And it succeeded at that, but at the same time became so burdensome that it strangled initiative. “It’s like having a building, refusing to properly maintain it, and then when it inevitably starts to fall apart you say, ‘See, there is no reason to maintain this building. It’s falling apart,’” says Fred Trotter, CEO of DocGraph and one of the country’s leading health IT hackers.

Baker, to his credit, was committed to making VistA flourish again. “When people are passionate about a product, they want to keep working it,” he told me. “They do outrageous amounts of work to make the product as good as possible. In the 1980s and 1990s, that happened with VistA. By the time I got there, the passion wasn’t at the VA anymore.” To reignite the original spirit of the VistA team, he created a sort of “X Prize,” an initiative for a relatively inexpensive project that would bring in open-source hackers, including some former VistA programmers, to wrap VistA in more contemporary code. The Veterans Health Administration put out bids for that work in 2011.

For the VistA lovers, the idealists who still thought open-source technology was the logical system for medical records, this was perhaps the last best chance to restore a treasured common property. Munnecke was one of them. So were Longman and a techie colleague at the New America Foundation, Sascha Meinrath. They created a consortium and enrolled Red Hat, a federal contractor and leading open-source developer, as well as several former Hardhats to help them shape the bid. They were sure they would get the job, although midway through the process, the entire Wisconsin congressional delegation—Epic, the big EHR maker, was based outside Madison—wrote Baker, urging him not to focus exclusively on open-source developers. In the end, the bid went to another consortium led by well-known contractors. It created a nonprofit organization that serves as a forum for open-source EHR developers, but, while full of idealists, the organization is underfunded and relatively toothless.

The loss of that contract stings Meinrath, now a technology professor at Penn State University, because he’s convinced that VistA’s open-source technology could have been used across the health care system. “If we’d been able to do this, [the] Healthcare.gov [disaster] wouldn’t have happened,” he maintains. “We could have built a backbone recordkeeping system that was standardized and extensible to all the systems that all the different states were using.”

Baker tried to strengthen VistA by attempting, once again, to convince the Pentagon to incorporate the VA’s software in a shared health-records project. Even entering the project, many VA officials were convinced it would fail because they sensed the DOD was not really interested. The climax, after $564 million in spending, came at a November 2012 meeting at which the VA and Pentagon leaders of the project presented slides to VA Secretary Eric Shinseki and Defense Secretary Leon Panetta. “If you’ve ever been on the receiving side of a full fusillade from Leon Panetta. … It was not a lot of fun,” says a former VA official who attended the meeting. “If we could have gotten DOD in, we could have gotten the critical mass we needed to rebuild VistA.” With the failure of the shared records project, VistA’s future also seemed to slip away.

THE VA STILL runs on VistA, and IT teams are still working to improve the interface for clinicians and to improve connections between the Veterans Health Administration, the Pentagon and other VA programs, such as the one that enrolls vets in benefits. But there’s a bull’s-eye on the program’s back. Since the failure of the integration effort, and the subsequent 2014 scheduling fiasco, in which VA officials in Phoenix falsified records to make the agency look better at timely treatment of veterans, congressional committees have repeatedly called VA and Defense officials to testify. The solons blast the bureaucrats over the scheduling system and shake their fists about the supposed lack of interoperability between Defense and the VA—though in fact, experts say, there is far better transfer of information between the VA and DOD than between any two other U.S. health care entities. And minds are made up about VistA. “It’s like an old Buick that gets you from Point A to Point B. But wouldn’t we rather be riding inside an air-conditioned new Cadillac?” the new Veterans Affairs Committee chairman, physician Phil Roe of Tennessee, said at a hearing last year.

An updated user interface for VistA, called the Enterprise Health Management Platform, seems to be progressing, although when it was implemented at the VA in Hampton Roads, Virginia, in a March 2015 pilot, it caused a system crash that affected clinical services at the hospital for a month. Not enough good computer expertise was left at VistA to manage the task, according to one former senior official. With the dramatic growth of commercial EHRs, the skills and training and innovation had moved out of VistA and out of open-source EHRs, says David Waltman, chief information strategy officer at the Veterans Health Administration. A blue-ribbon panel on the VA's future urged the agency to dump VistA. The membership of the panel included conservative-funded veterans groups and was led by leaders of hospital systems that have spent hundreds of millions to install Epic software. Not a single VistA expert was on the panel or consulted by it.

At a summer meeting of the open-source EHR forum, the VA's then-chief information officer, LaVerne Council, indicated it was time to move on.

“Technologists accept change and hate old stuff,” she said. VistA had become a victim of the factors that led to its strength. Its distributed development resulted in a thousand flowers blooming, each one a different species, and in the foggy ruins of time no one could identify the people who wrote the code or how it could be amended. Managing all of those applications across a 167-hospital national system, some said, was a nightmare—more than half of the VA’s $4 billion annual IT budget goes to maintaining existing software. Under its current tentative plans, the VA will continue to update VistA’s interface through 2018. After that, the VA is likely to swap out VistA for a commercial EHR. The speculation is that bid would go to Leidos and Cerner—if their current partnership with the DOD continues to move along on schedule. Some believe the fix is in already, because a number of Leidos IT specialists already work inside the VA. At least two other big EHR vendors, Epic and Allscripts, say they’ll also bid to take over the VA’s EHR duties.

This is bitter fruit for many VistA fans. Some still say the system could be fixed for $200 million a year—the cost of a medium-sized hospital system’s EHR installation. “I don’t know if there even is an EHR out there with data comparable to the longitudinal data that VistA has about veterans, and we certainly do not want to throw that data out if a new EHR were to be used,” says Nancy Anthracite, a Hardhat and an infectious-disease physician.
Replacing VistA could be a colossal task—the military is spending $4.3 billion to switch to a Cerner EHR by the end of 2022, and Baker has estimated it could cost the VA $16 billion to rip and replace VistA. To maintain the functions that doctors like about it would require intricately lacing together new and existing software. Pitching all the good old stuff because it’s too old and complex to integrate with a new EHR would be tremendously wasteful and frustrating to doctors. A decision on whether to switch to a commercial system has been set for July 1. Some, including former CIO Baker, think the VA should wait until at least September, at which point the IT world will have a clearer picture of how the military’s implementation of a Cerner EHR is progressing.

“If they change the system and it becomes user unfriendly, that would be a major disaster,” says Ross Fletcher. “I would hope that the creativity and leadership are not supplanted. Creativity is important.”

OUTSIDE THE VA, the story of digitization of American medicine isn’t any smoother than the story of VistA. It’s true that an industry that used to run on paper records and folders is now mostly digital—a transformation accelerated by the $35 billion in subsidies offered as part of the stimulus package. But the promises of electronic records—improving care, improving health and making the industry more efficient—have only been partly borne out. Today the verdict is that electronic health records may have made medicine safer, but they’ve actually reduced, not improved, its efficiency.

Doctors now spend an average of $32,000 a year each on health IT installation and maintenance, and roughly 40 percent of their time working on the computer. Ask doctors what really bugs them and it doesn’t take long to get to the software: clunky, vulnerable to hackers, and built by competing players without agreed-upon standards, so patient information often ends up locked in the records of competing software companies and hospital systems. The highly educated doctor delivering you cutting-edge medicine is stuck on a computer system you’d have been annoyed to find on your desktop back in 2005.

It’s reasonable to see this, of course, as the growing pains of a new industry, much like computers themselves, which were originally a mishmash of competing systems until they standardized around just a couple. The alternative seems unrealistic: a centrally designed system that has gone through years of testing and improvement. But in one of Washington’s strange ironies, the government really did develop such a system. And the government is about to spend billions of dollars to scrap and replace it.

The four-decade struggle over the VA’s health IT system reveals some philosophical and logistical quandaries that arise in big health-care management systems, in government IT—and in the economy in general. Should the evolving pieces of a complex technological program run independently, or as an integrated, centralized whole? Should they be customized to user needs, or standardized for simplicity’s sake? Does the answer lie in open-source or proprietary technology? Does passion or accountability promise the most success? Problem solving or planning? Creativity or control? And who should be trusted to come up with the correct decisions—experienced careerists or outside analysts? Or politicians? And finally, who best serves the veteran: the private sector or the government?

There are still VistA die-hards who think it could be revived were it not for politics and money. “It’s hard to argue that VistA is an old racehorse when it still comes in first in the races,” says DocGraph’s Trotter, himself a Hardhat. For the most part, however, even the men and women who built and patched and rejuvenated VistA over the years have given up on it. It has been kicked around and neglected too long, like a former Kentucky Derby candidate chewing up pasture while awaiting a trip to the glue factory. Even so, a few dream that in a parallel universe, a place with less grandstanding about brave veterans and incompetent bureaucrats, a place with fewer lobbying dollars and more humility, the racehorse could keep on running.

“Perhaps this was a golden era of the kind of, ‘Let’s all make the world a better place by working together’ attitude,” Munnecke says. “It seems terribly naive today, but it was a driving force back then.”

Article link: http://www.politico.com/agenda/story/2017/03/vista-computer-history-va-conspiracy-000367

Transformation with a capital T – McKinsey Quarterly

Posted by timmreardon on 12/09/2016
Posted in: Uncategorized. Leave a comment

By Michael Bucy, Stephen Hall, and Doug Yakola

Companies must be prepared to tear themselves away from routine thinking and behavior.

Imagine. You lead a large basic-resources business. For the past decade, the global commodities supercycle has fueled volume growth and higher prices, shaping your company’s processes and culture and defining its outlook. Most of the top team cannot remember a time when the business priorities were different. Then one day it dawns on you that the party is over.

Or imagine again. You run a retail bank with a solid strategy, a strong brand, a well-positioned branch network, and a loyal customer base. But a growing and fast-moving ecosystem of fintech players—microloan sites, peer-to-peer lenders, algorithm-based financial advisers—is starting to nibble at your franchise. The board feels anxious about what no longer seems to be a marginal threat. It worries that management has grown complacent.

In industry after industry, scenarios that once appeared improbable are becoming all too real, prompting boards and CEOs of flagging (or perhaps merely drifting) businesses to embrace the T-word: transformation.

Transformation is perhaps the most overused term in business. Often, companies apply it loosely—too loosely—to any form of change, however minor or routine. There are organizational transformations (otherwise known as org redesigns), when businesses redraw organizational roles and accountabilities. Strategic transformations imply a change in the business model. The term transformation is also increasingly used for a digital reinvention: companies fundamentally reworking the way they’re wired and, in particular, how they go to market.

What we’re focused on here—and what businesses like the previously mentioned bank and basic-resource companies need—is something different: a transformation with a capital T, which we define as an intense, organization-wide program to enhance performance (an earnings improvement of 25 percent or more, for example) and to boost organizational health. When such transformations succeed, they radically improve the important business drivers, such as topline growth, capital productivity, cost efficiency, operational effectiveness, customer satisfaction, and sales excellence. Because such transformations instill the importance of internal alignment around a common vision and strategy, increase the capacity for renewal, and develop superior execution skills, they enable companies to go on improving their results in sustainable ways year after year. These sorts of transformations may well involve exploiting new digital opportunities or accompany a strategic rethink. But in essence, they are largely about delivering the full potential of what’s already there.

The reported failure rate of large-scale change programs has hovered around 70 percent over many years. In 2010, conscious of the special challenges and disappointed expectations of many businesses embarking on transformations, McKinsey set up a group to focus exclusively on this sort of effort. In six years, our Recovery & Transformation Services (RTS) unit has worked with more than 100 companies, covering almost every geography and industry around the world. These cases—both the successes and the efforts that fell short—helped us distill a set of empirical insights about improving the odds of success. Combined with the right strategic choices, a transformation can turn a mediocre (or good) business into a world-class one.


Why transformations fail

Transformations as we define them take up a surprisingly large share of a leadership’s and an organization’s time and attention. They require enormous energy to realize the necessary degree of change. Herein lie the seeds of disappointment. Our most fundamental lesson from the past half-dozen years is that average companies rarely have the combination of skills, mind-sets, and ongoing commitment needed to pull off a large-scale transformation.

It’s true that across the economy as a whole, “creative destruction” has been a constant, since at least 1942, when Joseph Schumpeter coined the term. But for individual organizations and their leaders, disruption is episodic and sufficiently infrequent that most CEOs and top-management teams are more accomplished at running businesses in stable environments than in changing ones. Odds are that their training and practical experience predominantly take place in times when extensive, deep-rooted, and rapid changes aren’t necessary. For many organizations, this relatively placid experience leads to a “steady state” of stable structures, regular budgeting, incremental targets, quarterly reviews, and modest reward systems. All that makes leaders poorly prepared for the much faster-paced, more bruising work of a transformation. Intensive exposure to such efforts has taught us that many executives struggle to change gears and can be reluctant to lead rather than delegate when they face external disruption, successive quarters of flagging performance, or just an opportunity to up a company’s game.

Executives embarking on a transformation can resemble career commercial air pilots thrust into the cockpit of a fighter jet. They are still flying a plane, but they have been trained to prioritize safety, stability, and efficiency and therefore lack the tools and pattern-recognition experience to respond appropriately to the demands of combat. Yet because they are still behind the controls, they do not recognize the different threats and requirements the new situation presents. One manufacturing executive whose company learned that lesson the hard way told us, “I just put my head down and worked harder. But while this had got us out of tight spots in the past, extra effort, on its own, was not enough this time.”

Tilting the odds toward success

The most important starting point of a transformation, and the best predictor of success, is a CEO who recognizes that only a new approach will dramatically improve the company’s performance. No matter how powerful the aspirations, conviction, and sheer determination of the CEO, though, our experience suggests that companies must also get five other important dimensions right if they are to overcome organizational inertia, shed deeply ingrained steady-state habits, and create a new long-term upward momentum. They must identify the company’s full potential; set a new pace through a transformation office (TO) that is empowered to make decisions; reinforce the executive team with a chief transformation officer (CTO); change employee and managerial mind-sets that are holding the organization back; and embed a new culture of execution throughout the business to sustain the transformation. The last is in some ways the most difficult task of all.

Stretch for the full potential

Targets in most corporations emerge from negotiations. Leaders and line managers go back and forth: the former invariably push for more, while the latter point out all the reasons why the proposed targets are unachievable. Inevitably, the same dynamic applies during transformation efforts, and this leads to compromises and incremental changes rather than radical improvements. When managers at one company in a highly competitive, asset-intense industry were shown strong external evidence that they could add £250 million in revenue above what they themselves had identified, for example, they immediately talked down the proposed targets. For them, targets meant accountability—and, when missed, adverse consequences for their own compensation. Their default reaction was “let’s underpromise and overdeliver.”

To counter this natural tendency, CEOs should demand a clear analysis of the company’s full value-creation potential: specific revenue and cost goals backed up by well-grounded facts. We have found it helpful for the CEO and top team to assume the mind-set, independence, and tool kit of an activist investor or private-equity acquirer. To do so, they must step outside the self-imposed constraints and define what’s truly achievable. The message: it’s time to take a single self-confident leap rather than a series of incremental steps that don’t lead very far. In our experience, targets that are two to three times a company’s initial estimates of its potential are routinely achievable—not the exception.

Change the cadence

Experience has taught us that it’s essential to create a hub to oversee the transformation and to drive a cadence markedly different from the normal day-to-day one. We call this hub the transformation office.

What makes a TO work? One company with a program to boost EBITDA (earnings before interest, taxes, depreciation, and amortization) by more than $1 billion set up an unusual but highly effective TO. For a start, it was located in a circular room that had no chairs—only standing room. Around the wall was what came to be known, throughout the business, as “the snake”: a weekly tracker that marked progress toward the goal. By the end of the process, the snake had eaten its own tail as the company materially exceeded its financial target.

Each Tuesday, at the weekly TO meeting, work-stream leaders and their teams reviewed progress on the tasks they had committed to the previous week and made measurable commitments for the coming week in front of their peers. They used only handwritten whiteboard notes—no PowerPoint presentations—and had just 15 minutes apiece to make their points. Owners of individual initiatives within each work stream reviewed their specific initiatives on a rotating basis, so third- or fourth-level managers met the top leaders, further increasing ownership and accountability. Even the divisional CEO made a point of attending these TO meetings each time he visited the business, an experience that in hindsight convinced him that the TO process was more crucial than anything else to shifting the company’s culture.

For senior leaders, distraction is the constant enemy. Most prefer talking about new customers, M&A opportunities, or fresh strategic choices—hence the temptation at the top to delegate responsibility to a steering committee or an old-style program-management office charged with providing periodic updates. When top management’s attention is diverted elsewhere, line managers will emulate that behavior when they choose their own priorities.

Given these distractions, many initiatives move too slowly. Parkinson’s law states that work expands to fill the time available, and business managers aren’t immune: given a month to complete a project requiring a week’s worth of effort, they will generally start working on it a week before the deadline. In successful transformations, a week means a week, and the transformation office constantly asks, “how can you move more swiftly?” and “what do you need to make things happen?” This faster clock speed is one of the most defining characteristics of successful transformations.

Collaborating with senior leaders across the entire business, the TO must have the grit, discipline, energy, and focus to drive forward perhaps five to eight major work streams. All of them are further divided into perhaps hundreds (even the low thousands) of separate initiatives, each with a specific owner and a detailed, fully costed bottom-up plan. Above all, the TO must constantly push for decisions so that the organization is conscious of any foot dragging when progress stalls.

Bring on the CTO

Managing a complex enterprise-wide transformation is a full-time executive-level job. It should be filled by someone with the clear authority to push the organization to its full potential, as well as the skills, experience, and even personality of a seasoned fighter pilot, to use our earlier analogy.

The chief transformation officer’s job is to question, push, praise, prod, cajole, and otherwise irritate an organization that needs to think and act differently. One CEO introduced a new CTO to his top team by saying, “Bill’s job is to make you and me feel uncomfortable. If we aren’t feeling uncomfortable, then he’s not doing his job.” Of course, the CTO shouldn’t take the place of the CEO, who (on the contrary) must be front and center, continually reinforcing the idea that this is my transformation.

Many leaders of traditional program-management offices are strong on processes but unable or unwilling to push the CEO and top team. The right CTO can sometimes come from within the organization. But one of the biggest mistakes we see companies making in the early stages is to choose the CTO only from an internal slate of candidates. The CTO must be dynamic, respected, unafraid of confrontation, and willing to challenge corporate orthodoxies. These qualities are harder to find among people concerned about protecting their legacy, pursuing their next role, or tiptoeing around long-simmering internal political tensions.

What does a CTO actually do? Consider what happened at one company mounting a billion-dollar productivity program. The new CTO became exasperated as executives focused on individual technical problems rather than the worsening cost and schedule slippage. Although he lacked any background in the program’s technical aspects, he called out the facts, warning the members of the operations team that they would lose their jobs—and the whole project would close—unless things got back on track within the next 30 days. The conversation then shifted, resources were reallocated, and the operations team planned and executed a new approach. Within two weeks, the project was indeed back on track. Without the CTO’s independent perspective and candor, none of that would have happened.

Remove barriers, create incentives

Many companies perform under their full potential not because of structural disadvantages but rather through a combination of poor leadership, a deficient culture and capabilities, and misaligned incentives. In good or even average times, when businesses can get away with trundling along, these barriers may be manageable. But the transformation will reach full potential only if they are addressed early and explicitly. Common problematic mind-sets we encounter include prioritizing the “tribe” (local unit) over the “nation” (the business as a whole), being too proud to ask for help, and blaming the external world “because it is not under our control.”

One public utility we know was paralyzed because its employees were passively “waiting to be told” rather than taking the initiative. Given its history, they had unconsciously decided that there was no advantage in taking action, because if they did and made a mistake, the results would make the front pages of newspapers. A bureaucratic culture had hidden the underlying cause of paralysis. To make progress, the company had to counter this very real and well-founded fear.

McKinsey’s influence model, one proven tool for helping to change such mind-sets, emphasizes telling a compelling change story, role modeling by the senior team, building reinforcement mechanisms, and providing employees with the skills to change. While all four of these interventions are important in a transformation, companies must address the change story and reinforcement mechanisms (particularly incentives) at the outset.

An engaging change story. Most companies underestimate the importance of communicating the “why” of a transformation; too often, they assume that a letter from the CEO and a corporate slide pack will secure organizational engagement. But it’s not enough to say “we aren’t making our budget plan” or “we must be more competitive.” Engagement with employees and managers needs to have a context, a vision, and a call to action that will resonate with each person individually. This kind of personalization is what motivates a workforce.

At one agribusiness, for example, someone not known for speaking out stood up at the launch of its transformation program and talked about growing up on a family farm, suffering the consequences of worsening market conditions, and observing his father’s struggle as he had to postpone retirement. The son’s vision was to transform the company’s performance out of a sense of obligation to those who had come before him and a desire to be a strong partner to farmers. The other workers rallied round his story much more than the financially based argument from the CEO.

Incentives. Incentives are especially important in changing behavior. In our experience, traditional incentive plans, with multiple variables and weightings—say, six to ten objectives with average weights of 10 to 15 percent each—are too complicated. In a transformation, the incentive plan should have no more than three objectives, with an outsized payout for outsized performance; the period of transformation, after all, is likely to be one of the most difficult and demanding of any professional career. The usual excuses (such as “our incentive program is already set” or “our people don’t need special incentives to give their best”) should not deter leaders from revisiting this critical reinforcement tool.

Nonmonetary incentives are also vital. One CEO made a point, each week, of writing a short handwritten note to a different employee involved in the transformation effort. This cost nothing but had an almost magical effect on morale. In another company, an employee went far beyond normal expectations to deliver a particularly challenging initiative. The CEO heard about this and gathered a group, including the employee’s wife and two children, for a surprise party. Within 24 hours, the story of this celebration had spread throughout the company.

No going back

Transformations typically degrade rather than visibly fail. Leaders and their employees summon up a huge initial effort; corporate results improve, sometimes dramatically; and those involved pat themselves on the back and declare victory. Then, slowly but surely, the company slips back into its old ways. How many times have frontline managers told us things like “we have undergone three transformations in the last eight years, and each time we were back where we started 18 months later”?


The true test of a transformation, therefore, is what happens when the TO is disbanded and life reverts to a more normal rhythm. What’s critical is that leaders try to bottle the lessons of the transformation as it moves along and to ingrain, within the organization, a repeatable process to deliver better and better results long after it formally ends. This often means, for example, applying the TO meetings’ cadence and robust style to financial reviews, annual budget cycles, even daily performance meetings—the basic routines of the business. It’s no good starting this effort near the end of the program. Embedding the processes and working approaches of the transformation into everyday activities should start much earlier to ensure that the momentum of performance continues to accelerate after the transformation is over.

Companies that create this sort of momentum stand out—so much that we’ve come to view the interlocking processes, skills, and attitudes needed to achieve it as a distinct source of power, one we call an “execution engine.” Organizations with an effective execution engine conspicuously continue to challenge everything, using an independent perspective. They act like investors—all employees treat company money as if it were their own. They ensure that accountability remains in the line, not in a central team or external advisers. Their focus on execution remains relentless even as results improve, and they are always seeking new ways to motivate their employees to keep striving for more. By contrast, companies doomed to fail tend to revert to high-level targets assigned to the line, with a minimal focus on execution or on tapping the energy and ideas of employees. They often lose the talented people responsible for the initial achievements to headhunters or other internal jobs before the processes are ingrained. To avoid this, leaders must take care to retain the enthusiasm, commitment, and focus of these key employees until the execution engine is fully embedded.

Consider the experience of one company that had realized a $4 billion (40 percent) bottom-line improvement over several years. The impetus to “go back to the well” for a new round of improvements, far from being a top-leadership initiative, came out of a series of conversations at performance-review meetings where line leaders had become energized about new opportunities previously considered out of reach. The result was an additional billion dollars of savings over the next year.


Nothing about our approach to transformations is especially novel or complex. It is not a formula reserved for the most able people and companies, but we know from experience that it works only for the most willing. Our key insight is that to achieve a transformational improvement, companies need to raise their ambitions, develop different skills, challenge existing mind-sets, and commit fully to execution. Doing all this can produce extraordinary and sustainable results.

Article link: http://www.mckinsey.com/business-functions/mckinsey-recovery-and-transformation-services/our-insights/transformation-with-a-capital-t?cid=other-nsl-mkq-mck-oth-1612

 

Being patient-centric in a digitizing world – McKinsey Quarterly

Posted by timmreardon on 12/09/2016
Posted in: Uncategorized. Leave a comment

A Danish pharma company’s strong customer focus and determined digital drive have important lessons for other businesses.

From company headquarters, in the suburbs of Copenhagen, LEO Pharma has been stepping up its strategy to become the world’s leading company for people with skin diseases. McKinsey senior partner Martin Møller recently talked with LEO Pharma’s president and CEO, Gitte Aabo, about the group’s efforts to better understand the needs of patients and about its recent investment in LEO Innovation Lab, a stand-alone unit designed to develop digital solutions for patients.

The Quarterly: At LEO Pharma, everything seems to be about the patient. What exactly does patient-centricity mean—and to what extent is this idea new?

Gitte Aabo: Clearly, it’s always been the case at LEO Pharma—as it should be at any pharma company—that we care about delivering excellent treatments to patients. But we’ve taken this one step further by asking ourselves not just whether our treatments are safe and efficacious but also whether they are convenient and truly address patients’ needs.

One of the obstacles we face is that even though skin diseases can have a profound impact on the lives of patients, patients don’t always adhere to treatments, often because they find it too difficult to use the products. We need to remember that patients are people like you and me, who get up in the morning, go to work, and pick up their kids after school. So if we come up with a treatment, like an ointment, that takes patients a long time to apply every day, they most likely won’t. We want to respond to this.

The Quarterly: How has patient-centricity changed the way you do things in practice?

Gitte Aabo: One example is that we have asked anthropologists who study psoriasis patients in various parts of the world to help us understand not only the needs that these patients are able to express themselves but also some of the unmet needs that, maybe, they are not even aware of. Indeed, this led to a new treatment applicator, which is now being used by people with psoriasis all over the world.

Sidebar

Gitte Aabo biography

Gitte Aabo
Education
Earned an MBA at Copenhagen Business School

Career Highlights
LEO Pharma (1992–present)

President and CEO (2008–present)

Deputy group managing director (2007)

Vice president and senior vice president, finances and IT (1999–2006)

Fast Facts
Member of the board of directors of the Danish National Bank

Another example is in R&D, where we now specifically work to address the issues of different personas. We are very conscious, for instance, that a young girl who gets psoriasis in her teenage years—a time when she is concerned with her looks, thinking about a first date, and worrying about her education—will react differently from a 70-year-old man in the same situation. That is reflected in how we develop treatments and support these different types of patients.

To me, patient-centricity means being deeply immersed in patients’ needs, not just thinking about how to develop new products and new features. It means reaching out to patients and considering treatments that will help them in whatever situation they find themselves in.

The Quarterly: How have you changed the culture of the company to reflect this thinking?

Gitte Aabo: That is a huge challenge and clearly not something that happens overnight. We’ve done a number of things. Every employee who joins LEO Pharma, for example, meets a patient as part of the induction. And the incentive schemes for all senior managers are now split into three categories: patients, people, and performance—with patients being the one that has the heaviest weighting.

Other elements still need to change. Take our clinical trials. What does a successful clinical trial look like in a patient-centric culture? It requires a focus on convenience—ease of use—and on reported patient outcomes as much as on safety and efficacy, and it requires openly sharing the results. As an example, we have taken steps toward the latter with our commitment to transparency. We were the second company, globally, to commit itself to increased disclosure of clinical-trial information. We are proud of that commitment but want to do even more.

The Quarterly: Can you tell us about the LEO Innovation Lab? Why did you create a separate unit, and what is its relationship with the rest of the company?

Gitte Aabo: The idea behind the LEO Innovation Lab has been to build and test digital technologies and platforms that will address areas the pharmaceutical industry typically overlooks. We wanted, above all, to create an environment that resembles a start-up company because we realized that the competencies we need are very different from what we find in many employees with scientific backgrounds. A company with a more than 100-year history probably doesn’t have that start-up environment. Hence the decision to opt for a separate unit, with a different way of working that would attract people wanting to innovate in the digital space.

The Quarterly: How did you decide where to locate the LEO Innovation Lab?

Gitte Aabo: We felt it was important to locate the lab in the center of Copenhagen, where younger, digitally savvy people are more likely to want to work, rather than in the suburbs, where LEO Pharma is headquartered. And it was important to be in Copenhagen—not, say, Silicon Valley—so that we could more easily transfer all the insights we have in the company about the physical, social, and psychological impact of living with a skin disease.

To guide the LEO Innovation Lab, we have put in place an advisory board that combines people from the business in LEO Pharma with people well known within the start-up and digital space. The latter bring knowledge, experience, and networks to the table, but, most important, they set the tone for a start-up environment in culture and values.

Besides Copenhagen, we have satellite labs in the UK, France, and Canada—all markets where we have a very strong presence and close relationships with dermatologists, payors, and pharmacists. To reach out to patients, we need a deep understanding of the ecosystems surrounding them.


The Quarterly: What results are you expecting from the LEO Innovation Lab, and how will you measure them?

Gitte Aabo: In the first instance, we aim to develop specialized apps to give people living with skin diseases resources like dietary advice, beauty tips for psoriasis sufferers, and general ideas on how patients can benefit from their interactions with healthcare professionals. We will have KPIs to track how many people with skin diseases use our solutions and continue to use them. We believe that the better patients are informed and understand a disease, the better they will be able to take control of it and adhere to treatment.

The Quarterly: How flexible is the operating model of LEO Innovation Lab?

Gitte Aabo: It’s flexible in the sense that it’s scalable. The lab operates a lot through external partnerships and hiring people with specialized competences on shorter assignments to work on a particular digital solution.

We’ve allocated around €60 million for the next three years and are already considering how to continue the initiative, and in what form, when that period is up. We want to strike a balance, ensuring that there is enough funding to have an impact, while not providing so much money that it discourages the sort of risk taking, pragmatism, and agility that distinguish the best start-ups.

I hope that some of the thinking applied in LEO Innovation Lab will rub off on how we run projects or processes inside the traditional, nondigital part of LEO Pharma. In LEO Innovation Lab, we have an innovation process that runs within 100 days—100 days from the point we have an idea to the moment we have a solution on the market. Although I would love to see that kind of speed in my innovation process in more traditional research and development, that’s not possible for many reasons. Still, there are elements that we can learn from and apply elsewhere in the business.

The Quarterly: With LEO Innovation Lab, you’ve been active in seeking innovation partnerships. What technologies are you most interested in, and what characteristics do you look for in potential partners?

Gitte Aabo: We are particularly interested, at the moment, in the combination of imaging and artificial intelligence. Currently, general practitioners, or family doctors, have a limited ability to diagnose a skin disease. Studies show that only about 50 percent of eczema cases, for instance, are correctly diagnosed by these GPs. By combining imaging technology with pictures taken on a mobile phone, you can build up knowledge, over time, about what eczema looks like or what a melanoma looks like. We’ve recently invested in a company whose app to detect melanoma can provide as accurate a diagnosis, with images taken by an individual patient, as the best specialists.

The Quarterly: How does the legal and regulatory framework affect LEO Pharma’s strategy?

Gitte Aabo: The legal and regulatory frameworks reflect the credibility of our industry in the eyes of society. Credibility is crucial to the industry because a lot of people don’t trust pharma companies. That’s something we need to address and change in the coming years, and there’s only one way to do it—by being transparent about our clinical trials and our other activities.

The Quarterly: As you look ahead, what worries you and what excites you?

Gitte Aabo: One of the things that excites me is the level of access to information that patients now have, which will further increase. I believe this is going to change the whole dynamic of the healthcare system. We’ve only scratched the surface at the moment, but more information will have a profound impact on the physician’s role, the patient’s role, and our role as a company. Patients will have more decision power, at least when it comes to chronic diseases, and as a citizen I think that’s a healthy development. It’s also challenging because it requires a completely new business model, in which the patient gradually moves to the foreground.

The Quarterly: Is it important for LEO Pharma to prioritize long-term success over short-term gain?

Gitte Aabo: I think it’s important for the entire pharma industry if we want to be perceived as credible and to run a sustainable business. In the years to come, people will increasingly select not just a pharmaceutical product but the company behind that product—and that’s where trust is vital. That mind-set is embedded in how we run the business and how we make investments. The fact that LEO Pharma is owned 100 percent by a foundation strengthens our ability to think and act for the long term and is closely related to our credibility.


For more interviews on how the pharmaceutical industry is evolving and how leaders can adapt, see Biopharma Frontiers.

About the author(s)

Gitte Aabo is the president and CEO of LEO Pharma. This interview was conducted by Martin Møller, a senior partner in McKinsey’s Copenhagen office.

Article link: http://www.mckinsey.com/industries/pharmaceuticals-and-medical-products/our-insights/being-patient-centric-in-a-digitizing-world?cid=other-alt-mkq-mck-oth-1612

A Simple Way to Measure Health Care Outcomes – HBR

Posted by timmreardon on 12/09/2016
Posted in: Uncategorized. Leave a comment


  • Amitabh Chandra
  • Robert S. Huckman
  • John Schupbach

 

December 08, 2016

Despite the current uncertainty surrounding the fate of the Affordable Care Act (ACA), health care leaders must not let debates over access detract from what needs to happen regardless of the legislation’s fate: Their organizations must improve the value of care they deliver. When the ACA was passed, in 2010, many observed that although the ACA expanded access to health care, it did less to address the equally critical issue of improving the delivery of that care. Regardless of whether the ACA is repealed, this challenge remains.

The success of efforts to improve care delivery hinges on developing clear approaches to performance measurement. A guiding principle offered by many has been to focus on improving the value of care delivered, as measured by the health outcomes achieved per dollar spent on care. Typically, value is expressed as a ratio: the quality of outcomes (adjusted to account for the severity of a patient’s condition) divided by the cost of treating that condition. Improving value therefore occurs by increasing quality relative to cost. For the most part, care organizations are better able to measure the denominator (cost) than the numerator (quality) of the value equation. As a result, they spend too much time focusing on costs and cost growth instead of value and its growth.
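The value ratio described above can be made concrete with a short sketch. All names, scales, and numbers here are hypothetical illustrations, not from the article; the point is only that a cheaper provider is not automatically the higher-value one once risk-adjusted quality enters the numerator.

```python
def care_value(outcome_score, severity_weight, cost):
    """Value of care = risk-adjusted outcome quality per dollar spent.

    outcome_score: quality of outcomes on some agreed scale
        (e.g. 0-100; hypothetical)
    severity_weight: adjustment for patient-condition severity
        (>1 for sicker-than-average panels, so providers treating
        harder cases are not penalized)
    cost: total cost of treating the condition, in dollars
    """
    if cost <= 0:
        raise ValueError("cost must be positive")
    return (outcome_score * severity_weight) / cost

# Two providers treating the same condition: B is cheaper,
# but A delivers more quality per dollar after risk adjustment.
a = care_value(outcome_score=85, severity_weight=1.2, cost=12000)
b = care_value(outcome_score=70, severity_weight=1.0, cost=9000)
```

Improving value then means moving this ratio up, by raising the numerator relative to the denominator, which is exactly why an organization that can only measure cost ends up managing only cost.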

Our suggestion for a starting point to measure outcomes is simple, easy, and inexpensive: Ask the patient. For many years, health care providers have worked to collect so-called patient-reported outcomes measures, or PROMs, which are measures of function and health status reported by patients. The widespread and consistent use of PROMs, however, has proven to be elusive for a range of reasons, including the complexity of the measures tracked and the varying reliability of patient assessments on many measures.

As providers continue to refine their approaches to collecting PROMs, they should consider developing simpler approaches for capturing feedback from patients. We emphasize that these simpler methods would not displace PROMs, but rather serve as a complement to them. For example, Net Promoter Score (NPS) is a method of measuring customer loyalty or advocacy across a wide array of industries, from hospitality to financial services to consumer technology. It uses a single question: whether a consumer would recommend a product or service to a friend or colleague. NPS at the level of a unit or organization is then calculated as the percentage of respondents who are “promoters” less the percentage who are “detractors.”
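The NPS arithmetic is simple enough to sketch in a few lines of Python. The 9–10 “promoter” and 0–6 “detractor” bands are the conventional NPS cutoffs, an assumption on our part; the article itself only defines the score as the percentage of promoters minus the percentage of detractors.

```python
def nps(scores):
    """Net Promoter Score from 0-10 'would you recommend' ratings.

    Conventional bands (assumed, not stated in the article):
    promoters score 9-10, passives 7-8, detractors 0-6.
    Returns a value between -100 and +100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100.0 * (promoters - detractors) / len(scores)

# Seven responses for one condition or provider: 3 promoters,
# 2 passives, 2 detractors, so the score is modestly positive.
score = nps([10, 9, 9, 8, 7, 6, 3])
```

The same function could be run per condition or per provider within an institution, as the paragraph above suggests, simply by partitioning the responses before calling it.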

In health care, NPS could also be calculated at the level of a given condition or provider within an institution. Asking even a single question to patients may help organizations understand the needs of each patient and identify opportunities for improvement that may not require much investment. In addition to being used for internal improvement, this data could inform consumers, employers, and insurers in their decisions to purchase care from particular providers.

“The Elements of Value,” an article in the September 2016 issue of Harvard Business Review, discusses the role that product or service quality plays in customer advocacy. The article states: “Across all the industries we studied, perceived quality affects customer advocacy more than any other element. Products and services must attain a certain minimum level, and no other elements can make up for a significant shortfall on this one.” What this suggests is that patient-reported satisfaction may be a reasonable proxy for quality. But what does it reliably tell us about clinical outcomes?

Another recent HBR article by authors from the Geisinger Health System, which offers a satisfaction guarantee to patients, acknowledges that “critics of programs that improve patient satisfaction will often imply that there is a false equivalence between efforts geared towards improving quality and those aimed at improving patient experience — that quality efforts are in some way superior.” The authors challenge this critique, noting, “[Recent] strong evidence [see this article and this one] suggests that improved patient satisfaction is in fact correlated with better health outcomes and quality: Increased satisfaction is associated with decreased length of hospital stay, lower readmission rates, reduced mortality, and fewer minor complications.”

We acknowledge that correlation does not imply causation. Rather, a high-quality outcome is the result of several factors and actions that, when properly aligned, result in patient satisfaction. In this case, satisfaction is meaningful not because it causes quality but because it may indicate that quality has been delivered.

For many types of care — especially care that is routine, highly standardized, and completed over a short duration — one may thus get significant mileage out of asking patients very simple questions such as, “At this point, how satisfied are you with the outcome of your care?” Consider, for example, a knee replacement or appendectomy. Given that a return to normal function is a key outcome for these treatments and an expectation for many patients, asking patients at predetermined intervals post-treatment to rate the quality of their outcomes may prove highly valuable.

The value of such patient feedback in primary care and the management of chronic diseases such as diabetes or hypertension could also be substantial. Beyond the value of the information itself, the process of collecting it, which could involve interacting with a health coach or even a mobile app, may have the additional benefit of helping patients with the often-elusive goal of better adhering to their disease management plans.

For other types of health care, it will be harder for patients to assess outcomes that may be subtle, technical, and reveal themselves over extended periods of time. The fact that such assessment is difficult, however, should not be used to dismiss efforts to achieve it. For example, asking a patient with lung cancer to rate their care may not capture the quality of the medical treatment received, as a significant portion of the relevant outcomes of that treatment emerge during the months and years following initial diagnosis. Asking that patient to rate care at set intervals, however, may help observe some of those emerging outcomes.

We often assume that health care is different from other consumer products or services and that this uniqueness prioritizes the value of clinical measures of performance over those related to patient experience. By analogy, this assumption would have us measure smartphones primarily on battery life or cars on fuel economy. Yet other attributes of smartphones and cars, such as price and ease of use, matter too.

The view that health care is different misses the insight that it is different because it comes in a wide variety of forms — some routine, others quite complex. When it comes to measuring outcomes and value, not all health care falls into the “complex and hard to measure” bucket. Much of it is routine and predictable.

As such, the use of measures that we typically see in other consumer products and services may be relevant and helpful in measuring, and thus improving, value. In short, the fact that simple patient ratings of care may not be applicable for all types of care should not cause us to dismiss their use for any types of care.

Article link: https://hbr.org/2016/12/a-simple-way-to-measure-health-care-outcomes?utm_content=bufferbacca&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer


John Schupbach is an MD candidate at the Mayo Clinic School of Medicine, an MBA candidate at Harvard Business School, and founder and executive director of the nonprofit organization Squalor to Scholar.


Amitabh Chandra is the Malcolm Weiner Professor of Public Policy and the director of health policy research at the Harvard Kennedy School of Government and a visiting professor at Harvard Business School. He is an elected member of the National Academy of Medicine.


Robert S. Huckman is the Albert J. Weatherhead III Professor of Business Administration at Harvard Business School and is the current faculty chair of the HBS Health Care Initiative.
