Read full article: http://www.washingtonpost.com/sf/brand-connect/philips/transforming-healthcare/?origin=13_us_en_wpvbc_philipsnatwitter____nabcd_paid
Archives
All posts for the month March, 2017
Officials with the Department of Veterans Affairs repeatedly told lawmakers that the agency is moving on from its failed approach of building in-house IT systems, and is instead seeking commercial, off-the-shelf options to improve scheduling, EHR interoperability, and billing and claims processing.
In a hearing before the House Committee on Veterans’ Affairs on Tuesday, Rob Thomas, acting assistant secretary for information technology and CIO for the Office of Information and Technology at the VA, acknowledged previous failures in the agency’s attempt to modernize IT systems, but noted that it is no longer trying to build systems from the ground up.
“I’m confident we’re going to go commercial,” Thomas said, noting that the VA is considering commercial products and moving data to the cloud instead of spending money building customized in-house systems. However, he declined to offer a completion date.
RELATED: VA’s slow but steady push to modernize its technology
Lawmakers continued to criticize VA officials for using decades-old legacy systems, drawing comparisons to the overwhelming costs required to maintain an old car. As the Government Accountability Office (GAO) has previously pointed out, a large portion of the VA’s $4 billion IT budget goes to managing outdated systems that are more than 50 years old.
Since 2015, the GAO has included the VA's IT systems and the lack of interoperability between the Department of Defense and the VA on its list of "high risk" areas, and an official at the hearing said the issue would also appear on the GAO's 2017 list. The GAO has been critical of the VA's efforts to modernize its systems, highlighting an estimated $127 million in wasted funding used to rebuild the VA's outpatient scheduling system.
In a report released during the hearing, the GAO again criticized the VA for ongoing failures tied to its IT modernization efforts and recommended the agency develop concrete goals and metrics moving forward.
“The problem with federal government is that they are so reluctant to buy commercial products and change antiquated business practices,” David Powner, director of IT management issues at the GAO, said during the hearing. He added that the VA could save “hundreds of millions of dollars” by improving data consolidation efforts.
“Buying instead of building is the way to go,” he added.
Thomas agreed that the VA needs to aggressively shrink its footprint and spend less to maintain old legacy systems. He added that he is hoping for a speedy confirmation of David Shulkin, President Donald Trump's selection to lead the VA, so the agency can roll out a timeline for purchasing commercial systems.
RELATED: Donald Trump picks David Shulkin to lead VA
Despite the agency's history of modernization failures, Jennifer Lee, M.D., deputy under secretary for health for Policy and Services at the VA, emphasized the success of the agency's enterprise Health Management Platform (eHMP), which allows physicians to access patient information from participating hospitals outside the VA system.
Following failed attempts by the VA to modernize its VistA EHR system, the previous VA CIO, LaVerne Council, initiated a request for information to replace the system with a commercial product. Council previously testified that she cringes when she thinks about how old some of the VA systems are.
Thomas noted that he was chosen to replace Council to continue that transition toward an off-the-shelf product, adding that prior administrations did not have a coherent strategy for IT modernization or cybersecurity. But, given the VA's history of missing deadlines, Powner urged Congress to hold quarterly meetings and manage the process "with a heavy hand to make sure deadlines are met."
The processing required to prepare unstructured data for analysis can be cumbersome and prone to error. That’s why companies should do more to organize their data before it is ever collected.
Unstructured data — data that is not organized in a predefined way, such as text — is now widely available. But structure must be added to the data to make it usable for analysis, which means significant processing. That processing can be a problem.
In a form of modern alchemy, analytics processes now transmute "base" unstructured data into "noble" business value. Systems everywhere greedily salt away every imaginable kind of data. Technologies such as Hadoop and NoSQL store this hoard easily in its native unstructured form. Natural language processing, feature extraction (distilling nonredundant measures from larger data), and speech recognition now routinely alchemize vast quantities of unstructured text, images, audio, and video, preparing it for analysis. These processes are nothing short of amazing, working against entropy to create order from disorder.
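To make concrete what one such step looks like, here is a minimal, hypothetical sketch in Python (the function and field names are my own illustration, not drawn from the article) of the kind of feature extraction that turns free text into a structured record; every pass like this adds cost and another chance for error:

```python
import re
from collections import Counter

def extract_features(text: str) -> dict:
    """Distill a few structured features from free text.

    A toy stand-in for the NLP and feature-extraction steps described
    above: each such pass adds structure, but also processing cost and
    the possibility of error.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {
        "n_tokens": len(tokens),
        "n_unique": len(counts),
        "top_terms": counts.most_common(3),
        "mentions_cost": any(term in counts for term in ("price", "cost", "budget")),
    }

print(extract_features("The quoted price was far higher than the cost we had budgeted."))
```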
Unfortunately, while these processing steps are impressive, they are far from free or free from error. I can’t help but think that a better alternative in many cases would be to avoid the need for processing altogether.
We all know how each step in a process mangles information. In the telephone game, as each person whispers to the next player what they think was said to them, words can morph into an unexpected or misleading final message. In a supply chain, layers exacerbate distortion as small mistakes and uncertainty quickly compound.
By analogy, organizations are playing a giant game of telephone with data, and unstructured data makes the game far more difficult. In a context where data janitorial activities consume 50% to 80% of scarce data scientist resources, each round of data telephone costs organizations in accuracy, effort, and time — and few organizations have a surplus of any of these three.
Within organizations, each processing step can be expensive to develop and maintain. But the growth in importance of data sharing between organizations magnifies these concerns. Our recently published report, “Analytics Drives Success with IoT,” associates business value with sharing data between organizations in the context of the internet of things. And, to foreshadow our report to be released in January, we observe similar results in the broader analytics context. But with every transfer of data, more processes need to be developed and maintained.
If this processing were unavoidable, then it would just be a cost of data sharing within or between organizations. A disconcerting point, however, is that there is (or could be) structure in the ancestry of much of the data that is currently unstructured. For example, for every organization that generates a web page based on data in a database, there are likely multiple organizations scraping that data (either sanctioned or unsanctioned) and then processing it to try to regain that structure. In the best case, that’s a lot of thrashing just to end up with data in its original form. In the worst case, it’s a lot of effort to put toward obtaining data with many errors.
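As a purely hypothetical illustration (the markup, names, and values below are invented, not from the article), compare the structured record a publisher already holds with the fragile scraping a consumer must do to reconstruct it from the rendered page:

```python
import re

# The publisher's database already holds the structured record...
source_row = {"product": "Widget A", "price_usd": 19.99}

# ...but downstream consumers often see only the rendered page.
rendered_html = "<li><span class='name'>Widget A</span> costs $19.99</li>"

def scrape_listing(html: str) -> dict:
    """Attempt to regain the structure lost in rendering.

    Fragile by design: a small markup change or a locale-specific price
    format breaks the regexes, illustrating the thrashing (and the
    errors) described above.
    """
    name = re.search(r"class='name'>([^<]+)</span>", html)
    price = re.search(r"\$([0-9]+\.[0-9]{2})", html)
    return {
        "product": name.group(1) if name else None,
        "price_usd": float(price.group(1)) if price else None,
    }

assert scrape_listing(rendered_html) == source_row  # best case: back where we started
```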
By Mohana Ravindranath
January 6, 2017
The 2017 National Defense Authorization Act provides for the reorganization of the Pentagon’s buying office, and it could be a key part of outpacing adversaries’ tech development.
By February 2018, the undersecretary of defense for acquisition, technology and logistics, known as AT&L, will be split into two separate roles: one undersecretary of defense for research and engineering, and another for acquisition and sustainment.
“We’re not doing this for a business management reason,” Bill Greenwalt, a professional staff member to the Senate Armed Services Committee, said during a meeting hosted by the Professional Services Council on Friday. “There is a fear … that some of our potential adversaries are really boosting up their research and development functions [and] copying what we have.”
“We need to figure out a way to bring those technologies and those operational concepts into the Department of Defense,” he added.
AT&L needed to be split into smaller roles because it has “grown too big, tries to do too much and is too focused on compliance at the expense of innovation,” Senate Armed Services Committee Chairman Sen. John McCain, R-Ariz., said in a statement in November.
President Donald Trump and his new Pentagon leadership team come to power at a time of major change in the multibillion-dollar world of military acquisition. A measured approach from the administration, as well as from Congress, could be key to how well the Pentagon navigates this complex, ongoing system-wide reform.
For the Department of Defense (DoD), the bulk of acquisition regulations derive from procurement and acquisition laws enacted by Congress. From the Packard Commission and the Goldwater-Nichols Act in the 1980s, through the acquisition reform initiatives of the 1990s, to the Weapon Systems Acquisition Reform Act of 2009, new institutions, bureaucracies, and regulations were instituted.
In 2016, after the military service chiefs testified that they were not a part of the acquisition system, the Congress passed a National Defense Authorization Act (NDAA) that gave those positions limited but powerful voices in the process. While this was a significant change to the mandates of the Goldwater-Nichols Act, its implementation will take time and will require cultural change in the Pentagon. This has slowly begun to evolve and should continue during the Trump administration.
The Congress has now passed the 2017 NDAA, which includes roughly 250 pages in Title VIII, Acquisition Policies, Management and Related Matters. The effect on the institutions charged with implementing these mandates is as yet unknown but is expected to be significant in terms of both implementation and manpower.
In addition, the 2017 NDAA realigned the Office of Under Secretary of Defense for Acquisition, Technology and Logistics, creating an undersecretary for research and engineering and one for acquisition and sustainment. At the same time, it created the position of chief management officer to oversee DoD business systems that, of course, affect the entire acquisition chain.
While many have complained that the current single position of undersecretary for acquisition, technology and logistics is too broad, it provided management continuity and singular direction from the inception of an idea in basic research, through program development and production, and finally into support of the weapon system. Now, these elements will be split between at least two different undersecretaries, with different authorities, priorities, staffs and concepts. This could lead to bureaucratic conflicts between the two offices and their military department counterparts. The position of chief management officer could complicate things further.
All this occurs as a new administration moves into the Pentagon facing mandated reductions in personnel, including military flag and general officers and members of the Senior Executive Service, as well as a 20–25 percent cut in overall manpower. The nomination of a new deputy secretary, possibly someone from the business community, could be a significant moment for the process moving forward.
Fortunately, the roughly 50-page section 900 of the 2017 NDAA, related to the Office of the Secretary of Defense, gives the new secretary some breathing room by delaying the creation of the two undersecretary positions until February of 2018.
The acquisition system in the DoD is complex and ever-changing. It requires a trained and active workforce, one that has the support of the Congress. Allowing some of the reforms to take place in a measured fashion would probably be a wise choice for the administration and the Congress.
Irv Blickstein is a senior engineer at the nonprofit, nonpartisan RAND Corporation. He has held leadership positions in both the Department of the Navy and the Department of Defense.
This commentary originally appeared on RealClearDefense on February 21, 2017.
Article · February 22, 2017
As a researcher who studies electronic health records (EHRs), I’ve lost count of the times I’ve been asked “Why can’t the systems talk to each other?” or, in more technical terms, “Why don’t we have interoperability?” The substantial increase in electronic health record adoption across the nation has not led to health data that can easily follow a patient across care settings. Still today, essential pieces of information are often missing or cumbersome to access. Patients are frustrated, and clinicians can’t make informed decisions. When our banks talk to each other seamlessly and online ads show us things we’ve already been shopping for, it is hard to understand why hospitals and doctors’ offices still depend on their fax machines.
A big part of the reason is that interoperability of health information is hard. If it were easy, we would have it, or at least have more of it, by now. Though it’s a technological issue, it’s not just a technological issue. As we have seen in other industries, interoperability requires all parties to adopt certain governance and trust principles, and to create business agreements and highly detailed guides for implementing standards. The unique confidentiality issues surrounding health data also require the involvement of lawmakers and regulators. Tackling these issues requires multi-stakeholder coordinated action, and that action will only occur if strong incentives promote it.
Though billions in monetary incentives fueled EHR adoption itself, they only weakly targeted interoperability. I have come to believe that we would be substantially farther along if several key stakeholders had publicly acknowledged this reality and had made a few critical decisions differently. While it’s too late for “do-overs,” understanding initial missteps can guide us to a better path. Here is how those key stakeholders, intentionally or not, have slowed interoperability.
Policymakers
The 2009 Health Information Technology for Economic and Clinical Health (HITECH) legislation contained two basic components: a certification program to make sure that EHRs had certain common capabilities, and the “Meaningful Use” program, divided into three progressively more complex stages, that gave providers incentive payments for using EHRs. The legislation specified “health information exchange” (HIE) as one of the required capabilities of certified EHR systems. However, the Centers for Medicare and Medicaid Services (CMS) and the Office of the National Coordinator for Health IT (ONC) had substantial latitude to decide how to define health information exchange as well as how to include it in the Meaningful Use program and the accompanying EHR certification criteria.
CMS and ONC decided to defer the initial HIE criterion — electronically transmitting a summary-of-care record following a patient transition — to Stage 2 of the Meaningful Use program. While many of the capabilities required for providers to meet this criterion would have been challenging to develop on the Stage 1 Meaningful Use timeline, deferring HIE to Stage 2 allowed EHR systems to be designed and adopted in ways that did not take HIE into account, and there were no market forces to fill the void. When providers and vendors started working toward Stage 2, which included the HIE criterion, they had to change established systems and workflows, creating a heavier lift than if HIE had been included from the start.
While the stimulus program needed to get money out the door quickly, it might have been worth prioritizing HIE over other capabilities in Stage 1 or even delaying Stage 1 Meaningful Use to include HIE from the start. At a minimum, this strategy would have revealed interoperability challenges earlier and given us more time to address them.
A larger problem is that HITECH’s overall design pushed interoperability in a fairly limited way, rather than creating market demand for robust interoperability. If interoperability were a “stay-in-business” issue for either vendors or their customers, we would already have it, but overall, the opposite is true. Vendors can keep clients they might otherwise lose if they make it difficult to move data to another vendor’s system. Providers can keep patients they might otherwise lose if they make it cumbersome and expensive to transfer a medical record to a new provider.
As a result, the weak regulatory incentives pushing interoperability (in the form of a single, fairly limited HIE Meaningful Use criterion), even in combination with additional federal and state policy efforts supporting HIE progress, could not offset market incentives slowing it. Without strong incentives that would have created market demand for robust interoperability from the start, we now must retrofit interoperability, rather than having it be a core attribute of our health IT ecosystem. And, if there had been stronger incentives from the start, we would not now need to address information blocking: the knowing and intentional interference with interoperability by vendors or providers.
Another criticism levied at policymakers is that they should have selected or certified only a small number of EHR systems, because narrowing the field of certified systems would have at least limited the scope of interoperability problems. A few even advocated that Congress mandate a single system such as the VA's VistA. While such a mandate would have helped solve the interoperability issue, it would have violated the traditional U.S. commitment to market-based approaches. Moreover, the United Kingdom failed very visibly with a heavily centralized approach, and in a health care system the size of the United States, an attempt to legislate IT choices in a similar manner could backfire catastrophically.
EHR Vendors
Most observers assign EHR vendors the majority of blame for the lack of interoperability, but I believe this share is overstated. As noted above, by avoiding or simply not prioritizing interoperability, they are acting exactly in line with their incentives and maximizing profit.
Normally, the United States glorifies companies that behave this way. When neither policymakers nor providers were demanding interoperability, vendors risked harming their bottom lines by prioritizing it, and they cannot be blamed for acting in their economic best interest in wholly legal ways.
Nevertheless, senior leaders at EHR vendors should have been more willing to come forward and explain why their economic best interest was at odds with existing regulations. Instead, they often claim to have robust interoperability solutions when they do not, and similarly claim that interoperability is a top priority when it is not. This gap between rhetoric and reality makes it harder for providers and policymakers to demand greater interoperability.
Providers
As noted above, providers may not have a strong business case to prioritize interoperability. However, providers have professional norms and mission statements that should motivate them to pursue interoperability (or at least not actively interfere with it) to benefit their patients. As a result of these conflicting motivations, some provider organizations let competitive pressures drive interoperability decisions (including not demanding robust interoperability from their vendors), while others have chosen to pursue interoperability because it is best for their patients, even if this decision incurs competitive disadvantage. More providers may tip toward the former simply because today’s interoperability solutions are complex and costly. It is hard to justify investing in a complicated, expensive capability that also poses a strategic risk — a double whammy.
The emergence and rapid growth of Epic’s Care Everywhere platform (which connects providers using Epic) suggests that even in highly competitive markets, providers may easily tip the other way when the cost and complexity of interoperability are reduced. Therefore, any efforts that successfully reduce cost and complexity are highly valuable, though not a substitute for stronger incentives for providers (and vendors) to engage in interoperability.
As with vendors, we cannot fault providers for behaving in ways that are aligned with their incentives, but we can argue that their patient care mission requires, at a minimum, more public disclosure about the business rationales behind their interoperability decisions.
The point of the blame game is not to punish the players. It is to understand the dynamics at play and plot a path forward. Of the stakeholders, only policymakers have a clear, strong interest in promoting interoperability. Therefore, it is up to them to ensure that robust, cross-vendor interoperability is a stay-in-business issue for EHR vendors and providers. Once the business case for interoperability unambiguously outweighs the business case against it, both vendors and providers can pursue it without undermining their best interests.
March 16, 2017
@chrisnerney
Attaining full interoperability of electronic healthcare systems is a challenge that transcends technology, argues Dr. Julia Adler-Milstein, associate professor in the University of Michigan's School of Information and School of Public Health.
Writing in NEJM Catalyst, an online publication of the New England Journal of Medicine, Adler-Milstein acknowledges that "interoperability of health information is hard. If it were easy, we would have it, or at least have more of it, by now."
Compounding the inevitable technology issues, she says, is the need to unite stakeholders and create an organizational framework that supports interoperability goals.
“As we have seen in other industries, interoperability requires all parties to adopt certain governance and trust principles, and to create business agreements and highly detailed guides for implementing standards,” Adler-Milstein writes. “The unique confidentiality issues surrounding health data also require the involvement of lawmakers and regulators.”
Though her article is titled “Moving Past the EHR Interoperability Blame Game,” Adler-Milstein cites in some detail how policymakers, EHR vendors, and providers each have impeded progress toward widespread healthcare interoperability.
The root of the problem is that policymakers failed to create regulatory incentives to promote interoperability that were strong enough to offset market incentives, such as vendors wanting to lock in customers by engaging in information blocking. EHR vendors thus lacked any incentives to prioritize interoperability, according to Adler-Milstein.
And while many providers have opted to pursue interoperability because they believe it will help them deliver better care to patients, Adler-Milstein writes that “some provider organizations let competitive pressures drive interoperability decisions (including not demanding robust interoperability from their vendors).”
“It is hard to justify investing in a complicated, expensive capability that also poses a strategic risk — a double whammy,” she says.
Adler-Milstein concludes that the onus is on policymakers to move the interoperability needle.
“Only policymakers have a clear, strong interest in promoting interoperability,” she says. “Therefore, it is up to them to ensure that robust, cross-vendor interoperability is a stay-in-business issue for EHR vendors and providers.”
You can read the entire article here.
Moving Past the EHR Interoperability Blame Game
Analysis of high-stakes transformations reveals a few pragmatic lessons that increase the odds of meeting the organization’s objectives.
Any transformation worth the name starts with ambitious goals. Setting them is hard enough, especially for organizations long used to the risk-averse pattern of underpromising and overdelivering. But the real work starts once the organization sets out to turn the leaders’ targets into initiatives that everyone else designs and implements from the bottom up.
Keeping hundreds or thousands of initiatives on track is a monumental task, one that too few organizations around the world do well. Recent research reconfirms earlier findings that only 30 percent of transformations deliver their intended benefits and meet the targets committed to during the program-planning stage.
These odds are simply unacceptable, especially when the stakes are high. We therefore reviewed 18 transformations at 13 organizations that were in the most critical circumstances. While some were facing significant financial and operational challenges, including rapidly deteriorating performance or liquidity concerns, others were simply seeking a substantial step up in their performance.
Our focus was on the practical lessons these organizations learned in making their ambitions real. Our analysis was enabled by McKinsey’s proprietary program-management platform, Wave, which generates detailed reports tracking the financial and operational impact of individual initiatives.
Wave’s data repository allowed for a comprehensive analysis of the factors contributing to initiative success—ranging from how impact targets were determined, and how quickly initiatives progressed through the various stage-gate reviews, to the structures and timelines of the programs the initiatives supported. We then supplemented our findings with in-depth interviews of executives at representative companies included in the data set.
All the organizations were located in Asia–Pacific: that focus ensured greater consistency in the value-tracking approach and data structure, letting us make more nuanced comparisons. Nevertheless, the organizations varied in industry, size, and program impact. The sectors represented included construction, consumer goods, electric power, mining, natural resources, oil and gas, and retail banking. Annual revenues ranged from $2 billion to $28 billion (all amounts in US dollars, converted as of the study's conclusion in May 2016), and total transformation impact ranged from $450 million to $4 billion—but we found no direct correlation between the size of the company and the impact of its transformation program.
Our detailed analysis of these materials has allowed us to draw three main insights that can serve as potential guiding principles when structuring a large-scale transformation program:
Be relentless. From the beginning, organizations should assume that most initiatives will be worth a lot less than they think. Moreover, most of the companies in our sample fell short of their initial goals and needed an additional round of back-to-the-well idea generation. And they had to be careful about allocating management time, so that smaller initiatives got their due—they accounted for about half of the program's value but could get lost in a focus on only the biggest projects.
Focus your resources. Organizations must resist the temptation to spread their most effective leaders too thin. Three initiatives were the typical burden a leader could shoulder at once. Engaging more of the organization as potential initiative owners allows each initiative to get the support it needs without overburdening a few high performers. Reporting must be prioritized as well. Too many milestones in initiative plans can create unnecessary burdens; most programs try to capture too many metrics—and usually fewer than 30 percent end up actually being used.
Plan and adapt. Most initiatives were at least somewhat delayed in implementation. But organizations could reduce delays with judicious planning of milestones, supplemented by weekly actions that initiative owners would report on between milestones.
Follow the pipeline
The transformations we examined all followed a similar pipeline approach for tracking initiatives.
The pipeline's stage gates begin at level zero, or "L0," with the collection of as many ideas as possible, regardless of feasibility or size (Exhibit 1). Our analysis then begins at L1, once the initiatives have been identified as worth pursuing. During that stage, initiative owners set out to validate and refine their early value assumptions with data from other stakeholders and additional analysis. Once a solid business case has been built, the initiative is approved (usually by the finance function) and passes into L2. The initiative owner then defines a robust set of milestones to execute the initiative and provides a monthly schedule of the value expected to be captured on the bottom line, at L3. Many initiatives sit at L3 during implementation, moving to L4 only once all milestones to realize value are completed. At that point, the finance function assesses the initiative to ensure that it will deliver value—ideally at the target amount set at L2, though that amount is usually adjusted as the initiative progresses from stage to stage. Finally, once the actual value appears in the business's cash flows and seems reasonably certain to remain, the initiative passes to the last stage: L5.
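As a rough sketch of how such a stage-gate pipeline might be represented in a tracking tool (Python; the stage names, the example initiative, and the dollar figures are illustrative assumptions, not taken from Wave or the study data):

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List, Optional, Tuple

class Stage(IntEnum):
    """The stage gates described above, from raw idea (L0) to value
    confirmed in the business's cash flows (L5)."""
    L0_IDEA = 0
    L1_VALIDATION = 1
    L2_APPROVED = 2
    L3_IN_EXECUTION = 3
    L4_MILESTONES_COMPLETE = 4
    L5_VALUE_IN_CASH_FLOWS = 5

@dataclass
class Initiative:
    name: str
    estimated_value: float                              # current estimate, $ millions
    stage: Stage = Stage.L0_IDEA
    history: List[Tuple[Stage, float]] = field(default_factory=list)

    def advance(self, revised_value: Optional[float] = None) -> None:
        """Pass the next gate, optionally revising the value estimate
        (estimates are usually adjusted as an initiative progresses)."""
        if self.stage == Stage.L5_VALUE_IN_CASH_FLOWS:
            raise ValueError("initiative is already at the final stage")
        self.history.append((self.stage, self.estimated_value))
        if revised_value is not None:
            self.estimated_value = revised_value
        self.stage = Stage(self.stage + 1)

# Example: an idea whose estimate shrinks as it moves through the gates.
initiative = Initiative("renegotiate freight contracts", estimated_value=10.0)
for revised in (10.0, 5.5, 4.8, 3.5, 3.2):              # illustrative revisions at each gate
    initiative.advance(revised)
print(initiative.stage.name, initiative.estimated_value)  # L5_VALUE_IN_CASH_FLOWS 3.2
```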

Throughout the process, a transformation office (TO)—typically headed by a chief transformation officer—sets an aggressive pace of weekly reviews to monitor initiatives’ progress against their milestones, record the value they capture, and provide support when initiatives run into trouble. The TO’s independence and role in capturing data allows it to drive action, especially through rigorous problem-solving sessions and questioning of self-imposed limits.
We reviewed each organization’s experience across stages L1 to L5 to find out where problems were most likely to arise and how organizations worked around them.
Be relentless
The earliest challenge for any organization transforming itself is to find sources of value. That means fighting attrition, conducting back-to-the-well exercises as needed, and allocating leadership attention with care.
Fight attrition
The top concern for business and program leaders is for the transformation to meet its impact target. Yet any executive will recognize that most initial impact estimates are optimistic. Pressed to meet program goals that are usually aggressive both in timing and in total value, initiative owners naturally tend to overestimate their initiatives’ worth. But just how optimistic are they? And how much will leaders need to compensate for leakage of impact over the course of the transformation?
Our examination of the data confirms that program managers should expect substantial impact leakage. Just between stages L1 and L2, the initial impact estimate falls by an average of about 45 percent (Exhibit 2). From L2 to L3, the now-smaller impact estimate falls another 13 percent, with further drops of 28 percent between L3 and L4 and 9 percent between L4 and L5. Cumulatively, the result is that L1 estimates usually fall by about 70 percent by the time they reach L5. Organizations will therefore need a pipeline with a total value that's more than three times the initial target.
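The cumulative figure follows from compounding the per-stage drops; a quick back-of-the-envelope check (Python, using the averages quoted above):

```python
# Average fall in estimated impact at each stage transition (from the text above).
stage_drops = {
    "L1 -> L2": 0.45,
    "L2 -> L3": 0.13,
    "L3 -> L4": 0.28,
    "L4 -> L5": 0.09,
}

remaining = 1.0
for drop in stage_drops.values():
    remaining *= 1 - drop

print(f"share of the L1 estimate surviving to L5: {remaining:.0%}")   # ~31%
print(f"cumulative fall: {1 - remaining:.0%}")                        # ~69%, i.e. about 70 percent
print(f"pipeline needed relative to target: {1 / remaining:.1f}x")    # ~3.2x, more than three times
```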

Find a productive spring—and return to the well later
Part of the solution is simply to generate more ideas early in the process. At this stage, time may not allow for canvassing all employees, but program managers can nevertheless hold broad, disciplined ideation workshops with frontline team leaders and representatives. By setting out rules that encourage full participation, openness to all ideas (even the most “unrealistic”), and creativity, organizations can quickly generate valuable insights from the people whose day-to-day work gives them a unique perspective on real opportunities to improve the business.
In practice, though, even these preparations may not be enough, as demonstrated by a consumer-products manufacturer and an energy company. In each case, with less than three months remaining before publicly announced deadlines, teams were running well short of their targets—by tens of millions of dollars at the energy company and hundreds of millions at the consumer player. But both companies found that going back to the well and asking their people for more ideas allowed them to make up the difference. Each made its target, which provided an essential morale boost that made further improvement possible after the target was met.
Finding these later opportunities requires more effort than the first round of idea generation and typically produces somewhat less in total returns. Our data analysis found that by the second month, about two-thirds of a program's value had already been discovered, leaving less to find in later efforts. Still, additional pockets of potential almost always remain. One option that several organizations used took advantage of data from the program-management tool to review how much actual value each initiative was generating. Conducting a root-cause analysis on initiatives that were cancelled, delayed, or underdelivered helped uncover important lessons that led to new value.
Impact: Big slope versus long tail
By opening up idea submission to a much larger cross section of the organization, back-to-the-well exercises illustrate a related lesson as well: the long tail of smaller initiatives matters. At one mining company, for example, a mechanic came up with an idea that reduced maintenance time for each truck by more than 30 minutes. Once applied to the regular monthly service schedule across the entire fleet, this idea added several thousand truck working hours per year and was worth millions of dollars.
Some of the organizations we reviewed expanded the idea-capture process to vendors and business partners as well. And these small initiatives add up to big impact. We divided initiatives into three groups. The first, “boulders,” consisted of initiatives that each represented at least 5.0 percent of the total program’s value. “Pebbles” were those representing between 0.5 and 5.0 percent of total value, and everything smaller than 0.5 percent was “sand.”
Our data indicate that on average, 50 percent of the total program value typically comes from sand (Exhibit 3). That means focusing only on the boulders, the biggest (and often highest-profile) initiatives, is risky. Moreover, sand initiatives are often easier and quicker to execute: their small size involves fewer layers of approval and less coordination. And they are often led by frontline analysts and managers, giving those employees more of a stake in the transformation's success.
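A trivial sketch of that three-way classification, using the thresholds defined above (the program size and initiative values are hypothetical):

```python
def classify(initiative_value: float, total_program_value: float) -> str:
    """Bucket an initiative by its share of total program value."""
    share = initiative_value / total_program_value
    if share >= 0.05:       # at least 5.0 percent of program value
        return "boulder"
    if share >= 0.005:      # between 0.5 and 5.0 percent
        return "pebble"
    return "sand"           # everything smaller than 0.5 percent

# Hypothetical $500 million program: a few boulders, many grains of sand.
total = 500.0   # $ millions
for value in (60.0, 12.0, 1.5, 0.4):
    print(f"${value}M -> {classify(value, total)}")
```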

Focus your resources
With time of the essence, executives overseeing transformations must be especially careful in allocating time and effort at every level of the organization. The basic reality is that every moment an initiative owner spends on work that isn’t productive is a moment taken away from helping generate more impact.
Make it easier on initiative owners
How much is reasonable to ask of initiative owners? For comparison among transformations, we defined “initiative owner” as “the most senior person who actually does the day-to-day work.” On average, we found that initiative owners manage three initiatives each. As one leader working with a consumer-products company explained, “It’s a rare exception for an owner to successfully manage more than three initiatives. They have to be really good at delegating the underlying milestones to others and following up on their progress.”
Our evidence shows that about 80 percent of total impact is managed by 20 percent of initiative owners (Exhibit 4). That’s often because ownership of big-value initiatives (such as major contract renegotiations) is concentrated in the hands of a few very senior or high-potential individuals.

But it comes at a cost: the potential for burnout. One of our interviewees said that his organization lost several leaders because they simply couldn’t keep up with the demands of overseeing too many initiatives at once.
By contrast, small-value initiatives are often more limited in scope and are owned by frontline analysts and managers who don’t have the time or capacity for a larger set of initiatives. The involvement of a larger number of people not only relieves the owners of higher-profile initiatives but also helps build momentum and buy-in for the program as a whole. These considerations typically outweigh the disadvantage of the added complexity of having to manage a high number of owners.
Keep reporting manageable
But complexity can quickly rear its head in the reporting of initiatives' status. The ideal initiative execution plan contains all the milestones necessary to carry out the initiative while avoiding so much detail that the milestones distract initiative owners while adding negligible value.
Either extreme creates problems. For the consumer-goods company, “high plan granularity came with a lot of pushback from initiative owners, who felt micromanaged and worried about the time required to update or create milestones,” one executive told us. On the other hand, milestones spaced too far apart in time reduced the program leaders’ ability to identify delayed or at-risk deliverables until it was too late for effective course corrections. One leader noted, for example, that several of his company’s execution plans ended up in avoidable delays when the milestones that initiative owners scheduled failed to align with important stakeholder approvals, such as for compliance reviews or proxy votes.
Our data show that an average of four milestones was typically the right balance—enough to provide early warning about potential problems, but not so many as to get in the way of implementation.
Make metrics meaningful
The decisions on which metrics to track are typically made during the planning phase of the program, as leaders decide what is in and out of scope, what types of spending should be targeted for savings, and so on. Most commonly, financial metrics are used for transformation programs because of their strategic importance, the availability of the required data, and the ease of tracking them compared with nonfinancial metrics.
But even a relatively straightforward set of metrics can quickly become complicated if additional layers are added. A finance department may ask for the metrics to mirror individual accounting line items, slicing and dicing the data into dozens of submetrics. Adding further permutations, such as distinguishing between recurring and one-time impact or between hard savings and cost avoidance, compounds the complexity for initiative owners. And that’s before measuring and tracking nonfinancial metrics, such as head-count redeployment for different personnel types.
Evidence from our data shows that only 29 percent of the metrics organizations claim to follow are actively used during the length of the project (Exhibit 5). The rest become statistical noise and a source of confusion for initiative owners trying to decide where to allocate the savings from their initiatives.

Accordingly, organizations must strike a balance between ensuring that the finance function can report at an acceptable level of detail and enabling initiative owners to allocate impact easily. A rule of thumb that several organizations used successfully was to eliminate any metric that was likely to carry less than 0.01 percent of total program impact and fold it into other metrics instead.
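One way such a rule of thumb could be applied in practice, sketched under assumed metric names and values (none of which come from the study):

```python
from typing import Dict

def prune_metrics(metric_impacts: Dict[str, float], min_share: float = 0.0001) -> Dict[str, float]:
    """Fold any metric expected to carry less than min_share (0.01 percent)
    of total program impact into a single catch-all bucket."""
    total = sum(metric_impacts.values())
    kept: Dict[str, float] = {}
    folded = 0.0
    for name, impact in metric_impacts.items():
        if impact / total >= min_share:
            kept[name] = impact
        else:
            folded += impact
    if folded:
        kept["other"] = kept.get("other", 0.0) + folded
    return kept

# Hypothetical metric plan ($ thousands): the two tiny line items get folded together.
plan = {"freight spend": 4000.0, "energy spend": 2500.0, "stationery": 0.3, "couriers": 0.2}
print(prune_metrics(plan))   # {'freight spend': 4000.0, 'energy spend': 2500.0, 'other': 0.5}
```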
Plan and adapt
Once the program is under way, the organization will need to adapt quickly and nimbly to inevitable unforeseen obstacles. Careful planning and well-structured review cycles helped the executives we interviewed intervene where needed to keep initiatives—and whole programs—on track.
Plan for delays
Much as the initial value estimate of an initiative tends to be optimistic, so too is the promised timing. Our data show that on average, approximately 31 percent of initiatives will have their execution end date (the date at which stage L3 ends) changed at least once throughout their life cycle. About 28 percent will see it happen twice, and 19 percent three times.
The impact of date changes can be mitigated if they are made early in an initiative’s life cycle, with sound reasoning and the approval of the TO. However, our data show that despite the high frequency of due-date changes, 56 percent of initiatives still miss their planned L3 date (the date at which the plan is approved) by more than a week, and about half miss their L4 date (the date at which execution is complete) by more than a week. On average, initiatives start L3 two-and-a-half weeks later than planned, and they are fully executed approximately four weeks after the set deadline (Exhibit 6).

What can organizations do? The amount of time that initiatives spend in the implementation stage will inevitably depend on a range of factors, including the overall agility of the company, the urgency of the transformation program, and the level of approval required to move an initiative from one stage gate to the next. But ultimately, helping owners meet their deadlines is the role of the TO, whose discipline is essential in ensuring performance. A chief transformation officer who comes from outside the organization can often be in a better position to break through cultural norms and other constraints that can impede an initiative’s progress.
Commit to weekly actions
Even when delays are unavoidable, initiative owners can reduce the impact by ensuring that each initiative moves forward every week, regardless of whether there’s a milestone or not. By asking for brief updates on these actions during the regular cadence of meetings and offering support, leaders can encourage owners to report on potential issues early so that they can be solved with minimal effort.
As a rule of thumb, leaders should expect 80 percent of the initiatives across a program to be updated with specific actions every week. While that may seem high, we have found that with five minutes of planning, almost every initiative can be improved or accelerated each and every week.
For high-stakes transformations, this analysis underscores the importance of balancing high aspirations against a pragmatic understanding of what individuals and organizations can achieve. By keeping a few basic constraints in mind, transformation leaders can mitigate some of the inevitable problems that arise when people are trying to achieve dramatic performance improvement in a very short time. By minimizing avoidable waste in the transformation process itself, the organization is much more likely to meet (or even exceed) its goals—and build a foundation for it to keep improving once the program is complete.
By Mohana Ravindranath
February 15, 2017
A small piece of a years-long effort to revamp military health records is finally live, after a new IT system was deployed last week at an Air Force base in Spokane, Washington.
The electronic health record platform, Military Health System GENESIS, or MHS GENESIS, is now in use at Fairchild Air Force Base. The Defense Department plans to observe how it functions at that site before installing it at other sites in the Pacific Northwest—possibly as soon as June.
The new system is intended to let patient records flow seamlessly between health care providers covering military beneficiaries and to allow patients to view their own records and test results online. Interoperability with both the Veterans Affairs Department and private health care providers is also a key goal.
For now, providers can use MHS GENESIS to track down records for patients at other sites, even if those sites haven’t yet upgraded, by using a “legacy viewer” feature, officials told reporters Wednesday.
At Fairchild, MHS GENESIS replaces the existing patient care online portals RelayHealth and MiCare.
Eventually, MHS GENESIS will house the records of more than 9.4 million DOD beneficiaries, and the Defense Healthcare Management Systems office is aiming for total deployment by 2022, Program Executive Officer Stacy Cummings said.
The initial deployment is part of a larger contract under the DOD Healthcare Management System Modernization program. In 2015, the Pentagon awarded the $4.3 billion project, intended to revamp medical records across the department, to Leidos, Cerner and Accenture.
In its first week, MHS GENESIS encountered a few minor challenges, including the need to retrain employees to use the new interface, Col. Margaret Carey, commander at the 92nd Medical Group at Fairchild, told reporters. Though user training began in September, some users had since forgotten how to navigate to certain features, such as the scheduler. In future sites, the team may move the training closer to deployment.
Ideally, the online system would let disparate health care providers work together on one patient, Air Force Surgeon General Lt. Gen. Mark Ediger told reporters. Providers could aggregate input about an individual from nutritionists, disease management specialists and others to provide holistic treatment.
The Pentagon started its EHR effort by deploying the system in the Pacific Northwest because patients from across the services are represented, Cummings said.
Julia Adler-Milstein, PhD
Associate Professor, School of Information, School of Public Health, University of Michigan
Article link: http://catalyst.nejm.org/ehr-interoperability-blame-game/