

Two decades ago, the costs began rising well beyond those of other nations, and in recent years they have shot up again. What can explain it?
By Austin Frakt
There was a time when America approximated other wealthy countries in drug spending. But in the late 1990s, U.S. spending took off. It tripled between 1997 and 2007, according to a study in Health Affairs.
Then a slowdown lasted until about 2013, before spending shot up again. What explains these trends?

By 2015, American annual spending on prescription drugs reached about $1,000 per person and 16.7 percent of overall personal health care spending. The Commonwealth Fund compared that level with that of nine other wealthy nations: Australia, Canada, France, Germany, the Netherlands, Norway, Sweden, Switzerland and Britain.
Among those, Switzerland, second to the United States, was only at $783. Sweden was lowest, at $351. (It should be noted that relative to total health spending, American spending on drugs is consistent with that of other countries, reflecting the fact that we spend a lot more on other care, too.)
Several factors could be at play in America’s spending surge. One is the total amount of prescription drugs used. But Americans do not take a lot more drugs than patients in other countries, as studies document.
In fact, when it comes to drugs primary care doctors typically prescribe — including medications for hypertension, high cholesterol, depression, gastrointestinal conditions and pain — a recent study in the journal Health Policy found that Americans use prescription drugs for 12 percent fewer days per year than their counterparts in other wealthy countries.
Another potential explanation is that Americans take more expensive brand-name drugs than cheaper generics relative to their overseas counterparts. This doesn’t hold up either. We use a greater proportion of generic drugs here than most other countries — 84 percent of prescriptions are generic.
Though Americans take a lower proportion of brand-name drugs, the prices of those drugs are a lot higher than in other countries. For many drugs, U.S. prices are twice those found in Canada, for example.
Prices are a lot higher for brand-name drugs in the United States because we lack the widespread policies to limit drug prices that many other countries have.
“Other countries decline to pay for a drug when the price is too high,” said Rachel Sachs, who studies drug pricing and regulation as an associate professor of law at Washington University in St. Louis. “The United States has been unwilling to do this.”
For example, except in rare cases, Britain will pay for new drugs only when their effectiveness is high relative to their prices. German regulators may decline to reimburse a new drug at rates higher than those paid for older therapies, if they find that it offers no additional benefit. Some other nations base their prices on those charged in Britain, Germany or other countries, Ms. Sachs added.
That, by and large, explains why we spend so much more on drugs in the United States than elsewhere. But what drove the change in the 1990s? One part of the explanation is that a record number of new drugs emerged in that decade.
In particular, sales of costly new hypertension and cancer drugs took off in the 1990s. The number of drugs with sales that topped $1 billion increased to 52 in 2006 from six in 1997. The combination of few price controls and rapid growth of brand-name drugs increased American per capita pharmaceutical spending.
“The scientific explosion of the 1970s and 1980s that allowed us to isolate the genetic basis of certain diseases opened a lot of therapeutic areas for new drugs,” said Aaron Kesselheim, an associate professor of medicine at Harvard Medical School.
He pointed to other factors promoting the growth of drug spending in the 1990s, including increased advertising to physicians and consumers. Regulations on drug ads on TV were relaxed, which led to more advertising. More rapid F.D.A. approvals, fueled by new fees collected from pharmaceutical manufacturers that began in 1992, also helped push new drugs to market.
In addition, in the 1990s and through the mid-2000s, coverage for drugs (as well as for other health care) expanded through public programs. Expansions of Medicaid and the Children’s Health Insurance Program also coincided with increased drug spending. And Medicare adopted a universal prescription drug benefit in 2006. Studies have found that when the potential market for drugs grows, more drugs enter it.

In 2007, U.S. drug spending growth was the slowest since 1974. The slowdown in the mid-2000s can be explained by fewer F.D.A. approvals of blockbuster drugs. Annual F.D.A. approvals of new drugs fell from about 35 in the late 1990s and early 2000s to about 20 per year in 2005-07.
In addition, the patents of many top-selling drugs (like Lipitor) expired, and as American prescription drug use tipped back toward generics, per capita spending leveled off.
The spike starting in 2014 mirrors that of the 1990s. The arrival of expensive specialty drugs for hepatitis C, cystic fibrosis and other conditions fueled spending growth. Many of the new drugs are based on relatively recent advances in science, like the completion of the human genome project.
“Many of the new agents are biologics,” said Peter Bach, director of the Center for Health Policy and Outcomes at Memorial Sloan Kettering Cancer Center. “These drugs have no meaningful competition, and therefore command very high prices.”
A U.S. Department of Health and Human Services issue brief estimated that 30 percent of the rise in drug spending between 2000 and 2014 could be attributed to price increases or greater use of higher-priced drugs. Coverage expansions of the Affordable Care Act also contributed to increased drug spending. In addition, “there has been a lowering of approval standards,” Dr. Bach said. “So more of these new, expensive drugs are making it to market faster.”
“As in the earlier run-up in drug spending, we’re largely uncritical of the price-value trade-off for drugs in the U.S.,” said Michelle Mello, a health law scholar at Stanford. “Though we pay high prices for some drugs of high value, we also pay high prices for drugs of little value. The U.S. stands virtually alone in this.”
Outlook for the future
If the principal driver of higher American drug spending is higher pricing on new, blockbuster drugs, what does that bode for the future? “I suspect things will get worse before they get better,” Ms. Sachs said. The push for precision medicine — drugs made for smaller populations, including matching to specific genetic characteristics — may make drugs more effective, therefore harder to live without. That’s a recipe for higher prices.
Democratic politicians have tended to be the ones advocating governmental policies to limit drug prices. But recently the Trump administration announced a Medicare drug pricing plan that seems to reflect growing comfort with how drug prices are established overseas, and there’s new optimism the two sides could work together after the results of the midterms. Although the effectiveness of the plan remains unclear, it is clearly a response to public concern about drug prices and spending.
CVS also recently announced it would devise employer drug plans that don’t include drugs with prices out of line with their effectiveness — something more common in other countries but unheard-of in the United States. Even if these efforts don’t take off rapidly, they are early signs that attitudes might be changing.
Austin Frakt is director of the Partnered Evidence-Based Policy Resource Center at the V.A. Boston Healthcare System; associate professor with Boston University’s School of Public Health; and adjunct associate professor with the Harvard T.H. Chan School of Public Health. He blogs at The Incidental Economist. @afrakt
Things don’t have to be this way. A new survey by McKinsey and Henley Business School highlights the need for enterprise architects to facilitate digital transformations by managing technological complexity and setting a course for the development of their companies’ IT landscape. These responsibilities fall within the typical enterprise-architecture (EA) team’s remit, which is to manage the way that all of the company’s IT systems work together to enable business processes. But not all EA teams carry out their responsibilities in the same manner. Survey respondents who described their companies as “digital leaders” indicated that their EA teams adhere to several best practices (see sidebar, “About the survey”). These teams engage senior executives and boards and spend extra time on long-term planning. They track their accomplishments in terms of how many business capabilities are deployed, while implementing more services. And they attract talent primarily by offering people appealing assignments, ample training opportunities, and well-structured career paths. Below, we take a closer look at these best practices and their benefits.
A number of EA teams we know have helped accelerate their companies’ digital transformations by participating in discussions of business strategy, which deal increasingly with technology. When we asked survey respondents about their involvement with various stakeholder groups, 60 percent of those at digital leaders named C-suite executives and strategy departments as the stakeholders they interact with most. By comparison, just 24 percent of respondents from other companies said they interact most with C-suite executives and strategy departments.
Survey respondents who say their companies are not digital leaders indicated that it’s common for their executive teams and boards to discuss enterprise architecture only when significant issues arise, such as spending decisions, while CIOs alone usually oversee the enterprise architecture.
While few if any EA groups would claim not to be focused on the business, effective teams truly invest their time in understanding business needs and convince senior leaders to invest time in enterprise architecture. Our experience suggests that digital transformations are more likely to succeed when board members understand the importance of technology for their business model and commit their time to making decisions that seem technical but ultimately influence the success or failure of the company’s business aims.
The survey results also indicate that EA teams at digital leaders maintain a clearer orientation toward the future than teams at other companies. One hundred percent of respondents from digital leaders said their architecture teams develop and update models of what the business’s IT architecture should look like in the future; just 58 percent of respondents from other companies said they adhere to this best practice.
Another key difference emerged when we asked respondents how much time their companies devote to strategic planning. Respondents who said their companies’ EA teams devote a higher-than-average proportion of their capacity to strategic planning were also more likely to say they create added value for their organizations. (On average, respondents said strategic planning takes up about one-fifth of the EA team’s working capacity.) Teams that spend more capacity than average on strategic planning were more likely to report delivering sustainable business solutions, making greater contributions to the benefits of projects, and gaining wider recognition within the enterprise (Exhibit 1).

Given the versatility of enterprise architects, leaders may be tempted to assign them to help resolve urgent problems of various kinds. However, this can cause the architecture team to spend most of its time solving problems and little or no time on advance planning. As a result, the drive to quickly fulfill demand for particular applications takes precedence over the thoughtful design process that is required to maintain a cost-effective, flexible, and resilient IT environment.
At a high level, digital transformation involves reshaping business models with advanced technology solutions. This puts a premium on collaboration between business functions and IT. In our experience, a lack of coordination between business and IT hinders large transformations. We have seen that such disconnects sometimes originate in the posture of IT functions: instead of concentrating on the enablement of business priorities, they focus excessively on the delivery of technology solutions as an end in itself.
According to our survey, EA teams at digital leaders appear to avoid this trap. Respondents from digital leaders were more likely to say that EA teams contribute “high” or “very high” benefits to business and IT (Exhibit 2).

We’ve seen that an EA team can better align the IT function’s priorities with the business’s priorities by tracking its accomplishments with respect to the business capabilities that it delivers, rather than the sheer number of technology applications that it implements. Capabilities are self-contained business activities, usually belonging to an end-to-end business process, that result in discrete outcomes: for example, predicting a customer’s next purchase so that a website or a call-center representative can make suggestions.
This use of capabilities stood out in the survey. Respondents from digital leaders were more likely to say that their EA teams use capabilities as their primary grouping for the delivery of milestones toward their target architecture (Exhibit 3). Further grouping capabilities into business domains (which generally correspond to business functionalities such as finance or customer management) can have the additional benefit of allowing an EA team to shape the IT landscape according to the business strategy.

The survey results show that digital leaders are also distinguished by how they structure their IT landscape. Digital leaders have implemented three times as many services as other companies. When it comes to integrating applications, a smaller proportion of their integrations consist of point-to-point connections between two applications (56 percent versus 76 percent at other companies), which lessens their “technical debt.” Respondents from digital leaders were twice as likely as respondents from other companies to say that their companies are piloting architectures based on microservices, which are independent components that developers assemble into software applications.
Because EA departments play an important role in digital transformations, we’ve seen that IT leaders do well to staff them with motivated, highly skilled professionals. Yet our experience also suggests that enterprise architecture’s long-held reputation as a mundane field with limited room for advancement can create challenges when it comes to attracting top talent.
The good news is that prospective hires appear to be drawn toward exciting work that offers opportunities to learn and grow. Our survey results indicate that enterprise architects generally seek interesting challenges, recognition from their peers, learning opportunities, and structured career paths. Respondents from digital leaders were more likely to cite peer recognition, education, and well-defined career paths as features that appeal to their employees (Exhibit 4). They were also more likely to say that they offer enterprise architects the chance to pursue career paths in departments other than enterprise architecture.

For EA teams, supporting successful digital transformations involves more than implementing well-chosen technology solutions. It requires an operating model that aligns governance, processes, and talent models with the business’s needs and promotes effective collaboration between business and IT. The survey findings, along with our experience in enterprise architecture, suggest four moves that can help EA teams advance their companies’ digital transformations.
With these tactics, EA teams can build stronger working relationships with senior executives and managers—and thereby position themselves as strategic partners in their companies’ digital transformations.
On a sunny afternoon in May, 2015, I joined a dozen other surgeons at a downtown Boston office building to begin sixteen hours of mandatory computer training. We sat in three rows, each of us parked behind a desktop computer. In one month, our daily routines would come to depend upon mastery of Epic, the new medical software system on the screens in front of us. The upgrade from our home-built software would cost the hospital system where we worked, Partners HealthCare, a staggering $1.6 billion, but it aimed to keep us technologically up to date.
More than ninety per cent of American hospitals have been computerized during the past decade, and more than half of Americans have their health information in the Epic system. Seventy thousand employees of Partners HealthCare—spread across twelve hospitals and hundreds of clinics in New England—were going to have to adopt the new software. I was in the first wave of implementation, along with eighteen thousand other doctors, nurses, pharmacists, lab techs, administrators, and the like.
The surgeons at the training session ranged in age from thirty to seventy, I estimated—about sixty per cent male, and one hundred per cent irritated at having to be there instead of seeing patients. Our trainer looked younger than any of us, maybe a few years out of college, with an early-Justin Bieber wave cut, a blue button-down shirt, and chinos. Gazing out at his sullen audience, he seemed unperturbed. I learned during the next few sessions that each instructor had developed his or her own way of dealing with the hostile rabble. One was encouraging and parental, another unsmiling and efficient. Justin Bieber took the driver’s-ed approach: You don’t want to be here; I don’t want to be here; let’s just make the best of it.
I did fine with the initial exercises, like looking up patients’ names and emergency contacts. When it came to viewing test results, though, things got complicated. There was a column of thirteen tabs on the left side of my screen, crowded with nearly identical terms: “chart review,” “results review,” “review flowsheet.” We hadn’t even started learning how to enter information, and the fields revealed by each tab came with their own tools and nuances.
But I wasn’t worried. I’d spent my life absorbing changes in computer technology, and I knew that if I pushed through the learning curve I’d eventually be doing some pretty cool things. In 1978, when I was an eighth grader in Ohio, I built my own one-kilobyte computer from a mail-order kit, learned to program in BASIC, and was soon playing the arcade game Pong on our black-and-white television set. The next year, I got a TRS-80 from RadioShack and became the first kid in my school to turn in a computer-printed essay (and, shortly thereafter, the first to ask for an extension “because the computer ate my homework”). As my Epic training began, I expected my patience to be rewarded in the same way.
My hospital had, over the years, computerized many records and processes, but the new system would give us one platform for doing almost everything health professionals needed—recording and communicating our medical observations, sending prescriptions to a patient’s pharmacy, ordering tests and scans, viewing results, scheduling surgery, sending insurance bills. With Epic, paper lab-order slips, vital-signs charts, and hospital-ward records would disappear. We’d be greener, faster, better.
But three years later I’ve come to feel that a system that promised to increase my mastery over my work has, instead, increased my work’s mastery over me. I’m not the only one. A 2016 study found that physicians spent about two hours doing computer work for every hour spent face to face with a patient—whatever the brand of medical software. In the examination room, physicians devoted half of their patient time facing the screen to do electronic tasks. And these tasks were spilling over after hours. The University of Wisconsin found that the average workday for its family physicians had grown to eleven and a half hours. The result has been epidemic levels of burnout among clinicians. Forty per cent screen positive for depression, and seven per cent report suicidal thinking—almost double the rate of the general working population.
Something’s gone terribly wrong. Doctors are among the most technology-avid people in society; computerization has simplified tasks in many industries. Yet somehow we’ve reached a point where people in the medical profession actively, viscerally, volubly hate their computers.
On May 30, 2015, the Phase One Go-Live began. My hospital and clinics reduced the number of admissions and appointment slots for two weeks while the staff navigated the new system. For another two weeks, my department doubled the time allocated for appointments and procedures in order to accommodate our learning curve. This, I discovered, was the real reason the upgrade cost $1.6 billion. The software costs were under a hundred million dollars. The bulk of the expenses came from lost patient revenues and all the tech-support personnel and other people needed during the implementation phase.
In the first five weeks, the I.T. folks logged twenty-seven thousand help-desk tickets—three for every two users. Most were basic how-to questions; a few involved major technical glitches. Printing problems abounded. Many patient medications and instructions hadn’t transferred accurately from our old system. My hospital had to hire hundreds of moonlighting residents and pharmacists to double-check the medication list for every patient while technicians worked to fix the data-transfer problem.
Many of the angriest complaints, however, were due to problems rooted in what Sumit Rana, a senior vice-president at Epic, called “the Revenge of the Ancillaries.” In building a given function—say, an order form for a brain MRI—the design choices were more political than technical: administrative staff and doctors had different views about what should be included. The doctors were used to having all the votes. But Epic had arranged meetings to try to adjudicate these differences. Now the staff had a say (and sometimes the doctors didn’t even show), and they added questions that made their jobs easier but other jobs more time-consuming. Questions that doctors had routinely skipped now stopped them short, with “field required” alerts. A simple request might now involve filling out a detailed form that took away precious minutes of time with patients.
Rana said that these growing pains were predictable. The Epic people always build in a period for “optimization”—reconfiguring various functions according to feedback from users. “The first week,” he told me, “people say, ‘How am I going to get through this?’ At a year, they say, ‘I wish you could do this and that.’ ”
I saw what he meant. After six months, I’d become fairly proficient with the new software. I’d bring my laptop with me to each appointment, open but at my side. “How can I help?” I’d ask a patient. My laptop was available for checking information and tapping in occasional notes; after the consultation, I completed my office report. Some things were slower than they were with our old system, and some things had improved. From my computer, I could now remotely check the vital signs of my patients recovering from surgery in the hospital. With two clicks, I could look up patient results from outside institutions that use Epic, as many now do. For the most part, my clinical routine did not change very much.
As a surgeon, though, I spend most of my clinical time in the operating room. I wondered how my more office-bound colleagues were faring. I sought out Susan Sadoughi, whom an internist friend described to me as one of the busiest and most efficient doctors in his group. Sadoughi is a fifty-year-old primary-care physician, originally from Iran, who has worked at our hospital for twenty-four years. She’s married to a retired Boston police lieutenant and has three kids. Making time in her work and family schedule to talk to me was revealingly difficult. The only window we found was in the early morning, when we talked by phone during her commute.
Sadoughi told me that she has four patient slots per hour. If she’s seeing a new patient, or doing an annual physical, she’ll use two slots. Early on, she recognized that technology could contribute to streamlining care. She joined a committee overseeing updates of a home-built electronic-medical-record system we used to rely on, helping to customize it for the needs of her fellow primary-care physicians. When she got word of the new system, she was optimistic. Not any longer. She feels that it has made things worse for her and her patients. Before, Sadoughi almost never had to bring tasks home to finish. Now she routinely spends an hour or more on the computer after her children have gone to bed.
She gave me an example. Each patient has a “problem list” with his or her active medical issues, such as difficult-to-control diabetes, early signs of dementia, a chronic heart-valve problem. The list is intended to tell clinicians at a glance what they have to consider when seeing a patient. Sadoughi used to keep the list carefully updated—deleting problems that were no longer relevant, adding details about ones that were. But now everyone across the organization can modify the list, and, she said, “it has become utterly useless.” Three people will list the same diagnosis three different ways. Or an orthopedist will list the same generic symptom for every patient (“pain in leg”), which is sufficient for billing purposes but not useful to colleagues who need to know the specific diagnosis (e.g., “osteoarthritis in the right knee”). Or someone will add “anemia” to the problem list but not have the expertise to record the relevant details; Sadoughi needs to know that it’s “anemia due to iron deficiency, last colonoscopy 2017.” The problem lists have become a hoarder’s stash.
“They’re long, they’re deficient, they’re redundant,” she said. “Now I come to look at a patient, I pull up the problem list, and it means nothing. I have to go read through their past notes, especially if I’m doing urgent care,” where she’s usually meeting someone for the first time. And piecing together what’s important about the patient’s history is at times actually harder than when she had to leaf through a sheaf of paper records. Doctors’ handwritten notes were brief and to the point. With computers, however, the shortcut is to paste in whole blocks of information—an entire two-page imaging report, say—rather than selecting the relevant details. The next doctor must hunt through several pages to find what really matters. Multiply that by twenty-some patients a day, and you can see Sadoughi’s problem.
The software “has created this massive monster of incomprehensibility,” she said, her voice rising. Before she even sets eyes upon a patient, she is already squeezed for time. And at each step along the way the complexity mounts.
“Ordering a mammogram used to be one click,” she said. “Now I spend three extra clicks to put in a diagnosis. When I do a Pap smear, I have eleven clicks. It’s ‘Oh, who did it?’ Why not, by default, think that I did it?” She was almost shouting now. “I’m the one putting the order in. Why is it asking me what date, if the patient is in the office today? When do you think this actually happened? It is incredible!” The Revenge of the Ancillaries, I thought.
She continued rattling off examples like these. “Most days, I will have done only around thirty to sixty per cent of my notes by the end of the day,” she said. The rest came after hours. Spending the extra time didn’t anger her. The pointlessness of it did.
Difficulties with computers in the workplace are not unique to medicine. Matt Spencer is a British anthropologist who studies scientists instead of civilizations. After spending eighteen months embedded with a group of researchers studying fluid dynamics at Imperial College London, he made a set of observations about the painful evolution of humans’ relationship with software in a 2015 paper entitled “Brittleness and Bureaucracy.”
Years before, a graduate student had written a program, called Fluidity, that allowed the research group to run computer simulations of small-scale fluid dynamics—specifically, ones related to the challenge of safely transporting radioactive materials for nuclear reactors. The program was elegant and powerful, and other researchers were soon applying it to a wide range of other problems. They regularly added new features to it, and, over time, the program expanded to more than a million lines of code, in multiple computer languages. Every small change produced unforeseen bugs. As the software grew more complex, the code became more brittle—more apt to malfunction or to crash.
The I.B.M. software engineer Frederick Brooks, in his classic 1975 book, “The Mythical Man-Month,” called this final state the Tar Pit. There is, he said, a predictable progression from a cool program (built, say, by a few nerds for a few of their nerd friends) to a bigger, less cool program product (to deliver the same function to more people, with different computer systems and different levels of ability) to an even bigger, very uncool program system (for even more people, with many different needs in many kinds of work).
Spencer plotted the human reaction that accompanied this progression. People initially embraced new programs and new capabilities with joy, then came to depend on them, then found themselves subject to a system that controlled their lives. At that point, they could either submit or rebel. The scientists in London rebelled. “They were sick of results that they had gotten one week no longer being reproducible a week later,” Spencer wrote. They insisted that the group spend a year rewriting the code from scratch. And yet, after the rewrite, the bureaucratic shackles remained.
As a program adapts and serves more people and more functions, it naturally requires tighter regulation. Software systems govern how we interact as groups, and that makes them unavoidably bureaucratic in nature. There will always be those who want to maintain the system and those who want to push the system’s boundaries. Conservatives and liberals emerge.
Scientists now talked of “old Fluidity,” the smaller program with fewer collaborators which left scientists free to develop their own idiosyncratic styles of research, and “new Fluidity,” which had many more users and was, accordingly, more rule-bound. Changes required committees, negotiations, unsatisfactory split-the-difference solutions. Many scientists complained to Spencer in the way that doctors do—they were spending so much time on the requirements of the software that they were losing time for actual research. “I just want to do science!” one scientist lamented.
Yet none could point to a better way. “While interviewees would make their resistance known to me,” Spencer wrote, “none of them went so far as to claim that Fluidity could be better run in a different manner.” New Fluidity had capabilities that no small, personalized system could ever provide and that the scientists couldn’t replace.
The Tar Pit has trapped a great many of us: clinicians, scientists, police, salespeople—all of us hunched over our screens, spending more time dealing with constraints on how we do our jobs and less time simply doing them. And the only choice we seem to have is to adapt to this reality or become crushed by it.
Many have been crushed. The Berkeley psychologist Christina Maslach has spent years studying the phenomenon of occupational burnout. She focussed on health care early on, drawn by the demanding nature of working with the sick. She defined burnout as a combination of three distinct feelings: emotional exhaustion, depersonalization (a cynical, instrumental attitude toward others), and a sense of personal ineffectiveness. The opposite, a feeling of deep engagement in one’s work, came from a sense of energy, personal involvement, and efficacy. She and her colleagues developed a twenty-two-question survey known as the Maslach Burnout Inventory, which, for nearly four decades, has been used to track the well-being of workers across a vast range of occupations, from prison guards to teachers.
In recent years, it has become apparent that doctors have developed extraordinarily high burnout rates. In 2014, fifty-four per cent of physicians reported at least one of the three symptoms of burnout, compared with forty-six per cent in 2011. Only a third agreed that their work schedule “leaves me enough time for my personal/family life,” compared with almost two-thirds of other workers. Female physicians had even higher burnout levels (along with lower satisfaction with their work-life balance). A Mayo Clinic analysis found that burnout increased the likelihood that physicians switched to part-time work. It was driving doctors out of practice.
Burnout seemed to vary by specialty. Surgical professions such as neurosurgery had especially poor ratings of work-life balance and yet lower-than-average levels of burnout. Emergency physicians, on the other hand, had a better-than-average work-life balance but the highest burnout scores. The inconsistencies began to make sense when a team at the Mayo Clinic discovered that one of the strongest predictors of burnout was how much time an individual spent tied up doing computer documentation. Surgeons spend relatively little of their day in front of a computer. Emergency physicians spend a lot of it that way. As digitization spreads, nurses and other health-care professionals are feeling similar effects from being screen-bound.
Sadoughi told me of her own struggles—including a daily battle with her Epic “In Basket,” which had become, she said, clogged to the point of dysfunction. There are messages from patients, messages containing lab and radiology results, messages from colleagues, messages from administrators, automated messages about not responding to previous messages. “All the letters that come from the subspecialists, I can’t read ninety per cent of them. So I glance at the patient’s name, and, if it’s someone that I was worried about, I’ll read that,” she said. The rest she deletes, unread. “If it’s just a routine follow-up with an endocrinologist, I hope to God that if there was something going on that they needed my attention on, they would send me an e-mail.” In short, she hopes they’ll try to reach her at yet another in-box.
As I observed more of my colleagues, I began to see the insidious ways that the software changed how people work together. They’d become more disconnected; less likely to see and help one another, and often less able to. Jessica Jacobs, a longtime office assistant in my practice—mid-forties, dedicated, with a smoker’s raspy voice—said that each new software system reduced her role and shifted more of her responsibilities onto the doctors. Previously, she sorted the patient records before clinic, drafted letters to patients, prepped routine prescriptions—all tasks that lightened the doctors’ load. None of this was possible anymore. The doctors had to do it all themselves. She called it “a ‘stay in your lane’ thing.” She couldn’t even help the doctors navigate and streamline their computer systems: office assistants have different screens and are not trained or authorized to use the ones doctors have.
“You can’t learn more from the system,” she said. “You can’t do more. You can’t take on extra responsibilities.” Even fixing minor matters is often not in her power. She’d recently noticed, for instance, that the system had the wrong mailing address for a referring doctor. But, she told me, “all I can do is go after the help desk thirteen times.”
Jacobs felt sad and sometimes bitter about this pattern of change: “It’s disempowering. It’s sort of like they want any cookie-cutter person to be able to walk in the door, plop down in a seat, and just do the job exactly as it is laid out.”
Sadoughi felt much the same: “The first year Epic came in, I was so close to saying, ‘That’s it. I’m done with primary care, I’m going to be an urgent-care doctor. I’m not going to open another In Basket.’ It took all this effort reëvaluating my purpose to stick with it.”
Gregg Meyer sympathizes, but he isn’t sorry. As the chief clinical officer at Partners HealthCare, Meyer supervised the software upgrade. An internist in his fifties, he has the commanding air, upright posture, and crewcut one might expect from a man who spent half his career as a military officer.
“I’m the veteran of four large-scale electronic-health-records implementations,” he told me in his office, overlooking downtown Boston. Those included two software overhauls in the military and one at Dartmouth-Hitchcock Medical Center, where he’d become the chief clinical officer. He still sees patients, and he experiences the same frustrations I was hearing about. Sometimes more: he admits he’s not as tech-savvy as his younger colleagues.
“But we think of this as a system for us and it’s not,” he said. “It is for the patients.” While some sixty thousand staff members use the system, almost ten times as many patients log into it to look up their lab results, remind themselves of the medications they are supposed to take, read the office notes that their doctor wrote in order to better understand what they’ve been told. Today, patients are the fastest-growing user group for electronic medical records.
Computerization also allows clinicians to help patients in ways that hadn’t been possible before. In one project, Partners is scanning records to identify people who have been on opioids for more than three months, in order to provide outreach and reduce the risk of overdose. Another effort has begun to identify patients who have been diagnosed with high-risk diseases like cancer but haven’t received prompt treatment. The ability to adjust protocols electronically has let Meyer’s team roll out changes far faster as new clinical evidence comes in. And the ability to pull up records from all hospitals that use the same software is driving real improvements in care.
Meyer gave me an example. “The care of the homeless population of Boston took a quantum leap,” he said. With just a few clicks, “we can see the fact that they had three TB rule-outs”—three negative test results for tuberculosis—“someplace else in town, which means, O.K., I don’t have to put him in an isolation room.”
In Meyer’s view, we’re only just beginning to experience what patient benefits are possible. A recent study bolsters his case. Researchers looked at Medicare patients admitted to hospitals for fifteen common conditions, and analyzed how their thirty-day death rates changed as their hospitals computerized. The results shifted over time. In the first year of the study, deaths actually increased 0.11 per cent for every new function added—an apparent cost of the digital learning curve. But after that deaths dropped 0.21 per cent a year for every function added. If computerization causes doctors some annoyance but improves patient convenience and saves lives, Meyer is arguing, isn’t it time we all got on board?
“I’m playing the long game,” he said. “I have full faith that all that stuff is just going to get better with time.”
And yet it’s perfectly possible to envisage a system that makes care ever better for those who receive it and ever more miserable for those who provide it. Hasn’t this been the story in many fields? The complaints of today’s health-care professionals may just be a white-collar, high-tech equivalent of the century-old blue-collar discontent with “Taylorization”—the industrial philosophy of fragmenting work into components, standardizing operations, and strictly separating those who design the workflow from those who do the work. As Frederick Winslow Taylor, the Progressive Era creator of “scientific management,” put it, “In the past, the man has been first; in the future, the system must be first.” Well, we are in that future, and the system is the computer.
Indeed, the computer, by virtue of its brittle nature, seems to require that it come first. Brittleness is the inability of a system to cope with surprises, and, as we apply computers to situations that are ever more interconnected and layered, our systems are confounded by ever more surprises. By contrast, the systems theorist David Woods notes, human beings are designed to handle surprises. We’re resilient; we evolved to handle the shifting variety of a world where events routinely fall outside the boundaries of expectation. As a result, it’s the people inside organizations, not the machines, who must improvise in the face of unanticipated events.
Last fall, the night before daylight-saving time ended, an all-user e-mail alert went out. The system did not have a way to record information when the hour from 1 A.M. to 1:59 A.M. repeated in the night. This was, for the system, a surprise event. The only solution was to shut down the lab systems during the repeated hour. Data from integrated biomedical devices (such as monitoring equipment for patients’ vital signs) would be unavailable and would have to be recorded by hand. Fetal monitors in the obstetrics unit would have to be manually switched off and on at the top of the repeated hour.
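The trouble is easy to reproduce outside a hospital. Here is a minimal sketch in Python (an illustration under my own assumptions, not anything from the hospital’s actual system) of why that repeated hour confounds software that records local wall-clock times: the same reading on the clock corresponds to two different real moments.

```python
# Illustration only: the hour that repeats when daylight-saving time ends is
# ambiguous for any system that stores local wall-clock timestamps.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

eastern = ZoneInfo("America/New_York")

# 1:30 A.M. on November 4, 2018 happened twice in the Eastern time zone.
first_pass = datetime(2018, 11, 4, 1, 30, tzinfo=eastern)           # still EDT (fold=0)
second_pass = datetime(2018, 11, 4, 1, 30, fold=1, tzinfo=eastern)  # after clocks fell back, EST

print(first_pass.astimezone(timezone.utc))   # 2018-11-04 05:30:00+00:00
print(second_pass.astimezone(timezone.utc))  # 2018-11-04 06:30:00+00:00
# Same local timestamp, two different real moments -- a record keyed only by
# local time cannot say which one a vital-sign measurement belongs to.
```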
Medicine is a complex adaptive system: it is made up of many interconnected, multilayered parts, and it is meant to evolve with time and changing conditions. Software is not. It is complex, but it does not adapt. That is the heart of the problem for its users, us humans.
Adaptation requires two things: mutation and selection. Mutation produces variety and deviation; selection kills off the least functional mutations. Our old, craft-based, pre-computer system of professional practice—in medicine and in other fields—was all mutation and no selection. There was plenty of room for individuals to do things differently from the norm; everyone could be an innovator. But there was no real mechanism for weeding out bad ideas or practices.
Computerization, by contrast, is all selection and no mutation. Leaders install a monolith, and the smallest changes require a committee decision, plus weeks of testing and debugging to make sure that fixing the daylight-saving-time problem, say, doesn’t wreck some other, distant part of the system.
For those in charge, this kind of system oversight is welcome. Gregg Meyer is understandably delighted to have the electronic levers to influence the tens of thousands of clinicians under his purview. He had spent much of his career seeing his hospitals blighted by unsafe practices that, in the paper-based world, he could do little about. A cardiologist might decide to classify and treat patients with congestive heart failure differently from the way his colleagues did, and with worse results. That used to happen all the time.
“Now there’s a change-control process,” Meyer said. “When everything touches everything, you have to have change-control processes.”
But those processes cannot handle more than a few change projects at a time. Artisanship has been throttled, and so has our professional capacity to identify and solve problems through ground-level experimentation. Why can’t our work systems be like our smartphones—flexible, easy, customizable? The answer is that the two systems have different purposes. Consumer technology is all about letting me be me. Technology for complex enterprises is about helping groups do what the members cannot easily do by themselves—work in coördination. Our individual activities have to mesh with everyone else’s. What we want and don’t have, however, is a system that accommodates both mutation and selection.
We are already seeing the next mutation. During the past year, Massachusetts General Hospital has been trying out a “virtual scribe” service, in which India-based doctors do the documentation based on digitally recorded patient visits. Compared with “live scribing,” this system is purportedly more accurate—since the scribes tend to be fully credentialled doctors, not aspiring med students—for the same price or cheaper. IKS Health, which provides the service, currently has four hundred physicians on staff in Mumbai giving support to thousands of patient visits a day in clinics across the United States. The company expects to employ more than a thousand doctors in the coming year, and it has competitors taking the same approach.
Siddhesh Rane is one of its doctor-scribes. A thirty-two-year-old orthopedic surgeon from a town called Kolhapur, he seemed like any of my surgical colleagues here in Boston: direct, driven, with his photo I.D. swaying on a lanyard around his neck. He’d joined the company for the learning opportunity, he said, not the pay (although many of the IKS staffers were better paid than they would be in a local medical practice).
He explained the virtual-scribe system to me when we spoke via Skype. With the patient’s permission, physicians record an entire patient visit with a multidirectional microphone, then encrypt and transmit the recording online. In India, Rane listens to the visit and writes a first draft of the office note. Before starting the work, he went through a careful “onboarding” process with each of the American physicians he works with. One, Nathalee Kong, a thirty-one-year-old internist, was based at an M.G.H. clinic in Revere, a working-class community north of Boston. For a week, Rane listened to recordings of her patient visits and observed how she wrote them up. For another week, they wrote parallel notes, to make sure Rane was following Kong’s preferences. They agreed on trigger phrases; when she says to the patient, “Your exam is normal except for . . . ,” Rane can record the usual elements of her head-to-toe exam without her having to call each one out.
A note for a thirty-minute visit takes Rane about an hour to process. It is then reviewed by a second physician for quality and accuracy, and by an insurance-coding expert, who confirms that it complies with regulations—and who, not incidentally, provides guidance on taking full advantage of billing opportunities. IKS Health says that its virtual-scribe service pays for itself by increasing physician productivity—in both the number of patients that physicians see and the amount billed per patient.
Kong was delighted by the arrangement. “Now all I have to do is listen to the patient and be present,” she told me. When taking a family history, she said, “I don’t have to go back and forth: ‘O.K., so your mom had breast cancer. Let me check that off in the computer before I forget.’ I’m just having a natural conversation with another human being, instead of feeling like I’m checking off a box, which I literally was doing.”
Before working with Rane, Kong rarely left the office before 7 P.M., and even then she had to do additional work at home in order to complete her notes. Now she can leave at five o’clock. “I’m hopeful that this prevents me from burning out,” she said. “That’s something I was definitely aware of going into this profession—something that I really feared.” What’s more, she now has the time and the energy to explore the benefits of a software system that might otherwise seem to be simply a burden. Kong manages a large number of addiction patients, and has learned how to use a list to track how they are doing as a group, something she could never have done on her own. She has also learned to use a function that enters a vaccine table into patients’ notes, allowing her to list the vaccinations they should have received and the ones they are missing.
Her biggest concern now? That the scribes will be taken away. Yet can it really be sustainable to have an additional personal assistant—a fully trained doctor in India, no less—for every doctor with a computer? And, meanwhile, what’s happening across the globe? Who is taking care of the patients all those scribing doctors aren’t seeing?
There’s a techno-optimist view of how this story will unfold. Big technology companies are already circling to invest in IKS Health. They see an opportunity for artificial intelligence to replace more and more of what Rane does. This prospect doesn’t worry Rane very much; by the time technology has taken his place, he hopes to have set up a clinic of his own, and perhaps get to use the system himself. It’s not hard to believe that our interfaces for documenting and communicating will get easier, more intuitive, less annoying.
But there’s also a techno-pessimist version of the story. A 2015 study of scribes for emergency physicians in an Atlanta hospital system found that the scribes produced results similar to what my Boston colleagues described—a thirty-six-per-cent reduction in the doctors’ computer-documentation time and a similar increase in time spent directly interacting with patients. Two-thirds of the doctors said that they “liked” or even “loved” having a scribe. Yet they also reported no significant change in their job satisfaction. With the time that scribes freed up, the system simply got doctors to take on more patients. Their workload didn’t lighten; it just shifted.
Studies of scribes in other health systems have found the same effect. Squeezing more patients into an hour is better than spending time entering data at a keyboard. More people are taken care of. But are they being taken care of well? As patients, we want the caring and the ingenuity of clinicians to be augmented by systems, not defeated by them. In an era of professional Taylorization—of the stay-in-your-lane ethos—that does not seem to be what we are getting.
Putting the system first is not inevitable. Postwar Japan and West Germany eschewed Taylor’s method of industrial management, and implemented more collaborative approaches than were typical in the U.S. In their factories, front-line workers were expected to get involved when production problems arose, instead of being elbowed aside by top-down management. By the late twentieth century, American manufacturers were scrambling to match the higher quality and lower costs that these methods delivered. If our machines are pushing medicine in the wrong direction, it’s our fault, not the machines’ fault.
Some people are pushing back. Neil R. Malhotra is a boyish, energetic, forty-three-year-old neurosurgeon who has made his mark at the University of Pennsylvania as something of a tinkerer. He has a knack for tackling difficult medical problems. In the past year alone, he has published papers on rebuilding spinal disks using tissue engineering, on a better way to teach residents how to repair cerebral aneurysms, and on which spinal-surgery techniques have the lowest level of blood loss. When his hospital’s new electronic-medical-record system arrived, he immediately decided to see if he could hack the system.
He wasn’t a programmer, however, and wasn’t interested in becoming one. So he sought out Judy Thornton, a software analyst from the hospital’s I.T. department. Together, they convened an open weekly meeting, currently on Thursday mornings, where everyone in the neurosurgery department—from the desk clerks to the medical staff to the bosses—could come not just to complain about the system but also to reimagine it. Department members feared that Malhotra’s pet project would be a time sink. Epic heard about his plans to fiddle around with its system and reacted with alarm. The hospital lawyers resisted, too. “They didn’t want us to build something that potentially had a lot of intellectual property in someone else’s system,” Malhotra said.
But he managed to keep the skeptics from saying no outright. Soon, he and his fellow-tinkerers were removing useless functions and adding useful ones. Before long, they had built a faster, more intuitive interface, designed specifically for neurosurgery office visits. It would capture much more information that really mattered in the care of patients with brain tumors, cerebral aneurysms, or spinal problems.
Now there was mutation and selection—through a combination of individual ingenuity and group preference. One new feature the department embraced, for instance, enlists the help of patients. At the end of a visit, doctors give the keyboard to patients, who provide their firsthand ratings of various factors that show how they’re progressing: their ability to walk without assistance, or their level of depression and anxiety. The data on mobility before surgery turned out to predict which patients would need to be prepared for time in a rehabilitation center and which ones could go straight home from the hospital.
Malhotra’s innovations showed that there were ways for users to take at least some control of their technology—to become, with surprising ease, creators. Granted, letting everyone tinker inside our medical-software systems would risk crashing them. But a movement has emerged to establish something like an app store for electronic medical records, one that functions much the way the app store on your smartphone does. If the software companies provided an “application programming interface,” or A.P.I., staff could pick and choose apps according to their needs: an internist could download an app to batch patients’ prescription refills; a pediatric nurse could download one to set up a growth chart.
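To give a sense of what building against such an interface involves, here is a minimal sketch, assuming a FHIR-style A.P.I., the open standard that most electronic-medical-record app programs build on; the endpoint, patient ID, and access token below are hypothetical placeholders rather than any vendor’s real values.

```python
# Minimal sketch (hypothetical endpoint): an app fetching one patient's record
# through a FHIR-style API, the kind of interface an EMR "app store" exposes.
# Requires the `requests` package.
import requests

BASE_URL = "https://emr.example-hospital.org/fhir"  # hypothetical EMR endpoint
PATIENT_ID = "12345"                                # hypothetical patient ID

response = requests.get(
    f"{BASE_URL}/Patient/{PATIENT_ID}",
    headers={
        "Accept": "application/fhir+json",
        "Authorization": "Bearer <access-token>",   # token granted when the app is authorized
    },
    timeout=10,
)
response.raise_for_status()

patient = response.json()
# A FHIR Patient resource carries demographics; with broader permissions an app
# could query MedicationRequest or Observation resources the same way, to batch
# prescription refills or plot a growth chart.
print(patient.get("name"), patient.get("birthDate"))
```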
Electronic-medical-record companies have fought against opening up their systems this way because of the loss of control (and potential revenue) doing so would entail. In the past couple of years, though, many have begun to bend. Even Epic has launched its “App Orchard.” It’s still in the early stages—only about a hundred apps are available, and there are strict limits on what kinds of customization it enables—but it’s a step in the right direction.
“You know what I’m excited about?” Malhotra said to me. “Walking.” He had collected data on the walking ability of ten thousand patients—both before and after surgery. “In any set of patients, what is our goal? It’s to maintain mobility or, in many cases, improve it.” Previously, his department could track only rates of survival and complications. Now he’s experimenting with an app that could live on patients’ phones and provide more granular data about their recovery process. “You’d turn on the neurosurgery module when you are seeing us, which would wake up at those time points to give you a notification saying, ‘Hey, can you do these surveys? They help your care, and you can do them on your phone.’ ”
I told him about a similar app my research team was experimenting with, which collects step counts, among other measures, after surgery. But we had no way to adjust our electronic medical records so that clinicians could readily find a particular patient’s results.
“Step counts!” Malhotra said. “Oh, if we just had step counts.” His wheels were turning. What if they made a tab in the electronic medical record with whatever data on activity patients were willing to provide?
“You could do that?” I said.
“Sure. Why not?”
It’s a beguiling vision. Many fear that the advance of technology will replace us all with robots. Yet in fields like health care the more imminent prospect is that it will make us all behave like robots. And the people we serve need something more than either robots or robot-like people can provide. They need human enterprises that can adapt to change.
It was a Monday afternoon. I was in clinic. I had no scribe, in India or otherwise; no cool app to speed me through my note-writing or serve up all my patient’s information in some nifty, instantly absorbable visual. It was just me, my computer, a file of papers, and John Cameron, a lanky, forty-three-year-old construction supervisor who’d been healthy all his life, felt fine, but was told to see a surgeon for reasons that he still didn’t completely understand.
It all started, he told me, with a visit to his primary-care provider for a routine physical. I held a printout of the doctor’s note. (My high-tech hack is to have key materials printed out, because it takes too long to flip between screens.) It said that she’d found a calcium level so high it was a wonder that Cameron wasn’t delirious. The internist sent him to an endocrinologist, who found, deep in his electronic records, a forgotten history of several benign skin lesions. The specialist wondered if Cameron had a rare genetic syndrome that’s known to cause tumors and, in turn, hormone abnormalities, skin lesions, and high calcium levels.
The diagnosis seemed very unlikely, but a battery of tests had turned up surprising results, including abnormal levels of a pituitary hormone. I needed to log into the computer to check the original lab reports. He watched me silently click one tab after another. Minutes passed. I became aware of how long it was taking me to pull up the right results. Finally, I let go of the mouse and took Cameron to the examining table. When I’d finished the exam and we sat down again at my little computer desk against the wall, I told him what I’d determined. He had a parathyroid tumor, it had pushed his calcium levels dangerously high, and it needed to be removed surgically. I took out a pen and paper, and drew a picture to explain how the surgery would be done. First, though, we needed to get his calcium under control. The abnormal levels of the pituitary hormone suggested that he might have a tumor in his pituitary gland as well—and might even have the unusual genetic syndrome. I was less sure about this, I told him, so I wanted to do more testing and get an opinion from an expert at my hospital.
Cameron’s situation was too complicated for a thirty-minute slot. We’d gone way over time. Other patients were waiting. Plus, I still had to type up all my findings, along with our treatment plan.
“Any questions?” I asked, hoping he’d have none.
“It’s a lot to take in,” he said. “I feel normal. It’s hard to imagine all this going on.” He looked at me, expecting me to explain more.
I hesitated. Let’s talk after the new tests come back, I said.
Later, I thought about how unsatisfactory my response was. I’d wanted to put my computer away—to sort out what he’d understood and what he hadn’t, to learn a bit about who he really was, to make a connection. But I had that note to type, and the next patient stewing across the hall.
The story of modern medicine is the story of our human struggle with complexity. Technology will, without question, continually increase our ability to make diagnoses, to peer more deeply inside the body and the brain, to offer more treatments. It will help us document it all—but not necessarily to make sense of it all. Technology inevitably produces more noise and new uncertainties.
Perhaps a computer could have alerted me to the possibility of a genetic disorder in John Cameron, based on his history of skin lesions and the finding of high calcium. But our systems are forever generating alerts about possible connections—to the point of signal fatigue. Just ordering medications and lab tests triggers dozens of alerts each day, most of them irrelevant, and all in need of human reviewing and sorting. There are more surprises, not fewer. The volume of knowledge and capability increases faster than any individual can manage—and faster than our technologies can make manageable for us. We ultimately need systems that make the right care simpler for both patients and professionals, not more complicated. And they must do so in ways that strengthen our human connections, instead of weakening them.
A week or two after my visit with Cameron, I called him to review his laboratory results. A scan had pinpointed a parathyroid tumor in the right side of his neck, which would be straightforward to remove. A test showed that he didn’t have the genetic syndrome, after all, and a brain scan showed no pituitary tumor.
I had more time for his questions now, and I let him ask them. When we were done and I was about to get off the phone, I paused. I asked him if he’d noticed, during our office visit, how much time I’d spent on the computer.
“Yes, absolutely,” he said. He added, “I’ve been in your situation. I knew you were just trying to find the information you needed. I was actually trying not to talk too much, because I knew you were in a hurry, but I needed you to look the information up. I wanted you to be able to do that. I didn’t want to push you too far.”
It was painful to hear. Forced to choose between having the right technical answer and a more human interaction, Cameron picked having the right technical answer. I asked him what he meant about having been in my situation. As a construction-site supervisor, he said, he spends half his day in front of his laptop and half in front of people. His current job was overseeing the construction of a thirty-eight-unit apartment complex in town. “I have to make sure that things are being done per design and the specifications,” he said. That involves looking up lots of information, logging inspection data, and the like. But, at the same time, he has to communicate with lots of people. “I have to be out in the field checking and dealing with subcontractors and employees of our own.”
The technology at his disposal has grown more powerful in recent years. “We have cloud-based quality-control software, where we document the job at different stages. I can use that information for punch lists and quality-control checks. We also have a time-lapse camera where we can go back and look at things that we might’ve missed.” The technology is more precise, but it’s made everything more complicated and time-consuming. He faces the same struggle that I do.
Cameron was philosophical about it. He’s worked with big construction companies and small ones and used numerous software systems along the way. He couldn’t do without them. And yet, he said, “all these different technologies and apps on these iPads, all the stuff that I’ve had to use over the years, they’re supposed to make our job easier. But they’re either slow, or they’re cumbersome, or they require a lot of data entry and they’re not efficient.” The system inundates his subcontractors with e-mail alerts, for instance. “ ‘You gotta submit this, you’re behind on that, you didn’t finish the punch list.’ The project managers and superintendents and subcontractors eventually say, ‘Enough’s enough. We can’t deal with all these e-mails. It’s ridiculous.’ So they ignore them all. Then nothing gets done. You end up on the phone, back to the old-school way. Because it’s a people business.”
He went on, “I don’t allow anybody to work on my job unless they go through a one-hour orientation with me. I have to know these guys personally. They have to know me. Millions of years human beings evolved to look at each other in the face, to use facial expression to create connection.”
I’d talked to dozens of experts, but Cameron might have been the wisest of them all. There was something comforting about the way he accepted the inevitability of conflict between our network connections and our human connections. We can retune and streamline our systems, but we won’t find a magical sweet spot between competing imperatives. We can only insure that people always have the ability to turn away from their screens and see each other, colleague to colleague, clinician to patient, face to face.
The next time I saw Cameron was on the day of his operation. He lay on a stretcher outside the operating room, waiting to be wheeled in. A computer screen on a boom loomed over the bed, showing the safety checks I still had to do.
I shook Cameron’s hand and was introduced to his wife, who was in a chair beside him. They smiled nervously. It was his first time going under anesthesia. I told them about who would be on the surgical team with me and what was going to happen. I reached for the computer. But then I hesitated. I remembered when I’d turned my back on Cameron at our last encounter.
“Let’s go through these checks together,” I said.
I angled the screen toward the couple. Side by side, we confirmed that his medical history was up to date, that the correct surgical site was marked on his body, that I’d reviewed his medication allergies. His shoulders began to relax. His wife’s did, too.
“Are you ready?” I asked.
“I am,” he said. ♦
MIT lecturer Donald Sull offers a 7-step framework.
By Kara Baskin | March 28, 2018
Too often, companies get lost in buzzwords that muddle a clear path for the future or offer vague platitudes instead of precise goals. Here’s a guide for turning vision into guidelines for action.
“Strategy is a framework to guide critical choices to achieve a desired future,” said MIT Sloan senior lecturer Donald Sull in a new MIT Sloan Management Review webinar.
But companies frequently hamstring themselves.
Too often companies get lost in jargon. Sull has made a Mad Libs-style chart of overused buzzwords — fallback words like “digitize” and “monetize” and “leverage” — to illustrate how often companies overinflate their vision.
Other times, companies lay out an overly detailed long-term plan. The problem with this overambitious method is that “no plan survives contact with reality,” he said.
Instead, a strategy must be specific enough to set a clear direction while remaining broad enough to allow for flexibility and adjustment.
Call it the Occam’s razor theory of corporate methodology: An ideal strategy provides enough guidance to empower workers to make trade-offs, formulate goals, allocate resources, prioritize activities, and clarify what people are committing to do. At the same time, it offers enough flexibility to allow people to seize opportunities and adapt as needed.
“This is hard,” Sull acknowledged.
Take American Airlines versus Southwest Airlines. American has goals like “be an industry leader” and “look to the future.” Inspiring but vague. Southwest, on the other hand, has initiatives like “fleet modernization” and “growth of Rapid Rewards program.” Precise and defined.
Here’s how to do the same.
Limit your objectives to a handful. This forces you to focus on what matters most and make important trade-offs among conflicting objectives.
Focus on the midterm. Priorities usually take three to five years to accomplish, Sull said. Annual goals are too tactical, and long-term goals are too abstract to offer real guidance.
Pull toward the future. “Many companies think about what worked well in the past. That’s fine, if the future looks like the past,” he said. To shake off inertia, think about priorities that don’t reinforce the status quo.
Make the hard calls. Strategy is about choice, and priorities should force you to confront the most consequential and sometimes difficult trade-offs.
Address critical vulnerabilities. Focus strategic priorities on the factors that matter most to your company’s success and are most at risk of failing in execution; those are the ones that deserve attention.
Provide concrete guidance. Priorities, backed by metrics, should give leaders throughout the organization a framework for deciding where to focus and what to stop doing.
Align the top team. Leaders often fail on this count. Sull cited a survey of 302 companies in which, even given five tries, only 29 percent of managers could name three of their company’s strategic priorities.
He closed with a Zen koan.
“If a strategy falls into a company and nobody understands it, does it make a difference?” he asked.
Unlike most koans, this one has a correct answer: No.
Sull teaches the MIT Sloan Executive Education course Closing the Gap Between Strategy and Execution.
Article link: http://mitsloan.mit.edu/newsroom/articles/how-to-turn-a-strategic-vision-into-reality
“Medicare for All” is now a political campaign talking point. Polls on Medicare for All and single-payer health care have shown that public support varies depending on what the proposal is called and the arguments for or against it. Misconceptions abound about these proposals and their likely effects.
Care would not be free in a single-payer system—it would be paid for differently. Instead of paying insurance premiums, people would pay taxes, which would be collected by a government agency and used to pay for health care on behalf of the population. Some in higher tax brackets might pay more under a single-payer system than under the current system, while others might pay less.
Many single-payer proposals, including Sen. Bernie Sanders’ “Medicare for All” proposal, cover a comprehensive range of services with no or very low co-pays and deductibles. But while minimal cost sharing is common in many proposals, a single-payer system would not necessarily eliminate all out-of-pocket expenses. In fact, the current Medicare program, which some consider a form of single payer, has deductibles and co-pays.
A single-payer system could push health spending up or down, or not have much effect. Spending could increase if a national single-payer system expanded coverage to more people, leading to higher use of health services. If the single-payer plan cuts deductibles and co-pays, currently insured people would also use more services. But a single-payer system might also reduce or eliminate administrative expenses, such as insurer marketing, billing and claims processing, which would push spending down. A single-payer plan could also cut spending by negotiating lower prices with providers and drug companies.
Two recent studies, a national-level analysis by the Mercatus Center and RAND’s analysis of a single-payer proposal for New York state, estimated that total spending could decline by a few percentage points. Regardless of whether total spending goes up or down, federal spending would almost surely increase, because the government would be responsible for paying the bills.
If the United States adopted a single-payer plan, employer-sponsored insurance would become less relevant because people would have an alternative source of coverage. As a result, many employers would drop health insurance coverage.
However, workers would not lose access to insurance—they would have coverage through the single-payer plan. Many single-payer plans, including Sanders’ “Medicare for All” proposal, cover more than most current employer insurance plans, which have an average deductible of $1,573 for single coverage.
Some single-payer proposals explicitly prohibit employers and private insurers from offering health insurance coverage, to avoid a two-tiered system in which wealthier people have access to more services and providers. Other proposals would allow private insurance to offer coverage for services not included in the single-payer plan (such as elective surgeries), or to provide faster or improved services for those who wish to supplement their benefits.
For example, in Australia, all residents are eligible for basic health services provided through a single payer, but those with higher income are encouraged to buy additional, private coverage that provides access to private providers and hospitals.
None of the leading Medicare for All proposals require that doctors and other health care professionals become government employees, as is the case in the United Kingdom’s National Health Service. Under Sanders’ Medicare for All proposal, private practices and hospitals would continue to operate independently. Other single-payer proposals would require hospitals to convert to nonprofit status, but the hospitals could remain privately run.
Enrollees generally would be able to choose among providers participating in the program, and—if all providers participated—there would be no need to worry about out-of-network charges. However, changes in payment rates under a single-payer system could affect doctors’ willingness to supply services, and could make it more difficult to get appointments.
We see this effect in our current system—in 2015, only 45 percent of primary care physicians accepted Medicaid patients, due in part to Medicaid’s relatively low payment rates. In contrast, 72 percent of primary care physicians accepted new Medicare patients and 80 percent accepted new commercial patients.
Even if overall provider payment levels were reduced, payments to each individual provider would depend on their existing mix of patients. Payment might go up for some providers, such as those who see Medicaid patients, and could be about the same for those who see Medicare patients.
Jodi L. Liu is an associate policy researcher at the nonprofit, nonpartisan RAND Corporation. Christine Eibner is the Paul O’Neill-Alcoa chair in policy analysis at RAND and a professor at the Pardee RAND Graduate School.
This commentary originally appeared on USA Today on October 26, 2018. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.

At a hospital in eastern China’s Zhejiang province, patients are now able to sign in not by presenting identification, but by having their faces scanned, state media outlet People’s Daily reported Tuesday.
Since Oct. 15, the new system — co-developed by Alibaba’s health care arm, Ali Health, and the health department of Yuhang, a suburban district of Hangzhou — has allowed anyone with health insurance and a mobile payment account to register at Yuhang First People’s Hospital more quickly and efficiently.
“Patients no longer have to search through the mess for their national ID cards, bank cards, or phones,” Li Xiaojun, the hospital’s deputy director, told People’s Daily.
The first step is for patients to link their accounts on Alipay — an Alibaba-affiliated mobile payment app used by an estimated 870 million people — to the hospital’s system using a self-service machine. Then when the patient enters a consultation room, medical staff can verify their identity in just a few seconds. When the consultation is finished, the patient can pay with their medical insurance or Alipay simply by having their face scanned. (Alipay’s top competitor, WeChat Pay, is not accepted under the new system.)
“We are considering expanding this service to cover other hospitals in Yuhang after the facial-recognition technology is well-established,” Ma Weihang, deputy director of Zhejiang’s health and family planning commission, told Sixth Tone.
China is a world leader in facial recognition, which has seen a wide variety of applications across the country in recent years, including checking class attendance, screening railway passengers, catching jaywalkers, nabbing wanted suspects at concerts and beer festivals, and preventing drowning deaths. In April of this year, Alibaba invested $600 million into a Hong Kong company that specializes in the technology.
The hospital’s system uses custom 3-D cameras in combination with Alipay’s biometric technology and user data to match faces against photos in the Ministry of Public Security’s database with 99.99 percent accuracy, according to People’s Daily. Head wounds and bandages, however, could impede the system’s performance.
Editor: David Paulk.
(Header image: A woman registers at Yuhang First People’s Hospital by having her face scanned, Hangzhou, Zhejiang province, Oct. 15, 2018. Courtesy of the hospital)
Article link: http://www.sixthtone.com/news/1003064/zhejiang-hospital-scans-faces-to-register-patients