Not everybody reads the legal notices inside the Ottumwa Courier. But in January, Iowa pharmacist Mark Frahm noticed something unusual in the paper.
For years, Frahm’s South Side Drug bought pills from distributors, and dispensed prescriptions to the Wapello County jail. In turn, the pharmacy got reimbursed for the drugs by CVS Health Corp., which managed the county’s drug benefits plan.
As he compared the newspaper notice with his own records, and then with the county’s, Frahm saw that for a bottle of generic antipsychotic pills, CVS had billed Wapello County $198.22. But South Side Drug was reimbursed just $5.73.
So why was CVS charging almost $200 for a bottle of pills that it told the pharmacy was worth less than $6? And what was the company doing with the other $192.49?
Frahm had stumbled across what’s known as spread pricing, where companies like CVS mark up—sometimes dramatically—the difference between the amount they reimburse pharmacies for a drug and the amount they charge their clients.
It’s where pharmacy benefit managers (PBMs) like CVS make a part of their profit. But Frahm says he didn’t think the spread could be thousands of percent.
“Middlemen have to make some money, but we didn’t expect it to be this extreme,” said Frahm, who said his pharmacy lost money in the jail account last year because CVS paid so little. “We figured everyone was playing fair.”
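The arithmetic Frahm worked through can be sketched in a few lines. The dollar figures come from the county records described above; the percentage markup is derived from them:

```python
# Spread pricing: the gap between what the PBM bills its client
# and what it reimburses the pharmacy for the same drug.
billed_to_county = 198.22   # what CVS billed Wapello County
paid_to_pharmacy = 5.73     # what South Side Drug was reimbursed

spread = round(billed_to_county - paid_to_pharmacy, 2)
markup_pct = round(spread / paid_to_pharmacy * 100)

print(f"Spread kept by the PBM: ${spread}")         # $192.49
print(f"Markup over reimbursement: {markup_pct}%")  # 3359%
```

A markup of roughly 3,400 percent is what Frahm meant by a spread of "thousands of percent."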
How Spread Pricing Works
Not long ago, I asked 100 CEOs attending a conference how many of them were currently involved in a significant business transformation. Nearly all of them raised their hands, which was no surprise. According to a study by BCG, 85% of companies have undertaken a transformation during the past decade.
The same research found that nearly 75% of those transformations fail to improve business performance, either short-term or long-term.
So why is transformation so difficult to achieve?
Among many potential explanations, one that gets very little attention may be the most fundamental: the invisible fears and insecurities that keep us locked into behaviors even when we know rationally that they don’t serve us well. Add to that the anxiety that nearly all human beings experience in the face of change. Nonetheless, most organizations pay far more attention to strategy and execution than they do to what their people are feeling and thinking when they’re asked to embrace a transformation. Resistance, especially when it is passive, invisible, and unconscious, can derail even the best strategy.
Business transformations are typically built around new structural elements, including policies, processes, facilities, and technology. Some companies also focus on behaviors — defining new practices, training new skills, or asking employees for new deliverables.
What most organizations typically overlook is the internal shift — what people think and feel — which has to occur in order to bring the strategy to life. This is where resistance tends to arise — cognitively in the form of fixed beliefs, deeply held assumptions and blind spots; and emotionally, in the form of the fear and insecurity that change engenders. All of this rolls up into our mindset, which reflects how we see the world, what we believe and how that makes us feel.
The result is that transforming a business also depends on transforming individuals — beginning with the most senior leaders and influencers. Few of them, in our experience, have spent much time observing and understanding their own motivations, challenging their assumptions, or pushing beyond their intellectual and emotional comfort zones. The result is something that the psychologists Lisa Lahey and Robert Kegan have termed “immunity to change.”
We first ran up against the power of mindset two decades ago when we began to make a case inside organizations that rest and renewal are essential for sustaining high performance. The scientific evidence we presented to clients was compelling. Nearly all of them found the concept persuasive and appealing, both logically and intuitively. We taught them very simple strategies to build renewal into their lives, and they left our workshops eager to change the way they worked.
Nonetheless, most of them struggled with changing their behavior when they got back to their jobs. They continued to equate continuous work and long hours with success. Taking time to renew during work days made them feel as if they were slacking. Even when organizations built nap rooms, they often went unused. People worried that if they rested at all, they wouldn’t get their work done, and above all, they feared failing. Despite their best intentions, many of them eventually defaulted back to their habitual patterns.
More recently, we worked with the senior team of a large consumer product company which had been severely disrupted by smaller, more agile online competitors selling their services directly to consumers. On its face, the team was aligned, focused, and committed to a new multi-faceted strategy with a strong digital component. But when we looked at the team’s mindset more deeply, we discovered that they shared several underlying beliefs including, “Everything we do is equally important,” “More is always better,” and “It has to be perfect or we don’t do it.” They summarized these beliefs in a single sentence: “If we don’t keep running as hard as we can, and attend to every detail, everything will fall apart.”
Not surprisingly, the leaders found they were spreading themselves too thin, struggling to pull the trigger on new initiatives, and feeling exhausted. Simply surfacing these costs and their consequences proved highly valuable and motivating. We also launched several initiatives to address these issues individually and collectively.
One of the most successful began with a simple exercise aimed at helping the leaders to define their three highest priorities. Then we took them through a structured exercise including delving into their calendars to assess whether they were using their time to best advantage, including setting aside time for renewal. This process prompted them to examine more consciously why they were working in self-defeating ways.
We also developed an online site where leaders agreed to regularly share their progress on prioritizing, as well as any feelings of resistance that were arising, and how they managed them. Their work is ongoing, but among the most common feelings people reported were liberation and relief. Their worst fears failed to materialize.
Several factors typically hold mindset in place. The first is that much of it gets deeply rooted early in our lives. Over time we tend to develop confirmation bias, forever seeking evidence that reinforces what we already believe, and downplaying or dismissing what doesn’t. We’re also designed, both genetically and instinctively, to put our own safety first, and to avoid taking too much risk. Rather than using our capacity for critical thinking to assess new possibilities, we often co-opt our prefrontal cortex to rationalize choices that were actually driven by our emotions.
All this explains why the most effective transformation begins with what’s going on inside people — and especially the most senior leaders, given their disproportionate authority and influence. Their challenge is to deliberately turn attention inward in order to begin noticing the fixed patterns in their thinking, how they’re feeling in any given moment, and how quickly the instinct for self-preservation can overwhelm rationality and a longer-term perspective, especially when the stakes are high.
Leaders also have an outsize impact on the collective mindset — meaning the organizational culture. As they begin to change the way they think and feel, they’re more able to model new behaviors and communicate to others more authentically and persuasively. Even employees highly resistant to change tend to follow their leaders, simply because most people prefer to fit in, rather than stick out.
Ultimately, personal transformation requires the courage to challenge one’s current comfort zone, and to tolerate that discomfort without overreacting. One of the most effective tools we’ve found is a series of provocative questions we ask leaders and their teams to build a practice around asking themselves:
“What am I not seeing?”
“What else is true?”
“What is my responsibility in this situation?”
“How is my perspective being influenced by my fears?”
Great strategy remains foundational to transformation, but successful execution also requires surfacing and continuously addressing the invisible reasons that people and cultures so often resist changing, even when the way they’re working isn’t working.
Poor requirements management is among the most significant contributing factors in project failure. Improper requirements management can lead to disconnected teams, wasted time, out-of-control costs and unsatisfied customers. If you use documents, spreadsheets, emails, wikis and other conventional office and productivity tools to keep track of your business, customer or product requirements, you could be putting your project at risk.
What did we think our solution should do again? Where did we capture our product requirements? Can you find them? Didn’t this version have that feature for that customer, that regulation, or to beat the competition? These are just a few of the questions that surface in the face of poor requirements creation, management and verification. Beyond the high-level benefits attributed to a proper requirements process — faster time to market, reduced cost, better quality and improved safety — there are many secondary benefits.
Top product development challenges revealed
In a recent Aberdeen Group survey, 39% of all respondents said collaboration was one of their top two challenges in the development process. Close behind, 30% of respondents said disconnected processes and siloed departments were at the top of their list of product development challenges. (Source: Aberdeen Research Report: Product development in the era of IoT)
Figure 1: Top product development challenges revealed, Aberdeen Research Report
Coping with complexity, compliance and regulation issues
As regulations and compliance issues become increasingly complex, it is imperative that a system be in place to ensure the solution contains the functionality necessary to comply with them. Many requirements have direct safety implications, and without proper requirement creation, management and tracking, a product may expose the company to liability.
In one example, the safety requirements for a pool heater were not carried over from one model to the next. The proper venting requirements were missed, leading to improper installation, the venting of carbon monoxide, and the deaths of four hotel guests in rooms above the pool heater.
Not only do the requirements have to be defined correctly, you also have to know where to find them and link them across the entire solution implementation. In addition, the requirements must be accessible at all times to all members of the team to enable the sharing required for a cohesive and coherent product.
Word processor or spreadsheet users can benefit from having all their requirements available from a secure, central and accessible location. They can understand what changes have been made, who made them, when they were made and what they affect. They can organize, trace and link priorities and requirements across the development lifecycle, or view history, reviews and discussion comments within the context of the document. They can also create multiple views, each offering a different perspective on the data.
The IBM® Rational® DOORS® solution is the industry leader in requirements management systems across all industries and dimensions. Built on the Jazz platform, DOORS Next Generation uses Open Services for Lifecycle Collaboration (OSLC) as its integration framework. This provides the ability to build a set of user-defined integrations across requirements, tests, architecture, design, and change and configuration management.
“In a 24-hour turnaround cycle, we were able to close 200 defects. We did that a few days in a row, and all of a sudden, we had a stable product. That was a big impact for us.” —John Penoyer, Group Manager of Tools, Panasonic Automotive Systems
Watch the video to learn how Panasonic Automotive improves quality with our cloud-based requirements management solution, DOORS NG.
Unlocking the cloud for requirements management
The pressure to improve product development takes on a new meaning when designing products for the IoT. By their nature, IoT-based products are more complex – and the development process must accommodate more product functionality and higher reliability, without compromising power, form factor, or overall performance.
Figure 2: Top pressures to improve product development, Aberdeen Research Report
Demands to accelerate time to market (outpacing the competition), the need for continuous product innovation (ensuring customers enjoy the latest updates to features and capabilities), and the push to expand market opportunities (keeping up with disruptive forces) are forcing organizations to stay on the cutting edge of the development process.
What’s crucial to product and solution development?
#1 Reduction in development cost
DOORS NG enables increased collaboration through its cloud implementation and ubiquitous access. Using a consistent tool drives consistency, reducing the misinterpretation of requirements that can lead to subjective implementation.
#2 Improving time to market
Providing a holistic view of the system, with access to the latest requirements and their history, reduces the need to re-verify the information you are creating and referencing.
#3 Multi user simultaneous support
A built-in scheme for supporting multiple users, with locking down to the paragraph level, reduces duplicate copies and prevents older versions from overwriting more current ones. Like any good requirements management solution, DOORS Next Generation supports annotating requirements with additional information, called attributes. These attributes can store comments, priorities and costs to improve the efficiency and effectiveness of the process being applied.
Supporting audit requests in three ways
#1 DOORS Next Generation provides end-to-end traceability, allowing an organization to show how it has applied a systematic engineering process from concept to components.
#2 DOORS Next Generation maintains history for all artifacts, recording who changed what and when, using attributes or linked work items when requirements change.
#3 DOORS Next Generation can link to related work items and test plans, showing not only how requirements have been met but also what remains to be validated and verified.
Assessing the impact of changes before the change happens
Using traceability as a guide, impact analysis makes it possible to follow the linked data and see the effects of any proposed change before it is made.
Requirements management solution users can manage change across the product lifecycle for faster impact assessment and better quality. They can explore information with more granularity with individually managed information and artifacts. You can assess the impact of changes more quickly by linking change requests to requirements. Drag-and-drop traceability is also supported. This software allows you to utilize proven models and templates for certification and compliance.
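Conceptually (this is an illustration of the idea, not the DOORS implementation), impact analysis over traceability links amounts to a reachability search on a graph of artifacts. All identifiers below are hypothetical:

```python
from collections import deque

# Hypothetical traceability links: each artifact points to the downstream
# artifacts (design elements, test cases) that depend on it.
links = {
    "REQ-101": ["DESIGN-7", "TEST-55"],
    "DESIGN-7": ["TEST-60"],
    "TEST-55": [],
    "TEST-60": [],
}

def impacted_artifacts(changed, links):
    """Breadth-first walk of traceability links from a changed artifact,
    returning every downstream artifact that needs review."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for child in links.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return sorted(seen)

print(impacted_artifacts("REQ-101", links))
# ['DESIGN-7', 'TEST-55', 'TEST-60']
```

Changing REQ-101 flags not only the directly linked design and test, but also the test that hangs off the design — which is exactly the before-the-fact visibility the tool is selling.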
Safety for your data and its use, leveraging over 20 years of experience
IBM Continuous Engineering Requirement Management solutions have been consistently rated highly by market leaders and industry analysts. The DOORS family of products has been in the market for more than 20 years – incorporating the collective learning and user experience from decades of leadership into the cloud-based version.
“IBM is helping us gear up to meet the challenges of an auto industry undergoing tremendous change. By putting us in complete control of our software and systems development process, we can continue to deliver rapid innovation for the next generation of safe, efficient, connected automobiles.” —Kishore LM, Senior Lead Engineer, Mahindra & Mahindra
Purchasing and training at your fingertips
DOORS Next Generation is available with authorized and floating user licenses, on both perpetual and term-based terms. Customers can also take advantage of token-based licensing for additional flexibility. DOORS Next Generation is relatively simple to get up and running. Explore the vibrant user community on Jazz.net. With an active forum, library, blog, and more, support is always at your fingertips.
Learn more about requirements management and cloud
An incredible change in how care is measured, billed, and paid is slowly transforming large portions of the United States healthcare industry. Many believe value-based payment can change the trajectory of national health expenditures and improve the quality of care, yet its success remains uncertain.
Early adopters of value-based payment have encountered numerous challenges to operationalizing the new model. In many ways, fee-for-service administrative processes and systems simply do not align with the needs of value-based payment. For example, no single system supports exchange of all the necessary data for value-based payment. The claims system was designed to support fee-for-service reimbursement. Electronic health record (EHR) systems hold much of the clinical data needed for direct patient care, yet they were not intended to integrate financial and clinical data or to serve as analytics tools. Health plans also maintain siloed systems, in which there is no clean way to integrate clinical data from providers.
Innovation has delivered important advances, and it has also created an operational patchwork of work-arounds and proprietary approaches. The challenge is that some uniformity is needed to prevent administrative roadblocks that can keep new payment models from thriving. The success of value-based payment hinges on achieving a new level of collaboration and cooperation between stakeholders.
We’ve seen this happen before. CAQH CORE was founded to address a similar issue arising from the initial adoption of electronic fee-for-service transactions. Health plans, providers, vendors, clearinghouses, government entities, standards development organizations, and other stakeholders came together to hammer out common rules of the road to help the industry uniformly and more fully implement Health Insurance Portability and Accountability Act (HIPAA) standard transactions. Through ongoing collaboration, these stakeholders continue to create and promote rules that, in combination with other industry efforts, help health plans, providers, government entities, vendors, and clearinghouses conduct fee-for-service administrative transactions electronically and securely. Importantly, these rules help eliminate many manual phone, fax and mail transactions.
As our organization followed the advance of value-based payment, it became clear that the emerging system was at risk of repeating fee-for-service-era mistakes. After a 2-year study, CAQH CORE released a report, All Together Now: Applying the Lessons of Fee-for-Service to Streamline Adoption of Value-Based Payments, which identifies 5 opportunity areas that, if improved, would streamline value-based payment operations. The report also details specific strategies to address each of these areas.
One of the opportunity areas identified in the report—data quality and uniformity—tackles the common problem of irregularities in data. That is, healthcare operational data often lacks the precision necessary to achieve the fluid interactions needed across stakeholders for success in value-based payment.
Discrepancies in the quality or uniformity of data can cause a host of issues in value-based payment. Missing, inaccurate, or obscure provider data can impact quality, cost, and patient experience. It may cause inaccurate risk-based payments when a provider’s specialty and relationship to the patient are unclear. Inaccurate provider data may cause unsuccessful provider-to-provider network referrals, affecting how much of a payment can be made and potentially increasing patients’ out-of-pocket costs. Provider identity also plays a role in care accountability, which can affect quality measurement, system cost, and patient cost.
As an example, the lack of a single, universal standard for provider identification can be a source of confusion. Some commercial health plans use a proprietary identifier in lieu of the standard National Provider Identifier (NPI) on claims. Medicaid programs typically use both a Medicaid ID and NPI (when applicable based on the provider). CMS uses both the NPI and the tax identification number (TIN), which sometimes identify physicians at the group level.
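The NPI itself does carry some internal structure that systems can exploit: its tenth digit is a Luhn check digit computed over the first nine digits with the industry prefix 80840 prepended, so a record can at least be screened for a malformed identifier before any matching is attempted. A sketch:

```python
def is_valid_npi(npi):
    """Validate a 10-digit NPI via the Luhn check with the 80840 prefix."""
    if len(npi) != 10 or not npi.isdigit():
        return False
    digits = [int(d) for d in "80840" + npi]  # prefix + 9 digits + check digit
    total = 0
    # Double every second digit from the right, starting left of the check digit.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(is_valid_npi("1234567893"))  # True: a commonly used test NPI
print(is_valid_npi("1234567890"))  # False: check digit does not match
```

A check like this catches transcription errors, but it cannot resolve which of several identifiers (NPI, TIN, proprietary plan IDs) a trading partner actually sent — that is the uniformity problem the text describes.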
Of course, data quality and uniformity are affected by many other issues. Even inconsistent use of seemingly common terms can be a source of trouble. When used inconsistently, terms to describe the date and time of an event, such as “discharge from an emergency department” or the nature of an event, such as “admission for observation,” can compromise the ability to measure timeliness of care and affect patients’ financial obligations.
Also, while standards exist for the use of many medical code sets in HIPAA-mandated transactions, EHR systems, quality measures, and so forth, there are no standards for the use of others. For example, a provider reporting the number of diabetics with an HbA1c level above 9.0% may encounter an error because “HbA1c” can be represented in more than 100 different ways.
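A common mitigation is to map the many local spellings onto a single canonical code before any measures are computed. The synonym table below is illustrative, not exhaustive; LOINC 4548-4 is one widely used code for HbA1c, but a production mapping would be far larger and locally curated:

```python
# Illustrative synonym table: raw lab names seen in data feeds
# mapped to a canonical code (here, a LOINC code for HbA1c).
HBA1C_SYNONYMS = {
    "hba1c": "4548-4",
    "hb a1c": "4548-4",
    "a1c": "4548-4",
    "hemoglobin a1c": "4548-4",
    "glycated hemoglobin": "4548-4",
}

def normalize_lab_name(raw):
    """Return the canonical code for a raw lab name, or None if unknown.
    Case and repeated whitespace are ignored."""
    key = " ".join(raw.lower().split())
    return HBA1C_SYNONYMS.get(key)

print(normalize_lab_name("Hemoglobin  A1c"))  # 4548-4
print(normalize_lab_name("HbA1c"))            # 4548-4
```

Without a shared mapping of this kind, each of the "more than 100 different ways" of writing HbA1c looks like a different test to a quality-measure engine.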
Interoperability. The scale and scope of collaboration needed for success in value-based payment magnifies the need for interoperability. Improvements are needed in semantic interoperability, as described above in the discussion on data quality, as well as in technical and process interoperability.
Of course, numerous efforts are already pursuing improvements in technical interoperability. Our recommendation fully supports those efforts by encouraging use of existing and emerging standards and technologies.
Process interoperability affects how information is exchanged and how actions are interpreted by other stakeholders. More carefully choreographed workflows, processes and policies would allow stakeholders to make more reliable comparisons and act on timelier insights, among other benefits. Our recommendation to promote process interoperability focuses on cataloging best practices.
Patient risk stratification. Characterizing patients’ risk and using that understanding to direct the focus of care coordination activities and adjust payment based on patient risk factors are hallmarks of value-based payment.
In our research, however, it was clear that the multitude of risk stratification and risk adjustment methodologies used today are sources of confusion that erode trust. Also, despite the vast amounts of data available today, gaps still need to be filled before the full potential of patient risk stratification can be realized in value-based payment.
Our recommendations seek to improve awareness of the threats caused by variability and data gaps and to promote collaboration and transparency in risk stratification and risk adjustment models.
Provider attribution. In value-based payment models, providers take responsibility for the care of specific patients, yet many providers indicate that they do not understand attribution models when they engage in these arrangements and that they may not be informed about their attributed patients.
We identified two underlying operational causes. First, providers frequently bill under multiple TINs or may bill under TINs that specify identification at the group or organization level. In these cases, reliance on the TIN for provider attribution is inadequate. Second, there is little uniformity in provider attribution methodologies. The large number of attribution methodologies is a source of provider frustration, as it can obscure the provider’s view of the patient’s true care coordination needs.
To improve the accuracy of provider attribution, we recommend the promotion of more precise means of identifying providers at the individual level, including the use of standardized data. We also recommend a streamlined approach and more transparency in the use of provider attribution models, beginning with a catalog of models and a library of leading practices.
Quality measurements. Responding to the growing number of clinical quality measurement programs has become an operational burden for providers. In many cases, redundant information is collected and communicated inconsistently.
CAQH CORE identified 3 specific operational challenges posed by quality measurement that provide opportunity for improvement: over-proliferation of quality measures, the process to generate quality data for value-based payment initiatives, and the lack of quality measures that are truly useful for improving care and addressing consumer concerns.
Many efforts to improve quality measurement and reporting and to promote harmonization are already underway. Our recommendation lends support to those efforts and proposes industry education on quality measurement improvement goals, such as improved consistency in measures across programs, a reduced data collection burden and a requirement that measures be actionable.
What “patient-empowered” approaches offer promising solutions to current problems in medical record matching?
To what extent can any solutions meet the following criteria: improvement in record matching if widely implemented; patient control; likelihood of adoption by patients, by providers, and by vendors; feasibility (of development, pilot testing, and implementation); minimal security risks; sustainability (operational and financial); political viability; potential to foster new uses of matched records; and low potential for unintended negative consequences?
Despite widespread adoption of electronic health records and increasing exchange of health care data, the benefits of interoperability and health information technology have been hampered by the inability to reliably match patients and their records. The Pew Charitable Trusts contracted with the RAND Corporation to investigate “patient-empowered” approaches to record matching — solutions that have some additional, voluntary role for patients beyond simply supplying demographics to their health care providers — and to select a promising solution for further development and pilot testing. After extensive consultation with a variety of experts, researchers did not identify a “silver bullet” or achieve consensus on a single solution. Instead, this report recommends adopting a three-stage approach that aims to improve the quality of identity information, establish new smartphone app functionality to facilitate bidirectional exchange of identity information and health care data between patients and providers, and create advanced functionality to further improve value. The report also suggests that because the solution contains multiple components involving diverse stakeholders, a governance mechanism likely will be needed to provide leadership, track pilot tests and evaluations, and convene key stakeholders to build consensus where it is needed.
Key Findings
This report identified ten potential patient-empowered solutions to improve record matching, evaluated them using 11 criteria, and proposed a three-stage solution combining elements of several approaches.
Implementing a voluntary universal identifier.
Using a public key as an identifier.
Expanding the use of existing government-issued identifiers.
Adding knowledge-based identity information.
Adding biometric data.
Having patients verify identity information.
Using consumer-directed exchange.
Using health record banks.
Having patients manually verify record matches.
Having patients supply record location information.
These potential solutions varied greatly in their level of detail and the extent to which they have been tested. Applying the evaluation criteria revealed notable strengths and weaknesses in every solution; on balance, a reasonable argument could be made to further develop most of them. Given the high uncertainty as to whether any specific solution can ultimately succeed, further investigation, development, and pilot testing of a range of solutions are warranted.
Recommendations
A promising approach is to advance a three-stage solution that leverages mobile phones and smartphones and aims to (1) improve the quality of identity information used for record matching, (2) establish new functionalities of smartphone apps to facilitate transfer of this information to providers, and (3) create advanced app functionality to further improve record matching and address other evaluation criteria (e.g., likelihood of adoption, sustainability).
Specific recommendations include those that will advance the selected three-stage solution through development and pilot testing by:
Developing technical specifications for verified data fields, developing best practices that allow health care providers to verify mobile phone numbers, and iteratively pilot testing and refining the specifications and best practices to maximize feasibility and usability
Developing application programming interfaces and best practices for establishing bidirectional communication between a smartphone app and health care provider registration systems at the point of care, and iteratively pilot testing and refining them
Developing advanced app functionalities to further improve record matching and increase the value of apps to patients and providers.
Two additional recommendations intended to help accelerate efforts to improve record matching are:
Establish or designate an organization to oversee national progress in record matching
Conduct more rigorous research into the nature and magnitude of record matching errors, and create methods for health care providers to objectively benchmark their record matching performance.
One of the biggest inflators of medical costs in 2019 will be hospital megamergers, a new analysis by PwC’s Health Research Institute suggests.
For its report, researchers at HRI analyzed Herfindahl-Hirschman Index data for hospital markets to predict and understand hospital consolidation patterns.
Here are five things to know:
1. The HRI estimates that 93 percent of metropolitan hospital markets will be highly concentrated by 2019.
2. This trend, in the short term, will likely lead to higher prices for medical services in these markets.
3. The reasons for the inflated costs are that providers will have more leverage to negotiate higher prices for medical services with payers and there will be increased expenses from integrating various networks.
4. “There has been little to no action taken to stop providers from concentrating or taking price increases,” Sherry Glied, PhD, dean of New York University Wagner Graduate School of Public Service, told HRI. “If inflation ramps up, and consolidation continues, expect to see strong upward pressure on prices.”
5. Over the long term, economies of scale and efficiencies may be achieved to lower costs, HRI said.
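The concentration measure behind these findings, the Herfindahl-Hirschman Index, is simple to compute: sum the squared market shares (in percent) of every competitor in a market. Under U.S. antitrust guidelines, markets scoring above 2,500 are considered highly concentrated. A sketch with hypothetical hospital shares:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared percent market shares."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical four-hospital market with shares of 40%, 30%, 20% and 10%.
market = [40, 30, 20, 10]
score = hhi(market)
print(score, "- highly concentrated" if score > 2500 else "- not highly concentrated")
```

A merger that combines the two smallest hospitals above would push the index higher still, which is why the measure is used to flag markets where consolidation gives providers pricing leverage.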
Host Francis Rose explores some of the key issues in health IT. The eighth installment focuses on the intersection between interoperability and value-based care, how artificial intelligence will transform healthcare, and how the implementation of new government standards, like the DATA Act, could impact patient outcomes.
Featured on this installment are leaders from the Office of the National Coordinator for Health Information Technology, the U.S. Department of Veterans Affairs, Deloitte, Defense Healthcare Management Systems, the U.S. Army Research Laboratory, U.S. Health and Human Services, the National Cancer Institute and the U.S. Digital Service.
Appearing on this program with Francis Rose of the Government Matters Thought Leadership Network:
Dr. Jon White, Deputy National Coordinator, Office of the National Coordinator for Health Information Technology
Kelly Cronin, Senior Advisor, Office of Connected Care, Veterans Health Administration, U.S. Department of Veterans Affairs
Dr. Doug Rosendale, Chief Medical Interoperability Officer, Deloitte
Stacy Cummings, Program Executive Officer, Defense Healthcare Management Systems
Sonja Lemott, Chief Engineer for Program Executive Officer, Defense Healthcare Management Systems
Dr. Jean Vettel, Senior Science Lead, U.S. Army Research Laboratory
Amy Haseltine, Deputy CIO for Enterprise Services, U.S. Department of Health & Human Services
Jeff Shilling, Chief Information Officer, National Cancer Institute
Shannon Sartin, Executive Director, United States Digital Service
The threat that electronic health records and machine learning pose to physicians’ clinical judgment — and their well-being.
By ABRAHAM VERGHESE
Illustration by ERIK CARTER
There are times when the diagnosis announces itself as the patient walks in, because the body is, among other things, a text. I’m thinking of the icy hand, coarse dry skin, hoarse voice, puffy face, sluggish demeanor and hourglass swelling in the neck — signs of a thyroid that’s running out of gas. This afternoon the person before me in my office isn’t a patient but a young physician; still, the clinical gaze doesn’t turn off, and I diagnose existential despair.
Let’s not call this intuition — an unfashionable term in our algorithmic world, although there is more to intuition than you think (or less than you think), because it is a subconscious application of a heuristic that can be surprisingly accurate. This physician, whose gender I withhold in the interest of anonymity and because the disease is gender-neutral, is burned out in what should be the honeymoon of a career. Over the years, I have come to recognize discrete passages in a medical life, not unlike in Shakespeare’s “Seven Ages of Man” — we have our med-school equivalent of “the whining schoolboy with his satchel and shining morning face” and the associate professor “jealous in honor, sudden and quick in quarrel.” But what I see in my colleague is disillusionment, and it has come too early, and I am seeing too much of it.
Does this physician recall sitting before me as an idealistic first-year medical student, keen to take the world in for repairs? It was during those preclinical years that the class learned to use the stethoscope, the ophthalmoscope and the tendon hammer, to percuss the body, sounding out its hollows, the territorial boundaries of lung and liver. After the preclinical come the two clinical years, though I think of those phases these days as precynical and cynical. When students arrive on the wards full time, white coats packed with the aforementioned instruments, measuring tape, tuning fork, flashlight and Snellen eye chart, they are shocked to find that the focus on the ward doesn’t revolve around the patients but around the computers lining the bunkers where students, residents and attending physicians spend the majority of their time, backs to one another. All dialogue among them and other hospital staff members — every order, every lab request and result — must pass through this electronic portal, even if the person whose inbox you are about to overload is seated next to you.
In America today, the patient in the hospital bed is just the icon, a place holder for the real patient who is not in the bed but in the computer. That virtual entity gets all our attention. Old-fashioned “bedside” rounds conducted by the attending physician too often take place nowhere near the bed but have become “card flip” rounds (a holdover from the days when we jotted down patient details on an index card) conducted in the bunker, seated, discussing the patient’s fever, the low sodium, the abnormal liver-function tests, the low ejection fraction, the one of three blood cultures with coagulase negative staph that is most likely a contaminant, the CT scan reporting an adrenal “incidentaloma” that now begets an endocrinology consult and measurements of serum cortisol.
The living, breathing source of the data and images we juggle, meanwhile, is in the bed and left wondering: Where is everyone? What are they doing? Hello! It’s my body, you know!
My young colleague slumping in the chair in my office survived the student years, then three years of internship and residency and is now a full-time practitioner and teacher. The despair I hear comes from being the highest-paid clerical worker in the hospital: For every one hour we spend cumulatively with patients, studies have shown, we spend nearly two hours on our primitive Electronic Health Records, or “E.H.R.s,” and another hour or two during sacred personal time. But we are to blame. We let this happen to our trainees, to ourselves.
How we salivated at the idea of searchable records, of being able to graph fever trends, or white blood counts, or share records at a keystroke with another institution — “interoperability”! — and trash the fax machine. If every hospital were connected, we would have a monster database, Big Data that’s truly big and that would allow us to spot trends in disease so much earlier and determine best practice and predict complications. But we didn’t quite get that when, as part of the American Recovery and Reinvestment Act of 2009, $35 billion was eventually steered toward making medicine paperless.
My A.T.M. card is amazing: I can get cash and account details all over America and beyond. Yet I can’t reliably get a patient record from across town, let alone from a hospital in the same state, even if both places use the same brand of E.H.R., for reasons that are only partly explained by software that has been customized for each site. This is not like sending around a standard Word file. And so, too often the record comes by fax.
What the E.H.R. has done is help reduce medication errors; it is a wonderful gathering place for laboratory and imaging information; the notes are always legible. But the leading E.H.R.s were never built with any understanding of the rituals of care or the user experience of physicians or nurses. A clinician will make roughly 4,000 keyboard clicks during a busy 10-hour emergency-room shift. In the process, our daily progress notes have become bloated cut-and-paste monsters that are inaccurate and hard to wade through. A half-page, handwritten progress note of the paper era might in a few lines tell you what a physician really thought. (A neurosurgeon I once worked with in Tennessee would fill half the page with the words “DOING WELL” in turquoise ink, followed by his signature. If he deviated from that, I knew he was very worried and knew to call him.) But now, with a few keystrokes, you can populate your note with all the listed diagnoses, all the medications, all the labs, all the radiology reports, pages and pages of these, as well as enough “smart phrases” — “.EXT2” might spit out “Extremities-2+ pedal edema, normal pulses” — to allow you to swear you personally examined the patient from head to foot and personally took all the elements of the history, personally did a physical exam separate from the admitting physician that would put Sir William Osler to shame, all of which make it possible to bill at the highest level for that encounter (“upcoding”).
As a result, so much of the E.H.R., but particularly the physical exam it encodes, is a marvel of fiction, because we humans don’t want to leave a check box empty or leave gaps in a template. Over the years, I have met one-legged patients with “pulses intact in both feet” and others with “heart sounds normal, no murmur or gallop” whose mechanical heart valve’s clicks and murmurs are so loud that the patient in the next bed demands earplugs. For a study, my colleagues and I at Stanford solicited anecdotes from physicians nationwide about patients for whom an oversight in the exam (a “miss”) had resulted in real consequences, like diagnostic delay, radiation exposure, therapeutic or surgical misadventure, even death. They were the sorts of things that would leave no trace in the E.H.R. because the recorded exam always seems complete — and yet the omission would be glaring and memorable to other physicians involved in the subsequent care. We got more than 200 such anecdotes. The reason for these errors? Most of them resulted from exams that simply weren’t done as claimed. “Food poisoning” was diagnosed because the strangulated hernia in the groin was overlooked, or patients were sent to the catheterization lab for chest pain because no one saw the shingles rash on the left chest.
I worry that such mistakes come because we’ve gotten trapped in the bunker of machine medicine. It is a preventable kind of failure. Our $3.4 trillion health care system is responsible for more than a quarter of a million deaths per year because of medical error, the rough equivalent of, say, a jumbo jet’s crashing every day. Much of that is a result of poorly coordinated care, poor communication, patients falling through the cracks, knowledge not being transferred and so on, but some part of it is surely from failing to listen to the story and diminishing skill in reading the body as a text.
Five years ago, I experienced a sudden asthma attack while visiting another city. The E.R. physician was efficient, the exam adequate. The nurse came in regularly, but not to visit me so much as the screen against the wall. Her back was to me as she asked, “On a scale of 1 to 10, with 10 being great difficulty breathing …?” I saw her back three more times before I left. My visit recorded in the E.H.R. would have exceeded all the “Quality Indicators,” measures that affect reimbursement and hospital ratings. As for my experience, it was O.K., not great. I received care but did not feel cared for.
A senior colleague who was hospitalized for a longer spell, a person whose scientific and technological discoveries have helped transform care in virtually every hospital, told me with some chagrin that during his stay, the only people who got to know him as an individual human being were the nursing assistant and the housekeeper. The nurses, through no fault of theirs, were tethered to the COWs — Computers on Wheels — into which they entered data about him. But what, he asked, if it had been the other way around? What if the computer gave the nurse the big picture of who he was both medically and as a person? What if it reminded the nurse, because it was post-op Day 6 and his white count was rising, to check the surgical wound and the puncture sites where intravenous fluid was administered? What if the top screen reminded the nurse about his family members, his life outside this room and his unique concerns? What if it gave the nurse specific insights on how he was handling his suffering and suggested a strategy to support him?
For all the effort that goes into data gathering and entering, too often the data is ignored. One of my brothers, a professor at M.I.T. whose current interest in biomedical engineering is “bedside informatics,” marvels at the fact that in an I.C.U., a blizzard of monitors from disparate manufacturers display EKG, heart rate, respiratory rate, oxygen saturation, blood pressure, temperature and more, and yet none of this is pulled together, summarized and synthesized anywhere for the clinical staff to use. The physician or the nurse walks in and looks at tens of seconds of information, a snapshot at that moment. What these monitors do exceedingly well is sound alarms, an average of one alarm every eight minutes, or more than 180 per patient per day. What is our most common response to an alarm? We look for the button to silence the nuisance because, unlike those in a Boeing cockpit, say, our alarms are rarely diagnosing genuine danger.
The biggest price for “digital medicine” is being paid by physicians like the sad case seated before me, who is already considering jumping to venture capital or a start-up, not because that is where the heart is but because it’s a place to bail out to. By some estimates, more than 50 percent of physicians in the United States have at least one symptom of burnout, defined as a syndrome of emotional exhaustion, cynicism and decreased efficacy at work. It is on the increase, up by 9 percent from 2011 to 2014 in one national study. This is clearly not an individual problem but a systemic one, a 4,000-key-clicks-a-day problem. The E.H.R. is only part of the issue: Other factors include rapid patient turnover, decreased autonomy, merging hospital systems, an aging population, the increasing medical complexity of patients. Even if the E.H.R. is not the sole cause of what ails us, believe me, it has become the symbol of burnout.
Burnout is costly. My colleague at Stanford, Tait Shanafelt, a hematologist and oncologist who specializes in the well-being of physicians, is Stanford Medicine’s first chief wellness officer. His studies suggest that burnout is one of the largest predictors of physician attrition from the work force. The total cost of recruiting a physician can be nearly $90,000, but the lost revenue per physician who leaves is between $500,000 and $1 million, even more in high-paying specialties. Turnover begets more turnover because those left behind feel more stress. Physicians who are burned out make medical errors, and burnout can be infectious, spreading to other members of the team.
I hold out hope that artificial intelligence and machine-learning algorithms will transform our experience, particularly if natural-language processing and video technology allow us to capture what is actually said and done in the exam room. The physician focuses on the patient and family, and if there is a screen in the room, it is to summarize or to share images with the patient; by the end of the visit, the progress notes and billing are done. But A.I. applications will help us only if we vet all of them for their unintended consequences. We need sufficient regulatory scrutiny so that we don’t have debacles like that of Theranos, a company that claimed it could perform comprehensive lab tests from just a few drops of blood. Technology that is not subject to such scrutiny doesn’t deserve our trust, nor should we ever allow it to be deeply integrated into our work.
Starting with good data is critical for medical applications of A.I. and machine learning. The messy nature of medical data, of the E.H.R., makes the task difficult. Garbage in begets garbage out — sanitized, pretty, color-coded garbage, but garbage nonetheless. Advances in pattern recognition applied to X-rays and CT scans and retinal scans will be very helpful aids to the clinician. But as with any lab test, what A.I. will provide is at best a recommendation that a physician using clinical judgment must decide how to apply.
I remind my young colleague that no one else can fulfill that final role: If data scientists complain about “messy data,” then there is nothing messier than an emergency meeting with a seriously ill patient when the information on what preceded this moment emerges in dribbles, and all the while we need to make a provisional diagnosis, start therapy, then make quick decisions as the disease evolves and recalibrate with more story from the patient, or from the firefighters who found the bottles of medication by the patient’s side, or from the friend or family member who suddenly shows up and mentions the sneezing parakeet, or from the lab numbers that come back. True clinical judgment is more than addressing the avalanche of blood work, imaging and lab tests; it is about using human skills to understand where the patient is in the trajectory of a life and the disease, what the nature of the patient’s family and social circumstances is and how much they want done.
The seriously ill patient has entered another kingdom, an alternate universe, a place and a process that is frightening, infantilizing; that patient’s greatest need is both scientific state-of-the-art knowledge and genuine caring from another human being. Caring is expressed in listening, in the time-honored ritual of the skilled bedside exam — reading the body — in touching and looking at where it hurts and ultimately in localizing the disease for patients not on a screen, not on an image, not on a biopsy report, but on their bodies.
I recall an occasion when we were on rounds seeing a patient, and suddenly a vile odor filled the air. Time stood still until an earthy and wonderful nurse said: “Do you all smell poop? Let’s fix that!” and stepped forward, and we did, too, rolled up sleeves and learned from a master, with minimal fuss and minimal embarrassment to the patient, how to take care of something that a keyboard won’t do for you, what no algorithm will sanitize. Medicine is messy and complicated, because humans are messy and complicated. That is why I love it. And what all of us in the trenches — housekeepers, nurses, nursing assistants, therapists, doctors — have in common is that humanity. We came to this for many reasons, but it sobers me how many people came because they had a sense of calling, because they genuinely care.
So let’s not be shy about what we do and ought to do and must be allowed to do, about what our patients really need. As he was nearing death, Avedis Donabedian, a guru of health care metrics, was asked by an interviewer about the commercialization of health care. “The secret of quality,” he replied, “is love.”
The trouble began when I needed to open the electronic health record (EHR) system for the tenth time that day. EHRs have significantly changed the way we practice medicine. They have completely eliminated the need to store and transport paper charts, reduced prescription errors caused by physicians’ illegible handwriting and provided an excellent platform to maximize billing for services rendered. However, in terms of creating a smooth workflow for physicians and facilitating meaningful face-to-face encounters with our patients, all EHR systems have completely failed.
Sara, the nurse for Ms. Tucker, called me saying, “Doc, Ms. Tucker is back from the cardiac catheterization lab. She is complaining of a mild headache.”
“It must be from the nitroglycerine I gave her in the lab; please give the ordered Tylenol, it will resolve soon,” I said.
“I don’t have any orders, Doc. All the orders for this patient are on hold,” she replied.
“On hold? What do you mean?” I said surprised.
“Yes. All orders go on automatic hold when a patient goes for a procedure and comes back unless you reorder. It’s the new EHR upgrade,” she said.
“OK, I will do it as soon as possible,” I said, taking in a deep breath and involuntarily rubbing my forehead.
To reorder Ms. Tucker’s medications, I needed to log into the EHR system again. I had to do it remotely, as I was at another hospital in town that used a completely different EHR system, though it was only ten minutes away. I entered my username and password carefully and slowly three times, in response to three different prompts, in addition to entering the secret code that was texted to my cell phone. After what felt like an eternity, I logged into this most sacred, highly classified, triple-password-protected space of the EHR. I wondered whether FBI agents have to enter their passwords thrice to log on to their everyday workspace. As I managed to log onto the EHR system without once seeing the prompt “Incorrect username and/or password” during my three password entries, I regarded it as my greatest success of the day. The fact that Ms. Tucker’s procedure was uncomplicated and smooth faded in comparison to this success.
I opened Ms. Tucker’s chart. There were twenty-one tabs running vertically along the left side of the screen and eighteen tabs horizontally across the top. I quickly glanced through the cluttered twenty-one vertical tabs and clicked on the one I was looking for, “transfer medication reconciliation,” in the 19th slot. A new grid showing sixteen held orders opened. I selected each of them separately and clicked on “continue.”
Select and continue.
Sixteen times two: thirty-two clicks.
I was fidgety. I was not sure I had even reviewed each order. I just clicked and clicked. Final tab. Review and sign. Clicked. End of my rendezvous with the EHR for today. I hoped. The age of the “clerical” physician continues, I mused.
A few weeks ago, when there was an EHR system breakdown, patients were still taken care of. Nurses took verbal orders. It was unquestionably chaotic, but for me, it was bliss. I was on call for acute heart attacks that day. Ms. Cooper drove her husband to the hospital with sudden-onset chest pain. Mr. Cooper had a cardiac arrest in the parking lot of our hospital. He was revived after a brief period of cardiopulmonary resuscitation and taken immediately to the cardiac catheterization lab. I opened his occluded left circumflex artery successfully, and he was transported to the intensive care unit in stable condition. No one pestered me about entering orders before doing the procedure.
After the procedure, I wrote down the report and all the orders on a single plain sheet of paper in fifteen minutes. Zero clicks. I then went to meet Ms. Cooper in the family lounge. Usually, while I am entering orders, I am interrupted by messages from the nursing staff that the family is waiting for me, or, while I am meeting with the family, that there is some problem with order entry. That didn’t happen that day. Ms. Cooper had my undivided attention for a full twenty minutes. Her eyes filled with tears of gratitude at the end of our conversation. She hugged me and said, “Thank you for saving my husband’s life.” I didn’t have to rush off to correct orders or type in my notes. The bliss, unfortunately, lasted only twenty-four hours; once the software was back up, the usual “digital” drudgery resumed.
I chose to be a physician to care for people. I value my direct face-to-face interactions with my patients. I want to hear their stories told by them in their own words. I cherish their smiles, their tears, their gratitude, their handshakes and hugs. I did not sign up for screen time with a computer, speed typing or clicking. I did not sign up to type passwords multiple times.
The leading, if not the only, reported cause of physician burnout is workflow issues related to the use of EHR systems. Instead of providing meaningful clinical documentation for easy communication among health care providers, the EHR has evolved into a massive, cumbersome giant: so many interfaces, cluttered screens, redundancies and duplications requiring innumerable clicks, selections, templates and passwords, with no added clinical value, only frustration and burnout for physicians. The software has to be redesigned so that clinicians can efficiently gather data, enter orders and communicate relevant clinical information without the bloated notes and cluttered screens that cater to billing and coding. Clinicians, together with hospital administrators and information technology professionals, should be actively involved in the development, testing and optimization of new electronic features to streamline the workflow. For me, there will be no joy in the practice of medicine until my frustrating rendezvous with EHRs ends.
The names used in this essay have been modified to maintain anonymity. It is a fictionalized account of true events.
Absent more rigorous federal regs, Pew Charitable Trusts, MedStar Health and AMA offer model test cases to help vendors and providers detect potentially dangerous usability risks.
Pew Charitable Trusts says not enough attention is being paid to electronic health record usability from a safety point of view. And given that federal certification requirements don’t address two key safety factors, it’s offering EHR developers and provider organizations a toolset to help boost patient protections.

For all the benefits that EHRs bring, variations in their design, customization and use can “lead to inefficiencies or workflow challenges and can fail to prevent – or even contribute to – patient harm,” according to a new study from Pew.
Yes, ONC has exacting certification requirements for health IT, including provisions that say vendors have to conduct usability testing and bring clinical end users into their design and development process.
But the testing rules “fall short in two ways when it comes to assessing whether the use of products contributes to patient harm,” according to Pew’s Project Director for Health Information Technology Ben Moscovitch.
First, federal testing criteria focus on vendors’ development and design but “do not address circumstances in which customized changes are made to an EHR as part of the implementation process or after the system goes live,” he explained. “The second key challenge is the absence of requirements and guidance on how to test clinician interaction with the EHR for safety issues.”
But use of such clinical test cases, with scenarios that mimic real-world clinical workflows and patient conditions, can be hugely valuable in detecting design or customization quirks that might have an adverse impact on safety.
As part of its new report, Pew, in partnership with MedStar Health’s National Center for Human Factors in Healthcare and the American Medical Association, has made available a group of model test cases and list of best practices to help providers make safety assessments of their own post-install EHRs, and spot any “usability-related risks to patients throughout the life cycle of these products.”
For the study, researchers tracked how emergency physicians at each participating site handled six specific scenarios, collecting keystroke, mouse-click and video data.
“There was wide variability in task completion time, clicks and error rates,” according to the report. “For certain tasks, there was an average nine-fold difference in time and an eight-fold difference in clicks.”
To better help hospitals and health systems ensure their EHRs are deployed and configured for optimal usability and safety, the new Pew report outlines scenarios and advice to help providers get a handle on how their clinical staff are putting the IT to work.
“The sample scenarios in the report provide thorough tests covering seven types of usability issues that clinicians may face, including unclear settings, data entry obstacles, and confusing system alerts,” said Moscovitch. “Each scenario was designed so that both health IT vendors and purchasers of EHR products can conduct realistic, safety-focused usability tests. The scenarios can be used directly by vendors or providers and can help build additional tests.”
He emphasized that successful scenarios should “include expected users of the system with varying levels of computer expertise; represent realistic clinical care processes; be shaped around a clinically oriented goal; contain clear, quantitative measures of success and failure; and include known risk areas and challenging processes, such as ordering that a patient’s drug dose be tapered.”
The hope, Moscovitch said, is that more EHR vendors adopt such test cases and incorporate them not just in design and development, but also in subsequent iterations as the technology matures.
He added that the test scenario criteria included in the Pew report “can serve as a potential standard for the EHR accrediting bodies and a resource for developers” – and ideally would eventually be incorporated into future updates to ONC’s own certification requirements.
Health systems, meanwhile, “can use the test criteria and sample cases to evaluate the usability and safety of their product during the implementation phase, after changes are made, and to inform customization decisions,” said Moscovitch. “Organizations can immediately leverage the example test cases to quickly evaluate system safety to identify challenges and prevent harm.”