Biostasis aims to prevent death following traumatic injury by slowing biochemical reactions inside cells
outreach@darpa.mil
3/1/2018
When a Service member suffers a traumatic injury or acute infection, the time from the event to first medical treatment is often the single most significant factor in whether a life is saved. First responders must act as quickly as possible, first to ensure a patient’s sheer survival and then to prevent permanent disability. The Department of Defense refers to this critical, initial window of time as the “golden hour,” but in many cases the opportunity to intervene successfully may last far less than sixty minutes, which is why the military invests so heavily in moving casualties as rapidly as possible from the battlefield to suitable medical facilities. However, due to the realities of combat, there are often hard limits to the availability of rapid medical transport and care.
DARPA created the Biostasis program to develop new possibilities for extending the golden hour, not by improving logistics or battlefield care, but by going after time itself, at least how the body manages it. Biostasis will attempt to directly address the need for additional time in continuously operating biological systems faced with catastrophic, life-threatening events. The program will leverage molecular biology to develop new ways of controlling the speed at which living systems operate, and thus extend the window of time following a damaging event before a system collapses. Essentially, the concept aims to slow life to save life.
“At the molecular level, life is a set of continuous biochemical reactions, and a defining characteristic of these reactions is that they need a catalyst to occur at all,” said Tristan McClure-Begley, the Biostasis program manager. “Within a cell, these catalysts come in the form of proteins and large molecular machines that transform chemical and kinetic energy into biological processes. Our goal with Biostasis is to control those molecular machines and get them to all slow their roll at about the same rate so that we can slow down the entire system gracefully and avoid adverse consequences when the intervention is reversed or wears off.”
The program will pursue various approaches to slowing down biochemical processes in living cells. Ideally, these approaches will scale from simple biological treatments such as antibodies to more holistic treatments applicable to whole cells and tissues, eventually scaling all the way up to the level of a complete organism. Successful approaches will meet the conditions that the system be slowed across all measurable biological functions and that it do so with minimal damage to cellular processes when the system reverts and resumes normal speed.
“Our treatments need to hit every cellular process at close to the same rate, and with the same potency and efficacy,” McClure-Begley said. “We can’t focus treatments to interrupt just a subset of known critical processes.”
For example, cellular respiration is critical for many cellular processes, but those other processes do not shut down in tandem if respiration is blocked. The maladaptive responses from such an intervention would ultimately kill the cell.
Instead, DARPA is looking for biochemical approaches that control cellular energetics at the protein level. Proteins are the workhorses of cellular functions, and nature offers several examples of organisms that use proteins to help them survive extreme environmental conditions. Creatures such as tardigrades and wood frogs exhibit a capability known as “cryptobiosis,” a state in which all metabolic processes appear to have stopped, yet life persists. Tardigrades—microscopic invertebrates colloquially known as “water bears”—can survive freezing, near-total dehydration, and extreme radiation. Wood frogs, meanwhile, can survive being frozen completely solid for days on end. And while the specific molecular mechanisms involved in these animals are very different, they share a common biochemical concept: they selectively stabilize their intracellular machinery.
“Nature is a source of inspiration,” McClure-Begley said. “If we can figure out the best ways to bolster other biological systems and make them less likely to enter a runaway downward spiral after being damaged, then we will have made a significant addition to the biology toolbox.”
Biostasis is initially aimed at generating proof-of-concept, benchtop technologies and testing their application in simple living systems for experimental validation. To support eventual transition to patients, DARPA will work with federal health and regulatory agencies as the program advances to develop a pathway for potential, future human medical use. By the end of the five-year, fundamental research program DARPA hopes to have multiple tools for reducing the risk of permanent damage or death following acute injury or infection.
Similar Biostasis technologies could also extend the shelf life of blood products, biological reagents, and drugs by slowing reaction rates. Early program research is aimed at identifying approaches that can be tested in simple biological systems such as enzyme complexes or cell lines. If this aspect of the program is successful, these technologies would help to reduce the Defense Department’s logistical burden of transporting biological products into the field.
DARPA will hold a Proposers Day webinar on March 20, 2018, at 12:30 PM EDT to provide more information about Biostasis and answer questions from potential proposers. For details of the event, including registration requirements, visit: https://go.usa.gov/xnzqE.
A full program description will be made available in a forthcoming Broad Agency Announcement.
Image Caption: DARPA’s Biostasis program aims to prevent death following traumatic injury by slowing biochemical reactions inside cells, thus extending the “golden hour” for medical intervention. The desired interventions would be effective for only limited durations before the process reverts and biological processes resume at normal speeds.
The surgery went fine. Her doctors left for the day. Four hours later, Paulina Tam started gasping for air.
Internal bleeding was cutting off her windpipe, a well-known complication of the spine surgery she had undergone.
But a Medicare inspection report describing the event says that nobody who remained on duty that evening at the Northern California surgery center knew what to do.
How a push to cut costs and boost profits at surgery centers led to a trail of death.
A team of journalists based in California, Indiana, New Jersey, Florida, Washington, D.C., and Virginia worked to tell this story in a partnership between Kaiser Health News and USA TODAY Network.
Christina Jewett is a senior correspondent for Kaiser Health News. Mark Alesia is an investigative reporter for the Indianapolis Star.
Reporters pored through thousands of pages of court records and crisscrossed the U.S. to talk to injured patients or families of the deceased.
For more than a year, using federal and state open-records laws, reporters gathered more than 12,000 inspection records and 1,500 complaint reports, as well as autopsies and EMS documents and medical records, together forming the foundation for this report.
In desperation, a nurse did something that would not happen in a hospital.
She dialed 911.
By the time an ambulance delivered Tam to the emergency room, the 58-year-old mother of three was lifeless, according to the report.
If Tam had been operated on at a hospital, a few simple steps could have saved her life.
But like hundreds of thousands of other patients each year, Tam went to one of the nation’s 5,600-plus surgery centers.
Such centers started nearly 50 years ago as low-cost alternatives for minor surgeries. They now outnumber hospitals as federal regulators have signed off on an ever-widening array of outpatient procedures in an effort to cut federal health care costs.
Thousands of times each year, these centers call 911 as patients experience complications ranging from minor to fatal. Yet no one knows how many people die as a result, because no national authority tracks the tragic outcomes. An investigation by Kaiser Health News and the USA TODAY Network has discovered that more than 260 patients have died since 2013 after in-and-out procedures at surgery centers across the country. Dozens — some as young as 2 — have perished after routine operations, such as colonoscopies and tonsillectomies.
Reporters examined autopsy records, legal filings and more than 12,000 state and Medicare inspection records, and interviewed dozens of doctors, health policy experts and patients throughout the industry, in the most extensive examination of these records to date.
The investigation revealed:
Surgery centers have steadily expanded their business by taking on increasingly risky surgeries. At least 14 patients have died after complex spinal surgeries like those that federal regulators at Medicare recently approved for surgery centers. Even as the risks of doing such surgeries off a hospital campus can be great, so is the reward. Doctors who own a share of the center can earn their own fee and a cut of the facility’s fee, a meaningful sum for operations that can cost $100,000 or more.
To protect patients, Medicare requires surgery centers to line up a local hospital to take their patients when emergencies arise. In rural areas, however, centers can be 15 or more miles from that hospital. Even when the hospital is close, 20 to 30 minutes can pass between a 911 call and arrival at an ER.
Some surgery centers are accused of overlooking high-risk health problems and treating patients who experts say should be operated on only in hospitals, if at all. At least 25 people with underlying medical conditions have left surgery centers and died within minutes or days. They include an Ohio woman with out-of-control blood pressure, a 49-year-old West Virginia man awaiting a heart transplant and several children with sleep apnea.
Some surgery centers risk patient lives by skimping on training or lifesaving equipment. Others have sent patients home before they were fully recovered. On their drives home, shocked family members in Arkansas, Oklahoma and Georgia discovered their loved ones were not asleep but on the verge of death. Surgery centers have been criticized in cases where staff didn’t have the tools to open a difficult airway or skills to save a patient from bleeding to death.
Last October, I wrote that a large pot of money, dedicated to protecting the world from infectious diseases, was about to run dry.
In December 2014, Congress appropriated $5.4 billion to deal with the historic Ebola epidemic that was raging in West Africa. Most of that money went to quashing the epidemic directly, but around $1 billion was allocated to help developing countries improve their ability to detect and respond to infectious diseases. The logic is sound: It is far more efficient to invest money in helping countries contain diseases at the source than to risk small outbreaks flaring up into large international disasters.
But the $1 billion pot, which was mostly divided between the Centers for Disease Control and Prevention and USAID, runs out in 2019—a fiscal cliff with disaster at its foot. As I wrote:
That money has been used well, to train epidemiologists, buy equipment, upgrade labs, and stockpile drugs. If it disappears, progress will halt, and potentially reverse. The CDC, for example, would have to pull back 80 percent of its staff in 35 countries, breaking ties with local ministries of health.
This is now coming to pass. Two weeks ago, Betsy McKay at The Wall Street Journal reported that the CDC, with no firm promise of future funding, is indeed preparing to downsize its work in 39 countries. Those include the Democratic Republic of Congo, which recently experienced its eighth Ebola outbreak, and China, which is currently undergoing its worst outbreak of H7N9 bird flu. Lena Sun of The Washington Post confirmed this report on Thursday, writing that “notice is being given now to CDC country directors” as the first part of a transition.
The CDC is not the only affected agency. USAID also received $300 million from the same dwindling pot of money, which it used to expand its work in the Middle East and Asia. Those programs may also have to shut down in 2019.
These changes would make the world—and the United States—more vulnerable to a pandemic. “We’ll leave the field open to microbes,” says Tom Frieden, a former CDC director who now heads an initiative called Resolve to Save Lives. “The surveillance systems will die, so we won’t know if something happens. The lab networks won’t be built, so if something happens, we won’t know what it is. We can’t be safe if the world isn’t safe. You can’t pull up the drawbridge and expect viruses not to travel.”
The $1 billion of Ebola money was used under the umbrella of the Global Health Security Agenda—a five-year international partnership to improve the health security of developing nations. Barack Obama convened the GHSA in 2014 with strong bipartisan support, and it has already made a significant difference.
Thanks to the GHSA, Uganda now has a secure lab for studying dangerous germs. Tanzania has a digital communications network so people can phone in information on potential outbreaks from remote locations. Liberia has more than 115 frontline disease detectives trained by the CDC. Cameroon shortened its response time to recent outbreaks of cholera and bird flu from eight weeks to just 24 hours. The DRC controlled an outbreak of yellow fever and built an emergency operations center (EOC)—a kind of war room for responding to outbreaks. But there is still much to do: The DRC, for example, still needs to train staff to run its EOC.
Last October, at a meeting in Kampala, Uganda, Tim Ziemer, the White House senior director for global health security, confirmed that the United States wants to ensure that GHSA is extended to 2024. “Distance alone no longer provides protection from disease outbreaks,” he noted. “We recognize that the cost of failing to control outbreaks and losing lives is far greater than the cost of prevention.” (Ziemer’s leadership is a promising sign for the public-health community, given his redoubtable credentials: He led George W. Bush’s President’s Malaria Initiative, and has been described as “one of the most quietly effective leaders in public health.”)
But that verbal commitment hasn’t yet been followed by a financial one. It is entirely possible that the next budget, which is due to be issued on February 12, will include money for the GHSA. But the uncertainty has already forced the CDC to begin preparing for potential pullbacks. Damage is already being done. “The reality is that people have to prepare and live their lives,” says Linda Venczel, from PATH, a nonprofit working in global health. “People are packing their bags and looking for other jobs. Things will unravel pretty quickly.”
If that happens, much of the good work that the CDC has accomplished in the last five years, and much of the $1 billion investment, will have been for naught. “To respond to an outbreak, you need to have a presence on the ground to execute emergency operations, and that has to be based on existing trust,” says May Chu, who was part of the White House Office of Science and Technology Policy during the recent Ebola outbreak. If people move on, the relationships that have been built over the last four years will erode and will have to be rebuilt come the next crisis.
“If we pull away from the GHSA in this way, other countries that provide funding and technical assistance will also likely do the same,” noted Tom Inglesby, from the Johns Hopkins Bloomberg School of Public Health, at a Senate committee hearing on America’s preparedness for 21st-century public-health threats.
All of this is but a symptom of a greater malady: our inability to learn from the past. Time and again, diseases flare up, governments throw money at the problem, the crisis recedes, and funding dries up. It happened after anthrax attacks in 2001 alerted people to the risk of bioterrorism. It happened in 2003, after SARS showed people how quickly a new disease could spread around the globe. The world is caught in a cycle of panic and neglect.
“Every time there is an epidemic, the question that always follows is: Why were we so unprepared?” says Nahid Bhadelia, from Boston University. Ironically, the lack of funding is “undercutting the very same programs created in response to the lessons learned after the Ebola epidemic—programs that catch and halt infectious diseases threats early so we can keep U.S. communities and borders safe. We can’t get rid of programs that literally work toward global-health security and expect that the results of the next epidemic will be any different than the last.”
Healthcare undoubtedly lags behind other industries when it comes to interoperability. Sadly, organizations remain stymied by systems that fail to communicate, and vendors’ sluggish progress has only deepened distrust in the market.
There are concerted efforts underway: government officials are emphasizing interoperability and information blocking, and industry standards such as HL7’s FHIR have come to the forefront. However, little has been done to motivate IT vendors to achieve better integration and data sharing. The industry will have to wait until April to see whether the proposed health IT provisions within the 21st Century Cures Act will be enough to drive the final push toward true interoperability.
Having just returned from Cleveland following our annual participation in the IHE North American Connectathon—where vendors cooperatively demonstrate and test the latest industry standards for interoperability—I am left asking: “Why can’t this open collaboration successfully proliferate outside the confines of the event?”
Coordinated, accountable, patient-centered care relies on seamlessly orchestrated access to data. Despite that, providers continue to be challenged by fragmented technology and electronic medical record systems that fail to communicate or to share and transmit information effectively. Disparate information trapped in silos across multiple systems and settings hinders data quality, resulting in suboptimal outcomes and avoidable costs of care. For organizations held prisoner by their legacy systems, data inefficiencies worsen as IT environments grow more complex and as the volume and velocity of health information increase.
As healthcare IT innovators, companies have a responsibility to develop solutions that allow bidirectional connectivity between systems to enable a coherent experience for clinicians and patients. This free flow of data across various providers and environments is the catalyst for comprehensive, coordinated care, enhanced clinical decision-making, healthier communities, and a better patient experience. As the Connectathon continues to encourage this open and interoperable exchange, a far greater level of industry involvement and incentive will be required to spur change.
While the intent of the IHE Connectathons is unquestionably valid, the massive size and short duration of the events allow only a superficial level of compliance testing. This unfortunately leads to a reality gap between what is tested and what systems are capable of doing in the field. To make more tangible progress, the event should be less about passing tests on paper and more about ensuring that products truly leverage the existing standards and IHE profiles.
To ensure value in the market, applications that pass testing must go beyond simply demonstrating that a sequence of messages was sent and prove that the interoperability concepts are truly integrated into the products being certified.
Despite demonstrating open and interoperable exchange at such events, vendors often abandon interoperability standards when bringing integration into practice. For healthcare providers, integration projects are constant—and costly. Many organizations struggle to remedy interface and integration deficiencies, in some cases resorting to ineffective screen-scraping technology to manually transfer data to their local systems. Many, if not most, systems act only as a feed of identity data into the enterprise master patient index (EMPI), specifically demographics and identifiers, because those systems were already designed to produce this outbound communication for purposes other than the management of demographic data. Querying the EMPI for patient identity, by contrast, requires a fundamental paradigm shift for many vendors and a modest investment to enhance their software.
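The difference between pushing a demographic feed and actually querying for patient identity can be made concrete with FHIR’s RESTful search API, the HL7 standard mentioned above. The sketch below is a minimal illustration, not any vendor’s implementation: it builds a standard FHIR R4 Patient search URL and extracts demographics and identifiers from a sample searchset Bundle. The server base URL and all patient data are hypothetical; only the documented shape of the FHIR Patient resource is assumed.

```python
import json
from urllib.parse import urlencode

# Hypothetical FHIR server base URL (illustrative only, not a real endpoint).
FHIR_BASE = "https://example.org/fhir"

def build_patient_query(family: str, birthdate: str) -> str:
    """Build a FHIR R4 Patient search URL using standard search parameters."""
    params = urlencode({"family": family, "birthdate": birthdate})
    return f"{FHIR_BASE}/Patient?{params}"

def extract_demographics(bundle_json: str) -> list[dict]:
    """Pull name, birth date, and identifiers out of a FHIR searchset Bundle."""
    bundle = json.loads(bundle_json)
    results = []
    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        name = patient["name"][0]  # first listed name; real systems check `use`
        results.append({
            "family": name["family"],
            "given": " ".join(name.get("given", [])),
            "birthDate": patient.get("birthDate"),
            "identifiers": [i["value"] for i in patient.get("identifier", [])],
        })
    return results

# Sample Bundle of the kind a FHIR server might return for the query above
# (entirely made-up patient data).
sample_bundle = json.dumps({
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [{
        "resource": {
            "resourceType": "Patient",
            "identifier": [{"system": "urn:example:mrn", "value": "12345"}],
            "name": [{"family": "Smith", "given": ["Jordan"]}],
            "birthDate": "1970-01-01",
        }
    }],
})

print(build_patient_query("Smith", "1970-01-01"))
print(extract_demographics(sample_bundle))
```

The point of the sketch is the asymmetry: emitting an outbound feed is a one-way dump, while a query-based workflow requires the consuming system to construct searches, handle Bundles, and reconcile identifiers, which is exactly the shift many vendors have yet to make.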
Access to comprehensive patient data is essential to driving informed clinical decision-making and to delivering quality, efficient care. Despite EHR improvements, few organizations have been able to overcome the barriers of information silos. And achieving a truly connected healthcare environment, even for the most technically advanced health systems, remains a goal rather than a reality.
With years of health IT standards in place that offer a centralized and uniform way of managing and exchanging data, the meager pace of vendor adoption is disconcerting. Even as vendors master additional standards, advances in system integration and interoperability remain stagnant.
While other verticals, such as banking and manufacturing, leverage standards-based exchange at a much faster pace, healthcare struggles to accelerate the adoption of standards. The challenge to the industry is to engage in an open and frank dialogue about how vendors can be incentivized, so that patients and physicians can benefit from real-time data exchange during every encounter along the healthcare journey.
For organizations to be successful in healthcare’s rapidly evolving transformation, the status quo of interoperability is no longer acceptable. An open IT infrastructure that provides a longitudinal view of patient data across the continuum is a critical component for value-based care and population health initiatives. Let’s support organizations and the communities they serve by removing the integration barriers that plague our industry’s IT systems, and facilitate access to comprehensive information at the point of care to drive better outcomes in quality, safety and delivery.
The next two months will be a pivotal time for the Defense Department’s commercial electronic health records platform, MHS Genesis.
Developed by Cerner under a $4.3 billion contract won by defense contractor Leidos, the platform is undergoing significant optimization based on feedback—including several thousand open “trouble tickets”—across four pilot locations.
While 11,000 tickets have been closed, around 5,000 of them are “tier 3,” which means they’ve been deferred to senior decision-makers and rise to the level of possibly requiring enterprise-level changes, said Defense Healthcare Management System spokesperson David Norley.
The department, along with teams from Cerner and Leidos, will address those issues and implement additional tweaks, lessons learned and best practices over the coming eight weeks, but officials from all three sides told Nextgov they are confident the implementation period won’t derail the platform’s schedule.
“We are on schedule,” said Jerry Hogge, senior vice president for Leidos Health.
Hogge said the two-month implementation period was planned since the contract’s inception, although it was initially up to 10 months in duration in the contract’s request for proposal.
“This is not a pause or a surprise, it’s a planned element of the program,” Hogge said. “We’re going to get it done in eight weeks’ time.”
Travis Dalton, Cerner’s vice president and federal general manager, told Nextgov the approach mirrors what Cerner has done in the past developing electronic health records platforms in the commercial sector.
“It’s very similar to what we’ve seen with large health systems with 50-plus hospitals,” Dalton said. “There is always a period of time where you optimize what you’ve done. We have a process we use to garner best practices and feedback from our clients. Most of the content and workflow we adopt are driven by our client base along with our best thinking.”
In total, Defense wants MHS Genesis deployed at more than 1,200 sites, including 56 hospitals and medical centers, by 2022. Norley said Defense officials will decide in spring whether to fully deploy the software. If they do, Hogge said, it will be a single commercial platform “with a common set of workflows,” whereas many Defense medical facilities are used to customized versions of software.
Hogge said one of the “guiding principles” in the contract’s competition was the need to “get away from customized one-off systems.”
Initially, that could upset some users because change, Hogge said, can be difficult.
“There is a learning curve, a temporary loss of efficiency and comfort that, over time, is more than offset by the benefits of the new system,” Hogge said. “Cerner has mountains of data that show productivity improvements over the long- and medium-term once the learning curve is overcome.”
The Coast Guard is still shelling out millions for an electronic health records system that went $46 million over budget and was canceled two years ago, a congressional watchdog found.
Lawmakers were so appalled by the mismanagement, they threatened to force the Coast Guard to adopt the system the Defense Department is rolling out rather than let the agency try again.
With an original price-tag of $14 million over five years, the Integrated Health Information System cost the Coast Guard roughly $60 million before it was canned in 2015, according to the Government Accountability Office. The agency also paid out an additional $6.6 million in contract obligations after terminating the ill-fated project, investigators found.
Lawmakers grilled Coast Guard officials Tuesday at a House Transportation Subcommittee hearing about the underlying causes of the “five-year epic failure” and the steps being taken to ensure the agency’s next modernization attempt ends with a better outcome.
“IHIS was kind of a watershed event—it shook our foundations,” said Rear Admiral Michael Haycock, the Coast Guard’s assistant commandant for acquisition and chief acquisition officer. He said the agency is exploring alternative options for managing medical records, including the MHS Genesis platform being rolled out at the Defense and Veterans Affairs departments, and should have an acquisition decision “probably by the end of February.”
Though they acknowledged the importance of following the formal acquisition process, lawmakers seemed perplexed as to why the agency would not choose a system deemed sufficient by the Pentagon and VA, both of which serve more patients than the Coast Guard.
“It’s a waste of money and time going to look at stuff when it exists now,” said Chairman Duncan Hunter, R-Calif. “You guys don’t get to go off on your own and use taxpayer dollars because it’s fun. I’m of the mind to make you get on DOD’s thing, no matter what you think. We ought to just tell you to do it.”
“This is not that complicated,” said Ranking Member John Garamendi, D-Calif., who joined Hunter in pressing the Coast Guard to follow in the footsteps of the two larger agencies. “Electronic health records are now a standard in virtually every health system in the nation—some of this stuff is off the shelf.”
Despite the funds poured into the effort, IHIS yielded no reusable software or equipment, and forced the Coast Guard to revert to a system relying primarily on paper medical records, according to GAO.
The watchdog attributed the failure of IHIS to agency leadership’s absence from the development process. Despite creating four groups to oversee the project, the Coast Guard couldn’t prove it followed its own system development guidelines and it was not clear whether officials documented management and oversight actions in the first place, investigators found.
“While the Coast Guard chartered these various governance bodies for IHIS oversight, the agency could not provide evidence that the boards had ever been active in overseeing the project prior to its cancellation,” the report said.
The Veterans Affairs Department blew almost $2 billion over three attempts to modernize the electronic health records system it uses to provide care to 9 million veterans.
A recent audit by the Government Accountability Office identified $1.1 billion in wasted spending on two VA projects from 2011 to 2016, the Integrated Electronic Health Record and Veterans Health Information Systems and Technology Architecture.
Nextgov identified another $600 million VA spent on a third program, the HealtheVet initiative, which began in 2001 but was deemed a “failed project,” according to the audit, and canceled in 2010. The HealtheVet spending was not included in GAO’s audit because VA said it no longer possessed spending records. Agencies are required by the Federal Acquisition Regulation to keep contract records for six years after final payment, the audit notes.
An analysis by big data analytics firm Govini reveals nearly half of the $600 million spent on HealtheVet went to Hewlett-Packard for hardware, software and other IT services. The agency also paid a large amount to CACI, as well as Systems Made Simple, which Lockheed Martin bought and then spun off as part of the IT business it sold to Leidos.
“It is truly amazing how much federal agencies spend on electronic health records,” said Matt Hummer, director of analytics and professional services at Govini.
The audit and the spending trail of failed IT projects matter as VA embarks on its fourth attempt to modernize its health IT and records systems—this one an expected $10 billion sole-source contract to Cerner Corp. Cerner ranked as the 13th highest-paid contractor in VA’s failed iEHR and VistA programs, according to GAO, receiving $13.4 million from the agency.
Cerner is also partnering with Leidos to build the Defense Department’s next-gen health records systems—valued at $4.3 billion over 10 years. When the Defense Department’s MHS Genesis platform and VA’s Cerner-built platform are fully operational, they are expected to be interoperable, or able to share records of soldiers and veterans seamlessly between each other.
However, as GAO’s audit warns, this is VA’s fourth attempt at modernizing its health records system. In the previous three instances, the agency’s planning, management and execution led to billions of dollars wasted.
“The department’s dedication to completing and effectively executing the planning activities that it has identified will be essential to helping minimize program risks and expeditiously guide this latest electronic health record modernization initiative to a successful outcome—which VA, for almost two decades, has been unable to achieve,” the audit states.
VA did not respond to Nextgov’s questions by publication time.
Agile organizations—of any size and across industries—have five key elements in common.
Our experience and research demonstrate that successful agile organizations consistently exhibit the five trademarks described in this article. The trademarks include a network of teams within a people-centered culture that operates in rapid learning and fast decision cycles which are enabled by technology, and a common purpose that co-creates value for all stakeholders. These trademarks complement the findings from “How to create an agile organization.”
The old paradigm: Organizations as machines
A view of the world—a paradigm—will endure until it cannot explain new evidence. The paradigm must then shift to include that new information. We are now seeing a paradigm shift in the ways that organizations balance stability and dynamism.
Sidebar
What is an agile organization?
The dominant “traditional” organization (designed primarily for stability) is a static, siloed, structural hierarchy: goals and decision rights flow down the hierarchy, with the most powerful governance bodies at the top (i.e., the top team). It operates through linear planning and control in order to capture value for shareholders. The skeletal structure is strong, but often rigid and slow moving.
In contrast, an agile organization (designed for both stability and dynamism) is a network of teams within a people-centered culture that operates in rapid learning and fast decision cycles which are enabled by technology, and that is guided by a powerful common purpose to co-create value for all stakeholders. Such an agile operating model has the ability to quickly and efficiently reconfigure strategy, structure, processes, people, and technology toward value-creating and value-protecting opportunities. An agile organization thus adds velocity and adaptability to stability, creating a critical source of competitive advantage in volatile, uncertain, complex, and ambiguous (VUCA) conditions.
First, the old paradigm. In 1910, the Ford Motor Company was one of many small automobile manufacturers. A decade later, Ford held 60 percent of the worldwide market for new automobiles. Ford reduced assembly time per vehicle from 12 hours to 90 minutes, and the price from $850 to $300, while also paying employees competitive rates.1
Ford’s ideas, and those of his contemporary, Frederick Taylor, grew out of scientific management, a breakthrough that applied the scientific method to optimizing labor productivity; it opened an era of unprecedented effectiveness and efficiency. Taylor’s ideas prefigured modern quality control, total-quality management, and, through Taylor’s student Henry Gantt, project management.
Gareth Morgan describes Taylorist organizations such as Ford as hierarchical and specialized, depicting them as machines.2 For decades, organizations that embraced this machine model and the principles of scientific management dominated their markets, outperformed other organizations, and drew the best talent. The century from 1911 to 2011, Taylor’s era onward, was “the management century.”
Disruptive trends challenging the old paradigm
Now, we find the machine paradigm shifting in the face of the organizational challenges brought by the “digital revolution” that is transforming industries, economies, and societies. This is expressed in four current trends:
Quickly evolving environment. All stakeholders’ demand patterns are evolving rapidly: customers, partners, and regulators have pressing needs; investors are demanding growth, which results in acquisitions and restructuring; and competitors and collaborators demand action to accommodate fast-changing priorities.
Constant introduction of disruptive technology. Established businesses and industries are being commoditized or replaced through digitization, bioscience advancements, the innovative use of new models, and automation. Examples include developments such as machine learning, the Internet of Things, and robotics.
Accelerating digitization and democratization of information. The increased volume, transparency, and distribution of information require organizations to rapidly engage in multidirectional communication and complex collaboration with customers, partners, and colleagues.
The new war for talent. As creative knowledge- and learning-based tasks become more important, organizations need a distinctive value proposition to acquire and retain the best talent. These “learning workers” often come from more diverse backgrounds and bring different thoughts, experiences, and desires (for example, millennials).
When machine organizations have tried to engage with the new environment, it has not worked out well for many. A very small number of companies have thrived over time; fewer than 10 percent of the non-financial S&P 500 companies in 1983 remained in the S&P 500 in 2013. From what we have observed, machine organizations also experience constant internal churn. According to our research with 1,900 executives, companies are adapting their strategy (and their organizational structure) with greater frequency than in the past. Eighty-two percent of them went through a redesign in the last three years. However, most of these redesign efforts fail; only 23 percent were implemented successfully.3
The new paradigm: Organizations as living organisms
The trends described above are dramatically changing how organizations and employees work. What, then, will be the dominant organizational paradigm for the next 100 years? How will companies balance stability and dynamism? Moreover, which companies will dominate their market and attract the best talent?
Our article “Agility: It rhymes with stability” describes the paradigm that achieves this balance and the paradox that truly agile organizations master—they are both stable and dynamic at the same time. They design stable backbone elements that evolve slowly and support dynamic capabilities that can adapt quickly to new challenges and opportunities. A smartphone serves as a helpful analogy; the physical device acts as a stable platform for myriad dynamic applications, providing each user with a unique and useful tool. Finally, agile organizations mobilize quickly: they are nimble and empowered to act, and they make acting easy. In short, they respond like a living organism (Exhibit 1).
Exhibit 1
When pressure is applied, the agile organization reacts by being more than just robust; performance actually improves as more pressure is exerted.4 Research shows that agile organizations have a 70 percent chance of being in the top quartile of organizational health, the best indicator of long-term performance.5 Moreover, such companies simultaneously achieve greater customer centricity, faster time to market, higher revenue growth, lower costs, and a more engaged workforce:
A global electronics enterprise delivered $250 million in EBITDA and a 20 percent share-price increase over three years by adopting an agile operating model with its education-to-employment teams.
A global bank reduced its cost base by about 30 percent while significantly improving employee engagement, customer satisfaction, and time to market.
A basic-materials company fostered continuous improvement among manual workers, leading to a 25 percent increase in effectiveness and a 60 percent decrease in injuries.
As a result, agility, while still in its early days, is catching fire. This was confirmed in a recent McKinsey Quarterly survey report of 2,500 business leaders.6 According to the results, few companies have achieved organization-wide agility, but many have already started pursuing it in performance units. For instance, nearly one-quarter of performance units are agile. The remaining performance units in companies lack dynamism, stability, or both.
However, while less than ten percent of respondents have completed an agility transformation at the company or performance-unit level, most companies have much higher aspirations for the future. Three-quarters of respondents say organizational agility is a top or top-three priority, and nearly 40 percent are currently conducting an organizational-agility transformation. High tech, telecom, financial services, and media and entertainment appear to be leading the pack with the greatest number of organizations undertaking agility transformations. More than half of the respondents who have not begun agile transformations say they have plans in the works to begin one. Finally, respondents in all sectors believe that more of their employees should undertake agile ways of working (on average, respondents believe 68 percent of their companies’ employees should be working in agile ways, compared with the 44 percent of employees who currently do).
The rest of this article describes the five fundamental “trademarks” of agile organizations based on our recent experience and research. Companies that aspire to build an agile organization can set their sights on these trademarks as concrete markers of their progress. For each trademark, we have also identified an emerging set of “agility practices”—the practical actions we have observed organizations taking on their path to agility (Exhibit 2).
Exhibit 2
The five trademarks of agile organizations
While each trademark has intrinsic value, our experience and research show that true agility comes only when all five are in place and working together. They describe the organic system that enables organizational agility.
Linking across them, we find a set of fundamental shifts in the mind-sets of the people in these organizations. Make these shifts and, we believe, any organization can implement these trademarks in all or part of its operations, as appropriate.
1. North Star embodied across the organization
Mind-set shift
From:“In an environment of scarcity, we succeed by capturing value from competitors, customers, and suppliers for our shareholders.”
To:“Recognizing the abundance of opportunities and resources available to us, we succeed by co-creating value with and for all of our stakeholders.”
Agile organizations reimagine both whom they create value for, and how they do so. They are intensely customer-focused, and seek to meet diverse needs across the entire customer life cycle. Further, they are committed to creating value with and for a wide range of stakeholders (for example, employees, investors, partners, and communities).
To meet the continually evolving needs of all their stakeholders, agile organizations design distributed, flexible approaches to creating value, frequently integrating external partners directly into the value creation system. Examples emerge across many industries, including: modular products and solutions in manufacturing; agile supply chains in distribution; distributed energy grids in power; and platform businesses like Uber, Airbnb, and Upwork. These modular, innovative business models enable both stability and unprecedented variety and customization.
To give coherence and focus to their distributed value creation models, agile organizations set a shared purpose and vision—the “North Star”—for the organization that helps people feel personally and emotionally invested. This North Star serves as a reference when customers choose where to buy, employees decide where to work, and partners decide where to engage. Companies like Amazon, Gore, Patagonia, and Virgin put stakeholder focus at the heart of their North Star and, in turn, at the heart of the way they create value.
Agile organizations that combine a deeply embedded North Star with a flexible, distributed approach to value creation can rapidly sense and seize opportunities. People across the organization individually and proactively watch for changes in customer preferences and the external environment and act upon them. They seek stakeholder feedback and input in a range of ways (for example, product reviews, crowd sourcing, and hackathons). They use tools like customer journey maps to identify new opportunities to serve customers better, and gather customer insights through both formal and informal mechanisms (for example, online forums, in-person events, and start-up incubators) that help shape, pilot, launch, and iterate on new initiatives and business models.
These companies can also allocate resources flexibly and swiftly to where they are needed most. Companies like Google, Haier, Tesla, and Whole Foods constantly scan the environment. They regularly evaluate the progress of initiatives and decide whether to ramp them up or shut them down, using standardized, fast resource-allocation processes to shift people, technology, and capital rapidly between initiatives, out of slowing businesses, and into areas of growth. These processes resemble venture capitalist models that use clear metrics to allocate resources to initiatives for specified periods and are subject to regular review.
Senior leaders of agile organizations play an integrating role across these distributed systems, bringing coherence and providing clear, actionable, strategic guidance around priorities and the outcomes expected at the system and team levels. They also ensure everyone is focused on delivering tangible value to customers and all other stakeholders by providing frequent feedback and coaching that enables people to work autonomously toward team outcomes.
2. Network of empowered teams
Mind-set shift
From:“People need to be directed and managed, otherwise they won’t know what to do—and they’ll just look out for themselves. There will be chaos.”
To:“When given clear responsibility and authority, people will be highly engaged, will take care of each other, will figure out ingenious solutions, and will deliver exceptional results.”
Agile organizations maintain a stable top-level structure, but replace much of the remaining traditional hierarchy with a flexible, scalable network of teams. Networks are a natural way to organize efforts because they balance individual freedom with collective coordination. To build agile organizations, leaders need to understand human networks (business and social), how to design and build them, how to collaborate across them, and how to nurture and sustain them.
An agile organization comprises a dense network of empowered teams that operate with high standards of alignment, accountability, expertise, transparency, and collaboration. The company must also have a stable ecosystem in place to ensure that these teams are able to operate effectively. Agile organizations like Gore, ING, and Spotify focus on several elements:
Implement clear, flat structures that reflect and support the way in which the organization creates value. For example, teams can be clustered into focused performance groups (for example, “tribes,” or a “lattice”) that share a common mission. These groups vary in size, typically with a maximum of 150 people. This number reflects both practical experience and Dunbar’s research on the number of people with whom one can maintain personal relationships and effectively collaborate.7 The number of teams within each group can be adapted or scaled to meet changing needs.
Ensure clear, accountable roles so that people can interact across the organization and focus on getting work done, rather than lose time and energy because of unclear or duplicated roles, or the need to wait for manager approvals. Here, people proactively and immediately address any lack of clarity about roles with one another, and treat roles and people as separate entities; in other words, roles can be shared and people can have multiple roles.
Foster hands-on governance in which cross-team performance management and decision rights are pushed out to the boundaries where teams interact.8 At this interaction point, decisions are made as close to the relevant teams as possible, in highly productive, limited-membership coordinating forums. This frees senior leaders to focus on overall system design and provide guidance and support to responsible, empowered teams that focus on day-to-day activities.
Evolve functions to become robust communities of knowledge and practice as professional “homes” for people, with responsibilities for attracting and developing talent, sharing knowledge and experience, and providing stability and continuity over time as people rotate between different operating teams.
Create active partnerships and an ecosystem that extends internal networks and creates meaningful relationships with an extensive external network so the organization can access the best talent and ideas, generate insights, and co-develop new products, services, and/or solutions. In agile organizations, people work hands-on and day-to-day with customers, vendors, academics, government entities, and other partners in existing and complementary industries to co-develop new products, services, and/or solutions and bring them to market.
Design and create open physical and virtual environments that empower people to do their jobs most effectively in the environment that is most conducive to them. These environments offer opportunities to foster transparency, communication, collaboration, and serendipitous encounters between teams and units across the organization.
Like the cells in an organism, the basic building blocks of agile organizations are small fit-for-purpose performance cells. Compared with machine models, these performance cells typically have greater autonomy and accountability, are more multidisciplinary, are more quickly assembled (and dissolved), and are more clearly focused on specific value-creating activities and performance outcomes. They can comprise groups of individuals working on a shared task (i.e., teams) or networks of individuals working separately but in a coordinated way. Identifying what type of performance cells to create is like building with Lego blocks. The various types (Exhibit 3) can be combined to create multiple tailored approaches.
Exhibit 3
The three most commonly observed agile types of performance cell today include:
Cross-functional teams deliver ‘products’ or projects, ensuring that the knowledge and skills to deliver desired outcomes reside within the team. These teams typically include a product or project owner to define the vision and prioritize work.
Self-managing teams deliver baseload activity and are relatively stable over time. The teams define the best way to reach goals, prioritize activities, and focus their effort. Different team members will lead the group based on their competence rather than on their position.
Flow-to-the-work pools of individuals are staffed to different tasks full-time based on the priority of the need. This work method can enhance efficiencies, enable people to build broader skillsets, and ensure that business priorities are adequately resourced.
However, other models are continuously emerging through experimentation and adaptation.
3. Rapid decision and learning cycles
Mind-set shift
From: “To deliver the right outcome, the most senior and experienced individuals must define where we’re going, the detailed plans needed to get there, and how to minimize risk along the way.”
To: “We live in a constantly evolving environment and cannot know exactly what the future holds. The best way to minimize risk and succeed is to embrace uncertainty and be the quickest and most productive in trying new things.”
Agile organizations work in rapid cycles of thinking and doing that are closely aligned to their process of creativity and accomplishment. Whether deployed as design thinking, lean operations, agile development, or another form, this integration and continual rapid iteration of thinking, doing, and learning forms the organization’s ability to innovate and operate in an agile way.
This rapid-cycle way of working can affect every level. At the team level, agile organizations radically rethink the working model, moving away from “waterfall” and “stage gate” project-management approaches. At the enterprise level, they use the rapid-cycle model to accelerate strategic thinking and execution. For example, rather than traditional annual planning, budgeting, and review, some organizations are moving to quarterly cycles, dynamic management systems like Objectives and Key Results (OKRs), and rolling 12-month budgets.
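The quarterly OKR cadence mentioned above can be made concrete with a small data sketch. The objective, key results, and figures below are invented for illustration; the 0.0–1.0 grading convention follows common OKR practice:

```python
# A minimal sketch of an OKR record with 0.0-1.0 key-result scoring.
# All names and numbers are hypothetical.

from dataclasses import dataclass, field


@dataclass
class KeyResult:
    description: str
    target: float        # measurable target, e.g. 500 signups
    actual: float = 0.0  # updated as the quarter progresses

    def score(self) -> float:
        """Key results are graded 0.0-1.0 against the target."""
        return min(self.actual / self.target, 1.0) if self.target else 0.0


@dataclass
class Objective:
    title: str
    key_results: list = field(default_factory=list)

    def score(self) -> float:
        """An objective's score is the average of its key-result scores."""
        if not self.key_results:
            return 0.0
        return sum(kr.score() for kr in self.key_results) / len(self.key_results)


okr = Objective("Improve onboarding", [
    KeyResult("New-user signups", target=500, actual=400),   # 0.80
    KeyResult("Activation rate (%)", target=40, actual=30),  # 0.75
])
print(round(okr.score(), 2))  # 0.78
```

Re-scoring a structure like this every quarter, rather than once in an annual review, is what gives the dynamic management system its short feedback loop.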
The impact of this operational model can be significant. For example, a global bank closed its project-management office and shifted its product-management organization from a traditional waterfall approach to a minimum-viable-product-based process. It moved from four major release cycles a year to several thousand product changes monthly; it simultaneously increased product development, deployment, and maintenance productivity by more than 30 percent.
There are several characteristics of the rapid cycle model:
Agile organizations focus on rapid iteration and experimentation. Teams produce a single primary deliverable (that is, a minimal viable product or deliverable) very quickly, often in one- or two-week “sprints.” During these short activity bursts, the team holds frequent, often daily, check-ins to share progress, solve problems, and ensure alignment. Between sprints, team members meet to review and plan, to discuss progress to date, and to set the goal for the next sprint. To accomplish this, team members must be accountable for the end-to-end outcome of their work. They are empowered to seek direct stakeholder input to ensure the product serves all the needs of a group of customers and to manage all the steps in an operational process. Following this structured approach to innovation saves time, reduces rework, creates opportunities for creative “leapfrog” solutions, and increases the sense of ownership, accountability, and accomplishment within the team.
Agile organizations leverage standardized ways of working to facilitate interaction and communication between teams, including the use of common language, processes, meeting formats, social-networking or digital technologies, and dedicated, in-person time, where teams work together for all or part of each week in the sprint. For example, under General Stanley McChrystal, the US military deployed a series of standardized ways of working between teams including joint leadership calls, daily all-hands briefings, collective online databases, and short-term deployments and co-location of people from different units. This approach enables rapid iteration, input, and creativity in a way that fragmented and segmented working does not.
Agile organizations are performance-oriented by nature. They explore new performance- and consequence-management approaches based on shared goals across the end-to-end work of a specific process or service, and measure business impact rather than activity. These processes are informed by performance dialogues comprised of very frequent formal and informal feedback and open discussions of performance against the target.
Working in rapid cycles requires that agile organizations insist on full transparency of information, so that every team can quickly and easily access the information they need and share information with others. For example, people across the unit can access unfiltered data on its products, customers, and finances. People can easily find and collaborate with others in the organization that have relevant knowledge or similar interests, openly sharing ideas and the results of their work. This also requires team members to be open and transparent with one another; only then can the organization create an environment of psychological safety where all issues can be raised and discussed and where everyone has a voice.
Agile organizations seek to make continuous learning an ongoing, constant part of their DNA. Everyone can freely learn from their own and others’ successes and failures, and build on the new knowledge and capabilities they develop in their roles. This environment fosters ongoing learning and adjustments, which help deliverables evolve rapidly. People also spend dedicated time looking for ways to improve business processes and ways of working, which continuously improves business performance.
Agile organizations emphasize quick, efficient, and continuous decision making, preferring 70 percent probability now versus 100 percent certainty later. They have insight into the types of decisions they are making and who should be involved in those decisions.9 Rather than big bets that are few and far between, they continuously make small decisions as part of rapid cycles, quickly test these in practice, and adjust them as needed for the next iteration. This also means agile organizations do not seek consensus decisions; all team members provide input (in advance if they will be absent), the perspectives of team members with the deepest topical expertise are given greater weight, and other team members, including leaders, learn to “disagree and commit” to enable the team to move forward.
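The trade-off of “70 percent probability now versus 100 percent certainty later” can be illustrated with a back-of-envelope model. All numbers here are hypothetical; the point is only that when decisions are small and reversible, higher throughput can outweigh occasional misses:

```python
# Illustrative model (hypothetical numbers): many small, reversible
# decisions made at 70% confidence vs. fewer decisions made only
# after waiting for certainty.

def expected_value(decisions_per_year, p_correct, value_per_win, cost_per_miss):
    """Expected annual value of a stream of small, reversible decisions."""
    wins = decisions_per_year * p_correct
    misses = decisions_per_year * (1 - p_correct)
    return wins * value_per_win - misses * cost_per_miss

# Fast team: 100 small bets a year at 70% confidence; misses are cheap
# because they are caught and reversed in the next iteration.
fast = expected_value(100, 0.70, value_per_win=10, cost_per_miss=2)

# Slow team: waiting for certainty cuts throughput to 25 decisions a year.
slow = expected_value(25, 1.00, value_per_win=10, cost_per_miss=2)

print(round(fast), round(slow))  # 640 250
```

The model breaks down, of course, for large irreversible bets, which is why agile organizations reserve slower deliberation for those.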
4. Dynamic people model that ignites passion
Mind-set shift
From:“To achieve desired outcomes, leaders need to control and direct work by constantly specifying tasks and steering the work of employees.”
To:“Effective leaders empower employees to take full ownership, confident they will drive the organization toward fulfilling its purpose and vision.”
An agile organizational culture puts people at the center, engaging and empowering everyone in the organization so they can create value quickly, collaboratively, and effectively.
Organizations that have done this well have invested in leadership which empowers and develops its people, a strong community which supports and grows the culture, and the underlying people processes which foster the entrepreneurship and skill building needed for agility to occur.
Leaders in agile organizations serve the people in the organization, empowering and developing them. Rather than planners, directors, and controllers, they become visionaries, architects, and coaches who empower the people with the most relevant competencies so they can lead, collaborate, and deliver exceptional results. Such leaders are catalysts who motivate people to act in team-oriented ways, and to become involved in making the strategic and organizational decisions that will affect them and their work. We call this shared and servant leadership.
Agile organizations create a cohesive community with a common culture. Cultural norms are reinforced through positive peer behavior and influence in a high-trust environment, rather than through rules, processes, or hierarchy. This extends to recruitment. Zappos, the online shoe retailer acquired by Amazon, changed its recruiting to support the selection of people who fit its culture—even paying employees $4,000 to leave during their onboarding if they did not fit.10
People processes help sustain the culture, including clear accountability paired with the autonomy and freedom to pursue opportunities, and the ongoing chance to have new experiences. Employees in agile organizations exhibit entrepreneurial drive, taking ownership of team goals, decisions, and performance. For example, people proactively identify and pursue opportunities to develop new initiatives, knowledge, and skills in their daily work. Agile organizations attract people who are motivated by intrinsic passion for their work and who aim for excellence.
In addition, talent development in an agile model is about building new capabilities through varied experiences. Agile organizations allow and expect role mobility, where employees move regularly (both horizontally and vertically) between roles and teams, based on their personal-development goals. An open talent marketplace supports this by providing information on available roles, tasks, and/or projects as well as people’s interests, capabilities, and development goals.
5. Next-generation enabling technology
Mind-set shift
From:“Technology is a supporting capability that delivers specific services, platforms, or tools to the rest of the organization as defined by priorities, resourcing, and budget.”
To:“Technology is seamlessly integrated and core to every aspect of the organization as a means to unlock value and enable quick reactions to business and stakeholder needs.”
For many organizations, such a radical rethinking of the organizational model requires a rethinking of the technologies underlying and enabling their products and processes, as well as the technology practices needed to support speed and flexibility.
Agile organizations will need to provide products and services that can meet changing customer and competitive conditions. Traditional products and services will likely need to be digitized or digitally enabled. Operating processes will also have to continually and rapidly evolve, which will require evolving technology architecture, systems, and tools.
Organizations will need to begin by leveraging new, real-time communication and work-management tools. Implementing modular-based software architecture enables teams to effectively use technologies that other units have developed. This minimizes handovers and interdependencies that can slow down production cycles. Technology should progressively incorporate new technical innovations like containers, micro-service architectures, and cloud-based storage and services.
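The value of modular software architecture described above can be sketched in miniature: teams code against a shared interface, so one team’s module can be replaced without touching its callers, minimizing handovers between units. The names here (PaymentGateway and so on) are invented for illustration:

```python
# A sketch of modularity: callers depend on an interface, not on any
# one team's implementation, so modules can ship and evolve independently.
# All names are hypothetical.

from typing import Protocol


class PaymentGateway(Protocol):
    def charge(self, account_id: str, amount_cents: int) -> bool: ...


class LegacyGateway:
    """Owned by one team; callers never see its internals."""
    def charge(self, account_id: str, amount_cents: int) -> bool:
        return amount_cents > 0  # stand-in for a real backend call


class CloudGateway:
    """A replacement module can ship independently, behind the same interface."""
    def charge(self, account_id: str, amount_cents: int) -> bool:
        return amount_cents > 0  # stand-in for a real backend call


def checkout(gateway: PaymentGateway, account_id: str, cents: int) -> str:
    # This caller works unchanged with either implementation.
    return "paid" if gateway.charge(account_id, cents) else "declined"


print(checkout(LegacyGateway(), "a1", 1200))  # paid
print(checkout(CloudGateway(), "a1", 1200))   # paid
```

The same idea, applied at service granularity with network boundaries, is what the micro-service architectures mentioned above provide.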
In order to design, build, implement, and support these new technologies, agile organizations integrate a range of next-generation technology development and delivery practices into the business. Business and technology employees form cross-functional teams, accountable for developing, testing, deploying, and maintaining new products and processes. They use hackathons, crowd sourcing, and virtual collaboration spaces to understand customer needs and develop possible solutions quickly. Extensive use of automated testing and deployment enables lean, seamless, and continuous software releases to the market (for example, every two weeks vs. every six months). Within IT, different disciplines work closely together (for example, IT development and operations teams collaborate on streamlined, handover-free DevOps practices).
Sidebar
McKinsey on agile transformations
By the year 2000, product developers were facing a challenge: products were being released so slowly that by the time they were production-ready they were already obsolete, and customer needs had moved on. This all changed in 2001, when 17 software developers who called themselves “organizational anarchists” sought alternatives to the typical waterfall approach to software development. They proposed a new set of values, methodologies, and ways of working that then swept through the product-development and technology arenas over the next 16 years. This became known as “agile software development” or “agile technology.”
In 2011, McKinsey’s research into organizational redesigns uncovered a very similar problem: 57 percent of companies were redesigning every two years, with the average redesign taking 18 months. In other words, companies were barely finishing one redesign before changes in the market or in customer needs required them to start another, a similar “waterfall” problem in organization design. A new emergent organization form addresses this issue. It leverages both established and novel principles of how to organize work, deploy resources, make decisions, and manage performance, with the goal of helping organizations quickly adapt to rapidly changing conditions. Compared with the traditional organizational model, this new approach, which we called an “agile organization” in a nod to its roots, is emerging as a fundamentally different and higher-performing kind of organization, one designed for the complex, constantly evolving markets of the 21st century.
McKinsey defines “agile transformations” broadly. For us, an “agile transformation” is a holistic change that creates value for the enterprise. It necessarily requires a change in the operating model and ways of working. Often technology and digitization are pieces of the journey toward completing an agile transformation. We take a holistic view of a company’s operating model across people, process, structure, strategy, and technology, looking for both the stable and dynamic elements that must be in place to create agility. Such transformations can be done across an entire enterprise or within just a single function, business unit, or end-to-end process. They should take an industry-backed perspective to inform the agile design, looking for the latest trends around digital, technology, talent, and supply chain that are poised to make disruptive changes in the market. They should also tie organizational agility tightly to the agile delivery of projects so that organizations build the skills necessary to deliver work quickly as well as create the right organizational environment to make those teams successful.
To learn more about agile organizations, see other articles in the Agile Organization series, or to learn more about agile technology transformations or digital transformations, please see articles on McKinsey.com.
In summary, today’s environment is pressing organizations to become more agile; in response, a new organizational form is emerging that exhibits the five trademarks discussed above. In aggregate, these trademarks enable organizations to balance stability and dynamism and thrive in an era of unprecedented opportunity.
The next question is how to get there. In a rapidly changing commercial and social environment, some organizations are born agile, some achieve agility, and some have agility thrust upon them. To learn more about how to begin the journey toward an agile transformation, stay tuned for another paper in the Agile Organization series, “The journey to an agile organization.”
Chris Nerney, Contributing Writer | January 23, 2018 @chrisnerney
Proposed changes to federal regulations protecting the confidentiality of veterans’ medical records would make it easier for healthcare providers in the private sector to access patient data when they need it most – at the point of care.
The new rule proposed by Veterans Affairs (VA) Secretary David Shulkin, MD, would allow VA providers to share individual health records with community providers without requiring a signed, physical consent form from the patient before treatment.
Instead, health information exchange (HIE) providers would be responsible for confirming that a patient has agreed to allow the VA to share his or her medical data with a community provider. In lieu of a physical document signed by the patient, an electronic attestation could be used to request records from the VA.
“While an estimated three out of four veterans enrolled in VA’s health care system also seek medical care in the community, HIE community partners’ requests for their VA health records must frequently be denied because VA does not have a consent on file,” the VA explains in the Federal Register. “The primary obstacle is that veterans will often seek care in the community prior to having the opportunity to provide the consent form to VA and are then left without any means of getting the consent into VA’s physical possession promptly once they are at the community health care facility.”
These types of bureaucratic barriers have contributed to delays in care for veterans, which in turn can cause them to give up seeking treatment. A 2017 study published by the National Institutes of Health showed that 29 percent of veterans delayed seeking needed healthcare in 2010-2011, compared to only 17.2 percent of all Americans.
The proposed rule doesn’t eliminate the need for the VA to be able to access a signed consent from a veteran. It includes language requiring the HIE community provider to make the consent form available to the VA within 10 business days, either by storing the written form electronically or by mailing the physical copy to the VA.
Alternatively, the community provider can retain the patient’s written consent under a memorandum of understanding drafted and signed by the VA and the provider, in which case the provider would be required to follow VA record-retention requirements.
The VA is accepting public comments on the proposed rule through March 20.
Two new cancer treatments have produced seemingly miraculous cures, but if you happen to live in Arkansas or Montana, or a handful of other rural states—let alone outside the U.S.—you’ll have to travel hundreds of miles to get them. And it’s by no means certain that they’ll eventually be available everywhere.
These groundbreaking gene therapies, Kymriah and Yescarta, were approved last year in the U.S. Not only are they hugely expensive—Kymriah is $475,000 and Yescarta is $373,000 for a one-time treatment—but for now you can get them only in certain urban areas. We mapped those locations below. (Current Kymriah sites are red; current Yescarta sites are blue; sites where both therapies are available are green; and planned Kymriah sites are orange. Click on the tab in the top left corner to see a drop-down list of all sites.)
As you can see, some of the biggest gaps are in rural states, where cancer already kills more people than it does in cities. That’s a problem because both therapies are given as a last resort when traditional cancer drugs have failed. By the time patients get Kymriah or Yescarta, they’re often very sick, so traveling long distances is hard and could delay treatment.
To be fair, it’s early days and the companies that market the therapies, Novartis and Gilead, have plans to eventually add more sites. But in the short term, some far-flung cancer patients may be out of luck. And even in the long term, there are factors that could limit their access.
Called CAR-T cell therapies, Kymriah and Yescarta involve a highly specialized process. Doctors extract T cells—one of the immune system’s weapons against disease—from patients and genetically alter them, essentially supercharging them against cancer cells. They then infuse the modified immune cells back into the body.
Many patients have had remarkable recoveries, but they can also suffer toxic and sometimes deadly side effects. Aaron Levine at the Georgia Tech School of Public Policy, who has studied the ethics of CAR-T cell therapies, says these side effects will likely be the biggest obstacle to making the therapies more widely available, “as only a small number of physicians and medical teams are prepared to address them.”
If a lot of patients suffer these side effects or die in the initial rollout of Kymriah and Yescarta, that could slow the addition of more sites.
Another factor is that right now, CAR-T therapies primarily treat rare cancers. Currently, Kymriah treats a type of childhood cancer called acute lymphoblastic leukemia, and Novartis thinks only about 600 patients a year will be eligible for it. Yescarta treats large B-cell lymphoma in adults, and Gilead estimates it could help around 7,500 people a year.
“The reality is, the market isn’t that big, so it doesn’t make sense to train everyone to do it,” Levine says. More CAR-T therapies are in development, but so far it’s unclear how well they will work for more common cancers.
There’s also the question of whether insurance will pay for these staggeringly expensive treatments. Only a handful of patients have been treated with Yescarta; hundreds more are waiting because of payment delays. If some insurers decide they won’t cover the cost, that could foil companies’ plans to expand treatment sites. “We need to watch out for a situation in which these therapies only become available to urban elites who live near academic medical centers,” Levine says.
Still, Levine is hopeful. “It’s still early enough for things to change and evolve,” he says.
Peter Emanuel, director of the Winthrop P. Rockefeller Cancer Institute in Little Rock, Arkansas, which is 350 miles from the nearest treatment site for Kymriah or Yescarta, isn’t worried about what the map looks like right now.
He says administering these therapies and managing potential side effects requires a large and specialized team of hospital workers, so it’s probably best—at least for now—that Kymriah and Yescarta are available only at hospitals with more resources.
The real test, says Emanuel, will be if and when new CAR-T therapies are approved for more common cancers. “At that point, I think it’s justified to expand the number of centers, and hopefully that expansion includes smaller cities and more rural states,” he says.