Most patients haven’t thought much about data sharing, according to Sara Riggare, but those who have “find the current system unreasonable. Patients expect that health care professionals and researchers use patient data in the best possible way. That there is a fight over what the best way is is perplexing and disappointing.”
Riggare is an engineer and doctoral student at the Health Informatics Center at Karolinska Institutet in Stockholm, where she researches models and methods for “digital self-care” in chronic disease — ways to use technology in monitoring and treating oneself. She is also a patient. Riggare had her first symptoms of Parkinson’s disease in her early teens and calls herself a “digital patient.” Actively engaged in her own care, she advocates both for patients’ right to access their own medical data and for the health care system to more actively use patients’ experiences as a resource. She shares her opinions on her blog, “Sara. Not patient but im-patient.”1
Like Riggare, the patients who participated in the recent Journal summit on aligning incentives for data sharing want their data shared quickly, especially to ensure that other patients know about possible side effects. But they also want some control over how the data are shared. For example, they would be more hesitant to participate if commercial or other interests were involved — for instance, if health care systems wanted to use the data to decide whether to provide care to certain groups or if drug or insurance companies had a commercial interest in them.
Patients also said they wanted trial results to be shared with participants themselves, along with an explanation of what the results mean for them — something that generally doesn’t happen now. In addition, most participants said they had entered a trial not to advance knowledge, but to obtain what their doctor thought was the best treatment option for them. All of which raises some questions about how well patients understand consent forms.
To some extent, what informed consent means to the parties involved guides thinking about who owns and should control clinical trial data. The value of trials for society and other patients is not necessarily the same as what benefits individual participants in research projects. Moreover, participants may be vulnerable and dependent on their doctor’s advice and follow-up. They may feel pressured to participate or may not understand the full implications of doing so — all of which contributes to the ethical dilemmas that necessitate careful regulation of medical research.
Before World War II, there were no written international rules or conventions regulating medical research. The concept that “the voluntary consent of the human subject is absolutely essential” was introduced in the Nuremberg Code,2 which was formulated in August 1947 by American judges presiding over the trial of Nazi doctors accused of conducting murderous and torturous human experiments in concentration camps.
It gradually became clear that the research community needed to further intensify its ethical awareness. For example, “voluntary consent” was not a license to conduct unethical experiments or do whatever the researcher wanted with patient data. The World Medical Association coordinated the work leading up to the Declaration of Helsinki, adopted in June 1964 (and since revised seven times). Though the Declaration isn’t legally binding, it has been codified in or influenced much national and regional legislation.
In 2008, the U.S. Food and Drug Administration ceased requiring trials conducted outside the United States to comply with the Declaration, and U.S. and European legislation now seems to be headed in different directions regarding data sharing. Simply put, the disagreement is over whether “consent” means that the patient transfers property rights to researchers or that patients own the data and give researchers license to use them for a defined purpose.
One summit attendee, Sharon Terry — president and CEO of Genetic Alliance, a network that aims to help people take charge of their own health, and the mother of two children with a rare genetic disorder — remarked that “Trial participants are not patients in the traditional sense of the word. It really should be looked at as a partnership.”
Yet the discussion about data sharing has largely taken place between clinical trialists, who spend years collecting, curating, and analyzing data from clinical trials, and data scientists, who would like to add value to those data by reanalyzing and reusing them in novel ways. The discussion has been about data ownership, whether a reanalysis conducted independently of the original research group will ever be valid, and whether it’s fair for data scientists to get data “for free.” Both sides claim to have the patient’s and the public’s best interests at heart, but not many partisans of either camp have asked patients what those interests are.
Patients and patient representatives were grateful to have a seat at the summit table and provided fresh perspectives on research, medicine, and patient care. “Digital patients” like Riggare and Dave deBronkart (who calls himself “e-patient Dave”), public advocates like Terry, and the creators and users of initiatives such as PatientsLikeMe and Quantified Self don’t want to be passive observers and sources of research data. They use the power of the Internet to engage in their own care, interact with clinicians and fellow patients, create new knowledge, and suggest new ways of delivering health care. They believe in sharing data and experiences in order to help themselves and fellow patients. Anna McCollister-Slipp, chief advocate for participatory research at the Scripps Translational Science Institute, reminded the summit audience that “Patients are smart, very receptive — and many of us have day jobs that can bring a whole new set of skills to medical research. We know how it is living with disease and can help solve problems faster.”
Asked whether she worried about patient privacy in the event that trial data became widely shared, Riggare said the need for “privacy is not a constant. It varies depending on the context. If you have a life-threatening disease and need help, you do not care much about privacy. The question is also: How many people will die if we don’t share data? Personally, I know people whose lives have been saved because tumor data were shared.”
Riggare’s point is that receiving health care always implies a loss of privacy. Patients must expose personal information to get help, and that help is usually built on knowledge gained from experiences of previous patients who have revealed personal information. Patients want their data used responsibly, however, so the question is really: Who should control how data are distributed and used by others? The patients themselves? Doctors and researchers? Research institutions or governments?
Traditionally, doctors collected and protected patients’ health information. As health care has become more complex and information has been shared between doctors and other health care workers, among institutions, and sometimes between countries, the legal and ethical framework securing that information has also grown increasingly complex. In addition, laws vary from country to country. This complexity and variation hinder sharing of patient data, whether from clinical trials or electronic health records.
In the United States, for example, absent specific language to the contrary in informed consent documents, research participants don’t have to give specific permission for their deidentified data to be used by other researchers. Indeed, the courts have ruled that even biospecimens don’t belong to patients. Europe seems to be moving in the opposite direction — requiring explicit consent for reuse of data or data sharing and allowing patients to withdraw their consent at any time. The relevant legislation is expected to be strengthened next year.3 Other regions will have their own concerns and priorities, but it seems unlikely that many will force or encourage their people to give up rights to their own clinical data if they participate in research.
Perhaps the solution to the data-sharing struggle lies in shifting data ownership and control to individual patients everywhere. What such a policy would look like in practice remains unclear, but current technology makes it easy to stay in touch with trial participants and seek their renewed consent, and dynamic consent forms are being developed and tested.4 Seeing clinical trial data as the property of each patient might simplify the data-sharing discussion: the patient shares, first with the clinical trialists and then, if the patient wishes, with data scientists.
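To make the idea of dynamic consent concrete, here is a minimal, purely illustrative sketch (in Python) of what a per-purpose, revocable consent record might look like. The purpose names and methods are hypothetical and do not describe any system mentioned above.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a per-purpose, revocable ("dynamic") consent record.
# Purpose names and the API are illustrative, not any real system's.

@dataclass
class ConsentRecord:
    patient_id: str
    grants: dict = field(default_factory=dict)  # purpose -> timestamp of grant

    def grant(self, purpose: str) -> None:
        """Patient authorizes a specific use, e.g. 'trial_analysis' or 'secondary_reuse'."""
        self.grants[purpose] = datetime.utcnow()

    def revoke(self, purpose: str) -> None:
        """Patient withdraws authorization at any time, as European rules envision."""
        self.grants.pop(purpose, None)

    def permits(self, purpose: str) -> bool:
        return purpose in self.grants


consent = ConsentRecord(patient_id="patient-001")
consent.grant("trial_analysis")      # share first with the clinical trialists
consent.grant("secondary_reuse")     # then, if the patient wishes, with data scientists
consent.revoke("secondary_reuse")    # and withdraw that permission later
print(consent.permits("trial_analysis"), consent.permits("secondary_reuse"))  # True False
```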
William Stanley Jevons, the nineteenth-century English economist, once wrote to a friend that he’d had no special ambition as a young man. He just did what he had to do. After his father went bankrupt in the iron business, in 1848, Jevons reluctantly left London for Sydney, to take a job analyzing the quality of the coinage at the Australian Mint. Somehow, this combination of work, family history, and deep boredom led Jevons to spend his days developing a theory about value, helping to start what is known as the marginal revolution. Before Jevons, economists thought that prices should be based on the cost of making goods. Jevons showed that prices should reflect the degree to which a consumer values a product. The marginal revolution taught a seeming paradox: if industrialists lowered their prices, they could make more money; more people would buy their goods, enabling economies of scale. It was a change in pricing strategy, almost as much as one in technology, that led to mass production and the modern world.
There is one sector of the U.S. economy, however, that is stuck in the pre-Jevons conception of value: health care. The health-care crisis in the United States is in many ways a pricing crisis. Nearly all medical care is paid on a fee-for-service basis, which means that medical providers make more money if they perform more procedures. This is perverse. We don’t want an excess of health-care services, especially unnecessary ones; we want health. But hardly anybody gets paid when we are healthy.
A superior payment model has existed in various corners of the country for a long time. Mark Twain, in recalling his youth in Missouri, described a Dr. Meredith, who “saved my life several times” and charged the families in town twenty-five dollars a year, whether they were sick or well. This is what is now called capitation, an ungainly name for a system in which a medical provider is paid a fixed amount per patient—these days, it is typically upward of ten thousand dollars a year—whether that person needs expensive surgery or just a checkup.
This encourages maintaining health. Geisinger Health System, which is based in Danville, Pennsylvania, has used a capitation model for more than a century. Geisinger has long known that many of its diabetic patients live in areas with an abundance of fast-food places but no supermarkets. Last year, it began providing free, healthy groceries to those patients through a hospital pharmacy. “The results are so spectacular,” David Feinberg, the C.E.O. of Geisinger, told me. The average weight and blood pressure among diabetics fell, and fewer required dialysis or eye surgery, costly consequences of unchecked diabetes. The cost for the food was two thousand dollars a year per patient. The savings from doing fewer procedures will come to more than twenty-four thousand dollars a year per patient. Similar experiments elsewhere in the country show better outcomes at a lower cost for joint replacement, post-surgical care, and over-all population health.
So why isn’t capitation everywhere? One reason is history. The 1973 Health Maintenance Organization Act took a then obscure model of capitation and mandated it for all large companies that offered health insurance. The law was poorly written, and led to a proliferation of H.M.O.s that failed to cut costs and deprived people of care, putting many off the idea of capitation. The Affordable Care Act, better known as Obamacare, experimented more gingerly with new payment systems. It left fee-for-service largely in place but created the Center for Medicare and Medicaid Innovation, to explore alternative payment systems. The center’s experiments have shown that, in order to assure adequate care, providers must be rewarded based on objective indicators of health—to prevent doctors from profiting by withholding care—and that patient groups must be large enough and diverse enough that treating sick people does not jeopardize the financial health of providers.
Capitation, at its best, both improves health care and cuts costs. David Feinberg estimates that replacing fee-for-service with per-patient payment would cut the nation’s health-care costs in half; others believe that the savings would be closer to ten per cent, which, for an industry that makes up nearly a fifth of the economy, would still mean an enormous savings. Capitation even has bipartisan support. Paul Ryan has called for alternatives to fee-for-service, as have both conservative and liberal think tanks. The left and right continue to argue about who should pay, the government or the private sector, but it is still remarkable that they find anything to agree on.
It’s strange, then, that in the rush to “repeal and replace” the Affordable Care Act the pricing of health-care services has scarcely been mentioned. The health-care bill recently passed by the House of Representatives would transfer money to the rich (in the form of a tax cut) and slash Medicaid, which would lead to an existential crisis for many health-care providers, leaving them in no shape to overturn the way they charge for their services.
If Republicans in Congress read their Jevons, they might appreciate that a properly designed payment system could, with time and good faith, lower costs and government spending while improving the health of Americans. Jevons seemed to anticipate this moment. He wrote that politicians are often asked to lower taxes to “leave the money to fructify in the hands of the people.” But, he reasoned, a short-term postponement of tax cuts could favor a long-term improvement of fiscal health. “Could a minister be found strong and bold enough” to make such common-sense economic policy, he wrote, “he would have an almost unprecedented claim to gratitude and fame.” ♦
Genetic information is becoming ubiquitous in research and medicine. The cost of genetic analysis continues to fall, and its medical and personal value continues to grow. Anticipating this age of genetic medicine, policymakers passed laws and regulations years ago to protect Americans’ privacy and prevent misuse of their health-related information. But a bill moving through the House of Representatives, called the Preserving Employee Wellness Programs Act (H.R. 1313), would preempt key protections. Because the bill, which was sent to the full House by the Education and the Workforce Committee in March, would substantially change legal protections related to the collection and treatment of personal health and genetic information by workplace wellness programs, it should be on the radar screens of physicians, researchers, and the public.
Several federal laws currently prohibit discrimination and safeguard the privacy of genetic and other health-related information. The Americans with Disabilities Act of 1990 (ADA) prohibits employment discrimination based on disability or perceived disability and generally bars employers from making medical inquiries and examinations. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) prohibits discrimination by group health plans based on health information and exclusion periods for coverage of preexisting conditions based on genetic information. HIPAA also includes privacy protections for personal health information that apply to employer-sponsored health plans but not to employers themselves.
The Genetic Information Nondiscrimination Act of 2008 (GINA) prohibits both employment and health insurance discrimination based on genetic information, and prohibits employers and insurers from requesting genetic information from individuals. Finally, the Affordable Care Act of 2010 (ACA) prohibits discrimination and preexisting-condition exclusions based on health status and genetic information in all types of health coverage.
These laws, however, made exceptions for voluntary wellness programs, and political pressure from employers and insurers has prompted broadening of those exceptions (http://fortune.com/2017/03/10/genetic-testing-workplace-wellness-bill/). Under the ACA, employer-sponsored health plans can have “reasonably designed” wellness programs that vary employees’ premium contributions by up to 30% of the total cost of the group health plan (combined employer and employee shares) according to whether the person meets specific biometric targets, such as normal blood glucose levels. Such “health contingent” programs are exempted from the prohibition on varying premium contributions according to health status. Under ACA regulations, a reasonably designed health-contingent wellness program doesn’t have to be based on scientific evidence or collect or report data on its efficacy in improving enrollees’ health. And the ACA did not limit incentives or set standards for wellness programs that vary insurance premiums on the basis of employees’ participation only and not their health outcomes (“participatory” programs), other than requiring that such programs be offered equally to all similarly situated persons.
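As a rough, hypothetical illustration of that 30% cap (the premium figures below are invented), the key point is that the maximum permitted variation is calculated on the total cost of coverage, not just the employee's own share:

```python
# Hypothetical illustration of the ACA's 30% cap on health-contingent wellness
# incentives. Premium figures are invented for the example.

def max_wellness_incentive(total_plan_cost: float, cap: float = 0.30) -> float:
    """Largest allowed variation in an employee's premium contribution:
    a share of the *total* cost of coverage (employer + employee shares)."""
    return cap * total_plan_cost

employer_share = 5_000.0   # hypothetical annual employer contribution
employee_share = 1_500.0   # hypothetical annual employee contribution
total_cost = employer_share + employee_share

print(max_wellness_incentive(total_cost))  # 1950.0 -- can exceed the employee's own share
```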
In 2016, the Equal Employment Opportunity Commission revised ADA and GINA regulations to redefine “voluntary” wellness programs, which are excepted from the rules against employers’ requesting health and genetic information. Until then, employees who declined to disclose such information to wellness programs could not be penalized. Now, workers (and their spouses) can be charged up to an additional 30% of the cost of health coverage for refusing to participate in wellness programs that collect health information, although they cannot be fired or denied health benefits for refusing to disclose information. The final regulations did little to limit information collection by wellness programs.
The 2016 rules did retain certain protections for genetic information. Wellness programs must waive penalties for persons who decline to disclose genetic information about themselves, and they may not provide incentives for disclosure of health information about workers’ children. And four other GINA standards still apply: wellness programs can request genetic information only if they offer health or genetic services; people must give prior, knowing, voluntary, written authorization to have their genetic information disclosed; individually identifiable information can be obtained only by the patient and the licensed health professional providing the services; and identifiable information can be available only for the purpose of such services and cannot be disclosed to the employer except in aggregated, deidentified forms.
Today, just 8% of large employers offer health-contingent programs authorized by the ACA and subject to ACA limits on incentives; but nearly three quarters of large employers collect employee health information through wellness programs, and more than half of them provide incentives to employees to participate.1 Yet only 41% of workers agree to disclose health information. Privacy is one reason why. The incidence of stigmatized health conditions among adults covered by employer health plans (30%)1 may help explain why people prefer not to participate in wellness surveys and medical exams asking about mental illness, pregnancy, and other health information. In addition, workers worry about how wellness-program vendors might use their information for purposes such as marketing.2 Consumer and patient advocates note that wellness programs routinely obtain passive authorization from participants to access further information about them, including claims and medical records data, and share it with their business partners.
Now, the legal landscape for wellness programs may change dramatically. H.R. 1313’s sponsors argue that GINA and ADA rules conflict with ACA rules — like road signs at an intersection reading “right turn only” and “left turn only” — threatening wellness programs that could promote healthier lifestyles and reduce costs (www.washingtontimes.com/news/2017/apr/18/employee-wellness-programs-should-be-preserved/). Making wellness-program rules consistent, they say, would end confusion and preserve the option for employers to offer wellness programs and individuals to participate in them. But the laws involved address different questions. GINA and the ADA set standards for “voluntary” wellness programs, which may request sensitive information that employers are otherwise prohibited from collecting. The ACA sets standards for how such information may be used to vary workers’ premiums as an incentive to improve their health. Perhaps a more apt analogy would be one road sign stating the speed limit and another reading “stop for pedestrians”: drivers can and must obey both.
H.R. 1313 takes a different approach, however. It would deem workplace wellness programs that comply with ACA standards — including the absence of standards for practices the law doesn’t address — to be compliant with ADA and GINA standards. As a result, most participatory wellness programs that collect health information would face no limit on incentives for getting people to divulge information. GINA wellness-program standards would no longer protect genetic information. Employers could pressure employees to disclose information, and wellness programs could share identifiable information with employers.
H.R. 1313 undermines the principle that genetic information needs the highest level of protection so that people can make decisions about obtaining their own information without fearing that it might be used against them. It thus challenges individual autonomy, a bedrock ethical principle in medicine and research. In the genetics context, this principle requires that people be provided adequate information about what a genetic test is and what the result may mean for them and their families so they can make an informed decision about being tested. Autonomy encompasses the “right not to know” one’s genetic information, which is particularly important with tests that reveal future health risks when no prevention or intervention is available to reduce those risks. This bill would permit employers to use financial incentives to get employees to take a genetic test that might not be medically necessary or ethically appropriate. It would also challenge employees’ ability to guard their privacy, overriding the requirement that individually identifiable genetic information gathered by wellness programs be shared only with the patient and the clinician providing care.
Downstream, such legislation could also stifle medical progress. Today, genetic tests are available for more than 10,000 health conditions, and whole-genome sequencing may eventually become the standard of care and part of our medical records.3 DNA analysis is increasingly included in biomedical research protocols. If employers and business associates of wellness programs can request and share genetic information, and impose penalties for nondisclosure, surely that will dampen enthusiasm about participating in research.
Take, for example, the National Institutes of Health’s planned All of Us Research Program. The investigators aim to partner with a million or more people throughout the United States who will provide detailed information about their health, lifestyle, and environment. Volunteers will also provide biospecimens that will be used to generate genetic and other information and will participate over many years to provide longitudinal data that will be invaluable for research on myriad diseases. Participants will have access to their study data. Under H.R. 1313, employers could ask employees to disclose their research information to wellness programs — and penalize them for refusing. This possibility would have to be disclosed to prospective All of Us participants and might dissuade many otherwise eager and altruistic volunteers. Fear of discrimination could discourage people from getting tests that could save their lives and from participating in research that could lead to future cures.
Over and over I hear some doctor or EHR industry person say, “Why doesn’t the government just provide one EHR for all of healthcare?” Usually this is followed by some suggestion that the government has invested millions (or is it billions?) of dollars in the Vista EHR software and they should just make that the required national EHR.
You can see where this thinking comes from. The government has invested millions of dollars in the Vista EHR software. It’s widely used across the country. It’s used by most (and possibly all) of the various medical specialties. Lots of VA users love the benefit of having one EHR system where their records are always available no matter where in the VA system they go for health care. I’m sure there are many more reasons as well.
While the idea of a single EHR for all of healthcare is beautiful in theory, the reality of our healthcare system is that it’s impossible.
I’ve always known that the idea of a single government EHR was impossible, but I didn’t have a good explanation for why I thought it was impossible. Today, I saw a blog post called “Health IT Down the Drain” on Bobby Gladd’s blog. The blog post refers to the $1.3 billion over the last 4 years (their number) that has been spent trying to develop a single EHR system between the Department of Defense (DoD) and Veterans Affairs (VA). Congress and the President have demanded an “integrated” and “interoperable” solution between the two departments, and we have yet to see results. From Bobby’s post comes this sad quote:
“The only thing interoperable we get are the litany of excuses flying across both departments every year as to why it has taken so long to get this done,” said Miller, the chairman of the Veterans Affairs Committee…
The government can’t even bring together two of its very own departments around a single EHR solution. Imagine how it would be if the government tried to roll out one EHR system across the entire US healthcare system.
I hope those people who suggest one government EHR can put that idea to bed. This might work in a much smaller country with a simpler healthcare system. It’s just never going to happen in the US.
The vast majority of companies are more exposed to cyberattacks than they have to be. To close the gaps in their security, CEOs can take a cue from the U.S. military. Once a vulnerable IT colossus, it is becoming an adroit operator of well-defended networks. Today the military can detect and remedy intrusions within hours, if not minutes. From September 2014 to June 2015 alone, it repelled more than 30 million known malicious attacks at the boundaries of its networks. Of the small number that did get through, fewer than 0.1% compromised systems in any way. Given the sophistication of the military’s cyberadversaries, that record is a significant feat.
One key lesson of the military’s experience is that while technical upgrades are important, minimizing human error is even more crucial. Mistakes by network administrators and users—failures to patch vulnerabilities in legacy systems, misconfigured settings, violations of standard procedures—open the door to the overwhelming majority of successful attacks.
The military’s approach to addressing this dimension of security owes much to Admiral Hyman Rickover, the “Father of the Nuclear Navy.” In its more than 60 years of existence, the nuclear-propulsion program that he helped launch hasn’t suffered a single accident. Rickover focused intensely on the human factor, seeing to it that propulsion-plant operators aboard nuclear-powered vessels were rigorously trained to avoid mistakes and to detect and correct anomalies before they cascaded into serious malfunctions. The U.S. Department of Defense has been steadily adopting protocols similar to Rickover’s in its fight to thwart attacks on its IT systems. Two of this article’s authors, Sandy Winnefeld and Christopher Kirchhoff, were deeply involved in those efforts. The article’s purpose is to share the department’s approach so that business leaders can apply it in their own organizations.
Like the Defense Department, companies are under constant bombardment from all types of sources: nation-states, criminal syndicates, cybervandals, intruders hired by unscrupulous competitors, disgruntled insiders. Thieves have stolen or compromised the credit-card or personal information of hundreds of millions of customers, including those of Sony, Target, Home Depot, Neiman Marcus, JPMorgan Chase, and Anthem. They’ve managed to steal proprietary information on oil and gas deposits from energy companies at the very moment geological surveys were completed. They’ve swiped negotiation strategies off internal corporate networks in the run-up to major deals, and weapons systems data from defense contractors. And over the past three years intrusions into critical U.S. infrastructure—systems that control operations in the chemical, electrical, water, and transport sectors—have increased 17-fold. It’s little wonder, then, that the U.S. government has made improving cybersecurity in both public and private sectors a national priority. But, as the recent hacking of the federal government’s Office of Personnel Management underscores, it is also a monumental challenge.
The Military’s Cyberjourney
Back in 2009, the Defense Department, like many companies today, was saddled with a vast array of disparate IT systems and security approaches. Each of its three military departments, four uniformed services, and nine unified combatant commands had long functioned as its own profit-and-loss center, with substantial discretion over its IT investments. Altogether, the department comprised 7 million devices operating across 15,000 network enclaves, all run by different system administrators, who configured their parts of the network to different standards. It was not a recipe for security or efficiency.
That year, recognizing both the opportunities of greater coherency and the need to stem the rise in harmful incidents, Robert Gates, then the secretary of defense, created the U.S. Cyber Command. It brought network operations across the entire .mil domain under the authority of one four-star officer. The department simultaneously began to consolidate its sprawling networks, collapsing the 15,000 systems into a single unified architecture called the Joint Information Environment. The work has been painstaking, but soon ships, submarines, satellites, spacecraft, planes, vehicles, weapons systems, and every unit in the military will be linked in a common command-and-control structure encompassing every communication device. What once was a jumble of more than 100,000 network administrators with different chains of command, standards, and protocols is evolving toward a tightly run cadre of elite network defenders.
At the same time, the U.S. Cyber Command has been upgrading the military’s technology. Sophisticated sensors, analytics, and consolidated “security stacks”—suites of equipment that perform a variety of functions, including big data analytics—are giving network administrators greater visibility than ever before. They can now quickly detect anomalies, determine if they pose a threat, and alter the network’s configuration in response.
The U.S. Department of Defense experiences 41M scans, probes, and attacks a month.
Source: U.S. Department of Defense
The interconnection of formerly separate networks does introduce new risks (say, that malware might spread across systems, or that a vulnerability in one system would allow someone to steal data from another). But these are greatly outweighed by the advantages: central monitoring, standardized defenses, easy updating, and instant reconfiguration in the event of an attack. (Classified networks are disconnected from unclassified networks, of course.)
However, unified architecture and state-of-the-art technology are only part of the answer. In nearly all penetrations on the .mil network, people have been the weak link. The Islamic State briefly took control of the U.S. Central Command’s Twitter feed in 2015 by exploiting an individual account that had not been updated to dual-factor authentication, a basic measure requiring users to verify their identity by password plus a token number generator or encrypted chip. In 2013 a foreign nation went on a four-month spree inside the U.S. Navy’s unclassified network by exploiting a security flaw in a public-facing website that the navy’s IT experts knew about—but failed to fix. The most serious breach of a classified network occurred in 2008, when, in a violation of protocol, a member of the Central Command at a Middle Eastern base inserted a thumb drive loaded with malware directly into a secure desktop machine.
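For readers unfamiliar with the mechanics, dual-factor authentication typically pairs a password with a short-lived code from a token generator. The following is a minimal, illustrative sketch of how such a time-based code can be derived (in the style of RFC 6238); the shared secret is made up, and real deployments rely on vetted authentication products rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: the 'token number generator' half of a
    dual-factor login, derived from a shared secret and the current time."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example with a made-up shared secret; the server runs the same computation
# and accepts the login only if the submitted code matches.
print(totp("JBSWY3DPEHPK3PXP"))
```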
While the recent intrusions show that security today is by no means perfect, the human and technical performance of the military’s network administrators and users is far stronger by a number of measures than it was in 2009. One benchmark is the results of commands’ cybersecurity inspections, whose numbers have increased from 91 in 2011 to an expected 285 in 2015. Even though the grading criteria have become more stringent, the percentage of commands that received a passing grade—proving themselves “cyber-ready”—has risen from 79% in 2011 to over 96% this year.
Companies need to address the risk of human error too. Hackers penetrated JPMorgan Chase by exploiting a server whose security settings hadn’t been updated to dual-factor authentication. The exfiltration of 80 million personal records from the health insurer Anthem, in December 2014, was almost certainly the result of a “spear phishing” e-mail that compromised the credentials of a number of system administrators. These incidents underscore the fact that errors occur among both IT professionals and the broader workforce. Multiple studies show that the lion’s share of attacks can be prevented simply by patching known vulnerabilities and ensuring that security configurations are correctly set.
The clear lesson here is that people matter as much as, if not more than, technology. (Technology, in fact, can create a false sense of security.) Cyberdefenders need to create “high-reliability organizations”—by building an exceptional culture of high performance that consistently minimizes risk. “We have to get beyond focusing on just the tech piece here,” Admiral Mike Rogers, who oversees the U.S. Cyber Command, has said. “It’s about ethos. It’s about culture. [It’s about] how you man, train, and equip your organization, how you structure it, the operational concepts that you apply.”
The High-Reliability Organization
The concept of a high-reliability organization, or HRO, first emerged in enterprises where the consequences of a single error can be catastrophic. Take airlines, the air-traffic-control system, space flight, nuclear power plants, wildfire fighting, and high-speed rail. Within these highly technical operations, the interaction of systems, subsystems, human operators, and the external environment frequently gives rise to deviations that must be corrected before they become disastrous problems. These organizations are a far cry from continuously improving “lean” factories. Their operators and users don’t have the luxury of learning from their mistakes.
The annual global cost of cybercrime against consumers is $113B.
Source: 2013 Norton Report, Symantec
Safely operating technology that is inherently risky in a dangerous, complex environment takes more than investing in the best engineering and materials. High-reliability organizations possess a deep awareness of their own vulnerabilities, are profoundly committed to proven operational principles and high standards, clearly articulate accountability, and vigilantly probe for sources of failure.
The U.S. Navy’s nuclear-propulsion program is arguably the HRO with the longest track record. Running a nuclear reactor on a submarine deep in the ocean, out of communication with any technical assistance for long periods of time, is no small feat. Admiral Rickover drove a strict culture of excellence into each level of the organization. (So devoted was he to ensuring that only people who could handle such a culture entered the program that, during his 30 years at its helm, he personally interviewed every officer applying to join it—a practice that every one of his successors has continued.)
At the heart of that culture are six interconnected principles, which help the navy weed out and contain the impact of human error.
1. Integrity.
By this we mean a deeply internalized ideal that leads people, without exception, to eliminate “sins of commission” (deliberate departures from protocol) and own up immediately to mistakes. The nuclear navy inculcates it in people from day one, making it clear there are no second chances for lapses. Workers thus are not only unlikely to take shortcuts but also highly likely to notify supervisors of any errors right away, so they can be corrected quickly and don’t necessitate lengthy investigations later—after a problem has occurred. Operators of propulsion plants faithfully report every anomaly that rises above a low threshold of seriousness to the program’s central technical headquarters. Commanding officers of vessels are held fully accountable for the health of their programs, including honesty in reporting.
2. Depth of knowledge.
If people thoroughly understand all aspects of a system—including the way it’s engineered, its vulnerabilities, and the procedures required to operate it—they’ll more readily recognize when something is wrong and handle any anomaly more effectively. In the nuclear navy, operators are rigorously trained before they ever put their hands on a real propulsion plant and are closely supervised until they’re proficient. Thereafter, they undergo periodic monitoring, hundreds of hours of additional training, and drills and testing. Ship captains are expected to regularly monitor the training and report on crew proficiency quarterly.
3. Procedural compliance.
On nuclear vessels, workers are required to know—or know where to find—proper operational procedures and to follow them to the letter. They’re also expected to recognize when a situation has eclipsed existing written procedures and new ones are called for.
One of the ways the nuclear navy maximizes compliance is through its extensive system of inspections. For instance, every warship periodically undergoes tough Operational Reactor Safeguard Examinations, which involve written tests, interviews, and observations of day-to-day operations and of responses to simulated emergencies. In addition, an inspector from the Naval Reactors regional office may walk aboard anytime a ship is in port, without advance notice, to observe ongoing power-plant operations and maintenance. The ship’s commanding officer is responsible for any discrepancies the inspector may find.
4. Forceful backup.
When a nuclear-propulsion plant is operating, the sailors who actually control it—even those who are highly experienced—are always closely monitored by senior personnel. Any action that presents a high risk to the system has to be performed by two people, not just one. And every member of the crew—even the most junior person—is empowered to stop a process when a problem arises.
5. A questioning attitude.
This is not easy to cultivate in any organization, especially one with a formal rank structure in which immediate compliance with orders is the norm. However, such a mindset is invaluable: If people are trained to listen to their internal alarm bells, search for the causes, and then take corrective action, the chances that they’ll forestall problems rise dramatically. Operators with questioning attitudes double- and triple-check work, remain alert for anomalies, and are never satisfied with a less-than-thorough answer. Simply asking why the hourly readings on one obscure instrument out of a hundred are changing in an abnormal way or why a network is exhibiting a certain behavior can prevent costly damage to the entire system.
6. Formality in communication.
To minimize the possibility that instructions are given or received incorrectly at critical moments, operators on nuclear vessels communicate in a prescribed manner. Those giving orders or instructions must state them clearly, and the recipients must repeat them back verbatim. Formality also means establishing an atmosphere of appropriate gravity by eliminating the small talk and personal familiarity that can lead to inattention, faulty assumptions, skipped steps, or other errors.
Cybersecurity breaches caused by human mistakes nearly always involve the violation of one or more of these six principles. Here’s a sample of some that the Defense Department uncovered during routine testing exercises:
A polite headquarters staff officer held the door for another officer, who was really an intruder carrying a fake identification card. Once inside, the intruder could have installed malware on the organization’s network. Principles violated: procedural compliance and a questioning attitude.
A system administrator, surfing the web from his elevated account, which had fewer automatic restrictions, downloaded a popular video clip that was “viral” in more ways than one. Principles violated: integrity and procedural compliance.
A staff officer clicked on a link in an e-mail promising discounts for online purchases, which was actually an attempt by the testers to plant a phishing back door on her workstation. Principles violated: a questioning attitude, depth of knowledge, and procedural compliance.
A new network administrator installed an update without reading the implementation guide and with no supervision. As a result, previous security upgrades were “unpatched.” Principles violated: depth of knowledge, procedural compliance, and forceful backup.
A network help desk reset a connection in an office without investigating why the connection had been deactivated in the first place—even though the reason might have been an automated shutdown to prevent the connection of an unauthorized computer or user. Principles violated: procedural compliance and a questioning attitude.
Creating a High-Reliability IT Organization
To be sure, every organization is different. So leaders need to account for two factors in designing the approach and timetable for turning their companies into cybersecure HROs. One is the type of business and its degree of vulnerability to attacks. (Financial services, manufacturing, utility, and large retail businesses are especially at risk.) Another is the nature of the workforce. A creative workforce made up predominantly of Millennials accustomed to working from home with online-collaboration tools presents a different challenge from sales or manufacturing employees accustomed to structured settings with lots of rules.
It’s easier to create a rule-bound culture for network administrators and cybersecurity personnel than it is for an entire workforce. Yet the latter is certainly possible, even if a company has a huge number of employees and an established culture. Witness the many companies that have successfully changed their cultures and operating approaches to increase quality, safety, and equal opportunity.
Whatever the dynamics of their organizations, leaders can implement a number of measures to embed the six principles in employees’ everyday routines.
Take charge.
A recent survey by Oxford University and the UK’s Centre for the Protection of National Infrastructure found that concern for cybersecurity was significantly lower among managers inside the C-suite than among managers outside it. Such shortsightedness at the top is a serious problem, given the financial consequences of cyberattacks. In a 2014 study by the Ponemon Institute, the average annualized cost of cybercrime incurred by a benchmark sample of U.S. companies was $12.7 million, a 96% increase in five years. Meanwhile, the time it took to resolve a cyberattack had increased by 33%, on average, and the average cost incurred to resolve a single attack totaled more than $1.6 million.
The reality is that if CEOs don’t take cybersecurity threats seriously, their organizations won’t either. You can bet that Gregg Steinhafel, who was ousted from Target in 2014 after cybercriminals stole its customers’ information, wishes he had.
Over the past 3 years, intrusions into critical U.S. infrastructure have increased 17x.
Source: U.S. Department of Defense
Chief executives know that consolidating their jumble of network systems, as the Defense Department has done, is important. But many are not moving fast enough—undoubtedly because this task can be massive and expensive. In addition to accelerating that effort, they must marshal their entire leadership team—technical and line management, and human resources—to make people, principles, and IT systems work together. Repeatedly emphasizing the importance of security issues is key. And CEOs should resist blanket assurances from CIOs who claim they’re already embracing high-reliability practices and say all that’s needed is an increase in the security budget or the newest security tools.
CEOs should ask themselves and their leadership teams tough questions about whether they’re doing everything possible to build and sustain an HRO culture. Are network administrators making sure that security functions in systems are turned on and up-to-date? How are spot audits on behavior conducted, and what happens if a significant lapse is found? What standardized training programs for the behavioral and technical aspects of cybersecurity are in place, and how frequently are those programs refreshed? Are the most important cybersecurity tasks, including the manipulation of settings that might expose the system, conducted formally, with the right kind of backup? In essence, CEOs must constantly ask what integrity, depth of knowledge, procedural compliance, forceful backup, a questioning attitude, and formality mean in their organizations. Meanwhile, boards of directors, in their oversight role, should ask whether management is adequately taking into account the human dimension of cyberdefense. (And indeed many are beginning to do this.)
Make everyone accountable.
Military commanders are now held responsible for good stewardship of information technology—and so is everyone all the way down the ranks. The Defense Department and the U.S. Cyber Command are establishing a reporting system that allows units to track their security violations and anomalies on a simple scorecard. Before, information about who committed an error and its seriousness was known only to system administrators, if it was tracked at all. Soon senior commanders will be able to monitor units’ performance in near real time, and that performance will be visible to people at much higher levels.
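A scorecard of this kind can be as simple as a running tally of violations per unit. The sketch below is purely illustrative, with invented unit names and violation categories; it is not a description of the Defense Department's actual system.

```python
from collections import Counter, defaultdict

# Minimal sketch of a per-unit security scorecard; unit names and violation
# categories are invented for illustration.

scorecard = defaultdict(Counter)

def record_violation(unit: str, category: str) -> None:
    """Log one violation or anomaly against the unit where it occurred."""
    scorecard[unit][category] += 1

record_violation("10th Signal Bn", "unpatched_system")
record_violation("10th Signal Bn", "phishing_click")
record_violation("HQ Staff", "unauthorized_device")

# A commander's near-real-time view: totals per unit, worst offenders first.
for unit, counts in sorted(scorecard.items(), key=lambda kv: -sum(kv[1].values())):
    print(unit, sum(counts.values()), dict(counts))
```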
The goal is to make network security as much of an everyday priority for troops as keeping their rifles clean and operational. Every member of an armed service must know and comply with the basic rules of network hygiene, including those meant to prevent users from introducing potentially tainted hardware, downloading unauthorized software, accessing a website that could compromise networks, or falling prey to phishing e-mails. When a rule is broken, and especially if it’s a matter of integrity, commanders are expected to discipline the offender. And if a climate of complacency is found in a unit, the commander will be judged accordingly.
Companies should do likewise. While the same measures aren’t always available to them, all managers—from the CEO on down—should be responsible for ensuring their reports follow cybersafety practices. Managers should understand that they, along with the employees in question, will be held accountable. All members of the organization ought to recognize they are responsible for things they can control. This is not the norm in many companies.
Institute uniform standards and centrally managed training and certification.
The U.S. Cyber Command has developed standards to ensure that anyone operating or using a military network is certified to do so, meets specific criteria, and is retrained at appropriate intervals. Personnel on dedicated teams in charge of defending networks undergo extensive formal training. For these cyberprofessionals the Defense Department is moving toward the model established by the nuclear navy: classroom instruction, self-study, and at the end of the process, a formal graded examination. To build a broad and deep pipeline of defenders, the military academies require all attendees to take cybersecurity courses. Two academies offer a major degree in cyberoperations, and two offer minor degrees. All services now have schools for advanced training and specific career paths for cybersecurity specialists. The military is also incorporating cybersecurity into continuing education programs for all personnel.
Relatively few companies, in contrast, have rigorous cybertraining for the rank and file, and those that do rarely augment it with refresher courses or information sessions as new threats arise. Merely e-mailing employees about new risks doesn’t suffice. Nor does the common practice of requiring all employees to take an annual course that involves spending an hour or two reviewing digital policies, with a short quiz after each module.
Admittedly, more-intensive measures are time-consuming and a distraction from day-to-day business, but they’re imperative for companies of all sizes. They should be as robust as programs to enforce ethics and safety practices, and companies should track attendance. After all, it takes only one untrained person to cause a breach.
Couple formality with forceful backup.
In 2014 the U.S. military created a construct that spelled out in great detail its cyber-command-and-control structure, specifying who is in charge of what and at what levels security configurations are managed and changed in response to security events. That clear framework of reporting and responsibilities is supported with an extra safeguard: When security updates on core portions of the Defense Department’s network are made or system administrators access areas where sensitive information is stored, a two-person rule is in effect. Both people must have their eyes on the task and agree that it was performed correctly. This adds an extra degree of reliability and dramatically reduces the risk of lone-wolf insider attacks.
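In software, a two-person rule can be enforced by refusing to apply a sensitive change unless two distinct people sign off. The following sketch is illustrative only; the function and role names are hypothetical.

```python
# Minimal sketch of a software-enforced two-person rule for sensitive changes.
# Function and role names are hypothetical.

class TwoPersonRuleError(Exception):
    pass

def apply_sensitive_change(change: str, operator: str, verifier: str) -> str:
    """Refuse to apply a high-risk change unless two *distinct* people sign off."""
    if operator == verifier:
        raise TwoPersonRuleError("Operator and verifier must be different people.")
    # Both identities are recorded with the change for later audit.
    return f"APPLIED: {change} (operator={operator}, verifier={verifier})"

print(apply_sensitive_change("update core firewall rules", "adm_jones", "adm_smith"))
# apply_sensitive_change("update core firewall rules", "adm_jones", "adm_jones")  # raises
```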
The Department of Defense is consolidating 15,000 networks into a single unified architecture.
Source: U.S. Department of Defense
There’s no reason companies can’t also do these things. Most large firms have already aggressively pruned their list of “privileged” system users and created processes for retracting the access rights of contractors leaving a project and employees leaving the firm. Midsize and smaller enterprises should do the same.
One form of backup can be provided by inexpensive, easy-to-install software that either warns employees when they’re transferring or downloading sensitive information or prevents them from doing it and then monitors their actions. Regularly reminding employees that their adherence to security rules is monitored will reinforce a culture of high reliability.
Check up on your defenses.
In June 2015 the U.S. Cyber Command and the Defense Department announced sweeping operational tests for both network administrators and users. The military also is establishing rigorous standards for cybersecurity inspections and tightly coordinating the teams that conduct them.
Companies should follow suit here as well. While many large firms do security audits, they often focus on networks’ vulnerability to external attacks and pay too little attention to employees’ behaviors. CEOs should consider investing more in capabilities for testing operational IT practices and expanding the role of the internal audit function to include cybersecurity technology, practices, and culture. (External consultants also may provide this service.)
In addition to scheduled audits, firms should do random spot-checks. These are highly effective at countering the shortcuts and compromises that creep into the workplace—like transferring confidential material to an unsecured laptop to work on it at home, using public cloud services to exchange sensitive information, and sharing passwords with other employees. Such behavior is important to discover—and correct—before it results in a serious problem.
Eliminate fear of honesty and increase the consequences of dishonesty.
Leaders must treat unintentional, occasional errors as opportunities to correct the processes that allowed them to occur. However, they should give no second chances to people who intentionally violate standards and procedures. Edward Snowden was able to access classified information by convincing another civilian employee to enter his password into Snowden’s workstation. It was a major breach of protocol for which the employee was rightfully fired. It made many military leaders realize that an operational culture that stressed integrity, a questioning attitude, forceful backup, and procedural compliance could have created an environment in which Snowden would have been stopped cold. Such a breach of the rules would have been unthinkable in the reactor department of a navy vessel.
At the same time, employees should be encouraged to acknowledge their innocent mistakes. When nuclear-propulsion-plant operators discover a mistake, they’re conditioned to quickly reveal it to their supervisors. Similarly, a network user who inadvertently clicks on a suspicious e-mail or website should be conditioned to report it without fear of censure.
Finally, it should be easy for everyone throughout the organization to ask questions. Propulsion-plant operators are trained to immediately consult a supervisor when they encounter an unfamiliar situation they aren’t sure how to handle. Similarly, by ensuring that all employees can readily obtain help from a hotline or their managers, companies can reduce the temptation to guess or hope that a particular action will be safe.
Yes, we’re calling for a much more formal, regimented approach than many companies now employ. With cyberthreats posing a clear and present danger to individual companies and, by extension, the nation, there is no alternative. Rules and principles are needed to plug the many holes in America’s cyberdefenses.
Couldn’t companies just focus on protecting their crown jewels? No. First, that would mean multiple standards for cybersecurity, which would be difficult to manage and, therefore, hazardous. Second, the crown jewels often are not what you think they are. (One could argue that the leak of embarrassing e-mails was the most damaging aspect of North Korean hackers’ attack on Sony Pictures Entertainment.) Finally, hackers often can gain access to highly sensitive data or systems via a seemingly low-level system, like e-mail. A company needs a common approach to protecting all its data.
Technical Capability, Human Excellence
Over the past decade, network technology has evolved from a simple utility that could be taken for granted to an important yet vulnerable engine of operations, whose security is a top corporate priority. The soaring number of cyberattacks has made that abundantly clear. Technology alone cannot defend a network. Reducing human error is at least as important, if not more so. Embracing the principles that an irascible admiral implanted in the nuclear navy more than 60 years ago is the way to do this.
Building and nurturing a culture of high reliability will require the personal attention of CEOs and their boards as well as substantial investments in training and oversight. Cybersecurity won’t come cheap. But these investments must be made. The security and viability of companies—as well as the economies of the nations in which they do business—depend on it.
A version of this article appeared in the September 2015 issue (pp. 86–95) of Harvard Business Review.
James A. (Sandy) Winnefeld Jr. was the ninth vice chairman of the U.S. Joint Chiefs of Staff and an admiral in the U.S. Navy until August 2015, when he retired.
Christopher Kirchhoff is a special assistant to the chairman of the Joint Chiefs of Staff.
David M. Upton is the American Standard Companies Professor of Operations Management at the University of Oxford’s Saïd Business School.
A directive to HHS calls for a national strategy on how patients are matched with their health records.
A national strategy to advance standards for matching patients with their own health records is one step closer to reality. Language encouraging HHS to support the development of a national strategy was added to the FY 2017 omnibus spending bill approved by both houses of Congress.
The bill specifies that the Office of the National Coordinator for Health Information Technology and CMS "provide technical assistance to private-sector led initiatives to develop a coordinated national strategy that will promote patient safety by accurately identifying patients to their health information."
The new directive changes course after 19 years in which HHS was prohibited from supporting work that would lead to a “unique patient identifier” due to privacy concerns. In effect, the bill establishes patient matching as a permitted exception that should be encouraged in the market.
In recent years, it has become clear that there is a need for technology that simplifies matching patients with their health records.
Carla Smith, HIMSS executive vice president, said that “No longer is a UPI considered a credible solution… the focus has shifted to patient data matching and the need for a coordinated national strategy across the public and private healthcare sectors.”
Smith said that HIMSS and its collaborators have been active in pursuing a change in policy. Part of that effort is an "Innovator in Residence Fellowship," which HIMSS has funded since 2013. The resident fellow, Adam Culbertson, works in the Office of the HHS Chief Technology Officer and has worked to gain adoption of patient-matching strategies that are currently in development.
One of the projects supported by Culbertson is ONC's Patient Matching Algorithm Challenge, which is currently soliciting entries. Steve Posnack, director of the Office of Standards and Technology at HHS, wrote that ONC expects the challenge "will spur the development of innovative new algorithms, benchmark current performance, and help industry coalesce around common metrics for success."
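To make the idea concrete, a patient-matching algorithm typically compares demographic fields across records and scores how closely they agree. The sketch below is a minimal, hypothetical illustration of that approach; the field names, weights, and threshold are assumptions chosen for clarity, not the method used in the ONC challenge or by any particular vendor.

# Minimal, illustrative patient-matching sketch (hypothetical fields and weights).
from dataclasses import dataclass

@dataclass
class PatientRecord:
    first_name: str
    last_name: str
    dob: str        # ISO format, e.g. "1980-05-17"
    zip_code: str

def normalize(value: str) -> str:
    """Trim whitespace and lowercase so trivial formatting differences don't block a match."""
    return value.strip().lower()

def match_score(a: PatientRecord, b: PatientRecord) -> float:
    """Return a weighted agreement score between 0.0 and 1.0."""
    # Hypothetical weights: date of birth is treated as the strongest signal here.
    weights = {"first_name": 0.2, "last_name": 0.3, "dob": 0.4, "zip_code": 0.1}
    score = 0.0
    for field, weight in weights.items():
        if normalize(getattr(a, field)) == normalize(getattr(b, field)):
            score += weight
    return score

def is_probable_match(a: PatientRecord, b: PatientRecord, threshold: float = 0.75) -> bool:
    """Flag a probable match when the agreement score clears an (assumed) threshold."""
    return match_score(a, b) >= threshold

if __name__ == "__main__":
    incoming = PatientRecord("Maria", "Gonzalez", "1980-05-17", "60614")
    on_file = PatientRecord("MARIA ", "Gonzalez", "1980-05-17", "60640")  # patient has moved
    print(match_score(incoming, on_file))        # 0.9: name and DOB agree, ZIP does not
    print(is_probable_match(incoming, on_file))  # True

Production systems layer on much more, from phonetic name comparison to probabilistic models tuned on known duplicates, which is precisely why common benchmarks and shared metrics for success matter.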
Congress reiterated its prohibition against HHS supporting any standards "providing for the assignment of a unique health identifier." But the bill makes it clear that the members of the Appropriations Committee who wrote the language understand the need to find a solution.
“Accordingly, the Committee encourages the Secretary, acting through the Office of the National Coordinator for Health Information Technology and CMS, to provide technical assistance to private-sector led initiatives to develop a coordinated national strategy that will promote patient safety by accurately identifying patients to their health information,” the legislation states.
The VA waitlist scandal of a few years ago was a dark period in the agency’s history. Fallout from the scandal reverberated through the media and claimed the job of VA Secretary Eric Shinseki.
Now, three years later, the push to reform the VA and prevent a similar situation continues. Naturally, the healthcare information technology tools that employees use, including the scheduling system, are a focus. The gravity of reform, however, is also pulling in the electronic health record and other clinical components of VistA, the VA’s long-serving and varied system.
While changes to VistA are warranted and necessary, trashing the entire system because one component may be flawed makes little sense from technological or financial perspectives. The VA scheduling scandal was the product of an agency overwhelmed by veterans returning from two theaters of war. In that scenario, the scheduling system became a scapegoat for organizational and human resources challenges that were bound to manifest in one way or another.
The VA should not heed calls to replace VistA for these key reasons:
Doctors at the VA really like VistA. In Medscape’s 2016 EHR Report, VistA was the top-rated EHR overall and the most preferred solution for use as a clinical tool. It’s difficult to exaggerate the importance of the relationship between user and tool that this survey clearly identifies. VistA is an institution at the VA and was originally developed with direct input from physicians working for the agency; it was customized to meet their needs. While advocates for commercial systems argue that the dominant EHRs available today are far superior to VistA, it would be unwise—perhaps catastrophically so—to not factor in the value of familiarity and commitment among VA physicians. A new system will take a long time to customize to meet VA needs, a long time to implement and optimize, a long time to adopt, and it may create resentment. All those intangibles still have a dollar value that must be considered.
The overall cost of a replacement EHR system will be staggering. The initial value of Cerner's contract with the Department of Defense is $4.3 billion and is expected to rise to at least $9 billion; it could conceivably cost more than that. To be fair, DoD physicians didn't like the system Cerner is replacing and will probably be happy to see it go. The same cannot be said of VistA. If one crucial goal of good governance is to limit the cost of new projects to taxpayers, it's hard to make an argument for spending billions to replace a system that is well-liked and functioning effectively. How much more might a commercial system cost than enhancements to VistA? The recent VistA Scheduling Enhancement (VSE) project offers a measuring stick: by reusing existing code from a scheduling application developed at another federal agency, the VA was able to select the VSE tool over a commercially developed application estimated at 25 times the cost.
There is a private-sector solution for VistA replacement that will save billions. In the last month, the VA has floated the idea of a private company taking over responsibility for VistA code and selling the solution back to the VA as a cloud-based service. Proposals were due last week. As more than one leader at the VA has said, the agency wants to get out of the software development and maintenance business for good. VistA's usability, the fact that it was arguably custom-built for the VA, and the potential cost of a replacement make VistA as a cloud-based service a brilliant alternative. A relationship with an external source of development and support frees the VA to focus on healthcare, and it gives the agency one throat to choke when push comes to shove. Code developed for a government agency also exists in the public domain, meaning it can be used to jump-start other healthcare IT projects that contribute to greater efficiency of care while holding down costs.
The VistA EHR has never been the culprit in high-profile VA challenges. In 2014, the existing scheduling system and VA policy were to blame for falsified wait times and the deaths of veterans. Especially on the clinical functionality side, VistA was not the problem. Replacing effective VistA components to address unrelated problems smacks of throwing the baby out with the bathwater.
The VA is being held to a standard most private systems would not live up to. As Ezra Klein wrote in a column, "when they [VA] fall short, it's a scandal. But when private systems fall short, no one even knows." Indeed, the VA has made a public commitment to improving care by identifying mistakes and issues and working to see that they stop happening.
“There’s this section in my book about the VA’s pioneering effort to really show how many medical errors there were,” says Phillip Longman, a senior fellow at the New America Foundation and author of a book about VA care entitled Best Care Anywhere. “They allowed people in the VA to report medical errors anonymously and tried to create a no-blame culture … That meant publishing statistics on how many medical errors occurred at the VA. When they first did that, the press pounced on those reports.”
The fact is that mistakes are happening at all levels of healthcare, both public and private, and most of us know nothing about them. Replacing VistA with a commercial system will not, in and of itself, eliminate errors any more than it has at community hospitals.
None of this is to say that there are not issues to fix at VA. There are many. It’s past time to bring VistA up to speed with commercial products. Internal development, absent the culture that existed at system inception, is no longer effective or sufficient. But none of those truths necessitate acquisition of a very expensive commercial product.
The VA has proven, viable options in the private sector for code maintenance, modernization, development, and support. VistA code is the foundation of commercial EHR development efforts by several nonprofit organizations and at least two corporations, and versions of VistA are running national healthcare systems in several countries. These and other examples demonstrate the flexibility and adaptability of VistA as an affordable, viable alternative to hugely expensive and inflexible commercial EHRs that would inevitably have to be customized, at enormous expense, to meet VA needs anyway.
With strong arguments in VistA's favor and billions of dollars in replacement costs arguing against, VA leadership is well positioned to make a decision that both saves the agency money and sets an example for others to follow.
The movement toward sharing data from clinical trials has divided the scientific community, and the battle lines were evident at a recent summit sponsored by the Journal. On one side stand many clinical trialists, whose lifeblood — randomized, controlled trials (RCTs) — may be threatened by data sharing. On the other side stand data scientists — many of them hailing from the genetics community, whose sharing of data markedly accelerated progress in that field.
At a time when RCT funding is shrinking, trialists know that sharing data adds substantial costs to clinical trial execution; a requirement to share data might mean that fewer trials, and smaller ones, will be conducted. Many trialists also worry that complex data will be misinterpreted by people who weren’t involved in generating them, and who may therefore produce misleading results. Furthermore, journal publications are the currency of academic advancement. Researchers often invest 5 to 10 years gathering trial data, expecting to write several papers after their primary publication. An expectation that data will be shared quickly may therefore create a disincentive for conducting RCTs.
Data scientists promoting data sharing are joined by some members of the medical community, who point to abundant unpublished studies with negative results as missed learning opportunities and invitations to wasteful repetition of trials. Some proponents see resistance to data sharing as motivated purely by self-interest. As Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School, recounted, when geneticists began aggregating their data, there were notable holdouts who, fearing being scooped, withheld data and slowed the community's progress. Yet as Ewan Birney, a geneticist who codirects the European Bioinformatics Institute, noted at the summit, "once everyone has done it for a little bit of time, you will forget you had these arguments."
And everyone may have to do it soon. The National Institutes of Health now requires that grant applicants outline a data-sharing plan, as do the Cancer Moonshot, the Gates Foundation, and the Wellcome Trust. But many details need to be worked out, from incentive structures to sustain data generation, to standards for data exchange, to identification of the subset of clinical questions for which sharing is most cost-effective. Indeed, the focus of the data-sharing summit was less about whether to share data and more about how best to do so.
Perhaps the most incisive question, posed by Rory Collins, a University of Oxford epidemiologist and trialist, was the most obvious one: What problem are we trying to solve? The advancement of science depends on the open exchange of ideas and the opportunity to replicate or refute others’ findings. But will data sharing address our current system’s shortcomings in a way that advances science? For example, though it’s troubling when trials provide incomplete information about adverse events, requiring the sharing of individual patient data from every trial might not be the best way to fix that problem. A more effective solution, Collins suggests, may be publishing, alongside the primary trial results, an easily accessible appendix containing adverse-event data in tabular form. Proponents of data sharing also believe it will allow other investigators to generate new insights and hypotheses. But will such insights advance health in a way that justifies the cost?
Preliminary evidence reveals less enthusiasm than anticipated for using shared RCT data. In 2007, GlaxoSmithKline created the website clinicalstudydatarequest.com (CSDR), where data from at least 3049 trials, contributed by 13 industry sponsors, are currently available. According to the independent panel that reviewed research proposals, 177 proposals were submitted in the first 2 years, most of them for a new study and publication; yet despite substantial investment by industry sponsors, only four manuscripts have been submitted for publication thus far.1 Brian Strom, one of the panel members, noted that because industry analyzes its data so exhaustively in anticipation of intensive interrogation from the Food and Drug Administration, it's possible that nonindustry data will yield more new findings. But industry's resources also far exceed academia's, so relatively speaking, data sharing's costs for academics will be far greater.
The more substantial challenge described by investigators seeking to use others’ data, however, seems to be the burdensome nature of analyzing data behind a firewall. Rather than receiving data to analyze themselves, investigators submitted to CSDR statistical inquiries that were run by the repository’s managers, a time-consuming process. The consensus, according to Strom, was that “true data sharing would be preferable to data access on a dedicated website.”
So what is true data sharing? Does the work required to overcome these challenges, or the outputs achieved, differ when sharing is imposed from the outside rather than motivated by prospectively identified common goals? Clinical investigators, after all, have long collaborated in efforts to address unanswered questions. More than 30 years ago, for instance, Collins led the creation of the Clinical Trial Service Unit, a global consortium of trialists who sought to share data and pool their results. Though it required a tremendous time investment and endless communication among investigators to understand each other’s data sets, there was a shared sense of purpose and pride in the clinically meaningful results. For instance, though it was believed that tamoxifen reduced recurrence but did not improve survival among women with breast cancer, through a planned combined analysis with longer follow-up and more patients, the group found that there was a survival benefit — transforming the standard of care.2
Collins, therefore, firmly believes in data sharing’s benefits, but he recommends considering all the likely hitches. Comparing the relative ease of data sharing in 1995, when a cholesterol-treatment meta-analysis was prospectively planned,3 to current “clunky” and more time-consuming processes, Collins notes, “It is ironic that as a result of the data-sharing agenda and the formalization of the systems, it is now more difficult to get access to the data.” The aggregation of vast genetics databases suggests that these technical and bureaucratic challenges are growing pains. But clinical trial data sets may be sufficiently complex that streamlining data exchange will require extensive input from the data’s generators. Ideally, the trialist community will create uniform standards and data will be collected with those standards in mind.
A greater risk may be to the clinical trialist community. It’s assumed that data sharing will advance the public health, but will the public benefit if there are steep declines in the number and size of clinical trials? Though more “open” science may yield as-yet-unimagined innovations, unplanned and retrospective secondary analyses can only generate, not test, hypotheses. The type of hypothesis testing that can advance treatment of disease will always depend on active and motivated clinical trialists asking questions prospectively.
And though tinkering with data-exclusivity periods and designations of academic credit may reduce the disincentives created by data sharing, I think there is something at stake here that incentives can’t solve: our capacity to rationally weigh trade-offs as we debate how best to advance science. While the recent summit was civil and collaborative, the tenor of the broader data-sharing conversation has framed the matter as one of trialist self-interest versus public good. But such a frame vastly oversimplifies the situation — and tends to entrench people in polarized positions, articulated with righteous indignation.
The indignation of data-sharing advocates arises in part from the claim that the absence of data sharing slows the development of cures. In addition, at a political moment when promises of data democratization overshadow faith in traditional expertise, reservations about data sharing are easily dismissed as elitist — as are the experts who point out misunderstandings of a topic they’ve spent years studying. The value placed on transparency also contributes: any resistance to greater openness is branded as secrecy and deceit. Finally, the deepest (and perhaps most valid) source of moral outrage may be the sentiment that clinical trial data aren’t ours to begin with, that they should belong to the patients who put themselves at risk to participate. And in principle, patients want their data shared.
But patients also want better treatments for their diseases. And though data sharing may sometimes lead to better treatments, it may also divert limited resources to types of research that are less fruitful than RCTs, impeding the evidence generation required for improving care. The irony in the framing of this debate is that to share data in a way that advances knowledge, we must be open to one another’s experience and expertise, setting aside ideology in pursuit of more objective truths. Fulfilling this obligation, as we refine the scientific process, will require not only sharing what we find but also resisting the temptation to demonize those who see different paths to our shared goal.
While the healthcare industry debates the future of the VA's outdated EHR, the agency's acting undersecretary said the VA is still looking into all options.
While the industry waits for the Department of Veterans Affairs to pick Cerner or another off-the-shelf EHR to replace its legacy VistA electronic health record, a VA official on Friday said that fixing the maligned platform hasn't been ruled out yet. The EHR selection is one of the biggest projects the agency is working on right now, and the VA realizes it needs to modernize, said VA Acting Undersecretary for Health Poonam Alaigh, MD, at the Health Datapalooza event. "We're looking into modernizing VistA or using a commercial-off-the-shelf EMR."
“It’s crucial to modernize the system,” she said. “VA is engrained into the healthcare society. When the VA makes progress, the rest of our healthcare system makes progress.”
Technology and modernization are at the heart of the VA's goals, as evidenced by the recent launch of the Access to Care site, which gives veterans transparency into wait times and clinical care quality.
VistA has been a hot topic of debate since VA Secretary David Shulkin announced the agency would make a decision on its future by July. From keeping VistA and shifting to open source to canning VistA and moving to Cerner, there are many viable options for the outdated system.
In the healthcare industry, however, many expect Cerner will get the contract, especially in the wake of its deal to modernize the Department of Defense’s system.
Black Book, which rates tech platforms, recently said Cerner would be the right choice for the agency, as well.
However, there have been outliers. Open source advocates, for example, think the VA should just improve VistA using open source code. At the same time, a Chilmark Research executive said picking any vendor other than Cerner would be a better choice if the agency wants to advance interoperability.
The VA is expected to make its decision on whether to seek an off-the-shelf product in July.