healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

National Biodefense Strategy: Protect the Nation Against all Biological Threats – HHS

Posted by timmreardon on 09/20/2018
Posted in: Uncategorized. Leave a comment

September 18, 2018

By: Robert P. Kadlec, MD, MTM&H, MS, HHS Assistant Secretary for Preparedness and Response

Summary:

As a nation, public and private partners must work closely together to plan adaptively for current and emerging biothreats.

Today, the White House and four federal departments unveiled a comprehensive National Biodefense Strategy to make America safer against modern biological threats to the United States. In the 21st century, biological threats are increasingly complex and dangerous, and that demands that we act with urgency and singular effort to save lives and protect Americans.

Whether a natural outbreak, an accidental release, or a deliberate attack, biological threats are among the most serious we face, with the potential for significant health, economic and national security impacts.  Therefore, promoting our health security is a national security imperative.

The strategy released today not only establishes the U.S. government’s vision for biodefense, it also prioritizes and coordinates federal biodefense activities and budgets. In meeting the aggressive demands of the National Biodefense Strategy, as directed by President Trump’s National Security Presidential Memorandum, we will improve the nation’s readiness and response capabilities to combat 21st century biological threats to humans, animals, agriculture and the environment.

Photo: National Disaster Medical System training

Combating 21st century biothreats requires a whole-of-nation approach

Biodefense entails a range of coordinated actions to counter biothreats, reduce risks, and prepare for, respond to, and recover from incidents. As biothreats continue to evolve, so must our biodefense capabilities.

Coordination of such complex actions requires a sound strategy, commitment, and governance structure. As an initial step in implementing the strategy, leaders from every federal department involved in biodefense formed a steering committee, led by Secretary Azar, that provides strategic guidance in preparing for, countering, and responding to biological threats. I am honored that Secretary Azar asked me to lead the day-to-day coordination team that supports this committee in improving biodefense readiness. As the assistant secretary for preparedness and response, my office routinely coordinates federal preparedness, response and recovery efforts to address the healthcare and public health impacts of public health emergencies and other disasters, including bioincidents.

The National Biodefense Strategy’s coordination team also will engage state, local, tribal and territorial governments, as well as private and international partners as appropriate. Coordinating federal activities and budgets across the full spectrum of biodefense sectors and activities represents a monumental step forward, but being truly successful will require a whole-of-nation approach, with government agencies at all levels and non-government stakeholders playing important roles in providing support and guidance. As a nation, public and private partners must work closely together to plan adaptively for current and emerging biothreats, whether they stem from terrorist groups, pandemics, natural disasters, or even rogue nation states.

Seventeen years ago today, our nation experienced a biological attack with anthrax mailed in letters, killing five people and injuring 17, and costing an estimated $6 billion in cleanup and lost revenue. Those historic attacks, the more recent Ebola outbreaks, and the emergence of potentially deadly influenza viruses demonstrate how severe biological threats, both man-made and natural, can be and how they continue to evolve. By meeting the call to action in the National Biodefense Strategy, we become better equipped to protect this nation against all biothreats.

Article link: https://www.hhs.gov/blog/2018/09/18/protect-the-nation-against-all-biological-threats.html

THE STATE OF DATA SHARING AT THE U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES – HHS

Posted by timmreardon on 09/20/2018
Posted in: Uncategorized. Leave a comment

 

ELECTRONIC HEALTH RECORDS: Clear Definition of the Interagency Program Office’s Role in VA’s New Modernization Effort Would Strengthen Accountability – GAO

Posted by timmreardon on 09/18/2018
Posted in: Uncategorized. Leave a comment


Congress Doesn’t Know Who’s in Charge of VA’s $10 Billion Health Records Overhaul – Nextgov

Posted by timmreardon on 09/18/2018
Posted in: Uncategorized. Leave a comment

September 14, 2018

Lawmakers are trying to figure out what to do about the lack of accountability within the program’s leadership.

The office created to ensure health record interoperability between the Veterans Affairs and Defense departments will cripple the agencies’ latest multibillion-dollar overhaul efforts if it doesn’t change its role, according to a congressional watchdog.

“Based on the [Interagency Program Office]’s past history, I think it’s evident they never had the clout to mediate and resolve issues between VA and DOD as it relates to interoperability,” Carol Harris, director of IT acquisition management issues at the Government Accountability Office, said Thursday. “If the IPO continues the way it’s operating today, we are going to continue to have dysfunction moving forward.”

The IPO was created by Congress in 2008 to act as a single point of accountability for the two agencies’ electronic health record exchange efforts. But after VA wasted more than $1 billion over six years on failed modernization attempts, Congress is re-evaluating the office’s worth.

In the inaugural hearing of the House Veterans’ Affairs Technology Modernization subcommittee, lawmakers on Thursday questioned whether IPO can hold officials’ feet to the fire as VA embarks on a 10-year, $10 billion EHR overhaul.

And at least for the time being, witnesses said, the answer is no.

Projects of this size are only successful if there’s a single “executive-level entity” calling the shots and taking the fall if things go wrong, Harris told lawmakers. That means deputy secretaries should be leading the program, she said, but instead, agencies are relying on a convoluted web of governance boards and steering committees to do so.

“Accountability has been so diffused that when the wheels fall off the bus, you can’t point to a single entity that’s responsible,” Harris said at the hearing. “When everyone’s responsible, no one’s responsible. That’s [what] led us to where we are today.”

Even IPO Director Lauren Thompson conceded the office can’t fulfill its responsibilities without more funding and manpower. Still, she and Harris agreed IPO could play a valuable role in measuring performance and keeping each agency in the loop on the other’s progress.

Shortly before the hearing, GAO published a report recommending VA clearly describe the role of IPO in overseeing the latest project with Cerner Corp. Harris also suggested Congress consider revising legislation to make the office’s role in the program more advisory than decision-making.

But lawmakers seemed skeptical that the government’s two biggest bureaucracies would take orders from any third-party entity, regardless of the duties it’s given.

“I think from day one, we made a terrible mistake … in not saying to both these major players ‘one of you is in charge,’” said Rep. Mike Coffman, R-Colo. “I don’t think this is doable. I think we’re going to waste more taxpayer dollars.”

The hearing came two weeks after two top officials left VA’s electronic health record modernization office citing conflicts with agency leadership. John Windom, who became the organization’s acting director after its previous chief resigned, told lawmakers VA expects turnover in leadership and no single person will make or break the overhaul.

VA also will likely face significant technical setbacks in standing up the new system.

Three years after signing a multibillion-dollar contract to modernize its own electronic health records system, the Defense Department is struggling to implement the system amid significant operational challenges. The Cerner platform performed so poorly during the Pentagon’s first three field tests that officials decided to scrap plans to test at a fourth facility. Since then, officials said they’ve ironed out many of the technical issues and today the system is showing “measurable success.”

Despite the many impediments that lie ahead, subcommittee leaders reiterated their commitment to seeing the project across the finish line.

Chairman Jim Banks, R-Ind., underscored the fact that it’s rare for a program to have such intense congressional oversight from its inception, suggesting the added scrutiny might prevent both agencies from getting too far off course.

“I think this project has great promise,” said subcommittee ranking member Conor Lamb, D-Pa. “We need to focus on accountability. That’s something that can be difficult to track in an agency as large as the VA.”

Article link: https://www.nextgov.com/it-modernization/2018/09/congress-doesnt-know-whos-charge-vas-10-billion-health-records-overhaul/151278/

Why the government must help shape the future of AI – Brookings

Posted by timmreardon on 09/17/2018
Posted in: Uncategorized. Leave a comment

Rapid advances in artificial intelligence (AI) are raising serious ethical concerns. For many workers who have not seen significant wage growth in decades, AI represents a potential threat to the jobs on which they depend, and its potential interaction with the effects of globalization is alarming. Thoughtful observers worry about its capacity to intensify concentrations of public and private power, increase information asymmetries, and diminish transparency—all at the expense of citizens. In these circumstances, the significance of individual consent—one of the hallmarks of a free society—is called into question.

The developers of AI in the private sector are aware of these issues, and they have begun to develop codes to regulate their own activities. For example, Microsoft has laid out six principles for the AI systems it is creating: fairness, safety and reliability, privacy, inclusion, transparency, and accountability. Each of these principles, in turn, will need to be specified and applied to a range of cases.[1] Google has done the same thing through its “Responsible Development of AI” process.[2] Many other companies are considering ethics codes designed to guide their corporate decision-making.

But it is not a simple matter to apply these principles to artificial intelligence. Take fairness as an example. It will require systematic efforts to ensure that the data from which AI programs can “learn” is representative of the relevant population. It will also require the capacity to distinguish between algorithm-driven decisions based on statistical regularities and individual determinations. Local bankers often make loan decisions relying on their knowledge of the character of individual borrowers, many of whom might not qualify for loans if they had to comply with the standards of regional and national financial institutions.
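
The first of those requirements, checking that training data actually mirrors the relevant population, can be made concrete. The sketch below is a minimal, hypothetical illustration: it compares the share of each demographic group in a training sample against a reference distribution and flags groups that fall short by more than a chosen tolerance. The group labels, reference shares, and the five-point tolerance are assumptions for illustration, not a standard.

from collections import Counter

def representation_gaps(training_groups, reference_shares, tolerance=0.05):
    # Flag groups whose share of the training data falls short of their share
    # of the reference population by more than `tolerance` (0.05 = 5 points).
    counts = Counter(training_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Illustrative data only: one group label per training record.
training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
reference_shares = {"A": 0.60, "B": 0.25, "C": 0.15}
print(representation_gaps(training_groups, reference_shares))
# {'C': (0.05, 0.15)} -> group C is badly under-represented in the training data.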

Some kinds of fairness cannot be reduced to rules; not even the combination of AI learning and rich, statistically representative data sets can exhaust this important norm. As the bank lending example shows, the universal application of probabilistic generalizations can generate its own kind of unfairness when it excludes individuals who don’t measure up on paper but can meet the performance standards the generalizations were intended to represent.

Other principles raise different problems. What does transparency mean as applied to autonomous systems whose creators cannot predict what these systems will do as they learn from new data, including feedback from previous conclusions? What does privacy mean when these systems can not only monitor individuals but prompt (or as the behavioral economists would say, nudge) new actions based on statistical inferences from their past behavior?

Beyond these unavoidable questions are even larger issues. How far can self-regulation go in the private sector? Under what circumstances should the public sector step in? And when it does, what are the relevant ethical principles? This policy brief will explore these issues using three case studies: facial recognition systems, self-driving vehicles, and lethal autonomous weapons. I use these examples to illustrate the ethical challenges of AI and the need to clarify our thinking in this area.

Facial recognition

In 1791, the British philosopher and social reformer Jeremy Bentham published a proposal for a new prison he called the Panopticon, designed to allow a single guard to watch all the inmates without being visible to them. The idea was considered fanciful, and Bentham died embittered that the government had failed to accept it. But it became a potent symbol for the dystopian prospect of universal surveillance.

Today, with the development of computer-assisted facial recognition, this prospect has become all too real. The Chinese government has the capacity to track the movements of many individuals living under its jurisdiction. For societies that cherish liberty and privacy, this new capability raises deep ethical challenges—for the experts that create it, for the businesses that sell it, and for the governments that must decide how to use and regulate it.

In July of this year, Brad Smith, the president of Microsoft, responded with an urgent plea. On the one hand, he said, some emerging uses are positive: imagine the police being able to locate a kidnapped child by recognizing her as she is being hustled down the street by her abductor or being able to single out a terrorist from the crowd at a sporting event. But other potential applications are chilling, he adds: “Imagine a government tracking you everywhere . . . without your permission or knowledge. Imagine a database of everyone who attended a political rally, [an activity] that constitutes the very essence of free speech.”[3]

“Facial recognition raises a critical question,” he insists: “What role do we want this type of technology to play in everyday society?” Critics of surveillance sometimes invoke a norm, the “reasonable expectation of privacy.” But the idea of privacy does not fit comfortably with inherently observable activities in spaces such as public streets. And individuals who seek and benefit from publicity can hardly complain if they are recognized in public places.

A more appropriate norm, I suggest, is a reasonable expectation of anonymity. If we are going about our business in a lawful way, public authorities should not use facial recognition systems to identify and track us without a justification weighty enough to override the presumption against doing so, and this process should be regulated by law. Identification of specific individuals should require the equivalent of a search warrant, which for most purposes is authorized only for probable cause. Mere suspicion is not enough.

If a crime has been committed, the presumption shifts toward a generic search for the perpetrators. Facial recognition systems may be used, for example, to identify individuals fleeing the scene of a bank robbery. Some may turn out to be innocent victims fearing for their lives; others may be the robbers themselves. A similar standard—with a broader catchment area—governs the response to a terrorist attack. Because in such instances there are reasons to fear a conspiracy extending beyond the individuals who carried out the attack, the use of facial recognition systems would be legitimate well beyond the scene of the crime, as would monitoring the usual suspects. Sometimes the Casablanca standard makes sense.

These issues would be urgent even if the technology were perfect, but it is far from that. Recent studies show that as of now it works better for men than women and for people of lighter complexion than for people of color. The danger of false positives pervaded with systematic biases cannot be ignored.
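
One way to make that concern measurable is to report error rates per demographic group rather than a single overall accuracy figure. The sketch below is purely illustrative; the group labels and counts are invented, not taken from the studies the author cites. It shows how a system can look acceptable in aggregate while producing a much higher false-positive rate for one group.

def false_positive_rate_by_group(records):
    # records: iterable of (group, is_true_match, system_flagged_match).
    # Returns each group's false-positive rate: the share of true non-matches
    # that the system incorrectly flagged as matches.
    stats = {}
    for group, is_match, flagged in records:
        fp, negatives = stats.get(group, (0, 0))
        if not is_match:
            negatives += 1
            if flagged:
                fp += 1
        stats[group] = (fp, negatives)
    return {g: (fp / neg if neg else 0.0) for g, (fp, neg) in stats.items()}

# Invented example: similar totals, very different per-group error rates.
records = ([("lighter", False, False)] * 95 + [("lighter", False, True)] * 5 +
           [("darker", False, False)] * 80 + [("darker", False, True)] * 20)
print(false_positive_rate_by_group(records))
# {'lighter': 0.05, 'darker': 0.2}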

This risk matters because most of us tend to give great weight to technological innovations. Sometimes—with regard to DNA testing, for example—this deference makes sense, but often it doesn’t. As a historical illustration, phrenology was once widely used by both prosecutors and defense attorneys in criminal trials. Until it can be demonstrated that facial recognition systems are more accurate and less biased than human eyewitnesses, their use for legal and other official purposes is suspect, because the evidence they generate is too likely to enjoy excessive credibility.

Self-driving vehicles

You’re driving at the speed limit on a two-lane suburban road when a ball bounces into the street with a child in hot pursuit. There’s an oncoming car in the other lane, and there’s not enough time to stop short of the child. What do you do?

If you’re a normal human being, you want to do everything possible to avoid hitting the child, but not at the cost of your own life. Your options include swerving left across the lane going in the opposite direction, minimizing the chance of hitting the child but at the risk of being sideswiped if you don’t clear that lane in time; or swerving right, increasing the chances of hitting the child if your momentum carries you too far forward.

This assessment assumes that the other car will remain in its lane while braking. But it would be reasonable for the other driver to fear that the child might continue to run across the road. To minimize the risk to the child, this driver swerves right, increasing the chances of a collision if the other driver swerves left. The optimal strategy depends on the interaction between the drivers.

Now vary the example slightly: assume that you are the parent of the child who runs into the street. In this circumstance, you may well choose to risk sacrificing your own life to save your child. If so, the optimal strategy is the left swerve, whatever the risks of colliding with the oncoming car.

Vary the example again: you’re driving with your child buckled into a car seat in the rear of your car when a child you don’t know runs into the street. Are you morally required to be neutral between the life of your child and that of a stranger? If not, the optimal strategy is the right swerve.

One more example: you’re driving with your child when two other children run into the street. Do numbers affect the moral judgment? And even if they do, can they outweigh the special responsibility you have for your own child?

I have multiplied these examples to underscore the kinds of challenges facing the designers of autonomous vehicles. First, because interactions between and among vehicles matter, either government or an industry consortium must establish a protocol across models and makers that governs interactions in the widest possible range of cases. One possibility would be a system of receptors and transmitters in every vehicle that instantly communicates responses to problematic situations and permits real-time coordination between vehicles.
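
The author leaves that protocol unspecified. As a purely hypothetical sketch of the kind of interchange a cross-manufacturer standard might define, the snippet below describes a broadcast message a vehicle could transmit when it detects a hazard, so that nearby vehicles can factor the sender's intended maneuver into their own decisions. All field names and values are illustrative assumptions.

from dataclasses import dataclass, asdict
import json
import time

@dataclass
class HazardBroadcast:
    # Hypothetical vehicle-to-vehicle message for coordinating evasive maneuvers.
    vehicle_id: str         # anonymized sender identifier
    timestamp: float        # seconds since epoch
    hazard_type: str        # e.g. "pedestrian", "stopped_vehicle"
    hazard_position: tuple  # (latitude, longitude) of the detected hazard
    planned_maneuver: str   # e.g. "swerve_left", "swerve_right", "brake"
    confidence: float       # sender's confidence in the detection, 0.0 to 1.0

    def to_wire(self) -> str:
        # Serialize for broadcast; a real standard would also authenticate the payload.
        return json.dumps(asdict(self))

msg = HazardBroadcast("veh-042", time.time(), "pedestrian",
                      (38.8977, -77.0365), "brake", 0.93)
print(msg.to_wire())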

Second, programming decisions necessarily will encode answers to the ethical choices I’ve posed. These answers must be explicit, not tacit. And car makers alone are not entitled to make these decisions, which instead must reflect public discussion and debate.

Third, because specific circumstances matter, autonomous vehicles must be able to receive and deploy relevant information to the greatest extent possible. If society decides that numbers or special relationships make an ethical difference, then the vehicle’s control system must be aware of them. This may require the installation of sensors and even facial recognition devices as elements of the autonomous driving package.

Finally, the deployment of autonomous vehicles will tilt the balance of liability away from car owners toward their manufacturers. Individuals cannot be held responsible for programming defects—or decisions—which they have no ability to diagnose. As Bryant Walker Smith has argued, the existing body of product liability law could easily be adjusted to accommodate the issues autonomous vehicles will raise.[4] This must be done before these vehicles are deployed, as part of the regulatory process that governs their introduction. It would be burdensome and unfair to force individual plaintiffs to go to court to establish the standards that government has the responsibility to lay down in advance.

Autonomous weapons

As the laws of war have long recognized, the decision to deprive other human beings of life raises the gravest ethical questions and warrants the greatest degree of care. When human beings interact with technology as they make these decisions, new issues arise.

Consider the following case. In 1988, the U.S.S. Vincennes, a guided missile cruiser operating in the Persian Gulf, shot down an Iranian passenger jet, killing all 290 people on board. The plane’s course, speed, and radio signal all indicated, correctly, that it was a civilian aircraft. But the ship’s Aegis system, which had been programmed to target Soviet bombers, misidentified it. Despite the evidence from standard indicators, not one of the 18-member Aegis crew was willing to challenge the computer, and so they authorized the firing of the missile that brought down the Iranian plane. The result was a human tragedy that damaged the reputation of the U.S. military and drove the already poisonous relationship between the U.S. and Iranian governments to a new low.[5]

The Aegis system was not fully autonomous, of course. The Vincennes’ commanding officer had the ultimate responsibility to authorize the strike. But this case highlights the undue deference we tend to give the technology we create, even when the evidence of our senses contradicts it. Investing technology with the power to act in specific cases without human review and authorization would only heighten the danger.

Many current procedures recognize this risk. For example, not only do human operators direct unmanned drones, but also the Obama White House and the Department of Defense created an elaborate protocol to guide decisions about strikes on specific targets. The basic laws of war—distinction, proportionality, non-combatant immunity, etc.—were observed. The target had to be identified accurately beyond a reasonable doubt. In addition, domestic laws and norms had to be weighed—for example, the fact that Anwar al-Awlaqi, who influenced young people to become terrorists, was a U.S. citizen. Decision-makers had to balance the facts and circumstances of each case to reach an all-things-considered judgment.

There is a difference between lethal weapons and weapons directed against non-human targets such as unmanned weapons. But this distinction does not fully resolve the ethical issue, because the target must be accurately identified as unmanned. A fully autonomous anti-missile system must determine that an incoming object is a missile rather than (say) a friendly aircraft. As the Vincennes episode shows, the ability of a specific technology to do this cannot be taken for granted and must be demonstrated with high probability before non-lethal autonomous systems are deployed.

Four major reservations have been raised against the deployment of fully autonomous lethal weapons. The first is the broad claim that “machine programming will never reach the point of satisfying the fundamental ethical and legal principles required to field a lawful autonomous lethal weapon.”[6] This would be true if human decisions in complex cases turn out to be non-algorithmic, as more than one moral theory suggests. If the weighing and balancing of often-competing factors, empirical and moral, occurs within a framework of rules but is not determined by them, then all-things-considered judgments would be irreducibly case-specific. If so, even programs capable of learning from feedback and other evidence would never fully replace human decision-making and, as Kenneth Anderson and Matthew Waxman put it, no autonomous system could ever pass an “ethical Turing Test.”

To the extent that this is an empirical question, it is safe to say that we are far from knowing the answer. Until we do, we should refrain from deploying fully autonomous weapons and ensure that a human being remains in ultimate command.
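
In engineering terms, keeping "a human in ultimate command" can be read as a gate that never authorizes force on its own: at most, the system forwards a recommendation, and only when the target classification is unambiguous. The sketch below is a toy illustration of that idea; the target categories and the confidence threshold are assumptions, not doctrine.

def engagement_recommendation(target_class, confidence, threshold=0.99):
    # Returns advice for a human operator; the function never issues a firing
    # decision on its own authority. Only unmanned target classes with very
    # high classification confidence are even recommended for engagement.
    unmanned_classes = {"ballistic_missile", "cruise_missile", "uncrewed_drone"}
    if target_class in unmanned_classes and confidence >= threshold:
        return "RECOMMEND ENGAGE - awaiting human authorization"
    return "HOLD - defer to human operator for identification"

print(engagement_recommendation("ballistic_missile", 0.995))
print(engagement_recommendation("civilian_aircraft", 0.70))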

For some critics of these systems, however, the ultimate issue is not empirical but moral. It is per se wrong, they argue, to remove human beings from the decision. We are more than rational calculators. Our ability to experience pleasure and pain, to understand the sentiments of others, to feel empathy and compassions—these are features of our humanity that we must bring to bear on our practical judgments if they are to be adequate to the full range of moral claims. We have no reason to believe that any man-made system will ever share these aspects of our inner life. If not, it is a moral mistake to delegate life and death decisions to such a system.[7]

Anderson and Waxman’s rejection of this moral argument is instructive but not dispositive. It is true, as they say, that we probably will turn over more and more functions with life and death implications to autonomous machines as their capacities increase and that our basic notions about decision-making will evolve accordingly. The correct moral question is not whether machines are just the same as humans but whether they can meet the appropriate standards of conduct—for lethal autonomous weapons, the laws of war, not some abstract moral theory. “What matters morally,” they conclude, “is the ability consistently to behave in a certain way and to a specified level of performance. The ‘package’ it comes in, machine or human, is not the deepest moral principle.”

To bolster their case, Anderson and Waxman offer examples of activities—automatic robotic surgery and self-driving vehicles—where the concept of attaining a “specific level of performance” makes intuitive sense. If this technology attains better results in delicate operations ranging from brain and prostate surgery to reattaching severed limbs, then using it is certainly defensible. At some point, not using it might come to be considered malpractice. After all, measures such as post-operative survival, complications, and recovery of function are objective. And considering the bedside manners of many surgeons, their patients might prefer a mute machine.

As we have seen, self-driving cars raise complex moral issues. Nonetheless, as with surgical techniques, we can measure their performance against widely accepted standards. If these vehicles get into fewer accidents and kill or injure fewer people than their human-operated counterparts, the prima facie case for permitting or even preferring them would be strong. At some point, drivers with safety records worse than that of autonomous vehicles might be required to undergo further training or even to relinquish the wheel.
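
The widely accepted standard here is usually a rate comparison: crashes, injuries, or fatalities per distance traveled, computed the same way for autonomous vehicles and for human drivers. A toy calculation with invented figures shows the form such a comparison takes.

def per_100_million_miles(events, miles):
    # Normalize a count of crashes or fatalities to a rate per 100 million miles.
    return events / miles * 100_000_000

# Invented figures for illustration only.
human_rate = per_100_million_miles(events=1_200, miles=1.0e11)   # 1.2 per 100M miles
autonomous_rate = per_100_million_miles(events=8, miles=1.0e9)   # 0.8 per 100M miles
print(f"human: {human_rate:.2f}  autonomous: {autonomous_rate:.2f} (per 100M miles)")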

These examples raise a philosophical question that goes back to Plato’s “Republic:” is it correct to regard all human activities as technical skills? Plato offers powerful arguments for a negative answer. Moral agency may involve certain skills, such as the capacity to reason well, but it is more than the ensemble of these skills. In this respect, the practice of law is more like morality than surgery.

The application of broad legal principles to specific cases is far from a mechanical process and involves more than logical deduction. Often it involves choices between analogies: if the application of the law to cases A and B is settled, is the disputed case of C more like A or like B? Non-algorithmic insight may be the best way of making such choices, even when it cannot persuade those who see things differently. And the capacity for seeing things clearly often requires the ability to feel as well as think. The distinctiveness of human agency is built into our understanding of moral conduct.

A third argument against autonomous weapons is that they weaken systems of accountability by defusing responsibility. If mistakes occur that violate the laws of war or widely held moral norms, who is responsible—soldiers on the battlefield, commanders who chose to deploy the weapon, the designer who programmed it, the law-makers who authorized and funded it? All of the above, or none of the above? Singling out any link in this chain seems unfair. But if everyone shares responsibility in theory, no one will be held responsible in practice.

Anderson and Waxman worry that focusing on individual accountability will end up blocking the development of systems that reduce actual harms to soldiers and civilians. And besides, they argue, the laws of armed conflict are enforced principally against state actors, not individuals. Analogies to criminal law are at best misleading.

The counterargument is that the most egregious violations of war conventions have been enforced against individuals to great effect. The Nuremberg trials held individuals responsible for specific decisions, not the entire German nation. The German people decided for themselves to accept a measure of collective responsibility, but as a moral not legal matter. The U.S. military holds individuals on the front line and up the chain of command responsible for acts of commission, and for acts of omission when the duty to act was clear or when the results of inaction were reasonably foreseeable. The military’s practice reflects not only moral intuitions about the nature of responsibility but also a pragmatic judgment of how best to deter unwanted activities.

Consider the analogy of corporations that paid large fines for misconduct during the run-up to the Great Recession. It did not escape citizens’ attention that the executives who authorized and presided over the culpable behavior did not pay an individual price: most kept their jobs, and many of them actually received generous bonuses. If the law had made it clear in advance that they would be held personally responsible, they would have had a powerful incentive to stay on the right side of what was, after all, a pretty bright line.

The final main argument against lethal autonomous weapons is that reducing the risks faced by human soldiers weakens an important disincentive to the use of armed force. Anderson and Waxman dismiss this claim on the ground that it treats soldiers as mere means to pressure political leaders. If this tactic fails and conflict ensues, soldiers will die whose lives could have been spared if autonomous weapons had been deployed.

This argument is not without force, but it cuts both ways. In the wake of the Vietnam war, the United States abandoned the military draft in favor of the All-Volunteer Force (AVF). If we had adopted the AVF in the 1950s, the war in Vietnam might have lasted as long as the war in Afghanistan—seventeen years, with no end in sight. A war fought by draftees is sustainable if the American people remain united in its support, as they did throughout World War II. But when substantial portions of the people come to question a war’s practicality or morality, controversies about the draft will put pressure on civilian leaders to change course. Many people believe (I’m one of them) that this direct nexus between war and the people’s will is good for democracy. When the armed forces become remote from the experience of ordinary citizens and their elected representatives, leaders can afford to downplay the absence of popular authorization for the use of force.

Although there are many ethical reasons to proceed cautiously with the deployment of lethal autonomous weapons, there are practical considerations on the other side that may prove decisive. As retired USAF Gen. Charles Dunlap pointed out during a recent Brookings panel, the United States is not acting in a vacuum. We have adversaries, and they get a vote. If they rush to deploy these weapons, we may have no choice but to respond.

There may be a compromise that makes sense, all things considered. The arguments in favor of installing and using Israel’s Iron Dome are compelling, mostly because the system is entirely defensive. It kills missiles, not people, except by rare accident. Failing to develop the autonomous weapons that can protect our armed forces against those of an adversary makes no sense from either a moral or military point of view.

The line between offense and defense is not so clear, of course. A standard objection to anti-ballistic missile systems is that they may encourage nations that deploy them to believe that they can undertake offensive actions with relative impunity. Still, leaders—especially in democracies—will have a hard time explaining their failure to take feasible steps to protect their armed forces and civilian populations. Appeals to abstruse deterrence theories will fall flat. If North Korea demonstrates the capacity to stage a ballistic missile attack on the United States, then accelerating the development of an effective ABM system is not optional. Because promoting the public’s safety and security is the first duty of political leaders, failing to do so is a breach of their moral compact with the people.

Over the next decade, events will determine the extent to which the responsibility to defend the American people drives the development of defensive autonomous weapons. We will also find out the extent to which ethical reservations about the development and deployment of these weapons for offensive purposes shapes the next phase of our security strategy.

Conclusion

In another essay, Darrell M. West discusses the norms and practices corporations can use to help guide their development of new AI technologies.[8] It is an impressive list, the adoption of which would reflect a high level of ethical self-awareness among some of America’s largest and most important companies.

Self-regulation is a necessary component of a system of ethical guidance for AI, but the case studies discussed in this paper suggest that it will not be sufficient. National defense is a quintessentially public function, and the decision to deploy AI-directed weapons must be made through an accountable political process. Facial recognition systems raise policy issues that cannot be relegated to the private sector. Which uses of these systems breach norms of privacy and anonymity? What is their evidentiary status in criminal trials? Does their deployment constitute an unacceptable concentration of power, whether in the private or public sector? It is conceivable that the makers of autonomous vehicles and their associated guidance systems might adopt voluntary standards. But even here, history suggests that agreements of this sort will need a public backstop to be effective.

A thoughtful private-sector leader concurs. While acknowledging the progress the private sector has made toward developing principles and practices of self-regulation, Brad Smith regards this effort as inherently limited. “If there are concerns about how a technology will be deployed more broadly across society,” he declares, “the only way to regulate this broad use is for government to do so.”

In support of this conclusion, Microsoft’s president offers three reasons. First, in a democratic republic, self-regulation addressing matters of broad public concern is an inadequate substitute for laws ratified by the people’s elected representatives.

Second, competitive dynamics are likely to undermine self-regulatory regimes. Even if some companies adhere to voluntary standards, the problem will remain if others refuse to go along or break ranks, which they will have a powerful incentive to do. Only the force of law can provide a level playing field so that the practices of ethical actors are not nullified by the self-interest of others.

Third, there are many markets—autos, air safety, foods, and pharmaceutical products, among others—where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike. “A world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards,” Smith insists.

In the past, our society has allowed new technologies to diffuse widely without adequate ethical guidance or public regulation. As revelations about the misuse of social media proliferate, it has become apparent that the consequences of this neglect are anything but benign. If the private and public sectors can work together, each making its own contribution to an ethically aware system of regulation for AI, we have an opportunity to avoid past mistakes and build a better future.

Article link: https://www.brookings.edu/research/why-the-government-must-help-shape-the-future-of-ai/

Author

William A. Galston

Ezra K. Zilkha Chair and Senior Fellow – Governance Studies

 

Footnotes

  1. Microsoft, The Future Computed: Artificial Intelligence and its Role in Society, 2018.
  2. Google, “Responsible Development of AI,” 2018.
  3. Brad Smith, “Facial recognition technology: The need for public regulation and corporate responsibility,” https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/
  4. Bryant Walker Smith, “Automated Driving and Product Liability,” 2017 Mich. St. L. Rev. 1.
  5. Summary based on Shane Harris, “Out of the Loop: The Human-Free Future of Unmanned Aerial Vehicles,” Koret-Taube Task Force on National Security and Law, 2012.
  6. See Kenneth Anderson and Matthew C. Waxman, “Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can,” American University Washington College of Law Research Paper No. 2013-11, for discussion of this argument and the three that follow.
  7. For a forceful statement of this position, see Peter Asaro, “On banning autonomous weapons systems: human rights, automation, and the dehumanization of lethal decision-making,” International Review of the Red Cross 94:886 (Summer 2012).
  8. Darrell M. West, “How to Address AI Ethical Dilemmas,” Brookings Institution paper, September 13, 2018.

Despite the Hype, Big Data Might Not Be What You Need – ReadWrite.com

Posted by timmreardon on 09/17/2018
Posted in: Uncategorized. 1 Comment

Brad Anderson / 14 Sep 2018

The big data trend continues, and more and more companies are hopping on the bandwagon. While many organizations assume they need big data’s wisdom, often the “small” operational data they already have will do just fine.

Operational data is internal data, such as the data that gives Uber its ability to dispatch cars. Big data, in comparison, is information collected in high volume and at high velocity. It’s occasionally collected internally, but purchasing it remains a more common practice.

If you don’t truly need big data, embedded business intelligence company Exago explains why you may regret pursuing it: “The trouble is, big data is notoriously difficult to wrangle on account of its size and complexity. Setting aside for the moment that many enterprises have to purchase access to big data they don’t produce themselves, the process of grooming that data for reporting and analysis can be prohibitively expensive.”

In addition, just because a company purchases fancy new analytics tools and huge volumes of data to go along with them doesn’t mean they have a clue about how to extract the pearls of insight from the oysters. Mining data for actionable information requires attentive management, accurate analysis, and continuous adjustment, and buying software and raw data doesn’t provide companies with the skills necessary to master these processes overnight.

This year, the way organizations gather and use data has been under something of a microscope. Facebook has taken most of the heat, but others have been scrutinized as well. Despite this criticism, big data does still offer invaluable insights for some situations and challenges — what you need to figure out is whether yours are among them.

If you’re thinking about investing in big data for your organization, take the following considerations into account to ensure you’re truly working toward the goal you seek.

1. Needing a lot of data doesn’t always mean you need big data.

Before jumping in, make sure the problem you’re trying to solve or the goal you’re hoping to achieve actually requires big data rather than just a lot of data. As Jim Gallo, national director of business analytics at ICC, explains: “Just because you have a lot of data doesn’t mean it should be considered big data.” Although the term seems to emphasize volume over anything else, “big data” actually just describes a quantity of data that requires new tools to process it. Typically, big data utilizes multiple physical or virtual machines working together.

If you’re merely storing and retrieving large volumes of files in and from a data warehouse, you’re facing a different kind of challenge. Huge data sets are an issue that many organizations have been dealing with for years, and the quantity of data by no means indicates a “big data” problem.
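
A rough way to apply Gallo's distinction in practice: if the working set, including expected growth, still fits comfortably on one machine, conventional tools usually suffice; it only becomes a "big data" problem when processing has to be spread across several machines. The sketch below is a deliberately crude heuristic with assumed numbers, not a sizing formula.

def needs_big_data_tools(dataset_gb, daily_growth_gb, single_node_capacity_gb=2_000):
    # Crude heuristic: would a year of growth outstrip what a single node can
    # comfortably store and process? The 2 TB capacity figure is an assumption.
    projected_gb = dataset_gb + 365 * daily_growth_gb
    return projected_gb > single_node_capacity_gb

print(needs_big_data_tools(dataset_gb=300, daily_growth_gb=0.5))  # False: just "a lot of data"
print(needs_big_data_tools(dataset_gb=500, daily_growth_gb=50))   # True: multi-machine territory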

2. Even with big data, operational data remains critical.

It’s a common misconception that organizations must choose between big data and small operational data. In fact, a complete big data solution could depend on combining them.

Big data is most commonly used retrospectively, and analytical big data technologies such as Hadoop can generate valuable insights after data has been collected. However, operational big data systems are still responsible for importing and storing data via real-time workloads. Incorporating both types of data will ensure your data efforts produce the most effective results.

3. The payout from big data requires big changes.

The hype surrounding big data has inflated expectations, in many cases well beyond what’s reasonable. Gaining a competitive advantage from big data can also require enormous changes that are impossible or impractical for many organizations to make. For instance, big data helped a retailer see that by keeping items on the showroom floor for a longer period, both before and after discounting them, it could increase its profits significantly. Unfortunately, this change had far-reaching supply chain implications, and the company was unable to put it into practice.

The insights generated through big data analytics can be easy to replicate, so it’s possible that consultants in your industry might already provide the services you’re looking to glean from big data. Be sure to do your homework before you spend the money on a big data initiative.

Although big data is everywhere you look, it may not actually be the right solution for your organization. Big data can be insightful, but these insights are distilled after the data has been collected and analyzed. Ultimately, before you go chasing big data, you might want to focus on better using the operational data you already have. Even if you end up needing big data after all, you’ll be better prepared for it after you get a handle on your in-house data.

Brad Anderson

Brad is the editor overseeing contributed content at ReadWrite.com. You can reach me at brad at readwrite.com

Article link: https://readwrite.com/2018/09/14/despite-the-hype-big-data-might-not-be-what-you-need/

Why Is Your Doctor Typing? Electronic Medical Records Run Amok – Forbes

Posted by timmreardon on 09/15/2018
Posted in: Uncategorized. Leave a comment
Steve Denning Senior Contributor
www.forbes.com

It was only a short while ago when a visit to the doctor was a face-to-face conversation. The doctor would ask questions. He was interested in what I said. He listened to my responses and we discussed what to do. It was a positive interaction.

In the last year or two, there’s been a shift. Much of my time with doctors has been spent watching them type. In one case, the doctor tapped away on his laptop, occasionally looking up to ask questions before returning to the main focus of his attention: his computer. In another case, the doctor intermittently tapped on an iPad while we spoke. In a third instance, the doctor had a conversation with me and then apologized that he would be spending the next half of our session typing up the results of our conversation. All this typing was required, he said, if he was ever going to be reimbursed for his services. It was getting in the way of being a doctor.

Surely, I said, computerized medical records generate benefits. They are easily retrievable. They can be transferred from one practice to another and made accessible to the many different service providers—hospitals, laboratories, specialists, radiology and so on—that might be involved in any one patient’s care.

“In theory, perhaps,” he replied. “But in practice, it’s a horrible and costly bureaucracy that is being imposed on doctors. I spend less time with patients, and more time filling out multiple boxes on forms that don’t fit the way I work. Often I am filling out the same information over and over again. A lot of it is checking boxes, rather than understanding what this patient really needs.”

What about retrieving information? Isn’t that easier?

“Again, in theory, retrieval should be easy and quick,” he said, “But you can’t flip through these records the way you do with a paper file and easily find what you want. The other day, for instance, I inherited a new patient along with her electronic records. Her previous care-givers had checked forty-five boxes of problems. There’s no way that I can deal with a patient with forty-five problems. She and I talked for some time and eventually we figured out that she had six real health problems: then we could begin to discuss what to do. And then I had to input that discussion into the computer. The electronic record didn’t save time. It made everything take longer.”

But at least now you can get the records electronically?

“Sometimes,” he said. “But each network has its own system and often the systems are incompatible. The systems don’t talk to each other. So transferring records from one system to another becomes another nightmare.”

But why do you type while the patient is there?

“Filling out these forms and checking all the boxes takes me a lot of time,” he said, “If I don’t do it now, I will spend half the night trying to remember the discussion and typing up the results of the day’s visits. The outcome is that I have less time to spend with patients. Instead of making the system better, it’s making everything more costly.”

What we are seeing here is the implementation of Obamacare—the Affordable Care Act—which provides reimbursement incentives for adopting electronic medical records (EMR) by a set deadline. For those who don’t meet that deadline, the government has laid out a series of penalties. The message to doctors is clear: implement electronic records or pay a price.

“The government already is wasting billions on the medical EMR,” my doctor told me. “They are committed to giving each health care system $17,000 per doctor who is successfully using electronic medical records to help them cover their software investment. This money goes to the health care system and not the docs. So it’s basically a very lucrative pass through to the software people for generating an inadequate and burdensome system.”

Survey: most doctors lose money with electronic records

My doctor is not alone in seeing problems with the way that electronic medical records are being implemented.

A recent survey published in Health Affairs by Julia Adler-Milstein, Carol Green and David W. Bates estimates that doctors who install electronic medical records systems should expect an initial loss of around $44,000 on their investment. Almost two-thirds of the practices using electronic records would lose money even with government subsidies, the researchers said.

Having electronic records is in principle a good idea, but only if implementation is quick and intuitive, so that inputting and retrieving information is easier for doctors than scribbling notes on paper. Reformers imagine some kind of well-tuned iPhone or iPad with lots of cool gadgets and apps that make life easier.

But in practice, implementation of electronic health records today is anything but quick and intuitive or easy to use. It’s mostly like old-style form-filling software that is an aggravating pain to use. It takes forever, involves continuous repetition, is counter-intuitive to use and offers few benefits in return. Along with upfront costs, doctors said they have to work longer hours because of the software. Smaller offices, those with five doctors or fewer, struggled the most.

The study shows that 27 percent of practices are projected to gain by seeing more patients or getting more claims approved by insurers, though there is no indication of what happens to the quality of care under such accelerated throughput.

Implementing electronic records is expensive and difficult

Another review of the experience with electronic records is entitled, “Physicians Use Of Electronic Medical Records: Barriers And Solutions.” Here Robert H. Miller and Ida Sim also conclude that achieving quality improvement through electronic medical use is neither low-cost nor easy.

Despite the theoretical potential for quality improvement from computerized records, they found that few physician practices use electronic records. Miller and Sim argue that “the path to quality improvement and financial benefits lies in getting the greatest number of physicians to use the electronic medical records [EMR] (and not paper) for as many of their daily tasks as possible. The key obstacle in this path to quality is the extra time it takes physicians to learn to use the EMR effectively for their daily tasks.”

Miller and Sim report:

“Interviewees reported that most physicians using EMRs spent more time per patient for a period of months or even years after EMR implementation. The increased time costs resulted in longer workdays or fewer patients seen, or both, during that initial period… [Interviewees found] even highly regarded, industry-leading EMRs to be challenging to use because of the multiplicity of screens, options, and navigational aids… Although vendors are slowly improving EMR usability, most vendor interviewees doubted that any “silver bullet” technology (for example, voice recognition, tablet computers, or mobile computing) will dramatically simplify EMR usage. Designing easy-to-use software for knowledge workers is a challenge that spans the software industry beyond health care.”

Miller and Sim suggest policy interventions to overcome these barriers, including providing work/practice support systems, improving electronic clinical data exchange, and providing financial incentives for quality improvement.

This thinking is fanciful. Paying people to work unintelligently is ultimately ineffective. What is needed are systems that actually help doctors do their work.

The world didn’t need incentives or support systems to get people to adopt iPhones or iPads. We embraced the iPhones and iPads because they are easy to use and they made our lives better.

When the software embedded in electronic records isn’t adapted to doctors’ needs and the way they work, hopes of major productivity gains through policy fixes like incentives or training are doomed.

The politics of electronic health records

As the penalties of Obamacare start to be imposed, the predictable politics will kick in. Critics of Obamacare will trumpet the (correct) conclusion that electronic medical records as now being implemented are likely to increase costs, and will argue (incorrectly) that the idea of pushing for electronic medical records is a bad one.

Proponents will argue (correctly) that electronic records have the potential to save costs, and argue (incorrectly) that soldiering on with the current flawed systems can succeed with more incentives and training.

Both sides are correct and incorrect. We know from long experience in other sectors that dumping big, clunky, computer systems on to the work of skilled professionals doesn’t save costs. It increases them. Providing incentives for the professionals to use hard-to-use computer systems is a losing game. What is needed is easy-to-use software that fits the way doctors work and makes their working lives easier and better.

Beyond politics or health care: Agile

In essence, we are not dealing here with a health care problem. We are dealing with a management problem. There is a long experience over many decades in dealing with it. It’s called Agile.

The traditional 20th Century management approach to doing anything, whether software development or anything else, was a standard sequence of specifications, planning, implementation, and delivery.

With large complex systems involving professional work, the approach runs into major problems. The end result isn’t just that the projects don’t finish on time. A large proportion of such schemes never finish at all. Rising costs and user resistance to implementation often end with the project being scuttled.

How the Air Force wasted several billion dollars

A recent illustration comes from the Air Force, which canceled a six-year-old software modernization effort that had consumed $1.3 billion and produced nothing of value. Note, that’s $1.3 billion, not $1.3 million. And it’s not that the project produced less benefit than expected. It produced absolutely no benefits at all. The whole project has been canned.

The fiasco is described in the New York Times in an article by Randall Stross, which notes that the Air Force began the project in 2006. The project had been “restructured” a number of times. When the Air Force realized that it would cost still another $1 billion just to achieve one-quarter of the capabilities originally planned, which wouldn’t meet even minimal requirements—and that even then the system would not be ready before 2020—it gave up on the project entirely.

The need for Agile

Traditional hierarchical methods of management are ill-adapted to software development, where systems undergo constant change, both during design and construction (as the true needs of the system become better understood) and during deployment (when end users start to exercise the system). Most of the costs of such systems come over the whole life of the system, not during the initial design phase.

An interesting book by Alan W. Brown, Enterprise Software Delivery (Addison-Wesley, 2012), points out the consequent shift from software development to software delivery, combining constant innovation and control.

The health sector needs to learn how to cope with the complexity of software delivery. Brown notes how Agile thinking, such as Scrum, has transformed the management of software by reconciling the requirements of innovation and control. It involves:

  • Horizontal collaboration and transparency
  • A focus on quality and constant testing
  • Loosely coupled, highly cohesive architecture that is designed to evolve
  • A focus on delivering working software
  • Individual and team flexibility

The key: transform the management culture

As Brown points out, the biggest challenge in enterprise software delivery lies not in these software practices themselves, but rather in the overall management culture. If the organization remains in a vertical, hierarchical mode, with an approach of “here’s the system—implement it”, none of the advantages of computerization will accrue. In fact, costs will increase.

The problem is that hierarchical bureaucracy is still pervasive in the health sector. Producing easy-to-use software or the agility to make continuous adjustments from experience lies beyond the performance envelope of this type of management.

A radically different kind of management is needed. It needs to begin, not with a goal of “introducing electronic medical records”, but rather with the goal of “improving the working lives of doctors through better technology”.

Instead of producing outputs in the form of electronic records, the goal needs to be generating positive outcomes for doctors. Once outcomes for doctors are positive, the need for incentives vanishes: doctors will want to improve their working lives. There will be a stampede to use the new technology.

But this entails a revolution in management thinking. Instead of seeing electronic records as merely a shift in technology from paper to IT, it involves a transformation in the way the health sector thinks about and manages work. It means new goals, new roles for managers, new ways of coordinating work, new values and new ways of communicating.

Fortunately, thousands of organizations around the world have accumulated more than a decade of experience to show the way. It remains the best-kept secret in the management world.

We need to stop torturing doctors with systems that make work more difficult and start generating systems that are better: better for doctors, better for patients, and better for the health system overall. And for that to happen, we need a different kind of management.

Other sectors have learned to stop wasting billions of dollars by failing to embrace Agile. When will the health sector learn?

A different way of thinking about organizations

The health sector needs to recognize that it can’t solve the problems of health within the health sector alone. The difficulties of implementing electronic medical records are part of a broader societal problem involving how we think about organizations.

Unless we see the pervasive problem of hierarchical bureaucracy infecting every aspect of society, not just health, the army of management experts, MBA graduates, efficiency experts, management consulting firms, and politicians will conspire to reintroduce and reinforce hierarchical bureaucracy yet again, with new language but sadly similar results.

Many organizations are already being run in the Agile mode: producing outcomes, not outputs, with managers acting as enablers, with work done by self-organizing teams, and with values of transparency, continuous improvement, and horizontal communication. In the private sector, they are hugely profitable. They present models for the health sector, which needs to join this broader movement to reform organizations across every sector of the economy.

Article link: https://www.forbes.com/sites/stevedenning/2013/04/25/why-is-your-doctor-typing-hint-think-agile/#b1ec54352109

And read also:

The best-kept management secret on the planet: Agile

Reconciling Innovation With Control: The Air Force’s $1.3 Billion Lesson In Agile

________________________

Steve Denning’s most recent book is The Leader’s Guide to Radical Management (Jossey-Bass, 2010).

Follow Steve Denning on Twitter @stevedenning

 

Four keys to successful digital transformations in healthcare – McKinsey

Posted by timmreardon on 09/14/2018
Posted in: Uncategorized. Leave a comment

McKinseyx3

By Sastry Chilukuri and Steve Van Kuiken

By taking a comprehensive approach to digitization, healthcare companies can deliver products and services more quickly, boost innovation in the industry, and hold down costs.

Healthcare companies (device manufacturers, payors, and providers, among others) have long relied on technology as a core utility—for tracking R&D efforts and patient information, scheduling payments and services, launching new care options, and generally keeping the lights on.

The digitization of products and processes, however, has dramatically changed the game for everyone. Consumers’ expectations about healthcare services are increasingly being informed by their experiences with large digital-born companies. With this “customer experience” frame in mind, healthcare companies are seeking to integrate the latest technologies into existing business models and IT architectures to improve services. At the same time, they are grappling with new, nontraditional entrants to the marketplace (IBM and Microsoft, for instance), as well as ever-present regulatory and risk-related concerns.

More and more healthcare companies worldwide are finding that digital technologies must be managed not as utilities but as strategic assets. Some are attempting to bridge the gap between legacy and digital IT by undertaking complex systems transformations. One large healthcare-technology company is experimenting with ways to maintain its existing IT architecture while using analytics to securely mine the data it collects for useful business insights. Similarly, a large drugmaker is exploring the use of cloud platforms to reduce data storage and processing costs and to boost the speed of its R&D efforts.

Still, most pharmaceutical and medical-technology companies are digital laggards compared with companies in travel, retail, telecommunications, and other sectors (Exhibit 1). Their digital-transformation efforts can stall for many of the same reasons such efforts are thwarted in other sectors—for instance, a limited understanding of the specific ways that implementation of new technologies across complex product and services lines can create business value, a shortage of native digital talent, and insufficient focus on digital topics from senior leadership.

McKinsey1x

Our experience with companies inside and outside the healthcare ecosystem suggests there are four core principles for succeeding with this kind of all-encompassing change program. Healthcare companies first need to identify and prioritize their critical sources of value; they need to identify the products and services they provide that lead to competitive differentiation and that would benefit most from digitization. Second, they must build their service-delivery capabilities—not just in physically integrating and managing new digital technologies but also in implementing new approaches to product development and distribution (for instance, agile and DevOps methodologies). Third, healthcare companies should look for ways to modernize their IT foundations, for example, upgrading pools of talent and expertise in the IT organization, moving to digital platforms such as cloud servers and software-as-a-service products, managing data as a strategic asset, and improving security protocols for the company’s most vital assets. And fourth, companies must ensure that they build and maintain core management competencies: in other words, all the enablers that allow them to pursue a successful digital agenda.

In this article we consider the changing healthcare landscape, the emerging opportunities in digitization, and the four core principles healthcare companies can follow to succeed with their digital transformations. Consistent with digital leaders in other industries, the front-runners in digital healthcare have a significant opportunity not just to win in their desired markets but also to change the rules of the game.

Understanding the changing landscape

Healthcare companies across the world face a different competitive environment than they did a decade or more ago—in part, because of the degree to which digital tools and technologies are disrupting typical product- and service-development processes, customer interactions, delivery mechanisms, back-office operations, and supplier relationships for large players in the sector.

Indeed, never before have so many technologies with the potential to affect the healthcare industry matured so quickly en masse. Next-generation genomics; big data and advanced analytics; machine learning and automation programs; connected, sensor-enabled devices and wearables; 3-D printing; and robotics—all have the potential to fundamentally change the way healthcare companies develop products and provide services. Consumers are more informed about and more engaged in healthcare decisions because of technology, and regulators and policy makers are advocating for the development of open data and technology standards as well as knowledge-sharing initiatives among companies in the industry.

As a result, some of today’s healthcare companies are focused on using technology to improve their interactions with patients and ecosystem partners, rein in costs, streamline operations, and better manage changing industry regulations. They’re acknowledging the shift toward evidence-based medicine and exploring ways to use big data to customize care programs and make the case for investment in and reimbursement for emerging devices or treatments. A good example of digital reinvention in healthcare is the life sciences giant Johnson & Johnson: the company has undertaken a massive digital transformation of its IT organization, moving the bulk of its processing workload to a hybrid cloud environment and incorporating data lakes, data analytics, and agile development practices into its operations. As a result, the company has been able to bring together different business capabilities—design thinking, deep clinical knowledge, and a global understanding of healthcare systems—to create new patient-centered offerings. (See “Healthcare giant shares prescription for digital reinvention.”)

By making the shift from healthcare company to digital enterprise, industry participants can capitalize on a number of emerging “battleground” opportunities. Among them are the following:

  • Building direct relationships with consumers to influence treatment outcomes rather than working through institutional intermediaries. One service provider, for example, has linked disparate sources of data so clinicians can more easily analyze personal, clinical, demographic, genomic, and environmental information to determine which personalized interventions would be appropriate for patients suffering from chronic conditions such as asthma and multiple sclerosis.
  • Finding new sources of value in different profit pools. For instance, some healthcare companies, particularly new market entrants from the technology sector, are looking for ways to take caregiving out of its traditional hospital setting. Instead, they are developing ways to offer digital diagnostic services, remote health monitoring, and home healthcare.
  • Collaborating to acquire complementary capabilities. Increasingly, providers and device manufacturers are partnering with other companies in the healthcare ecosystem, including market entrants from the high-tech sector. The latter are masters of consumer marketing, but, in general, they are relatively unfamiliar with regulatory processes in healthcare. Healthcare companies can help fill that expertise gap.
  • Contributing to burgeoning industry standards and conduct. Healthcare companies at all levels of the service chain have an opportunity to define new rules of engagement. For instance, they could collaborate with the government on standards for open access to patient information or care protocols, thereby democratizing the delivery of healthcare.

Succeeding with a digital transformation

The healthcare environment is becoming more distributed and complex. To adapt, companies will need to embrace open systems that allow for sophisticated analysis of multiple streams of data and the development of customer-centric services. They must be able to view processes as end-to-end flows rather than discrete hand-offs, embrace more risk (as appropriate), move at higher speeds, and engage in innovative partnerships. All of this is easier said than done for companies saddled with decades-old legacy systems, processes, and operating models that were optimized for a brick-and-mortar world.

In our experience, the odds of successfully transitioning to digital systems and ways of working increase when healthcare companies focus on four important dimensions of their businesses: critical sources of value for the company, the means by which the company delivers products and services, the company’s IT architecture, and its talent, finance, and governance processes (Exhibit 2). Let’s take a closer look at each.

McKinseyx2

Identify and prioritize critical sources of value

As a first step toward digitization, healthcare companies must clarify where the company provides distinctive value to consumers and stakeholders, and determine how the use of digital technologies could enable those activities. Companies can then determine how best to adjust investments in digital technologies and development approaches to meet the highest priorities. Clear priorities can also help steer management’s attention (always in short supply) in the right direction at the right times during the complicated transformation process.

There are any number of value propositions that companies may wish to target; a lot depends on the company’s position in the value chain. A clear source of value emerging for most healthcare companies is an ability to get closer to customers to give them targeted products and services, and engage them in value-based relationships. Some device manufacturers, for instance, may want to create intelligent products—sensor-enabled devices, inhalers, and auto-injectors, for example, that can monitor and manage specific conditions or assist in medical procedures. Pharmaceutical makers could build digital platforms so they could collect and analyze medical data, conduct synthetic clinical trials, manage market access, and accelerate their research efforts.

Some healthcare companies may want to explore ways to mitigate risk using previously isolated data sets. For instance, if manufacturers had greater access to cost-of-care figures, patient outcomes, satisfaction scores, and other metrics, they could devise new types of contracts and risk-sharing models with service providers. Consider that in a typical joint-replacement surgery, the implant itself represents just 15 percent of the total cost of care. Forward-looking manufacturers and providers could use shared, collected data to collaborate on ways to optimize the remaining 85 percent of the cost.
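To make that arithmetic concrete, here is a minimal sketch in Python; the 15 percent implant share comes from the paragraph above, while the $50,000 episode-of-care cost is a purely hypothetical figure chosen for illustration.

# Illustrative only: the 15% implant share is from the text above;
# the $50,000 episode cost is a made-up figure for the example.
episode_cost = 50_000.00   # hypothetical total cost of a joint-replacement episode
implant_share = 0.15       # implant is roughly 15% of the total cost of care

implant_cost = episode_cost * implant_share
addressable_cost = episode_cost - implant_cost  # the 85% that partners could optimize together

print(f"Implant cost:      ${implant_cost:,.2f}")      # $7,500.00
print(f"Addressable cost:  ${addressable_cost:,.2f}")  # $42,500.00
print(f"Addressable share: {addressable_cost / episode_cost:.0%}")  # 85%

The point of the sketch is simply that the shared-data opportunity lies in the large non-implant share of the episode, not in the device itself.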

And finally, some companies in the healthcare ecosystem may want to use automation, robotics, and Industry 4.0 technologies, such as sensor-based equipment and the Internet of Things, to break down walls between business units and functions, thereby speeding up processes and decision making and reducing administration costs.

Build service-delivery capabilities

Once priorities for digital transformation have been set, healthcare companies will need to focus on the means by which they will offer targeted digital products and services to consumers and stakeholders. In most cases, companies must understand user needs in a detailed way and reimagine their work flow and processes as end-to-end activities that can be automated, virtualized, and personalized employing real-time insights. For example, insights about the supply chain—say, the current levels of inventory compared with sales forecasts—could help healthcare companies reduce general and administrative costs and improve customer service. Agile development, data sciences, and customer-experience design can be useful approaches for these companies to explore.
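As a minimal sketch of the supply-chain insight mentioned above, the snippet below compares on-hand inventory with forecast demand and flags items for reorder or overstock; all SKU names, quantities, and thresholds are hypothetical.

# Minimal sketch: flag SKUs whose on-hand inventory covers too few or too many
# weeks of forecast demand. All SKUs, quantities, and thresholds are hypothetical.
inventory_on_hand = {"infusion_set": 1200, "catheter_kit": 300, "sensor_patch": 9000}
weekly_forecast   = {"infusion_set": 400,  "catheter_kit": 250, "sensor_patch": 500}

MIN_WEEKS, MAX_WEEKS = 2, 8  # illustrative service-level thresholds

for sku, on_hand in inventory_on_hand.items():
    weeks_of_cover = on_hand / weekly_forecast[sku]
    if weeks_of_cover < MIN_WEEKS:
        status = "reorder"
    elif weeks_of_cover > MAX_WEEKS:
        status = "overstocked"
    else:
        status = "ok"
    print(f"{sku}: {weeks_of_cover:.1f} weeks of cover -> {status}")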

Agile, a software development methodology, has been around for decades, but it is experiencing a renaissance in the digital world. Agile development involves short, fast phases of development, prototyping, reassessment, and adaptation. To take a step in the agile direction, companies will need to modify their organizational structures to be more product oriented, find ways to improve interactions between business users and IT, redefine roles within the business units and the IT organization, and reconsider their budget and planning models.

The agile development approach can be combined with capabilities in data sciences and customer-experience design to rev up the provision of digital services. Colocated business, IT operations, and analytics professionals can jointly develop and deploy products and services in a matter of weeks rather than months or years. Indeed, an at-scale digital healthcare organization can have up to 100 agile teams running projects in parallel at any given time. Of course, companies will need to make the business case for agile to senior management, in an outcomes-driven process. They will also need to think boldly; rather than tag certain projects as agile, senior leaders in business and IT at one large healthcare manufacturer started with a presumption that all new initiatives would be structured as agile projects, unless proved otherwise.

The results of combining agile operations with data science and customer-experience design can be significant. Some device makers are wrapping digital solutions around their products to create better patient outcomes—allowing for predictive diagnostics and early detection in patients with certain types of disease (atrial fibrillation, for instance), or the launch of fully digital surgical units, or remote monitoring of patient care. Meanwhile, some pharma companies are using advanced analytics to discover drugs or identify new uses for established ones.

Modernize IT foundations

Once digital priorities are identified, and digital delivery models discussed, healthcare companies need to examine their IT infrastructure: Is it truly capable of supporting the activities required? Complex legacy technology systems usually become the main sticking point for healthcare companies seeking to go digital. Aging systems have typically been built up in patchwork fashion: new applications and gateways are bolted on to existing ones. The result is spaghetti code and fragmentation, neither of which promotes speed and transparency in IT operations. To support strategic priorities and agile approaches to development, companies will need to modernize their IT foundations.

They must build a solid, reliable data backbone to ensure that all data are managed holistically so that users can access data sets quickly and easily. Access should be governed according to a single framework, and data sets should be harmonized according to business use case. In this way, companies can establish a “golden source” of truth for critical information relating to pricing, products, customers, invoices, and contracts.
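What harmonizing duplicate records into a single “golden source” might look like can be sketched simply; the merge rule below (newest non-null value wins) is just one illustrative policy, and all field names and values are hypothetical.

# Minimal sketch: collapse duplicate customer records from several source systems
# into one "golden" record per customer ID. Fields and values are hypothetical.
from collections import defaultdict

source_records = [
    {"customer_id": "C001", "name": "Acme Clinic",  "email": None,               "updated": "2018-03-01", "system": "CRM"},
    {"customer_id": "C001", "name": "ACME Clinic",  "email": "billing@acme.org", "updated": "2018-06-15", "system": "Billing"},
    {"customer_id": "C002", "name": "Bay Hospital", "email": "ops@bayhosp.org",  "updated": "2018-01-20", "system": "CRM"},
]

by_customer = defaultdict(list)
for rec in source_records:
    by_customer[rec["customer_id"]].append(rec)

golden = {}
for cid, recs in by_customer.items():
    recs.sort(key=lambda r: r["updated"], reverse=True)  # newest first; ISO dates sort correctly as strings
    merged = {"customer_id": cid}
    for field in ("name", "email"):
        # take the newest non-null value reported by any source system
        merged[field] = next((r[field] for r in recs if r[field] is not None), None)
    golden[cid] = merged

print(golden["C001"])
# {'customer_id': 'C001', 'name': 'ACME Clinic', 'email': 'billing@acme.org'}

In a real system the merge rules, data lineage, and access rights would be governed by the single framework described above.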

Healthcare companies should also consider ways to build flexibility into their IT infrastructures by looking at software-as-a-service or cloud-based platforms and products. Johnson & Johnson, for instance, is more than halfway toward its goal of migrating 85 percent of its computing workload to a cloud-based platform; the company has been able to manage capacity based on demand, ensure network reliability, and hold costs in check.

Companies should also start incorporating connectivity into their IT architectures—for instance, using sensors and other monitoring technologies to generate and manage data collected from medical devices in the field. Some manufacturers have created internal platforms that let them analyze real-world treatment data to prove the efficacy, safety, and value of their offerings. Other device makers have been able to use data collected from devices implanted in patients to predict treatment outcomes or intervene earlier in certain types of cases.

Of course, companies will need rigorous cybersecurity policies and infrastructures to protect the most relevant pieces of information in the corporation. Leaders can take a series of steps to protect these “crown jewels”—including identifying and mapping digital assets (data, systems, and applications) across the business value chain; assessing risks for each asset, using surveys and executive workshops; identifying potential attackers, the availability of assets to users, and current controls in place; locating the weakest points of security around crown-jewel assets and identifying remedies; and, finally, creating a set of initiatives to address highest-priority risks and gaps in control.
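A minimal sketch of the asset-mapping and risk-scoring step might look like the following, with likelihood and impact scored on a 1-to-5 scale standing in for the output of the surveys and workshops mentioned above; all assets, scores, and controls are hypothetical.

# Minimal sketch: rank "crown jewel" assets by risk = likelihood x impact, then
# focus remediation on the highest-risk items. All scores are hypothetical and
# would normally come from risk surveys and executive workshops.
assets = [
    {"name": "patient records DB",   "likelihood": 4, "impact": 5, "controls": "encryption, MFA"},
    {"name": "pricing data lake",    "likelihood": 3, "impact": 4, "controls": "role-based access"},
    {"name": "device telemetry API", "likelihood": 4, "impact": 3, "controls": "API gateway"},
]

for asset in assets:
    asset["risk"] = asset["likelihood"] * asset["impact"]  # simple 1-5 x 1-5 scoring grid

for asset in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f'{asset["name"]:<22} risk={asset["risk"]:>2}  current controls: {asset["controls"]}')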

Strengthen core management capabilities

Any large transformation effort requires that companies strengthen and maintain their capabilities in several core areas. The first is talent and partnerships. In the case of digital transformation, companies must develop a deep bench of internal staffers with expertise in digital technologies and approaches, while also bolstering their ability to acquire top digital talent from outside the organization. They will need to assess existing recruitment and retention capabilities and modify them to incorporate new skill sets, training needs, and employee requirements. Particularly in the field of healthcare and life sciences, a sense of mission and challenging work assignments may be more critical for attracting top talent than money. Companies may also need to look outside the traditional sources of talent to find the right people—hence, the need to develop partnerships with other companies in the healthcare ecosystem and in other relevant industry clusters.

Another core capability is in financial processes. Healthcare players must ensure that investment priorities are communicated clearly, revisited regularly, and updated as needed, and that sufficient capital is available. Some companies have established funds dedicated to digital initiatives, separate from day-to-day budgets. Companies will also need to create a formal governance structure that is inclusive, where internal and external stakeholders alike have an opportunity to weigh in on digital decisions. We have seen healthcare players address this in a number of ways, including convening external advisory boards and creating internal governance councils.

And last, but never least, culture is critical. Our research suggests that 70 percent of large transformation efforts fail because of poor organizational health. Companies must establish a healthy work environment that is open to new ideas and best practices. Senior leaders should focus all employees on five critical questions: Where do we want to go? How ready are we to go there? What must we do to get there? How will we manage the journey? And how do we keep moving forward? In the spirit of agile development, for example, senior leaders might convene frequent problem-solving and information-sharing sessions (formal and ad hoc) to help break down barriers between functional and business groups and create more transparency and collaboration.


Like companies in other sectors, the healthcare industry is being disrupted by digitization—and CEOs and boards are taking notice. It’s by now a common story: incumbents face threats from digital natives, who are relatively free of legacy constraints and so are able to capture value from nontraditional sources. The winners in digital health, however, are moving quickly to initiate change and capitalize on the battlegrounds cited earlier. They are investing early in promising technologies and risk-sharing relationships with other companies, inside and outside the industry. They are embracing new development and operating models, and relying more on data-driven insights to make critical business decisions. Most important, they are reimagining themselves as digital enterprises—adaptive, collaborative organizations that can keep pace with changes in the healthcare marketplace. The four core principles for change that we’ve outlined can help companies join the ranks of the winners. They can tackle their transformation programs successfully, creating better patient outcomes and more value for all stakeholders.

Article link: https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/four-keys-to-successful-digital-transformations-in-healthcare

The Secret Drug Pricing System Middlemen Use to Rake in Millions – Bloomberg

Posted by timmreardon on 09/11/2018
Posted in: Uncategorized. Leave a comment
Bloombergx1

By Robert Langreth, David Ingold and Jackie Gu

September 11, 2018

Not everybody reads the legal notices inside the Ottumwa Courier. But in January, Iowa pharmacist Mark Frahm noticed something unusual in the paper.

For years, Frahm’s South Side Drug bought pills from distributors, and dispensed prescriptions to the Wapello County jail. In turn, the pharmacy got reimbursed for the drugs by CVS Health Corp., which managed the county’s drug benefits plan.

As he compared the newspaper notice with his own records, and then with the county’s, Frahm saw that for a bottle of generic antipsychotic pills, CVS had billed Wapello County $198.22. But South Side Drug was reimbursed just $5.73.

So why was CVS charging almost $200 for a bottle of pills that it told the pharmacy was worth less than $6? And what was the company doing with the other $192.49?

Frahm had stumbled across what’s known as spread pricing, where companies like CVS mark up—sometimes dramatically—the difference between the amount they reimburse pharmacies for a drug and the amount they charge their clients.

It’s where pharmacy benefit managers (PBMs) like CVS make a part of their profit. But Frahm says he didn’t think the spread could be thousands of percent.

“Middlemen have to make some money, but we didn’t expect it to be this extreme,” said Frahm, who said his pharmacy lost money in the jail account last year because CVS paid so little. “We figured everyone was playing fair.”
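The arithmetic behind Frahm’s discovery is straightforward to sketch using the figures reported in the article:

# Spread-pricing arithmetic using the figures reported in the article.
amount_billed_to_county = 198.22  # what CVS billed Wapello County for the bottle
amount_paid_to_pharmacy = 5.73    # what South Side Drug was reimbursed

spread = amount_billed_to_county - amount_paid_to_pharmacy
markup_pct = spread / amount_paid_to_pharmacy * 100

print(f"Spread kept between billing and reimbursement: ${spread:.2f}")  # $192.49
print(f"Markup over the pharmacy reimbursement: {markup_pct:,.0f}%")    # about 3,359%

That works out to a markup of about 3,359 percent, which is exactly the “thousands of percent” Frahm did not expect.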
How Spread Pricing Works

Bloomberg

Read more: https://www.bloomberg.com/graphics/2018-drug-spread-pricing/

Leaders Focus Too Much on Changing Policies, and Not Enough on Changing Minds – HBR

Posted by timmreardon on 09/10/2018
Posted in: Uncategorized. Leave a comment

Tony Schwartz

June 25, 2018
HBR-x1

Not long ago, I asked 100 CEOs attending a conference how many of them were currently involved in a significant business transformation. Nearly all of them raised their hands, which was no surprise. According to a study by BCG, 85% of companies have undertaken a transformation during the past decade.

The same research found that nearly 75% of those transformations fail to improve business performance, either short-term or long-term.

So why is transformation so difficult to achieve?

Among many potential explanations, one that gets very little attention may be the most fundamental: the invisible fears and insecurities that keep us locked into behaviors even when we know rationally that they don’t serve us well. Add to that the anxiety that nearly all human beings experience in the face of change. Nonetheless, most organizations pay far more attention to strategy and execution than they do to what their people are feeling and thinking when they’re asked to embrace a transformation. Resistance, especially when it is passive, invisible, and unconscious, can derail even the best strategy.

Business transformations are typically built around new structural elements, including policies, processes, facilities, and technology. Some companies also focus on behaviors — defining new practices, training new skills, or asking employees for new deliverables.

What most organizations typically overlook is the internal shift — what people think and feel — which has to occur in order to bring the strategy to life. This is where resistance tends to arise — cognitively in the form of fixed beliefs, deeply held assumptions and blind spots; and emotionally, in the form of the fear and insecurity that change engenders. All of this rolls up into our mindset, which reflects how we see the world, what we believe and how that makes us feel.

The result is that transforming a business also depends on transforming individuals — beginning with the most senior leaders and influencers. Few of them, in our experience, have spent much time observing and understanding their own motivations, challenging their assumptions, or pushing beyond their intellectual and emotional comfort zones. The result is something that the psychologists Lisa Lahey and Robert Kegan have termed “immunity to change.”

We first ran up against the power of mindset two decades ago when we began to make a case inside organizations that rest and renewal are essential for sustaining high performance. The scientific evidence we presented to clients was compelling. Nearly all of them found the concept persuasive and appealing, both logically and intuitively. We taught them very simple strategies to build renewal into their lives, and they left our workshops eager to change the way they worked.

Nonetheless, most of them struggled with changing their behavior when they got back to their jobs. They continued to equate continuous work and long hours with success. Taking time to renew during work days made them feel as if they were slacking. Even when organizations built nap rooms, they often went unused. People worried that if they rested at all, they wouldn’t get their work done, and above all, they feared failing. Despite their best intentions, many of them eventually defaulted back to their habitual patterns.

More recently, we worked with the senior team of a large consumer product company which had been severely disrupted by smaller, more agile online competitors selling their services directly to consumers. On its face, the team was aligned, focused, and committed to a new multi-faceted strategy with a strong digital component. But when we looked at the team’s mindset more deeply, we discovered that they shared several underlying beliefs including, “Everything we do is equally important,” “More is always better,” and “It has to be perfect or we don’t do it.” They summarized these beliefs in a single sentence: “If we don’t keep running as hard as we can, and attend to every detail, everything will fall apart.”

Not surprisingly, the leaders found they were spreading themselves too thin, struggling to pull the trigger on new initiatives, and feeling exhausted. Simply surfacing these costs and their consequences proved highly valuable and motivating. We also launched several initiatives to address these issues individually and collectively.

One of the most successful began with a simple exercise aimed at helping the leaders to define their three highest priorities. Then we took them through a structured exercise including delving into their calendars to assess whether they were using their time to best advantage, including setting aside time for renewal. This process prompted them to examine more consciously why they were working in self-defeating ways.

We also developed an online site where leaders agreed to regularly share their progress on prioritizing, as well as any feelings of resistance that were arising, and how they managed them. Their work is ongoing, but among the most common feelings people reported were liberation and relief. Their worst fears failed to materialize.

Several factors typically hold mindset in place. The first is that much of it gets deeply rooted early in our lives. Over time we tend to develop confirmation bias, forever seeking evidence that reinforces what we already believe, and downplaying or dismissing what doesn’t. We’re also designed, both genetically and instinctively, to put our own safety first, and to avoid taking too much risk. Rather than using our capacity for critical thinking to assess new possibilities, we often co-opt our prefrontal cortex to rationalize choices that were actually driven by our emotions.

All this explains why the most effective transformation begins with what’s going on inside people — and especially the most senior leaders, given their disproportionate authority and influence. Their challenge is to deliberately turn attention inward in order to begin noticing the fixed patterns in their thinking, how they’re feeling in any given moment, and how quickly the instinct for self-preservation can overwhelm rationality and a longer-term perspective, especially when the stakes are high.

Leaders also have an outsize impact on the collective mindset — meaning the organizational culture. As they begin to change the way they think and feel, they’re more able to model new behaviors and communicate to others more authentically and persuasively. Even employees highly resistant to change tend to follow their leaders, simply because most people prefer to fit in, rather than stick out.

Ultimately, personal transformation requires the courage to challenge one’s current comfort zone, and to tolerate that discomfort without overreacting. One of the most effective tools we’ve found is a series of provocative questions that we ask leaders and their teams to build a practice around asking themselves:

“What am I not seeing?”

“What else is true?”

“What is my responsibility in this situation?”

“How is my perspective being influenced by my fears?”

Great strategy remains foundational to transformation, but successful execution also requires surfacing and continuously addressing the invisible reasons that people and cultures so often resist changing, even when the way they’re working isn’t working.


 

Tony Schwartz is the president and CEO of The Energy Project and the author of The Way We’re Working Isn’t Working. Become a fan of The Energy Project on Facebook and connect with Tony at Twitter.com/TonySchwartz and Twitter.com/Energy_Project.

Article link: https://hbr.org/2018/06/leaders-focus-too-much-on-changing-policies-and-not-enough-on-changing-minds
