healthcarereimagined

Envisioning healthcare for the 21st century


RSA 2019: FBI Director Christopher Wray says ‘today’s cybersecurity threat is bigger than government itself’ – Healthcare IT News

Posted by timmreardon on 03/25/2019
Posted in: Uncategorized.

Top Federal Bureau of Investigation official calls for greater public-private partnership to fight back increasingly sophisticated adversaries and attacks.

By Tom Sullivan | March 05, 2019, 04:21 PM


SAN FRANCISCO — Christopher Wray, the director of the U.S. Federal Bureau of Investigation, struck a cautious tone on Tuesday morning here at RSA 2019 and called for more cooperation to tackle the cybersecurity problem.

“Today’s cyberthreat is bigger than any one government agency — in fact it’s bigger than the government itself,” Wray said. “The scope, breadth, depth, sophistication and diversity of the threat we face now is unlike anything we’ve had in our lifetimes.”

Wray pointed to multinationals, criminal organizations, hacktivists, insider threats, China and North Korea as among the most pressing issues, as well as the blended threat of foreign intelligence units enlisting the help of hackers.

He said that cyber, perhaps more than any other area, is where public-private cooperation is needed for everyone’s sake.

“The key is having the private sector start to form relationships with field offices because it’s not just prevention but in many cases it’s mitigation and that’s where speed really matters,” Wray explained. Having an existing relationship, in turn, will help with that speed.

Brookings Institution Senior Fellow Susan Hennessey pressed Wray on the thorny issue of cooperating with the FBI in the wake of its incident with Apple. Apple had refused to grant the FBI access to San Bernardino shooter Syed Rizwan Farook’s iPhone in 2015, and the Bureau reportedly worked with a third party to crack the phone anyway, a move that experts said demonstrated that even encrypted data is not entirely safe from nefarious actors.

“We are not trying to weaken encryption or find back doors. This is an issue getting worse and worse all the time,” Wray said. “It can’t be a sustainable end state that there is a space beyond our access for criminals to hide. I would love to see people come together and try to work toward solutions.”

The threats are not going to get any easier or simpler.

Cisco Talos VP Matt Watchinski, for instance, said that as our world continues changing, technologies being invented today, including critical infrastructure, will challenge security professionals to keep getting better at what they do.

“At the end of the day,” FBI’s Wray said, “we need each other in a way that is becoming more and more apparent every day.”

Twitter: @SullyHIT
Email the writer: tom.sullivan@himssmedia.com

Article link: https://www.healthcareitnews.com/news/rsa-2019-fbi-director-christopher-wray-says-today’s-cybersecurity-threat-bigger-government

Triton is the world’s most murderous malware, and it’s spreading – MIT Technology Review

Posted by timmreardon on 03/21/2019
Posted in: Uncategorized. Leave a comment

As an experienced cyber first responder, Julian Gutmanis had been called plenty of times before to help companies deal with the fallout from cyberattacks. But when the Australian security consultant was summoned to a petrochemical plant in Saudi Arabia in the summer of 2017, what he found made his blood run cold.

The hackers had deployed malicious software, or malware, that let them take over the plant’s safety instrumented systems. These physical controllers and their associated software are the last line of defense against life-threatening disasters. They are supposed to kick in if they detect dangerous conditions, returning processes to safe levels or shutting them down altogether by triggering things like shutoff valves and pressure-release mechanisms.
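That "kick in and shut down" behavior can be sketched in a few lines of Python; the tag name, trip point, and units below are hypothetical, not details of any real Triconex controller:

```python
# Minimal sketch of a safety-instrumented-system (SIS) trip: act when a
# reading crosses a danger threshold. Tag name and trip point are made up.
from dataclasses import dataclass

@dataclass
class SafetyLoop:
    tag: str
    trip_point: float      # reading at which the loop must act
    tripped: bool = False

    def evaluate(self, reading: float) -> str:
        """Return the action the loop takes for one sensor reading."""
        if reading >= self.trip_point:
            self.tripped = True
            return "SHUTDOWN"   # e.g. close a shutoff valve, release pressure
        return "NORMAL"

loop = SafetyLoop(tag="PT-101", trip_point=250.0)  # hypothetical pressure tag
assert loop.evaluate(180.0) == "NORMAL"
assert loop.evaluate(260.0) == "SHUTDOWN" and loop.tripped
```

Triton mattered precisely because it targeted the layer this logic lives in: if the trip behavior itself can be tampered with remotely, the last line of defense is gone.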

The malware made it possible to take over these systems remotely. Had the intruders disabled or tampered with them, and then used other software to make equipment at the plant malfunction, the consequences could have been catastrophic. Fortunately, a flaw in the code gave the hackers away before they could do any harm. It triggered a response from a safety system in June 2017, which brought the plant to a halt. Then in August, several more systems were tripped, causing another shutdown.

The first outage was mistakenly attributed to a mechanical glitch; after the second, the plant’s owners called in investigators. The sleuths found the malware, which has since been dubbed “Triton” (or sometimes “Trisis”) for the Triconex safety controller model that it targeted, which is made by Schneider Electric, a French company.

In a worst-case scenario, the rogue code could have led to the release of toxic hydrogen sulfide gas or caused explosions, putting lives at risk both at the facility and in the surrounding area.

Gutmanis recalls that dealing with the malware at the petrochemical plant, which had been restarted after the second incident, was a nerve-racking experience. “We knew that we couldn’t rely on the integrity of the safety systems,” he says. “It was about as bad as it could get.”

In attacking the plant, the hackers crossed a terrifying Rubicon. This was the first time the cybersecurity world had seen code deliberately designed to put lives at risk. Safety instrumented systems aren’t just found in petrochemical plants; they’re also the last line of defense in everything from transportation systems to water treatment facilities to nuclear power stations.

Triton’s discovery raises questions about how the hackers were able to get into these critical systems. It also comes at a time when industrial facilities are embedding connectivity in all kinds of equipment—a phenomenon known as the industrial internet of things. This connectivity lets workers remotely monitor equipment and rapidly gather data so they can make operations more efficient, but it also gives hackers more potential targets.

Those behind Triton are now on the hunt for new victims. Dragos, a firm that specializes in industrial cybersecurity and where Gutmanis now works, says it’s seen evidence over the past year or so that the hacking group that built the malware and inserted it into the Saudi plant is using some of the same digital tradecraft to research targets outside the Middle East, including in North America. And it’s creating new strains of the code in order to compromise a broader range of safety instrumented systems.

Red alert

News of Triton’s existence was revealed in December 2017, though the identity of the plant’s owner has been kept secret. (Gutmanis and other experts involved in the initial investigation decline to name the company because they fear doing so might dissuade future targets from sharing information about cyberattacks privately with security researchers.)

Some notable cyber-physical threats
  • 2010 💥 Stuxnet: Developed by America’s National Security Agency, working in conjunction with Israeli intelligence, the malware was a computer worm, or code that replicates itself from computer to computer without human intervention. Most likely smuggled in on a USB stick, it targeted the programmable logic controllers that govern automated processes, and caused the destruction of centrifuges used in the enrichment of uranium at a facility in Iran.
  • 2013 🕵️‍♂️ Havex: Havex was designed to snoop on systems controlling industrial equipment, presumably so that hackers could work out how to mount attacks on the gear. The code was a remote access Trojan, or RAT, which is cyber-speak for software that lets hackers take control of computers remotely. Havex targeted thousands of US, European, and Canadian businesses, especially ones in the energy and petrochemical industries.
  • 2015 ⚡️ BlackEnergy: BlackEnergy, another Trojan, had been circulating in the criminal underworld for a while before it was adapted by Russian hackers to launch an attack in December 2015 on several Ukrainian power companies that helped trigger blackouts. The malware was used to gather intelligence about the power companies’ systems, and to steal log-in credentials from employees.
  • 2016 ⚡️ CrashOverride: Also known as Industroyer, this was developed by Russian cyber warriors too, who used it to mount an attack on part of Ukraine’s electrical grid in December 2016. The malware replicated the protocols, or communications languages, that different elements of a grid use to talk to one another. This let it do things like show that a circuit breaker is closed when it’s really open. The code was used to strike an electrical transmission substation in Kiev, blacking out part of the city for a short time.

Over the past couple of years, cybersecurity firms have been racing to deconstruct the malware—and to work out who’s behind it. Their research paints a worrying picture of a sophisticated cyberweapon built and deployed by a determined and patient hacking group whose identity has yet to be established with certainty.

The hackers appear to have been inside the petrochemical company’s corporate IT network since 2014. From there, they eventually found a way into the plant’s own network, most likely through a hole in a poorly configured digital firewall that was supposed to stop unauthorized access. They then got into an engineering workstation, either by exploiting an unpatched flaw in its Windows code or by intercepting an employee’s login credentials.

Since the workstation communicated with the plant’s safety instrumented systems, the hackers were able to learn the make and model of the systems’ hardware controllers, as well as the versions of their firmware—software that’s embedded in a device’s memory and governs how it communicates with other things.

It’s likely they next acquired an identical Schneider machine and used it to test the malware they developed. This made it possible to mimic the protocol, or set of digital rules, that the engineering workstation used to communicate with the safety systems. The hackers also found a “zero-day vulnerability”, or previously unknown bug, in the Triconex model’s firmware. This let them inject code into the safety systems’ memories that ensured they could access the controllers whenever they wanted to.

Thus, the intruders could have ordered the safety instrumented systems to disable themselves and then used other malware to trigger an unsafe situation at the plant.

The results could have been horrific. The world’s worst industrial disaster to date also involved a leak of poisonous gases. In December 1984 a Union Carbide pesticide plant in Bhopal, India, released a vast cloud of toxic fumes, killing thousands and causing severe injuries to many more. The cause that time was poor maintenance and human error. But malfunctioning and inoperable safety systems at the plant meant that its last line of defense failed.

More red alerts

There have been only a few previous examples of hackers using cyberspace to try to disrupt the physical world. They include Stuxnet, which caused hundreds of centrifuges at an Iranian nuclear plant to spin out of control and destroy themselves in 2010, and CrashOverride, which Russian hackers used in 2016 to strike at Ukraine’s power grid. (Our sidebar provides a summary of these and other notable cyber-physical attacks.)

However, not even the most pessimistic of cyber-Cassandras saw malware like Triton coming. “Targeting safety systems just seemed to be off limits morally and really hard to do technically,” explains Joe Slowik, a former information warfare officer in the US Navy, who also works at Dragos.

Other experts were also shocked when they saw news of the killer code. “Even with Stuxnet and other malware, there was never a blatant, flat-out intent to hurt people,” says Bradford Hegrat, a consultant at Accenture who specializes in industrial cybersecurity.

It’s almost certainly no coincidence that the malware appeared just as hackers from countries like Russia, Iran, and North Korea stepped up their probing of “critical infrastructure” sectors vital to the smooth running of modern economies, such as oil and gas companies, electrical utilities, and transport networks.

In a speech last year, Dan Coats, the US director of national intelligence, warned that the danger of a crippling cyberattack on critical American infrastructure was growing. He drew a parallel with the increased cyber chatter US intelligence agencies detected among terrorist groups before the World Trade Center attack in 2001. “Here we are nearly two decades later, and I’m here to say the warning lights are blinking red again,” said Coats. “Today, the digital infrastructure that serves this country is literally under attack.”

At first, Triton was widely thought to be the work of Iran, given that it and Saudi Arabia are archenemies. But cyber-whodunnits are rarely straightforward. In a report published last October, FireEye, a cybersecurity firm that was called in at the very beginning of the Triton investigation, fingered a different culprit: Russia.

The hackers behind Triton had tested elements of the code used during the intrusion to make it harder for antivirus programs to detect. FireEye’s researchers found a digital file they had left behind on the petrochemical company’s network, and they were then able to track down other files from the same test bed. These contained several names in Cyrillic characters, as well as an IP address that had been used to launch operations linked to the malware.

That address was registered to the Central Scientific Research Institute of Chemistry and Mechanics in Moscow, a government-owned organization with divisions that focus on critical infrastructure and industrial safety. FireEye also said it had found evidence that pointed to the involvement of a professor at the institute, though it didn’t name the person. Nevertheless, the report noted that FireEye hadn’t found specific evidence proving definitively that the institute had developed Triton.

Researchers are still digging into the malware’s origins, so more theories about who’s behind it may yet emerge. Gutmanis, meanwhile, is keen to help companies learn important lessons from his experience at the Saudi plant. In a presentation at the S4X19 industrial security conference in January, he outlined a number of them. They included the fact that the victim of the Triton attack had ignored multiple antivirus alarms triggered by the malware, and that it had failed to spot some unusual traffic across its networks. Workers at the plant had also left physical keys that control settings on Triconex systems in a position that allowed the machines’ software to be accessed remotely.


Triton: a timeline
  • 2014: Hackers gain access to the network of the Saudi plant
  • June 2017: First plant shutdown
  • August 2017: Second plant shutdown
  • December 2017: Cyberattack made public
  • October 2018: FireEye says Triton was most likely built in a Russian lab
  • January 2019: More details emerge of the Triton incident response

If that makes the Saudi business sound like a security basket case, Gutmanis says it isn’t. “I’ve been into a lot of plants in the US that were nowhere near as mature [in their approach to cybersecurity] as this organization was,” he explains.

Other experts note that Triton shows government hackers are now willing to go after even relatively obscure and hard-to-crack targets in industrial facilities. Safety instrumented systems are highly tailored to safeguard different kinds of processes, so crafting malware to control them involves a great deal of time and painstaking effort. Schneider Electric’s Triconex controller, for instance, comes in dozens of different models, and each of these could be loaded with different versions of firmware.

That hackers went to such great lengths to develop Triton has been a wake-up call for Schneider and other makers of safety instrumented systems—companies like Emerson in the US and Yokogawa in Japan. Schneider has drawn praise for publicly sharing details of how the hackers targeted its Triconex model at the Saudi plant, including highlighting the zero-day bug that has since been patched. But during his January presentation, Gutmanis criticized the firm for failing to communicate enough with investigators in the immediate aftermath of the attack.

Schneider responded by saying it had cooperated fully with the company whose plant was targeted, as well as with the US Department of Homeland Security and other agencies involved in investigating Triton. It has hired more people since the event to help it respond to future incidents, and has also beefed up the security of the firmware and protocols used in its devices.

Andrew Kling, a Schneider executive, says an important lesson from Triton’s discovery is that industrial companies and equipment manufacturers need to focus even more on areas that may seem like highly unlikely targets for hackers but could cause disaster if compromised. These include things like software applications that are rarely used and older protocols that govern machine-to-machine communication. “You may think nobody’s ever going to bother breaking [an] obscure protocol that’s not even documented,” Kling says, “but you need to ask, what are the consequences if they do?”

An analog future?

Over the past decade or so, companies have been adding internet connectivity and sensors to all kinds of industrial equipment. The data captured is being used for everything from predictive maintenance—which means using machine-learning models to better anticipate when equipment needs servicing—to fine-tuning production processes. There’s also been a big push to control processes remotely through things like smartphones and tablets.

All this can make businesses much more efficient and productive, which explains why they are expected to spend around $42 billion this year on industrial internet gear such as smart sensors and automated control systems, according to the ARC Group, which tracks the market. But the risks are also clear: the more connected equipment there is, the more targets hackers have to aim at.

To keep attackers out, industrial companies typically rely on a strategy known as “defense in depth.” This means creating multiple layers of security, starting with firewalls to separate corporate networks from the internet. Other layers are intended to prevent hackers who do get in from accessing plant networks and then industrial control systems.

These defenses also include things like antivirus tools to spot malware and, increasingly, artificial-intelligence software that tries to spot anomalous behavior inside IT systems. Then, as the ultimate backstop, there are the safety instrumented systems and physical fail-safes. The most critical systems typically have multiple physical backups to guard against the failure of any one element.
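The layering can be sketched as a chain of predicates a request must pass in order; the layer names and rules below are invented for illustration, not any vendor’s API:

```python
# Sketch of "defense in depth": a request must pass every security layer
# in order. Layer names and rules are illustrative, not a real product's.
def corporate_firewall(req):
    return req.get("source") != "internet-unknown"

def plant_network_acl(req):
    return req.get("zone") == "engineering"

def controller_allowlist(req):
    return req.get("user") in {"ops1", "ops2"}

LAYERS = [corporate_firewall, plant_network_acl, controller_allowlist]

def admit(request: dict) -> bool:
    """A request reaches the control systems only if every layer approves."""
    return all(layer(request) for layer in LAYERS)

assert admit({"source": "vpn", "zone": "engineering", "user": "ops1"})
assert not admit({"source": "internet-unknown", "zone": "engineering", "user": "ops1"})
```

The Triton case shows the limit of the approach: a patient adversary can eventually find a path through every such layer, which is why the integrity of the safety systems behind them matters so much.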

The strategy has proved robust. But the rise of nation-state hackers with the time, money, and motivation to target critical infrastructure, as well as the increasing use of internet-connected systems, means the past may well not be a reliable guide to the future.

Russia, in particular, has shown that it’s willing to weaponize software and deploy it against physical targets in Ukraine, which it has used as a testing ground for its cyber arms kit. And Triton’s deployment in Saudi Arabia shows that determined hackers will spend years of prodding and probing to find ways to drill through all those defensive layers.

Fortunately, the Saudi plant’s attackers were intercepted, and we now know a great deal more about how they worked. But it’s a sobering reminder that, just like other developers, hackers make mistakes too. What if the bug they inadvertently introduced, instead of triggering a safe shutdown, had disabled the plant’s safety systems just when a human error or other mistake had caused one of the critical processes in the plant to go haywire? The result could have been a catastrophe even if the hackers hadn’t intended to cause it.

Experts at places like the US’s Idaho National Laboratory are urging companies to revisit all their operations in the light of Triton and other cyber-physical threats, and to radically reduce, or eliminate, the digital pathways hackers could use to get to critical processes.

Businesses may chafe at the costs of doing that, but Triton is a reminder that the risks are increasing. Gutmanis thinks more attacks using the world’s most murderous malware are all but inevitable. “While this was the first,” he says, “I’d be surprised if it turns out to be the last.”

Article link: https://www.technologyreview.com/s/613054/cybersecurity-critical-infrastructure-triton-malware/amp/

Another Defense Agency Migrates Data to Amazon Cloud – Nextgov

Posted by timmreardon on 03/20/2019
Posted in: Uncategorized.

By FRANK KONKEL

The Defense Health Agency joins a growing list of defense agencies moving their data to the commercial cloud.

The Defense Health Agency, which supports the delivery of health and medical services to millions of Army, Navy and Air Force personnel worldwide, is Amazon’s newest cloud customer.

Amazon Web Services will now host DHA’s Armed Forces Billing and Collection Utilization Solution in the AWS GovCloud (US-West) region, a cluster of data centers built specifically to host some of the government’s most sensitive data.

DHA joins the Army, U.S. Transportation Command and other defense agencies moving increasingly large—and sometimes highly sensitive—data sets to commercial clouds.

The cloud migration, executed by Virginia-based defense and technology contractor General Dynamics Information Technology, marks the first time DHA has migrated an Impact Level 4 workload—which includes sensitive unclassified data—to AWS.

“GDIT has reached a significant milestone by successfully navigating DHA’s workload to AWS GovCloud (US-West) Region,” said vice president Kamal Narang, head of GDIT’s Health Sector. “Our strong relationship with AWS and intimate knowledge of our customer’s needs made this challenge a reality and will usher in a new era of cloud agility for DHA.”

GDIT will continue its relationship with DHA following a $56 million, five-year task order awarded by the agency to operate and sustain the cloud-based Armed Forces Billing and Collection Utilization Solution.

The move to cloud reflects a growing trend within the Pentagon and across civilian agencies. Some analysts predict the Defense Department will spend as much as $2 billion in cloud-related services in the coming year. The Pentagon is also expected to award two cloud contracts—the Defense Enterprise Office Solutions and Joint Enterprise Defense Infrastructure contracts—potentially worth a total of $18 billion.

Article link: https://www.nextgov.com/it-modernization/2019/03/another-defense-agency-migrates-data-amazon-cloud/155699/

Most of AI’s business uses will be in two areas – McKinsey

Posted by timmreardon on 03/10/2019
Posted in: Uncategorized.

By Michael Chui, Nicolaus Henke, and Mehdi Miremadi

An examination of more than 400 AI use cases revealed the two areas where AI can have the greatest impact, write the authors in Harvard Business Review.

While overall adoption of artificial intelligence (AI) remains low among businesses (about 20 percent as of our last study), senior executives know that AI isn’t just hype. Organizations across sectors are looking closely at the technology to see what it can do for their business. As they should—we estimate that 40 percent of all the potential value that can be created by analytics today comes from the AI techniques that fall under the umbrella “deep learning” (which utilize multiple layers of artificial neural networks, so-called because their structure and function are loosely inspired by those of the human brain). In total, we estimate deep learning could account for between $3.5 trillion and $5.8 trillion in annual value.
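As a rough sanity check on those figures, the implied total annual value from all analytics (a derived number, not one the authors state) works out to roughly $8.8 trillion to $14.5 trillion:

```python
# Back-of-the-envelope check of the figures above, in trillions of dollars.
dl_low, dl_high = 3.5, 5.8   # deep learning's estimated annual value range
dl_share = 0.40              # deep learning's share of all analytics value

total_low = dl_low / dl_share    # implied total analytics value, low end
total_high = dl_high / dl_share  # implied total analytics value, high end

assert round(total_low, 2) == 8.75
assert round(total_high, 2) == 14.5
```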
However, many business leaders are still not exactly sure where they should apply AI to reap the biggest rewards. After all, embedding AI across the business requires significant investment in talent and upgrades to the tech stack as well as sweeping change initiatives to ensure AI drives meaningful value, whether it be through powering better decision making or enhancing consumer-facing applications.

Through an in-depth examination of more than 400 actual AI use cases across 19 industries and nine business functions, we’ve discovered an old adage proves most useful in answering the question of where to put AI to work: “Follow the money.”

The business areas that traditionally provide the most value to companies tend to be the areas where AI can have the biggest impact. In retail organizations, for example, marketing and sales has often provided significant value. Our research shows that using AI on customer data to personalize promotions can lead to a 1 to 2 percent increase in incremental sales for brick-and-mortar retailers alone. In advanced manufacturing, by contrast, operations often drive the most value. Here, AI can enable forecasting based on underlying causal drivers of demand rather than prior outcomes, improving forecasting accuracy by 10 to 20 percent. This translates into a potential 5 percent reduction in inventory costs and revenue increases of 2 to 3 percent.
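Applied to a hypothetical manufacturer (the dollar figures below are invented for illustration), those percentages translate as follows:

```python
# Illustrative application of the percentages above to a hypothetical
# manufacturer; the $100M inventory cost and $500M revenue are invented.
inventory_cost = 100.0   # $ millions
revenue = 500.0          # $ millions

inventory_savings = inventory_cost * 5 / 100   # ~5% inventory-cost reduction
revenue_gain_low = revenue * 2 / 100           # 2% revenue increase
revenue_gain_high = revenue * 3 / 100          # 3% revenue increase

assert inventory_savings == 5.0
assert (revenue_gain_low, revenue_gain_high) == (10.0, 15.0)
```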

While applications of AI cover a full range of functional areas, it is in fact in these two cross-cutting ones—supply-chain management/manufacturing and marketing and sales—where we believe AI can have the biggest impact, at least for now, in several industries (exhibit). Combined, we estimate that these use cases make up more than two-thirds of the entire AI opportunity. AI can create $1.4 trillion to $2.6 trillion of value in marketing and sales across the world’s businesses, and $1.2 trillion to $2 trillion in supply-chain management and manufacturing (some of the value accrues to companies, while some is captured by customers). In manufacturing, the greatest value from AI can be created by using it for predictive maintenance (about $0.5 trillion to $0.7 trillion across the world’s businesses). AI’s ability to process massive amounts of data, including audio and video, means it can quickly identify anomalies to prevent breakdowns, whether that be an odd sound in an aircraft engine or a malfunction on an assembly line detected by a sensor.
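The anomaly-spotting idea behind predictive maintenance can be illustrated with a toy detector that flags readings far from the mean; the sensor values and threshold below are invented:

```python
# Toy anomaly detector in the spirit of predictive maintenance: flag any
# reading far from the mean. Sensor values and threshold are invented.
from statistics import mean, stdev

def anomalies(readings, threshold=2.0):
    """Return readings more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [x for x in readings if abs(x - mu) > threshold * sigma]

vibration = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 9.0]  # last reading is the outlier
assert anomalies(vibration) == [9.0]
```

A production system would use streaming data and learned models rather than a global z-score, but the principle — detect the odd sound in the engine before the breakdown — is the same.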


Another way business leaders can home in on where to apply AI is to simply look at the functions that are already taking advantage of traditional analytics techniques. We found that the greatest potential for AI to create value is in use cases where neural network techniques could either provide higher performance than established analytical techniques or generate additional insights and applications. This is true for 69 percent of the AI use cases identified in our study. In only 16 percent of use cases did we find a “greenfield” AI solution that was applicable where other analytics methods would not be effective. (While the number of use cases for deep learning will likely increase rapidly as algorithms become more versatile and the type and volume of data needed to make them viable become more available, the percentage of greenfield deep learning use cases might not increase significantly, because more established machine learning techniques also have room to become better and more ubiquitous.)

We don’t want to come across as naïve cheerleaders. Even as we see economic potential in the use of AI techniques, we recognize the tangible obstacles and limitations to implementing AI. Obtaining data sets that are sufficiently large and comprehensive enough to feed the voracious appetite that deep learning has for training data is a major challenge. So, too, is addressing the mounting concerns around the use of such data, including security, privacy, and the potential for passing human biases onto AI algorithms. In some sectors, such as healthcare and insurance, companies must also find ways to make the results explainable to regulators in human terms: why did the machine come up with this answer? The good news is that the technologies themselves are advancing and starting to address some of these limitations.

Beyond these limitations, there are the arguably more difficult organizational challenges companies face as they adopt AI. Mastering the technology requires new levels of expertise, and process can become a major impediment to successful adoption. Companies will have to develop robust data maintenance and governance processes and focus on both the “first mile”—how to acquire data and organize data efforts—and the far more difficult “last mile,” how to integrate the output of AI models into workflows, ranging from those of clinical-trial managers and sales-force managers to procurement officers.

While businesses must remain vigilant and responsible as they deploy AI, the scale and beneficial impact of the technology on businesses, consumers, and society make pursuing AI opportunities worth a thorough investigation. The pursuit isn’t a simple prospect, but it can be initiated by evoking a simple concept: follow the money.

This article originally appeared in Harvard Business Review in July 2018.

About the author(s)
Michael Chui is a partner of the McKinsey Global Institute and is based in McKinsey’s San Francisco office; Nicolaus Henke is a senior partner in the London office; and Mehdi Miremadi is a partner in the Chicago office.

Article link: https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/most-of-ais-business-uses-will-be-in-two-areas

These Are the World’s Healthiest Nations – Bloomberg

Posted by timmreardon on 03/06/2019
Posted in: Uncategorized.

By Lee J Miller and Wei Lu
February 24, 2019, 9:00 AM EST

  • Iceland, Japan, Switzerland round out top five; U.S. is 35th
  • Health index looks at life expectancy, environmental factors

Maybe it’s something in the gazpacho or paella, as Spain just surpassed Italy to become the world’s healthiest country.

That’s according to the 2019 edition of the Bloomberg Healthiest Country Index, which ranks 169 economies according to factors that contribute to overall health. Spain placed sixth in the previous gauge, published in 2017.

Four additional European nations were among the top 10 in 2019: Iceland (third place), Switzerland (fifth), Sweden (sixth) and Norway (ninth). Japan was the healthiest Asian nation, jumping three places from the 2017 survey into fourth and replacing Singapore, which dropped to eighth. Australia and Israel rounded out the top 10 at seventh and 10th place.


The index grades nations based on variables including life expectancy while imposing penalties on risks such as tobacco use and obesity. It also takes into consideration environmental factors including access to clean water and sanitation.
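As a hypothetical illustration (the article does not publish Bloomberg’s actual variables or weights), such an index might combine a base score with penalties like so:

```python
# Hypothetical health index: reward life expectancy and clean water,
# penalize tobacco and obesity. All weights here are invented, not
# Bloomberg's actual methodology.
def health_score(life_expectancy, tobacco_rate, obesity_rate, clean_water):
    score = life_expectancy                # base: years of life expectancy
    score += 10.0 if clean_water else 0.0  # environmental bonus
    score -= 20.0 * tobacco_rate           # penalty for tobacco use (0-1)
    score -= 20.0 * obesity_rate           # penalty for obesity (0-1)
    return score

# Country A lives longer with similar risk factors, so it outranks country B.
a = health_score(83.0, tobacco_rate=0.23, obesity_rate=0.24, clean_water=True)
b = health_score(78.5, tobacco_rate=0.20, obesity_rate=0.36, clean_water=True)
assert a > b
```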
Spain has the highest life expectancy at birth among European Union nations, and trails only Japan and Switzerland globally, United Nations data show. Spain by 2040 is forecast to have the highest lifespan, at almost 86 years, followed by Japan, Singapore and Switzerland, according to the University of Washington’s Institute for Health Metrics and Evaluation.

“Primary care is essentially provided by public providers, specialized family doctors and staff nurses, who provide preventive services to children, women and elderly patients, and acute and chronic care,” according to the European Observatory on Health Systems and Policies 2018 review of Spain, which noted a decline over the past decade in cardiovascular diseases and deaths from cancer.

Researchers say eating habits may provide clues to health levels enjoyed by Spain and Italy, as a “Mediterranean diet, supplemented with extra-virgin olive oil or nuts, had a lower rate of major cardiovascular events than those assigned to a reduced-fat diet,” according to a study led by the University of Navarra Medical School.
Meanwhile in North America, Canada’s 16th-place ranking far surpassed the U.S. and Mexico, both of which dropped slightly to 35th and 53rd. Life expectancy in the U.S. has been trending lower due to deaths from drug overdoses and suicides.
Cuba placed five spots above the U.S., making it the only nation not classified as “high income” by the World Bank to be ranked that high. One reason for the island nation’s success may be its emphasis on preventive care over the U.S. focus on diagnosing and treating illness, the American Bar Association Health Law Section said in a report last year after visiting Cuba.

South Korea improved seven spots to 17th while China, home to 1.4 billion people, rose three places to 52nd. Life expectancy in China is on track to surpass the U.S. by 2040, according to the Institute for Health Metrics and Evaluation.
Sub-Saharan economies accounted for 27 of the 30 unhealthiest nations in the ranking; Haiti, Afghanistan and Yemen were the others. Mauritius was the healthiest in Sub-Saharan Africa, placing 74th globally, with the lowest rate of death from communicable diseases in a region still marred by infectious mortality.

Methodology

Bloomberg evaluated health variables and risks ranging from the behavioral to the environmental. The final index included only nations with populations of at least 0.3 million and sufficient data; 169 WHO member economies met the criteria.
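The article doesn’t publish Bloomberg’s actual formula, but the approach it describes (grade positive health variables, then penalize risks such as tobacco use and obesity) can be sketched as follows. The weights and inputs here are purely illustrative assumptions, not Bloomberg’s methodology:

```python
def health_grade(life_expectancy, clean_water_access, tobacco_use, obesity):
    """Toy composite health score; all inputs normalized to a 0-100 scale.

    Weights are illustrative guesses, not Bloomberg's actual methodology.
    """
    positives = 0.7 * life_expectancy + 0.3 * clean_water_access
    risk_penalty = 0.5 * tobacco_use + 0.5 * obesity
    return positives - 0.25 * risk_penalty

# Hypothetical, normalized inputs for two made-up countries
country_a = health_grade(life_expectancy=95, clean_water_access=99, tobacco_use=30, obesity=25)
country_b = health_grade(life_expectancy=80, clean_water_access=90, tobacco_use=45, obesity=40)
```

With these made-up numbers, country A outscores country B: longer lives and cleaner water raise the grade, while smoking and obesity drag it down, mirroring how Spain can leapfrog countries with similar wealth.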

To access the Bloomberg 2019 Healthiest Country Index data set for all nations, click HERE.


Article link: https://www.bloomberg.com/amp/news/articles/2019-02-24/spain-tops-italy-as-world-s-healthiest-nation-while-u-s-slips

CMS Office of the Actuary Releases 2018-2027 Projections of National Health Expenditures – CMS

Posted by timmreardon on 02/20/2019
Posted in: Uncategorized. Leave a comment

CMS Office of the Actuary Releases 2018-2027 Projections of National Health Expenditures

National health expenditure growth is expected to average 5.5 percent annually from 2018-2027, reaching nearly $6.0 trillion by 2027, according to a report published today by the independent Office of the Actuary at the Centers for Medicare & Medicaid Services (CMS). 

Growth in national health spending is projected to be faster than projected growth in Gross Domestic Product (GDP) by 0.8 percentage points over the same period.  As a result, the report projects the health share of GDP to rise from 17.9 percent in 2017 to 19.4 percent by 2027.
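The “nearly $6.0 trillion” figure is consistent with simple compounding. Taking a 2017 baseline of roughly $3.5 trillion (a figure from CMS’s historical national health expenditure data, not this press release) and applying ten years of 5.5 percent average growth lands close to $6 trillion:

```python
# Compound-growth check of the CMS projection. The ~$3.5T 2017 baseline
# is taken from CMS historical data, not from this press release.
base_2017 = 3.5    # trillions of dollars
growth = 0.055     # 5.5% average annual growth, 2018-2027
projected_2027 = base_2017 * (1 + growth) ** 10
print(round(projected_2027, 2))  # ~5.98, i.e. "nearly $6.0 trillion"
```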

The outlook for national health spending and enrollment over the next decade is expected to be driven primarily by:

  • Key economic factors, such as growth in income and employment, and demographic factors, such as the baby-boom generation continuing to age from private insurance into Medicare; and
  • Increases in prices for medical goods and services (projected to grow 2.5 percent over 2018-2027 compared to 1.1 percent during the period of 2014-2017).

Similar to the findings in last year’s report, the report found that by 2027, federal, state and local governments are projected to finance 47 percent of national health spending, an increase of 2 percentage points from 45 percent in 2017.  As a result of comparatively higher projected enrollment growth in Medicare, average annual spending growth in Medicare (7.4 percent) is expected to exceed that of Medicaid (5.5 percent) and private health insurance (4.8 percent).

Selected highlights in projected health insurance enrollment and national health expenditures by sector and payer include:

Health Insurance Enrollment: Net enrollment gains across all sources are generally expected to keep pace with population growth, though the insured share of the population is projected to edge down from 90.9 percent in 2017 to 89.7 percent in 2027.

Medicare: Medicare spending growth is projected to average 7.4 percent over 2018-2027, the fastest rate among the major payers.  Underlying the strong average annual Medicare spending growth are projected sustained strong enrollment growth as the baby-boomers continue to age into the program and growth in the use and intensity of covered services that is consistent with the rates observed during Medicare’s long-term history.

Medicaid: Average annual growth of 5.5 percent is projected for Medicaid spending for 2018-2027.  Medicaid expansions during 2019 in Idaho, Maine, Nebraska, Utah, and Virginia are expected to result in the first acceleration in growth in spending for the program since 2014 (from 2.2 percent in 2018 to 4.8 percent in 2019).  Medicaid spending growth is then projected to average 6.0 percent for 2020 through 2027 as the program’s spending patterns reflect an enrollment mix more heavily influenced by comparatively more expensive aged and disabled enrollees.

Private Health Insurance and Out-of-Pocket: For 2018-2027, private health insurance spending growth is projected to average 4.8 percent, slowest among the major payers, which is partly due to slow enrollment growth related to the baby-boomers transitioning from private coverage into Medicare.  Out-of-pocket expenditures are also projected to grow at an average rate of 4.8 percent over 2018-2027 and to represent 9.8 percent of total spending by 2027 (down from 10.5 percent in 2017).

Prescription Drugs:  Spending growth for prescription drugs is projected to generally accelerate over 2018-2027 (and average 5.6 percent) mostly as a result of faster utilization growth.  Underlying faster growth in the utilization of prescription drugs, particularly over 2020-2027, are a number of factors including efforts on the part of employers and insurers to encourage better medication adherence among those with chronic conditions, changing pharmacotherapy guidelines, faster projected private health insurance spending growth in lagged response to higher income growth, and an expected influx of new and expensive innovative drugs into the market towards the latter stage of the period.

Hospital:  Hospital spending growth is projected to average 5.6 percent for 2018-2027. This includes a projected acceleration in 2019, to 5.1 percent from 4.4 percent in 2018, reflecting the net result of faster expected growth in both Medicare (higher payment updates) and Medicaid (as a result of expansion in five states), but slower projected growth in private health insurance as enrollment declines slightly due to the repeal of the individual mandate.

Physician and Clinical Services: Physician and clinical services spending is projected to grow an average of 5.4 percent per year over 2018-2027.  This includes faster growth in prices over 2020-2027 for physician and clinical services due to anticipated rising wage growth related to increased demand from the aging population.

The Office of the Actuary’s report will appear at: http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsProjected.html
An article about the study is also being published by Health Affairs and is available here: https://protect2.fireeye.com/url?k=529199bd-0ec4906d-5291a882-0cc47a6a52de-97641bdac742d461&u=http://www.healthaffairs.org/doi/abs/10.1377/hlthaff.2018.05499

Article link: https://www.cms.gov/newsroom/press-releases/cms-office-actuary-releases-2018-2027-projections-national-health-expenditures

Get CMS news at cms.gov/newsroom, sign up for CMS news via email, and follow CMS on Twitter: CMS Administrator @SeemaCMS, @CMSgov, and @CMSgovPress.

Explainer: What is quantum communication? – MIT Technology Review

Posted by timmreardon on 02/18/2019
Posted in: Uncategorized. Leave a comment

Researchers and companies are creating ultra-secure communication networks that could form the basis of a quantum internet. This is how it works.

  • by Martin Giles
  • February 14, 2019

Barely a week goes by without reports of some new mega-hack that’s exposed huge amounts of sensitive information, from people’s credit card details and health records to companies’ valuable intellectual property. The threat posed by cyberattacks is forcing governments, militaries, and businesses to explore more secure ways of transmitting information.

Today, sensitive data is typically encrypted and then sent across fiber-optic cables and other channels together with the digital “keys” needed to decode the information. The data and the keys are sent as classical bits—a stream of electrical or optical pulses representing 1s and 0s. And that makes them vulnerable. Smart hackers can read and copy bits in transit without leaving a trace.

Quantum communication takes advantage of the laws of quantum physics to protect data. These laws allow particles—typically photons of light for transmitting data along optical cables—to take on a state of superposition, which means they can represent multiple combinations of 1 and 0 simultaneously. The particles are known as quantum bits, or qubits.

The beauty of qubits from a cybersecurity perspective is that if a hacker tries to observe them in transit, their super-fragile quantum state “collapses” to either 1 or 0. This means a hacker can’t tamper with the qubits without leaving behind a telltale sign of the activity.

Some companies have taken advantage of this property to create networks for transmitting highly sensitive data based on a process called quantum key distribution, or QKD. In theory, at least, these networks are ultra-secure.

What is quantum key distribution?

QKD involves sending encrypted data as classical bits over networks, while the keys to decrypt the information are encoded and transmitted in a quantum state using qubits.

Various approaches, or protocols, have been developed for implementing QKD. A widely used one known as BB84 works like this. Imagine two people, Alice and Bob. Alice wants to send data securely to Bob. To do so, she creates an encryption key in the form of qubits whose polarization states represent the individual bit values of the key.

The qubits can be sent to Bob through a fiber-optic cable. By comparing measurements of the state of a fraction of these qubits—a process known as “key sifting”—Alice and Bob can establish that they hold the same key.

As the qubits travel to their destination, the fragile quantum state of some of them will collapse because of decoherence. To account for this, Alice and Bob next run through a process known as “key distillation,” which involves calculating whether the error rate is high enough to suggest that a hacker has tried to intercept the key.

If it is, they ditch the suspect key and keep generating new ones until they are confident that they share a secure key. Alice can then use hers to encrypt data and send it in classical bits to Bob, who uses his key to decode the information.
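The sifting and distillation steps above can be sketched in a toy simulation. This is a classical Monte Carlo stand-in for the quantum physics, not a real QKD implementation: measuring a photon in the wrong basis is modeled as a coin flip, and an intercept-resend eavesdropper shows up as roughly a 25 percent error rate in the sifted key.

```python
import random

def bb84(n, eavesdrop=False, seed=0):
    """Toy BB84 run: returns Alice's and Bob's sifted keys plus error count."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]  # '+' rectilinear, 'x' diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        basis, value = a_basis, bit
        if eavesdrop:                      # intercept-resend attack
            e_basis = rng.choice("+x")
            if e_basis != basis:           # wrong-basis measurement randomizes the bit...
                value = rng.randint(0, 1)
            basis = e_basis                # ...and re-prepares the photon in Eve's basis
        # Bob's measurement: correct basis reads the value, wrong basis is a coin flip
        bob_bits.append(value if b_basis == basis else rng.randint(0, 1))

    # Key sifting: keep only positions where Alice's and Bob's bases agree
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key   = [bob_bits[i] for i in keep]
    errors = sum(a != b for a, b in zip(alice_key, bob_key))
    return alice_key, bob_key, errors

# Distillation check: no eavesdropper -> zero errors;
# intercept-resend eavesdropper -> roughly 25% errors in the sifted key.
_, _, clean_errors = bb84(4000)
a_key, b_key, eve_errors = bb84(4000, eavesdrop=True)
```

With no eavesdropper, `clean_errors` is exactly zero; with one, `eve_errors / len(a_key)` hovers near 0.25, which is the telltale sign Alice and Bob look for before trusting a key.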

We’re already starting to see more QKD networks emerge. The longest is in China, which boasts a 2,032-kilometer (1,263-mile) ground link between Beijing and Shanghai. Banks and other financial companies are already using it to transmit data. In the US, a startup called Quantum Xchange has struck a deal giving it access to 500 miles (805 kilometers) of fiber-optic cable running along the East Coast to create a QKD network. The initial leg will link Manhattan with New Jersey, where many banks have large data centers.

Although QKD is relatively secure, it would be even safer if it could count on quantum repeaters.

What is a quantum repeater?

Materials in cables can absorb photons, which means they can typically travel for no more than a few tens of kilometers. In a classical network, repeaters at various points along a cable are used to amplify the signal to compensate for this.
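The “few tens of kilometers” limit follows from exponential attenuation. Assuming a typical telecom-fiber loss of about 0.2 dB/km (a standard industry figure, not from the article), the odds of a single photon surviving drop off fast:

```python
def photon_survival(distance_km, loss_db_per_km=0.2):
    """Probability a photon survives a fiber run, given attenuation in dB/km."""
    return 10 ** (-loss_db_per_km * distance_km / 10)

for d in (10, 50, 100, 500):
    print(d, photon_survival(d))
# 10 km -> ~63%, 50 km -> 10%, 100 km -> 1%, 500 km -> ~1e-10:
# without repeaters of some kind, long links are hopeless.
```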

QKD networks have come up with a similar solution, creating “trusted nodes” at various points. The Beijing-to-Shanghai network has 32 of them, for instance. At these waystations, quantum keys are decrypted into bits and then reencrypted in a fresh quantum state for their journey to the next node. But this means trusted nodes can’t really be trusted: a hacker who breached the nodes’ security could copy the bits undetected and thus acquire a key, as could a company or government running the nodes.

Ideally, we need quantum repeaters, or waystations with quantum processors in them that would allow encryption keys to remain in quantum form as they are amplified and sent over long distances. Researchers have demonstrated it’s possible in principle to build such repeaters, but they haven’t yet been able to produce a working prototype.

There’s another issue with QKD. The underlying data is still transmitted as encrypted bits across conventional networks. This means a hacker who breached a network’s defenses could copy the bits undetected, and then use powerful computers to try to crack the key used to encrypt them.

The most powerful encryption algorithms are pretty robust, but the risk is big enough to spur some researchers to work on an alternative approach known as quantum teleportation.

What is quantum teleportation?

This may sound like science fiction, but it’s a real method that involves transmitting data wholly in quantum form. The approach relies on a quantum phenomenon known as entanglement.

Quantum teleportation works by creating pairs of entangled photons and then sending one of each pair to the sender of data and the other to a recipient. When Alice receives her entangled photon, she lets it interact with a “memory qubit” that holds the data she wants to transmit to Bob. This interaction changes the state of her photon, and because it is entangled with Bob’s, the interaction instantaneously changes the state of his photon too.

In effect, this “teleports” the data in Alice’s memory qubit from her photon to Bob’s.
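Under the hood this is the standard teleportation protocol: entangle, interact, measure, then send two classical measurement bits so the receiver can apply corrections. A minimal pure-Python state-vector simulation (my own illustrative sketch, not code from the article) shows the qubit arriving intact:

```python
import random

H = [[2**-0.5, 2**-0.5], [2**-0.5, -2**-0.5]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def apply_gate(state, gate, q, n=3):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector (qubit 0 = MSB)."""
    out = state[:]
    bit = 1 << (n - 1 - q)
    for i in range(len(state)):
        if not i & bit:
            a, b = state[i], state[i | bit]
            out[i]       = gate[0][0] * a + gate[0][1] * b
            out[i | bit] = gate[1][0] * a + gate[1][1] * b
    return out

def apply_cnot(state, control, target, n=3):
    cb, tb = 1 << (n - 1 - control), 1 << (n - 1 - target)
    return [state[i ^ tb] if i & cb else state[i] for i in range(len(state))]

def measure(state, q, n=3):
    """Projectively measure qubit q; returns (outcome, collapsed state)."""
    bit = 1 << (n - 1 - q)
    p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if i & bit)
    outcome = 1 if random.random() < p1 else 0
    norm = (p1 if outcome else 1 - p1) ** 0.5
    return outcome, [a / norm if bool(i & bit) == bool(outcome) else 0.0
                     for i, a in enumerate(state)]

def teleport(alpha, beta):
    """Teleport alpha|0> + beta|1> from qubit 0 to qubit 2."""
    state = [0.0] * 8
    state[0b000], state[0b100] = alpha, beta   # q0 holds the payload
    state = apply_gate(state, H, 1)            # Bell pair on q1 (Alice), q2 (Bob)
    state = apply_cnot(state, 1, 2)
    state = apply_cnot(state, 0, 1)            # Alice's Bell measurement...
    state = apply_gate(state, H, 0)
    m0, state = measure(state, 0)              # ...yields two classical bits
    m1, state = measure(state, 1)
    if m1: state = apply_gate(state, X, 2)     # Bob's corrections
    if m0: state = apply_gate(state, Z, 2)
    base = (m0 << 2) | (m1 << 1)               # read off q2's amplitudes
    return state[base], state[base | 1]

a, b = teleport(0.6, 0.8)   # q2 ends up as 0.6|0> + 0.8|1>, every run
```

Even though Alice’s measurement outcomes are random, the corrections make the result deterministic, and note that the two classical bits still have to travel conventionally, which is why teleportation doesn’t allow faster-than-light messaging.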

Researchers in the US, China, and Europe are racing to create teleportation networks capable of distributing entangled photons. But getting them to scale will be a massive scientific and engineering challenge. The many hurdles include finding reliable ways of churning out lots of linked photons on demand, and maintaining their entanglement over very long distances—something that quantum repeaters would make easier.

Still, these challenges haven’t stopped researchers from dreaming of a future quantum internet.

What is a quantum internet?

Just like the traditional internet, this would be a globe-spanning network of networks. The big difference is that the underlying communications networks would be quantum ones.

It isn’t going to replace the internet as we know it today. Cat photos, music videos, and a great deal of non-sensitive business information will still move around in the form of classical bits. But a quantum internet will appeal to organizations that need to keep particularly valuable data secure. It could also be an ideal way to connect information flowing between quantum computers, which are increasingly being made available through the computing cloud.

China is in the vanguard of the push toward a quantum internet. It launched a dedicated quantum communications satellite called Micius a few years ago, and in 2017 the satellite helped stage the world’s first intercontinental, QKD-secured video conference, between Beijing and Vienna. A ground station already links the satellite to the Beijing-to-Shanghai terrestrial network. China plans to launch more quantum satellites, and several cities in the country are laying plans for municipal QKD networks.

Some researchers have warned that even a fully quantum internet may ultimately become vulnerable to new attacks that are themselves quantum based. But faced with the hacking onslaught that plagues today’s internet, businesses, governments, and the military are going to keep exploring the tantalizing prospect of a more secure quantum alternative.

Article link: https://www.technologyreview.com/s/612964/what-is-quantum-communications/

Martin Giles, San Francisco Bureau Chief


Design thinking, explained – MIT Sloan

Posted by timmreardon on 02/15/2019
Posted in: Uncategorized. Leave a comment

Rebecca Linke

Sep 14, 2017

What is design thinking?

Design thinking is an innovative problem-solving process rooted in a set of skills.

The approach has been around for decades, but it only started gaining traction outside of the design community after the 2008 Harvard Business Review article [subscription required] titled “Design Thinking” by Tim Brown, CEO and president of design company IDEO.

Since then, the design thinking process has been applied to developing new products and services, and to a whole range of problems, from creating a business model for selling solar panels in Africa to the operation of Airbnb.

At a high level, the steps involved in the design thinking process are simple: first, fully understand the problem; second, explore a wide range of possible solutions; third, iterate extensively through prototyping and testing; and finally, implement through the customary deployment mechanisms.

The skills associated with these steps help people apply creativity to effectively solve real-world problems better than they otherwise would. They can be readily learned, but take effort. For instance, when trying to understand a problem, setting aside your own preconceptions is vital, but it’s hard. Creative brainstorming is necessary for developing possible solutions, but many people don’t do it particularly well. And throughout the process it is critical to engage in modeling, analysis, prototyping, and testing, and to really learn from these many iterations.

Once you master the skills central to the design thinking approach, they can be applied to solve problems in daily life and any industry.

Here’s what you need to know to get started.

Understand the problem 

The first step in design thinking is to understand the problem you are trying to solve before searching for solutions. Sometimes, the problem you need to address is not the one you originally set out to tackle.

“Most people don’t make much of an effort to explore the problem space before exploring the solution space,” said MIT Sloan professor Steve Eppinger. The mistake they make is to try and empathize, connecting the stated problem only to their own experiences. This falsely leads to the belief that you completely understand the situation. But the actual problem is always broader, more nuanced, or different than people originally assume.

Take the example of a meal delivery service in Holstebro, Denmark. When a team first began looking at the problem of poor nutrition and malnourishment among the elderly in the city, many of whom received meals from the service, it thought that simply updating the menu options would be a sufficient solution. But after closer observation, the team realized the scope of the problem was much larger, and that they would need to redesign the entire experience, not only for those receiving the meals, but for those preparing the meals as well. While the company changed almost everything about itself, including rebranding as The Good Kitchen, the most important change the company made when rethinking its business model was shifting how employees viewed themselves and their work. That, in turn, helped them create better meals (which were also drastically changed), yielding happier, better nourished customers.

Involve users

Imagine you are designing a new walker for rehabilitation patients and the elderly, but you have never used one. Could you fully understand what customers need? Certainly not, if you haven’t extensively observed and spoken with real customers. There is a reason that design thinking is often referred to as human-centered design.

“You have to immerse yourself in the problem,” Eppinger said.

How do you start to understand how to build a better walker? When a team from MIT’s Integrated Design and Management program, together with the design firm Altitude, took on that task, they met with walker users to interview them, observe them, and understand their experiences.

“We center the design process on human beings by understanding their needs at the beginning, and then include them throughout the development and testing process,” Eppinger said.

Central to the design thinking process is prototyping and testing (more on that later) which allows designers to try, to fail, and to learn what works. Testing also involves customers, and that continued involvement provides essential user feedback on potential designs and use cases. If the MIT-Altitude team studying walkers had ended user involvement after its initial interviews, it would likely have ended up with a walker that didn’t work very well for customers.

It is also important to interview and understand other stakeholders, like people selling the product, or those who are supporting the users throughout the product life cycle.

Go wild!

The second phase of design thinking is developing solutions to the problem (which you now fully understand). This begins with what most people know as brainstorming.

Hold nothing back during brainstorming sessions — except criticism. Infeasible ideas can generate useful solutions, but you’d never get there if you shoot down every impractical idea from the start.

“One of the key principles of brainstorming is to suspend judgment,” Eppinger said. “When we’re exploring the solution space, we first broaden the search and generate lots of possibilities, including the wild and crazy ideas. Of course, the only way we’re going to build on the wild and crazy ideas is if we consider them in the first place.”

That doesn’t mean you never judge the ideas, Eppinger said. That part comes later, in downselection. “But if we want 100 ideas to choose from, we can’t be very critical.”

In the case of The Good Kitchen, the kitchen employees were given new uniforms. Why? Uniforms don’t directly affect the competence of the cooks or the taste of the food.

But during interviews conducted with kitchen employees, designers realized that morale was low, in part because employees were bored preparing the same dishes over and over again, in part because they felt that others had a poor perception of them. The new, chef-style uniforms gave the cooks a greater sense of pride. It was only part of the solution, but if the idea had been rejected outright, or perhaps not even suggested, the company would have missed an important aspect of the solution.

Prototype and test. Repeat.

You’ve defined the problem. You’ve spoken to customers. You’ve brainstormed, come up with all sorts of ideas, and worked with your team to boil those ideas down to the ones you think may actually solve the problem you’ve defined.

What next?

“We don’t develop a good solution just by thinking about a list of ideas, bullet points and rough sketches,” Eppinger said. “We explore potential solutions through modeling and prototyping. We design, we build, we test, and repeat — this design iteration process is absolutely critical to effective design thinking.”

Repeating this loop of prototyping, testing, and gathering user feedback is crucial for making sure the design is right — that is, it works for customers, you can build it, and you can support it.

“After several iterations, we might get something that works, we validate it with real customers, and we often find that what we thought was a great solution is actually only just OK. But then we can make it a lot better through even just a few more iterations,” Eppinger said.

Implementation

The goal of all the steps that come before this is to have the best possible solution before you move into implementing the design. Your team will spend most of its time, its money, and its energy on this stage.

“Implementation involves detailed design, training, tooling, and ramping up. It is a huge amount of effort, so get it right before you expend that effort,” said Eppinger.

Think big

Design thinking isn’t just for “things.” If you are only applying the approach to physical products, you aren’t getting the most out of it. Design thinking can be applied to any problem that needs a creative solution. When Eppinger ran into a primary school educator who told him design thinking was big in his school, Eppinger thought he meant that they were teaching students the tenets of design thinking.

“It turns out they meant they were using design thinking in running their operations and improving the school programs. It’s being applied everywhere these days,” Eppinger said.

In another example from the education field, Peruvian entrepreneur Carlos Rodriguez-Pastor hired design consulting firm IDEO to redesign every aspect of the learning experience in a network of schools in Peru. The ultimate goal? To elevate Peru’s middle class.

As you’d expect, many large corporations have also adopted design thinking. IBM has adopted it at a company-wide level, training many of its nearly 400,000 employees in design thinking principles.

What can design thinking do for your business?

The impact of all the buzz around design thinking today is that people are realizing that “anybody who has a challenge that needs creative problem solving could benefit from this approach,” Eppinger said. That means that managers can use it, not only to design a new product or service, “but anytime they’ve got a challenge, a problem to solve.”

Applying design thinking techniques to business problems can help executives across industries rethink their product offerings, grow their markets, offer greater value to customers, or innovate and stay relevant. “I don’t know industries that can’t use design thinking,” said Eppinger.

Ready to go deeper?

Read “The Designful Company” by Marty Neumeier, a book that focuses on how businesses can benefit from design thinking, and “Product Design and Development,” co-authored by Eppinger, to better understand the detailed methods.

Register for an MIT Sloan Executive Education course:

  • Systematic Innovation of Products, Processes, and Services, a five-day course taught by Eppinger and other MIT professors.
  • Leadership by Design: Innovation Process and Culture, a two-day course taught by MIT Integrated Design and Management director Matthew Kressy.
  • Managing Complex Technical Projects, a two-day course taught by Eppinger.
  • Apply for Innovation of Products and Services: MIT’s Approach to Design Thinking, an Emeritus Institute of Management course taught by Eppinger.

The expert

Steve Eppinger is a professor of management science and innovation at MIT Sloan. He holds the General Motors Leaders for Global Operations Chair and has a PhD from MIT in engineering. He is the faculty co-director of MIT’s System Design and Management program and Integrated Design and Management program, both master’s degrees joint between the MIT Sloan and Engineering schools. His research focuses on product development and technical project management, and has been applied to improving complex engineering processes in many industries.

Article link: http://mitsloan.mit.edu/ideas-made-to-matter/design-thinking-explained?utm_source=mitsloantwitter&utm_medium=social&utm_campaign=designthinkingexplainer

Apple’s deal with the VA is a big step toward giving patients control over their own health info – CNBC

Posted by timmreardon on 02/12/2019
Posted in: Uncategorized. Leave a comment
  • Apple’s work with the VA is a big deal, and not just for veterans. 
  • Health industry experts say the barriers that have prevented patients, doctors and start-up app developers from accessing health information are now coming down.
  • Apple is a first mover in taking advantage of new rules and regulations designed to prevent information blocking.

Apple announced Monday that it’s working with the U.S. Department of Veterans Affairs to bring health records to the iPhone.

That means that vets receiving their care from the VA will be able to see medical information like allergies, immunizations, labs and procedures directly on their iPhones with just a few clicks.

“By bringing Health Records on iPhone to VA patients, we hope veterans will experience improved healthcare that will enhance their lives,” said Apple’s COO, Jeff Williams, in a statement.

Apple already works with dozens of hospitals that are integrated with its Health Records software, so their patients can access clinical information. But the VA, which represents 9 million people, is a big step forward for the company, as it’s the largest medical system in the country.

It’s also a big deal for people who aren’t veterans.

Industry experts say that Apple is taking advantage of a bigger movement to force medical records companies and insurers to open up access to health information, which is supported by the government and different academic groups. Also this week, the Department of Health and Human Services shared its much-anticipated rules that are designed to prevent information blocking.

“The barriers are coming down,” said Kenneth Mandl, a director of computational health informatics at Boston Children’s Hospital and a longtime advocate of the “App Store for health” concept.

“And Apple is a first mover in taking advantage of these new laws and regulations,” he said.

The earliest beneficiaries will be early adopters of Apple’s health records software, who can use it to get their data from participating health systems, as well as veterans and patients enrolled in Medicare.

But Mandl believes it’s only a matter of time before commercial insurers and other groups follow suit.

As this trend continues, anyone with a smartphone will someday be able to see their clinical record (what happened when they got treated at a hospital or clinic), and their claims history (what got billed). That unified data set will also open up a lot of opportunities for health app developers that can generate important new insights.

It’s also good for patients, he said, as they can present everything to a doctor rather than having to be subjected to duplicate tests and procedures.

Mandl and others have already put decades of work toward making this a reality for patients, many of whom still must request records from every hospital or clinic they’ve visited. There’s been a big trend toward open standards, as well as to reduce the crazy fees some electronic medical records providers charge to give third-party access to this data.

“Until now, the data systems that patients and doctors rely on for their care have been composed of many different companies’ products run by many different hospitals without an overarching strategy for how they talk to each other, as well as other software systems,” said Mandl.

But these rules promote a “universal approach for connecting apps to health systems, the same way you might connect an app to your smartphone.”
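The article doesn’t name the standard, but Apple’s Health Records feature is built on HL7 FHIR, which exposes clinical data as plain JSON resources over REST. A minimal sketch of parsing a FHIR-style allergy record follows; the record contents and the endpoint in the comment are invented for illustration:

```python
import json

# A minimal FHIR-style AllergyIntolerance resource, invented for illustration.
# A real record would come from a REST call such as
# GET https://fhir.example-hospital.org/AllergyIntolerance?patient=123 (hypothetical URL).
sample = json.loads("""
{
  "resourceType": "AllergyIntolerance",
  "clinicalStatus": {"coding": [{"code": "active"}]},
  "code": {"text": "Penicillin"},
  "patient": {"reference": "Patient/123"}
}
""")

assert sample["resourceType"] == "AllergyIntolerance"
substance = sample["code"]["text"]                      # the allergen
status = sample["clinicalStatus"]["coding"][0]["code"]  # active vs. resolved
```

Because every participating hospital emits the same resource shapes, an app written against this format works against any of them, which is the “connect an app the same way you connect it to your smartphone” idea Mandl describes.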

Article link: https://www.cnbc.com/amp/2019/02/11/apple-va-deal-putting-patients-in-control-of-their-health-info.html

Winning Health Tech Entrepreneurs Will Focus On Implementation, Not Fetishize Invention – Forbes

Posted by timmreardon on 02/12/2019
Posted in: Uncategorized. Leave a comment


One word: implementation.

Increasingly, I’m convinced that the underappreciated challenges of implementation describe the ever-expanding gap between the promise of emerging technologies (sensors, AI) and their comparatively limited use in clinical care and pharmaceutical research. (Updated disclosure: I am now a VC, associated with a pharma company; views expressed, as always, are my own.)

Technology Promises Disruption Of Healthcare…

Let’s start with some context. Healthcare, it is universally agreed, is “broken,” and in particular, many of the advances and conveniences we now take for granted in virtually every other domain remain largely aspirational goals, or occasionally pilot initiatives, in medicine.

Healthcare is viewed by many as an ossified enterprise desperately in need of some disruption. As emerging technologies shook up other industries originally viewed as too hidebound to ever change, there was in many quarters a profound hope that advances like the smartphone or AI, and approaches like agile development and design thinking, could reinvent the way care is delivered and, more generally, help reconceptualize the way each of us thinks about health and disease.

In particular, these technologies offered the promise of helping improve care in at least five ways:

  • From reactive to anticipatory
  • From episodic to continuous
  • From a focus on the average patient to a focus on each individual patient
  • From care based on precedent (previous patients) to care based on continuous feedback and learning
  • From patient-as-recipient of care to patient-as-participant (and owner/driver of care) – a foundational theme of both the PASTEUR translational medicine training program Denny Ausiello and I organized in 2000 (summarized in the American Journal of Medicine, here), as well as of Eric Topol’s The Patient Will See You Now (my Wall Street Journal review here).

As Denny and I wrote in 2013, this time in the context of our CATCH digital health translational medicine initiative, emerging technology:

“…provides a way for medicine to break out of its traditional constraints of time and place, and understand patients in a way that’s continuous rather than episodic, and that strives to offer care in a fashion that’s anticipatory or timely rather than reactive or delayed.”

These technologies also afforded new hopes to pharma, in particular, the ability to:

  • better understand disease (assessment of phenotype and genotype that’s at once more granular and more comprehensive);
  • better understand illness/patient experience of disease (can capture more completely, and perhaps more quantitatively and in more dimensions, what it feels like to experience a condition);
  • forge a close connection with patients, and add value beyond the pill. The idea of moving from assets to solutions is a perennial favorite of consultants (this, for example).

… And How’s That Going?

And yet, here we are. While some consultants suggest we are further along than even their extravagantly optimistic predictions of 2013 had imagined, my own conversations with a range of stakeholders suggest progress has been painfully slow, and that the practice of both medicine and drug development generally has not felt the impact of these emerging technologies, to put it very politely (most experts with whom I’ve spoken over the last several months have been far blunter than that).

As Warner Wolf used to say, let’s go to the videotape. Consider some of the published, peer-reviewed studies evaluating digital health approaches:

  • Patients invited to share tracker data with providers via EHR: little initial traction.
  • Health coaching and telemonitoring after heart failure patients discharged: no impact on 180-day readmission rate.
  • Electronic pill bottles plus financial incentives for patients discharged after a heart attack: no impact on medication adherence or outcomes.
  • Smartphone self-monitoring: no impact on healthcare costs or utilization (especially striking since the study was done – and to his credit, published – by smartphone-in-health champion Eric Topol).
  • Fitness trackers in promoting activity and driving weight loss: multiple studies (for example, here, here) showing absence of durable impact.

Given these data, it may not come as a great surprise that a soon-to-be-published review (previewed by its lead author, Brennan Spiegel, on Twitter), representing a systematic evaluation of high-quality randomized controlled trials, reportedly finds that “device enabled remote monitoring does not consistently improve clinical outcomes,” according to Spiegel.

The struggles of digital health to demonstrate value are not new – see this post from January of 2014 – nor are they exceptional in medicine. In fact, as I’ve previously noted, many technologies and approaches thought intuitively to offer obvious benefit turned out not to, from the use of bone marrow transplant in breast cancer to the use of a category of anti-arrhythmic medication following heart attack to the routine use of a pulmonary artery catheter in ICU patients. In each case, benefit was thought so obvious as to question the ethics of even doing a randomized study, and then studies were done refuting the hypothesis.

Technology Adoption In Pharma

Doubts about efficacy are also among the reasons pharma has been extremely slow to adopt new technology, concerns that have been validated by some early pilots. For example, I am aware of what seemed to be the perfect case of delivering a solution rather than a product: a company made a surgical product, and invested significant resources in developing a service that provided useful post-op advice and support (delivered by highly trained nurses) to patients who received the product, thus helping the patient, the surgeon (unburdening his or her office while simultaneously helping drive better outcomes), and the company, by making its product more attractive. Result: I have been told that the service was ultimately shut down, because few patients availed themselves of it.

I’ve also heard from many stakeholders of a broader concern about the solution-not-pill approach: from the perspective of many commercial organizations, putting resources towards “solutions” takes money from commercial budgets, and is often felt not to deliver commensurate commercial returns. Translation: it eats into profit margins. Deeper translation: while “solutions” might be appealing in a truly value-based world, where revenue is driven by outcomes, at least today, revenue is driven by sales (more accurately, sales minus rebates), and investing in solutions, in my industry survey, tends to be seen as something that’s championed far more enthusiastically in public than in private.

One more dispiriting (similarly sanitized/anonymized) example: in 2013, I wrote that one of the advantages of capturing patient experience was that it provided a way of advancing a product that offered similar primary endpoints but which delivered a better patient experience. Shortly after that was published, I was told of an example where the exact opposite occurred: a company developed what was essentially an improved version of an existing oncology product, with better tolerability. However, it was killed at a relatively late stage by a pharma’s commercial team, who determined they would never be able to get “premium pricing,” especially since the existing product would be generic relatively soon. Without a significantly improved primary endpoint, the commercial group determined, payors would simply not reimburse for the new product. (Of course, one could argue that improved tolerance might lead to higher adherence and better real-world outcomes, but the commercial team, at least in this case, apparently didn’t anticipate that would be persuasive.) While I assume both pharmas and payors would (and will) push back against this example, I suspect it’s fairly representative of how these decisions are actually made.

Digital Biomarkers, In Context

An area often cited as holding particular promise is “digital biomarkers” – using technology to provide the sort of information we’ve long sought from traditional biomarkers, such as an early read on whether a medicine is doing what it should. The challenge here, though, is one of compounded hype – the extravagant expectations around biomarkers multiplied by the extravagant expectations around digital technology. The reality – as Anna Barker, leader of the National Biomarker Development Association, has long emphasized – is that it’s astonishingly difficult to develop a robust, validated biomarker, and it requires a methodical assessment process – as traditional medical device and diagnostics makers appreciate all too well.

The problem of utilizing biomarkers in early clinical drug development is especially close to home for me, as I spent several years in pharma specifically working in this area. The goal – and the real way biomarkers could save a ton of money – is not to provide reassurance to the clinical team, but to tell them their drug doesn’t seem to be working, and isn’t likely to work. By the time a drug is ready for testing in people, it more or less is what it is – either it will safely work for the intended indication or it won’t. Most won’t, and the sooner you realize that, the better. Thus, if a biomarker can persuade a team to kill a product in Phase 1 rather than after Phase 3, you save a huge amount of time and money.

The problem, however, is that understandably, all the incentives and all the glory (such as it is) in drug development revolve around moving promising products forward, and every team desperately wants to believe that its molecule is going to be the next Sovaldi or Keytruda. It’s in this context that biomarkers must prove their worth, and you can imagine what a high bar it is. While teams welcome data that vaguely support their product, they are far more critical of data that might derail development – understandable given all the work that’s already gone into getting the drug into clinical trials. Your biomarker data needs to be solid enough to persuade them that further effort is futile.

(Of course there are many other uses for biomarkers in clinical development, such as dose selection.)

Tech vs Pharma View of Data: Positive vs Negative Optionality

Taking a step back, from biomarkers specifically to data more generally, one of the most profound differences between pharma and tech companies is how they view data, and the collection of data. Most tech companies strive to collect as much data as possible, as these data represent largely positive optionality – lots of upside (opportunities to capitalize on the knowledge), comparatively little downside. On the other hand – and especially in late development – pharma companies have traditionally viewed data collection in an extremely guarded way, as a risky undertaking that in many ways provides more downside than upside. In my experience, pharmas aren’t seeking to bury relevant safety signals – if a product is harmful, they desperately want to know, at least from what I’ve seen and experienced. Their concern instead is their obligation to pursue anything they might discover, including the slew of false positives that inevitably emerge as you evaluate more and more data. While scientists and regulators (Norman Stockbridge, as I noted this April in the Washington Post, has been thinking about this for years) are increasingly used to handling this risk, and adjusting for false discovery rate, I suspect the concern is basically that trial lawyers and juries may be less interested in such subtleties.
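Adjusting for the false discovery rate, mentioned above, typically means a procedure such as Benjamini–Hochberg: rank the p-values and reject hypotheses up to the largest rank whose p-value falls under a scaled threshold. A minimal sketch (the p-values are invented for illustration, not from any real trial):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha,
    using the Benjamini-Hochberg step-up procedure."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    # Reject the k_max hypotheses with the smallest p-values
    return sorted(order[:k_max])

# Ten illustrative p-values, e.g. from exploratory safety signals
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.368]
print(benjamini_hochberg(pvals))  # -> [0, 1]: only the first two survive
```

The point of the example is the one in the text: with many exploratory comparisons, nominally "significant" p-values (here, 0.039–0.042) are expected by chance alone, which is exactly the slew of false positives pharma worries about being obligated to pursue.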

A promising recent trend, however, is that increasingly, pharma companies have started to worry about the other side of this equation – the upside, the opportunities and insights they’re forgoing by not taking a closer and more coherent and integrated look at the data they have, and the data they could be collecting. My sense is that much of this newfound interest in pharma is actually being driven at the board level, down into the organization, as some board members seem concerned about the possibility of a “tech giant” swooping in and figuring out some key insights that were easily available to a pharma company willing to take off its own blinders. The recent AI hype has only added fuel to this fire.

There are also two sides to pharma’s reputation for being conservative and resistant to change. Most leaders in the industry would acknowledge there’s a lot of truth to this, and many in the trenches would argue it’s for good reason: someone’s always trying to sell pharma a shiny new object, and the vast majority of these promises have failed to deliver – often after significant investment of time and treasure. Industry insiders who are fans of Nassim Taleb might even invoke his critique of “neomania,” and perhaps assert they prefer assays and approaches that have been “battle-tested,” and have “withstood the test of time.”

On the other hand, many innovators I know within and outside of the industry are beyond frustrated by the glacial pace of change, by the fact that, as UCSF neurologist John Hixon notes (and you can find our 2015 Tech Tonics interview with him here), most pharmas still use paper diaries to assess seizures, even though better metrics and measures are now available. As he and many others have emphasized to me, the mentality seems to be that no one has ever gotten fired for effectively executing an approach that’s led to FDA approvals before.

A particularly interesting side note here is that in this sort of situation, you might anticipate that the innovation would come from the agile upstart company, perhaps a small biotech more willing to take a chance on a promising emerging methodology. But curiously, almost everyone with whom I’ve spoken – including experts at big pharmas, small biotechs, and the CROs that serve them (e.g., this Tech Tonics interview with a digital health expert at Medidata) – believes that the experimentation will be driven by big pharma, simply because big pharmas are more likely to have the money to explore such an approach. Small biotechs, in contrast, were felt to be less likely to take a chance, as many seek to partner programs with, or be acquired by, big pharmas, and thus need the big pharmas to be comfortable with their data and approach.

So where does this leave us? Is digital health “digital snake oil,” as the head of the AMA suggested last year, or “dead,” as an investor suggested (somewhat tongue-in-cheek) a few months ago? Phrased differently: is the juice worth the squeeze?

Implementation Matters

Which brings us back to implementation (remember implementation? This is a blog post about implementation….)

Most of us think about innovation (especially in technology) as revelation, a singular event that once made visible, radically changes how we think and act.

Yet that turns out to be a poor mental model of how innovation finds its way into our lives. Consider, for instance, the vast distance between the very first automobile, created by Carl Benz in Germany in 1885-1886, and the modern car. Benz’s car was a novelty; it didn’t immediately transform the lives of German citizens or occupy the central role cars have in our lives today. The differences you’d probably notice at first are the construction, the materials, the design of the engine. But also consider the world into which Benz’s car was born: there weren’t asphalt highways connecting every conceivable destination; there weren’t gas stations in every town, often at many intersections. There were mostly dirt roads and a lot of horses. For Benz’s innovation to be (more) fully realized, to be implemented at scale, there needed to be a series of advances.

The distinction between innovation and implementation was a key takeaway from a fascinating book written by Boston University economist (and former software engineer and technology CEO) James Bessen, entitled Learning By Doing (and hat-tip to AstraZeneca physician-scientist Kevin Horgan for suggesting it to me). While the book is perhaps best known as a response to the idea (raised in The Second Machine Age and elsewhere) that technology is associated with increased inequality (I don’t have the depth to mediate this debate), it was Bessen’s discussion of implementation that really caught my attention.

“The distinction between invention and implementation is critical and too often ignored,” Bessen writes, and goes on to cite data suggesting that “75-95% of the productivity gains from many major new technologies were realized only after decades of improvement in the implementation.” (See figure below.)

According to Bessen,

“A new technology typically requires much more than an invention in order to be designed, built, installed, operated, and maintained. Initially much of this new technical knowledge develops slowly because it is learned through experience, not in the classroom.”

The four hurdles to technology implementation at scale that Bessen outlines will seem familiar to anyone in healthcare:

  1. Many people in different occupations need to acquire emerging specialized knowledge, skill, and know-how.
  2. The technology itself often needs to be adapted for different applications.
  3. Businesses need to figure out how best to use the new technology, and how to organize the workplace.
  4. New training institutions and new labor markets are often required.

One problem – of particular relevance to healthcare – is what Bessen calls “coordination failure.” As he points out,

“[E]arly-stage technologies typically have many different versions. For example, early typewriters had different keyboard layouts. Workers choose a particular version to learn and firms invest in a particular version, but they need to coordinate their technology choices for markets to work well…. Market coordination won’t happen unless new technology standards are widely accepted, and sometimes that takes decades.”

While Bessen doesn’t offer any magic answers to the challenge of implementation (he does highlight open standards and employee mobility as positive factors), I was compelled by his detailed description of technology implementation, and his framing of it as perhaps at least as important as – but also very different from — the challenge of coming up with a new idea itself. It also seemed like an important reminder that some of the most important success stories at the intersection of health and tech may not come from the team that’s developed the sexiest new technology – the best AI, the most sensitive sensor – but rather the team that’s figured out how best to apply a technology, how best to pragmatically solve the implementation challenge.

There’s a final point that deserves mention, as it’s all too easy to overlook. As we contemplate the challenges of technology implementation, and acknowledge how far we have to go, we must also remind ourselves why we’re trying so passionately in the first place. While it’s undoubtedly true that technology is overhyped – at times obscenely, comically, and dangerously so – it also provides us with radically different ways to see the world, engage the world, and change the world. New tools – as science historian/philosopher Douglas Robertson argued in his 2003 classic Phase Change (also recommended by Kevin Horgan) – play a critical role in driving most paradigmatic change in science, enabling us to ask questions previous generations could perhaps never even contemplate, and tackle them in a fashion previous generations might never have imagined.

Bottom Line

When we survey the landscape at the intersection of technology and health, it’s absolutely true, as critics point out, that most of these technologies have not yet demonstrated the potential that advocates (to their credit) see and (to their shame) assert has already arrived, whether in precision medicine (as I reviewed in 2015) or digital health (as I reviewed last year). But I also believe there’s a there there. The powerful (and often interrelated) technologies we see emerging – from precision medicine to cloud computing to AI – really do afford the opportunity to radically reconceptualize our world and, more to the point, to profoundly refine our understanding of health and disease, and ultimately effect positive change.

But coming up with even transformative technologies is just the first step, and a step that historically has rarely resulted in the sort of instant transformation we’ve naively anticipated. It is the long and complex path toward implementation that we should have anticipated, and with which many of our most imaginative innovators must now engage.

David Shaywitz Contributor

 

Article link: https://www.forbes.com/sites/davidshaywitz/2017/12/10/winning-health-tech-entrepreneurs-will-focus-on-implementation-not-fetishize-invention/#52eebfb943c2
