An AI-produced video could show Donald Trump saying or doing something extremely outrageous and inflammatory. It would be only too believable, and in a worst-case scenario it might sway an election, trigger violence in the streets, or spark an international armed conflict.
Fortunately, a new digital forensics technique promises to protect President Trump, other world leaders, and celebrities against such deepfakes—for the time being, at least. The new method uses machine learning to analyze a specific individual’s style of speech and movement, what the researchers call a “soft biometric signature.”
The researchers, from UC Berkeley and the University of Southern California, used an existing tool to extract the face and head movements of individuals. They also created their own deepfakes for Donald Trump, Barack Obama, Bernie Sanders, Elizabeth Warren, and Hillary Clinton using generative adversarial networks.
The team then used machine learning to distinguish the head and face movements that characterize the real person. These subtle signals—the way Bernie Sanders nods while saying a particular word, perhaps, or the way Trump smirks after a comeback—are not currently modeled by deepfake algorithms.
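In rough outline, that approach can be sketched in a few lines of code. The fragment below is illustrative only, not the researchers’ implementation: it assumes per-frame facial action-unit and head-pose features have already been extracted by a tracker such as OpenFace, summarizes each clip by the correlations among those features, and trains a one-class model on authentic footage alone, so anything outside the person’s learned “signature” is flagged.

```python
# Illustrative sketch: flag clips whose motion "signature" deviates from a
# model trained only on authentic footage of one person.
import numpy as np
from sklearn.svm import OneClassSVM

def clip_signature(au_frames: np.ndarray) -> np.ndarray:
    """au_frames: (n_frames, n_features) per-frame action-unit / head-pose
    values from a tracker such as OpenFace. The clip is summarized by the
    upper triangle of the feature-feature correlation matrix."""
    corr = np.corrcoef(au_frames, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Train on many authentic clips of the target individual...
rng = np.random.default_rng(0)
real_clips = [rng.normal(size=(300, 16)) for _ in range(50)]  # placeholder data
X_real = np.stack([clip_signature(c) for c in real_clips])
detector = OneClassSVM(nu=0.05, gamma="scale").fit(X_real)

# ...then score a suspect clip: -1 means "inconsistent with this person."
suspect = rng.normal(size=(300, 16)) * 1.5
print(detector.predict(clip_signature(suspect)[None, :]))
```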
In experiments the technique was at least 92% accurate in spotting several variations of deepfakes, including face swaps and ones in which an impersonator is using a digital puppet. It was also able to deal with artifacts in the files that come from recompressing a video, which can confuse other detection techniques. The researchers plan to improve the technique by accounting for characteristics of a person’s speech as well. The research, which was presented at a computer vision conference in California this week, was funded by Google and DARPA, a research wing of the Pentagon. DARPA is funding a program to devise better detection techniques.
The problem facing world leaders (and everyone else) is that it has become ridiculously simple to generate video forgeries with artificial intelligence. False news reports, bogus social-media accounts, and doctored videos have already undermined political news coverage and discourse. Politicians are especially concerned that fake media could be used to sow misinformation during the 2020 presidential election.
Some tools for catching deepfake videos have been produced already, but forgers have quickly adapted. For example, for a while it was possible to spot a deepfake by tracking the speaker’s eye movements, which tended to be unnatural in deepfakes. Shortly after this method was identified, however, deepfake algorithms were tweaked to include better blinking.
“We are witnessing an arms race between digital manipulations and the ability to detect those, and the advancements of AI-based algorithms are catalyzing both sides,” says Hao Li, a professor at the University of Southern California who helped develop the new technique. For this reason, his team has not yet released the code behind the method.
Li says it will be particularly difficult for deepfake-makers to adapt to the new technique, but he concedes that they probably will eventually. “The next step to go around this form of detection would be to synthesize motions and behaviors based on prior observations of this particular person,” he says.
Li also says that as deepfakes get easier to use and more powerful, it may become necessary for everyone to consider protecting themselves. “Celebrities and political figures have been the main targets so far,” he says. “But I would not be surprised if in a year or two, artificial humans that look indistinguishable from real ones can be synthesized by any end user.”
By Brandi Vincent, Staff Correspondent June 20, 2019
Though some fruits of quantum information science (think atomic clocks and CAT scan technology) are increasingly prevalent in Americans’ daily lives, there is still a great deal of progress to be made across the quantum computing space, scientific experts from government, industry and academia said in Washington Wednesday.
At the end of last year, President Trump signed the National Quantum Initiative into law, which authorized more than $1.2 billion in quantum research funding and capped a major effort by the executive and legislative branches and the broader quantum research and development community. The initiative calls on the nation to substantially expand its already sizable QIS efforts.
“So we are in the process of going through that expansion, and part of that is through the president’s budget, part of that is by having individual agencies of the United States government take on expanded roles,” Jake Taylor, assistant director for quantum information science in the White House’s Office of Science and Technology Policy, said.
Taylor said the policy’s base is vast, but his team is expressly working to ensure that economic growth opportunities and opportunities for improving the world are baked into quantum policies and systems. He said they are also working with international collaborators to advance and govern the emerging technology.
“When you’re taking the scientific space and you’re trying to really push it forward, you’re seeking the best answers around the world, and so the way you do that is by building and making strong collaborations to keep that culture of discovery moving,” Taylor said.
The assistant director said his team participated in a “wonderful dialogue” with the European Union last month and will take part in a workshop this summer to look further into expanding collaboration with the EU. He said both sides fundamentally support the same “science first” approach and the understanding that policies should benefit individual citizens across diverse societies.
“What I can say is that by choosing to maintain a leadership role and to work with international collaborators and to cooperate across the world, we have the opportunity to realize that and it’s up to us to maintain that strength,” Taylor said. “This is what this bill is about and what we’re up to—come back to me in seven years and I’ll have more details.”
And as for the next decade, other officials said the U.S. can expect to witness a whirlwind of progress and quantum advancements.
“We are in the late 1940s and early 1950s of information technology, when machines were big and ugly and had lots of wires hanging off of them,” said Bob Wisnieff, chief technology officer of quantum computing at IBM Research. “At this point, we should expect rapid generations of machines, growing capabilities, and machines that, like the IBM System/360 in the 1960s, are better suited to machine rooms than to your wrist.”
But Wisnieff said as America moves toward the quantum advantage, experts will need to establish new fields of thought and put the right people in place to explore how the machines will evolve and influence society over time.
“We are now getting into the regime where we need to deal with a lot of the engineering challenges, because as you scale up not only will we need to continue to make scientific advances here, but we will need to develop an entire new field of engineering—how one builds these systems and constructs them and tests them and operates them on a daily basis to provide value,” he said.
While there’s a long way to go, insiders said the potential impacts of the technology will be monumental.
For example, Celia Merzbacher, associate director of the Quantum Economic Development Consortium, said she sees exciting possibilities at the convergence of quantum computing, artificial intelligence, 5G and other technologies connected to the internet of things.
“Some of the most exciting things that come to my mind are in the ability to optimize certain problems that will allow for driverless cars and management of these big systems of things that need to be interacting with each other,” she said. “So whether it’s on the energy grid space or transportation or the financial sector, there are applications I think that are going to blow all our minds.”
Look across the Potomac River toward Rosslyn, where the corporate logos of government contractors crown a parade of office towers that follows the river past the Pentagon. The skyline, like America’s defense industrial landscape, is changing. Soon, 25,000 Amazon employees will be climbing the Metro escalators to work in Crystal City each morning along with tens of thousands of workers from military, intelligence, and defense industry organizations.
The arrival of Amazon’s HQ2 in the cradle of U.S. government contracting comes at a portentous time for the Defense Department. Technology is altering what makes us strong, prosperous, and secure. The defense industrial base is becoming the strategic innovation base. Today’s leading digital companies have disrupted every industry they have touched, from publishing to automotive. Could Amazon and the rest of the “FAANG companies”—Facebook, Apple, Netflix, and Google—or one of a handful of pure-play artificial-intelligence companies, such as the authors’ SparkCognition, become fixtures of this new industrial base?
While that remains to be seen, the Pentagon supplier that can master robotics and AI will become the most essential of the firms that build America’s arsenal. Moreover, the Defense Department’s practices will increasingly resemble those of this new wave of strategically important companies because that is what the current revolution in warfare requires.
The world is on the doorstep of an artificial intelligence- and robotics-driven revolution in conflict that, after decades of looming just over the horizon, now is a near-term certainty. Just as industrial-age tanks and machine guns devastated World War I battlefields and the U.S. Air Force’s precision-guided weapons headlined the 1991 Gulf War, social media algorithms and AI-equipped robotic swarms will decide conflicts. Data is not just the new oil, as the saying goes. Data is also the new ammunition.
The Pentagon is preparing accordingly, doing everything from standing up an Army Futures Command to engaging technology luminaries with the Defense Innovation Board to establishing a Joint Artificial Intelligence Center to reforming mid-tier acquisitions policy. But it needs to do more — and do it faster — if the U.S. military is to prevail in future machine-speed conflicts. Fortunately, the Pentagon and its suppliers can learn from the digital disruptors in areas such as robotics, acquiring groundbreaking capabilities, software ecosystems, data management, and symbiotic innovation strategies.
Taken together, today’s leading digital companies have many of the traits for a reimagined, expanded defense industrial base, one that reflects the social, political, and strategic power of companies such as Amazon, Google, and Facebook. Moreover, the most strategically important machine learning and robotics technologies will likely originate in non-defense firms based on their overall investment, market-driven innovation cycles, and talent acquisition. U.S. defense policy is shifting but the speed of technological advancement remains far faster. In recent House testimony, DOD Chief Information Officer Dana Deasy said acquisition changes will come from asking “how do we move to a more startup mentality when moving to technologies like AI?”
Well, here is how.
Be aggressively robotized. While autonomous robotic swarms will become a staple of future battlefields, the nations that can harness automation for logistics, supply, and maintenance will have a decisive operational – and economic – edge.
As of 2017, Amazon reportedly had more than 100,000 robots on the job. This is especially relevant to the Defense Department and the future strategic innovation base because the shift to process automation is driven by the speed-and-cost expectations of “divinely discontent” customers, as CEO Jeff Bezos called them in the company’s 2017 annual report. Such automation — though unlikely to go as viral as a bounding robot biped — is of extreme consequence to the business world, to politics, and to American society.
Be highly acquisitive. In 2012, Facebook purchased Instagram for $1 billion; two years later it bought WhatsApp for $19 billion and virtual-reality company Oculus for $2 billion. With deep corporate coffers, Facebook could have built and marketed its own competing platforms. But this approach allowed Facebook to establish itself immediately with a suite of technologies it could integrate into its central products, while still allowing these separate entities to develop in parallel into innovative platforms in their own right.
Be software-driven. As machines learn to code faster and more accurately than humans, smaller and smaller organizations will be able to develop their own software applications to best suit their mission requirements. For all the beauty of an iPhone or iMac, Apple’s products are only as good as the software that runs on them —which gets better every day as developers bring new ideas to market through the App Store.
Think data, data, data. The importance of being able to effectively wield data at every level of Defense Department operations – from recruiting at home to finding targets abroad – cannot be overstated. Precision is expected in American military operations, which generally reduces reliance on indiscriminate, destructive kinetic force. Groups that glean insight from what will be an essentially limitless pool of data will tip the balance even further toward precision and decreased destruction; to be sure, getting the right data at the right moment will remain an enduring challenge during conflicts, but the ability to develop such insight from alternative data sets is an altogether novel paradigm. Whether it is creating the world’s most popular search engine or developing open-source machine learning algorithms, data is at the center of everything Google pursues. Currently, Google’s search engine processes tens of thousands of searches each second. Meanwhile, a broader and more decentralized array of devices, from smart TVs to mobile phones to fitness trackers, gathers more and more data each day.
Pivot purposefully. A successful organization must be able to hold the virtual and physical in tension, while finding ways to develop both in a symbiotic manner underpinned by data analytics. In yet another sign of the sea change in studio-developed entertainment, Netflix won 23 Emmy Awards last fall — as many as HBO. While Netflix will still send you DVDs by mail, the company moved beyond that revenue stream years ago as advances in servers and telecom bandwidth allowed it to supplant the physical with the virtual. Its entry into the capital-intensive and hands-on business of film and show production is backed by years of data about customer tastes and preferences.
These are just a few of the lessons from leading technology companies in what it will take for America to have a globally dominant strategic industrial base. Fortunately, with the varied examples of innovation U.S. digital disruptors have already pioneered and demonstrated, the Pentagon need not look much farther for inspiration than its own backyard.
June 17, 2019 05:23 PM ET
Shannon Sartin and her team are working to cut through convoluted tech and give patients easier access to their data.
For decades, healthcare providers have doubled down on technology to improve the way they treat patients, but Shannon Sartin worries the industry’s expansive IT ecosystem is getting in the way of better care.
As executive director of the Digital Service at the Health and Human Services Department, Sartin is on the front lines of the government’s efforts to revamp healthcare for the 21st century. In recent years, much of the battle has played out at the Centers for Medicare and Medicaid Services, which oversees the care of more than 100 million people across the country.
Like their counterparts at other U.S. Digital Service outposts, she and her team work to bring tech industry best practices—like agile development, cloud services and user testing—to federal IT shops that are sometimes far behind the times. But while many digital service branches focus on modernizing processes unique to their respective agencies, Sartin sees the potential for her work to reach far beyond the borders of HHS, and even the government itself.
“We want to be in a place where our technology and the choices that we’re making are actually the ones that are impacting the way you and I receive healthcare,” Sartin said in a conversation with Nextgov. “That’s how much economic force CMS has. It could slow down or speed up the entire healthcare industry.”
HHS Digital Service’s efforts have been so successful that the Partnership for Public Service nominated Sartin and her team for one of its annual Service to America awards.
In Sartin’s eyes, technology can work both for and against patient care. If used correctly, IT can give people more control over their own health and allow doctors to focus more energy on treating patients. But incorporating too many different systems can overburden providers with administrative tasks, leading to impersonal, inefficient and expensive care.
Unfortunately, Sartin said, the modern healthcare industry seems to more closely resemble the second model than the first. But by overhauling its outdated infrastructure and making healthcare data more readily accessible, she said, CMS could free up more resources for treating patients and ultimately drive change across the broader industry.
Last year, Sartin and her team launched an application programming interface called Blue Button 2.0, which contained detailed health information on some 53 million Medicare recipients. The API, which originated from an earlier CMS program, allows patients to connect their medical records to the numerous health-centric applications and services that have popped up in recent years. They can also opt to donate their data to inform medical research efforts around the globe.
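Blue Button 2.0 exposes Medicare claims as FHIR resources behind an OAuth 2.0 consent flow. As a rough illustration of what a developer integration looks like, the sketch below pulls a beneficiary’s ExplanationOfBenefit records from the public sandbox; the base URL follows the sandbox documentation, and the access token is a placeholder that would come from the API’s authorization flow after a beneficiary consents.

```python
# Minimal sketch of reading Medicare claims through Blue Button 2.0's
# FHIR interface. ACCESS_TOKEN is a placeholder obtained through the
# API's OAuth 2.0 flow; endpoint details follow the public sandbox docs.
import requests

BASE = "https://sandbox.bluebutton.cms.gov/v1/fhir"
ACCESS_TOKEN = "<token-from-oauth-flow>"  # hypothetical placeholder

resp = requests.get(
    f"{BASE}/ExplanationOfBenefit",       # beneficiary's claims as FHIR EOBs
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()                      # a FHIR Bundle of claim resources
for entry in bundle.get("entry", []):
    eob = entry["resource"]
    print(eob.get("id"), eob.get("type", {}).get("coding", [{}])[0].get("display"))
```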
So far, more than 1,800 developers have signed up for the platform and dozens of apps are already using the API to give patients a better handle on their healthcare.
Before Blue Button, Medicare claims data was only available to recipients in a PDF format, something “none of us really want” in the age of smartphones and wearable tech, Sartin said. Future, tech-savvier generations of beneficiaries will want to pay bills, access data and manage healthcare through their devices, and APIs like Blue Button are laying the groundwork for that reality, she said.
“I think it’s really about … empowering innovators and entrepreneurs with the data,” she said. “How do we make sure it’s available in a way that not only aligns with what industry is already doing but actually sets the standard” for where it goes in the future.
On the provider side of the equation, Sartin and her team also launched a tool that speeds up payments to doctors and offers rapid feedback on their treatment of patients. The platform came as part of a governmentwide effort to promote “value-based” care—the concept of paying doctors based on performance instead of the specific services they provide—and incentivizes doctors to take “a broader, holistic” approach to treating patients, according to Sartin.
When developing the tool, she and her team conducted extensive testing with doctors who would ultimately be using the system, a practice common in industry but not at CMS, Sartin said. Their feedback played a large role in shaping the platform, she said, and since its launch, the Quality Payment Program has been embraced throughout the medical community.
The system “ended up being one that doctors not only didn’t hate but were giving praise to, which is a rarity at CMS,” she said. “Our team really believes that partnership is how we achieve success.”
Looking ahead, Sartin said she expects her team to continue improving the platform and modernizing the backend payment system, which is still running on outdated COBOL software. While there won’t be “crazy, flashy launches” every few months, she said, incremental improvements to CMS’ payment processes will have significant impacts on both providers and patients.
Sartin said her team also plans to explore ways to make it easier to share patients’ data among HIPAA-covered entities like doctors and insurers.
In a recent report, GAO named the health IT system in use at HHS among the 10 federal legacy systems most in need of modernization to prevent security problems.
By Kate Monica
June 12, 2019 – The Government Accountability Office (GAO) urged HHS to modernize its 50-year-old health IT system in a report naming the ten federal legacy systems most in need of significant upgrades.
The health IT system, implemented in 1969, was ranked third on the list of ten outdated IT systems in need of revamping. The EHR system supports healthcare delivery for the Indian Health Service (IHS) and is considered highly susceptible to security threats.
IHS uses the clinical and patient administrative information system to gather, store, and display clinical, administrative, and financial information about patients visiting its clinics and hospitals. Due to the sensitive nature of this information, HHS officials stated that a modernization effort is “imperative.”
Specifically, HHS officials interviewed by GAO said the system’s technical architecture and infrastructure are outdated, obstructing the federal agency’s efforts to fulfill regulatory requirements.
The EHR system software includes 50 additional modules implemented over time to keep pace with the changing demands of the evolving healthcare system.
“The software is installed on hundreds of separate computers, which has led to variations in the configurations at each site,” wrote GAO in the report. “According to IHS, this type of add-on development becomes detrimental over time and eventually requires a complete redesign to improve database design efficiency, process efficiency, workflow integration, and graphical user interfaces.”
System variations negatively affect interoperability and health data exchange between IHS employees. Replacing or modernizing the EHR system would help to boost interoperability with other healthcare organizations and facilities and improve care coordination for patients within the IHS network.
IHS officials expressed an interest in modernizing the EHR system earlier this year, but officials have not yet developed concrete modernization plans.
In September 2018, HHS awarded a contract to conduct research related to modernizing its health IT infrastructure, applications, and capabilities.
“According to the department, the research will be conducted in several stages over the next year, and a substantial part of the research will be an evaluation of the current state of health IT across IHS’s health facilities,” wrote GAO.
“Once the research is conducted, in consultation with IHS and its stakeholders, the contractor will use the findings and recommendations to propose a prioritized roadmap for modernization.”
HHS officials stated the agency expects to complete its modernization initiative in the next five years, and anticipates launching an EHR implementation plan as early as 2020.
While the federal agency has taken some steps to initiate a modernization effort, HHS has not calculated a potential cost for the project.
“With regards to potential cost savings, HHS noted that the modernization will take significant capital investment to complete and it is unknown whether the modernization will lead to cost savings,” noted GAO.
Launching an EHR modernization project at IHS would help to promote care coordination and data sharing between IHS, VA, and the Department of Defense (DoD).
VA and DoD are currently in the process of replacing their legacy health IT systems with a Cerner EHR platform.
“HHS officials stated that this modernization could improve interoperability with its health care partners, the Department of Veterans Affairs and the Department of Defense, and significantly enhance direct patient care,” GAO officials wrote.
In 2017, IHS issued a request for information (RFI) to inform an EHR modernization effort. The RFI was part of the federal agency’s exploratory market research effort to assess health IT industry innovations and capabilities.
IHS provides healthcare services to American Indians and Alaska Natives under the relationship between the federal government and Indian tribes established in 1787.
By Nathan Eddy June 13, 2019 11:50 AM
The Federal Electronic Health Record Modernization office will replace the current Interagency Program Office.
The Department of Defense and the Veterans Affairs department announced the creation of a special office to help centralize decision-making as the VA makes a multi-billion dollar electronic health records upgrade.
The Federal Electronic Health Record Modernization (FEHRM) office will replace the current Interagency Program Office, headed by Lauren Thompson.
WHY IT MATTERS
“This management model creates a centralized structure for interagency decisions related to EHR modernization, accountable to both the VA and the DoD Deputy Secretaries,” FCW reported Thompson as saying during her testimony at a June 12 hearing of the Subcommittee on Technology Modernization of the House Veterans Affairs Committee.
Subcommittee ranking member Rep. Jim Banks (R-Ind.) noted during the hearing that lack of interoperability and clear management had been a continuing struggle, and it remains unclear whether the new office will help keep the transition on schedule.
Plans were also revealed to implement a pilot program that will see all veterans receive a unique ID number to help track individuals’ medical records after they leave service.
THE LARGER TREND
The announcement comes as the VA department wrangles with a $16 billion implementation of the Cerner electronic health records system, which is slated to go live across care sites by 2028, but the lack of interoperability between DoD and VA remains a major stumbling block.
While progress is being made, the sheer vastness of the task – 1,700 sites, training for 300,000 VA employees and aggregation of decades of clinical data – has been compounded by delays in decision-making and workflow difficulties.
The VA signed a contract with Cerner in 2018 to replace the department’s 40-year-old legacy Veterans Health Information Systems and Technology Architecture (VistA) healthcare records technology over the next 10 years with the Cerner system, which is currently in the pilot phase at DoD.
However, an array of major challenges with the DoD’s EHR modernization came to light last spring, with many IT, training and workflow hiccups reportedly causing inaccurate prescriptions, misdirected patient referrals and other complaints that clinicians said could put patient safety at risk.
Further complicating matters was the first House VA Subcommittee on Technology Modernization hearing, held in September 2018, which revealed that officials and congressional members are not on the same page when it comes to governance and EHR interoperability.
In January, the Senate confirmed James Gfrerer, a former Marine and cybersecurity executive at Ernst & Young, to head the VA’s IT department — a position that had been without a permanent leader for the past two years.
The VA has also partnered with Microsoft with the aim of improving how veterans living in rural areas can access the VA’s online services and benefits.
Nathan Eddy is a healthcare and technology freelancer based in Berlin.
A company’s strategy is defined by its key performance indicators. Artificial intelligence can help determine which outcomes to measure, how to measure them, and how to prioritize them.
Many executives, intent on understanding and exploiting AI for their companies, travel to Silicon Valley to acquaint themselves with the technology and its many promises. These pilgrimages have grown so common that tours now exist to facilitate inside peeks at innovative startups. Buoyed by hype and smatterings of algorithmic knowledge, returning executives share a common goal: determining what products, services, and processes AI can enhance or inspire to sharpen competitive edges. They believe a comprehensive strategy for AI is essential for success.
That well-intentioned belief is off the mark. A strategy for AI is not enough. Creating strategy with AI matters as much — or even more — in terms of exploring and exploiting strategic opportunity. This distinction is not semantic gamesmanship; it’s at the core of how algorithmic innovation truly works in organizations. Real-world success requires making these strategies both complementary and interdependent. Strategies for novel capabilities demand different managerial skills and emphases than strategies with them.
Machine learning pioneers — Amazon, Google, Alibaba, and Netflix come to mind — have learned that separating strategies for developing disruptive capabilities from strategies deployed with those capabilities invariably leads to diminished returns and misalignments. Not incidentally, these organizations are intensely data- and analytics-driven. Their leaders rely heavily on metrics to define, communicate, and drive strategy. This reliance on quantitative measures has increased right along with their growing investment in AI capabilities.
Our research strongly suggests that in a machine learning era, enterprise strategy is defined by the key performance indicators (KPIs) leaders choose to optimize. (See “About the Analysis.”) These KPIs can be customer-centric or cost-driven, process-specific or investor-oriented. These are the measures organizations use to create value, accountability, and competitive advantage. Bluntly: Leadership teams that can’t clearly identify and justify their strategic KPI portfolios have no strategy.
In data-rich, digitally instrumented, and algorithmically informed markets, AI plays a critical role in determining what KPIs are measured, how they are measured, and how best to optimize them. Optimizing carefully selected KPIs becomes AI’s strategic purpose. Understanding the business value of optimization is key to aligning and integrating strategies for and with AI and machine learning. KPIs create accountability for optimizing strategic aspirations. Strategic KPIs are what smart machines learn to optimize. We see this with Amazon, Alibaba, Facebook, Uber, and assorted legacy enterprises seeking to transform themselves.
These principles have sweeping and disruptive implications. As “accountable optimization” becomes an AI-enabled business norm, there is no escaping analytically enhanced oversight. Boards of directors and members of the C-suite will have a greater fiduciary responsibility to articulate which KPIs matter most — and why — to shareholders and stakeholders alike. Transformative capabilities transform responsibilities. You are what your KPIs say you are.
About the Analysis
This article draws on results from a 2018 survey of 3,225 business executives, managers, and analysts from companies based in 107 countries and 20 industries. To complement our survey analysis, we conducted 30- to 60-minute interviews with 17 executives and academics about the role of KPIs as a leadership tool. Some related findings were published in the 2018 MIT SMR report “Leading With Next-Generation Key Performance Indicators.” This article extends that discussion by drawing out the implications of machine learning and AI for both identifying and optimizing strategic metrics.
Historical context and precedent are important: Blending strategy for and strategy with is hardly unique to AI and machine learning. John D. Rockefeller’s Standard Oil, for example, dominated the petroleum market not just because the company had an effective strategy for capitalizing on the nascent railroad industry’s emerging capabilities but also because it allowed those capabilities — logistical powers of transport and delivery — to shape its broader strategy. By ruthlessly exploiting scale and acquiring and designing fuel tank cars, Standard Oil consistently reaped disproportionate returns from a rapidly expanding physical network.1
More recently, incumbents grasped that they urgently needed a strategy for the internet to compete with disruptive born-digital startups. But those organizations discovered — sooner or later — that their strategies for the internet were contingent upon the success of their strategies with the internet. Retailers, for example, commonly use internet-based omnichannel strategies to compete on customer experience. They might start by building strong relationships with shoppers online, but when those same customers go to physical store locations, geofencing apps alert the company to their imminent arrival. Staff is then primed to help facilitate customer pickups. These seamless experiences blend strategy with and for the internet.
Creating an enterprise strategy for developing or applying a capability is not organizationally, culturally, or operationally the same as cultivating a strategy with that capability. These activities are complements. A strategy for sustainability (such as lowering one’s carbon footprint or reducing waste) should not be divorced from having a sustainable overall strategy enabling the business to operate in thriving communities. Similarly, a strategy for AI shouldn’t be viewed as a substitute for creating a strategy with AI.
Where Opportunity Lies
What, then, does strategy with AI pragmatically mean? Like any corporate strategy, it expresses what enterprise leaders deliberately seek to emphasize and prioritize over a given time frame. Strategies articulate how and why an organization expects to succeed in its chosen market. These aspirations might involve, for example, superior customer experience and satisfaction, increased growth or profitability, greater market share, or agile fast-followership when rivals out-innovate the company.
Whatever the specific strategy, virtually all organizations create corresponding measures to characterize and communicate desirable strategic outcomes. Those metrics — be they KPIs, objectives and key results (OKRs), or a Balanced Scorecard — are how organizations hold humans and algorithms accountable. For public companies, strategic KPIs typically respect and reflect investor concerns; for private equity, strategic KPIs might be calibrated to maximize a sale price or facilitate an IPO. Data-driven systems, enhanced by machine learning, convert these aspirations into computation. World-class organizations can no longer meaningfully discuss optimizing strategic KPIs without embracing machine learning (ML) capabilities.
Uber, for example, runs hundreds of ML models to optimize its ride-sharing platform and food-delivery business. Uber has made enormous investments in its machine learning capabilities and implementations. Whether it enjoys an abundance of available cars on call or relies on relatively few, its ability to estimate accurate arrival times for customer and driver alike is essential to how it competes in the marketplace.
“Accurate ETAs are critical to a positive user experience,” observes Jeremy Hermann, who heads Uber’s machine learning platform, “and these metrics are fed into myriad other internal systems to help determine pricing and routing. However, ETAs are notoriously difficult to get right.”2
Yet, so many critical outcomes are dependent on robust ETA analytics — rider and driver expectations, fares, food pickup and delivery — that ETA is a core Uber metric. Hermann notes, “Uber’s Map Services team developed a sophisticated segment-by-segment routing system that is used to calculate base ETA values. These base ETAs have consistent patterns of errors. The Map Services team discovered they could use a machine learning model to predict these errors and then use the predicted error to make a correction. As this model was rolled out city by city (and then globally … ), we have seen a dramatic increase in the accuracy of the ETAs, in some cases reducing average ETA error by more than 50%.”3 [emphasis added]
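What Hermann describes is a classic residual-modeling pattern: keep the deterministic base estimator and train a second model on its historical error. The sketch below, built on synthetic data with invented feature names, shows the shape of that correction; it is not Uber’s code.

```python
# Sketch of the error-correction pattern Hermann describes: keep the
# base routing estimate, learn its historical error, and add the
# predicted error back as a correction. Synthetic, illustrative data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = rng.uniform(size=(n, 3))          # e.g. hour of day, region, trip length
base_eta = 10 + 20 * X[:, 2]          # segment-by-segment routing estimate
true_eta = base_eta + 5 * np.sin(6 * X[:, 0]) + rng.normal(0, 1, n)

X_tr, X_te, b_tr, b_te, y_tr, y_te = train_test_split(
    X, base_eta, true_eta, random_state=0)

# Learn the base system's error, then correct it.
error_model = GradientBoostingRegressor().fit(X_tr, y_tr - b_tr)
corrected = b_te + error_model.predict(X_te)

print("base MAE:     ", np.mean(np.abs(y_te - b_te)))
print("corrected MAE:", np.mean(np.abs(y_te - corrected)))
```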
Simply celebrating effective and globally scalable machine learning models misses the larger point. Uber cannot deliver on operational or strategic aspirations without reliably delivering on its ETA KPI. Chaotic ETA outcomes would prevent Uber from being a “low cost” or “best value” provider of mobility/delivery services. Technical, organizational, or operational changes that might threaten ETA outcomes are counterproductive. Uber must marginalize or minimize KPIs that might conflict or compete with effective ETA prediction.
Clarifying those constraints is crucial. In the words of Harvard Business School’s Michael Porter, “The essence of strategy is choosing what not to do.”4 Once those guardrails are established, identifying and minimizing unwelcome consequences becomes as important as promoting the outcomes you want. The essential takeaway here is that prioritizing KPIs — ranking them according to what matters most and what the organization most needs to learn — is essential to enterprise strategy. In an always-on big data world, your system of measurement is your strategy.
Determining the optimal “metrics mix” for key enterprise stakeholders becomes an executive imperative. Are customer-centric strategies, for example, better optimized via customer lifetime value (CLV) or balanced blends of earnings before interest, taxes, depreciation, and amortization (EBITDA) and net promoter score? For what customer segments should profitability be privileged over satisfaction or loyalty? As algorithms get smarter, leaders must have the courage to explore how best to answer these questions. AI makes that feasible, affordable, and desirable.5
This optimization imperative, our research suggests, demands a rigorous rethinking of the metrics chosen to define desirable (and undesirable) strategic outcomes. When machine learning measures management and manages measurement, metrics don’t just reflect strategy but drive it. Achieving KPI outcomes (and suggesting new KPIs) is what smart machines need to do — and need to learn to do.
AI is not just about building products, services, or processes. Leaders need to recognize that AI must be primarily about enhancing the formulation and execution of strategy. To the extent that KPIs are essential to formulating and communicating strategy, strategy is quintessentially a system of measurement. Our research shows that AI transforms the strategist’s choices about which KPIs to optimize and how to optimize them. Strategy is about optimizing KPIs with AI/ML.
Looking Forward and Backward
Machine learning profoundly changes how to approach optimizing leading and lagging KPIs. McDonald’s has a multipart growth plan explicitly combining the two types of indicators. A key strategic aspiration is to once again be a family destination that appeals to parents. A lagging indicator is more visits by families with kids under the age of 13. A leading indicator is any evidence of becoming “a place I’m happy to bring my children,” says McDonald’s global chief marketing officer Silvia Lagnado.
Reliably measuring “happy place to bring my children” is methodologically challenging. Customer surveys are limited to those who fill them out, a source of selection bias. Machine learning-based sentiment analysis improves on this approach: It can classify large volumes of geotagged Twitter data and other data sets to correlate neighborhood-level well-being with comments about fast-food locations. A group of University of Utah academics developed a blueprint for this type of ML application.6 Such machine learning mashups are becoming standard practice in academic and business research.
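A toy version of that mashup is easy to sketch: train a text classifier on labeled comments, score incoming geotagged posts, and average the predicted sentiment per location as a leading-indicator proxy. Everything below (the comments, labels, and store tags) is invented for illustration.

```python
# Toy sentiment "mashup": classify comment text, then average the
# predicted sentiment per location as a leading-indicator proxy.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great with the kids", "slow and dirty",
               "my children loved it", "rude staff, never again"]
train_labels = [1, 0, 1, 0]           # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

geotagged = [("store_42", "happy place to bring my children"),
             ("store_42", "long wait, kids got restless"),
             ("store_7",  "clean and friendly")]

scores = defaultdict(list)
for store, text in geotagged:
    scores[store].append(clf.predict_proba([text])[0][1])  # P(positive)

for store, vals in scores.items():
    print(store, round(sum(vals) / len(vals), 2))
```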
With machine learning, McDonald’s can more effectively pursue high-priority KPIs. Marketers exploring in-store promotions with family-oriented advertising and menu options might improve family traffic but will fail if those promotions produce store conditions that annoy parents. Maximizing sales or revenues cannot come at that cost. Striking a productive balance between those measures is what optimization means. That’s what McDonald’s machines need to learn to serve up.
Not coincidentally, in March 2019, McDonald’s announced its $300 million acquisition of Israel-based Dynamic Yield, which uses machine learning and big data to make personalized recommendations. McDonald’s says it intends to use the company’s tools to customize the drive-thru experience by creating dynamic digital menu boards that recommend menu items based on local demographics, previous orders, weather, and time of day, among other factors.
GoDaddy, the multibillion-dollar web-hosting and internet registry innovator, is also embracing leading as well as lagging data-driven KPIs. Since 2016, the Scottsdale, Arizona-based company’s market value has grown more than 2.5X in no small part due to its dual commitment to strategic KPIs and machine learning. “We’re very excited about the prospect of using the large data sets that we have,” observes GoDaddy COO Andrew Low Ah Kee, “[to] train a model to solve and optimize against [customer] lifetime value as opposed to solving for transactional period revenue.”7
Low Ah Kee’s essential insight is that leaders have the duty and responsibility to pick which time horizons and “objective functions” to optimize. GoDaddy’s emphasis on customer lifetime value (which anticipates future revenues, costs, and loyalty in addition to capturing past purchase behavior) reduces short-termism and threats to customer experience quality, he asserts. “We see in our customer base, when we help our customers succeed, the lifetime value it brings to us is significantly higher than for people whom we approach with just a transactional view,” he notes. “As you start to extend the time horizon, I think the degree of [organizational] misalignment tends to go down.” It’s easier to miss long-term goals if the focus is on short-term tactics.
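The choice Low Ah Kee describes is ultimately a choice of objective function. A minimal sketch, with invented retention and margin figures, shows why the time horizon matters: a simple discounted CLV can rank a lower-margin but well-retained customer above one who looks better on this period’s revenue.

```python
# Simple customer-lifetime-value (CLV) calculation: sum of expected
# discounted margins over a horizon. All figures are invented.
def lifetime_value(margin_per_period: float,
                   retention_rate: float,
                   discount_rate: float,
                   horizon: int) -> float:
    """Expected discounted margin over `horizon` periods."""
    return sum(
        margin_per_period * (retention_rate ** t) / ((1 + discount_rate) ** t)
        for t in range(1, horizon + 1)
    )

# A customer kept happy (higher retention, lower margin today) can be
# worth more than one squeezed for short-term revenue.
print(lifetime_value(margin_per_period=80, retention_rate=0.9,
                     discount_rate=0.08, horizon=10))   # ~335
print(lifetime_value(margin_per_period=100, retention_rate=0.6,
                     discount_rate=0.08, horizon=10))   # ~125
```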
Learning What to Optimize
Optimizing known KPIs is important but not strategically sufficient. When appropriately trained, machine learning models can learn to identify and recommend novel or emergent KPIs. That is, machines can “learn to discover” enterprise KPIs on their own, without expert guidance. This is the difference between supervised and unsupervised learning. GE Healthcare CMO Glenn Thomas explains that his data science teams are “actually boiling out the KPIs from the data rather than setting the KPIs to be measured.”
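One unsupervised tactic consistent with that description is to decompose a matrix of routine operational measures and inspect which measures load together on the dominant components, treating those composites as candidate KPIs. The sketch below uses synthetic data and is only a toy illustration of the idea, not GE Healthcare’s method.

```python
# Toy sketch of "boiling out" candidate KPIs: run PCA over routine
# operational measures and report the heaviest loadings per component.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
measures = ["wait_time", "readmits", "satisfaction", "throughput", "cost"]
X = rng.normal(size=(500, len(measures)))
X[:, 2] = -0.8 * X[:, 0] + rng.normal(0, 0.3, 500)  # hidden link: waits drive satisfaction

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
for i, comp in enumerate(pca.components_):
    top = sorted(zip(measures, comp), key=lambda p: -abs(p[1]))[:2]
    print(f"candidate composite KPI {i + 1}:", top)
```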
Making Smarter Trade-Offs
We argue that strategy is best understood and experienced as how the business invests in, manages, and prioritizes its KPI portfolio. KPIs and the relationships between them are the critical strategic units of analysis. Strategic success means the company’s machines learn to optimize KPI portfolio returns.
To be clear, optimization in this context does not mean maximization. On the contrary, it means computationally learning to advance toward desired strategic outcomes through carefully calculated and calibrated KPI trade-offs. Understanding trade-offs among and between competing — and complementary — KPIs is essential. Simply optimizing individual KPIs by priority or rank ignores their inherent interdependence. For any KPI portfolio, identifying and calculating how best to weight and balance individual KPIs becomes the strategic optimization challenge. (See “Key Performance Indicators and Ethical Strategy.”)
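In the simplest computational terms, that means scalarizing the portfolio: choosing weights over competing KPIs and optimizing the blend rather than any single measure. The toy sketch below invents two response curves, revenue and satisfaction as functions of one pricing lever, to show how the chosen weights, not any individual KPI, determine the “optimal” decision.

```python
# Toy KPI-portfolio optimization: pick one lever (a discount level) to
# maximize a weighted blend of two competing KPIs. Curves and weights
# are invented for illustration.
from scipy.optimize import minimize_scalar

def revenue(discount):        # deeper discounts erode margin
    return 100 * (1 + discount) * (1 - 1.8 * discount)

def satisfaction(discount):   # customers like discounts, diminishing returns
    return 100 * (discount ** 0.5)

def portfolio_score(discount, w_rev=0.7, w_sat=0.3):
    return w_rev * revenue(discount) + w_sat * satisfaction(discount)

res = minimize_scalar(lambda d: -portfolio_score(d),
                      bounds=(0.0, 0.5), method="bounded")
print(f"optimal discount: {res.x:.2%}")
# Re-weighting the portfolio (say w_sat=0.6) shifts the optimum:
# the weights are the strategy.
```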
Key Performance Indicators and Ethical Strategy
Google’s YouTube division introduced two new internal metrics in the past two years for gauging how well videos are performing, according to people familiar with the company’s plans. One tracks the total time people spend on YouTube, including comments they post and read (not just the clips they watch). The other is a measurement called “quality watch time,” a squishier statistic with a noble goal: to spot content that achieves something more constructive than just keeping users glued to their phones.
Even as “yield management” machine learning models for airlines, hotels, and other travel-related businesses algorithmically improve, strategic challenges sharpen: How can revenue-enhancement KPIs be optimized in the context of customer satisfaction and net promoter score KPIs? Do loyal customers deserve preferential rates or service bundles relative to typical customers? Learning to optimize for “best customers” draws on different data sets and expectations than learning to optimize for typical or average customers. What does an optimal balance between loyal customers and asset monetization margins look like? Smart machines can learn to strike that balance, but preemptively minimizing human insight and oversight seems foolish.
Similarly, high-frequency algorithmic traders may seek to maximize the frequency of profitable trades and/or maximize hourly, daily, or weekly profits. Yet, at the same time, they may wish to avoid or minimize the risk of regulatory intervention. One KPI maximizes profit (or “profits per trade” or “profits per trading strategy”) while another signals that the company’s trading patterns are unlikely to trigger an external review. Again, smart machines can learn to strike that balance. What is the risk appetite, not for particular trades but for particular regulators?
Every organization confronts this clash and conflict of strategic prioritization. No right answer exists. That said, some KPIs deliver disproportionate value and insight into helping company leaders better — or more optimally — achieve their strategic aspirations. Weighting these measures and metrics lends itself to machine learning applications. They facilitate alignment between local optima and the desired global optimum. Consequently, there can be no meaningful discussion about “optimal” strategic trade-offs in a KPI portfolio without a machine learning/AI capability.
The Essential Role of Data
There is no enterprise strategy for or with AI without an enterprise strategy for — and with — data. It is the essential ingredient for machine learning and dynamic optimization. As the Uber, McDonald’s, and GoDaddy examples affirm, optimizing strategic KPIs — ETAs, happy families, CLV — is contingent upon data volume, velocity, variety, and quality.
That makes data governance key. Organizations must invest in recognizing which data might enhance or elevate their KPIs — and which data will help their machines learn. Digital processes and platforms that combine and analyze siloed and scattered data empower the company’s artificial intelligentsia.
Technology titans and a growing number of legacy companies embrace comprehensive data strategies and practices. They explicitly, ruthlessly, and relentlessly manage data as an asset. This, as much as their technical prowess, sets them apart operationally and culturally. They employ chief data officers, data scientists, and data wranglers, holding people and processes accountable for getting value from data. Increasingly, much of that value comes from how quickly, accurately, and reliably that data trains machines.
Unfortunately, crisp and clear alignment between enterprise data governance and strategic AI initiatives remains elusive. A recent Forbes Insights CXO survey on AI and machine learning revealed that three out of four top executives declared AI a core component of their digital transformation plans. However, only 11% of the surveyed executives said their companies have begun implementing an enterprisewide data strategy, and only 2% said they have a serious “data governance” process in place.8
These findings, unhappily consistent with our own, suggest that successful and sustainable implementations of AI/ML-enabled optimization strategies are unlikely until data is explicitly treated as an asset. Organizations need effective data platforms and processes to enable effective machine learning platforms and processes. Ironically (even perversely), many companies have enormous amounts of timely, relevant, and valuable data for strategic AI efforts but lack the commitment and competence to harness it. Their data doesn’t inform their KPIs or their strategy. An unwillingness or inability to use strategic KPIs to prioritize or align data assets with strategic outcomes further undermines their AI aspirations. These gaps render strategies for/with AI impotent.
Like Rockefeller’s railroads and the internet, artificial intelligence and machine learning represent enormously powerful strategic capabilities. They computationally transform the economics of optimization for business. Appropriately developed and deployed, they can literally learn how to create more value for more customers at lower cost and with greater speed. A strategy for AI matters less than clearly articulating the strategic aspirations, goals, and outcomes that leaders wish to optimize. Machine learning, like transportation and communication, is a means to an end. What needs to be transported? What needs to be communicated? What needs to be optimized? Artificial intelligence and machine learning can, in principle and practice, offer actionable answers to these questions. The true strategic opportunity and impact of these technologies is the chance to rethink and redefine how the enterprise optimizes value for itself and its customers.
About the Authors
David Kiron (@davidkiron1) is the executive editor of MIT Sloan Management Review. Michael Schrage is a research fellow at the MIT Sloan School of Management’s Initiative on the Digital Economy.
By Benjamin Harris June 07, 2019 11:28 AM
A new assessment of cybersecurity threats highlights consumers’ growing role and predicts things will get worse before they get better.
Never before has the healthcare industry been so vulnerable on so many fronts. As consumers interact with online health data and even create their own, hospitals must cater to a larger connected presence even as that presence expands their attack surface on the internet.
Many machines run legacy software, including operating systems that are at or near the end of their lives. And even as consumers directly acknowledge these fears and claim some responsibility in protecting their data, they admit to knowing little about the state of their health data or their rights to it.
All of these scenarios and more are outlined in a new threat assessment from Morphisec, a cybersecurity firm.
Why it matters
The report finds that the number of consumers accessing their health data online has grown to 42 percent, a significant jump from the previous year. Additionally, patients are using their smartphones and other devices to generate their own health data, which can be shared with a practitioner.
Internet of Things connected devices often can’t operate within the same security parameters that other devices do. As they continue to explode as a device class in healthcare settings, they present new vulnerabilities as well. These new forms of digital connection between patient and provider create new attack surfaces in a network, as well as enlarge ones that already exist.
While many consumers responded that they felt they had a shared responsibility to protect their data, the report notes that securing data remains the provider’s responsibility. Many consumers also misjudge the risks, with nearly half responding that they felt their smartphones were “nearly as secure” as hospital data networks.
Consumers also remain hard to engage on health data safety: although over half of the U.S. population has been exposed to some form of data breach, nearly as many of those polled did not know whether their data had ever been compromised.
What is the trend?
Hospitals need to go above and beyond in their preparedness and response to cyber threats.
Large data breaches are becoming the norm and when targets include large national companies, the effects of an attack can be felt everywhere. Furthermore, every player in the security ecosystem – from vendor to practitioner to patient – needs to have a stake in maintaining the security of healthcare data.
On the record
“With nearly 90 percent of health organization CIOs indicating they purchase cybersecurity software to comply with HIPAA, rather than to reduce threat risk, consumers have a right to be worried about the cyber defenses protecting their health data,” said Tom Bain, vice president of security strategy at Morphisec. “Merely checking the box that cybersecurity defenses meet HIPAA requirements isn’t enough to protect healthcare organizations today from advanced and zero-day attacks from FIN6 and other sophisticated attackers.”
Benjamin Harris is a Maine-based freelance writer and former new media producer for HIMSS Media.
Health information technology remains an untapped opportunity with great promise for reducing disparities in healthcare delivery and outcomes in the clinical environment.
That’s the contention of 12 original research papers and five editorials and commentaries published in the June supplement to the journal Medical Care.
The supplement, supported with funding from the National Institute on Minority Health and Health Disparities, is based in part on presentations made at an NIMHD-funded workshop in collaboration with the National Science Foundation and the National Health IT Collaborative for the Underserved.
“Health IT tools such as EHRs, patient portals, patient-monitored health behaviors and clinical decision support (CDS) systems may yield population health benefits for underserved populations by enhancing patient engagement, improving implementation of clinical guidelines, promoting patient safety and reducing adverse outcomes,” according to an editorial by NIMHD Director Eliseo Pérez-Stable, MD, with NIMHD Health Scientist Administrator Beda Jean-Francois and NIMHD Chief of Staff Courtney Ferrell Aklin.
The authors contend that “EHRs should provide a platform for improved documentation of social determinants of health using standardized terminology and methods of ascertainment,” while the “availability of real-time actionable patient data, clinical care coordination and decision support enabled by health IT tools may also reduce disparities in quality of care for underserved populations.”
For example, an observational study published in the Medical Care June supplement shows how an EHR-based model—developed by researchers at Boston Medical Center—was able to gather social determinants of health information to screen primary care patients for unmet needs.
The model leverages a one-page screener that patients fill out in the waiting room before their appointment, and—based on that input—the EHR prompts providers to address any SDOH concerns raised by patients during the appointments.
“The potential is great in actually helping to decrease health disparities if it’s done correctly,” observes Courtney Ferrell Aklin, chief of staff at NIMHD. “The opportunity is there, but we need to make sure that we’re mindful of how we go about using healthcare IT and ensure that we are using it for the benefit of all.”
While African Americans and Latinos have a higher rate of use of mobile technology than their white counterparts, she notes that minorities lag behind when it comes to the use of patient portals in EHRs.
“As a result, we could see an increase in health disparities instead,” adds Aklin. “What we haven’t done is the research to have enough data to figure out how to mitigate those effects.”
More research is needed to “investigate the potential unintended consequences of health technologies, such as identifying barriers that prevent the uptake and engagement with EHRs by medically underserved patients, develop effective approaches and models to deliver CDS in safety net clinical settings, and implement and evaluate the best models for the inclusion and utility of social determinants of health in order to advance health equity for racial and ethnic populations,” according to Aklin and her colleagues.