Technological leadership by the United States requires forethought and organization. The plan necessary to maintain that leadership—a national technology strategy—should be broad in scope. Its range includes investments in research, nurturing human talent, revamping government offices and agencies, and ensuring that laws, regulations, and incentives provide private industry with the ability and opportunity to compete fairly and effectively on the merits of its products, capabilities, and know-how. Given that key inputs are diffused globally, this plan must also carefully consider how the United States can effectively partner with other tech-leading democracies for mutual economic and security benefit. This includes taking measures to promote norms for technology use that align with shared values.
In the context of strategic competition with China, the need to craft new approaches to technology development and deployment is increasingly apparent to government leaders. Many lawmakers grasped the stark reality that U.S. technological preeminence was eroding when they realized that China had become a global juggernaut in telecommunications, a situation exacerbated by Beijing’s push to dominate global fifth generation (5G) wireless networks. The state of play poses national and economic security risks to the United States, which, along with its allies and partners in the Indo-Pacific and Europe, has made notable headway in addressing and mitigating these risks. However, much work remains. Chinese firms continue to push for greater digital entanglement around the world, from Southeast Asia to Africa to Latin America. Given the fundamental importance to the digital economy of communications networks and the standards that govern them, the more successful Beijing’s policies are, the greater the challenge for tech-leading democracies to maintain their economic competitiveness. There is also the specter of norms: if illiberal actors come to dominate norm-setting, their power to shape how networks are used and to manipulate data flows threatens liberal democratic values the world over.
It is time for tech-leading democracies to heed lessons from the 5G experience to prepare for what comes next, known as Beyond 5G technologies and 6G, the sixth generation of wireless. With telecommunications operators around the world still in the early stages of rolling out 5G, it is reasonable to ask why policymakers should focus now on 6G technologies that are not expected to be commercialized until around 2030. First, governments of leading technology powers have already crafted various visions and strategic plans for 6G, and myriad research efforts, though nascent, are under way. Second, the 5G experience shows that belated attention to global developments in telecommunications resulted in vexing geopolitical problems that could have been better mitigated, or perhaps in some cases avoided altogether. Finally, because communications technologies are of fundamental importance to economic and national security, prudent and proactive policymaking in the early stages of technological development will help ensure that the United States and its allies and partners are well positioned to reap the benefits while countering the capabilities of adversarial competitors, most notably China.
To secure America’s 6G future, the U.S. executive and legislative branches should act on an array of issues. First and foremost is setting a road map for American leadership in 6G. This framework will then inform the scope and scale of the actions needed to make that vision a reality. The necessary actions range from investing in research and development (R&D) to developing infrastructure to initiating novel tech diplomacy.
Promote American Competitiveness in 6G
The White House should:
Craft a 6G strategy. The United States needs a strategic road map that lays out a vision for American leadership in 6G and the desired international and domestic telecommunications landscape of 2030 and beyond.
Expand R&D funding for 6G technologies. The White House should explore opportunities for additional 6G R&D funding through research grants, tax credits, and financial support.
Leverage existing capabilities for testing, verification, and experimentation of 6G technologies. The White House, working with the interagency Networking and Information Technology Research and Development Program, can establish government 6G testbeds (in the laboratory and field) to support and build upon 5G R&D.
Open additional experimental spectrum licenses to accelerate R&D efforts.
Establish a U.S. 6G Spectrum Working Group. The working group should identify spectrum needs for 6G rollouts and offer recommendations for spectrum access and management.
Promote the development of new 6G use cases by using the purchasing power of the U.S. government.
Congress should:
Designate the Department of Commerce a member of the U.S. intelligence community (IC). Closer ties to the IC will improve information-sharing on foreign technology policy developments, such as adversaries’ strategies for challenging the integrity of standard-setting institutions. This action will also integrate the Department of Commerce’s analytical expertise and understanding of private industry into the IC.
Enact R&D funding to solve challenges for rural 6G development. 6G offers an opportunity to develop alternatives to traditional hardware such as fiber-optic cables (for example, wireless optical solutions or non-terrestrial platforms) that can fill network gaps and connect rural areas more readily.
Attract and retain much-needed foreign science and technology talent by initiating immigration reform, such as by raising the cap for H-1B visas, eliminating the cap for advanced STEM degree holders, and amending the Department of Labor Schedule A occupations list so that it includes high-skilled technologists.
The National Science Foundation should:
Create an equivalent of its Resilient & Intelligent NextG Systems (RINGS) program for start-ups. RINGS, supported by government and major industry partners, offers grants for higher education institutions to find solutions for NextG resilience.1
Expand the Platforms for Advanced Wireless Research Program, a consortium of city-scale research testbeds, so that it includes software innovation hubs.2
Collaborate with Allies and Partners
Congress should:
Create a Technology Partnership Office at the Department of State. A new office, headed by an assistant secretary for technology, is needed to initiate, maintain, and expand international technology partnerships.
The White House should:
Organize an international 6G Policy and Security Conference series. U.S. policymakers should work with counterparts from other techno-democracies to organize regular 6G conferences to discuss key issues including technology development, security, standard setting, and spectrum.
The White House, with the support of Congress, should:
Lead the creation of a Multilateral Digital Development Bank. In partnership with export credit and export finance entities in allied countries, the United States should lead in establishing a new organization with the mission of promoting secure and fair digital infrastructure development around the world.
The State Department should:
Spearhead a tech diplomacy campaign. The United States and allied governments should craft clear and consistent messaging to the Majority World about the risks of using technologies from techno-autocracies, especially China.
Ensure the Security of 6G Networks
The Federal Communications Commission, with the support of relevant agencies, should:
Identify, develop, and apply security principles for 6G infrastructure and networks. A proactive approach, with partners in industry and academia, should be undertaken to identify 6G security risks and ensure that international standards have cyber protections.
The White House and Congress should:
Promote and support the development of open and interoperable technologies. Coordinated outreach, joint testing, industry engagement, and policy collaboration can build global momentum and communicate risks associated with untrusted vendors.
Create a 6G security fund, building on existing efforts to ensure 5G security. This fund could be established in concert with the activities of the proposed Multilateral Digital Development Bank.
AUTHORS
Martijn Rasser, Senior Fellow and Director, Technology and National Security Program, Center for a New American Security (CNAS)
Ainikki Riikonen, Research Associate, Technology and National Security Program, CNAS
Henry Wu, Former Joseph S. Nye, Jr. Intern, Technology and National Security Program, CNAS
WASHINGTON — The U.S. Defense Department is creating a new position to oversee its digital and artificial intelligence activities, with the hope the office will be able to drive faster progress in those areas and meet threats posed by China, according to a senior defense official.
The new chief digital and artificial intelligence officer, or CDAO, will directly report to the deputy defense secretary and oversee the Joint Artificial Intelligence Center, the Defense Digital Service and the DoD’s chief data officer, according to a memo released Dec. 8. Today, those offices directly report to the deputy defense secretary, something the senior defense official said has led to disjointedness.
“We’ve created the CDO, the JAIC and DDS each operating independently and as if the other ones don’t exist,” said the official, who briefed media Dec. 8 on the condition of anonymity. “That causes two kinds of inefficiencies. One, it means we don’t have the kind of integration across their lines of effort that we could really maximize the impact of the things that any one organization is doing. Two, it means we don’t take advantage of when there are overlaps in what they’re doing, or underlaps in what they are doing to drive the right kind of prioritization in these spaces.”
The official insisted this new position is not meant to create more bureaucracy, but rather to serve as an integrating function that better drives priorities across these related functional areas.
It is unclear who will lead this organization, but the senior official said the department is looking both inside and outside the Pentagon. The intent is to establish an initial operating capability for the office by Feb. 1, 2022, and reach full operating capability no later than June 1, 2022, the official said.
After establishing an initial operating capability, the office will work with existing authorities to integrate and align the three offices it will oversee. The CDAO will serve as the successor organization to the JAIC, the official said, meaning it will be the lead AI organization within the Pentagon. The CDAO will act as an intervening supervisor for DDS, working to scale new digital solutions and apply them to other problems.
After reaching full operating capability, the official said, the DoD will submit legislative proposals to Congress to adjust authorities and reporting lines.
The heart of JADC2
The senior official noted that this new organizational change gets to the heart of the Pentagon’s Joint All-Domain Command and Control approach, which seeks to more seamlessly connect sensor information to shooters to allow for faster decision-making.
“It is JADC2. JADC2 is the integration of disparate data sources into a common architecture that allows us to have clear, senior-leader-down-to-operator decisions to drive warfighting improvements,” the official said. “To do that, you need a range of capabilities from common data architecture to a common development and deployment environment that allows you to take your applications either digital or AI-enabled and move them to the warfighter.”
To realize this vision, the department needs a single driver inside the Office of the Secretary of Defense.
Moreover, the hope is that the new CDAO position will accelerate progress on initiatives such as common data fabrics, open architectures and open APIs — all key enablers of JADC2.
The position is expected to help the DoD identify solutions to these problems and build toward common foundational elements, common development environments and common deployment environments.
It will also help scale the department, which currently has several startup efforts in these areas but needs to make them full-fledged projects.
“We have a couple of startups here, and to get to the scale at the speed we need in the department, we need a central advocate who can manage the resources, manage the priorities, connect with [combatant command] commanders and service leadership to really drive the prioritization and deployment of those solutions,” the official explained.
Biopharma companies should consider a new, integrated approach to evidence-generation strategies to better demonstrate the value of therapies to all stakeholders.
Understanding and improving patient outcomes through the generation of evidence is one of the most critical activities of an innovative biopharmaceutical company. Evidence that a therapy is both effective and safe is the primary requirement. But that is just the start. Clinicians need evidence to support optimal treatment decision making. Payers need evidence to support patient access. And patients need evidence to understand how a therapy might meet their needs. Increasing volumes and types of available data raise the potential to generate this evidence, but realizing that potential may require biopharmaceutical companies to change their approach to evidence generation, working far more strategically and collaboratively than they do at present.
The status quo in most companies is one in which each function draws up its own evidence-generation plan. These may well be included in a master, asset-level plan. Yet they tend to be bolted together, not integrated, and so are a far cry from the integrated evidence-generation plans (IEPs) being developed by a handful of companies. From the outset, IEPs take into account the evidence needs of different functions and geographies across the life cycle of an asset, and then collaboratively determine how to meet them using a broad range of methods and data (Exhibit 1). The end result is a significantly more efficient and effective use of resources in pursuit of better patient outcomes.
The case for change
The role that evidence generation plays among key stakeholders is evolving and becoming increasingly important.
Regulators are beginning to consider evidence beyond randomized controlled trials (RCTs). They recognize that real-world evidence (RWE) can help to accelerate drug development or indication expansion, minimize exposure of patients to placebo control arms, and help to offset rising drug-development costs. As a result, regulators around the world are beginning to incorporate RWE into their approval processes.1 In a few instances, RWE has served as a primary source of evidence for the approval of therapies. The US Food and Drug Administration (FDA), for example, approved the expansion of indications for Pfizer’s Ibrance to include male breast cancer on the basis of data from electronic health records and insurance claims.2 More commonly, RWE has been used to augment evidence from RCTs rather than being accepted on its own merits.3 The growing importance of RWE is nonetheless clear: roughly one in three new drug applications and biologic license applications in 2019 offered RWE as supportive evidence.4
Payers need to make increasingly complex decisions about coverage and reimbursement. Against a backdrop of rising healthcare spending, aging populations, and rising numbers of innovative, high-cost treatments, the strategic generation of evidence can help clarify the clinical, economic, and humanistic impact a new therapy might have versus the standard of care, bearing in mind that different payers will have different evidence needs. For instance, Spain and Italy prioritize evidence of budget impact, the United Kingdom stresses cost effectiveness, while Germany emphasizes using the right standard of care, as well as safety and efficacy in patient subpopulations, as the comparators.5
Providing the right evidence can prove powerful. In Japan, post-launch evidence showed that Boehringer Ingelheim and Eli Lilly’s Jardiance, a drug for treating type 2 diabetes, reduced the risk of cardiovascular events.6
Clinicians need more support. The more new therapies that launch, the more clinicians need to understand how they differ, how best to use them, the predictors of response, and whether to switch patients to alternative therapies. Understanding clinicians’ concerns and then generating the evidence to address them is therefore critical, and biopharmaceutical companies are increasingly turning to RWE for assistance. For example, a retrospective review of real-world data from the US VICTORY Consortium demonstrated a favorable safety profile for Takeda’s Entyvio in treating ulcerative colitis and Crohn’s disease.7
Patient centricity is growing. Companies must consider patients’ needs, preferences, and concerns throughout the clinical development process and generate evidence to address them, hence the importance of patient engagement. At least ten EU member states have a mechanism for patient engagement in their health technology assessment process, mostly at the advice and decision-making stage.8
The results of patient engagement can be impressive. Amgen’s Aimovig for migraines, for instance, was approved by the FDA with the help of a new, patient-reported instrument to assess how migraines were affecting patients’ ability to function. The patient-reported data successfully served as secondary end points.9
Why integration matters
Together, these trends make a strong case for why an integrated approach to generating evidence is valuable. An IEP identifies evidence gaps across functions, geographies, and the asset life cycle to meet the needs of external and internal stakeholders. And it aligns priorities and resource allocation to fill those gaps. As a result, an IEP can hold down drug-development costs for an asset’s primary indications and the costs for indication expansions through the judicious use of RWE in lieu of costly RCTs. It can also avoid the duplication of evidence generation within the company or even, as sometimes occurs, the purchase of similar data sets. And the collaborative approach can more rapidly identify and fill critical evidence gaps. Increasing numbers of companies are using them to great effect:
One company’s IEP for an oncology therapy identified no fewer than 18 critical evidence gaps that had not previously been spotted by individual functions. Some gaps—such as those involving data to support payer needs in specific geographies, data to support patient adherence, and data on optimal treatment duration—needed to be addressed urgently. Others, such as supporting follow-on indications and improving patient outcomes with combination therapies, related to longer-term needs. Further, the IEP process helped focus and prioritize resources, such as by reducing the number of areas of interest for investigator-initiated research from more than 50 to 15. It also identified opportunities to generate data more effectively by means of collaboration—by the medical function supporting the design of health economics and outcomes research (HEOR) studies, and by the HEOR and clinical functions working on post hoc analyses of clinical trial data for health technology assessment submissions and payer dossiers.
A biopharma company initiating an IEP for a pipeline cell therapy uncovered high-priority evidence gaps concerning the impact of manufacturing turnaround time on patient outcomes. This resulted in prioritizing seven manufacturing studies, which ultimately led to reduced vein-to-vein time. Further, the process highlighted the need to demonstrate safety and efficacy versus competitors, given the increasingly crowded landscape. The IEP team identified a specific disease registry as the optimal way to generate these data and persuaded leadership to approve the registry approximately six months earlier than expected.
An IEP identified a follow-on indication with high potential to be developed through RWE rather than sponsored trials as originally proposed—the result of insight sharing between medical, clinical, and commercial teams.
In a McKinsey survey of four IEP teams across functions and geographies, all respondents said the IEP had improved strategic alignment on evidence needs across stakeholders and led to the development of an effective execution plan. Over 80 percent said the IEP had increased synergies in cross-functional evidence generation.
The challenges of an integrated evidence-generation strategy
Companies that choose to transition to a more integrated evidence-generation strategy face three initial obstacles. Functions and geographies are siloed, hampering communication and collaboration; there is a tendency to focus on generating evidence to win near-term regulatory approval in key markets, rather than evidence to support the asset throughout its life cycle; and data are fragmented, making it hard to know what evidence is available and what might be needed. Understanding these challenges can help companies devise a sound plan to overcome them.
Functional and geographic silos
Integrated evidence-generation strategies depend upon transparency and coordination, with functions sharing data and information on stakeholder needs to identify evidence gaps, align on priorities, and plan how to most effectively fill the gaps. But this can be challenging for functions accustomed to focusing on their own, specific deliverables, with minimal input from other functions. And care is required to ensure legal compliance in how data are shared between functions such as commercial and medical.
Geographic coordination is also needed, though not the norm. Companies headquartered in the United States, for example, tend to use US-focused evidence-generation strategies. The fact that most drugs are first launched there and most evidence is generated there—plus the commercial importance of the US market—helps explain this focus. So while there is often strong US participation in developing evidence-generation strategies, other major markets are not always adequately consulted. Different languages and time zones pose further challenges to collaboration. The end result is a so-called global evidence-generation plan that often fails to address many regional or local needs and, sometimes, regional or local plans that run counter to the global strategy.
A near-term focus
A tendency to focus on generating near-term evidence is another obstacle to an integrated approach. Across the industry, companies concentrate on generating evidence from RCTs to obtain regulatory approval, which is a critical near-term goal. Yet evidence is needed to inform therapy use along the entire patient journey, which means that other functions—medical, HEOR, and market access, for instance—need to provide input when clinical trials are designed to enable optimal patient use and access once therapies are approved (Exhibit 2).
A near-term focus can also diminish support for investigator-initiated trials (IITs) for development-stage assets. The perceived or actual regulatory risk associated with conducting IITs prior to approval means biopharmaceutical companies often choose not to fund them until after launch, at which point clinical development priorities shift to different development-stage assets. Yet postapproval asset evidence plans can benefit tremendously from continued support and input from clinical development.
More broadly, a focus on the near term can prioritize the launch indication and launch geographies for a development-stage asset over longer-term needs. Hence, evidence-generation plans may not fully reflect the potential to use RWE for life-cycle management, or additional secondary end points or comparator arms in trial designs to allow approval or uptake in additional geographies.
Fragmented data
Rarely is there a complete data catalog for an asset or disease, as different data sit with different functions and geographic units. In addition, each function typically uses its own system to track study execution, be it an ad hoc Excel spreadsheet, a shared database, or sophisticated portfolio tools. And there is no guarantee that data are kept up to date—seldom is there a single accountable owner of the data even within a single function. As a result, obtaining a complete, cross-functional view of the data being generated for a specific asset or disease is hard, making it difficult to leverage all knowledge, know where evidence gaps might lie, or prevent the duplication of efforts to generate new evidence.
Making the shift
Companies that make the most progress implementing integrated evidence-generation strategies take account of these challenges. To that end, they do four things. They make sure the planning process is cross-functional and begins early in the asset life cycle. They foster change management to further encourage collaboration and break down functional and geographic silos. They establish governance processes that support a new way of working. And they invest not just in data platforms and advanced analytics capabilities but also in skills that facilitate cross-functional engagement.
Institute a cross-functional planning process two to three years before launch
Ideally, companies should begin building an IEP for an asset two to three years before launch. At that time, the strategic direction of the clinical development program will be clear but there remains an opportunity to expand evidence generation to support the asset along the entire patient journey and to support planned life cycle indications, not just the launch indication.
The work is typically led by global medical affairs, which is well placed to integrate the perspectives of all internal and external stakeholders. In some organizations, however, R&D plays a lead role prelaunch and then hands the task over to medical affairs postlaunch. Likewise, while in most organizations IEPs are distinct plans that synergistically build on clinical development plans (CDPs) to support an asset’s launch and full life cycle, in some organizations the CDP evolves into an IEP as the product progresses in development.
The typical IEP process generally entails the following steps (Exhibit 3):
Convene a cross-functional IEP team and develop and agree on the work plan: The medical-affairs lead assembles the team and introduces the process and framework for the IEP.
Develop strategic context: Clinical, medical, launch, and life-cycle management strategies need to inform the IEP.
Assemble an evidence catalog: Develop a centralized catalog of current and planned evidence-generation efforts that can be shared across functions, with appropriate safeguards in place, such as those that will keep commercial and medical affairs legally compliant.
Identify and prioritize evidence gaps: Identify, without imposing any initial constraints, potential evidence gaps across the patient journey and then prioritize those gaps that are both feasible to fill and that will have the most potential impact. Be sure to consider the evidence needs of both internal and external stakeholders across geographies.
Develop plans to fill priority evidence gaps: Think beyond RCTs and consider and weigh the trade-offs between different types of data and approaches (for example, secondary analyses of existing data sets, Phase IV trials, investigator-initiated trials, HEOR studies, RWE, advanced analytics). Tactically, ensure that the right combinations of internal and external expertise are leveraged to design, plan, and execute studies for optimal value creation.
Integrate input from additional stakeholders, then sign off: Additional input can strengthen the plan. Stakeholders in different markets, as well as cross-functional partners that were not part of the IEP team, can provide insights, as can external stakeholders, often as part of an advisory board. A cross-functional governance body should give final approval to the plan (as discussed in “Establish effective governance,” below).
Keep the plan updated: IEPs should be living documents that reflect changes in the asset strategy and study-execution updates. Strategic updates are made after major data readouts. Executional ones, such as updates on study status, occur every six to 12 months.
Foster a change-management program
To develop an IEP, people will need to change the way they work—hence the need for a change-management program to break down functional and geographic silos and any cultural resistance. A change-management program should include the following elements:
Leadership role modeling: Executive sponsors, such as the CEO and head of global medical affairs, should be seen and heard supporting the new approach and driving change, as should the heads of each function.
A compelling value story: A change story that makes clear the rationale for integrated evidence generation should be cascaded through the organization, potentially through town halls, asset-level workshops, and one-on-one meetings. Showcasing evidence of how the approach has benefited brand teams will also encourage adoption.
Reinforcement mechanisms: Personal-development plans that encourage integrated evidence generation—key performance indicators (KPIs) for global disease teams that include the development and updating of IEPs, for instance—can accelerate adoption. Publicly celebrating the success of an IEP can also serve as an important reinforcement mechanism.
Confidence and skill building: Formal training to understand the new processes and governance will build capabilities as well as confidence in the new approach.
Establish effective governance
Cultural resistance can be an issue if clarity on leadership and governance is lacking.
There is much to clarify. Who, for example, is best placed to lead cross-functional planning, and who has final authority to decide which projects to prioritize? Should a scientific, medical, commercial, or hybrid body be responsible for governance? How frequently should an IEP be renewed? And who has sign-off rights for new IEPs or updates? While in some companies the cross-functional asset team signs off on the IEP, the ideal is a cross-functional, executive-level governing body. Bear in mind, however, that the ideal functional leader and the makeup of the governance body may change depending on the stage of development of an asset and the input required.
Whatever the choices made, they need to be set out clearly in a governance framework that eases the transition to the new approach. Importantly, there should also be agile funding mechanisms in place to ensure that prioritized studies are adequately resourced.
IEPs represent a fundamental shift in the way evidence is generated across functions, geographies, and phases of the asset life cycle.
Build new skills and capabilities
Companies wishing to adopt an integrated approach to the generation of evidence across the entire portfolio will need to build new skills and capabilities.
Data and analytics. For a successful evidence-generation strategy, companies will need an integrated data solution that seamlessly links the data catalog of completed studies (translational data, clinical trial data, and RWE, for example) with ongoing or planned studies across functions (with role-based access control, traceability, and auditability). Such a solution not only facilitates the rapid development and updates of IEPs but also can be used to power an analytics engine to generate hypotheses.
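As a purely illustrative sketch, one entry in such a catalog might capture fields like the following; the schema, field names, and access roles are our assumptions rather than any industry standard.

```python
# Hypothetical schema for one evidence-catalog entry; all fields illustrative.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PLANNED = "planned"
    ONGOING = "ongoing"
    COMPLETED = "completed"

@dataclass
class EvidenceCatalogEntry:
    study_id: str
    asset: str                                # therapy or asset name
    owning_function: str                      # e.g., "clinical", "HEOR"
    geography: str                            # market(s) the evidence supports
    data_type: str                            # e.g., "RCT", "RWE", "HEOR study"
    status: Status
    evidence_gaps_addressed: list[str] = field(default_factory=list)
    allowed_roles: set[str] = field(default_factory=set)  # role-based access

entry = EvidenceCatalogEntry(
    study_id="ST-001", asset="asset-X", owning_function="HEOR",
    geography="EU", data_type="RWE", status=Status.ONGOING,
    evidence_gaps_addressed=["payer budget impact"],
    allowed_roles={"medical", "heor"},
)
```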
Talent. Attracting, retaining, and building the right talent will also be key. Experts will be needed in areas such as non-registrational data generation. But to overcome cultural roadblocks, soft leadership skills that facilitate cross-functional buy-in and engagement are critical too. There may also be a shortage of strategic or creative thinkers who can look beyond their asset or function to disease or franchise-level needs and identify opportunities for innovative methods of evidence generation. Training and coaching may be required to build the skills that lead to high-quality, integrated evidence-generation strategies and their execution.
Agile delivery. A playbook for generating integrated evidence-generation plans helps to ensure a consistent approach and speeds development, with periodic updates allowing newly discovered best practices to be codified and shared. For maximum impact, this can be paired with an agile project-management function dedicated to coordinating and communicating integrated evidence planning across the organization.
IEPs represent a fundamental shift in the way evidence is generated across functions, geographies, and phases of the asset life cycle, and as such will entail considerable effort to implement. But growing numbers of companies are discovering it is an effort well worth making in the pursuit of improved patient outcomes.
Palak Amin is a consultant in McKinsey’s Philadelphia office; Sarah Nam is an associate partner in the Washington, DC, office; and Lucy Pérez is a senior partner in the Boston office, where Jeff Smith is a partner.
The authors wish to thank David Champagne, Alex Davidson, Mattias Evers, Tomoko Nagatani, Brandon Parry, Lydia The, and Jan van Overbeeke for their contributions to this article.
To ensure social distancing and avoid infection, healthcare practices in many countries shifted from in-person consultations to telemedicine.
Nearly two-thirds of healthcare providers across 14 global markets are now investing heavily in digital health.
In developing countries, digital healthcare is also helping, with remote access to specialists.
Senior healthcare leaders from 14 countries say strengthening resilience and preparing for future crises is a top priority, according to a new report commissioned by Royal Philips.
The pandemic has seen many countries shift from in-person medical consultations to telemedicine, using apps, phone and video appointments. Industry analyst IDC predicts that by 2023 nearly two-thirds of patients will have accessed healthcare via a digital front end.
Improving resilience and planning for future crises is the top priority for more than two-thirds of senior healthcare leaders surveyed, with France, the Netherlands and Germany scoring the highest. Second in line is the continued shift to remote and virtual care (42%), led by India, the Netherlands and the US.
[Chart: The top priorities are preparing to respond to crises and facilitating a shift to remote/virtual care. Source: Royal Philips, 2021]
Accordingly, 64% of healthcare leaders are investing heavily in digital health technology at the moment, but the number drops to 40% when asked about their investment levels in three years’ time. This may be because respondents expect solid foundations to have been laid by then or due to continued uncertainty about healthcare funding beyond the pandemic.
[Chart: People may expect telehealth foundations to have been laid three years from now, which would require less spending. Source: Royal Philips, 2021]
Digital health needs AI
A major focus for future health technology investments is the deployment of Artificial Intelligence (AI) and machine learning.
At present, 19% of healthcare leaders polled by Royal Philips said they are prioritizing investments in AI, but 37% said they plan to do so over the next three years. The aim is to have AI help with clinical decision-making and to predict clinical outcomes.
This ties in with a growing shift from volume-based care targets to value-based care, where predicting patient outcomes will play a key role.
In value-based healthcare models, providers get paid for improving health outcomes rather than for the volume of patients treated. The focus is on treating illnesses and injuries more quickly and avoiding chronic conditions such as diabetes or high blood pressure. The results are better health outcomes and lower costs for both the healthcare system and the patient, thanks to fewer doctor’s visits, tests, interventions and prescriptions.
IDC has forecast that by 2026 two-thirds of medical imaging processes will use AI to detect diseases and guide treatment. A growing number of healthcare leaders believe that investing in AI technology is important for the future of their medical facility, according to the Royal Philips report.
[Chart: The forecast level of healthcare leaders’ belief that investment in predictive technologies will prepare their healthcare facility for the future. Source: Royal Philips, 2021]
Overcoming barriers to digital health
While healthcare leaders are clearly aware of the value of their digital investments, there are still many barriers to the sector’s digital transformation.
A lack of technology experience among staff is one major obstacle, highlighting the need for more digital training for those at the front line of healthcare provision. At the same time, governance, interoperability and data security challenges need to be overcome.
Resolving those will not be easy – which is why 41% of respondents highlighted the importance of forming strategic partnerships with technology companies or other healthcare facilities to jointly roll out new digital technology.
Freeing up hospitals
The formation of technology-enabled ecosystems is expected to contribute to offloading around a quarter of routine care from hospitals. Over the next three years, across the 14 markets surveyed, healthcare services at walk-in clinics and in-patient treatment centres will grow by around 10% each, pharmacies by 4% and home care by 6% on average.
[Chart: Healthcare services at walk-in clinics and in-patient treatment centres are expected to grow over the next 3 years. Source: Philips, 2021]
This trend is stronger in countries where healthcare provision is more likely to be in a rural setting, such as India and China.
This may be because digital technology has the potential to bridge healthcare gaps in underserved rural communities, especially in emerging markets. For example, an all-female health provider in Pakistan, Sehat Kahani, has e-health clinics around the country where – for a cost of $0.66 – patients can see one of a network of 1,500 doctors via a digital platform.
A new report makes the national security case for overseas talent and increased research and development for a 6G infrastructure.
As the Biden administration looks to improve the nation’s telecommunications networks and expand access to internet services to reduce the digital divide in the U.S., industry analysts are recommending a slew of federal investments, including a nationwide 6G strategy and supporting infrastructure.
A report published by researchers at the Center for a New American Security, framed within the rivalry between the U.S. and China over technological innovation and deployment, offered solutions to expedite and streamline the U.S. rollout of advanced 6G connectivity.
“It is time for tech-leading democracies to heed lessons from the 5G experience to prepare for what comes next, known as Beyond 5G technologies and 6G, the sixth generation of wireless,” the authors write.
The report outlines a to-do list for the executive and legislative branches of government.
For the White House, the report encourages a formal 6G strategy to establish a road map for 6G deployment. Additionally, it advocates increased federal investment in the research and development of 6G technology and the establishment of a working group focused on the plan for 6G deployment.
At the Congressional level, the report calls on lawmakers to designate the Department of Commerce as a member of the intelligence community.
“Closer ties to the IC will improve information-sharing on foreign technology policy developments, such as adversaries’ strategies for challenging the integrity of standard-setting institutions. This action will also integrate the Department of Commerce’s analytical expertise and understanding of private industry into the IC,” the report states.
Other supporting tactics include allocating funds to ensure the rollout of 6G to vulnerable, underserved rural communities that have historically lacked access to fast network capabilities, as well as attracting more foreign technologists to assist in 6G development, specifically through reforms to the H-1B visa program, such as raising its cap.
Researchers also advocate for the creation of new offices and programs within agencies like the Department of State, National Science Foundation, and White House to continue to support 6G network implementation.
Although the nation has struggled over the years to secure a 5G rollout, analysts suggest that policymakers should focus now on 6G deployment.
“The United States cannot afford to be late to the game in understanding the implications of 6G network developments,” the report reads. “To articulate the best way forward, policymakers should heed the lessons of 5G rollouts—both specific technical developments and broader tech policy issues—and understand the scope of the 6G tool kit available to them.”
China in particular has taken this approach to 6G. In 2019, the Chinese government unveiled plans to launch new research and development efforts to deploy 6G after previously focusing on 5G technologies.
Congress has taken some steps to set the stage for the U.S.’s 6G deployment. In the recently proposed Next Generation Telecommunications Act, a group of senators included provisions that would allocate public funds to support 6G advancements in urban areas.
The predictive software used to automate decision-making often discriminates against disadvantaged groups. A new approach devised by Soheil Ghili at Yale SOM and his colleagues could significantly reduce bias while still giving accurate results.
Organizations rely on algorithms more and more to inform decisions, whether they’re considering loan applicants or assessing a criminal’s likelihood to reoffend. But built into these seemingly “objective” software programs are lurking biases. A recent investigation by The Markup found that lenders are 80% more likely to reject Black applicants than similar White applicants, a disparity attributed in large part to commonly used mortgage-approval algorithms.
It might seem like the obvious solution is to instruct the algorithm to ignore race—to literally remove it from the equation. But this strategy can introduce a subtler bias called “latent discrimination.” For instance, Black people might be more concentrated in certain geographic areas. So the software will link the likelihood of loan repayment with location, essentially using that factor as a proxy for race.
In a new study, Soheil Ghili, an assistant professor of marketing at Yale SOM, and his colleagues proposed another solution. In their proposed system, sensitive features such as gender or race are included when the software is being “trained” to recognize patterns in older data. But when evaluating new cases, those attributes are masked. This approach “will reduce discrimination by a substantial amount without reducing accuracy too much,” Ghili says.
As algorithms are used for more and more important decisions, the bias built into them must be addressed, Ghili says. If a recidivism algorithm predicts that people of a certain race are more likely to re-offend, but the real reason is that those people are arrested more often even if they’re innocent, then “you have a model that implicitly says, ‘Those who treated this group with discrimination were right,’” Ghili says.
And if an algorithm’s output causes resources to be withheld from disadvantaged groups, this unfair process contributes to a “vicious loop,” he says. Those people “are going to become even more marginalized.”
The prediction process carried out by machine-learning algorithms takes place in two stages. First, the algorithm is given training data so it can “learn” how attributes are linked to certain outcomes. For example, software that predicts recidivism might be trained on data containing criminals’ demographic details and history. Second, the algorithm is given information about new cases and predicts, based on similarities to previous cases, what will happen.
Since removing sensitive details from training data can lead to latent discrimination, researchers need to find alternative approaches to reduce bias. One possibility is to boost the scores of people from disadvantaged groups, while also attempting to maximize the accuracy of predictions.
In this scenario, however, two applicants who are identical other than their race or gender could receive different scores. And this type of outcome “usually has a potential to lead to backlash,” Ghili says.
Ghili collaborated with Amin Karbasi, an associate professor of electrical engineering and computer science at Yale, and Ehsan Kazemi, now at Google, to find another solution. The researchers came up with an algorithm that they called “train then mask.”
During training, the software is given all information about past cases, including sensitive features. This step ensures that the algorithm doesn’t incorrectly assign undue importance to unrelated factors, such as location, that could proxy for sensitive features. In the loan example, the software would identify race as a significant factor influencing loan repayment.
But in the second stage, sensitive features would be hidden. All new cases would be assigned the same value for those features. For example, every loan applicant could be considered, for the purposes of prediction, a White man. That would force the algorithm to look beyond race (and proxies for race) when comparing individuals.
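To make the two stages concrete, here is a minimal Python sketch of the train-then-mask idea, using synthetic loan-style data and a logistic regression. The column names, the data, and the model choice are illustrative assumptions, not the researchers’ actual implementation.

```python
# Illustrative train-then-mask sketch (synthetic data, not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
race = rng.integers(0, 2, n)                 # sensitive attribute (0/1)
location = race + rng.normal(0, 0.5, n)      # proxy correlated with race
income = rng.normal(50, 10, n)               # non-sensitive feature
repaid = (income + 5 * race + rng.normal(0, 5, n) > 55).astype(int)

# Stage 1 (train): include the sensitive feature so the model does not
# silently shift its predictive weight onto proxies such as location.
X = np.column_stack([race, location, income])
model = LogisticRegression().fit(X, repaid)

# Stage 2 (mask): score every new case with the SAME fixed value for the
# sensitive feature, so otherwise-identical applicants score identically.
def predict_masked(race_col, location_col, income_col, reference=1):
    masked = np.column_stack([
        np.full_like(race_col, reference),   # everyone gets the reference value
        location_col,
        income_col,
    ])
    return model.predict_proba(masked)[:, 1]

scores = predict_masked(race, location, income)
```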
The researchers tested their algorithm with three tasks: predicting a person’s income status, whether a credit applicant would pay bills on time, and whether a criminal would re-offend. They trained the software on real-world data and compared the output to the results of other algorithms.
The “train then mask” approach was nearly as accurate as an “unconstrained” algorithm—that is, one that had not been modified to reduce unfairness. For instance, the unconstrained software correctly predicted income status 82.5% of the time, and the “train then mask” algorithm’s accuracy on that task was 82.3%.
Another advantage of this approach is that it avoids what Ghili calls “double unfairness.” Consider a situation with a majority and a minority group. Let’s say that, if everything were perfectly fair, the minority group would perform better than the majority group on certain metrics—for instance, because they tend to be more highly educated. However, because of discrimination, the minority group performs the same as the majority.
Now imagine that an algorithm is predicting people’s performance based on demographic traits. If the software designers attempt to eliminate bias by simply minimizing the average difference in output between groups, this approach will penalize the minority group. The correct output would give the minority group a higher average score than the majority, not the same score.
The “train then mask” algorithm does not explicitly minimize the difference in output between groups, Ghili says. So this “double unfairness” problem wouldn’t arise.
Some organizations might prioritize keeping groups’ average scores as close to each other as possible. In that case, the “train then mask” algorithm might not be the right choice.
Ghili says his team’s strategy may not be right for every task, depending on which types of bias the organization considers the most important to counteract. But unlike some other approaches, the new algorithm avoids latent discrimination while also guaranteeing that two applicants who differ in their gender or race but are otherwise identical will be treated the same.
“If you want to have these two things at the same time, then I think this is for you,” Ghili says.
Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.
But it’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.
How AI bias happens
We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process. For the purposes of this discussion, we’ll focus on three key stages.
Framing the problem. The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer’s creditworthiness, but “creditworthiness” is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that “those decisions are made for various business reasons other than fairness or discrimination,” explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn’t the company’s intention.
Collecting the data. There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. The resulting face recognition system would inevitably be worse at recognizing darker-skinned faces. The second case is precisely what happened when Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favored men over women, it learned to do the same.
Preparing the data. Finally, it is possible to introduce bias during the data preparation stage, which involves selecting which attributes you want the algorithm to consider. (This is not to be confused with the problem-framing stage. You can use the same attributes to train a model for very different goals or use very different attributes to train a model for the same goal.) In the case of modeling creditworthiness, an “attribute” could be the customer’s age, income, or number of paid-off loans. In the case of Amazon’s recruiting tool, an “attribute” could be the candidate’s gender, education level, or years of experience. This is what people often call the “art” of deep learning: choosing which attributes to consider or ignore can significantly influence your model’s prediction accuracy. But while its impact on accuracy is easy to measure, its impact on the model’s bias is not.
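A brief sketch makes that asymmetry concrete: dropping an attribute and re-measuring accuracy is mechanical, while measuring the impact on bias first requires committing to a fairness definition. The data and attribute names below are hypothetical.

```python
# The accuracy impact of ignoring an attribute is easy to measure; the bias
# impact is not, because it depends on which fairness definition you pick.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 1000
age = rng.uniform(20, 70, n)
income = rng.normal(50, 10, n)
paid_off_loans = rng.poisson(2, n)
repaid = ((income / 10 + paid_off_loans + rng.normal(0, 1, n)) > 7).astype(int)

X_full = np.column_stack([age, income, paid_off_loans])
X_reduced = np.column_stack([age, income])   # same goal, one attribute ignored

for name, X in [("with paid_off_loans", X_full), ("without", X_reduced)]:
    acc = cross_val_score(LogisticRegression(), X, repaid, cv=5).mean()
    print(name, round(float(acc), 3))
# There is no analogous one-liner for bias: you must first choose a metric
# (demographic parity? equalized odds?) and identify the protected attribute.
```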
Why AI bias is hard to fix
Given that context, some of the challenges of mitigating bias may already be apparent to you. Here we highlight four main ones.
Unknown unknowns. The introduction of bias isn’t always obvious during a model’s construction because you may not realize the downstream impacts of your data and choices until much later. Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it. In Amazon’s case, when the engineers initially discovered that its tool was penalizing female candidates, they reprogrammed it to ignore explicitly gendered words like “women’s.” They soon discovered that the revised system was still picking up on implicitly gendered words—verbs that were highly correlated with men over women, such as “executed” and “captured”—and using that to make its decisions.
Imperfect processes. Many of the standard practices in deep learning are not designed with bias detection in mind. Deep-learning models are tested for performance before they are deployed, creating what would seem to be a perfect opportunity for catching bias. But in practice, testing usually looks like this: computer scientists randomly split their data before training into one group that’s actually used for training and another that’s reserved for validation once training is done. That means the data you use to test the performance of your model has the same biases as the data you used to train it. Thus, it will fail to flag skewed or prejudiced results.
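To illustrate, consider a toy sketch (synthetic data, invented numbers) in which one group is heavily under-represented. A random split reproduces that skew in the test set, so the overall accuracy number can look healthy; only the per-group breakdown, which standard practice often omits, would surface a disparity.

```python
# Toy illustration: a random train/test split inherits the training data's
# skew, so an overall accuracy number alone can hide group-level problems.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
group = np.array([0] * 950 + [1] * 50)       # group 1 badly under-represented
X = rng.normal(loc=group[:, None], scale=1.0, size=(1000, 3))
y = ((X.sum(axis=1) + 2 * group + rng.normal(0, 1, 1000)) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

print("overall accuracy:", accuracy_score(y_te, pred))
for g in (0, 1):                             # the breakdown many pipelines skip
    m = g_te == g
    print(f"group {g} accuracy:", accuracy_score(y_te[m], pred[m]))
```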
Lack of social context. Similarly, the way in which computer scientists are taught to frame problems often isn’t compatible with the best way to think about social problems. For example, in a new paper, Andrew Selbst, a postdoc at the Data & Society Research Institute, identifies what he calls the “portability trap.” Within computer science, it is considered good practice to design a system that can be used for different tasks in different contexts. “But what that does is ignore a lot of social context,” says Selbst. “You can’t have a system designed in Utah and then applied in Kentucky directly because different communities have different versions of fairness. Or you can’t have a system that you apply for ‘fair’ criminal justice results then applied to employment. How we think about fairness in those contexts is just totally different.”
The definitions of fairness. It’s also not clear what the absence of bias should look like. This isn’t true just in computer science—this question has a long history of debate in philosophy, social science, and law. What’s different about computer science is that the concept of fairness has to be defined in mathematical terms, like balancing the false positive and false negative rates of a prediction system. But as researchers have discovered, there are many different mathematical definitions of fairness that are also mutually exclusive. Does fairness mean, for example, that the same proportion of black and white individuals should get high risk assessment scores? Or that the same level of risk should result in the same score regardless of race? It’s impossible to fulfill both definitions at the same time (here’s a more in-depth look at why), so at some point you have to pick one. But whereas in other fields this decision is understood to be something that can change over time, the computer science field has a notion that it should be fixed. “By fixing the answer, you’re solving a problem that looks very different than how society tends to think about these issues,” says Selbst.
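A short sketch shows how two of these definitions are computed, and why they are different objects. The helper functions are ours; when base rates differ between groups, it is generally impossible to drive both gaps to zero at once.

```python
# Two common (and generally incompatible) fairness metrics, as helper functions.
import numpy as np

def demographic_parity_gap(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def equal_opportunity_gap(pred, y, group):
    """Difference in true-positive rates: same risk should mean same score."""
    tpr = lambda g: pred[(group == g) & (y == 1)].mean()
    return abs(tpr(0) - tpr(1))

pred = np.array([1, 0, 0, 0, 1, 0])
y = np.array([1, 0, 0, 0, 1, 1])
group = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(pred, group))    # 0.0: equal positive rates...
print(equal_opportunity_gap(pred, y, group))  # 0.5: ...yet unequal TPRs
```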
Where we go from here
If you’re reeling from our whirlwind tour of the full scope of the AI bias problem, so am I. But fortunately a strong contingent of AI researchers is working hard to address the problem. They’ve taken a variety of approaches: algorithms that help detect and mitigate hidden biases within training data, or that mitigate the biases learned by the model regardless of the data quality; processes that hold companies accountable to fairer outcomes; and discussions that hash out the different definitions of fairness.
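As one concrete example of the first family of approaches, here is a sketch of “reweighing” (after Kamiran and Calders), which assigns each training example a weight so that the protected attribute and the label look statistically independent to the learner. The helper below is illustrative, not a reference implementation.

```python
# Sketch of one data-side mitigation, "reweighing" (Kamiran & Calders):
# weight each example by expected / observed frequency of its
# (group, label) combination, so group and label appear independent.
import numpy as np

def reweigh(group, label):
    group, label = np.asarray(group), np.asarray(label)
    weights = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            if mask.any():
                expected = (group == g).mean() * (label == y).mean()
                weights[mask] = expected / mask.mean()
    return weights  # pass as sample_weight to most scikit-learn estimators
```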
“‘Fixing’ discrimination in algorithmic systems is not something that can be solved easily,” says Selbst. “It’s an ongoing process, just like discrimination in any other aspect of society.”
This originally appeared in our AI newsletter, The Algorithm.
Implementing a global set of standards on AI ethics can lead to a more collaborative future.
19 Nov 2021
Pascale Fung, Director of the Centre for Artificial Intelligence Research (CAiRE) and Professor of Electrical & Computer Engineering, The Hong Kong University of Science and Technology
Hubert Etienne, PhD candidate in AI ethics, École Normale Supérieure, Paris
As the use of AI increases, so does the need for a global governance framework.
The cultural differences between China and Europe present a unique set of challenges when it comes to aligning core ethical principles.
We examine the historical influences that inform current societal thinking and ask how the two traditions might work together in the future.
As a driving force in the Fourth Industrial Revolution, AI systems are increasingly being deployed in many areas of our lives around the world. A shared realisation of their societal implications has raised awareness of the need to develop an international framework for the governance of AI; more than 160 documents already aim to contribute by proposing ethical principles and guidelines. This effort faces the challenge of moral pluralism grounded in cultural diversity between nations.
To better understand the philosophical roots and cultural context underlying these challenges, we compared the ethical principles endorsed by the Chinese National New Generation Artificial Intelligence Governance Professional Committee (CNNGAIGPC) and those promoted by the European High-level Expert Group on AI (HLEGAI).
[Table: the ethical principles endorsed by the CNNGAIGPC compared with those promoted by the HLEGAI.]
Collective vs individualistic view of cultural heritage
In many respects the Chinese principles seem similar to the EU’s, with both promoting fairness, robustness, privacy, safety and transparency. Their prescribed methodologies, however, reveal clear cultural differences.
The Chinese guidelines derive from a community-focused and goal-oriented perspective. “A high sense of social responsibility and self-discipline” is expected from individuals so that they can partake harmoniously in a community that promotes tolerance, shared responsibilities and open collaboration. This emphasis is clearly informed by the Confucian value of “harmony”, an ideal balance to be achieved through the control of extreme passions – conflicts should be avoided. Other than a stern admonition against the “illegal use of personal data”, there is little room for regulation. Regulation is not the aim of these principles, which are instead conceived to guide AI developers along the “right way” toward the collective elevation of society.
The European principles, emerging from a more individual-focused and rights-based approach, express a different ambition, rooted in the Enlightenment and coloured by European history. Their primary goal is to protect individuals against well-identified harms. Whereas the Chinese principles emphasize the promotion of good practices, the EU focuses on the prevention of malign consequences. The former draws a direction for the development of AI, so that it contributes to the improvement of society; the latter sets limits on its uses, so that development does not happen at the expense of certain people.
This distinction is clearly illustrated by the presentation of fairness, diversity and inclusiveness. While the EU emphasizes fairness and diversity with regard to individuals from specific demographic groups (specifying gender, ethnicity, disability, etc.), the Chinese guidelines urge the upgrading of “all industries”, the reduction of “regional disparities” and the prevention of data monopolies. While the EU insists on the protection of vulnerable persons and potential victims, the Chinese guidelines encourage “inclusive development through better education and training, support”.
In the promotion of these values, we also recognize two types of moral imperatives. Centred on initial conditions to fulfil, the European requirements express strict abidance by deontological rules in the pure Kantian tradition. In contrast, referring to an ideal to aim for, the Chinese principles express softer constraints that can be satisfied at different levels, as part of a process of improving society. For the Europeans the development of AI “must be fair”; for the Chinese it should “eliminate prejudices and discriminations as much as possible”. The EU “requires processes to be transparent”; China requires them to “continuously improve” transparency.
Utopian vs dystopian view
Even when promoting the same concepts in a similar way, Europeans and Chinese mean different things by “privacy” and “safety”.
Aligned with the General Data Protection Regulation (GDPR), the European promotion of privacy encompasses the protection of individuals’ data from both state and commercial entities. The Chinese privacy guidelines, in contrast, only target private companies and their potentially malicious agents. Whereas personal data is strictly protected from commercial entities both in the EU and in China, the state retains full access in China. Shocking to Europeans, this practice is readily accepted by Chinese citizens, who are accustomed to living in a protected society and have consistently shown the highest trust in their government. Chinese parents routinely have access to their children’s personal information to provide guidance and protection. This difference goes back to the Confucian tradition of trusting and respecting the heads of state and family.
With regard to safety, the Chinese guidelines express an optimism that contrasts with the EU’s more pessimistic tone. They approach safety as something that needs to be “improved continuously”, whereas the European vision urges a “fall back plan”, treating the loss of control as a situation with no way back. The gap in the cultural representation of AI – perceived as a force for good in Asian cultures, but regarded with a deep-seated wariness of a dystopian technological future in the Western world – helps make sense of this difference. Robots are pets and companions in the utopian Chinese vision; they tend to become insurrectional machines as portrayed by Western media heavily influenced by the cyberpunk subgenre of sci-fi, embodied by films like Blade Runner and The Matrix, and the TV series Black Mirror.
Where is the common ground on AI ethics?
Despite the seemingly different, though not contradictory, approaches to AI ethics in China and the EU, the major commonalities between them point to a more promising and collaborative future in the implementation of these standards.
One of these is the shared tradition of the Enlightenment and the Scientific Revolution in which all members of the AI research community are trained today. AI research and development is an open and collaborative process across the globe. The scientific method was first adopted in China, along with other Enlightenment values, during the May Fourth Movement in 1919. Characterised as the “Chinese Enlightenment”, this movement resulted in the first ever repudiation of traditional Confucian values; it was then believed that only by adopting the Western ideas of “Mr. Science” and “Mr. Democracy” in place of “Mr. Confucius” could the nation be strengthened.
This anti-Confucian movement resurfaced during the Cultural Revolution, which, given its disastrous outcome, is now discredited. Since the third generation of Chinese leaders, the Confucian value of the “harmonious society” has again been promoted as part of the cultural identity of the Chinese nation. Nevertheless, “Mr. Science” and “technological development” continue to be seen as major engines of economic growth, leading to the betterment of the “harmonious society”.
Another common point between China and Europe is their adherence to the United Nations Sustainable Development Goals. Both sets of guidelines refer to some of these goals – including poverty and inequality reduction, gender equality, health and well-being, environmental sustainability, peace and justice, and economic growth – which encompass both societal and individual development and rights.
The Chinese and EU guidelines on ethical AI may ultimately benefit from being adopted together: they provide different levels of operational detail and offer complementary perspectives toward a comprehensive framework for the governance of AI.
The views expressed in this article are those of the authors alone and not the World Economic Forum.