The principles identify areas in which international standards organizations and other collaborative groups could advance what the agency calls Good Machine Learning Practice.
The U.S. Food and Drug Administration released a list of “guiding principles” this week aimed at helping promote the safe and effective development of medical devices that use artificial intelligence and machine learning.
The FDA, along with its U.K. and Canadian counterparts, said the principles are intended to lay the foundation for Good Machine Learning Practice.
“As the AI/ML medical device field evolves, so too must GMLP best practice and consensus standards,” said the agency regarding the principles.
WHY IT MATTERS
As the FDA notes, AI and ML technologies have the potential to transform healthcare – but their complexity also presents unique considerations.
The 10 guiding principles identify points at which international standards organizations and other collaborative bodies, including the International Medical Device Regulators Forum, could work to advance GMLP.
The agency says stakeholders can use the principles to tailor and adopt good practices from other sectors for use in health tech, as well as to create new sector-specific practices.
The principles are:
The total product life cycle uses multidisciplinary expertise.
The model design is implemented with good software engineering and security practices.
Participants and data sets represent the intended patient population.
Training data sets are independent of test sets.
Selected reference data sets are based upon best available methods.
Model design is tailored to the available data and reflects intended device use.
Focus is placed on the performance of the human-AI team.
Testing demonstrates device performance during clinically relevant conditions.
Users are provided clear, essential information.
Deployed models are monitored for performance, and retraining risks are managed.
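The principle that training data sets be independent of test sets guards against data leakage, where the same patient's records appear on both sides of the split and inflate measured performance. A minimal sketch of a patient-level split follows; the data and field names are hypothetical, not from any FDA guidance:

```python
# Illustrative sketch only (not FDA-endorsed code): assign whole patients to
# either the training or the test set, so no patient's records leak across
# the split. Field names and data are hypothetical.
import random

def split_by_patient(records, test_fraction=0.2, seed=42):
    """Split records so train and test sets share no patient IDs."""
    patient_ids = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patient_ids)
    n_test = max(1, int(len(patient_ids) * test_fraction))
    test_ids = set(patient_ids[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test

# Three hypothetical scans per patient, for ten patients.
records = [{"patient_id": pid, "scan": f"img_{pid}_{i}"}
           for pid in range(10) for i in range(3)]
train, test = split_by_patient(records)
train_ids = {r["patient_id"] for r in train}
test_ids = {r["patient_id"] for r in test}
assert train_ids.isdisjoint(test_ids)  # no patient appears in both sets
```

A naive record-level random split, by contrast, would scatter each patient's three scans across both sets, letting the model memorize patients rather than generalize.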
“Areas of [international] collaboration include research, creating educational tools and resources, international harmonization, and consensus standards, which may help inform regulatory policies and regulatory guidelines,” said FDA officials.
At a virtual meeting of the agency’s Center for Devices and Radiological Health and Patient Engagement Advisory Committee last October, Bakul Patel, director of the Digital Health Center of Excellence, emphasized the importance of balancing innovation with patient protection.
“We all know there are some constraints, because of location or the amount of information available, about the cleanliness of the data,” said Patel at the time. “That might drive inherent bias. We don’t want to set up a system where we figure out, after the product is out in the market, that it is missing a certain type of population, or demographic, or other aspect that we have accidentally not realized.”
ON THE RECORD
“Strong partnerships with our international public health partners will be crucial if we are to empower stakeholders to advance responsible innovations in this area,” said FDA officials as they unveiled the new principles. “Thus, we expect this initial collaborative work can inform our broader international engagements, including with the IMDRF.”
By Chuck Buck | Original story posted on: October 25, 2021
The Senate Appropriations Committee’s vote is seen as a victory for AHIMA and other leading healthcare organizations.
There was a hint of expectation when Katherine Lusk, chair and president of the American Health Information Management Association (AHIMA), recently teased Talk-Ten-Tuesdays audience members regarding a pending piece of legislation for a national patient identification program.
The Patient ID Now coalition, of which AHIMA is a founding member, continues to make an impact on the problem of patient misidentification. “We’re proud of the Framework for a National Strategy on Patient Identity that Patient ID Now released earlier this year,” Lusk said.
Later that same day, on Oct. 19, the U.S. Senate’s Appropriations Committee removed a longstanding prohibition to fund a national patient identifier program. AHIMA, along with a coalition of other healthcare organizations, had rallied around the cause.
According to the bill’s text, released by AHIMA, it “drops prohibition on using funding to develop a unique patient health identifier for each individual’s health information. The longstanding ban has been a barrier for health institutions to reliably share information about patients, and during the COVID-19 pandemic, for health entities to effectively trace contacts and track immunizations.”
“By joining the bipartisan movement to remove barriers to accurate patient identification, the Senate has taken a firm step towards protecting patient safety, patient privacy, and supporting efforts to address patient identification issues exacerbated by the COVID-19 pandemic,” AHIMA said in a news release posted on its website. “Patient ID Now has highlighted challenges caused by patient misidentification in the healthcare sector’s response to the pandemic. These challenges include thousands of duplicate records created during the vaccination registration process and disruptions in vaccine availability at provider sites because of inaccurate patient documentation.”
Lusk said AHIMA continues to promote the importance of protecting and securing health information. She told Talk Ten Tuesdays audience members that the Association had recently launched its own AHIMA dHealth™, a seal-of-approval program that helps digital health companies demonstrate how they protect patient health information.
Published 17 December 2020 – ID G00739647 – 37 min read – By Christian Canales, Tim Zimmerman, and 4 more
Communications technologies must continually evolve to match transformations across digital business landscapes. Product leaders must address the changing demand created by evolving cloud, networking, security, cellular connectivity and infrastructure business models.
Overview
Key Findings
5G network slicing, 5G security, Wi-Fi 6 (802.11ax), function accelerator cards and 6G will have the highest impact on industries, business functions, and markets, replacing legacy product capabilities.
Enterprise demand for multiprovider (private and public) cloud connectivity will drive adoption of technologies, such as multicloud networking and software-defined cloud interconnect, with a high impact. Multiple use cases apply, including consistent configuration management, better visibility, and compliance.
Enterprise 5G and Wi-Fi connectivity will largely continue to coexist. While the features and timetable for 6G are not yet clearly defined and commercialization is expected in 2028, delivery of 802.11be (Wi-Fi 7) is expected around 2024, with adoption crossing the early adopter chasm by 2026.
Recommendations
Product leaders seeking to develop or expand their portfolio of communications technology solutions via emerging technologies should:
Leverage artificial intelligence technology advancements intelligently. "We have AI/ML" will not give you a head start or separate you from the crowd unless you have real products and value behind the claim and can articulate your differentiation.
Orient workload-centric networking solutions to the cloud, where the practice of deploying multiple cloud providers is on the rise. Prioritize targeting organizations with a distributed user footprint.
Prioritize greater agility by developing network automation tools that enable an orchestration-driven, policy-based and intent-based networking system (IBNS) approach, rather than operationally focused network configuration and change management tools.
Develop a strategy for 5G private enterprise indoor services by identifying the key capabilities and enterprise use cases that will find market demand and create service delivery differentiation.
Analysis
Overview of Emerging Technology Horizon
The following Emerging Technology Horizon profiles technology advancements or net new trends that will significantly impact the communications markets within the next three-year horizon. Many of these technologies and trends are emerging to address new demands created by other areas of IT adoption or transformation. These include technologies that are evolving to address multiple cloud provider environments, the Internet of Things (IoT) and 5G connectivity, edge computing, and beyond (see Figure 1).

Figure 1: Emerging Technology Horizon — Communications
The communications markets are in a constant state of flux. New vendors emerge frequently and market leaders continually acquire them. Growing adoption of SaaS and other public cloud services, along with the need for near-local processing of data to ensure an excellent user experience of application services, has changed the way traffic flows in networks. The traditional data-center-focused, hub-and-spoke model, optimal for data residing in a single location, is no longer relevant. Data resides in multiple locations, decentralizing data traffic flows and carrying new security implications. A similar disaggregation of public cloud services is emerging, pushing a greater variety of client-impacting functions from the cloud node to the edge.

The growing use of public cloud services brings growing connectivity complexity for enterprises. Most enterprises access public cloud services on an ad hoc basis via internet connections, increasingly to a growing number of different providers. While multicloud networking products are nascent, they will become increasingly relevant as vendors expand their capabilities over the next two years. Software-defined cloud interconnect (SDCI) technology serves as a hub to connect an enterprise to a wide variety of cloud, network and internet service providers. By the end of 2023, we estimate that approximately 30% of medium and large enterprises will employ SDCI services, up from less than 2% today. Another driver is the demands of edge computing. As more intelligence is pushed to the edge, the need for on-device processing and more distributed architectures rises, creating shifts and new challenges for WAN connectivity. Edge applications will increasingly rely on a mesh of connections, balancing the need for mission-critical traffic to be processed locally against access to cloud services.
As requirements for accessibility of real-time data increase, so does the need to process more data locally while retaining access to applications hosted in public cloud providers' networks and ecosystems.

Mobile service provider marketing hype about 5G often includes contentions that the technology can replace the IEEE 802.11-based corporate Wi-Fi network. While 5G and Wi-Fi will largely continue to coexist, private 5G will predominantly be driven by organizations that cannot wait for reliable indoor public 5G coverage, with opportunities in industrial and manufacturing use cases. Outdoor private 5G opportunities will arise from edge computing and mission-critical IoT applications, and from large-scale installations that are more effectively served by cellular than by Wi-Fi connectivity. The arrival of Wi-Fi 7 (802.11be) is expected around the 2024 time frame. While each new IEEE standard has historically displaced its predecessor within three to four years (a pattern currently holding true for Wi-Fi 6), Wi-Fi 7's higher performance does not yet offer a realistic value proposition, and its impact is expected to be more limited.

There are many factors that make 5G security more complex. The rise of diversified services, cloud architecture and potentially massive numbers of IoT connections to 5G networks exposes new security concerns and challenges. However, from a standards point of view, 5G provides enhanced security features compared with 4G — introducing, for example, unified authentication, a more flexible security policy for diverse use cases, a secure service-based architecture and slice isolation. Although 5G security has not been particularly highlighted in many early communication service provider (CSP) deployments, Gartner expects that industrial users will demand that private networks leverage standards for security and privacy.

Automation is a key capability that spans many technology profiles in this research document.
This embraces NetOps, a networking approach that incorporates the use of DevOps tools to improve the operational experience, enabling a more nimble, agile and easier-to-manage network. Infrastructure and operations leaders are gradually transforming network operations by investing in analytics and automation, while improving integration with DevOps and security to support their digital business. This remains a bumpy road, though, partly because network vendors still need to improve their tooling and automation solutions, and partly because enterprises often have a strong culture of risk aversion that limits adoption of automation initiatives. Gartner nonetheless expects the use of NetOps 2.0 principles to grow by 40% by 2023, with organizations embracing these principles reducing application delivery times by 25% (for more information see NetOps 2.0: Embrace Network Automation and Analytics to Win in the Era of ContinuousNext).

In the campus networking space, Wi-Fi Network Assurance solutions enable simplified operations through automation capabilities, and the use of artificial intelligence/machine learning (AI/ML) functionality has also begun to extend to wired switching connectivity. Intent-based networking systems (IBNS) take automation to a wider portion of the network, including the WAN, data center, colocation facilities and cloud provider infrastructures. For software-defined cloud interconnect (SDCI) technology, Gartner advises organizations to prioritize providers that employ high levels of automation and orchestration in their hubs. For multicloud networking products, distinct capabilities include configuration/provisioning, automation, management and troubleshooting functionality. Gartner expects that by 2023, 20% of enterprises will use public cloud operational tools to manage and control at least 15% of their on-premises data center resources.
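The intent-based idea underlying NetOps and IBNS can be reduced to a simple loop: declare the desired network state, compare it to the observed state, and derive remediation actions rather than hand-editing device configs. The sketch below is a vendor-neutral toy; all device names and settings are hypothetical:

```python
# Minimal, vendor-neutral sketch of an intent-based check: given a declared
# intent and the observed device state, emit the remediation actions needed.
# Device names and settings are hypothetical.

def diff_intent(intended, observed):
    """Return (device, setting, current, desired) tuples for every drift."""
    actions = []
    for device, want in intended.items():
        have = observed.get(device, {})
        for key, value in want.items():
            if have.get(key) != value:
                actions.append((device, key, have.get(key), value))
    return actions

intended = {"sw-access-01": {"vlan": 120, "mtu": 9000},
            "sw-access-02": {"vlan": 120, "mtu": 9000}}
observed = {"sw-access-01": {"vlan": 120, "mtu": 1500},
            "sw-access-02": {"vlan": 130, "mtu": 9000}}

for device, key, have, want in diff_intent(intended, observed):
    print(f"{device}: set {key} {have} -> {want}")
```

In a real IBNS product this diff would feed an orchestration engine (with approval gates, given the risk aversion noted above) instead of a print statement.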
How to Use the Emerging Technology Horizon
This Emerging Technology Horizon content analyzes and illustrates two significant aspects of impact:
When we expect it to have a significant impact on the market (specifically, range).
How much of an impact it will have on relevant markets (namely, mass).
Each emerging technology or trend profile analysis is composed of these two aspects. The profiles are organized by range, starting with the center and moving to the outer rings of the Horizon (see Figure 1). (See Research and Methodology for the Emerging Technology Horizon in Note 1 for a more complete description of our approach to this research.)

Time to impact, or "range," is measured in the years to early majority adoption. (Fans of Geoffrey A. Moore can think of it as time to cross the chasm.) This is when technology adoption is "ready for prime time." It is important to point out that the time to technology impact, or "range," is not the same as the time to act on the technology. When and how a technology product or service leader should act depends on the company's business strategy. Providers that want to be "first movers" with an emerging technology or trend will need to act far sooner than those that are comfortable waiting for their competition to compel them into action.

The "mass" component examines the extent of the impact on existing products and markets. To assess how massive the impact is, we explore two main aspects: breadth and depth. The breadth of impact concerns how many sectors are affected (products, services, markets, business functions, industries and geographies). The depth of the impact includes an analysis of the potential disruption to existing products, services and markets.
Communications Emerging Technologies and Trend Technology Horizon Profiles
Use Table 1 to jump to specific profiles. Each profile name is linked to the full technology profile to enable easier navigation.

Table 1. Emerging Technologies in Communications Based on Time to Adoption
Wi-Fi 6 (802.11ax)

Analysis by: Tim Zimmerman and Bill Ray

Description: Wi-Fi 6 (802.11ax) is the latest iteration of the IEEE 802.11 WLAN technology standards. Its main enhancements allow the network to control device connectivity for the first time and improve the efficiency of the existing 2.4 GHz and 5 GHz spectrum. The new standard also increases the theoretical throughput of the wireless medium to 10 Gbps for densely populated areas. As such, its goal is to ensure that a larger number of devices with varying requirements are properly connected to the enterprise infrastructure.

Wi-Fi 6 also adds the ability to allocate bandwidth between endpoints, using orthogonal frequency-division multiple access (OFDMA), so low-speed applications (such as a Wi-Fi light switch) receive a smaller allocation than those that require high speeds (such as a television). However, this functionality depends on the endpoints also supporting Wi-Fi 6, and this will take some time, as the latest standard still carries a price premium. While the 802.11ax standard provides backward compatibility for legacy .11b/g/n/ac clients, these cannot benefit from the enhanced features of Wi-Fi 6, including the higher data rates, the improved multiuser multiple input/multiple output (MU-MIMO) capabilities and Basic Service Set (BSS) coloring.

Wi-Fi 6 can also be extended into the 6 GHz band. In this form, it is branded "Wi-Fi 6E" and will deliver significantly more speed and capacity, depending on the spectrum available. Wi-Fi 6E will be available in the U.S. early in 2021, with the U.K. and Europe hoping to follow (with half the 6 GHz frequency allocation) later the same year.
Other countries will follow, but are unlikely to allocate quite as much additional radio spectrum, so speeds will be commensurately slower.

Sample Providers: Cisco, CommScope (RUCKUS), Extreme Networks, H3C, Hewlett Packard Enterprise [HPE] (Aruba), Huawei, Juniper Networks (Mist Systems), Ruijie Networks

Range: Now (0 to 1 Year)

Gartner rates the range of Wi-Fi 6 as 0-1 year because:
We expect 802.11ax to become an IEEE-ratified standard within the next six months. Prestandard chips are already available, and the Wi-Fi Alliance has already created a certification test bed for the standard, which is being marketed as "Wi-Fi 6."
All leading Wi-Fi providers have already released Wi-Fi 6 APs for enterprises to purchase, with the ability to update them to the ratified version of the standard.
Vendors of mobile devices (e.g., smartphones, tablets, laptops) have already committed to creating products with the new standard. Wi-Fi 6 APs additionally guarantee backward compatibility to support .11b/g/n/ac clients.
802.11ax WLAN (Wi-Fi) access points (APs), as a percentage of overall APs shipped to enterprises, have grown from 0.8% in 1Q19 to 16.4% in 4Q19 and 18.8% in 1Q20, according to our market share data. We expect this share to exceed 35% by year-end 2020 and 55% by year-end 2021.

Mass: Very High

The impact of Wi-Fi 6 is expected to be very high, with more than 30% of WLAN upgrades for large enterprises based on 802.11ax in the next 12 months. Even though many WLAN upgrade projects are being delayed amid the COVID-19 pandemic, the economic downturn does not seem to be visibly altering the shift from 802.11ac to 802.11ax that we forecast late in 2019. While the price premium of .11ax over .11ac (for the enterprise market, excluding small-business APs) remained relatively high at 53% in 1Q20 ($295 revenue per AP for .11ax, versus $193 for .11ac), it continues to decline (from 58% in 4Q19 and 90% in 1Q19). This calculation also does not take into account the different penetration levels across vendors: for the two leading providers in terms of revenue share, Cisco and HPE (Aruba), the price premium of .11ax dropped below 25% in 1Q20. We are seeing Wi-Fi 6 increasingly proposed in price contracts by default, matched by end-user demand driven by future-proofing aspirations.

While previous generations of Wi-Fi have focused on improving speed, Wi-Fi 6 introduces several innovations that make it applicable across a wider range of applications. Specific use cases include remote collaboration using higher-resolution (4K) video, and augmented reality (AR) and virtual reality (VR) applications (e.g., remote field services, training and simulation, product design and visualization, and AR commerce). In the IoT world, bandwidth needs can vary widely, from very low requirements for data collection devices to very high needs for AR/VR devices.
In the past, both types of device resided in the same domain, as all solutions determined where they wanted to associate and how they wanted to communicate with the infrastructure. Wi-Fi 6 moves the control mechanism for the wireless medium from the device to the network, allowing APs to intelligently segment devices and making Wi-Fi more competitive with Bluetooth in low-power/low-speed applications such as sensors and automation systems. Improvements in communication scheduling also help IoT devices achieve longer battery life, again pushing Wi-Fi into the IoT market.

As highlighted in Market Trends: Will the Advent of 5G Make Enterprise Wi-Fi Connectivity Less Relevant?, 5G communication service providers' (CSPs') marketing hype has sparked questions among enterprises and tech vendors about 5G potentially displacing Wi-Fi connectivity. This can confuse enterprises regarding the availability and capabilities of 5G. Countering it requires product marketers to create a differentiated position that emphasizes how Wi-Fi can outperform and outsell 5G. For more information see How to Promote Enterprise Wi-Fi Connectivity Against the Advent of 5G.

Recommended Actions:
Support of operational technology (OT) connectivity should be an important part of your Wi-Fi 6 value proposition, taking product differentiation, such as IoT onboarding and security capabilities, beyond a Wi-Fi-centric focus. The progressive convergence of the “traditional” IT and building automation networks continues to increase the number of IoT devices that organizations have to manage.
Highlight differentiation with latency monitoring, including response times with encrypted applications. Other metrics should include jitter, packet loss, mean opinion scores (MOS) and even location, to address business criticality by application.
Provide support for Wi-Fi Alliance Certified Location (802.11mc), which can already provide an accurate location using Wi-Fi 6 access points and round-trip timing (RTT).
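The network-controlled allocation that distinguishes Wi-Fi 6 can be sketched in miniature: OFDMA lets the AP carve a transmission into resource units sized to each client's demand, so a light switch and a 4K television share the channel efficiently. The numbers below are a simplification for illustration, not the actual 802.11ax resource-unit grid:

```python
# Hypothetical illustration of the Wi-Fi 6/OFDMA idea that the AP, not the
# client, decides how much of the channel each device gets. Tone counts and
# demand figures are simplified, not the real 802.11ax RU sizes.

def allocate_rus(clients, total_tones=996):
    """Give each client a share of subcarrier tones proportional to its
    relative demand, so low-rate IoT devices get small slices."""
    total_demand = sum(demand for _, demand in clients)
    return {name: total_tones * demand // total_demand
            for name, demand in clients}

# Relative bandwidth demand per client (arbitrary units).
clients = [("light-switch", 1), ("sensor", 1), ("laptop", 50), ("4k-tv", 100)]
print(allocate_rus(clients))
```

Under legacy 802.11, by contrast, each client contended for the whole channel on its own schedule, which is why the text describes the shift of control from device to network as the key change.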
Wi-Fi Network Assurance

Analysis by: Tim Zimmerman and Christian Canales

Description: The term "Wi-Fi Network Assurance" refers to collecting network data into a data lake and applying artificial intelligence/machine learning (AI/ML) algorithms to train, baseline, monitor, react to, proactively resolve and report on Wi-Fi network performance issues. The ability to baseline the quality of Wi-Fi connectivity and collect the right data to resolve both simple and advanced issues (such as time correction problems) gives the system the basis to guarantee that certain quality levels are met as enterprises move to eliminate campus network administrators. The use cases behind AI/ML span optimizing and analyzing network performance over time and user density, service quality management to meet SLAs, self-healing capabilities to maximize reliability, and better security. Basic solutions in the market provide suggestions for network administrators to tune Wi-Fi settings based on inferences, while others have gone a step further and can eliminate some human intervention even for advanced issues.

Sample Providers: Cisco, CommScope (RUCKUS), Extreme Networks, H3C, Hewlett Packard Enterprise (Aruba), Huawei, Juniper Networks (Mist Systems)

Range: Now (0 to 1 Year)

Gartner rates the range of Wi-Fi Network Assurance as 0-1 year for several reasons:
Most of the leading Wi-Fi providers serving the enterprise market already have solutions, and adoption is expected to cross the early adopter chasm within the next 12 months as enterprises continue to seek cost optimization opportunities. However, it is important to acknowledge that AI/ML is a heavily hyped topic today: the variation in the data collected and the differences in algorithms (inference, supervised ML, unsupervised ML) determine the types of problems that can be resolved. Most Wi-Fi providers have a sales pitch on AI/ML technology, yet only a handful provide differentiated functionality.
NetOps (see Note 2) is an emerging use case, driven by recent advances in analytics, AI and ML. At the access layer, NetOps today applies predominantly to Wi-Fi, although it has begun to extend to wired networking. For too long, Wi-Fi has been one of the "pain points" for organizations, as it comes with inherent challenges associated with interference and distance, and is a shared medium.
A Gartner strategic planning assumption estimates that by 2022, 65% of enterprises will deploy network automation (NA) in the access layer (up from less than 15% in 2017). We also anticipate growing use of artificial intelligence for IT operations (AIOps) platforms that will improve Wi-Fi performance, based on automated root cause analysis in conjunction with network datasets and increased confidence in problem resolution. Any Wi-Fi vendor not investing in NetOps functionality, and in understanding the data that must be collected to resolve advanced issues, will therefore be left behind.
Mass: Medium

The impact of Wi-Fi Network Assurance is believed to be high for the higher end of the enterprise market (organizations with more than 500 employees, especially those with complex network needs), but moderate overall due to a more limited impact on small and midsize organizations. The majority of providers targeting the midmarket today lack the required capabilities, typically due to a lack of investment and knowledge.

Improperly implemented Wi-Fi installations continue to result in a poor end-user experience. For too long, Wi-Fi has been one of the pain points for organizations, as it comes with inherent challenges associated with interference and distance, and is a shared medium. We have seen the integration of sensors into Wi-Fi APs to improve monitoring of whether SLAs are being met; this gives network administrators the ability to run frequent tests to ensure network performance continues to meet SLAs. Wi-Fi service assurance takes this to a higher dimension, with the ability to eliminate error-prone and tedious manual intervention. As such, it lowers the burden on network administrators, giving enterprises flexibility to reallocate network administration resources. Organizations face the prospect that IT staffing levels will likely remain flat or decline in the years to come, a problem that Wi-Fi product marketers can flip to their advantage to communicate product differentiation.

Recommended Actions:
Develop differentiation related to AI/ML technology by focusing on the data that is collected and the algorithms used, targeting radio frequency (RF) management (the ability to adapt to changing conditions in the RF environment, such as gaps in coverage or changes in capacity or performance) while providing insightful analytics, such as whether SLAs are being met.
Target delivering simplified operations through automation capabilities that document time savings for improved ROI. Key aspects should include automation workflows that help eliminate error-prone and tedious manual intervention, or the ability to orchestrate multiple device configurations at network scale.
Include integration with third-party provisioning/configuration management software in your roadmap. For instance, this should embrace the ability to leverage tools for continuous configuration automation (e.g., Red Hat Ansible, Puppet) to automate multiple aspects of the configuration life cycle, as well as reporting or ticketing tools (e.g., ServiceNow) to record changes made to the network.
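The baseline-then-detect loop described in this profile can be reduced to a toy example: learn what "normal" looks like from historical telemetry, then flag samples that deviate beyond a threshold. Real assurance products use far richer datasets and ML models; this z-score check, with hypothetical latency figures, is only a sketch of the principle:

```python
# Toy sketch of the assurance loop: baseline a Wi-Fi metric from history,
# then flag samples deviating beyond a z-score threshold. The latency
# figures are hypothetical; real products use much richer ML pipelines.
import statistics

def flag_anomalies(history, samples, z_threshold=3.0):
    """Return samples more than z_threshold standard deviations from
    the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [s for s in samples if abs(s - mean) > z_threshold * stdev]

# Hypothetical client association latencies in milliseconds.
history = [12, 11, 13, 12, 14, 11, 12, 13, 12, 11]
samples = [12, 13, 95, 11]  # 95 ms would be worth root-causing
print(flag_anomalies(history, samples))
```

The differentiation the recommended actions call for lies precisely in what replaces this toy: which data is collected, and whether the system can move from flagging an anomaly to automated root cause analysis and remediation.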
Short-Range Impact
5G Network Slicing
Analysis by: Peter Liu

Description: 5G network slicing is a form of virtual network technology. It allows a network-based CSP to create multiple independent end-to-end logical networks, each in the form of a "network slice," on top of a common shared physical infrastructure in the provider's network domain. Each slice can be customized to have its own network architecture, engineering mechanism, network provisioning methodology, configuration and service quality profile, based on the requirements it serves.

Sample Providers: Ciena, Cisco, Ericsson, Huawei, Mavenir, Nokia, ZTE, Zeetta Networks

Range: Short (1 to 3 Years)

Network slicing adoption for 5G is still in its early stage, with many concerns and issues remaining unsolved. To succeed, network slicing requires new business models that drive innovative partnerships; standards for stand-alone and virtualized network infrastructure; demanding SLAs agreed between operators and vertical markets; and collaboration among standardization bodies, among other details. None of these elements is fully ready at this moment. While many technology and standards-based obstacles remain, network slicing for 5G will become a key differentiating feature in the next one to three years. The main drivers are as follows:
5G commercial rollout has begun in various countries. CSPs are eager to use network slicing to move beyond selling simple connectivity to offering enterprise customers more advanced connectivity options — specifically, guaranteed levels of network performance on a given network slice. More CSPs have started evaluations and trials of network slicing, including BT, China Mobile, Deutsche Telekom (DT), SK Telecom (SKT) and Vodafone U.K.
The latest freeze of the 5G standard, Release 16, enhances network slicing and 5G core features, enabling more vertical industry use cases. In addition, CSPs in China and South Korea will start deploying commercial stand-alone 5G and multiaccess edge computing (MEC) at large scale in 2020, which we believe will accelerate network slicing adoption maturity and enable more innovation opportunities.
Although network slicing has been positioned as a 5G technology differentiator, it can also be applied to 4G LTE. As such, there are already many promising use cases quickly and easily supported by 4G that will improve through the evolution to hybrid 4G/5G networks and emerge as a fully automated experience in the coming years.
Most of the leading network equipment and service providers serving the CSP market already have network slicing solutions ready. Adoption is expected to cross the early adopter chasm within the next 12 months as CSPs continue to seek to monetize the opportunity.
Mass: Very High

The impact of network slicing is believed to be very high for the communications industry, and slicing is widely believed to have the potential to redefine how CSPs conduct their business. Slicing with appropriate resources and optimization is expected to broaden the horizon of CSPs in many vertical segments, such as automotive, energy, finance, healthcare, manufacturing and the public sector. By being able to individually service the particular communication and connectivity needs of specific industries, CSPs could transform from "dumb pipe" providers into infrastructure partners in a variety of industries' digitization initiatives. In addition, given that multiple slices can run on a common shared infrastructure, including costly components (i.e., nodes, base stations, fiber), operators enjoy the economies of scale that any shared infrastructure provides.

However, a number of challenges lie ahead, which also provide opportunities for vendors to differentiate themselves:
Network slicing adds complexity to CSP network management and orchestration, which are already complex and operationally disruptive. It requires significant operational transformation, particularly for the large-scale deployments involved.
Security becomes critical and challenging. Different infrastructures will have different security levels and policies, since slices are managed and administered by both telecom and non-telecom players.
Business models require development on a per slice/service basis to meet the dynamic demands and traffic variations.
Standardization of services and handovers across various industry players has to be renegotiated in much more detail than before.
Recommended Actions:
Take a phased approach when developing your network slicing product offering. Do not wait for full readiness of network slicing capabilities. Lower the entry point. Start with static slicing for industry verticals that have clear and common requirements on the network, such as mission-critical communication, entertainment (ultrahigh-definition live broadcast and augmented reality/virtual reality [AR/VR]), gaming, and manufacturing.
Reduce the complexity of network slicing management and orchestration. Enhance the slice creation and deployment automation capabilities in your products by leveraging AI and data analytics. Offer business customers the capability to manage their own services or slices (e.g., dimensioning, configuration) by means of application programming interfaces (APIs).
Enhance security features in your network slicing offering while allowing resource sharing among multiple tenants, as such networks must also meet the security requirements of each slice scenario that is employed.
Adopt common, open architectures that demonstrate how network slicing can be applied in multivendor, multidomain, multioperator contexts for a range of candidate use cases and services.
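The API-driven slice management recommended above can be sketched as follows. This is a minimal illustration of what a customer-facing slice-configuration request might look like; the endpoint shape, field names and QoS values are assumptions for illustration only, not any vendor's or 3GPP's actual API.

```python
import json

# Hypothetical slice-configuration payload; the field names and slice
# profiles below are illustrative, not taken from a real vendor API.
def build_slice_request(tenant, use_case, max_latency_ms, min_throughput_mbps):
    """Validate basic QoS bounds and return a JSON request body."""
    if max_latency_ms <= 0 or min_throughput_mbps <= 0:
        raise ValueError("QoS parameters must be positive")
    return json.dumps({
        "tenant": tenant,
        "sliceProfile": use_case,  # e.g., "URLLC", "eMBB", "mMTC"
        "qos": {
            "maxLatencyMs": max_latency_ms,
            "minThroughputMbps": min_throughput_mbps,
        },
    })

# Example: a static slice for mission-critical communication.
request_body = build_slice_request("factory-42", "URLLC", 5, 100)
print(request_body)
```

Exposing a narrow, validated payload like this (rather than raw network configuration) is one way a CSP could let business customers dimension their own slices without touching the underlying orchestration.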
Analysis by: Sylvain Fabre
Description: 5G security is enabled by a set of 5G network mechanisms and improves on 4G security with:
Unified authentication (4G authentication is access dependent).
Flexible security policy for diverse use cases (versus single 4G policy).
Encrypted transmission of the Subscription Permanent Identifier (SUPI), which prevents International Mobile Subscriber Identity (IMSI) leakage and protects user privacy.
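To illustrate the privacy mechanism above: in 5G, the subscriber identifier is never sent in the clear; only the home-network routing part remains readable, while the subscriber-specific part is encrypted before transmission. The sketch below is a toy model of that split. The XOR "cipher" is a stand-in for the real ECIES-based scheme, and the field layout is simplified.

```python
# Toy illustration of SUPI concealment. The actual 3GPP scheme encrypts
# with the home network's public key (ECIES); the XOR keystream here is a
# placeholder so the example stays self-contained.
def conceal_supi(supi: str, key: bytes) -> dict:
    # Simplified SUPI layout: MCC-MNC-MSIN; only the MSIN is sensitive.
    mcc, mnc, msin = supi.split("-")
    cipher = bytes(b ^ key[i % len(key)] for i, b in enumerate(msin.encode()))
    # Routing info (MCC/MNC) stays cleartext so the home network is reachable.
    return {"mcc": mcc, "mnc": mnc, "concealed_msin": cipher.hex()}

def deconceal(suci: dict, key: bytes) -> str:
    data = bytes.fromhex(suci["concealed_msin"])
    msin = bytes(b ^ key[i % len(key)] for i, b in enumerate(data)).decode()
    return f"{suci['mcc']}-{suci['mnc']}-{msin}"

key = b"home-network-secret"
suci = conceal_supi("310-410-0123456789", key)
assert "0123456789" not in suci["concealed_msin"]  # subscriber ID not visible
assert deconceal(suci, key) == "310-410-0123456789"
```

The point of the structure, not the toy cipher, is what matters: an eavesdropper on the radio link sees only routing information, which is what closes the IMSI-catcher gap that 4G left open.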
All 5G network infrastructure vendors implement the same 3GPP standards for 5G security. However, development processes may vary; concerns about specific vendors, as well as geopolitical tensions, have increased procurement scrutiny around 5G infrastructure.
Sample Providers: Ericsson, Huawei, Intel, Mobileum, Netcracker, Nokia, VIAVI Solutions
Range: Short (1 to 3 Years)
Despite all the current market hype around 5G, Gartner rates the range of 5G security at one to three years for two main reasons:
There is not a single security standard. The main standardization organization is 3GPP, but 5G security involves security solutions from several standardization organizations. 5G security embraces several security protocols, such as IPsec, EAP and TLS, as well as others under development, such as security for network function virtualization (NFV).
Standardization does not guarantee 5G security, as there are many factors that make 5G security more complex. The rise of diversified services, cloud architecture, and massive IoT connections in 5G expose new security concerns and challenges. Protection involves implementing security in the configuration and operation of the 5G network to ensure cybersecurity hygiene.
While organizations opting for private 5G will likely pay more attention to securing these networks, we do not see security as a priority in many public 5G rollouts. This is also due to the immaturity of some 5G capabilities, such as slicing, which will mature from today's 3GPP Release 15 (R15) to R16 in 2021 and R17 from 2022. Growing adoption of 5G security, including broader use of more sophisticated security mechanisms, is nonetheless unavoidable in the long term. Ensuring protection of identity and privacy is no longer optional. We estimate that by 2023, 65% of the world's population will have their personal data covered under modern privacy regulations, up from 10% today.
Mass: Very High
The impact of 5G security is believed to be very high overall. It will affect a multitude of sectors, namely all organizations using 5G services, and 5G security will largely replace current 4G product capabilities. All 5G network infrastructure vendors implement the same 3GPP standards for 5G security. However, development processes may vary. Concerns about specific vendors, as well as geopolitical tensions, have increased procurement scrutiny around 5G infrastructure, including its security implications. A number of security challenges lie ahead, which also provide opportunities for vendors to differentiate themselves:
5G will increase the number and diversity of connected objects, potential distributed denial of service (DDoS) attack vectors and entry points. This also provides more telemetry for anomaly detection.
5G infrastructure virtualization, automation and orchestration of a service-based architecture increase exposure.
Slicing virtual networks across shared infrastructure will impact security due to lateral movement risk and cross-slice permeability issues.
A wider ecosystem delivering industrial 5G use cases, with varying security competencies and credentials, makes SLAs and service assurance, including end-to-end (E2E) security, challenging.
Backward compatibility with 4G/Long Term Evolution (LTE) means some legacy security issues will persist in 5G.
Cross-network-layer security will need to be managed between 5G macro and small cell layers, which may use security algorithms of differing strength.
Recommended Actions:
Prioritize DDoS mitigation and cloud web application and API protection (WAAP) capabilities, such as cloud web application firewalling, bot mitigation, DNS protection and intrusion prevention systems (IPSs), in your roadmap. Include leveraging integration with on-premises DDoS appliances.
Offer 5G security services that include time-sensitive reporting to clients, given maturing data protection regulations. Establish a strategy to minimize the impact of zero-day vulnerabilities through regular software patching.
Stress your multivendor (or multiprovider) security capabilities. The key is the ability to detect a potential security threat in shared 5G infrastructure (between different CSPs or utility providers, or multitenant scenarios). IoT segmentation, anomaly detection and authentication with Wi-Fi networks will complement the end-to-end value proposition.
Analysis by: Alan Priestley
Description: AI for traffic management is the use of deep neural network (DNN)-based AI algorithms to manage the flow of data through 5G network infrastructure to maintain service quality and data throughput. High-speed 5G deployments drive a significant increase in data traffic flowing through the network, through both base stations and back-end core infrastructure, which places challenges on network capacity and availability. Data types are also rapidly expanding, ranging from user-centric data (such as video streaming) to a wide range of machine-related data (from high-speed data traffic to IoT sensor data). At the same time, data security and protection are now paramount.
To ensure consistent security and meet contractual quality of service (QoS) and experience for all users, 5G systems need to implement sophisticated traffic management algorithms that can dynamically manage and analyze data traffic through the network.
Sample Providers: Nokia, Ericsson, NEC
Range: Short (1 to 3 Years)
Traffic management models used in existing 4G LTE networks have been rules-based. However, with the rapid growth in deployment of 5G networks and the increasing complexity of data and network topologies, machine learning techniques are being adopted, with a rapid transition to DNN-based (often referred to as artificial intelligence) solutions underway over the next three years.
Mass: High
Mass is high because traffic management algorithms are deployed across the network infrastructure, with some "simpler" elements in base stations and other, more complex tasks within the core network data centers. Many implementations will leverage standard CPUs to execute these DNN-based algorithms, and many of the latest-generation CPUs used within the network core have instruction set extensions that enable them to execute these workloads more efficiently.
Dedicated workload accelerator chips, such as application-specific integrated circuits (ASICs), graphics processing units (GPUs) and field-programmable gate arrays (FPGAs), are also being deployed in core data centers to support these new AI-based traffic management workloads. Base station designs typically have a more constrained form factor than core data centers and will use dedicated chips to support the CPU in executing DNN-based traffic management algorithms.
Recommended Actions:
Evaluate the use of DNN-based AI algorithms to enhance traffic management and analysis.
Integrate dedicated accelerator chips into base station designs to support DNN-based workloads.
Ensure core infrastructure designs are capable of supporting workload accelerators.
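As a minimal sketch of the DNN-based classification idea described in this profile (not any vendor's implementation): a small feed-forward network scores per-flow features and assigns a QoS class. The weights, features and class names below are hand-picked assumptions for illustration; a real deployment would learn them from network telemetry.

```python
# Minimal feed-forward network (one hidden layer) scoring a traffic flow.
# Features: [packet_rate_norm, avg_packet_size_norm], each in [0, 1].
# Weights are illustrative, chosen by hand rather than trained.
W1 = [[2.0, -1.0], [-1.5, 2.5]]   # hidden-layer weights
B1 = [0.0, 0.0]                   # hidden-layer biases
W2 = [[1.0, -1.0], [-1.0, 1.0]]   # output layer over the two classes
CLASSES = ["latency_sensitive", "bulk"]

def relu(x):
    return max(0.0, x)

def classify_flow(features):
    hidden = [relu(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(W1, B1)]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    return CLASSES[logits.index(max(logits))]

# High packet rate, small packets: treated as latency-sensitive (e.g., voice).
print(classify_flow([0.9, 0.1]))   # latency_sensitive
# Low packet rate, large packets: bulk transfer (e.g., backup traffic).
print(classify_flow([0.1, 0.9]))   # bulk
```

The same forward-pass structure, scaled up and with learned weights, is what the instruction-set extensions and accelerator chips discussed above are designed to execute efficiently.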
Analysis by: Julia Palmer
Description: Nonvolatile memory express over fabrics (NVMe-oF) is a network protocol that takes advantage of the parallel-access and low-latency features of NVMe Peripheral Component Interconnect Express (PCIe) devices. NVMe-oF enables tunneling the NVMe command set and data over transports beyond PCIe, across various networked interfaces, to remote subsystems across a data center network. The specification defines a common protocol interface and is designed to work with high-performance fabric technologies, including Fibre Channel, InfiniBand and Ethernet with RDMA (RoCEv2 or iWARP) or TCP.
Sample Providers: Dell Technologies, Excelero, Hitachi Vantara, IBM, Silk, Lightbits, NetApp, Pavilion Data Systems, Pure Storage, StorCentric
Range: Short (1 to 3 Years)
Gartner believes the range for this profile is one to three years because, even though NVMe is broadly deployed today, NVMe-oF still lacks wide adoption as a data center protocol, and end-to-end NVMe-oF support is nascent. Gartner rates this technology profile's impact as high from an end-user perspective, as NVMe-oF has the potential to significantly lower storage latency for shared storage arrays. End-to-end NVMe-oF implementations balance the performance and simplicity of direct-attached storage (DAS) with the scalability and manageability of shared storage. Today, many NVMe-oF offerings that use fifth-generation and/or sixth-generation Fibre Channel (FC-NVMe) are available, but adoption of NVMe-oF over 25/50/100 Gigabit Ethernet is slower. In the future, it is likely that TCP will evolve to be an important data center transport for NVMe-oF. Unlike server-attached flash storage, shared accelerated NVMe and NVMe-oF can scale out to high capacity with high-availability features and be managed from a central location, serving dozens of compute clients.
Mass: Medium
The adoption of NVMe-oF is moderate overall.
NVMe-oF complexity and costs will be barriers to broad adoption in the near future. While a variety of highly performant workloads (such as AI/ML, high-performance computing [HPC], in-memory databases and transaction processing) can leverage NVMe-oF today, most mainstream workloads are not planning a quick transition to an end-to-end NVMe architecture. NVMe-oF upgrades might require uplifts and updates to storage networks encompassing switches, host bus adapters (HBAs) and OS kernel drivers. This barrier will be removed with the introduction and maturation of NVMe-oF over TCP, which will not require drastic infrastructure changes; however, this technology still lacks broad ecosystem support.
Most storage array vendors already offer solid-state arrays with internal NVMe storage. During the next 12 months, an increasing number of infrastructure vendors will offer support for NVMe-oF connectivity to compute hosts. Integrated, converged and hyperconverged integrated system (HCIS) products will be able to hide complexity and shorten the learning curve for the adoption of NVMe-oF elements, delivering those products in an integrated, turnkey format during the next 12 to 24 months. NVMe-oF delivered as a software-defined storage (SDS) solution is most appealing to hyperscale vendors, which are leading the adoption curve of this nascent technology.
Recommended Actions:
Develop NVMe-oF offerings that support integration with RDMA over Converged Ethernet v2 (RoCEv2) or NVMe-oF over TCP-based products.
Build an ROI value proposition for customers with business-critical applications that can leverage the high throughput and low latency of end-to-end NVMe-oF capabilities.
Analysis by: Nat Smith
Description: Secure access service edge (SASE, pronounced "sassy") combines comprehensive networking and security functions to support the dynamic secure access needs of the workforce. It connects people to services. SASE mandates that networking and security services be provided from the cloud edge, though some SASE use cases still require a portion of the service to be delivered on-premises.
SASE is evolving from five contributing security segments: software-defined WAN (SD-WAN), firewall as a service (FWaaS), secure web gateway (SWG), cloud access security broker (CASB) and zero trust network access (ZTNA). The consolidation of these markets into a single SASE market will happen over time. Today, there are still five separate buyers and five separate security segments. As the evolution unfolds, vendors in each of the contributing segments that embrace the SASE framework should be considered SASE vendors, or at least less mature SASE vendors.
While the list of individual capabilities continues to evolve and will likely initially differ between products in the contributing segments, serving those capabilities from the cloud edge is non-negotiable and fundamental to SASE. The core capabilities of a completely consolidated SASE offering are the aggregation of the functionality of these individual segments. In the short term, however, best-of-breed capabilities in any of the contributing segments that are served from the cloud edge are considered a SASE solution.
Sample Providers: Akamai; Broadcom-Symantec; Cato Networks; Fortinet; iboss; Netskope; Palo Alto Networks; Versa Networks; VMware; Zscaler
Range: Short (1 to 3 Years)
Even though some vendors are not implementing all portions of this framework today, Gartner estimates SASE is about one to three years away from early majority adoption.
Additionally, there is already some consolidation among the segments: SD-WAN and FWaaS vendors offer similar capabilities and are often found on the same buyer shortlists. Similarly, SWG, CASB and ZTNA vendors are consolidating, as this aggregate feature set is often sought for remote worker security. The largest ZTNA vendors are also some of the larger SWG vendors.
Mass: High
SASE extends across five of the larger markets in security, and Gartner predicts that they will ultimately consolidate into a single market with a single buyer. The influence of this evolution is large, and the extensibility of the framework, which allows new features and capabilities to be incorporated easily, ensures that SASE will grow beyond today's five contributing segments. Although SASE represents an evolution rather than a transformation, the changes required to evolve to a SASE framework will be significant, which adds to the mass of the technology. Appliance-based vendors will need to rearchitect their solutions for the cloud, implementing cloud-delivered network security services. The services alone will not be sufficient, however: vendors will also need points of presence (POPs) or a cloud edge presence, which may require substantial investment or partnerships.
Recommended Actions:
Adopt a flexible service-based architecture that gives buyers the flexibility to easily adapt their network security capabilities to changing end-user environments and use cases.
Develop cloud-based components as scalable microservices that can all process packets in a single pass.
Build a network of distributed points of presence (POPs) through colocation facilities, service provider POPs and infrastructure as a service (IaaS) to reduce latency and improve performance for network security services.
Analysis by: Nat Smith
Description: Zero trust networking, or identity-based networking, is the use of identities to establish sessions and control traffic in the network. In zero trust networking, connectivity policies are created in terms of the actual users, devices and services, not Internet Protocol (IP) addresses. This simplifies connectivity and actually makes it scale. It is also a call for network security vendors to invest in and take a much more active role in identity systems and architectures as a function of all traffic passing through their offerings.
For example, VPN tunnels are created as an encrypted path between two sets of IP addresses. Network segmentation is often accomplished with subnets or a range of related IP addresses. Firewall rules are often written using only IP addresses. Policies become artificially complex and large as users, devices and services move around. This is one of the reasons why we see firewalls with thousands of rules. Not only are more rules needed to accommodate moving users, data and services, but the intent of the rules, and why they were added, is easily forgotten. When no one knows why a rule was put in place or what it is supposed to do (e.g., just an IP address in the rule with no other context), the rule is left in so as not to interrupt connectivity. Leaving a rule in place because no one knows what it does is a perfect example of things getting too complicated.
Zero trust network access (ZTNA) is one of the best examples of zero trust networking in action. Instead of setting up static and contextless rules, as had been the practice with VPNs, policies are simple, logical and easily overlaid on the existing low-level network infrastructure.
As services increasingly move to environments where organizations do not control the network infrastructure (e.g., IaaS), the forced use of IP addresses to specify connectivity policies will increasingly be a burden.
Sample Providers: Akamai; Appgate; Cisco; Citrix; Proofpoint Meta; Netskope; Odo; Okta; Palo Alto Networks; Perimeter 81; Zscaler
Range: Short (1 to 3 Years)
The recent global pandemic has accelerated adoption of this technology, particularly as part of ZTNA solutions. That alone dictates a short range. However, the overhead of converting existing firewall rules will make this transition slow, and IP-based rules will persist for many years to come.
Mass: Medium
Zero trust networking is already underway and is firmly a part of connecting remote workers to private services (ZTNA). However, the conversion of other network security and connectivity solutions away from IP addresses and toward users or services will take some time. In addition, the requirement to reassess trust in an identity will require new technology and new architecture from many vendors. As a result, Gartner rates this technology trend as medium in mass.
Recommended Actions:
Study ZTNA as a template for access control under zero trust networking.
Use existing object technology in rules and configuration to prove the concept of policies defined solely by user and service, learning market preferences and workflow optimizations.
Expand product integrations with identity services and vendors, not to log into products, but to verify path and permission before traffic is allowed to pass. Constant reassessment of identity (trust) should be part of the packet path.
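The identity-based policy model described in this profile can be sketched as follows. The policy entries, identities and service names are invented for illustration; the point is that rules read in terms of users and services, so they stay valid when IP addresses change, and identity is reassessed before every decision.

```python
# Illustrative zero-trust-style policy table: rules name users and services,
# never IP addresses, so moves and re-addressing do not invalidate them.
# All entries here are hypothetical.
POLICY = {
    ("alice@example.com", "payroll-api"): "allow",
    ("build-agent", "artifact-store"): "allow",
}

def check_access(identity: str, service: str, identity_verified: bool) -> bool:
    # Zero trust: identity must be (re)verified on every request before
    # the policy table is even consulted.
    if not identity_verified:
        return False
    # Anything not explicitly allowed is denied by default.
    return POLICY.get((identity, service)) == "allow"

assert check_access("alice@example.com", "payroll-api", identity_verified=True)
assert not check_access("alice@example.com", "payroll-api", identity_verified=False)
assert not check_access("mallory", "payroll-api", identity_verified=True)
```

Contrast this with a thousand-rule IP firewall: each entry above carries its own intent (who may reach what), which is exactly the context that IP-only rules lose.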
Note 1: Research and Methodology for the Emerging Technology Horizon
The Emerging Technology Horizon content analyzes and illustrates two significant aspects of impact:
When we expect it to have a significant impact on the market (specifically, range).
How big an impact it will have on relevant markets (namely, mass).
Analysts evaluate range and mass independently and score them each on a one-to-five Likert-type scale:
For range, this scoring determines in which Horizon ring the Emerging Technologies and Trends will appear.
For mass, the score determines the size of the Horizon point.
In the Emerging Technology Horizon, the range estimates the distance (in years) that the technology, technique or trend is from crossing over from early-adopter status to early majority adoption. This indicates that the technology is prepared for and progressing toward mass adoption. So at its core, range is an estimation of the rate at which successful customer implementations will accelerate. That acceleration is scored on a five-point scale with one being very distant (beyond eight years) and five being very near (within a year). Each of the five scoring points corresponds to a ring of the Emerging Technology Horizon graphic (see Figure 1). Those Emerging Technologies and Trends with a score of one (beyond eight years) do not qualify for inclusion on the Horizon. When formulating scores for range, Gartner analysts consider many factors, including:
The volume of current successful implementations
The rate of new successful implementations
The number of implementations required to move from early adopter to early majority
The growth of the vendor community
The growth in venture investment
Mass in the Emerging Technology Horizon estimates how substantial an impact the technology or trend will have on existing products and markets. Mass is also scored on a five-point scale — with one being very low impact and five being very high impact. Emerging Technologies and Trends with a score of one are not included in the Horizon. When evaluating mass, Gartner analysts examine the breadth of impact across existing products (specifically, sectors affected) and the extent of the disruption to existing product capabilities. It should be noted that an emerging technology or trend may be expressed in different positions on different Emerging Technology Horizons. This occurs when the maturity of Emerging Technologies and Trends varies based on the scope of Horizon coverage.
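The range and mass scales described in this note can be summarized as a simple lookup. The endpoints (a score of 1 and a score of 5) are as stated in the text; the labels for the intermediate scores are placeholders, since the text does not spell out their exact year bands.

```python
# Range score -> Horizon ring. Scores 1 and 5 follow the text; the
# intermediate labels are placeholders, not Gartner's exact cut-offs.
RANGE_RINGS = {
    1: "very distant (beyond 8 years; excluded from the Horizon)",
    2: "distant",
    3: "mid-range",
    4: "near",
    5: "very near (within 1 year)",
}

def horizon_point(range_score: int, mass_score: int):
    """Map (range, mass) scores to a Horizon position, per the rules above.

    Mass (1 = very low, 5 = very high) sets the size of the Horizon point;
    a score of 1 on either scale excludes the technology from the Horizon.
    """
    if range_score == 1 or mass_score == 1:
        return None  # does not qualify for inclusion
    return {"ring": RANGE_RINGS[range_score], "point_size": mass_score}

print(horizon_point(5, 4))
print(horizon_point(1, 5))  # None: beyond eight years, excluded
```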
Note 2: NetOps
NetOps is a networking approach that incorporates the use of DevOps tools and methods to improve the operational experience, with a more scalable and programmable network infrastructure approach. The primary driver is to reduce the operational burden and costs associated with managing network infrastructure.
Nearly every industry in the US has experienced substantial improvements in productivity over the last 50 years, with 1 major exception: health care. In 2019, the US spent an estimated $3.8 trillion on health care, including an estimated $950 billion on nonclinical, administrative functions, and that number has increased despite major technological enhancements.1,2 This Viewpoint considers several specific steps that can be taken to simplify administration in health care and boost overall productivity in the economy.
To run any organization, a base of administration is necessary. A typical US services industry (for example, legal services, education, and securities and commodities) has approximately 0.85 administrative workers for each person in a specialized role (lawyers, teachers, and financial agents). In US health care, however, there are twice as many administrative staff as physicians and nurses, with an estimated 5.4 million administrative employees in 2017, including more than 1 million who have been added since 2001.3
The administrative complexity of health care is profound. There are multiple transaction nodes, including more than 6000 hospitals, 11 000 nonemployed physician groups (defined as hospital-affiliated and independent practices with 5 or more physicians),4 and 900 private payers; regulatory complexity (compliance requirements such as the Health Insurance Portability and Accountability Act and regulated markets such as Medicare Advantage); and contrasting incentives, for example, market-driven checks and balances, such as prior authorization.4 The sheer complexity associated with so many entities makes administrative simplification difficult.
A new report provides an extensive evaluation of administrative spending to determine which parts are necessary and which could be simplified.2 The analysis dissected profit and loss statements of individual health care organizations, estimated spending on specific processes, and compared administrative spending in health care with that of other industries. The conclusion of the report is that an estimated $265 billion, or approximately 28% of annual administrative spending, could be saved without compromising quality or access by implementing about 30 interventions that could be carried out in the next 3 years.2 This set of interventions works within the structure of today’s US health care system in order to preserve its market nature (eg, multipayer, multiclinician, multi–health care center) and the associated benefits (eg, world-leading innovation in care delivery).
The starting point is 5 functional areas that account for approximately 94% of administrative spending (see eTable in the Supplement). The largest of these is industry-agnostic corporate functions: general administration, human resources, nonclinical information technology, general sales and marketing, and finance. This functional area accounts for an estimated $375 billion of spending annually. The second-largest category is the financial transactions ecosystem, which includes claims processing, revenue cycle management, and prior authorization, accounting for an estimated $200 billion annually. The rest is made up of industry-specific operational functions, such as insurance underwriting (an estimated $135 billion annually), administrative clinical support operations such as case management (an estimated $105 billion annually), and customer and patient services such as call centers (an estimated $80 billion annually).
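The five functional areas above can be checked arithmetically against the roughly $950 billion administrative total cited earlier; the figures below are the estimates quoted in the text.

```python
# Annual administrative spending estimates from the text, in $ billions.
spending = {
    "industry-agnostic corporate functions": 375,
    "financial transactions ecosystem": 200,
    "industry-specific operational functions": 135,
    "administrative clinical support operations": 105,
    "customer and patient services": 80,
}
total_admin = 950  # estimated total US administrative health spending, 2019

covered = sum(spending.values())   # 895
share = covered / total_admin      # roughly 0.94, matching the "94%" claim
print(f"{covered} of {total_admin} billion covered ({share:.0%})")
```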
For each of these functional focus areas, known interventions that could reduce spending without harming patient care were considered. This meant using a financial and operational perspective for the analysis, but also acknowledging that these interventions could and likely will have broader benefits on other outcomes, such as access, quality, patient experience, physician satisfaction, and equity.
“Within” and “Between” Interventions at the Organizational Level
The individual organization level was used as the starting point, by looking at “within” interventions, those that can be controlled and implemented by individual organizations, and “between” interventions, those that require agreement to act between organizations but not broader, industry-wide change. This spending is amenable to interventions that address highly manual, inefficient workflows, such as patient admission and discharge planning in case management; poor data management and lack of standardization, such as nonstandardized submission processes for prior authorization forms; and disconnected tools and systems, for example, the lack of interoperability between the claims systems of payers and hospitals.
Organizations could potentially save an estimated $210 billion annually by addressing these issues.2 The majority of those savings reside in industry-agnostic corporate functions such as finance or human resources. Interventions that affect these functions include automating repetitive work such as generation of standard invoices and financial reports; using analytical tools for human resources departments to better predict and address temporary labor shortages; integrating a suite of tools and solutions to coordinate staffing for nurse managers; and building strategic communications platforms between payers and hospitals to send unified messages. These interventions have been adopted in the marketplace by some payers, hospitals, and physician groups, with a positive return on investment using current technology and nominal investment (that is, once the interventions are fully rolled out, the cost of implementation is generally paid off in about a year by the recurring savings). Research has shown that organizations that aggressively pursue industry-leading productivity programs are twice as likely to be in the top quintile of their peers as measured by economic profit.5
Since many of these interventions are relatively standard, the question that arises is why they have not been implemented to date. A common set of barriers to implementation currently exists, including high levels of complexity and overlapping compliance rules such as privacy guidelines and requirements on how and where data can be stored; the need to manage labor displacement in an industry that is a driver of workforce growth; contrasting incentives for payers, hospitals, and physician groups in a primarily fee-for-service reimbursement model; and lack of prioritization from industry leaders on administrative simplification. Successful organizations often have common lessons for implementation, including prioritizing administrative simplification as a top strategic initiative; committing to transformational change vs incremental steps; engaging the broader partnership ecosystem for the right capabilities and investments; and disproportionately investing in the underlying drivers of productivity, such as technology and talent.
“Seismic” Interventions at the Industry Level
Some of the inertia at the organizational level reflects market failures that require industry-level intervention, including the necessary decision-makers and influencers from both the public and private sectors for a given intervention. For example, individual organizations alone cannot change the systemic lack of interoperability in the US health care system. A set of “seismic” interventions were identified that require broad, structural collaboration across the health care industry.2 These include new technology platforms such as the use of a centralized, automated claims clearinghouse; operational alignment such as standardizing medical policies across payers, for example, requiring the same set of diagnostics and clinical data before agreeing to cover a more complicated procedure or drug therapy; and payment design such as globally capitated payment models for segments of the care delivery system. These are meant to be examples of what is possible and are based on analogs from other industries that have undergone this type of change. If currently identified seismic interventions were undertaken, an estimated $105 billion of savings could occur annually.2 These savings would largely occur in the financial transactions ecosystem and industry-specific operational functions such as clinician credentialing and medical records management.
Launching these seismic interventions could be considerably more difficult than the within and between interventions. A framework that focuses on how to promote innovation in the public sector was applied to isolate the mechanism required to enable action for each seismic intervention.6 For example, individual organizations do not experience the financial pressure today that would bring them together to create a centralized automated clearinghouse (which is what happened in banking). Financial incentives could help overcome this inertia.
A set of common actions is necessary to galvanize this change. These actions include using interoperability frameworks to support high-value use cases such as the assembly of longitudinal patient records; creating public-private partnerships such as piloting a complete Health Information Exchange in 1 or more states; and selecting third parties, such as foundations, to research facts to galvanize movement (for example, a foundation-backed randomized trial of administrative interventions to validate the conditions for success).
Why Now?
Across the 3 types of interventions, the analyses suggest that simplifying administration could save the US health care system an estimated $265 billion annually after accounting for $50 billion of overlap between organizational and industry-level interventions.2 These savings, if realized, would be more than 3 times the combined 2019 budgets of the National Institutes of Health ($39 billion), the Health Resources and Services Administration ($12 billion), the Substance Abuse and Mental Health Services Administration ($6 billion), and the Centers for Disease Control and Prevention ($12 billion).7 In per capita terms, $265 billion is approximately $1300 for each adult in the US.
Economic downturn often leads to health system change. With COVID-19 creating enormous disruption to the health care system, a known opportunity to capture more than a quarter-trillion dollars in the next few years without compromising the US health care system's ability to deliver care could be quite attractive. The sooner health care administration is simplified, the easier it will be for all to engage the US health care system.
Article Information
Corresponding Author: Nikhil R. Sahni, BA, BAS, BSe, MBA, MPA/ID, McKinsey & Company, 280 Congress St, Ste 1100, Boston, MA 02210 (nikhil_sahni@mckinsey.com).
Conflict of Interest Disclosures: Mr Sahni and Mr Carrus are partners at McKinsey & Company. Dr Cutler reported receipt of personal fees as part of the multidistrict litigation against opioid manufacturers, distributors, and pharmacies and the multidistrict litigation against JUUL.
Jon Stresing | At the intersection of AI and the DoD! NVIDIA Account Manager – Army | DISA | JHUAPL | DLA | President, AFCEA Central Maryland Chapter
From the Joint Chiefs of Staff (JCS) to the Battalion/Squadron level, the entire Department of Defense is organized around a deliberate staff structure. At the JCS level, because it is a Joint Environment like the Combatant Commands (COCOMs), the staffs are broken out into J1-J8. What many don’t know is that each of these staffs directly correlate to an applicable business function in the corporate world. And every business function in the corporate world has been massively disrupted by artificial intelligence (AI) and machine learning (ML). This makes each J-Staff ripe for massive technological disruption.
In the Army, at the Pentagon level, we have the Chief of Staff of the Army, who sits on the JCS, and below him/her exists the General Staff (G-Staff), broken out into G1-9. Below the Chief of Staff of the Air Force, the equivalent is the Air Staff (A-Staff). The Marines, Space Force, and Guard all have their own letter designations. At the Brigade/Regiment/Wing/Ship down to the Battalion/Squadron/Department level, the staff is generally designated as an "S" staff. But for the sake of this article, the exact letter is irrelevant.
The illustrious J-Staff is usually confusing to those outside of the military. Honestly, as a lower enlisted soldier, I did not even know what the G/J-Staff was; I did not learn until I worked at DISA as a contractor…but I digress. The roles of each staff are below, and they are generally the same across the DoD.
– J/G/N/S x1 Manpower and Personnel: Think of this as your human resources department in your company. It is the division of a business that is charged with finding, screening, recruiting, and training job applicants, as well as administering employee-benefit programs.
HR business functions are currently being massively disrupted, and the x1 staff could take advantage of several capabilities:
o AI for recruitment – AI could predict where the best candidates for the military come from and help analyze candidate information and backgrounds to ensure recruits are placed in the right jobs. AI using natural language processing and conversational AI could also power chatbots for recruiters.
o One of the 1-Staff's most important functions is manpower optimization. All businesses right now are under incredible stress to "do more with less," and the DoD is no different. The correct AI algorithms would revolutionize the way the 1-Staff ensures comprehensive Joint Force readiness to meet warfighting requirements.
o On the back end of the HR cycle, AI can be used for retention. Organizations are using models in place today to predict when and why workers will leave, and even providing solutions for how to stop it.
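The retention models mentioned above can be sketched in miniature. Everything here is invented for illustration: the feature names, the weights, and the logistic scoring function stand in for what a real model, trained on historical personnel data, would learn.

```python
# Illustrative attrition-risk sketch. Feature names and weights are
# hypothetical; a production model would learn them from HR records.
import math

# Hypothetical learned weights (positive = higher separation risk)
WEIGHTS = {
    "months_since_promotion": 0.04,
    "deployments_last_2_years": 0.35,
    "pay_gap_vs_civilian_pct": 0.02,
    "bias": -3.0,
}

def attrition_risk(record: dict) -> float:
    """Logistic score in [0, 1]: a probability-like estimate of leaving."""
    z = WEIGHTS["bias"]
    for feature, weight in WEIGHTS.items():
        if feature != "bias":
            z += weight * record.get(feature, 0)
    return 1 / (1 + math.exp(-z))

soldier = {"months_since_promotion": 30, "deployments_last_2_years": 3,
           "pay_gap_vs_civilian_pct": 25}
print(round(attrition_risk(soldier), 3))
```

A 1-Staff could rank personnel by such a score and target retention incentives at the highest-risk cohorts before they separate.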
– x2 – Intelligence: Although the DoD has a very important and different function for intelligence, most large businesses conduct some form of business intelligence (BI). BI is a technology-driven process for analyzing data and delivering actionable information that helps executives, managers, and workers make informed business decisions.
Intelligence tradecraft brings in data from multiple sources and tries to derive some sort of insight or prediction from that data. Because of that, many x2-Staffs are further ahead in the adoption of AI; however, a few ideas include:
o One of the most basic intelligence disciplines is human intelligence (HUMINT). In today's modern world, an intelligence officer does not need to travel around the globe to collect it, because so many people share so much information openly. This very article and these words I am writing right now will be read by LinkedIn AI algorithms, and likely by algorithms from nefarious actors.
o The electromagnetic spectrum offers a trove of data, but there are not enough people in America to listen to all of that radio traffic, and that makes AI for SIGINT a perfect use case. AI can sift through all of that data, analyze it, correlate it, perform automatic language translation and speech-to-text, and create labeled transcriptions.
o Imagery Intelligence is another area where there are not enough analysts in all of America to sift through every satellite, spy plane, balloon, or drone image/video to derive insight. Long gone are the days of dark rooms with magnifying glasses looking for Russian SCUDs in Cuba. These days, powerful technology analyzes thousands of images a second and derives more and better insight from each image than a human could. This same technology is used to save lives as well.
– x3 – Operations: Of all the x-Staffs, this one should be the most understandable to a business. Operations is the execution of the organization's mission. Most companies have a mission statement. The DoD has a broader mission than any company, but business operations are easily translatable. The best-known applications for AI in the DoD probably fit into military operations in some capacity, including robotics, computer vision, natural language processing, signals processing, and smart bases, among many others. Aside from the typical military operations functions, there are other ways AI can improve the 3 shop.
All businesses have an operations function. The operations staff need AI in order to stay competitive with adversaries:
o The easiest way AI can help is with basic decision making. There are times when "going with your gut" is the right call. However, using a scientific, math-based approach to decision making has its strengths as well. AI neural-network decision-making models will have a place in all future decision-making processes.
o Predictive maintenance knowledge flows from the ground to operations leaders, and could predict future combat power and readiness of units and the military as a whole.
– x4 – Logistics: All businesses have some sort of input and output. There are supply chains and distribution chains. The DoD is the largest logistics organization in the world; its mission requires projecting force across the globe, across thousands of bases. Within the DoD, logistics focuses on and aligns with the core logistics capabilities of supply, maintenance operations, deployment and distribution, health services support, engineering, logistics services, and operational contract support.
Logistics has been using math-based optimization for a long time now. There is still a long way to go for logistics to be fully optimized, and AI can help:
o AI can be used to read labels and to track shipments using computer vision. The same technology that can scan an image in the 2 staff can also be used to identify what item needs to go where, and track that inventory along the way for chain of custody. Computer vision can also be used to figure out the perfect way to pack a box, truck or airplane.
o Every day, Amazon uses AI to route millions of packages to end consumers, refined to the point that the routes are close to statistically optimal. This same technology could easily be applied to the shipping of boots, beans, and bullets, or to tanks, helicopters, boats, and airplanes. Logistics optimization could save the DoD billions of dollars per year.
o Smart warehouses are here, and AI at the edge in the form of robots is a great way for 4 shops to bring AI into their organizations.
o One of my other favorite use cases in the 4-Staff is for maintenance operations: predictive maintenance. The DoD spends an incomprehensible amount of money on maintenance of vehicles like the Blackhawk. AI for predictive maintenance will save billions in taxpayer money and, most importantly, save lives.
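The routing idea above can be sketched with a toy heuristic. The coordinates and the nearest-neighbor strategy are purely illustrative; real logistics systems use far more sophisticated solvers, but the greedy version shows the shape of the problem.

```python
# Toy route optimization: always drive to the closest unvisited stop.
# Coordinates are made up; production routing uses real solvers.
import math

def nearest_neighbor_route(depot, stops):
    """Greedy tour starting at the depot, visiting every stop once."""
    route, remaining, current = [depot], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

depot = (0, 0)
stops = [(5, 5), (1, 0), (2, 3)]
print(nearest_neighbor_route(depot, stops))
```

The same skeleton applies whether the "stops" are consumer doorsteps or forward operating bases; only the cost function and constraints change.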
– x5 – Strategy, Plans and Policy: It is the role of the strategy department to plan out how the organization will execute the CEO's (Commander-in-Chief's) vision. Most, if not all, companies have some sort of Chief Strategy Officer (CSO). Typically, CSOs communicate and implement a company's strategy internally and externally so that all employees, partners, suppliers, and contractors understand the company-wide strategic plan and how it carries out the company's overall goals. That said, the most important AI function for the 5 shop will be to implement an AI strategy across the organization. The DoD published an AI Strategy in 2019.
Because the 5 shop will be instrumental in developing the AI strategy for the organization, all of the use cases outlined in this document will be important to the people working there. Note: there are considerable ethical considerations in using AI for purposes of conflict, and it will be important to position some sort of AI ethics officer there.
o The adversaries of the Department of Defense will be using AI against the United States. AI can develop plans and strategies that humans cannot, and those future capabilities will be used against DoD organizations.
o It will be impossible to develop plans and strategy to counter adversaries' AI without some sort of AI red-teaming technology in the 5 shop.
o Recommender engines could significantly decrease the time of data analysis spent developing strategy, plans, and policy.
–x6 – Command, Control, Communications, & Computers/Cyber: The mission of the Joint Staff J6 is to assist the CJCS in providing best military advice, while advancing cyber defense, joint/coalition interoperability, and C2 capabilities required by the Joint Force to preserve the Nation’s security. The head of the x6 directorate for DoD organizations will oftentimes be the CIO of that organization. This is the IT shop for all units. From base switching and internet service all the way up to the CIO office of the DoD, the 6 shop is always where IT lives. Because of the nature of AI, many people believe that this is where AI lives exclusively, and the goal of this thought leadership piece is to explore use cases outside of the J6.
· Because of the nature of AI, the 6 shop is a prime target for numerous AI initiatives:
o AI for cyber-security is the #1 use case for AI within the office of the CIO.
o Recommender engines can be used to help suggest statistically accurate actions for engineers to take across the organization.
o Predictive maintenance can play a huge role in assessing the health of an IT environment, and predicting when circuits, disk drives or compute cards may fail.
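The predictive-maintenance idea that recurs above, for airframes in the 4 shop and IT hardware in the 6 shop, can be sketched minimally: smooth a noisy sensor reading and flag the component when the trend crosses a limit. The readings and the threshold here are made up; real systems fuse many sensors with learned failure models.

```python
# Minimal predictive-maintenance sketch: flag a component for inspection
# when a smoothed sensor reading drifts past a limit. Readings and the
# limit are illustrative only.
def smooth(readings, alpha=0.3):
    """Exponentially weighted moving average of a sensor series."""
    avg = readings[0]
    for r in readings[1:]:
        avg = alpha * r + (1 - alpha) * avg
    return avg

def needs_inspection(readings, limit=4.0):
    """True when the smoothed trend exceeds the alert threshold."""
    return smooth(readings) > limit

healthy = [2.1, 2.0, 2.2, 2.1, 2.0]
degrading = [2.1, 3.0, 4.2, 5.1, 6.0]
print(needs_inspection(healthy), needs_inspection(degrading))
```

Whether the series is rotor vibration or disk-drive error counts, the logic is the same: act on the trend before the failure, not after it.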
– x7 – Joint Force Development, or Education and Training, or Exercises and Training: The J-7 is responsible for the six functions of joint force development: Doctrine, Education, Concept Development & Experimentation, Training, Exercises, and Lessons Learned. Typically, this part of the staff is responsible for the training of the Soldiers/Airmen/Sailors/Marines and the civilian workforce. The 7-Staff will have an important role in the development of AI across the organization, because it will be responsible for training all of the people within it.
· AI in education is nothing new, and most modern education has some sort of AI in the background. It will be important for the DoD to recognize this and implement several AI initiatives into the 7-Staff.
o With a large part of modern learning being virtual, AI can help test and tailor curriculums to a learner’s needs and goals.
o Utilizing augmented and virtual reality, AI will create simulated environments for training in hands-on military occupations, from infantry to mechanic.
– x8 – Force Structure, Resources, and Assessment Directorate, or Integration of Capabilities & Resources in some capacity: The J-8 Directorate is charged with providing support to the CJCS for evaluating and developing force structure requirements. J-8 conducts joint, bilateral, and multilateral war games and interagency politico-military seminars and simulations. It develops, maintains, and improves the models, techniques, and capabilities used by the Joint Staff and combatant commands to conduct studies and analyses for the CJCS. This is where the rubber meets the road for making sure everything works together so that whatever service branch the staff is in can effectively conduct its mission. The Army describes the 8 staff as "the Army's lead for matching available resources to the defense strategy and the Army plan." The 8-Staff usually holds the finance function of the organization as well.
· The 8-Staff has a variety of roles well suited to AI, from finance to running massive statistical analyses of optimized structure and outcomes. AI could further enable these capabilities and let leadership know whether the force is optimized.
o Within the comptroller's office, AI can help eliminate human error and decrease the burden on accounting professionals, leaving humans to do higher-value work.
o AI techniques can help construct the organization's portfolios based on more accurate risk and return forecasts and more complex constraints.
o Predictive maintenance directly correlates to predictive combat power and asset optimization, and AI can assist the analytics needed to derive optimal outcomes for an organization.
– x9 – This could be anything depending on where you are. There is no 9 staff at the JCS. The Army uses the G9 for Installations. The Navy uses the N9 for warfare systems. USCYBERCOM uses the J9 for Advanced Concepts and Technologies and Technical Outreach. And the Air Force uses the A9 for Studies, Analysis, and Assessments.
· Because of the wide depth and breadth of the various x9 staffs out there, I will simply end by pointing to the wide variety of other use cases above.
Artificial intelligence holds tremendous promise for improvements across each of the Department of Defense's staff functions. Each staff function directly correlates to a business process that every single industry has a need for. And every single industry has been disrupted by AI. AI can effect positive change across all of the business functions of the DoD, not just robotics or ISR.
And again, the business functions of United States Department of Defense are ripe for massive technological disruption.
Inclusive leaders are:
– Visibly committed to diversity
– Humble
– Aware of their own bias
– Curious about others
– Culturally intelligent
– Effective collaborators
Summary.
Companies increasingly rely on diverse, multidisciplinary teams that combine the collective capabilities of women and men, people of different cultural heritage, and younger and older workers. But simply throwing a mix of people together doesn’t guarantee high performance; it requires inclusive leadership — leadership that assures that all team members feel they are treated respectfully and fairly, are valued and sense that they belong, and are confident and inspired. Research involving 3,500 ratings by employees of 450 leaders found that inclusive leaders share six behaviors — and that leaders often overestimate how inclusive they really are. These are the behaviors: visible commitment, humility, awareness of bias, curiosity about others, cultural intelligence, and effective collaboration.
Inclusiveness isn’t just nice to have on teams. Our research shows that it directly enhances performance. Teams with inclusive leaders are 17% more likely to report that they are high performing, 20% more likely to say they make high-quality decisions, and 29% more likely to report behaving collaboratively. What’s more, we found that a 10% improvement in perceptions of inclusion increases work attendance by almost 1 day a year per employee, reducing the cost of absenteeism.
What specific actions can leaders take to be more inclusive? To answer this question, we surveyed more than 4,100 employees about inclusion, interviewed those identified by followers as highly inclusive, and reviewed the academic literature on leadership. From this research, we identified 17 discrete sets of behaviors, which we grouped into six categories (or “traits”), all of which are equally important and mutually reinforcing. We then built a 360-degree assessment tool for use by followers to rate the presence of these traits among leaders. The tool has now been used by over 3,500 raters to evaluate over 450 leaders. The results are illuminating.
These are the six traits or behaviors that we found distinguish inclusive leaders from others:
Visible commitment: They articulate authentic commitment to diversity, challenge the status quo, hold others accountable and make diversity and inclusion a personal priority.
Humility: They are modest about capabilities, admit mistakes, and create the space for others to contribute.
Awareness of bias: They show awareness of personal blind spots as well as flaws in the system and work hard to ensure meritocracy.
Curiosity about others: They demonstrate an open mindset and deep curiosity about others, listen without judgment, and seek with empathy to understand those around them.
Cultural intelligence: They are attentive to others’ cultures and adapt as required.
Effective collaboration: They empower others, pay attention to diversity of thinking and psychological safety, and focus on team cohesion.
These traits may seem like the obvious ones, similar to those that are broadly important for good leadership. But the difference between assessing and developing good leadership generally versus inclusive leadership in particular lies in three specific insights.
First, most leaders in the study were unsure about whether others experienced them as inclusive or not. More particularly, only a third (36%) saw their inclusive leadership capabilities as others did, another third (32%) overrated their capabilities and the final third (33%) underrated their capabilities. Even more importantly, rarely were leaders certain about the specific behaviors that actually have an impact on being rated as more or less inclusive.
Second, being rated as an inclusive leader is not determined by averaging all members’ scores but rather by the distribution of raters’ scores. For example, it’s not enough that, on average, raters agree that a leader “approaches diversity and inclusiveness wholeheartedly.” Using a five-point scale (ranging from “strongly agree” to “strongly disagree”), an average rating could mean that some team members disagree while others agree. To be an inclusive leader, one must ensure that everyone agrees or strongly agrees that they are being treated fairly and respectfully, are valued, and have a sense of belonging and are psychologically safe.
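The distribution-versus-average point can be made concrete with a toy example. The ratings below are invented for illustration: two leaders score an identical average, yet only one passes the distribution test the research describes.

```python
# Two leaders with the same mean rating but different rater distributions
# (5 = strongly agree ... 1 = strongly disagree). Ratings are invented.
from statistics import mean

leader_a = [4, 4, 4, 4, 4]   # every rater agrees
leader_b = [5, 5, 5, 4, 1]   # same mean, but one rater feels excluded

def everyone_included(ratings, floor=4):
    """Distribution test: every single rater at 'agree' or above."""
    return min(ratings) >= floor

print(mean(leader_a), mean(leader_b))
print(everyone_included(leader_a), everyone_included(leader_b))
```

Averaging hides leader B's excluded rater entirely; checking the minimum rating surfaces it immediately.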
Third, inclusive leadership is not about occasional grand gestures, but regular, smaller-scale comments and actions. By comparing the qualitative feedback regarding the most inclusive (top 25%) and the least inclusive (bottom 25%) of leaders in our sample, we discovered that inclusive leadership is tangible and practiced every day.
These verbatim responses from our assessments illustrate some of the tangible behaviors of the most inclusive leaders in the study.
Shares personal weaknesses: “[This leader] will openly ask about information that she is not aware of. She demonstrates a humble unpretentious work manner. This puts others at ease, enabling them to speak out and voice their opinions, which she values.”
Learns about cultural differences: “[This leader] has taken the time to learn the ropes (common words, idioms, customs, likes/dislikes) and the cultural pillars.”
Acknowledges team members as individuals: “[This leader] leads a team of over 100 people and yet addresses every team member by name, knows the work stream that they support and the work that they do.”
The following verbatims illustrate some of the behaviors of the least inclusive leaders:
Overpowers others: “He can be very direct and overpowering which limits the ability of those around him to contribute to meetings or participate in conversations.”
Displays favoritism: “Work is assigned to the same top performers, creating unsustainable workloads. [There is a] need to give newer team members opportunities to prove themselves.”
Discounts alternative views: “[This leader] can have very set ideas on specific topics. Sometimes it is difficult to get an alternative view across. There is a risk that his team may hold back from bringing forward challenging and alternative points of view.”
What leaders say and do has an outsized impact on others, but our research indicates that this effect is even more pronounced when they are leading diverse teams. Subtle words and acts of exclusion by leaders, or overlooking the exclusionary behaviors of others, easily reinforce the status quo. It takes energy and deliberate effort to create an inclusive culture, and that starts with leaders paying much more attention to what they say and do on a daily basis and making adjustments as necessary. Here are four ways for leaders to get started:
Know your inclusive-leadership shadow: Seek feedback on whether you are perceived as inclusive, especially from people who are different from you. This will help you to see your blind spots, strengths, and development areas. It will also signal that diversity and inclusion are important to you. Scheduling regular check-ins with members of your team to ask how you can make them feel more included also sends the message.
Be visible and vocal: Tell a compelling and explicit narrative about why being inclusive is important to you personally and the business more broadly. For example, share your personal stories at public forums and conferences.
Deliberately seek out difference: Give people on the periphery of your network the chance to speak up, invite different people to the table, and catch up with a broader network. For example, seek out opportunities to work with cross-functional or multi-disciplinary teams to leverage diverse strengths.
Check your impact: Look for signals that you are having a positive impact. Are people copying your role modeling? Is a more diverse group of people sharing ideas with you? Are people working together more collaboratively? Ask a trusted advisor to give you candid feedback on the areas you have been working on.
There’s more to be learned about how to become an inclusive leader and harness the power of diverse teams, but one thing is clear: leaders who consciously practice inclusive leadership and actively develop their capability will see the results in the superior performance of their diverse teams.
Andrea Espedido is a consultant in Human Capital, Deloitte Australia, and a PhD candidate in organizational psychology at Macquarie University. Email her at aespedido@deloitte.com.au
NXP Semiconductors N.V. has kicked off what could be the start of a new trend in the computer chipmaking industry, announcing that it’s moving the vast majority of its electronics design automation workloads to Amazon Web Services Inc.’s public cloud platform.
The company said today it has selected AWS as its preferred cloud provider. By moving its EDA workloads to Amazon’s cloud, NXP will gain increased efficiency and more compute power that should help it to design a new generation of faster and more powerful computer chips for the automotive, industrial “internet of things,” mobile and communications infrastructure sectors.
According to Amazon, NXP has already seen benefits that include enhanced collaboration and increased EDA throughput since moving to AWS, while reducing costs and gaining more time to focus on actual design, rather than managing compute resources.
More interesting, though, are the expected long-term benefits of moving such a compute-heavy workload as EDA to the cloud. NXP says it believes it will be able to achieve some important process improvements that will revolutionize the way it designs and tests its central processing units.
As the company explains, each new chip design is put through extensive testing and validation before it’s manufactured to ensure it is functionally safe and secure and delivers the expected performance. This work includes front-end design workflows such as performance simulation and verification, as well as back-end workloads around timing and power analysis, design rule checks and other applications necessary to prepare a new chip for production.
Previously, NXP, like other chipmakers, had done this work on-premises in internal data centers with fixed compute capacity. Because of the increasing complexity of newer chips, these processes can take many months or even years to complete, and they require accurate forecasting and installation of new compute infrastructure.
As a result, it makes sense for NXP to move to the cloud, where it can tap into AWS’ advanced infrastructure and the scale and agility it needs to advance multiple chip design and testing projects at the same time. The move will enable it to run dozens of performance simulations in parallel, resulting in faster overall design times.
Shifting to the cloud also enables NXP to leverage key AWS analytics and machine learning services that can aid its research and development efforts. For instance, NXP is already using Amazon QuickSight, a machine learning-powered business intelligence service, to boost workflow efficiencies. By rapidly translating the results from one step of testing into modifications for another, it can reduce the time it takes to iterate on chip designs, NXP said.
The company also makes use of Amazon SageMaker, a service that’s used to build, train and deploy machine learning models in the cloud and at the edge, to optimize the way it structures compute, storage and third-party software application licenses.
NXP also benefits from the wide range of specialized instances available on AWS that allow it to achieve the perfect balance of price/performance for its EDA workflows.
Constellation Research Inc. analyst Holger Mueller told SiliconANGLE the key advantage of cloud platforms such as AWS is that they allow “commercial elasticity” for enterprises.
“In the roller-coaster bound semiconductor industry it makes sense to evaluate capital spending demand, so it’s a smart idea by NXP to move its EDA workloads to the cloud,” Mueller said. “EDA is early in the value chain for semiconductor makers, and not all design work leads to actual chips that are made.”
Charles King, an analyst with Pund-IT Inc., said moving its EDA workloads to Amazon could provide compelling value to relatively smaller silicon players compared with the biggest ones, such as Intel Corp. "Offloading compute and storage to a cloud vendor should reduce capital expenditures, and may also result in the company reducing IT headcount," King said. "It wouldn't be surprising if other vendors in NXP's class are contemplating similar moves, though I doubt major semiconductor players will follow suit."
NXP Semiconductors Chief Information Officer Olli Hyyppa said cloud-based EDA is necessary to accelerate semiconductor innovation and get new designs to market faster.
“AWS gives us the best scale, global presence, and selection of compute and storage options, with continuous improvements in price performance, that we need,” he said. “This will give precious time back to our design engineers to focus on innovation and lead the transformation of the semiconductor industry.”