
Archives
All posts for the month October, 2021

Computation is the driving technological force behind many modern social and economic shifts. To better understand how computers will impact critical areas of our lives, we explore how they’re being used today and what forces will shape their future.
Article link: https://www.technologyreview.com/magazines/the-computing-issue/
By Chuck Buck
Original story posted on: October 25, 2021
The Senate Appropriations Committee’s vote is seen as a victory for AHIMA and other leading healthcare organizations.
There was a hint of expectation when Katherine Lusk, chair and president of the American Health Information Management Association (AHIMA), recently teased Talk Ten Tuesdays audience members about a pending piece of legislation for a national patient identification program.
The Patient ID Now coalition, of which AHIMA is a founding member, continues to make an impact on the problem of patient misidentification. “We’re proud of the Framework for a National Strategy on Patient Identity that Patient ID Now released earlier this year,” Lusk said.
Later that same day, on Oct. 19, the U.S. Senate Appropriations Committee removed a longstanding prohibition on funding a national patient identifier program. AHIMA, along with a coalition of other healthcare organizations, had rallied around the cause.
According to a summary released by AHIMA, the bill “drops prohibition on using funding to develop a unique patient health identifier for each individual’s health information. The longstanding ban has been a barrier for health institutions to reliably share information about patients, and during the COVID-19 pandemic, for health entities to effectively trace contacts and track immunizations.”
“By joining the bipartisan movement to remove barriers to accurate patient identification, the Senate has taken a firm step towards protecting patient safety, patient privacy, and supporting efforts to address patient identification issues exacerbated by the COVID-19 pandemic,” AHIMA said in a news release posted on its website. “Patient ID Now has highlighted challenges caused by patient misidentification in the healthcare sector’s response to the pandemic. These challenges include thousands of duplicate records created during the vaccination registration process and disruptions in vaccine availability at provider sites because of inaccurate patient documentation.”
Lusk said AHIMA continues to promote the importance of protecting and securing health information. She told Talk Ten Tuesdays audience members that the Association had recently launched its own AHIMA dHealth™, a seal-of-approval program that helps digital health companies demonstrate how they protect patient health information.
Article link: https://www.icd10monitor.com/senate-votes-to-remove-ban-for-funding-unique-patient-health-identifier
Published 17 December 2020 – ID G00739647 – 37 min read
By Christian Canales, Tim Zimmerman, and 4 more
Communications technologies must continually evolve to match transformations across digital business landscapes. Product leaders must address evolving demand shaped by cloud, networking, security, cellular connectivity and infrastructure business models.
Overview
Key Findings
- 5G network slicing, 5G security, Wi-Fi 6 (802.11ax), function accelerator cards and 6G will have the highest impact on industries, business functions, and markets, replacing legacy product capabilities.
- Enterprise demand for multiprovider (private and public) cloud connectivity will drive adoption of technologies, such as multicloud networking and software-defined cloud interconnect, with a high impact. Multiple use cases apply, including consistent configuration management, better visibility, and compliance.
- Enterprise 5G and Wi-Fi connectivity will largely continue to coexist. While the features and timetable for 6G are not yet clearly defined and commercialization is expected in 2028, delivery of 802.11be (Wi-Fi 7) is expected around 2024, with adoption crossing the early adopter chasm by 2026.
Recommendations
Product leaders seeking to develop or expand their portfolio of communications technology solutions via emerging technologies should:
- Leverage artificial intelligence technology advancements intelligently. “We have AI/ML” will not give you a head start or separate you from the crowd unless real products and value stand behind the claim and you can articulate your differentiation.
- Orient workload-centric networking solutions to the cloud, where the practice of deploying multiple cloud providers is on the rise. Prioritize targeting organizations with a distributed user footprint.
- Prioritize greater agility by developing network automation tools that enable an orchestration-, policy- and intent-based networking system (IBNS)-oriented approach, rather than operationally focused network configuration and change management tools.
- Develop a strategy for 5G private enterprise indoor services by identifying the key capabilities and enterprise use cases that will find market demand and create service delivery differentiation.
Analysis
Overview of Emerging Technology Horizon
The following Emerging Technology Horizon profiles technology advancements and net new trends that will significantly impact the communications markets within the next three years. Many of these technologies and trends are emerging to address new demands created by other areas of IT adoption or transformation. These include technologies that are evolving to address multiple cloud provider environments, the Internet of Things (IoT) and 5G connectivity, edge computing, and beyond (see Figure 1).

Figure 1: Emerging Technology Horizon — Communications

The communications markets are in a constant state of flux. New vendors are frequently emerging, and market leaders are continually acquiring. Growing adoption of SaaS and other public cloud services, as well as the need for near-local processing of data to ensure an excellent user experience of application services, has changed the way traffic flows in networks. The traditional data-center-focused, hub-and-spoke model, optimal for data residing in a single location, is no longer relevant. Data resides in multiple locations, decentralizing data traffic flows, with consequences for security. A similar disaggregation of public cloud services is emerging, pushing a greater variety of client-impact functions from the cloud node to the edge.

The growing use of public cloud services brings growing connectivity complexity for enterprises. Most enterprises access public cloud services on an ad hoc basis via internet connections, and increasingly to a growing number of different providers. While multicloud networking products are nascent, they will become increasingly relevant as vendors expand their capabilities in the next two years. Software-defined cloud interconnect (SDCI) technology serves as a hub to connect an enterprise to a wide variety of cloud, network and internet service providers. By the end of 2023, we estimate that approximately 30% of medium and large enterprises will employ SDCI services, up from less than 2% today. Another driver is the demands of edge computing. As more intelligence gets pushed to the edge, the need for on-device processing and more distributed architectures rises, creating shifts and new challenges for WAN connectivity. Edge applications will increasingly rely on a mesh of connections, balancing the need for local processing of mission-critical traffic with access to cloud services. As requirements for accessibility of real-time data increase, so does the need to process more data locally, while leveraging access to applications hosted in public cloud providers’ networks and ecosystems.

Mobile service provider marketing hype about 5G often includes contentions that the technology can replace the IEEE 802.11-based corporate Wi-Fi network. While 5G and Wi-Fi will largely continue to coexist, private 5G will predominantly be driven by organizations that cannot wait for reliable indoor public 5G coverage, with opportunities for industrial and manufacturing use cases. Outdoor private 5G opportunities will arise from edge computing and mission-critical IoT applications, and in support of large-scale installations more effectively served by cellular than Wi-Fi connectivity. The arrival of Wi-Fi 7 (802.11be) is expected around the 2024 time frame. While each new IEEE standard has historically displaced its predecessor in a three- to four-year time frame (which currently also holds true for Wi-Fi 6), Wi-Fi 7’s higher performance does not yet offer a realistic value proposition, and its impact is expected to be more limited.

There are many factors that make 5G security more complex. The rise of diversified services, cloud architecture and potentially massive numbers of IoT connections to 5G networks exposes new security concerns and challenges. However, from a standards point of view, 5G provides enhanced security features compared with 4G — introducing unified authentication, a more flexible security policy for diverse use cases, secure service-based architecture and slice isolation, for example.
Although 5G security has not been particularly highlighted in many early communication service provider (CSP) deployments, Gartner expects that industrial users will demand that private networks leverage standards for security and privacy.

Automation is a key capability that spans many technology profiles in this research document. This includes NetOps, a networking approach that incorporates the use of DevOps tools to improve the operational experience, enabling a more nimble, agile and easier-to-manage network. Infrastructure and operations leaders are gradually transforming network operations by investing in analytics and automation, while improving integration with DevOps and security to support their digital business. This remains a bumpy road, though, partly because network vendors largely need to continue improving tooling and automation solutions. Also, enterprises often have a strong culture of risk aversion, limiting adoption of automation initiatives. Gartner nonetheless expects the use of NetOps 2.0 principles to grow by 40% by 2023, with organizations embracing these principles reducing application delivery times by 25% (for more information see NetOps 2.0: Embrace Network Automation and Analytics to Win in the Era of ContinuousNext).

In the campus networking space, “Wi-Fi Network Assurance” solutions enable simplified operations through automation capabilities, and the use of artificial intelligence/machine learning (AI/ML) functionality has also begun to extend to wired switching connectivity. Intent-based networking systems (IBNS) take automation to a wider portion of the network, including the WAN, data center, colocation facilities and cloud provider infrastructures. For software-defined cloud interconnect (SDCI) technology, Gartner advises organizations to prioritize providers that employ high levels of automation and orchestration in their hubs. For multicloud networking products, distinct capabilities include configuration/provisioning, automation, management and troubleshooting functionality. Gartner expects that by 2023, 20% of enterprises will use public cloud operational tools to manage and control at least 15% of their on-premises data center resources.
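The NetOps practices described above assume programmatic access to network state. As a minimal sketch of the idea, not any vendor's product, the snippet below uses the open-source netmiko library to pull a device's running configuration and diff it against a stored baseline; the device details, credentials and baseline path are placeholders.

```python
# Minimal NetOps-style drift check, assuming the open-source netmiko library
# and a reachable device. Host, credentials and baseline path are placeholders,
# not details from this research.
import difflib
from pathlib import Path

from netmiko import ConnectHandler

DEVICE = {
    "device_type": "cisco_ios",   # netmiko platform driver
    "host": "192.0.2.1",          # placeholder management address
    "username": "netops",         # placeholder credentials
    "password": "example-only",
}
BASELINE = Path("baselines/core-switch-01.cfg")  # placeholder baseline file

def config_drift(device: dict, baseline: Path) -> list:
    """Return unified-diff lines between the stored baseline and the live config."""
    with ConnectHandler(**device) as conn:
        live = conn.send_command("show running-config")
    return list(difflib.unified_diff(
        baseline.read_text().splitlines(),
        live.splitlines(),
        fromfile="baseline", tofile="live", lineterm="",
    ))

if __name__ == "__main__":
    drift = config_drift(DEVICE, BASELINE)
    # In a NetOps 2.0 pipeline this result would feed remediation or ticketing
    # tooling rather than a print statement.
    print("\n".join(drift) if drift else "No drift detected.")
```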
How to Use the Emerging Technology Horizon
This Emerging Technology Horizon content analyzes and illustrates two significant aspects of impact:
- When we expect it to have a significant impact on the market (specifically, range).
- How much of an impact it will have on relevant markets (namely, mass).
Each emerging technology or trend profile analysis is composed of these two aspects. The profiles are organized by range, starting with the center and moving to the outer rings of the Horizon (see Figure 1). (See Research and Methodology for the Emerging Technology Horizon in Note 1 for a more complete description of our approach to this research.)

Time to impact, or “range,” is measured in the years to early majority adoption. (Fans of Geoffrey A. Moore can think of it as time to cross the chasm.) This is when technology adoption is “ready for prime time.” It is important to point out that the time to technology impact, or range, is not the same as time to act on the technology. When and how a technology product or service leader should act depends on the company’s business strategy. Providers that want to be “first movers” with an emerging technology or trend will need to act far sooner than those that are comfortable waiting for their competition to compel them into action.

The “mass” component examines the extent of the impact on existing products and markets. To assess how massive the impact is, we explore two main aspects — breadth and depth. The breadth of impact concerns how many sectors are affected (products, services, markets, business functions, industries and geographies). The depth of the impact includes an analysis of the potential disruption to existing products, services and markets.
Communications Emerging Technologies and Trends: Horizon Profiles
Use Table 1 to jump to specific profiles. Each profile name is linked to the full technology profile to enable easier navigation.

Table 1. Emerging Technologies in Communications Based on Time to Adoption
Now-Range Impact
Wi-Fi 6 (802.11ax)
Analysis by: Tim Zimmerman and Bill Ray

Description: Wi-Fi 6 (802.11ax) is the latest iteration of the IEEE 802.11 WLAN technology standards. Its main enhancements are allowing the network, for the first time, to control device connectivity, and improving the efficiency of the existing 2.4 GHz and 5 GHz spectrum. The new standard also increases the theoretical throughput of the wireless medium to 10 Gbps for densely populated areas. As such, its goal is to ensure that a larger number of devices with varying requirements are properly connected to the enterprise infrastructure.

Wi-Fi 6 also adds the ability to allocate bandwidth between endpoints, using orthogonal frequency-division multiple access (OFDMA), so low-speed applications (such as a Wi-Fi light switch) receive a smaller allocation than those that require high speeds (such as a television); a toy sketch of this allocation idea follows the recommended actions below. However, this functionality is dependent on the endpoints also supporting Wi-Fi 6, and this will take some time, as the latest standard still carries a price premium. While the 802.11ax standard provides backward compatibility for legacy .11b/g/n/ac clients, these cannot benefit from the enhanced features of Wi-Fi 6, including the higher data rates, the improved multiuser multiple input/multiple output (MU-MIMO) capabilities and Basic Service Set (BSS) coloring.

Wi-Fi 6 can also be extended into the 6 GHz band. In this form, it is branded “Wi-Fi 6E” and will deliver significantly more speed and capacity, depending on the spectrum available. Wi-Fi 6E will be available in the U.S. early in 2021, with the U.K. and Europe hoping to follow (with half the 6 GHz frequency allocation) later the same year. Other countries will follow, but are unlikely to allocate quite as much additional radio spectrum, so speeds will be commensurately slower.

Sample Providers: Cisco, CommScope (RUCKUS), Extreme Networks, H3C, Hewlett Packard Enterprise [HPE] (Aruba), Huawei, Juniper Networks (Mist Systems), Ruijie Networks

Range: Now (0 to 1 Year)

Gartner rates the range of Wi-Fi 6 as 0-1 year because:
- We expect 802.11ax to become an IEEE ratified standard within the next six months. Prestandards chips are already available, and the Wi-Fi Alliance has already created a test bed for certification for the standard, which is being marketed as “Wi-Fi 6.”
- All leading Wi-Fi providers have already released Wi-Fi 6 APs for enterprises to purchase, with the ability to update them to the ratified version of the standard.
- Vendors of mobile devices (e.g., smartphones, tablets, laptops) have already committed to creating products with the new standard. Wi-Fi 6 APs additionally guarantee backward compatibility to support .11b/g/n/ac clients.
802.11ax WLAN (Wi-Fi) access points (APs), as a percentage of overall APs shipped to enterprises, have grown from 0.8% in 1Q19, to 16.4% in 4Q19 and 18.8% in 1Q20, according to our market share data. We expect this share to exceed 35% by year-end 2020 and 55% by year-end 2021.

Mass: Very High

The impact of Wi-Fi 6 is expected to be very high, with more than 30% of WLAN upgrades for large enterprises based on 802.11ax in the next 12 months. In light of the COVID-19 pandemic, even though we are seeing many WLAN upgrade projects getting delayed, the economic downturn does not seem to be visibly altering the shift from 802.11ac to 802.11ax that we forecast late in 2019. While the price premium of .11ax over .11ac (for the enterprise market, excluding small-business APs) remained relatively high at 53% in 1Q20 ($295 revenue per AP for .11ax, versus $193 for .11ac), it continues to decline (from 58% in 4Q19 and 90% in 1Q19). Also, this calculation does not take into account the different penetration levels across vendors. For the two leading providers in terms of revenue share, Cisco and HPE (Aruba), the price premium of .11ax dropped below 25% in 1Q20. We are seeing Wi-Fi 6 increasingly proposed by default in price contracts, matched by end-user demand driven by future-proofing aspirations.

While previous generations of Wi-Fi have focused on improving speed, Wi-Fi 6 introduces several innovations that make it applicable across a wider range of applications. Specific use cases include remote collaboration using higher-resolution (4K) video and augmented reality (AR) and virtual reality (VR) applications (e.g., remote field services, training and simulation, product design and visualization, and AR commerce). In the IoT world, bandwidth needs vary widely, from very low requirements for data collection devices to very high needs for AR/VR devices. In the past, both kinds of device resided in the same domain, as each device determined where it wanted to associate and how it wanted to communicate with the infrastructure. Wi-Fi 6 changes the control mechanism for the wireless medium from the device to the network, allowing APs to intelligently segment devices and making Wi-Fi more competitive with Bluetooth in low-power/low-speed applications such as sensors and automation systems. Improvements in communication scheduling also help IoT devices achieve higher battery life, again pushing Wi-Fi into the IoT market.

As highlighted in Market Trends: Will the Advent of 5G Make Enterprise Wi-Fi Connectivity Less Relevant?, 5G communication service providers’ (CSPs’) marketing hype has sparked questions among enterprises and tech vendors about 5G potentially displacing Wi-Fi connectivity. This can confuse enterprises regarding the availability and capabilities of 5G. Countering that requires product marketers to create a differentiated position that emphasizes how Wi-Fi can outperform and outsell 5G. For more information see How to Promote Enterprise Wi-Fi Connectivity Against the Advent of 5G.

Recommended Actions:
- Support of operational technology (OT) connectivity should be an important part of your Wi-Fi 6 value proposition, taking product differentiation, such as IoT onboarding and security capabilities, beyond a Wi-Fi-centric focus. The progressive convergence of the “traditional” IT and building automation networks continues to increase the number of IoT devices that organizations have to manage.
- Highlight differentiation with latency monitoring, including response times for encrypted applications. Other metrics should include jitter, packet loss, mean opinion scores (MOS) and even location, to address business criticality by application.
- Provide support for Wi-Fi Alliance Certified Location (802.11mc), which can already provide an accurate location using Wi-Fi 6 access points and round-trip timing (RTT).
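To make the OFDMA behavior described in this profile concrete, here is a toy sketch, not anything from the 802.11ax specification itself, that divides a frame's resource units (RUs) among clients in proportion to a demand class; the client list, weights and RU budget are invented for illustration.

```python
# Toy illustration of the OFDMA idea in Wi-Fi 6: the AP, not the client,
# divides a frame's resource units (RUs) among clients by demand class.
# The client list, weights and RU budget are invented for illustration only.

RU_BUDGET = 36  # invented per-frame budget of resource units

# (client, demand weight): a light switch needs far less than a 4K TV.
CLIENTS = [("light-switch", 1), ("temperature-sensor", 1),
           ("laptop", 6), ("4k-television", 10)]

def allocate_rus(clients, budget):
    """Split the RU budget proportionally to demand weight, handing any
    rounding remainder to the heaviest demander."""
    total = sum(weight for _, weight in clients)
    alloc = {name: budget * weight // total for name, weight in clients}
    alloc[max(clients, key=lambda c: c[1])[0]] += budget - sum(alloc.values())
    return alloc

print(allocate_rus(CLIENTS, RU_BUDGET))
# {'light-switch': 2, 'temperature-sensor': 2, 'laptop': 12, '4k-television': 20}
```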
Recommended Reading: Innovation Tech Insight for Wi-Fi 6 (802.11ax) — Assessing the Need for Enterprises
Wi-Fi Network Assurance
Analysis by: Tim Zimmerman and Christian Canales

Description: The term Wi-Fi Network Assurance refers to collecting network data into a data lake and applying artificial intelligence/machine learning (AI/ML) algorithms to train, baseline, monitor, react to, proactively resolve and report on Wi-Fi network performance issues. The ability to baseline the quality of Wi-Fi connectivity and collect the right data to resolve simple and advanced issues, such as time correction problems, provides the basis for the system to guarantee that certain quality levels are met as enterprises move to eliminate campus network administrators. The use cases behind AI/ML span optimizing and analyzing network performance over time and user density, service quality management to meet SLAs, and self-healing capabilities to maximize reliability, as well as better security. Basic solutions in the market provide suggestions for network administrators to tune Wi-Fi settings based on inferences, while others have gone a step further and can eliminate some human intervention even for advanced issues. (A minimal baselining sketch follows the recommended actions below.)

Sample Providers: Cisco, CommScope (RUCKUS), Extreme Networks, H3C, Hewlett Packard Enterprise (Aruba), Huawei, Juniper Networks (Mist Systems)

Range: Now (0 to 1 Year)

Gartner rates the range of Wi-Fi Network Assurance at the 0-1 year level for several reasons:
- Most of the leading Wi-Fi providers serving the enterprise market already have solutions, and adoption is expected to cross the early adopter chasm within the next 12 months as enterprises continue to seek cost optimization opportunities. However, it is important to acknowledge that AI/ML is a very hyped topic today, as the variation in the data collected and the difference in algorithms (inference, supervised ML, unsupervised ML) determine the types of problems that can be resolved. Most Wi-Fi providers have a sales pitch on AI/ML technology, yet only a handful provide differentiated functionality.
- NetOps (see Note 2) is an emerging use case, driven by recent advances in analytics, AI and ML. At the access layer, NetOps today applies predominantly to Wi-Fi, although it has begun to extend to wired networking. For too long, Wi-Fi has been one of the “pain points” for organizations, as it comes with inherent challenges associated with interference and distance, and is a shared medium.
- A strategic planning assumption by Gartner estimates that by 2022, 65% of enterprises will deploy network automation (NA) in the access layer (up from less than 15% in 2017). We also anticipate growing use of artificial intelligence for operations (AIOps) platforms that will improve Wi-Fi performance, based on the use of automated root cause analysis in conjunction with network datasets and increased confidence in problem resolution. Any Wi-Fi vendor not investing in NetOps functionality and in understanding the data that must be collected to resolve advanced issues will therefore be left behind.
Mass: Medium

The impact of Wi-Fi Network Assurance is believed to be high for the higher end of the enterprise market (organizations with more than 500 employees, especially those with complex network needs), but moderate overall due to a more limited impact for small and midsize organizations. The majority of providers targeting the midmarket today lag in the required capabilities, typically due to lack of investment and knowledge.

Improperly implemented Wi-Fi installations continue to result in a poor end-user experience. For too long, Wi-Fi has been one of the pain points for organizations, as it comes with inherent challenges associated with interference and distance, and is a shared medium. We have seen the integration of sensors into Wi-Fi APs to improve monitoring of whether SLAs are being met, giving network administrators the ability to run frequent tests to ensure network performance continues to meet SLAs. Wi-Fi service assurance takes this to a higher dimension, with the ability to eliminate error-prone and tedious manual intervention. As such, it lowers the burden on network administrators, giving enterprises flexibility in reallocating network administration resources. Organizations face the issue that IT personnel staffing levels will likely remain flat or decline in years to come. This is a problem that Wi-Fi product marketers can flip to their advantage to communicate product differentiation.

Recommended Actions:
- Develop differentiation related to AI/ML technology by focusing on the data that is collected and the algorithms used, targeting radio frequency (RF) management as the ability to adapt to changing conditions in the RF environment (e.g., gaps in coverage or changes in capacity or performance) while providing insightful analytics, such as whether SLAs are being met.
- Target delivering simplified operations through automation capabilities that document time savings for improved ROI. Key aspects should include automation workflows that help eliminate error-prone and tedious manual intervention, or the ability to orchestrate multiple device configurations at network scale.
- Include integration with third-party provisioning/configuration management software in your roadmap. For instance, this should embrace the ability to leverage tools for continuous configuration automation (e.g., Red Hat Ansible, Puppet) to automate multiple aspects of the configuration life cycle, as well as reporting or ticketing tools (e.g., ServiceNow) to record changes made to the network.
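As a minimal sketch of the baselining idea behind these assurance solutions, and not any vendor's actual algorithm, the snippet below learns a mean and spread for one Wi-Fi metric and flags samples that deviate sharply; the sample values are invented.

```python
# Minimal sketch of Wi-Fi assurance baselining: learn a normal range for one
# metric (here, client association latency in ms) and flag outliers.
# Values are invented; real products use far richer models and datasets.
import statistics

history = [42.0, 45.1, 39.8, 44.3, 41.7, 43.9, 40.5, 46.2]  # past samples
mean = statistics.fmean(history)
stdev = statistics.stdev(history)

def assess(sample_ms, z_threshold=3.0):
    """Flag a sample whose z-score against the learned baseline is too large."""
    z = (sample_ms - mean) / stdev
    return "anomalous" if abs(z) > z_threshold else "normal"

for sample in (44.0, 95.0):
    print(f"{sample} ms -> {assess(sample)}")
# 44.0 ms -> normal
# 95.0 ms -> anomalous
```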
Short-Range Impact
5G Network Slicing
Analysis by: Peter Liu

Description: 5G network slicing is a form of virtual network technology. It allows a network-based CSP to create multiple independent end-to-end logical networks, each in the form of a “network slice,” on top of a common shared physical infrastructure in the provider’s network domain. Each slice can be customized to have its own network architecture, engineering mechanism, network provisioning methodology, configuration and service quality profile based on the requirements that it serves.

Sample Providers: Ciena, Cisco, Ericsson, Huawei, Mavenir, Nokia, ZTE, Zeetta Networks

Range: Short (1 to 3 Years)

Network slicing adoption for 5G is still in its early stage, with many concerns and issues remaining unsolved. To succeed, network slicing requires new business models that drive innovative partnerships, standards for stand-alone and virtualized network infrastructure, demanding SLAs agreed between operators and vertical markets, and collaboration among standardization bodies, among other details. None of these elements is fully ready at this moment.

While many technology and standards-based obstacles remain, network slicing for 5G will become a key differentiating feature in the next one to three years. The main drivers are as follows:
- 5G commercial rollout has begun in various countries. CSPs are eager to use network slicing to move beyond selling simple connectivity to offering enterprise customers more advanced connectivity options — specifically, guaranteed levels of network performance on a given network slice. More CSPs have started evaluations and trials of network slicing, including BT, China Mobile, Deutsche Telekom (DT), SK Telecom (SKT) and Vodafone U.K.
- The latest freeze of the 5G standard, R16, enhances the network slicing and 5G core features that enable more vertical industry use cases. In addition, CSPs in China and Korea will start deploying commercial stand-alone 5G and multiaccess edge computing (MEC) at large scale in 2020, which we believe will accelerate network slicing adoption and maturity and enable more innovation opportunities.
- Although network slicing has been positioned as a 5G technology differentiator, it can be applied to 4G LTE. As such, there are already many promising use cases quickly and easily supported by 4G that will improve through the evolution to hybrid 4G/5G networks and emerge as a fully automated experience in the coming years.
- Most of the leading network equipment and service providers serving the CSP market already have network slicing solutions ready. Adoption is expected to cross the early adopter chasm within the next 12 months as CSPs continue to seek to monetize the opportunity.
Mass: Very High

The impact of network slicing is believed to be very high for the communication industry, and slicing is widely believed to have the potential to redefine how CSPs conduct their business. Slicing with appropriate resources and optimization is expected to broaden the horizon of CSPs in many vertical segments, such as automotive, energy, finance, healthcare, manufacturing and the public sector. By being able to individually service the particular communication and connectivity needs of specific industries, CSPs could transform from a “dumb pipe” provider to an infrastructure partner in a variety of industries’ digitization initiatives. In addition, given that multiple slices can run on a common shared infrastructure, including costly components (e.g., nodes, base stations, fiber), operators enjoy the economies of scale that any shared infrastructure provides.

However, a number of challenges lie ahead, which also provide opportunities for vendors to differentiate themselves:
- Network slicing adds complexity to CSP network management and orchestration, which are already complex and operationally disruptive. It requires significant operational transformation, particularly for the large-scale deployments involved.
- Security becomes critical and challenging. Different infrastructures will have different security levels and policies since those are managed and administered by both telecom and non-telecom players.
- Business models require development on a per slice/service basis to meet the dynamic demands and traffic variations.
- Standardization of services and handovers across various industry players has to be renegotiated in much more detail than before.
Recommended Actions:
- Take a phased approach when developing your network slicing product offering. Do not wait for full readiness of network slicing capabilities. Lower the entry point. Start with static slicing for industry verticals that have clear and common requirements on the network, such as mission-critical communication, entertainment (ultrahigh-definition live broadcast and augmented reality/virtual reality [AR/VR]), gaming, and manufacturing.
- Reduce the complexity of network slicing management and orchestration. Enhance network slicing creation and deployment automation capabilities in your products by leveraging AI and data analytics. Offer business customers the capability to manage their own services or slices (e.g., dimensioning, configuration) by means of application programming interfaces (APIs); a hypothetical sketch of such a call follows the recommended actions below.
- Enhance security features in your network slicing offering while allowing resource sharing among multiple tenants; such networks must also meet the security requirements of each slice scenario that is employed.
- Adopt common, open architectures that demonstrate how network slicing can be applied in multivendor, multidomain, multioperator contexts for a range of candidate use cases and services.
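The API-driven self-service recommended above could look something like the sketch below; the endpoint, payload fields and token are entirely hypothetical placeholders for whatever interface a CSP actually exposes.

```python
# Hypothetical sketch of a customer-facing slice-management API call of the
# kind recommended above. The endpoint, fields and token are invented
# placeholders; any real CSP API will differ.
import requests

API = "https://api.example-csp.com/v1/slices"  # hypothetical endpoint
TOKEN = "example-token"                         # hypothetical credential

def resize_slice(slice_id, downlink_mbps, latency_ms):
    """Ask the provider to re-dimension a slice's service profile."""
    resp = requests.patch(
        f"{API}/{slice_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"guaranteed_downlink_mbps": downlink_mbps,
              "max_latency_ms": latency_ms},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# e.g., widen an AR/VR slice ahead of a live event:
# resize_slice("slice-arvr-01", downlink_mbps=500, latency_ms=10)
```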
Recommended Reading: 4 Hype Cycle Innovations That Should Be on the Private Mobile Networks Roadmap for 5G Security, CSP Edge and Slicing
5G Security
Analysis by: Sylvain Fabre

Description: 5G security is enabled by a set of 5G network mechanisms, and improves on 4G security with:
- Unified authentication (4G authentication is access dependent).
- Flexible security policy for diverse use cases (versus single 4G policy).
- Encrypted transmission of the Subscription Permanent Identifier (SUPI), which prevents International Mobile Subscriber Identity (IMSI) leakage, protecting user privacy.
All 5G network infrastructure vendors implement the same 3GPP standards for 5G security. However, development processes may vary; concerns about specific vendors, as well as geopolitical tensions, have increased procurement scrutiny around 5G infrastructure.

Sample Providers: Ericsson, Huawei, Intel, Mobileum, Netcracker, Nokia, VIAVI Solutions

Range: Short (1 to 3 Years)

Despite all the current market hype around 5G, Gartner rates the range of 5G security as one to three years for two main reasons:
- There is not a single security standard. While the main standardization organization is 3GPP, 5G security involves security solutions from multiple standardization organizations. 5G security embraces several security protocols, such as IPsec, EAP and TLS, and others under development, such as security for network function virtualization (NFV).
- Standardization does not guarantee 5G security, as there are many factors that make 5G security more complex. The rise of diversified services, cloud architecture, and massive IoT connections in 5G expose new security concerns and challenges. Protection involves implementing security in the configuration and operation of the 5G network to ensure cybersecurity hygiene.
While organizations opting for private 5G will likely pay more attention to securing these networks, we don’t see security being a priority in many public 5G rollouts. This is also due to the immaturity of some 5G capabilities, such as slicing, which will mature from today’s 3GPP release 15 (R15) to R16 in 2021 and R17 from 2022. Growing adoption of 5G security, including broader use of more sophisticated security mechanisms, is nonetheless unavoidable in the long term. Ensuring protection of identity and privacy is no longer an option. We estimate that by 2023, 65% of the world’s population will have their personal data covered under modern privacy regulations, up from 10% today.

Mass: Very High

The impact of 5G security is believed to be very high overall. It will affect a multitude of sectors, namely all organizations using 5G services, and 5G security will largely replace current 4G product capabilities. Because development processes vary across vendors, and given geopolitical tensions, procurement scrutiny around 5G infrastructure has increased, including its security implications. There are a number of security challenges lying ahead, which also provide opportunities for vendors to differentiate themselves:
- 5G will increase the number and diversity of connected objects, potential distributed denial of service (DDoS) attack vectors and entry points. This also provides more telemetry for anomaly detection.
- 5G infrastructure virtualization, automation and orchestration of a service-based architecture increase exposure.
- Slicing virtual networks across shared infrastructure will impact security due to lateral movement risk, and cross-slice permeability issues.
- A wider ecosystem delivering industrial 5G use cases, with varying security competencies and credentials, makes SLAs and service assurance, including end-to-end (E2E) security, challenging.
- Backward compatibility with 4G/Long Term Evolution (LTE) means some legacy security issues will persist in 5G.
- Cross-network-layer security will need to be managed between the 5G macro and small cell layers, along with the strength of the different algorithms used.
Recommended Actions:
- Prioritize DDoS mitigation and cloud web application and API protection (WAAP) capabilities, such as cloud web application firewalling, bot mitigation, DNS protection and intrusion prevention systems (IPS), in your roadmap. Include leveraging integration with on-premises DDoS appliances.
- Offer 5G security services that include time-sensitive reporting to clients, given maturing data protection regulations. Establish a strategy to minimize the impact of zero-day vulnerabilities through regular software patching.
- Stress your multivendor (or multiprovider) security capabilities. The key is the ability to detect a potential security threat in shared 5G infrastructure (between different CSPs or utility providers, or multitenant scenarios). IoT segmentation, anomaly detection and authentication with Wi-Fi networks will complement the end-to-end value proposition.
Recommended Reading: Market Trends: Strategies Communications Service Providers Can Use to Address Key 5G Security Challenges
AI for Traffic Management
Analysis by: Alan Priestley

Description: AI for traffic management is the use of deep neural network (DNN)-based AI algorithms to manage the flow of data through 5G network infrastructure to maintain service quality and data throughput. High-speed 5G deployments bring a significant increase in data traffic flowing through the network — through both base stations and back-end core infrastructure — which places challenges on network capacity and availability. Data types are also rapidly expanding, ranging from user-centric data (such as video streaming) to a wide range of machine-related data (from high-speed data traffic to IoT sensor data). At the same time, data security and protection are now paramount. To ensure consistent security and meet contractual quality of service (QoS) and experience for all users, 5G systems need to implement sophisticated traffic management algorithms that can dynamically manage and analyze data traffic through the network.

Sample Providers: Nokia, Ericsson, NEC

Range: Short (1 to 3 Years)

Traffic management models used in existing 4G LTE networks have been rules-based. However, with the rapid growth in deployment of 5G networks and the increasing complexity of data and network topologies, machine learning techniques are being utilized, and a rapid transition to DNN-based (often referred to as artificial intelligence) solutions is underway over the next three years.

Mass: High

Mass is high, as traffic management algorithms are deployed across the network infrastructure, with some “simpler” elements in base stations and other, more complex tasks within the core network data centers. Many implementations will leverage standard CPUs to execute these DNN-based algorithms, and many of the latest-generation CPUs utilized within the network core have extensions to their instruction sets that enable them to execute these workloads more efficiently. Dedicated workload accelerator chips, such as application-specific integrated circuits (ASICs), graphics processing units (GPUs) and field-programmable gate arrays (FPGAs), are also being deployed in core data centers to support these new AI-based traffic management workloads. Base station designs typically have a more constrained form factor than the core data centers and will utilize dedicated chips to support the CPU in executing DNN-based traffic management algorithms (an illustrative sketch follows the recommended actions below).

Recommended Actions:
- Evaluate the use of DNN-based AI algorithms to enhance traffic management and analysis.
- Integrate dedicated accelerator chips into base station designs to support DNN-based workloads.
- Ensure core infrastructure designs are capable of supporting workload accelerators.
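As an illustrative sketch of the DNN-based approach, and not any vendor's implementation, the snippet below runs a tiny feedforward network over invented per-flow features to produce traffic-class probabilities; in a real deployment the weights would be trained on network telemetry and executed on the CPUs or accelerator chips discussed above.

```python
# Tiny feedforward-network sketch of DNN-based traffic classification.
# Weights are random placeholders; a real system trains them on labeled
# network telemetry and runs on the CPUs or accelerators discussed above.
import numpy as np

rng = np.random.default_rng(0)

# Invented per-flow features: packet rate, mean packet size, burstiness, age.
N_FEATURES, N_HIDDEN, N_CLASSES = 4, 16, 3  # classes: video / IoT / bulk
W1 = rng.normal(size=(N_FEATURES, N_HIDDEN))
W2 = rng.normal(size=(N_HIDDEN, N_CLASSES))

def classify(flow):
    """Forward pass: ReLU hidden layer, then softmax over traffic classes."""
    hidden = np.maximum(flow @ W1, 0.0)
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

flow = np.array([1200.0, 900.0, 0.3, 4.2])  # invented feature vector
print(classify(flow))  # class probabilities a scheduler could act on
```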
Recommended Reading:
Emerging Technologies: Critical Insights on AI Semiconductors for Endpoint & Edge Computing
Forecast Analysis: Data Center Workload Accelerators, Worldwide
Nonvolatile Memory Express Over Fabrics (NVMe-oF)
Analysis by: Julia Palmer

Description: Nonvolatile memory express over fabrics (NVMe-oF) is a network protocol that takes advantage of the parallel-access and low-latency features of NVMe Peripheral Component Interconnect Express (PCIe) devices. NVMe-oF enables tunneling the NVMe command set and data over additional transports beyond PCIe, across various networked interfaces, to remote subsystems in a data center network. The specification defines a common protocol interface and is designed to work with high-performance fabric technologies, including Fibre Channel, RDMA over InfiniBand or Ethernet (with RoCEv2 or iWARP), and TCP. (A connection sketch using the TCP transport follows the recommended actions below.)

Sample Providers: Dell Technologies, Excelero, Hitachi Vantara, IBM, Silk, Lightbits, NetApp, Pavilion Data Systems, Pure Storage, StorCentric

Range: Short (1 to 3 Years)

Gartner believes the range for this profile is one to three years because, even though NVMe itself is broadly deployed today, NVMe-oF still lacks wide adoption as a data center protocol and end-to-end NVMe-oF support is nascent. Gartner rates this technology profile’s impact as high from an end-user perspective, as NVMe-oF has the potential to significantly lower storage latency for shared storage arrays. End-to-end NVMe-oF implementations balance the performance and simplicity of direct-attached storage (DAS) with the scalability and manageability of shared storage. Today, many NVMe-oF offerings that use fifth-generation and/or sixth-generation Fibre Channel (FC-NVMe) are available, but adoption of NVMe-oF over 25/50/100 Gigabit Ethernet is slower. In the future, it is likely that TCP will evolve to be an important data center transport for NVMe-oF. Unlike server-attached flash storage, shared accelerated NVMe and NVMe-oF can scale out to high capacity with high-availability features and be managed from a central location, serving dozens of compute clients.

Mass: Medium

The adoption of NVMe-oF is moderate overall. NVMe-oF complexity and costs will be barriers to broad adoption in the near future. While a variety of highly performant workloads (such as AI/ML, high-performance computing [HPC], in-memory databases and transaction processing) can leverage NVMe-oF today, most mainstream workloads are not planning a quick transition to an end-to-end NVMe architecture. NVMe-oF upgrades might require uplifts and updates to storage networks that encompass switches, host bus adapters (HBAs) and OS kernel drivers. This barrier will be removed with the introduction and maturation of NVMe-oF over TCP, which will not require drastic infrastructure changes. However, this technology is still missing broad ecosystem support.

Most storage array vendors already offer solid-state arrays with internal NVMe storage. During the next 12 months, an increasing number of infrastructure vendors will offer support for NVMe-oF connectivity to the compute hosts. Integrated, converged and hyperconverged integrated systems (HCIS) will be able to hide complexity, shorten the learning curve for the adoption of NVMe-oF elements, and deliver those products in an integrated, turnkey format during the next 12 to 24 months. NVMe-oF delivered as a software-defined storage (SDS) solution is most appealing to hyperscale vendors, which are leading the adoption curve of this nascent technology.

Recommended Actions:
- Develop NVMe-oF offerings that support integration with RDMA over Converged Ethernet v2 (RoCEv2) or NVMe-oF over TCP-based products.
- Build an ROI value proposition for customers with business-critical applications that can leverage the high throughput and low latency of end-to-end NVMe-oF capabilities.
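As a concrete point of reference for the NVMe-oF over TCP transport discussed in this profile, the sketch below drives the standard Linux nvme-cli utility from Python to discover and connect a remote subsystem; the address, port and NQN are placeholders, and the commands require root and a reachable target.

```python
# Sketch of connecting an NVMe-oF/TCP subsystem by driving the standard Linux
# nvme-cli utility from Python. Address, port and NQN are placeholders; the
# commands require root and a reachable target.
import subprocess

TARGET_ADDR = "192.0.2.10"                          # placeholder address
DISCOVERY_PORT = "8009"                             # NVMe/TCP discovery port
SUBSYSTEM_NQN = "nqn.2014-08.org.example:storage1"  # placeholder NQN

def run(cmd):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Ask the discovery controller which subsystems it exports.
print(run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR,
           "-s", DISCOVERY_PORT]))

# 2. Connect to one exported subsystem; the kernel then surfaces it as a
#    local block device (e.g., /dev/nvme1n1).
run(["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420",
     "-n", SUBSYSTEM_NQN])
```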
Recommended Reading:
Top 10 Technologies That Will Drive the Future of Infrastructure and Operations
2020 Strategic Roadmap for Storage
Critical Capabilities for Solid-State Arrays
Secure Access Service Edge (SASE)
Analysis by: Nat Smith

Description: Secure access service edge (SASE, pronounced “sassy”) combines comprehensive networking and security functions to support the dynamic secure access needs of the workforce. It connects people to services. SASE mandates that networking and security services be provided from the cloud edge, though some SASE use cases still require a portion of the service to be delivered on-premises.

SASE is evolving from five contributing security segments: software-defined WAN (SD-WAN), firewall as a service (FWaaS), secure web gateway (SWG), cloud access security brokers (CASB) and zero trust network access (ZTNA). The consolidation of these markets into a single SASE market will happen over time. Today, there are still five separate buyers and five separate security segments. As the evolution unfolds, vendors in each of these contributing segments that embrace the SASE framework should be considered SASE vendors, or at least less mature SASE vendors.

While the list of individual capabilities continues to evolve and will likely initially differ between products in the contributing segments, serving those capabilities from the cloud edge is non-negotiable and fundamental to SASE. The core capabilities of a completely consolidated SASE are the aggregation of the functionality from these individual segments. However, in the short term, best-of-breed capabilities in each of the contributing segments that are served from the cloud edge are considered a SASE solution.

Sample Providers: Akamai; Broadcom-Symantec; Cato Networks; Fortinet; iboss; Netskope; Palo Alto Networks; Versa Networks; VMware; Zscaler

Range: Short (1 to 3 Years)

Even though some vendors are not implementing all portions of this framework today, Gartner estimates SASE is about one to three years away from early majority adoption. Additionally, there is already some consolidation among the segments, with SD-WAN and FWaaS vendors offering similar capabilities and often appearing on the same buyer shortlists. Similarly, SWG, CASB and ZTNA vendors are consolidating, as this aggregate feature set is often sought for remote worker security. The largest ZTNA vendors are also some of the larger SWG vendors.

Mass: High

SASE extends to five of the larger markets in security, with the prediction that they will ultimately consolidate into a single market with a single buyer. The influence of this evolution is large, and the extensibility of the framework, which allows new features and capabilities to be incorporated easily, ensures that SASE will grow beyond today’s five contributing segments. Although SASE represents an evolution rather than a transformation, the changes required to evolve to a SASE framework will be significant, which adds to the mass of the technology. Appliance-based vendors will need to rearchitect their solutions for the cloud, implementing cloud-delivered network security services. However, the services alone will not be sufficient; vendors will also need points of presence (POPs) or a cloud edge presence, which may require substantial investment or partnerships.

Recommended Actions:
- Adopt a flexible service-based architecture that gives buyers the flexibility to easily adapt their network security capabilities to changing end-user environments and use cases.
- Develop cloud-based components as scalable microservices that can all process packets in a single pass.
- Build a network of distributed points of presence (POPs) through colocation facilities, service provider POPs and infrastructure as a service (IaaS) to reduce latency and improve performance for network security services.
Recommended Reading:
Forecast Analysis: Gartner’s Initial Secure Access Service Edge Forecast
Market Trends: How to Win as WAN Edge and Security Converge Into the Secure Access Service Edge
General Manager Update: How to Win as WAN Edge and Security Converge Into the Secure Access Service Edge
Product Manager Insight: China Presents Growing Opportunities for SASE Providers
The Future of Network Security Is in the Cloud
Zero Trust Networking
Analysis by: Nat Smith

Description: Zero trust networking, or identity-based networking, is the use of identities to establish sessions and control traffic in the network. In zero trust networking, connectivity policies are created in terms of the actual users, devices and services, not Internet Protocol (IP) addresses. This simplifies connectivity and actually makes it scale. It is also a call for network security vendors to invest and take a much more active role in identity systems and architectures as a function of all traffic passing through their offerings.

For example, VPN tunnels are created as an encrypted path between two sets of IP addresses. Network segmentation is often accomplished with subnets or a range of related IP addresses. Firewall rules are often written using only IP addresses. Policies become artificially complex and large as users, devices and services move around. This is one of the reasons why we see firewalls with thousands of rules. Not only do we need more rules to accommodate moving users, data and services, but the intent of the rules and why they were added are also easily forgotten. When we do not know why a rule was put in place or what it is supposed to do (e.g., just an IP address in the rule with no other context), we leave it in so as not to interrupt connectivity. Leaving a rule in place because no one knows what it does is a perfect example of things getting too complicated.

Zero trust network access (ZTNA) is one of the best examples of zero trust networking in action. Instead of setting up static and contextless rules, as had been the practice with VPNs, policies are simple and logical, and overlay easily on the existing low-level network infrastructure (a minimal policy sketch follows the recommended actions below). As services increasingly move to environments where organizations do not control the network infrastructure (e.g., IaaS), the forced use of IP addresses to specify connectivity policies will increasingly be a burden.

Sample Providers: Akamai; Appgate; Cisco; Citrix; Proofpoint Meta; Netskope; Odo; Okta; Palo Alto Networks; Perimeter 81; Zscaler

Range: Short (1 to 3 Years)

The recent global pandemic has accelerated adoption of this technology, particularly as part of ZTNA solutions. That alone dictates a short range for this technology. However, the overhead and conversion of existing firewall rules alone will make this transition slow, and IP-based rules will persist for many years to come.

Mass: Medium

Zero trust networking is already underway, firmly a part of connecting remote workers to private services (ZTNA). However, the conversion of other network security and connectivity solutions away from IP addresses and toward users or services will take some time. In addition, the requirement to reassess trust of identity will require new technology and new architecture for many vendors. As a result, Gartner rates this technology trend as medium in mass.

Recommended Actions:
- Study ZTNA as a template for access control under zero trust networking.
- Use existing object technology in rules and configuration to prove the concept of policies defined solely by user and service, learning market preferences and workflow optimizations.
- Expand product integrations with identity services and vendors, not to log into products, but to verify path and permission before traffic is allowed to pass. Constant reassessment of identity (trust) should be part of the packet path.
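To make the contrast with IP-based rules concrete, here is a minimal sketch of an identity-based policy check in the spirit described above; the identities, services and posture check are invented, and real products reassess trust with far richer signals.

```python
# Minimal sketch of zero trust networking policy: rules name users, devices
# and services rather than IP addresses, and trust is reassessed per request.
# Identities, services and the posture check are invented examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str
    device_compliant: bool  # stand-in for a device posture check
    service: str

# Policy reads as intent, not as IP ranges.
POLICY = {
    "ledger": {"alice@finance"},
    "build": {"bob@eng", "carol@eng"},
}

def authorize(req):
    """Re-verify identity and posture on every request (no implicit trust),
    then consult the identity-based policy."""
    if not req.device_compliant:
        return False
    return req.user in POLICY.get(req.service, set())

print(authorize(Request("alice@finance", True, "ledger")))   # True
print(authorize(Request("alice@finance", False, "ledger")))  # False: posture
print(authorize(Request("bob@eng", True, "ledger")))         # False: identity
```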
Recommended Reading:Market Guide for Zero Trust Network Access
Note 1: Research and Methodology for the Emerging Technology Horizon
The Emerging Technology Horizon content analyzes and illustrates two significant aspects of impact:
- When we expect it to have a significant impact on the market (specifically, range).
- How big an impact it will have on relevant markets (namely, mass).
Analysts evaluate range and mass independently and score them each on a one-to-five Likert-type scale:
- For range, this scoring determines in which Horizon ring the Emerging Technologies and Trends will appear.
- For mass, the score determines the size of the Horizon point.
In the Emerging Technology Horizon, the range estimates the distance (in years) that the technology, technique or trend is from crossing over from early-adopter status to early majority adoption. This indicates that the technology is prepared for and progressing toward mass adoption. So at its core, range is an estimation of the rate at which successful customer implementations will accelerate. That acceleration is scored on a five-point scale with one being very distant (beyond eight years) and five being very near (within a year). Each of the five scoring points corresponds to a ring of the Emerging Technology Horizon graphic (see Figure 1). Those Emerging Technologies and Trends with a score of one (beyond eight years) do not qualify for inclusion on the Horizon. When formulating scores for range, Gartner analysts consider many factors, including:
- The volume of current successful implementations
- The rate of new successful implementations
- The number of implementations required to move from early adopter to early majority
- The growth of the vendor community
- The growth in venture investment
Mass in the Emerging Technology Horizon estimates how substantial an impact the technology or trend will have on existing products and markets. Mass is also scored on a five-point scale — with one being very low impact and five being very high impact. Emerging Technologies and Trends with a score of one are not included in the Horizon. When evaluating mass, Gartner analysts examine the breadth of impact across existing products (specifically, sectors affected) and the extent of the disruption to existing product capabilities. It should be noted that an emerging technology or trend may be expressed in different positions on different Emerging Technology Horizons. This occurs when the maturity of Emerging Technologies and Trends varies based on the scope of Horizon coverage.
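Read mechanically, the two scores simply select a ring and a point size on the Horizon graphic. The small sketch below captures that mapping; the labels for the two intermediate rings are inferred from the ranges named in this research, not stated explicitly.

```python
# Sketch of the scoring described in Note 1: the range score (1-5) selects the
# Horizon ring and the mass score (1-5) sizes the point; a score of 1 on
# either axis excludes the profile. Labels for the two intermediate rings are
# inferred, not stated in this note.
RINGS = {
    5: "Now (0 to 1 year)",
    4: "Short (1 to 3 years)",
    3: "Mid (3 to 6 years)",   # inferred intermediate ring
    2: "Long (6 to 8 years)",  # inferred intermediate ring
}

def horizon_position(range_score, mass_score):
    """Map (range, mass) scores to (ring label, point size), or None if the
    profile does not qualify for the Horizon."""
    if range_score <= 1 or mass_score <= 1:
        return None  # beyond eight years, or very low impact
    return RINGS[range_score], mass_score

print(horizon_position(5, 5))  # ('Now (0 to 1 year)', 5), e.g., Wi-Fi 6
print(horizon_position(1, 4))  # None: does not qualify
```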
Note 2: NetOps
NetOps is a networking approach that incorporates the use of DevOps tools and methods to improve the operational experience, with a more scalable and programmable network infrastructure approach. The primary driver is to reduce the operational burden and costs associated with managing network infrastructure.
Article link: https://www.gartner.com/doc/reprints?id=1-25NFSNR6&ct=210326&st=sb
Nikhil R. Sahni, BA, BAS, BSE, MBA, MPA/ID; Brandon Carrus, BS, MSc; David M. Cutler, AB, PhD
JAMA. Published online October 20, 2021. doi:10.1001/jama.2021.17315
October 20, 2021
Nearly every industry in the US has experienced substantial improvements in productivity over the last 50 years, with 1 major exception: health care. In 2019, the US spent an estimated $3.8 trillion on health care, including an estimated $950 billion on nonclinical, administrative functions, and that number has increased despite major technological enhancements.1,2 This Viewpoint considers several specific steps that can be taken to simplify administration in health care and boost overall productivity in the economy.
To run any organization, a base of administration is necessary. A typical US services industry (for example, legal services, education, and securities and commodities) has approximately 0.85 administrative workers for each person in a specialized role (lawyers, teachers, and financial agents). In US health care, however, there are twice as many administrative staff as physicians and nurses, with an estimated 5.4 million administrative employees in 2017, including more than 1 million who have been added since 2001.3
The administrative complexity of health care is profound. There are multiple transaction nodes, including more than 6000 hospitals, 11 000 nonemployed physician groups (defined as hospital-affiliated and independent practices with 5 or more physicians),4 and 900 private payers; regulatory complexity (compliance requirements such as the Health Insurance Portability and Accountability Act and regulated markets such as Medicare Advantage); and contrasting incentives, for example, market-driven checks and balances, such as prior authorization.4 The sheer complexity associated with so many entities makes administrative simplification difficult.
A new report provides an extensive evaluation of administrative spending to determine which parts are necessary and which could be simplified.2 The analysis dissected profit and loss statements of individual health care organizations, estimated spending on specific processes, and compared administrative spending in health care with that of other industries. The conclusion of the report is that an estimated $265 billion, or approximately 28% of annual administrative spending, could be saved without compromising quality or access by implementing about 30 interventions that could be carried out in the next 3 years.2 This set of interventions works within the structure of today’s US health care system in order to preserve its market nature (eg, multipayer, multiclinician, multi–health care center) and the associated benefits (eg, world-leading innovation in care delivery).
The starting point is 5 functional areas that account for approximately 94% of administrative spending (see eTable in the Supplement). The largest of these is industry-agnostic corporate functions: general administration, human resources, nonclinical information technology, general sales and marketing, and finance. This functional area accounts for an estimated $375 billion of spending annually. The second-largest category is the financial transactions ecosystem, which includes claims processing, revenue cycle management, and prior authorization, accounting for an estimated $200 billion annually. The rest is made up of industry-specific operational functions, such as insurance underwriting (an estimated $135 billion annually), administrative clinical support operations such as case management (an estimated $105 billion annually), and customer and patient services such as call centers (an estimated $80 billion annually).
For each of these functional focus areas, known interventions that could reduce spending without harming patient care were considered. This meant using a financial and operational perspective for the analysis, but also acknowledging that these interventions could and likely will have broader benefits on other outcomes, such as access, quality, patient experience, physician satisfaction, and equity.
“Within” and “Between” Interventions at the Organizational Level
The individual organization level was used as the starting point, by looking at “within” interventions, those that can be controlled and implemented by individual organizations, and “between” interventions, those that require agreement to act between organizations but not broader, industry-wide change. This spending is amenable to interventions that address highly manual, inefficient workflows, such as patient admission and discharge planning in case management; poor data management and lack of standardization, such as nonstandardized submission processes for prior authorization forms; and disconnected tools and systems, for example, the lack of interoperability between the claims systems of payers and hospitals.
Organizations could potentially save an estimated $210 billion annually by addressing these issues.2 The majority of those savings reside in industry-agnostic corporate functions such as finance or human resources. Interventions that affect these functions include automating repetitive work such as generation of standard invoices and financial reports; using analytical tools for human resources departments to better predict and address temporary labor shortages; integrating a suite of tools and solutions to coordinate staffing for nurse managers; and building strategic communications platforms between payers and hospitals to send unified messages. These interventions have been adopted in the marketplace by some payers, hospitals, and physician groups, with a positive return on investment using current technology and nominal investment (that is, once the interventions are fully rolled out, the cost of implementation is generally paid off in about a year by the recurring savings). Research has shown that organizations that aggressively pursue industry-leading productivity programs are twice as likely to be in the top quintile of their peers as measured by economic profit.5
Since many of these interventions are relatively standard, the question that arises is why they have not been implemented to date. A common set of barriers to implementation currently exists, including high levels of complexity and overlapping compliance rules, such as privacy guidelines and requirements on how and where data can be stored; the need to manage labor displacement in an industry that is a driver of workforce growth; contrasting incentives for payers, hospitals, and physician groups in a primarily fee-for-service reimbursement model; and lack of prioritization of administrative simplification by industry leaders. Successful organizations often share common lessons for implementation, including prioritizing administrative simplification as a top strategic initiative; committing to transformational change rather than incremental steps; engaging the broader partnership ecosystem for the right capabilities and investments; and disproportionately investing in the underlying drivers of productivity, such as technology and talent.
“Seismic” Interventions at the Industry Level
Some of the inertia at the organizational level reflects market failures that require industry-level intervention, including the necessary decision-makers and influencers from both the public and private sectors for a given intervention. For example, individual organizations alone cannot change the systemic lack of interoperability in the US health care system. A set of “seismic” interventions were identified that require broad, structural collaboration across the health care industry.2 These include new technology platforms such as the use of a centralized, automated claims clearinghouse; operational alignment such as standardizing medical policies across payers, for example, requiring the same set of diagnostics and clinical data before agreeing to cover a more complicated procedure or drug therapy; and payment design such as globally capitated payment models for segments of the care delivery system. These are meant to be examples of what is possible and are based on analogs from other industries that have undergone this type of change. If currently identified seismic interventions were undertaken, an estimated $105 billion of savings could occur annually.2 These savings would largely occur in the financial transactions ecosystem and industry-specific operational functions such as clinician credentialing and medical records management.
Launching these seismic interventions could be considerably more difficult than the within and between interventions. A framework that focuses on how to promote innovation in the public sector was applied to isolate the mechanism required to enable action for each seismic intervention.6 For example, individual organizations do not experience the financial pressure today that would bring them together to create a centralized automated clearinghouse (which is what happened in banking). Financial incentives could help overcome this inertia.
A set of common actions is necessary to galvanize this change. These actions include using interoperability frameworks to support high-value use cases such as the assembly of longitudinal patient records; creating public-private partnerships such as piloting a complete Health Information Exchange in 1 or more states; and selecting third parties, such as foundations, to research facts to galvanize movement (for example, a foundation-backed randomized trial of administrative interventions to validate the conditions for success).
Across the 3 types of interventions, the analyses suggest that simplifying administration could save the US health care system an estimated $265 billion annually after accounting for $50 billion of overlap between organizational and industry-level interventions.2 These savings, if realized, would be more than 3 times the combined 2019 budgets of the National Institutes of Health ($39 billion), the Health Resources and Services Administration ($12 billion), the Substance Abuse and Mental Health Services Administration ($6 billion), and the Centers for Disease Control and Prevention ($12 billion).7 In per capita terms, $265 billion is approximately $1300 for each adult in the US.
Economic downturn often leads to health system change. With COVID-19 creating enormous disruption to the health care system, a known opportunity to capture more than a quarter-trillion dollars in the next few years without compromising the US health care system’s ability to deliver care could be quite attractive. The sooner health care administration is simplified, the easier it will be for all to engage the US health care system.
Corresponding Author: Nikhil R. Sahni, BA, BAS, BSe, MBA, MPA/ID, McKinsey & Company, 280 Congress St, Ste 1100, Boston, MA 02210 (nikhil_sahni@mckinsey.com).
Published Online: October 20, 2021. doi:10.1001/jama.2021.17315
Conflict of Interest Disclosures: Mr Sahni and Mr Carrus are partners at McKinsey & Company. Dr Cutler reported receipt of personal fees as part of the multidistrict litigation against opioid manufacturers, distributors, and pharmacies and the multidistrict litigation against JUUL.
Additional Contributions: We acknowledge Prakriti Mishra, BA, MBA, McKinsey & Company, a fellow author on Administrative Simplification: How to Save a Quarter-Trillion Dollars in US Healthcare, for her contributions to drafting and revising this Viewpoint. She was not compensated for her contributions.
References
1. Centers for Medicare & Medicaid Services. National health expenditure data. Accessed September 12, 2021. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData
2. Sahni NR, Mishra P, Carrus B, Cutler DM. Administrative Simplification: How to Save a Quarter-Trillion Dollars in US Healthcare. McKinsey & Company. October 20, 2021. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/administrative-simplification-how-to-save-a-quarter-trillion-dollars-in-US-healthcare
3. Sahni NR, Kumar P, Levine E, Singhal S. The Productivity Imperative for Healthcare Delivery in the United States. McKinsey & Company. February 27, 2019. Accessed September 17, 2021. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/the-productivity-imperative-for-healthcare-delivery-in-the-united-states
4. Definitive Healthcare. Top physician groups by size and Medicare charges. Accessed August 6, 2021. https://www.definitivehc.com/resources/healthcare-insights/top-physician-groups-size-medicare-charges
5. Bradley C, Hirt M, Smit S. Strategy Beyond the Hockey Stick: People, Probabilities, and Big Moves to Beat the Odds. John Wiley & Sons Inc; 2018.
6. Sahni NR, Wessel M, Christensen CM. Unleashing breakthrough innovation in government. Stanf Soc Innov Rev. Summer 2013:26-31. doi:10.48558/tb8j-1968
7. Department of Health and Human Services. Putting America’s Health First: FY 2021 President’s Budget for HHS. Accessed September 12, 2021. https://www.hhs.gov/sites/default/files/fy-2021-budget-in-brief.pdf
Article link: https://jamanetwork.com/journals/jama/fullarticle/2785480?
Jon Stresing — At the intersection of AI and the DoD! NVIDIA Account Manager – Army | DISA | JHUAPL | DLA — President, AFCEA Central Maryland Chapter Emerging Leaders
From the Joint Chiefs of Staff (JCS) down to the Battalion/Squadron level, the entire Department of Defense is organized around a deliberate staff structure. At the JCS level, because it is a joint environment like the Combatant Commands (COCOMs), the staffs are broken out into J1-J8. What many don’t know is that each of these staffs directly correlates to an applicable business function in the corporate world. And every business function in the corporate world has been massively disrupted by artificial intelligence (AI) and machine learning (ML). This makes each J-Staff ripe for massive technological disruption.
But how? I will explain below.
The JCS consist of:
-The Chairman
-The Vice Chairman
-The Chief of Staff of the Army
-The Chief of Naval Operations
-The Chief of Staff of the Air Force
-The Commandant of the Marine Corps
-The Chief of the National Guard Bureau
-The Chief of Space Operations
In the Army, at the Pentagon level, we have the Chief of Staff of the Army, who sits on the JCS, and below him/her exists the General Staff (G-Staff), broken out into G1-9. Below the Chief of Staff of the Air Force, the Air Force has an Air Staff (A-Staff). The Marines, Space Force, and Guard all have their own letter designations. From the Brigade/Regiment/Wing/Ship level down to the Battalion/Squadron/Department level, the staff is generally designated as an “S” staff. But the distinction is irrelevant for the purposes of this article.
The illustrious J-Staff is usually confusing to those outside of the military. Honestly, as a lower enlisted soldier, I did not even know what the G/J-Staff was. I did not know until I worked at DISA as a contractor…but I digress. The roles of each staff are below, and generally are the same across the DoD.
– J/G/N/S x1 Manpower and Personnel: Think of this as your human resources department in your company. It is the division of a business that is charged with finding, screening, recruiting, and training job applicants, as well as administering employee-benefit programs.
- HR business functions are currently being massively disrupted, and the x1 staff could take advantage of several capabilities:
o AI for recruitment – AI could predict where the military’s best candidates come from and help analyze candidate information and backgrounds to ensure they are placed in the right jobs. AI using natural language processing and conversational AI could also power chatbots for recruiters.
o One of the 1-Staff’s most important functions is manpower optimization. All businesses right now are under incredible stress to “do more with less,” and the DoD is no different. The right AI algorithms would revolutionize the way the 1-Staff ensures comprehensive Joint Force readiness to meet warfighting requirements.
o On the back end of the HR cycle, AI can be used for retention. Organizations already use models to predict when and why workers will leave, and even to suggest interventions to prevent it (a minimal sketch follows this list).
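To make the retention point concrete, here is a minimal, hedged sketch of such an attrition model. The feature names and synthetic data are assumptions for illustration only; a real 1-Staff model would train on actual personnel records.

```python
# Illustrative attrition-prediction sketch; feature names and data are
# invented stand-ins, not a real DoD personnel schema.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.integers(0, 25, n),   # years of service
    rng.random(n),            # pay percentile within grade
    rng.integers(0, 120, n),  # months since last promotion
    rng.integers(0, 4, n),    # deployments in the last 5 years
])
# Synthetic label: attrition risk loosely tied to stagnation and deployments.
logit = 0.02 * X[:, 2] + 0.5 * X[:, 3] - 0.05 * X[:, 0] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("holdout AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```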
–x2 – Intelligence: Although the DoD has a very important and different function for intelligence, most large businesses conduct some sort of business intelligence (BI) operation. BI is a technology-driven process for analyzing data and delivering actionable information that helps executives, managers, and workers make informed business decisions.
- Intelligence tradecraft brings in data from multiple sources and tries to derive some sort of insight or prediction from that data. Because of that, many x2-Staffs are further ahead in the adoption of AI; however, a few ideas include:
o One of the most basic intelligence disciplines is human intelligence. In today’s modern world, an intelligence officer does not need to travel around the world to find human intelligence, because so many people share so much information openly. This very article and these words I am writing right now will be read by LinkedIn AI algorithms, and likely by algorithms from nefarious actors.
o The electromagnetic spectrum offers a trove of data. However, there are not enough people in America to listen to all of that radio traffic, and that makes SIGINT a perfect use case for AI. AI can sift through all of that data, analyze it, correlate it, perform automatic language translation and speech-to-text, and create labeled transcriptions.
o Imagery intelligence is another area where there are not enough analysts in all of America to sift through every satellite, spy plane, balloon, or drone image and video to derive insight. Long gone are the days of dark rooms with magnifying glasses looking for Russian SCUDs in Cuba. These days, powerful technology analyzes thousands of images a second and derives more and better insight from each image than a human could (a minimal detection sketch follows this list). This same technology is used to save lives as well.
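As one hedged illustration of the imagery point, the sketch below scores a single image with an off-the-shelf torchvision detector. The file name is a placeholder, and a production imagery pipeline would batch and stream frames at far greater scale.

```python
# Illustrative object-detection sketch; "frame.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()
img = transforms.ToTensor()(Image.open("frame.jpg").convert("RGB"))

with torch.no_grad():
    out = model([img])[0]  # dict of boxes, labels, scores for the one image

for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
    if score > 0.7:  # keep only confident detections
        print(f"class {int(label)} at {[round(v, 1) for v in box.tolist()]} ({score:.2f})")
```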
–x3 – Operations: Of all the x-Staffs, this one should be the most understandable to a business. Operations is the execution of the “Mission” of the organization. Most companies have a mission statement. The DoD has a broader mission than any company, but business operations are easily translatable. The best-known applications for AI in the DoD probably fit into military operations in some capacity, including robotics, computer vision, natural language processing, signals processing, and smart bases, among many others. Aside from the typical military operations functions, there are other ways AI can improve the 3 shop.
- All businesses have an operations function, and operations staffs need AI to stay competitive with adversaries:
o The easiest way AI can help is with basic decision making. There are times when “going with your gut” is the right call. However, a scientific, math-based approach to decision making has its strengths as well. AI neural-network decision models will have a place in all future decision-making processes.
o AI could automatically plan and schedule mundane operations, freeing up the 3-Staff to focus on more strategic problems.
o Predictive-maintenance insights flow from the ground up to operations leaders and could be used to forecast the future combat power and readiness of units and of the military as a whole.
–x4 – Logistics: All businesses have some sort of input and output; there are supply chains and distribution chains. The DoD is the largest logistics organization in the world, as its mission requires projecting force across the globe from thousands of bases. Within the DoD, logistics focuses on and aligns with the core logistics capabilities of supply, maintenance operations, deployment and distribution, health services support, engineering, logistics services, and operational contract support.
- Logistics has been using math-based optimization for a long time now. There is still a long way to go for logistics to be fully optimized, and AI can help:
o AI can be used to read labels and track shipments using computer vision. The same technology that scans an image for the 2-Staff can also identify what item needs to go where and track that inventory along the way for chain of custody. Computer vision can also be used to figure out the optimal way to pack a box, truck, or airplane.
o Every day, Amazon uses AI to route millions of packages to end consumers, and it has refined this to the point that there is little statistical room left to optimize each package’s route. The same technology could easily be applied to shipping boots, beans, and bullets, or to tanks, helicopters, boats, and airplanes. Logistics optimization could save the DoD billions of dollars per year.
o Smart warehouses are here, and AI at the edge, in the form of robots, is a great way for 4 shops to bring AI into their organizations.
o One of my other favorite use cases in the 4-Staff is for maintenance operations: predictive maintenance. The DoD spends an incomprehensible amount of money on the maintenance of vehicles such as the Blackhawk. AI for predictive maintenance will save billions in taxpayer money and, most importantly, save lives (a minimal sketch follows this list).
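Since predictive maintenance recurs throughout this piece, here is one minimal, hedged sketch of the idea: classify imminent component failure from sensor summaries. The feature names and data are invented for illustration and are not an actual airframe telemetry schema.

```python
# Illustrative predictive-maintenance sketch on synthetic sensor features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2_000
vibration = rng.normal(1.0, 0.3, n)            # RMS vibration, arbitrary units
oil_temp = rng.normal(90.0, 10.0, n)           # degrees C
hours_since_overhaul = rng.integers(0, 1_500, n)

# Synthetic label: failure becomes likelier with heat, vibration, and wear.
risk = 0.004 * hours_since_overhaul + 2.0 * vibration + 0.05 * oil_temp
fails_soon = risk + rng.normal(0, 1, n) > np.percentile(risk, 85)

X = np.column_stack([vibration, oil_temp, hours_since_overhaul])
model = LogisticRegression(max_iter=1_000)
print("CV accuracy:", round(cross_val_score(model, X, fails_soon, cv=5).mean(), 3))
```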
–x5 – Strategy, Plans and Policy: It is the role of an organization’s strategy department to plan out how the organization will execute the CEO’s (Commander-in-Chief’s) vision. Most, if not all, companies have some sort of Chief Strategy Officer (CSO). Typically, CSOs communicate and implement a company’s strategy internally and externally so that all employees, partners, suppliers, and contractors understand the company-wide strategic plan and how it carries out the company’s overall goals. That said, the most important AI function for the 5 shop will be to implement an AI strategy across the organization. The DoD published an AI Strategy in 2019.
- Because the 5 shop will be instrumental in developing the organization’s AI strategy, all of the use cases outlined in this document will be important to the people working there. Note: there are considerable ethical considerations in using AI for purposes of conflict, and it will be important to position some sort of AI ethics officer there.
o The adversaries of the Department of Defense will be using AI against the United States. AI can develop plans and strategies that humans cannot, and those future capabilities will be used against DoD organizations.
o It will be impossible to develop plans and strategy to counter adversaries’ AI without some sort of AI red-teaming technologies in the 5 shop.
o Recommender engines could significantly decrease the data-analysis time spent developing strategy, plans, and policy.
–x6 – Command, Control, Communications, & Computers/Cyber: The mission of the Joint Staff J6 is to assist the CJCS in providing best military advice, while advancing cyber defense, joint/coalition interoperability, and C2 capabilities required by the Joint Force to preserve the Nation’s security. The head of the x6 directorate for DoD organizations will oftentimes be the CIO of that organization. This is the IT shop for all units. From base switching and internet service all the way up to the CIO office of the DoD, the 6 shop is always where IT lives. Because of the nature of AI, many people believe that this is where AI lives exclusively, and the goal of this thought leadership piece is to explore use cases outside of the J6.
- Because of the nature of AI, the 6 shop is a prime target for numerous AI initiatives:
o AI for cybersecurity is the #1 use case for AI within the office of the CIO.
o Recommender engines can be used to suggest statistically sound actions for engineers to take across the organization (see the sketch after this list).
o Predictive maintenance can play a huge role in assessing the health of an IT environment and predicting when circuits, disk drives, or compute cards may fail.
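As a hedged illustration of the recommender idea above, the sketch below uses simple similarity-weighted voting over a made-up incident/action history to suggest what an engineer might try next.

```python
# Illustrative recommender sketch; the incident/action history is invented.
import numpy as np

# Rows = past incidents, columns = remediation actions taken (1 = used).
history = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 1],
], dtype=float)

def recommend(current: np.ndarray, top_k: int = 2) -> np.ndarray:
    """Rank actions by similarity-weighted votes from past incidents."""
    norms = np.linalg.norm(history, axis=1) * np.linalg.norm(current)
    sims = history @ current / np.where(norms == 0, 1, norms)  # cosine similarity
    scores = sims @ history            # weight each action by incident similarity
    scores[current > 0] = -np.inf      # don't re-suggest actions already taken
    return np.argsort(scores)[::-1][:top_k]

# For a new incident where only action 0 has been tried so far:
print(recommend(np.array([1.0, 0.0, 0.0, 0.0])))
```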
–x7 – Joint Force Development OR Education and Training OR Exercises and Training: The J-7 is responsible for the six functions of joint force development: doctrine, education, concept development and experimentation, training, exercises, and lessons learned. Typically, this part of the staff is responsible for the training of the Soldiers/Airmen/Sailors/Marines and the civilian workforce. The 7-Staff will have an important role in the development of AI across the organization, because it is responsible for training all of the people within it.
- AI in education is nothing new, and most modern education has some sort of AI in the background. It will be important for the DoD to recognize this and implement several AI initiatives into the 7-Staff.
o AI can be used to create realistic non-player characters in virtual training environments.
o With a large part of modern learning being virtual, AI can help test and tailor curriculums to a learner’s needs and goals.
o Utilizing augmented and virtual reality, AI will create simulated environments for training in hands-on military occupations, from Infantry to Mechanic.
–x8 – Force Structure, Resources, and Assessment Directorate (or Integration of Capabilities & Resources in some capacity): The J-8 Directorate is charged with providing support to the CJCS for evaluating and developing force structure requirements. The J-8 conducts joint, bilateral, and multilateral war games and interagency politico-military seminars and simulations. It develops, maintains, and improves the models, techniques, and capabilities used by the Joint Staff and combatant commands to conduct studies and analyses for the CJCS. This is where the rubber meets the road for making sure everything works together and that whichever service branch the staff serves can effectively conduct its mission. The Army describes the 8 staff as “the Army’s lead for matching available resources to the defense strategy and the Army plan.” The 8-Staff usually holds the finance function of the organization as well.
- The 8-Staff has a variety of roles well suited to AI, from finance to running massive statistical analyses of optimized structure and outcomes. AI could further enable these capabilities and let leadership know whether the force is optimized:
o Within the comptroller’s office, AI can help eliminate human error and decrease the burden on accounting professionals, leaving humans to do more high-value work.
o AI techniques can help construct the organization’s portfolios based on more accurate risk and return forecasts and more complex constraints (a minimal sketch follows this list).
o Predictive maintenance directly correlates to predictive combat power and asset optimization, and AI can assist the analytics needed to derive optimal outcomes for an organization.
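To ground the portfolio bullet above, here is a minimal, hedged sketch of mean-variance portfolio construction. The return and covariance figures are invented placeholders, and a simple random search stands in for a proper optimizer.

```python
# Illustrative mean-variance allocation sketch with invented numbers.
import numpy as np

mu = np.array([0.08, 0.05, 0.12])          # forecast returns per program
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.02, 0.00],
                [0.00, 0.00, 0.09]])        # forecast risk (covariance matrix)
risk_aversion = 3.0

rng = np.random.default_rng(2)
best_w, best_u = None, -np.inf
for _ in range(50_000):                     # random search over allocations
    w = rng.dirichlet(np.ones(len(mu)))     # weights sum to 1, no shorting
    u = mu @ w - risk_aversion * (w @ cov @ w)
    if u > best_u:
        best_w, best_u = w, u

print("allocation:", np.round(best_w, 3), "utility:", round(best_u, 4))
```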
–x9 – This could be anything depending on where you are. There is no 9 staff at the JCS. The Army uses the G9 for Installations. The Navy uses the N9 for warfare systems. USCYBERCOM uses the J9 for Advanced Concepts and Technologies and Technical Outreach. And the Air Force uses the A9 for Studies, Analysis, and Assessments.
- Because of the depth and breadth of the various x9 staffs out there, I will simply end by pointing to the variety of use cases above.
Artificial intelligence holds tremendous promise for improvements across each of the Department of Defense’s staff functions. Each staff function directly correlates to a business process that every single industry needs. And every single industry has been disrupted by AI. AI can effect positive change across all of the business functions of the DoD, not just robotics or ISR.
And again, the business functions of United States Department of Defense are ripe for massive technological disruption.
Article link: https://www.linkedin.com/pulse/business-functions-united-states-department-defense-ripe-jon-stresing/
Reach out and message me if you would like to discuss any of these use cases further!
Published by Jon Stresing
In this article I break down the United States Department of Defense’s J-Staff and discuss AI use cases for each. Most of my AI discussions focus around operational activities, so I want to explore with my LinkedIn followers use cases outside of the typical ones you hear about. As I started putting my thoughts to paper around the legitimate DoD business use cases for AI that I see from our civilian-sector counterparts, it dawned on me that the DoD has all of these functions organized in a very consistent way across all services and echelons. I am hoping to spark a conversation around AI across all DoD functions, not just operations, and to get more DoD employees thinking about how they can start in their sections today. This HAS been done before, and YOU can do it! For folks unfamiliar with the J-Staff, I would like to introduce you to it; please reference this document whenever you hear the words “J1” or “S3.” Thank you to Steven Beynon and Eric M. Evans for giving me tips and some editing help, to Lindy Riggs for further editing help and for making sure I was in compliance with the English language, and to Margaret Amori for giving it the last glance. I hope you all find this helpful and learn something new!
March 29, 2019
Inclusive leaders are:
– Visibly committed to diversity
– Humble
– Aware of their own bias
– Curious about others
– Culturally intelligent
– Effective collaborators
Summary.
Companies increasingly rely on diverse, multidisciplinary teams that combine the collective capabilities of women and men, people of different cultural heritage, and younger and older workers. But simply throwing a mix of people together doesn’t guarantee high performance; it requires inclusive leadership — leadership that assures that all team members feel they are treated respectfully and fairly, are valued and sense that they belong, and are confident and inspired. Research involving 3,500 ratings by employees of 450 leaders found that inclusive leaders share six behaviors — and that leaders often overestimate how inclusive they really are. These are the behaviors: visible commitment, humility, awareness of bias, curiosity about others, cultural intelligence, and effective collaboration.
Companies increasingly rely on diverse, multidisciplinary teams that combine the collective capabilities of women and men, people of different cultural heritage, and younger and older workers. But simply throwing a mix of people together doesn’t guarantee high performance; it requires inclusive leadership — leadership that assures that all team members feel they are treated respectfully and fairly, are valued and sense that they belong, and are confident and inspired.
Inclusiveness isn’t just nice to have on teams. Our research shows that it directly enhances performance. Teams with inclusive leaders are 17% more likely to report that they are high performing, 20% more likely to say they make high-quality decisions, and 29% more likely to report behaving collaboratively. What’s more, we found that a 10% improvement in perceptions of inclusion increases work attendance by almost 1 day a year per employee, reducing the cost of absenteeism.
What specific actions can leaders take to be more inclusive? To answer this question, we surveyed more than 4,100 employees about inclusion, interviewed those identified by followers as highly inclusive, and reviewed the academic literature on leadership. From this research, we identified 17 discrete sets of behaviors, which we grouped into six categories (or “traits”), all of which are equally important and mutually reinforcing. We then built a 360-degree assessment tool for use by followers to rate the presence of these traits among leaders. The tool has now been used by over 3,500 raters to evaluate over 450 leaders. The results are illuminating.
These are the six traits or behaviors that we found distinguish inclusive leaders from others:
Visible commitment: They articulate authentic commitment to diversity, challenge the status quo, hold others accountable and make diversity and inclusion a personal priority.
Humility: They are modest about capabilities, admit mistakes, and create the space for others to contribute.
Awareness of bias: They show awareness of personal blind spots as well as flaws in the system and work hard to ensure meritocracy.
Curiosity about others: They demonstrate an open mindset and deep curiosity about others, listen without judgment, and seek with empathy to understand those around them.
Cultural intelligence: They are attentive to others’ cultures and adapt as required.
Effective collaboration: They empower others, pay attention to diversity of thinking and psychological safety, and focus on team cohesion.
These traits may seem like the obvious ones, similar to those that are broadly important for good leadership. But the difference between assessing and developing good leadership generally versus inclusive leadership in particular lies in three specific insights.
First, most leaders in the study were unsure about whether others experienced them as inclusive or not. More particularly, only a third (36%) saw their inclusive leadership capabilities as others did, another third (32%) overrated their capabilities and the final third (33%) underrated their capabilities. Even more importantly, rarely were leaders certain about the specific behaviors that actually have an impact on being rated as more or less inclusive.
Second, being rated as an inclusive leader is not determined by averaging all members’ scores but rather by the distribution of raters’ scores. For example, it’s not enough that, on average, raters agree that a leader “approaches diversity and inclusiveness wholeheartedly.” Using a five-point scale (ranging from “strongly agree” to “strongly disagree”), an average rating could mean that some team members disagree while others agree. To be an inclusive leader, one must ensure that everyone agrees or strongly agrees that they are treated fairly and respectfully, are valued, have a sense of belonging, and are psychologically safe.
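A tiny numeric illustration of this second insight, with invented ratings: two teams can share the same mean while only one shows the across-the-board agreement that inclusion requires.

```python
# Same average rating, very different distributions (illustrative numbers).
team_a = [4, 4, 4, 4, 4]  # everyone agrees
team_b = [5, 5, 5, 4, 1]  # one member feels excluded

for name, ratings in [("A", team_a), ("B", team_b)]:
    print(f"Team {name}: mean={sum(ratings) / len(ratings):.1f}, min={min(ratings)}")
# Both means are 4.0, but only Team A's minimum shows that everyone agrees.
```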
Third, inclusive leadership is not about occasional grand gestures, but regular, smaller-scale comments and actions. By comparing the qualitative feedback regarding the most inclusive (top 25%) and the least inclusive (bottom 25%) of leaders in our sample, we discovered that inclusive leadership is tangible and practiced every day.
These verbatim responses from our assessments illustrate some of the tangible behaviors of the most inclusive leaders in the study.
- Shares personal weaknesses: “[This leader] will openly ask about information that she is not aware of. She demonstrates a humble unpretentious work manner. This puts others at ease, enabling them to speak out and voice their opinions, which she values.”
- Learns about cultural differences: “[This leader] has taken the time to learn the ropes (common words, idioms, customs, likes/dislikes) and the cultural pillars.”
- Acknowledges team members as individuals: “[This leader] leads a team of over 100 people and yet addresses every team member by name, knows the work stream that they support and the work that they do.”
The following verbatims illustrate some of the behaviors of the least inclusive leaders:
- Overpowers others: “He can be very direct and overpowering which limits the ability of those around him to contribute to meetings or participate in conversations.”
- Displays favoritism: “Work is assigned to the same top performers, creating unsustainable workloads. [There is a] need to give newer team members opportunities to prove themselves.”
- Discounts alternative views: “[This leader] can have very set ideas on specific topics. Sometimes it is difficult to get an alternative view across. There is a risk that his team may hold back from bringing forward challenging and alternative points of view.”
What leaders say and do has an outsized impact on others, but our research indicates that this effect is even more pronounced when they are leading diverse teams. Subtle words and acts of exclusion by leaders, or overlooking the exclusive behaviors of others, easily reinforce the status quo. It takes energy and deliberate effort to create an inclusive culture, and that starts with leaders paying much more attention to what they say and do on a daily basis and making adjustments as necessary. Here are four ways for leaders to get started:
Know your inclusive-leadership shadow: Seek feedback on whether you are perceived as inclusive, especially from people who are different from you. This will help you to see your blind spots, strengths, and development areas. It will also signal that diversity and inclusion are important to you. Scheduling regular check-ins with members of your team to ask how you can make them feel more included also sends the message.
Be visible and vocal: Tell a compelling and explicit narrative about why being inclusive is important to you personally and the business more broadly. For example, share your personal stories at public forums and conferences.
Deliberately seek out difference: Give people on the periphery of your network the chance to speak up, invite different people to the table, and catch up with a broader network. For example, seek out opportunities to work with cross-functional or multi-disciplinary teams to leverage diverse strengths.
Check your impact: Look for signals that you are having a positive impact. Are people copying your role modeling? Is a more diverse group of people sharing ideas with you? Are people working together more collaboratively? Ask a trusted advisor to give you candid feedback on the areas you have been working on.
There’s more to be learned about how to become an inclusive leader and harness the power of diverse teams, but one thing is clear: leaders who consciously practice inclusive leadership and actively develop their capability will see the results in the superior performance of their diverse teams.
- Juliet Bourke is a partner in Human Capital, Deloitte Australia, where she leads the Diversity and Inclusion Consulting practice and co-leads the Leadership practice. She is the author of Which Two Heads Are Better Than One: How diverse teams create breakthrough ideas and make smarter decisions. Email her at julietbourke@deloitte.com.au
- Andrea Titus is a consultant in Human Capital, Deloitte Australia, and a PhD candidate in organizational psychology at Macquarie University. Email her at aespedido@deloitte.com.au
Article link: https://hbr.org/2019/03/why-inclusive-leaders-are-good-for-organizations-and-how-to-become-one?
NXP Semiconductors N.V. has kicked off what could be the start of a new trend in the computer chipmaking industry, announcing that it’s moving the vast majority of its electronics design automation workloads to Amazon Web Services Inc.’s public cloud platform.
The company said today it has selected AWS as its preferred cloud provider. By moving its EDA workloads to Amazon’s cloud, NXP will gain increased efficiency and more compute power that should help it to design a new generation of faster and more powerful computer chips for the automotive, industrial “internet of things,” mobile and communications infrastructure sectors.
According to Amazon, NXP has already seen benefits that include enhanced collaboration and increased EDA throughput since moving to AWS, while reducing costs and gaining more time to focus on actual design, rather than managing compute resources.
More interesting, though, are the expected long-term benefits of moving such a compute-heavy workload as EDA to the cloud. NXP says it believes it will be able to achieve some important process improvements that will revolutionize the way it designs and tests its central processing units.
As the company explains, each new chip design is put through extensive testing and validation before it’s manufactured to ensure it is functionally safe and secure and delivers the expected performance. This work includes front-end design workflows such as performance simulation and verification, as well as back-end workloads around timing and power analysis, design rule checks and other applications necessary to prepare a new chip for production.
Previously, NXP, like other chipmakers, had always done this on-premises in internal data centers with fixed compute capacity. The increasing complexity of newer chips means these processes can take many months or even years to complete, and they require accurate forecasting and installation of new compute infrastructure.
As a result, it makes sense for NXP to move to the cloud, where it can tap into AWS’ advanced infrastructure and the scale and agility it needs to advance multiple chip design and testing projects at the same time. The move will enable it to run dozens of performance simulations in parallel, resulting in faster overall design times.
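The fan-out pattern behind “dozens of performance simulations in parallel” is straightforward to sketch. In the hedged example below, run_simulation is a hypothetical stand-in for whatever EDA job a design team actually launches; on AWS the same pattern would submit jobs to a fleet of cloud instances rather than local processes.

```python
# Illustrative fan-out of independent simulation jobs across local processes.
# run_simulation is a hypothetical placeholder for a real EDA invocation.
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_simulation(variant: str) -> tuple:
    """Placeholder EDA job; returns a made-up timing-margin score."""
    return variant, (hash(variant) % 100) / 100.0

variants = [f"core_rev_{i}" for i in range(32)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(run_simulation, v) for v in variants]
        for future in as_completed(futures):
            name, margin = future.result()
            print(f"{name}: timing margin {margin:.2f}")
```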
Shifting to the cloud also enables NXP to leverage key AWS analytics and machine learning services that can aid its research and development efforts. For instance, NXP is already using Amazon QuickSight, a machine learning-powered business intelligence service, to boost workflow efficiencies. By rapidly translating the results from one step of testing into modifications for another, it can reduce the time it takes to iterate on chip designs, NXP said.
The company also makes use of Amazon SageMaker, a service that’s used to build, train and deploy machine learning models in the cloud and at the edge, to optimize the way it structures compute, storage and third-party software application licenses.
NXP also benefits from the wide range of specialized instances available on AWS that allow it to achieve the perfect balance of price/performance for its EDA workflows.
Constellation Research Inc. analyst Holger Mueller told SiliconANGLE the key advantage of cloud platforms such as AWS is that they allow “commercial elasticity” for enterprises.
“In the roller-coaster bound semiconductor industry it makes sense to evaluate capital spending demand, so it’s a smart idea by NXP to move its EDA workloads to the cloud,” Mueller said. “EDA is early in the value chain for semiconductor makers, and not all design work leads to actual chips that are made.”
Charles King, an analyst with Pund-IT Inc., said moving its EDA workloads to Amazon could provide compelling value to relatively smaller silicon players compared with the biggest ones, such as Intel Corp. “Offloading compute and storage to a cloud vendor should reduce capital expenditures, and may also result in the company reducing IT headcount,” King said. “It wouldn’t be surprising if other vendors in NXP’s class are contemplating similar moves, though I doubt major semiconductor players will follow suit.”
NXP Semiconductors Chief Information Officer Olli Hyyppa said cloud-based EDA is necessary to accelerate semiconductor innovation and get new designs to market faster.
“AWS gives us the best scale, global presence, and selection of compute and storage options, with continuous improvements in price performance, that we need,” he said. “This will give precious time back to our design engineers to focus on innovation and lead the transformation of the semiconductor industry.”
Article link: https://siliconangle.com/2021/10/14/nxp-semiconductors-moves-chip-design-workloads-amazons-cloud/
How to go from a few teams to hundreds, by Darrell K. Rigby, Jeff Sutherland, and Andy Noble
From the Magazine (May–June 2018)
Summary.
When implemented correctly, agile innovation teams almost always result in higher team productivity and morale, faster time to market, better quality, and lower risk than traditional approaches can achieve. What if a company were to launch dozens, hundreds, or even thousands of agile teams? Could whole segments of the business learn to operate in this manner?
As enticing as such a prospect is, turning it into a reality can be challenging. Companies often struggle to know which functions should be reorganized into multidisciplinary agile teams and which should not. And it’s not unusual to launch dozens of new agile teams only to see them bottlenecked by slow-moving bureaucracies.
The authors, who have studied the scaling of agile at hundreds of companies, share what they’ve learned about how to do it effectively. Leaders should use agile methodologies themselves and create a taxonomy of opportunities to set priorities and break the journey into small steps. Workstreams should be modularized and then seamlessly integrated. Functions not reorganized into agile teams should learn to operate with agile values. And the annual budgeting process should be complemented with a VC-like approach to funding.
By now most business leaders are familiar with agile innovation teams. These small, entrepreneurial groups are designed to stay close to customers and adapt quickly to changing conditions. When implemented correctly, they almost always result in higher team productivity and morale, faster time to market, better quality, and lower risk than traditional approaches can achieve.
Naturally, leaders who have experienced or heard about agile teams are asking some compelling questions. What if a company were to launch dozens, hundreds, or even thousands of agile teams throughout the organization? Could whole segments of the business learn to operate in this manner? Would scaling up agile improve corporate performance as much as agile methods improve individual team performance?
In today’s tumultuous markets, where established companies are furiously battling assaults from start-ups and other insurgent competitors, the prospect of a fast-moving, adaptive organization is highly appealing. But as enticing as such a vision is, turning it into a reality can be challenging. Companies often struggle to know which functions should be reorganized into multidisciplinary agile teams and which should not. And it’s not unusual to launch hundreds of new agile teams only to see them bottlenecked by slow-moving bureaucracies.
We have studied the scaling up of agile at hundreds of companies, including small firms that run the entire enterprise with agile methods; larger companies that, like Spotify and Netflix, were born agile and have become more so as they’ve grown; and companies that, like Amazon and USAA (the financial services company for the military community), are making the transition from traditional hierarchies to more-agile enterprises. Along with the many success stories are some disappointments. For example, one prominent industrial company’s attempts over the past five years to innovate like a lean start-up have not yet generated the financial results sought by activist investors and the board of directors, and several senior executives recently resigned.
Our studies show that companies can scale up agile effectively and that doing so creates substantial benefits. But leaders must be realistic. Not every function needs to be organized into agile teams; indeed, agile methods aren’t well suited to some activities. Once you begin launching dozens or hundreds of agile teams, however, you can’t just leave the other parts of the business alone. If your newly agile units are constantly frustrated by bureaucratic procedures or a lack of collaboration between operations and innovation teams, sparks will fly from the organizational friction, leading to meltdowns and poor results. Changes are necessary to ensure that the functions that don’t operate as agile teams support the ones that do.
Leading Agile by Being Agile
For anyone who isn’t familiar with agile, here’s a short review. Agile teams are best suited to innovation—that is, the profitable application of creativity to improve products and services, processes, or business models. They are small and multidisciplinary. Confronted with a large, complex problem, they break it into modules, develop solutions to each component through rapid prototyping and tight feedback loops, and integrate the solutions into a coherent whole. They place more value on adapting to change than on sticking to a plan, and they hold themselves accountable for outcomes (such as growth, profitability, and customer loyalty), not outputs (such as lines of code or number of new products).
Conditions are ripe for agile teams in any situation where problems are complex, solutions are at first unclear, project requirements are likely to change, close collaboration with end users is feasible, and creative teams will outperform command-and-control groups. Routine operations such as plant maintenance, purchasing, and accounting are less fertile ground. Agile methods caught on first in IT departments and are now widely used in software development. Over time they have spread into functions such as product development, marketing, and even HR. (See “Embracing Agile,” HBR, May 2016, and “HR Goes Agile,” HBR, March–April 2018.)
Agile teams work differently from chain-of-command bureaucracies. They are largely self-governing: Senior leaders tell team members where to innovate but not how. And the teams work closely with customers, both external and internal. Ideally, this puts responsibility for innovation in the hands of those who are closest to customers. It reduces layers of control and approval, thereby speeding up work and increasing the teams’ motivation. It also frees up senior leaders to do what only they can do: create and communicate long-term visions, set and sequence strategic priorities, and build the organizational capabilities to achieve those goals.
When leaders haven’t themselves understood and adopted agile approaches, they may try to scale up agile the way they have attacked other change initiatives: through top-down plans and directives. The track record is better when they behave like an agile team. That means viewing various parts of the organization as their customers—people and groups whose needs differ, are probably misunderstood, and will evolve as agile takes hold. The executive team sets priorities and sequences opportunities to improve those customers’ experiences and increase their success. Leaders plunge in to solve problems and remove constraints rather than delegate that work to subordinates. The agile leadership team, like any other agile team, has an “initiative owner” who is responsible for overall results and a facilitator who coaches team members and helps keep everyone actively engaged.
Bosch, a leading global supplier of technology and services with more than 400,000 associates and operations in 60-plus countries, took this approach. As leaders began to see that traditional top-down management was no longer effective in a fast-moving, globalized world, the company became an early adopter of agile methods. But different business areas required different approaches, and Bosch’s first attempt to implement what it called a “dual organization”—one in which hot new businesses were run with agile teams while traditional functions were left out of the action—compromised the goal of a holistic transformation. In 2015 members of the board of management, led by CEO Volkmar Denner, decided to build a more unified approach to agile teams. The board acted as a steering committee and named Felix Hieronymi, a software engineer turned agile expert, to guide the effort.
At first Hieronymi expected to manage the assignment the same way Bosch managed most projects: with a goal, a target completion date, and regular status reports to the board. But that approach felt inconsistent with agile principles, and the company’s divisions were just too skeptical of yet another centrally organized program. So the team shifted gears. “The steering committee turned into a working committee,” Hieronymi told us. “The discussions got far more interactive.” The team compiled and rank-ordered a backlog of corporate priorities that was regularly updated, and it focused on steadily removing companywide barriers to greater agility. Members fanned out to engage division leaders in dialogue. “Strategy evolved from an annual project to a continuous process,” Hieronymi says. “The members of the management board divided themselves into small agile teams and tested various approaches—some with a ‘product owner’ and an ‘agile master’—to tackle tough problems or work on fundamental topics. One group, for instance, drafted the 10 new leadership principles released in 2016. They personally experienced the satisfaction of increasing speed and effectiveness. You can’t gain this experience by reading a book.” Today Bosch operates with a mix of agile teams and traditionally structured units. But it reports that nearly all areas have adopted agile values, are collaborating more effectively, and are adapting more quickly to increasingly dynamic marketplaces.
Getting Agile Rolling
At Bosch and other advanced agile enterprises, the visions are ambitious. In keeping with agile principles, however, the leadership team doesn’t plan every detail in advance. Leaders recognize that they do not yet know how many agile teams they will require, how quickly they should add them, and how they can address bureaucratic constraints without throwing the organization into chaos. So they typically launch an initial wave of agile teams, gather data on the value those teams create and the constraints they face, and then decide whether, when, and how to take the next step. This lets them weigh the value of increasing agility (in terms of financial results, customer outcomes, and employee performance) against its costs (in terms of both financial investments and organizational challenges). If the benefits outweigh the costs, leaders continue to scale up agile—deploying another wave of teams, unblocking constraints in less agile parts of the organization, and repeating the cycle. If not, they can pause, monitor the market environment, and explore ways to increase the value of the agile teams already in place (for instance, by improving the prioritization of work or upgrading prototyping capabilities) and decrease the costs of change (by publicizing agile successes or hiring experienced agile enthusiasts).
To get started on this test-and-learn cycle, leadership teams typically employ two essential tools: a taxonomy of potential teams and a sequencing plan reflecting the company’s key priorities. Let’s first look at how each can be employed and then explore what more is needed to tackle large-scale, long-term agile initiatives.
Create a taxonomy of teams.
Just as agile teams compile a backlog of work to be accomplished in the future, companies that successfully scale up agile usually begin by creating a full taxonomy of opportunities. Following agile’s modular approach, they may break the taxonomy into three components—customer experience teams, business process teams, and technology systems teams—and then integrate them. The first component identifies all the experiences that could significantly affect external and internal customer decisions, behaviors, and satisfaction. These can usually be divided into a dozen or so major experiences (for example, one of a retail customer’s major experiences is to buy and pay for a product), which in turn can be divided into dozens of more-specific experiences (the customer may need to choose a payment method, use a coupon, redeem loyalty points, complete the checkout process, and get a receipt). The second component examines the relationships among these experiences and key business processes (improved checkout to reduce time in lines, for instance), aiming to reduce overlapping responsibilities and increase collaboration between process teams and customer experience teams. The third focuses on developing technology systems (such as better mobile-checkout apps) to improve the processes that will support customer experience teams.
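One way to make the three-component taxonomy concrete is as a small data structure. In this hedged sketch, the class, field names, and example entries are assumptions for illustration, not a published standard; the retail examples echo the ones in the paragraph above.

```python
# Illustrative taxonomy structure; names and entries are invented examples.
from dataclasses import dataclass, field

@dataclass
class TeamOpportunity:
    name: str
    component: str                  # "experience" | "process" | "technology"
    supports: list = field(default_factory=list)  # links across components

taxonomy = [
    TeamOpportunity("Buy and pay for a product", "experience"),
    TeamOpportunity("Checkout redesign", "process",
                    supports=["Buy and pay for a product"]),
    TeamOpportunity("Mobile-checkout app", "technology",
                    supports=["Checkout redesign"]),
]

# A staffing question the text raises: how many technology teams would we need?
print(sum(t.component == "technology" for t in taxonomy))
```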
The taxonomy of a $10 billion business might identify anywhere from 350 to 1,000 or more potential teams. Those numbers sound daunting, and senior executives are often loath even to consider so much change (“How about if we try two or three of these things and see how it goes?”). But the value of a taxonomy is that it encourages exploration of a transformational vision while breaking the journey into small steps that can be paused, turned, or halted at any time. It also helps leaders spot constraints. Once you’ve identified the teams you could launch and the sorts of people you would need to staff them, for instance, you need to ask: Do we have those people? If so, where are they? A taxonomy reveals your talent gaps and the kinds of people you must hire or retrain to fill them. Leaders can also see how each potential team fits into the goal of delivering better customer experiences.
USAA has more than 500 agile teams up and running and plans to add 100 more in 2018. The taxonomy is fully visible to everyone across the enterprise. “If you don’t have a really good taxonomy, you get redundancy and duplication,” COO Carl Liebert told us. “I want to walk into an auditorium and ask, ‘Who owns the member’s change-of-address experience?’ And I want a clear and confident response from a team that owns that experience, whether a member is calling us, logging into our website on a laptop, or using our mobile app. No finger-pointing. No answers that begin with ‘It’s complicated.’”
USAA’s taxonomy ties the activities of agile teams to the people responsible for business units and product lines. The goal is to ensure that managers responsible for specific parts of the P&L understand how cross-functional teams will influence their results. The company has senior leaders who act as general managers in each line of business and are fully accountable for business results. But those leaders rely on customer-focused, cross-organizational teams to get much of the work done. The company also depends on technology and digital resources assigned to the experience owners; the goal here is to ensure that business leaders have the end-to-end resources to deliver the outcomes they have committed to. The intent of the taxonomy is to clarify how to engage the right people in the right work without creating confusion. This kind of link is especially important when hierarchical organizational structures do not align with customer behaviors. For example, many companies have separate structures and P&Ls for online and offline operations—but customers want seamlessly integrated omnichannel experiences. A clear taxonomy that launches the right cross-organizational teams makes such alignment possible.
Sequence the transition.
Taxonomy in hand, the leadership team sets priorities and sequences initiatives. Leaders must consider multiple criteria, including strategic importance, budget limitations, availability of people, return on investment, cost of delays, risk levels, and interdependencies among teams. The most important—and the most frequently overlooked—are the pain points felt by customers and employees on the one hand and the organization’s capabilities and constraints on the other. These determine the right balance between how fast the rollout should proceed and how many teams the organization can handle simultaneously.
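A hedged sketch of that sequencing exercise: score each candidate team launch against the criteria named above. The candidates, scales, and weights below are invented placeholders that a real leadership team would debate and calibrate rather than adopt as given.

```python
# Illustrative prioritization scoring; all numbers are invented placeholders.
candidates = {
    # (strategic importance, customer pain, ROI, risk) on a 1-5 scale
    "change-of-address experience": (5, 5, 4, 2),
    "invoice automation":           (3, 2, 5, 1),
    "omnichannel checkout":         (4, 5, 3, 4),
}
weights = (0.35, 0.30, 0.25, -0.10)  # risk counts against a candidate

def priority(scores):
    return sum(w * s for w, s in zip(weights, scores))

for name, scores in sorted(candidates.items(), key=lambda kv: priority(kv[1]),
                           reverse=True):
    print(f"{priority(scores):5.2f}  {name}")
```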
A few companies, facing urgent strategic threats and in need of radical change, have pursued big-bang, everything-at-once deployments in some units. For example, in 2015 ING Netherlands anticipated rising customer demand for digital solutions and increasing incursions by new digital competitors (“fintechs”). The management team decided to move aggressively. It dissolved the organizational structures of its most innovative functions, including IT development, product management, channel management, and marketing—essentially abolishing everyone’s job. Then it created small agile “squads” and required nearly 3,500 employees to reapply for 2,500 redesigned positions on those squads. About 40% of the people filling the positions had to learn new jobs, and all had to profoundly change their mindset. (See “One Bank’s Agile Team Experiment,” HBR, March–April 2018.)
But big-bang transitions are hard. They require total leadership commitment, a receptive culture, enough talented and experienced agile practitioners to staff hundreds of teams without depleting other capabilities, and highly prescriptive instruction manuals to align everyone’s approach. They also require a high tolerance of risk, along with contingency plans to deal with unexpected breakdowns. ING continues to iron out wrinkles as it expands agile throughout the organization.
Companies short on those assets are better off rolling out agile in sequenced steps, with each unit matching the implementation of opportunities to its capabilities. At the beginning of its agile initiative, the advanced technology group at 3M Health Information Systems launched eight to 10 teams every month or two; now, two years in, more than 90 teams are up and running. 3M’s Corporate Research Systems Lab got started later but launched 20 teams in three months.
Whatever the pace or endpoint, results should begin showing up quickly. Financial results may take a while—Jeff Bezos believes that most initiatives take five to seven years to pay dividends for Amazon—but positive changes in customer behavior and team problem solving provide early signs that initiatives are on the right track. “Agile adoption has already enabled accelerated product deliveries and the release of a beta application six months earlier than originally planned,” says Tammy Sparrow, a senior program manager at 3M Health Information Systems.
Division leaders can determine the sequencing just as any agile team would. Start with the initiatives that offer potentially the greatest value and the most learning. SAP, the enterprise software company, was an early scaler of agile, launching the process a decade ago. Its leaders expanded agile first in its software development units—a highly customer-centric segment where they could test and refine the approach. They established a small consulting group to train, coach, and embed the new way of working, and they created a results tracker so that everyone could see the teams’ gains. “Showing concrete examples of impressive productivity gains from agile created more and more pull from the organization,” says Sebastian Wagner, who was then a consulting manager in that group. Over the next two years the company rolled out agile to more than 80% of its development organizations, creating more than 2,000 teams. People in sales and marketing saw the need to adapt in order to keep up, so those areas went next. Once the front end of the business was moving at speed, it was time for the back end to make the leap, so SAP shifted its group working on internal IT systems to agile.
Too many companies make the mistake of going for easy wins. They put teams into offsite incubators. They intervene to create easy workarounds to systemic obstacles. Such coddling increases the odds of a team’s success, but it doesn’t produce the learning environment or organizational changes necessary to scale dozens or hundreds of teams. A company’s early agile teams carry the burden of destiny. Testing them, just like testing any prototype, should reflect diverse, realistic conditions. Like SAP, the most successful companies focus on vital customer experiences that cause the greatest frustrations among functional silos.
Still, no agile team should launch unless and until it is ready to begin. Ready doesn’t mean planned in detail and guaranteed to succeed. It means that the team is:
- focused on a major business opportunity with a lot at stake
- responsible for specific outcomes
- trusted to work autonomously—guided by clear decision rights, properly resourced, and staffed with a small group of multidisciplinary experts who are passionate about the opportunity
- committed to applying agile values, principles, and practices
- empowered to collaborate closely with customers
- able to create rapid prototypes and fast feedback loops
- supported by senior executives who will address impediments and drive adoption of the team’s work
Following this checklist will help you plot your sequence for the greatest impact on both customers and the organization.
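Treated literally, the checklist becomes a launch gate. Here is a minimal sketch, with hypothetical field names, that blocks a launch until every condition is met:

from dataclasses import dataclass, fields

@dataclass
class LaunchReadiness:
    """One flag per checklist item; all must be true before launch."""
    major_opportunity: bool
    specific_outcomes: bool
    autonomous_and_resourced: bool
    committed_to_agile_values: bool
    close_to_customers: bool
    rapid_prototyping: bool
    executive_sponsorship: bool

    def ready(self) -> bool:
        return all(getattr(self, f.name) for f in fields(self))

    def blockers(self) -> list[str]:
        return [f.name for f in fields(self) if not getattr(self, f.name)]

team = LaunchReadiness(True, True, True, True, True, False, True)
if not team.ready():
    print("Hold the launch; unresolved:", team.blockers())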
Master large-scale agile initiatives.
Many executives have trouble imagining that small agile teams can attack large-scale, long-term projects. But in principle there is no limit to the number of agile teams you can create or how large the initiative can be. You can establish “teams of teams” that work on related initiatives—an approach that is highly scalable. Saab’s aeronautics business, for instance, has more than 100 agile teams operating across software, hardware, and fuselage for its Gripen fighter jet—a $43 million item that is certainly one of the most complex products in the world. It coordinates through daily team-of-teams stand-ups. At 7:30 AM each frontline agile team holds a 15-minute meeting to flag impediments, some of which cannot be resolved within that team. At 7:45 the impediments requiring coordination are escalated to a team of teams, where leaders work to either settle or further escalate issues. This approach continues, and by 8:45 the executive action team has a list of the critical issues it must resolve to keep progress on track. Aeronautics also coordinates its teams through a common rhythm of three-week sprints, a project master plan that is treated as a living document, and the colocation of traditionally disparate parts of the organization—for instance, putting test pilots and simulators with development teams. The results are dramatic: IHS Jane’s has deemed the Gripen the world’s most cost-effective military aircraft.
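Saab's escalation cadence amounts to a simple protocol: any impediment a stand-up cannot settle moves up one level at the next time slot. The sketch below models that flow; the impediments and level assignments are invented for illustration:

# Illustrative model of the daily team-of-teams escalation described above
# (7:30 frontline teams, 7:45 teams of teams, 8:45 executive action team).
LEVELS = ["frontline team (7:30)", "team of teams (7:45)",
          "executive action team (8:45)"]

def run_standups(impediments: list[tuple[str, int]]) -> None:
    """Each impediment carries the lowest level able to resolve it (0-2).
    Anything a level cannot settle is escalated to the next stand-up."""
    for level, name in enumerate(LEVELS):
        resolved = [i for i, l in impediments if l == level]
        unresolved = [(i, l) for i, l in impediments if l > level]
        print(f"{name}: resolved {resolved}, "
              f"escalating {[i for i, _ in unresolved]}")
        impediments = unresolved

run_standups([
    ("flaky test rig", 0),        # settled within the frontline team
    ("cross-team interface", 1),  # needs team-of-teams coordination
    ("supplier contract", 2),     # only executives can resolve it
])

By 8:45, in this model as in Saab's practice, the executive team is looking only at the short list of issues nobody below it could settle.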
Building Agility Across the Business
Expanding the number of agile teams is an important step toward increasing the agility of a business. But equally important is how those teams interact with the rest of the organization. Even the most advanced agile enterprises—Amazon, Spotify, Google, Netflix, Bosch, Saab, SAP, Salesforce, Riot Games, Tesla, and SpaceX, to name a few—operate with a mix of agile teams and traditional structures. To ensure that bureaucratic functions don’t hamper the work of agile teams or fail to adopt and commercialize the innovations developed by those teams, such companies constantly push for greater change in at least four areas.
Values and principles.
A traditional hierarchical company can usually accommodate a small number of agile teams sprinkled around the organization. Conflicts between the teams and conventional procedures can be resolved through personal interventions and workarounds. When a company launches several hundred agile teams, however, that kind of ad hoc accommodation is no longer possible. Agile teams will be pressing ahead on every front. Traditionally structured parts of the organization will fiercely defend the status quo. As with any change, skeptics can and will produce all kinds of antibodies that attack agile, ranging from refusals to operate on an agile timetable (“Sorry, we can’t get to that software module you need for six months”) to the withholding of funds from big opportunities that require unfamiliar solutions.
So a leadership team hoping to scale up agile needs to instill agile values and principles throughout the enterprise, including the parts that do not organize into agile teams. This is why Bosch’s leaders developed new leadership principles and fanned out throughout the company: They wanted to ensure that everyone understood that things would be different and that agile would be at the center of the company’s culture.
Operating architectures.
Implementing agile at scale requires modularizing and then seamlessly integrating workstreams. For example, Amazon can deploy software thousands of times a day because its IT architecture was designed to help developers make fast, frequent releases without jeopardizing the firm’s complex systems. But many large companies, no matter how fast they can code programs, can deploy software only a few times a day or a week; that’s how their architecture works.
Building on the modular approach to product development pioneered by Toyota, Tesla meticulously designs interfaces among the components of its cars to allow each module to innovate independently. Thus the bumper team can change anything as long as it maintains stable interfaces with the parts it affects. Tesla is also abandoning traditional annual release cycles in favor of real-time responses to customer feedback. CEO Elon Musk says that the company makes about 20 engineering changes a week to improve the production and performance of the Model S. Examples include new battery packs, updated safety and autopilot hardware, and software that automatically adjusts the steering wheel and seat for easier entry and exit.
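The stable-interface discipline translates directly into code: freeze the contract, and let each module evolve freely behind it. A minimal sketch follows, assuming a hypothetical bumper interface rather than Tesla's actual design:

from typing import Protocol

class BumperInterface(Protocol):
    """The fixed contract other modules depend on. As long as this stays
    stable, the bumper team can redesign internals at will."""
    def crash_energy_absorbed_joules(self, impact_speed_mps: float) -> float: ...
    def mass_kg(self) -> float: ...

class SteelBumper:
    def crash_energy_absorbed_joules(self, impact_speed_mps: float) -> float:
        return 1200.0 * impact_speed_mps      # simplified placeholder model
    def mass_kg(self) -> float:
        return 18.0

class CompositeBumper:                        # an independent redesign
    def crash_energy_absorbed_joules(self, impact_speed_mps: float) -> float:
        return 1350.0 * impact_speed_mps
    def mass_kg(self) -> float:
        return 11.5

def integrate(bumper: BumperInterface) -> None:
    """Downstream teams code against the interface, not the implementation."""
    print(bumper.mass_kg(), bumper.crash_energy_absorbed_joules(10.0))

integrate(SteelBumper())
integrate(CompositeBumper())  # drop-in: interface unchanged, internals new

This is the same reason Amazon's architecture supports thousands of deployments a day: when interfaces are stable, each team can ship without coordinating every change with every other team.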
In the most advanced agile enterprises, innovative product and process architectures are attacking some of the thorniest organizational constraints to further scaling. Riot Games, the developer of the wildly successful multiplayer online battle arena League of Legends, is redesigning the interfaces between agile teams and support-and-control functions that operate conventionally, such as facilities, finance, and HR. Brandon Hsiung, the product lead for this ongoing initiative, says it involves at least two key steps. One is shifting the functions’ definition of their customers. “Their customers are not their functional bosses, or the CEO, or even the board of directors,” he explains. “Their customers are the development teams they serve, who ultimately serve our players.” The company instituted Net Promoter surveys to collect feedback on whether those customers would recommend the functions to others and made it plain that dissatisfied customers could sometimes hire outside providers. “It’s the last thing we want to happen, but we want to make sure our functions develop world-class capabilities that could compete in a free market,” Hsiung says.
Riot Games also revamped how its corporate functions interact with its agile teams. Some members of corporate functions may be embedded in agile teams, or a portion of a function’s capacity may be dedicated to requests from agile teams. Alternatively, functions might have little formal engagement with the teams after collaborating with them to establish certain boundaries. Says Hsiung: “Silos such as real estate and learning and development might publish philosophies, guidelines, and rules and then say, ‘Here are our guidelines. As long as you operate within them, you can go crazy; do whatever you believe is best for our players.’”
In companies that have scaled up agile, the organization charts of support functions and routine operations generally look much as they did before, though often with fewer management layers and broader spans of control as supervisors learn to trust and empower people. The bigger changes are in the ways functional departments work. Functional priorities are necessarily more fully aligned with corporate strategies. If one of the company’s key priorities is improving customers’ mobile experience, that can’t be number 15 on finance’s funding list or HR’s hiring list. And departments such as legal may need buffer capacity to deal with urgent requests from high-priority agile teams.
Over time even routine operations with hierarchical structures are likely to develop more-agile mindsets. Of course, finance departments will always manage budgets, but they don’t need to keep questioning the decisions of the owners of agile initiatives. “Our CFO constantly shifts accountability to empowered agile teams,” says Ahmed Sidky, the head of development management at Riot Games. “He’ll say, ‘I am not here to run the finances of the company. You are, as team leaders. I’m here in an advisory capacity.’ In the day-to-day organization, finance partners are embedded in every team. They don’t control what the teams do or don’t do. They are more like finance coaches who ask hard questions and provide deep expertise. But ultimately it’s the team leader who makes decisions, according to what is best for Riot players.”
Some companies, and some individuals, may find these trade-offs hard to accept and challenging to implement. Reducing control is always scary—until you do so and find that people are happier and success rates triple. In a recent Bain survey of nearly 1,300 global executives, more respondents agreed with this statement about management than with any other: “Today’s business leaders must trust and empower people, not command and control them.” (Only 5% disagreed.)
Talent acquisition and motivation.
Companies that are scaling up agile need systems for acquiring star players and motivating them to make teams better. (Treat your stars unfairly, and they will bolt to a sexy start-up.) They also need to unleash the wasted potential of more-typical team members and build commitment, trust, and joint accountability for outcomes. There’s no practical way to do this without changing HR procedures. A company can no longer hire purely for expertise, for instance; it now needs expertise combined with enthusiasm for work on a collaborative team. It can’t evaluate people according to whether they hit individual objectives; it now needs to look at their performance on agile teams and at team members’ evaluations of one another. Performance assessments typically shift from an annual basis to a system that provides relevant feedback and coaching every few weeks or months. Training and coaching programs encourage the development of cross-functional skills customized to the needs of individual employees. Job titles matter less and change less frequently with self-governing teams and fewer hierarchical levels. Career paths show how product owners—the individuals who set the vision and own the results of an agile team—can continue their personal development, expand their influence, and increase their compensation.
Companies may also need to revamp their compensation systems to reward group rather than individual accomplishments. They need recognition programs that celebrate contributions immediately. Public recognition is better than confidential cash bonuses at bolstering agile values—it inspires recipients to improve even further, and it motivates others to emulate the recipients’ behaviors. Leaders can also reward “A” players by engaging them in the most vital opportunities, providing them with the most advanced tools and the greatest possible freedom, and connecting them with the most talented mentors in their field.
Annual planning and budgeting cycles.
In bureaucratic companies, annual strategy sessions and budget negotiations are powerful tools for aligning the organization and securing commitments to stretch goals. Agile practitioners begin with different assumptions. They see that customer needs change frequently and that breakthrough insights can occur at any time. In their view, annual cycles constrain innovation and adaptation: Unproductive projects burn resources until their budgets run out, while critical innovations wait in line for the next budget cycle to compete for funding.
In companies with many agile teams, funding procedures are different. Funders recognize that for two-thirds of successful innovations, the original concept will change significantly during the development process. They expect that teams will drop some features and launch others without waiting for the next annual cycle. As a result, funding procedures evolve to resemble those of a venture capitalist. VCs typically view funding decisions as opportunities to purchase options for further discovery. The objective is not to instantly create a large-scale business but, rather, to find a critical component of the ultimate solution. This leads to a lot of apparent failures but accelerates and reduces the cost of learning. Such an approach works well in an agile enterprise, vastly improving the speed and efficiency of innovation.
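The economics are easy to see with toy numbers. The sketch below compares lump-sum annual grants with small staged "options"; every figure is an illustrative assumption, not data from the article:

# Toy comparison of annual lump-sum funding vs. staged, option-style funding.
BUDGET = 10_000_000        # total innovation budget per year (hypothetical)
ANNUAL_GRANT = 2_000_000   # traditional: fund 5 projects for a full year
STAGE_COST = 250_000       # agile: buy a small "option" per learning cycle
SUCCESS_RATE = 1 / 3       # most concepts change or die along the way

lump_sum_bets = BUDGET // ANNUAL_GRANT
staged_bets = BUDGET // STAGE_COST

print(f"Annual grants:  {lump_sum_bets} concepts tested, "
      f"~{lump_sum_bets * SUCCESS_RATE:.1f} expected survivors")
print(f"Staged options: up to {staged_bets} learning cycles funded; "
      f"a failed concept costs at most ${STAGE_COST:,}")

The staged model buys far more experiments for the same budget and caps the cost of each failure, which is exactly the "apparent failures, cheaper learning" trade the paragraph above describes.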
CONCLUSION
Companies that successfully scale up agile see major changes in their business. Scaling up shifts the mix of work so that the business is doing more innovation relative to routine operations. The business is better able to read changing conditions and priorities, develop adaptive solutions, and avoid the constant crises that so frequently hit traditional hierarchies. Disruptive innovations will come to feel less disruptive and more like adaptive business as usual. The scaling up also brings agile values and principles to business operations and support functions, even if many routine activities remain. It leads to greater efficiency and productivity in some of the business’s big cost centers. It improves operating architectures and organizational models to enhance coordination between agile teams and routine operations. Changes come on line faster and are more responsive to customer needs. Finally, the business delivers measurable improvements in outcomes—not only better financial results but also greater customer loyalty and employee engagement.
Agile’s test-and-learn approach is often described as incremental and iterative, but no one should mistake incremental development processes for incremental thinking. SpaceX, for example, aims to use agile innovation to begin transporting people to Mars by 2024, with the goal of establishing a self-sustaining colony on the planet. How will that happen? Well, people at the company don’t really know…yet. But they have a vision that it’s possible, and they have some steps in mind. They intend to dramatically improve reliability and reduce expenses, partly by reusing rockets much like airplanes. They intend to improve propulsion systems to launch rockets that can carry at least 100 people. They plan to figure out how to refuel in space. Some of the steps include pushing current technologies as far as possible and then waiting for new partners and new technologies to emerge.
That's agile in practice: big ambitions and step-by-step progress. It shows the way to proceed even when, as is so often the case, the future is murky.
A version of this article appeared in the May–June 2018 issue (pp. 88–96) of Harvard Business Review.
- Darrell K. Rigby is a partner in the Boston office of Bain & Company. He heads the firm’s global innovation practice. He is the author of Winning in Turbulence and is a co-author of Doing Agile Right: Transformation Without Chaos (Harvard Business Review Press, 2020).
- Jeff Sutherland is a cocreator of the scrum form of agile innovation and the CEO of Scrum Inc., a consulting and training firm.
- Andy Noble is a partner in Bain & Company’s Organization practice and is located in the Boston office.
Article link: https://hbr.org/2018/05/agile-at-scale?