Researchers and companies are creating ultra-secure communication networks that could form the basis of a quantum internet. This is how it works.
Barely a week goes by without reports of some new mega-hack that’s exposed huge amounts of sensitive information, from people’s credit card details and health records to companies’ valuable intellectual property. The threat posed by cyberattacks is forcing governments, militaries, and businesses to explore more secure ways of transmitting information.
Today, sensitive data is typically encrypted and then sent across fiber-optic cables and other channels together with the digital “keys” needed to decode the information. The data and the keys are sent as classical bits—a stream of electrical or optical pulses representing 1s and 0s. And that makes them vulnerable. Smart hackers can read and copy bits in transit without leaving a trace.
Quantum communication takes advantage of the laws of quantum physics to protect data. These laws allow particles—typically photons of light for transmitting data along optical cables—to take on a state of superposition, which means they can represent multiple combinations of 1 and 0 simultaneously. The particles are known as quantum bits, or qubits.
The beauty of qubits from a cybersecurity perspective is that if a hacker tries to observe them in transit, their super-fragile quantum state “collapses” to either 1 or 0. This means a hacker can’t tamper with the qubits without leaving behind a telltale sign of the activity.
Some companies have taken advantage of this property to create networks for transmitting highly sensitive data based on a process called quantum key distribution, or QKD. In theory, at least, these networks are ultra-secure.
What is quantum key distribution?
QKD involves sending encrypted data as classical bits over networks, while the keys to decrypt the information are encoded and transmitted in a quantum state using qubits.
Various approaches, or protocols, have been developed for implementing QKD. A widely used one known as BB84 works like this. Imagine two people, Alice and Bob. Alice wants to send data securely to Bob. To do so, she creates an encryption key in the form of qubits whose polarization states represent the individual bit values of the key.
The qubits can be sent to Bob through a fiber-optic cable. By comparing measurements of the state of a fraction of these qubits—a process known as “key sifting”—Alice and Bob can establish that they hold the same key.
As the qubits travel to their destination, the fragile quantum state of some of them will collapse because of decoherence. To account for this, Alice and Bob next run through a process known as “key distillation,” which involves calculating whether the error rate is high enough to suggest that a hacker has tried to intercept the key.
If it is, they ditch the suspect key and keep generating new ones until they are confident that they share a secure key. Alice can then use hers to encrypt data and send it in classical bits to Bob, who uses his key to decode the information.
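The sifting-and-distillation flow described above can be sketched as a toy classical simulation. This is a minimal illustration, not real quantum optics: each photon is modeled as a (bit, basis) pair, the function name `bb84_sift` and its sampling scheme are hypothetical, and a real protocol would discard the bits sacrificed for error estimation.

```python
import random

def bb84_sift(n_bits, eavesdrop=False):
    """Toy BB84 run: returns (Alice's key, Bob's key, estimated error rate)."""
    # Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
    alice_bits = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_bits)]
    photons = list(zip(alice_bits, alice_bases))

    # An eavesdropper measuring in a random basis collapses the state:
    # wrong-basis measurements yield a random bit, which she re-sends.
    if eavesdrop:
        resent = []
        for bit, basis in photons:
            eve_basis = random.randint(0, 1)
            eve_bit = bit if eve_basis == basis else random.randint(0, 1)
            resent.append((eve_bit, eve_basis))
        photons = resent

    # Bob measures each photon in his own random basis; a basis mismatch
    # gives him a random result.
    bob_bases = [random.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [bit if basis == bob_bases[i] else random.randint(0, 1)
                for i, (bit, basis) in enumerate(photons)]

    # Key sifting: keep only positions where Alice's and Bob's bases agree.
    kept = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in kept]
    bob_key = [bob_bits[i] for i in kept]

    # Key distillation (simplified): compare a sample of the sifted bits
    # to estimate the error rate; a high rate suggests an eavesdropper.
    sample = kept[: len(kept) // 2]
    errors = sum(alice_bits[i] != bob_bits[i] for i in sample)
    return alice_key, bob_key, errors / max(len(sample), 1)
```

With no eavesdropper the sifted keys match exactly; with one, roughly a quarter of the sampled bits disagree, which is the telltale sign Alice and Bob look for before ditching the key.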
We’re already starting to see more QKD networks emerge. The longest is in China, which boasts a 2,032-kilometer (1,263-mile) ground link between Beijing and Shanghai. Banks and other financial companies are already using it to transmit data. In the US, a startup called Quantum Xchange has struck a deal giving it access to 500 miles (805 kilometers) of fiber-optic cable running along the East Coast to create a QKD network. The initial leg will link Manhattan with New Jersey, where many banks have large data centers.
Although QKD is relatively secure, it would be even safer if it could count on quantum repeaters.
What is a quantum repeater?
Materials in cables can absorb photons, which means they can typically travel for no more than a few tens of kilometers. In a classical network, repeaters at various points along a cable are used to amplify the signal to compensate for this.
QKD networks have come up with a similar solution, creating “trusted nodes” at various points. The Beijing-to-Shanghai network has 32 of them, for instance. At these waystations, quantum keys are decrypted into bits and then reencrypted in a fresh quantum state for their journey to the next node. But this means trusted nodes can’t really be trusted: a hacker who breached the nodes’ security could copy the bits undetected and thus acquire a key, as could a company or government running the nodes.
Ideally, we need quantum repeaters, or waystations with quantum processors in them that would allow encryption keys to remain in quantum form as they are amplified and sent over long distances. Researchers have demonstrated it’s possible in principle to build such repeaters, but they haven’t yet been able to produce a working prototype.
There’s another issue with QKD. The underlying data is still transmitted as encrypted bits across conventional networks. This means a hacker who breached a network’s defenses could copy the bits undetected, and then use powerful computers to try to crack the key used to encrypt them.
The most powerful encryption algorithms are pretty robust, but the risk is big enough to spur some researchers to work on an alternative approach known as quantum teleportation.
What is quantum teleportation?
This may sound like science fiction, but it’s a real method that involves transmitting data wholly in quantum form. The approach relies on a quantum phenomenon known as entanglement.
Quantum teleportation works by creating pairs of entangled photons and then sending one of each pair to the sender of data and the other to a recipient. When Alice receives her entangled photon, she lets it interact with a “memory qubit” that holds the data she wants to transmit to Bob. This interaction changes the state of her photon, and because it is entangled with Bob’s, the interaction instantaneously changes the state of his photon too.
In effect, this “teleports” the data in Alice’s memory qubit from her photon to Bob’s. The graphic below lays out the process in a little more detail:
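For readers who want to see the mechanics, the protocol can be checked with a small statevector simulation in plain Python. This is an idealized sketch with no noise or decoherence; the amplitudes 0.6 and 0.8 are arbitrary example values, and the helper functions are written for this illustration only.

```python
import math
import random

def apply_h(state, q, n=3):
    """Hadamard gate on qubit q of an n-qubit statevector (q0 = leftmost bit)."""
    shift = n - 1 - q
    new = state[:]
    for i in range(len(state)):
        if not (i >> shift) & 1:
            j = i | (1 << shift)
            a, b = state[i], state[j]
            new[i] = (a + b) / math.sqrt(2)
            new[j] = (a - b) / math.sqrt(2)
    return new

def apply_cnot(state, control, target, n=3):
    """CNOT gate: flip `target` wherever `control` is 1."""
    cs, ts = n - 1 - control, n - 1 - target
    new = state[:]
    for i in range(len(state)):
        if (i >> cs) & 1:
            new[i ^ (1 << ts)] = state[i]
    return new

def measure(state, q, n=3):
    """Measure qubit q, collapsing and renormalizing the state."""
    shift = n - 1 - q
    p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if (i >> shift) & 1)
    outcome = 1 if random.random() < p1 else 0
    norm = math.sqrt(p1 if outcome else 1 - p1)
    return outcome, [a / norm if ((i >> shift) & 1) == outcome else 0.0
                     for i, a in enumerate(state)]

# Alice's data qubit a|0> + b|1>, tensored with the entangled Bell pair
# (|00> + |11>)/sqrt(2) shared between Alice (q1) and Bob (q2).
a, b = 0.6, 0.8
s = [0.0] * 8
s[0b000] = s[0b011] = a / math.sqrt(2)
s[0b100] = s[0b111] = b / math.sqrt(2)

s = apply_cnot(s, 0, 1)      # Alice entangles her data qubit with her Bell half
s = apply_h(s, 0)
m0, s = measure(s, 0)        # Alice's two classical measurement results...
m1, s = measure(s, 1)

idx = (m0 << 2) | (m1 << 1)  # Bob's qubit amplitudes after the collapse
bob = [s[idx], s[idx | 1]]
if m1:                       # ...tell Bob which corrections to apply
    bob = [bob[1], bob[0]]   # Pauli-X
if m0:
    bob = [bob[0], -bob[1]]  # Pauli-Z
print(bob)                   # ≈ [0.6, 0.8]: Alice's state ends up on Bob's qubit
```

Whatever the random measurement outcomes, Bob recovers Alice's original amplitudes. Note that Alice must send her two measurement results to Bob over a classical channel, which is why teleportation does not transmit information faster than light.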
Researchers in the US, China, and Europe are racing to create teleportation networks capable of distributing entangled photons. But getting them to scale will be a massive scientific and engineering challenge. The many hurdles include finding reliable ways of churning out lots of linked photons on demand, and maintaining their entanglement over very long distances—something that quantum repeaters would make easier.
Still, these challenges haven’t stopped researchers from dreaming of a future quantum internet.
What is a quantum internet?
Just like the traditional internet, this would be a globe-spanning network of networks. The big difference is that the underlying communications networks would be quantum ones.
It isn’t going to replace the internet as we know it today. Cat photos, music videos, and a great deal of non-sensitive business information will still move around in the form of classical bits. But a quantum internet will appeal to organizations that need to keep particularly valuable data secure. It could also be an ideal way to connect information flowing between quantum computers, which are increasingly being made available through the computing cloud.
China is in the vanguard of the push toward a quantum internet. It launched a dedicated quantum communications satellite called Micius a few years ago, and in 2017 the satellite helped stage the world’s first intercontinental, QKD-secured video conference, between Beijing and Vienna. A ground station already links the satellite to the Beijing-to-Shanghai terrestrial network. China plans to launch more quantum satellites, and several cities in the country are laying plans for municipal QKD networks.
Some researchers have warned that even a fully quantum internet may ultimately become vulnerable to new attacks that are themselves quantum based. But faced with the hacking onslaught that plagues today’s internet, businesses, governments, and the military are going to keep exploring the tantalizing prospect of a more secure quantum alternative.
Published 10 March 2020 – ID G00450641 – 17 min read
Enterprises are advancing cloud computing use cases in ways that deliver it at the point of need using distributed cloud. Enterprise architecture and technology innovation leaders must identify and exploit evolving models of cloud computing deployment to capitalize on business opportunities.
- Distributed cloud is the first cloud model that incorporates physical location of cloud-delivered services as part of its definition.
- Distributed cloud fixes discontinuities in the cloud value chain that often exist in hybrid cloud models. Cloud providers are following different approaches and models to solve such issues.
- Distributed cloud will emerge in phases. In the first phase, enterprises will deploy and consume it as a packaged, location-bound, distributed cloud offering. In the second phase, third parties such as telcos and city governments will become involved.
- New advanced use cases and more sophisticated uses of cloud computing are increasing the array of cloud services available to IT professionals. Each of these distributed cloud architectures offers a different set of trade-offs, often based on proximity, control, scalability and breadth of services available.
Enterprise architecture and technology innovation leaders assessing strategic technology trends for their impact and potential for competitive advantage must:
- Use distributed cloud models as an opportunity to prepare for the next generation of cloud computing by targeting location-dependent use cases.
- Overcome deficiencies in private and hybrid cloud implementations by using the like-for-like hybrid nature of distributed cloud.
- Identify use cases for future phases of distributed cloud (such as low latency, tethered scale and data residency) that are enhanced by using distributed cloud “substations.”
- Investigate making cloud providers responsible for cloud operations, even on-premises, to overcome the failures and shortcomings of today’s private and hybrid cloud computing.
Strategic Planning Assumption
By 2024, most cloud service platforms will provide at least some distributed cloud services that execute at the point of need.
Why Distributed Cloud Is a Top 10 Trend
As cloud computing adoption grows, organizations are applying it to more advanced use cases, and vendors are delivering cloud capabilities in more nuanced and intelligent ways, recognizing new customer value in new business cases.

Distributed cloud is the answer to the question “What is the future of cloud computing?” It refers to the distribution of public cloud services to different physical locations, while the operation, governance and evolution of the services remain the responsibility of the public cloud provider. As with anything that describes the future, distributed cloud is based on origins visible today. The distributed cloud brings aspects of worldwide public cloud regions, hybrid cloud and edge computing to the original world of cloud computing (see Figure 1).

Figure 1. Distributed Cloud
We have identified distributed cloud as a top 10 strategic technology trend for 2020 because of the importance of cloud computing itself. Cloud computing underpins virtually all the candidates for “the next big thing,” including the other top 10 strategic technology trends.
Where Distributed Cloud Fits in the Top 10
This trend is part of the smart spaces category (see Figure 2), along with empowered edge, autonomous things, practical blockchain and AI security.

Figure 2. Where Distributed Cloud Fits in the Top 10 List of Strategic Technology Trends
Distributed cloud has much synergy with three of our other top 10 strategic technology trends. This goes beyond the basic “requires cloud to work” reality of these three trends:
- Empowered edge. Edge devices will exploit distributed cloud systems located everywhere from points adjacent to endpoints (for example, on gateways and on-premises microdata centers) through to remote cloud regions.
- Practical blockchain. As blockchain matures, more processing will occur at the edge and elsewhere. However, many of these environments have restricted computing power, slow networking and limited data storage capabilities. They will increasingly rely on capabilities powered by distributed cloud.
- AI security. It won’t be possible to monitor and manage the vast number of future edge devices manually. AI-based security systems will be essential to identify anomalous behavior by distributed capabilities.
Location is a key factor in the successful deployment and consumption of these three top 10 technologies. Distributed cloud will be a foundation on which the power of cloud can be delivered in the required locations to support the other top 10 technologies.
Distributed Cloud Explained
Distributed cloud’s distribution of public cloud services to different physical locations represents a significant shift from the virtually centralized model of most public cloud services and the model associated with the general cloud concept. It will lead to a new era in cloud computing.

Gartner defines cloud computing as a style of computing in which elastically scalable IT-enabled capabilities are delivered as a service using internet technologies. This definition makes no mention of location. Cloud computing has long been viewed as synonymous with a “centralized” service running in the provider’s data center. However, it would be better to view it as a logically centralized or unified service. Private and hybrid cloud options complement this public cloud model. Private cloud refers to the creation of cloud services dedicated to individual companies, often running in their own data centers. Hybrid cloud refers to the integration of private and public cloud services to support parallel, integrated or complementary tasks.

Location is a key part of the distributed cloud concept. Distributed cloud distributes capabilities to different locations. Deploying cloud services in a distributed fashion provides stronger support for a continuum of cloud services from the central public cloud out to edge devices and scenarios. The ability to access cloud services running on edge devices enables the allocation of cloud resources to different use cases. It enables different connectivity requirements to be met for individual devices; nearby sites; and communities, cities, countries or entire regions. Distributed cloud can also address different physical security and ruggedness requirements. This continuum unifies cloud, edge and disconnected deployment use cases for cloud services, devices and data in a single strategy.
Distributed Cloud Has Three Origins: Public Cloud Regions, Hybrid Cloud and Edge Computing
Public Cloud Regions
In hyperscale public cloud implementations, the public cloud is the “center of the universe.” However, cloud services have been distributed worldwide in the public cloud almost since its inception. Providers have different regions around the world, all centrally controlled, managed and provided by one public cloud provider.

The location of the cloud services is a critical component of the distributed cloud computing model. Historically, location has not been relevant to cloud definitions, but issues related to it are important in many situations. Location may be important for a variety of reasons, including data sovereignty and latency-sensitive use cases. In these scenarios, the distributed cloud service provides organizations with the capabilities of a public cloud service delivered in a location that meets their requirements.
Hybrid Cloud
The aim of the hybrid cloud concept has been to blend external services from a provider and internal services running on-premises in an optimized, efficient and cost-effective manner.

However, implementing a private cloud is hard. Hybrid cloud computing requires both public and private clouds, and most private cloud projects do not deliver the cloud outcomes and benefits organizations seek. Also, most of the conversations Gartner has with clients about hybrid cloud are not about true hybrid cloud scenarios. Instead, they are about hybrid IT scenarios in which noncloud technologies are used with public cloud services in a spectrum of cloud-like models. This is referred to as cloud-inspired (see “Four Types of Cloud Computing Describe a Spectrum of Cloud Value”). Hybrid IT and true hybrid cloud options are valid approaches, and we recommend them for some use cases. But most hybrid cloud styles break many of the cloud computing value propositions and fail to:
- Shift the responsibility and work of running hardware and software infrastructure to cloud providers
- Exploit the economics of cloud elasticity (scaling up and down) from a large pool of shared resources
- Benefit from the pace of innovation in sync with the public cloud providers
- Use the cost economics of global hyperscale services
- Employ the skills of large cloud providers to secure and operate world-class services
The Packaging of Hybrid Cloud
The next generation of hybrid (and private) cloud is packaged and solves many of the problems with hybrid cloud. Packaged hybrid cloud refers to a vendor-provided private cloud offering that is packaged and connected to a public cloud in a tethered way. Two main approaches exist to packaged hybrid cloud: “like-for-like” hybrid and “layered technology” hybrid (spanning different technology bases).
- The like-for-like hybrid approach is typified by Microsoft Azure and Azure Stack. Azure Stack is not the same as Azure in the public cloud; it is a subset, but it delivers a set of capabilities that mirror the services in the Azure public cloud. AWS Outposts, used in a managed private cloud mode (where no other companies have access), is another example of the like-for-like approach. However, the broader strategy represented by AWS Outposts would encourage a more distributed model in which each Outposts deployment is opened to near neighbors. Like-for-like solutions provide the “full stack,” but not necessarily the hardware, all managed by a single vendor.
- In the Azure Stack approach, the customer buys and owns a hardware platform. The cloud software layer is delivered with a subset of the provider’s public cloud services. In this scenario, the cloud provider does not usually take full responsibility for the ongoing operations, maintenance or updating of the underlying hardware platform. The cloud provider may have only partial responsibility for the software. Users are responsible, doing it themselves or using a managed service provider.
- In the AWS Outposts model, a full appliance comprising both hardware and software is delivered to the customer. The cloud provider takes responsibility for supporting and maintaining the hardware and software. The customer provides the physical facility in which the system is hosted, but otherwise the cloud provider effectively runs the appliance as an extension of its central cloud service.
- Although a software approach provides a like-for-like model between the public service and the on-premises implementation, the other challenges with the hybrid cloud remain. Some customers consider it an advantage that they control service updates.
- The layered technology hybrid approach is based on integration of different underlying technologies, platforms and capabilities — creating a portability layer of sorts. This is where Google and IBM (and others) have focused — Google with Anthos (formerly its cloud services platform) and IBM with Red Hat and OpenShift.
- In this approach, the provider delivers a portability layer typically built on Kubernetes as the foundation for services across a distributed environment. In some cases, the portability layer simply uses containers to support execution of a containerized application. In other cases, the provider delivers some of its cloud services as containerized services that can run in the distributed environment. The portability approach ignores the ownership and management of the underlying hardware platform, which remains the responsibility of the customer.
Combined and other approaches exist. In these, the provider delivers a like-for-like version of some of its cloud services in a hardware/software combination, and the provider commits to managing and updating the service. This reduces the burden on the service consumer who can view the service as a “black box.” However, some customers will be uncomfortable giving up all control of the underlying hardware and software update cycles.
Distributed Cloud Delivers on the Hybrid Cloud Promise
The distributed cloud extends beyond cloud-provider-owned data centers (for example, the model in which cloud providers have different regions). In the distributed cloud, the originating public cloud provider is responsible for all aspects of cloud service architecture, delivery, operations, governance and updates. This restores cloud value propositions that are broken when customers are responsible for a part of the delivery, as is usually the case in hybrid cloud scenarios. The cloud provider does not need to own the hardware on which the distributed cloud service is installed. But in a full implementation of the distributed cloud model, the cloud provider must take full responsibility for how that hardware is managed and maintained.
The fundamental notion of the distributed cloud is that the public cloud provider is responsible for the design, architecture, delivery, operation, maintenance, updates and ownership, often including the underlying hardware. However, as solutions move closer to the edge, it is often not desirable or feasible for the provider to own the entire stack of technology. As these services are distributed onto operational systems (for example, a power plant or wind farm), the consuming organization may not want to give up ownership and management of the physical plant to an outside provider. But the consuming organization may be interested in a service that the provider delivers, manages and updates on such equipment. The same is true for mobile devices, smartphones and other client equipment. As a result, we expect a spectrum of delivery models will appear, with the provider accepting varying levels of ownership and responsibility.

Another edge factor that will influence the distribution of public cloud services will be the capabilities of the edge, near-edge and far-edge platforms, which may not need, or cannot run, a like-for-like service that mirrors the one in the centralized cloud. Complementary services tailored to the target environment, such as a low-function Internet of Things (IoT) or storage device, will be part of the distributed cloud spectrum (for example, AWS IoT Greengrass, AWS Snowball and Azure Stack Edge). However, at a minimum, the cloud provider must design, architect, distribute, manage and update these services if they are to be viewed as part of the distributed cloud spectrum.

The distributed cloud supports continuously connected and intermittently connected operation of like-for-like cloud services from the public cloud, distributed to specific and varied locations. This enables low-latency service execution in which the cloud services are closer to the point of need, in remote data centers or delivered all the way to the edge device itself.
This can deliver major improvements in performance and reduce the risk of global network-related outages, as well as support occasionally connected scenarios. By 2024, most cloud service platforms will provide at least some services that execute at the point of need.
The Evolution of Distributed Cloud
Distributed Cloud Phases
We expect that distributed cloud computing will happen in four phases:
- Phase 1. A like-for-like hybrid mode in which the cloud provider delivers services in a distributed fashion that mirror a subset of services in its centralized cloud for delivery in the enterprise.
- Phase 2. An extension of the like-for-like model in which the cloud provider teams with third parties to deliver a subset of its centralized cloud services to target communities through the third-party provider. An example is the delivery of services through a telecommunications provider or colocation provider to support data sovereignty requirements in smaller countries where the provider has no data centers.
- Phase 3. Communities of organizations share distributed cloud substations. We use the term “substations” to evoke the image of subsidiary stations (like branch post offices) where people gather to use services. Cloud customers can gather at a distributed cloud substation to consume cloud services for common or varied reasons if it is open for community or public use. This improves the economics associated with paying for the installation and operation of a distributed cloud substation. As other companies use the substation, they can share the cost of the installation. We expect that third parties such as telecommunications service providers will consider creating substations in locations where the public cloud provider lacks a presence. If the substation is not open for use outside the organization that paid for its installation, then the substation represents a private cloud instance in a hybrid relationship with the public cloud.
- Phase 4. Use of embedded and personal resources. Examples include the use of local processing on personal devices, embedded capabilities in smart buildings and components embedded in software packages or applications.
Ironically, the distributed cloud takes something location-independent (cloud computing), introduces the importance of location and ultimately removes the concern about location. In its most complete form, a distributed cloud approach will enable an organization to specify its requirements (for example, compliance and security, budget, and capacity) to a cloud provider. The cloud provider will, increasingly in an automated way, generate the optimal configuration without requiring detailed location knowledge.

In addition to addressing regional, hybrid and edge issues, distributed cloud approaches will enable additional scenarios. These include dedicated connected implementations for governments and industry-specific community clouds, and potentially solutions that can address geopolitical needs. Such geopolitical issues are leading to increasing national concerns about connections to the main internet, including censorship, security, privacy and data sovereignty. This “splintering” of the internet and cloud scenarios defies easy solutions — distributed cloud capabilities could help.
Paths to Distributed Cloud
The distributed cloud is in the early stages of development. Many providers aim to offer most of their public services in a distributed manner in the long term. But they currently provide only a subset — and often a small subset — of their services in a distributed way, and with limited consumption models (form factors). Some providers do not support the complete delivery, operation and update elements of a full distributed cloud. Providers are extending services to on-premises data centers, third-party data centers and the edge. They are doing so with offerings such as Microsoft Azure Stack, Oracle Cloud at Customer, Google Anthos, IBM Red Hat and AWS Outposts (and AWS Local Zones and AWS Wavelength).

Evaluate the potential benefits and challenges of the like-for-like and layered technology packaging approaches. Each approach involves challenges in terms of fulfilling the vision of distributed cloud. The like-for-like approach tends to result in walled gardens. The layered approach can be subject to the challenges of delivering portable, open software. Both of these approaches could lead to an open, fully managed, multicloud solution, but through different paths and with very different challenges.
Enterprise architecture and technology innovation leaders must:
- Use distributed cloud models as an opportunity to prepare for the next generation of cloud computing by targeting location-dependent use cases.
- Overcome deficiencies in private and hybrid cloud implementations by using the like-for-like hybrid nature of distributed cloud.
- Identify use cases for future phases of distributed cloud (such as low latency, tethered scale and data residency) that are enhanced by using distributed cloud substations.
- Identify scenarios where a distributed cloud model will remove the need for a “traditional” hybrid cloud model and where hybrid cloud models will continue to be needed for years.
- Investigate making cloud providers responsible for cloud operations, even on-premises, to overcome the failures and shortcomings of today’s private and hybrid cloud computing.
- Exploit the flexibility offered by the increased deployment options of cloud computing.
Appendix: The Other Top Strategic Technology Trends for 2020
For information on the other top strategic technology trends for 2020, see:

- “Top 10 Strategic Technology Trends for 2020: Hyperautomation”
- “Top 10 Strategic Technology Trends for 2020: Multiexperience”
- “Top 10 Strategic Technology Trends for 2020: Democratization”
- “Top 10 Strategic Technology Trends for 2020: Human Augmentation”
- “Top 10 Strategic Technology Trends for 2020: Transparency and Traceability”
- “Top 10 Strategic Technology Trends for 2020: Empowered Edge”
- “Top 10 Strategic Technology Trends for 2020: Autonomous Things”
- “Top 10 Strategic Technology Trends for 2020: Practical Blockchain”
- “Top 10 Strategic Technology Trends for 2020: AI Security”
By David Smith, David Cearley, Ed Anderson, Daryl Plummer
The plan will make telehealth a foundational modality of care, with the option for patients to follow up with in-person visits if necessary.
By Kat Jercich
September 14, 2020, 12:59 PM
In response to increasing patient demand for telehealth, Kaiser Permanente this week announced the launch of a new “virtual-first” healthcare plan in Washington state.
The plan, which will be available January 1, 2021, through Kaiser Foundation Health Plan of Washington’s direct-to-employer groups and to consumers, will center telehealth as a foundational modality of care for patients with nonurgent issues.
“Virtual care is the health care of today and tomorrow,” said Dr. Paul Minardi, president and executive medical director of Washington Permanente Medical Group, in a statement.
“The pandemic has reinforced the need to provide care in the most convenient, accessible, and safe way for our members, and that’s what Virtual Plus does,” he said.
WHY IT MATTERS
Like other providers, Kaiser Permanente says it has seen a huge uptick in telehealth use since the start of the pandemic. According to the company, it was one of the first healthcare organizations to deliver the majority of care via telemedicine.
The system’s options include its Consulting Nurse Service, Care Chat online messaging, and video and phone visits. About 65% of appointments are now conducted virtually.
The new health plan will allow members to reach out via phone, online chat, video or email for nonurgent issues. According to the company, patients will see the same doctors and clinicians as they would at any Kaiser Permanente facility, with their data available through electronic health records.
Members can get in touch with their clinicians virtually, with the option to come in person for follow-up visits, says the organization.
They have access to Kaiser Permanente pharmacists via telehealth and can get medications delivered in one to two days.
“With this new plan, we are innovating to give our members more convenient care options at a more affordable cost, respecting their choices and preferences,” said Joseph Smith, vice president of sales and business development, Kaiser Foundation Health Plan of Washington, in a statement.
THE LARGER TREND
Although much attention has been paid of late to upcoming physician fee schedules for telehealth under Medicare and Medicaid, some private payers have also taken strides to center virtual care in the future of their coverage.
This summer, Blue Cross Blue Shield of Tennessee announced that it would make in-network telehealth services permanent.
Telehealth “has opened up another opportunity for people to get care in a different way,” said BCBS Tennessee SVP and Chief Medical Officer Dr. Andrea Willis. “We are continuing this expansion for our commercial population because it feels like it’s the right thing to do.”
ON THE RECORD
“We are excited to offer another care option to our members that continues our commitment of providing high-quality, convenient, and affordable health care,” said Smith.
by Zeynep Ton
August 17, 2020
The pandemic’s impact on frontline workers and recent incidents of police brutality have highlighted the urgent need to provide good jobs for people of color. For too long, millions of Americans have been left behind with low wages, few benefits, unstable schedules, and a lack of respect and dignity. And for too long, American employers have treated these conditions as an inevitability of doing business rather than a deliberate choice. Based on her research, the author offers six steps that business leaders should take to help bring about economic and racial justice. These steps will help their firms’ competitiveness as well.
Americans are demanding a reckoning. Incidents of police brutality and structural inequities that have caused the pandemic to hit people of color especially hard are sparking calls for racial justice. The precarious conditions endured by poorly paid frontline workers who have continued to stay on the job during the pandemic have generated calls for economic justice. Each of these forms of injustice has distinct drivers, but they amplify each other and often fall hardest on the same people. As Martin Luther King, Jr. reminded us, economic and racial justice are inextricably linked.
To address this critical moment, we need today’s business leaders to emulate those who, in the midst of World War II, committed to creating good jobs. For too long, millions of Americans have been left behind with low wages, few benefits, unstable schedules, and a lack of respect and dignity. And for too long, American employers have treated these conditions as an inevitability of doing business rather than a deliberate choice. But my research shows that bad jobs are a choice, not a necessity, and that offering good jobs is also a choice — even a profit-maximizing choice — yes, even for companies that compete on low cost.
Here I describe why business leaders should care about having too many employees with low-wage “bad jobs” and six steps to begin transforming those jobs into good jobs.
The Central Problem: Too Many People Left Behind
Even in the pre-Covid world of low unemployment, 32% of the U.S. workforce — that’s 46.5 million people — worked in occupations with a median wage of less than $15 an hour. With Covid-19, we started calling many of these workers “essential.” Yet one wouldn’t know it from their wages. The median wage for a meatpacker is $14 an hour; it’s only $12 for a health aide.
Imagine a single parent working as a bank teller and earning the median hourly wage for that job of $15.02. At 40 hours a week, she would make $2,580 a month — $494 less than her rent, child care, transportation, food, and medical expenses. That’s not even counting personal care, clothing, leisure, cable/phone bills, housekeeping supplies, and unexpected expenses such as a broken cell phone. And remember, many service workers don’t even get 40 hours a week, so their annual pay is much lower. The Brookings Institution estimates that 53 million Americans make less than $17,950 a year.
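The arithmetic behind this example can be checked directly. A quick sketch follows; the hourly wage, the $2,580 monthly figure, and the $494 shortfall come from the text, while the 52-week annualization is an assumption:

```python
# Worked numbers for the bank-teller example above.
# The hourly wage, monthly pay, and $494 shortfall are the
# article's figures; the 52-week annualization is an assumption.
hourly_wage = 15.02
hours_per_week = 40

weekly_pay = hourly_wage * hours_per_week      # ~600.80
monthly_pay = round(weekly_pay * 52 / 12, 2)   # ~2603.47 on a 52-week year
# The article's $2,580/month implies roughly a 4.29-week month instead.

article_monthly_pay = 2580
shortfall = 494  # amount by which rent, child care, etc. exceed monthly pay
monthly_expenses = article_monthly_pay + shortfall
print(monthly_expenses)  # 3074
```

Either way the annualization is counted, the household's baseline expenses (about $3,074 a month on the article's figures) outrun full-time pay at this wage.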
Many live so close to the edge that they rely on government assistance. Twenty-five million working people and families received $61 billion in Earned Income Tax Credits in 2019. Forty percent of Americans, disproportionately Black, cannot absorb an unexpected $400 expense. Such families have no cushion when the full impact of Covid-19 hits, as has been made clear by the long lines at food banks.
This problem won’t be solved by upskilling or improving education — the current focus of the President’s Workforce Advisory Board. Future job growth is expected largely in low-wage jobs such as health aides, food and cleaning services, and laborer occupations. Most of the top 20 fastest-growing occupations — 55% of projected job growth — pay below the median wage. Think of the U.S. economy as an enormous ship with a hole in its hull. Those in the lower decks are at risk of drowning. Upskilling may move some of them to a dry deck, but there isn’t room there for all, and, anyway, the ship is still sinking. We need to fix the hole right now so no one drowns.
In those lower decks, the people most at risk of drowning are Black and Hispanic Americans, who are disproportionately represented in low-wage jobs. Black workers make up 25% and 37% of two of the fastest-growing (but low-wage) occupations: personal care aides and home health aides. Company leaders who have spoken against racial injustice, committed funds to address it, or hired chief diversity and inclusion officers should look at where people of color work in their company, as employees or contractors, and make sure their jobs allow them to live with dignity.
The Vicious Cycle of Poverty Hurts U.S. Society and Business
Low-wage workers live in a vicious cycle that prevents them from moving up. Many work multiple jobs. The associated stress undermines mental and physical health. Indeed, that stress lowers cognitive functioning, creating a “bandwidth tax” equal to a loss of 13 IQ points. Performance suffers as it is harder to keep up good attendance, focus on the job, be productive, and do your best for customers or coworkers. Unsurprisingly, these workers find it hard to climb the ladder of opportunity that this country has historically provided.
My research shows that the vicious cycle for low-wage workers is also a vicious cycle for their companies. Poor attendance and high turnover lead to operational problems that undermine sales and profits. Reduced profits, in turn, prevent companies from investing more in their workers, causing yet more instability for the companies and their workers. I’ve observed racial tensions to be worse at companies operating in this vicious cycle. People feel less respect and treat others, including customers, with less respect. Managers are busy fighting fires rather than leading their people. I’ve also observed once-successful companies go out of business at least partly on account of this vicious cycle.
If there ever was a lose-lose for workers and companies, bad jobs with low wages are it.
What we often fail to see is that wages are not just a neutral market valuation of what a job is worth. The wages themselves affect the quality of that work and therefore the worker’s productive value and career prospects. Put a capable person in a position where she must work two low-wage jobs — with uncertainty about her schedule, fear that she will not make rent, and little or no support from management to do a really good job — and she will likely resign herself to mediocre or poor performance.
This vicious cycle can be reversed. We already observe that where the minimum wage increases, workers’ well-being improves — with no negative effect on employment. Workers have fewer unmet medical needs, better nutrition, less smoking, less child neglect, fewer low-birth-weight babies, and fewer teen births. With more income, they spend more money and rely less on government benefits — all positives for the economy. What’s more, my colleague Hazhir Rahmandad and I find that even in low-cost service settings, paying higher wages and treating workers with respect and dignity can be profit-maximizing. Good jobs also create a competitive advantage by enabling firms to differentiate and to adapt better to change.
Here are six steps that business leaders can and should take to address these inequities.
1. Recognize the problem and make a collective pledge to address it. When it comes to the role of businesses as employers, the focus tends to be on upskilling workers through education and public-private partnerships. That’s important, but the inherent assumption that the problem is on the supply side (too few qualified workers) misses the more important problem on the demand side (too few good jobs).
Business leaders need to publicly recognize the problem on the demand side and pledge to create well-paying jobs, without which we cannot maintain and expand a strong middle class. Part of what drove leaders of companies such as GM, GE, Coca-Cola, and Kodak to commit so effectively to creating well-paying jobs during World War II was fear of socialism and communism.
The stakes for capitalism are similar now. Many Americans are losing faith in capitalism and market economies, believing that capitalism inherently drives not only inequality, but also injustice. Even before the pandemic, 70% of Americans believed the economic system was rigged against them.
2. Commit to raise low wages. All large companies should calculate the distribution of annual frontline take-home pay and consider the budgetary needs of their workers. (With so many working part-time or irregular hours, the hourly wage itself doesn’t tell the story.) How many are below the living wage in their area? (When we at the nonprofit Good Jobs Institute share these data with executives, they are often surprised.) How many are single parents or students? How many families have two wage-earners? How many full-timers rely on welfare?
With these data, a company can make commitments that are reasonable but bold — the most it can manage, not the least it can get away with. If profit margins are high and low-wage workers are only a small part of costs, doing the right thing is easy. That’s why, in 2015, after calculating the potential benefits from higher wages (e.g., lower turnover, better customer service from employees who can focus on the job), Aetna could raise its minimum wage from $12 an hour to $16 an hour. Last March, Bank of America raised its minimum wage to $20 an hour. For others, like Walmart, raising wages requires systemic change. But it can be done and, if done right, can help both employees and employers win.
During our current economic crisis, such a change may feel impossible. But these commitments can be made over time. What matters is that companies consider cost of living, not just what others pay. Even more effective would be for large companies (or industry associations such as the National Retail Federation) to encourage other firms in their communities and industries to follow their lead. New research shows that raising wages is less competitively costly than companies may realize because once a large company raises wages, others in the area follow suit.
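The distribution analysis that Step 2 calls for can be sketched in a few lines. This is a minimal illustration, not the Good Jobs Institute's actual methodology; the worker records and living-wage figures below are entirely hypothetical:

```python
# A minimal sketch of the Step 2 analysis: compare each frontline
# worker's annual take-home pay with the living wage for their area.
# All records and living-wage figures below are hypothetical.
workers = [
    {"id": 1, "annual_take_home": 21_000, "area": "Seattle"},
    {"id": 2, "annual_take_home": 34_500, "area": "Seattle"},
    {"id": 3, "annual_take_home": 18_200, "area": "Spokane"},
]
living_wage = {"Seattle": 33_000, "Spokane": 28_000}  # assumed annual figures

below = [w for w in workers if w["annual_take_home"] < living_wage[w["area"]]]
share_below = len(below) / len(workers)
print(f"{share_below:.0%} of frontline workers earn below a living wage")
```

The same records, extended with hours, race, and gender fields, would also yield the take-home-pay buckets that the disclosure step later recommends reporting.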
3. Provide career paths for low-wage workers. Companies such as Costco and QuikTrip — which offer careers, not merely jobs — aim for all frontline managers to be promoted from within. Committing to such a policy forces them to invest in the development of their workers. Equally important is factoring gender and race into promotion decisions. In retail, for example, 18% of cashiers and 12% of salespeople but only 10% of frontline supervisors are Black.
4. Disclose pay and turnover data. Revealing annual turnover and the annual take-home pay distribution — not just the average or median — by race and gender might be uncomfortable but will help drive conversations with the board and investors. (Intel is one company that discloses annual take-home pay buckets by race and gender.) Such transparency will show who’s operating in the vicious cycle described above. Quantitative benchmarks can help create peer pressure and enable customers, communities, and investors to track change at different companies. Benchmarks can also help people such as my students, many of whom seek employers that take care of their workers, decide where to take their considerable talents when they graduate.
5. Involve workers in technology decisions that affect their work. Too often, the people who do the work are excluded from such decisions — to the detriment of companies. Workers with frontline knowledge can help companies be smarter about choosing, deploying, and scaling technologies. They can also help identify technologies that would complement people rather than “so-so” technologies that replace people but don’t even improve productivity. Companies that commit to Step 2 can justify higher wages by making better use of the talent that they already have.
6. Drive public policies that improve workers’ well-being and the economy. Some business leaders are already recommending higher minimum wages. Benefits such as paid sick leave, smart-scheduling legislation that can improve stability for workers and companies, and government-sponsored child care for low-income families could also improve workers’ well-being and the economy. Changes to the tax code to favor investing in workers rather than automation would reduce incentives to invest in “so-so” technologies. Business leaders should be seen and heard advocating for all of these measures.
Even so, history shows that we can’t rely on business leaders alone. They need a context that pushes for good jobs. Apart from worker power and smart public policy, investors and business schools have important roles.
Investors. In my years of working with companies, I’ve seen how infrequently their leaders take a long-term view. Even when they come to believe in the financial and competitive reasons to offer good jobs, some shy away from investing in their employees. Why? They fear a dip in profitability. “Anything that doesn’t yield a return in a year doesn’t make it,” I hear.
This short-term focus is driven, in part, by executives’ short tenures (the median CEO tenure in public companies was five years in 2017, the most recent data available) and, in part, by their fear that investors will punish them. Indeed, when Walmart began investing in its workers in 2014, its stock price took a hit. If investors were less quick to punish a company for “profit-draining” labor investments and more interested in the truly profit-draining costs of employee turnover (and in its drivers, such as take-home pay, schedule stability, and career paths), we’d see more companies prioritizing investment in people — and doing better.
Business schools. For decades, schools taught that the corporation’s duty is to maximize shareholder value. It is time to teach how to make a decent profit decently. When it comes to exemplary CEOs, we should celebrate those like Costco’s Jim Sinegal, who created a competitive business without sacrificing the financial well-being of frontline employees.
We should also provide students with an opportunity to develop compassion for the people they will lead. While teaching at MIT’s Sloan School of Management and Harvard Business School, I have seen plenty of programs that allow students to spend time with business and political leaders but none that encourage them to spend time in a low-wage job. That would help them see past the stigmas of low-wage work and workers, witness how poor corporate decisions get in the way of delivering value to customers and good jobs to employees, and cultivate respect for the challenging work that takes place across all levels of the company.
The problem is vast, but the first step is for the job creators to reevaluate their assumptions about the jobs they create. It is easy to create a job that treats people like robots and justify it with the assumption that workers lack skills and abilities. But as we have seen, that attitude sows social unrest and puts a ceiling on the prospects of hardworking Americans and their communities.
If we are to live in a just world, we need to remember what Martin Luther King, Jr. told the sanitation workers in a crowded Memphis church in 1968: “So often we overlook the work and the significance of those who are not in professional jobs, of those who are not in the so-called big jobs. But let me say to you tonight, that whenever you are engaged in work that serves humanity and is for the building of humanity, it has dignity, and it has worth.”
From the September–October 2020 Issue
Many White people deny the existence of racism against people of color because they assume that racism is defined by deliberate actions motivated by malice and hatred. However, racism can occur without conscious awareness or intent. When defined simply as differential evaluation or treatment based solely on race, regardless of intent, racism occurs far more frequently than most White people suspect.
As intractable as it seems, racism in the workplace can be effectively addressed. Because organizations are small, autonomous entities that afford leaders a high level of control over norms and policies, they are ideal sites for promoting racial equity.
Companies should move through the five stages of a process called PRESS: (1) Problem awareness, (2) Root-cause analysis, (3) Empathy, or level of concern about the problem and the people it afflicts, (4) Strategies for addressing the problem, and (5) Sacrifice, or willingness to invest the time, energy, and resources necessary for strategy implementation.
Intractable as it seems, the problem of racism in the workplace can be effectively addressed with the right information, incentives, and investment. Corporate leaders may not be able to change the world, but they can certainly change their world. Organizations are relatively small, autonomous entities that afford leaders a high level of control over cultural norms and procedural rules, making them ideal places to develop policies and practices that promote racial equity. In this article, I’ll offer a practical road map for making profound and sustainable progress toward that goal.
I’ve devoted much of my academic career to the study of diversity, leadership, and social justice, and over the years I’ve consulted on these topics with scores of Fortune 500 companies, federal agencies, nonprofits, and municipalities. Often, these organizations have called me in because they are in crisis and suffering—they just want a quick fix to stop the pain. But that’s akin to asking a physician to write a prescription without first understanding the patient’s underlying health condition. Enduring, long-term solutions usually require more than just a pill. Organizations and societies alike must resist the impulse to seek immediate relief for the symptoms, and instead focus on the disease. Otherwise they run the risk of a recurring ailment.
To effectively address racism in your organization, it’s important to first build consensus around whether there is a problem (most likely, there is) and, if so, what it is and where it comes from. If many of your employees do not believe that racism against people of color exists in the organization, or if feedback surfacing through various communication channels shows that Whites feel they are the real victims of discrimination, then diversity initiatives will be perceived as the problem, not the solution. This is one of the reasons such initiatives are frequently met with resentment and resistance, often by mid-level managers. Beliefs, not reality, are what determine how employees respond to efforts taken to increase equity. So the first step is getting everyone on the same page as to what the reality is and why it is a problem for the organization.
But there’s much more to the job than just raising awareness. Effective interventions involve many stages, which I’ve incorporated into a model I call PRESS. The stages, which organizations must move through sequentially, are: (1) Problem awareness, (2) Root-cause analysis, (3) Empathy, or level of concern about the problem and the people it afflicts, (4) Strategies for addressing the problem, and (5) Sacrifice, or willingness to invest the time, energy, and resources necessary for strategy implementation. Organizations going through these stages move from understanding the underlying condition, to developing genuine concern, to focusing on correction.
Let’s now have a closer look at these stages and examine how each informs, at a practical level, the process of working toward racial equity.
To a lot of people, it may seem obvious that racism continues to oppress people of color. Yet research consistently reveals that many Whites don’t see it that way. For example, a 2011 study by Michael Norton and Sam Sommers found that on the whole, Whites in the United States believe that systemic anti-Black racism has steadily decreased over the past 50 years—and that systemic anti-White racism (an implausibility in the United States) has steadily increased over the same time frame. The result: As a group, Whites believe that there is more racism against them than against Blacks. Other recent surveys echo Sommers and Norton’s findings, one revealing, for example, that 57% of all Whites and 66% of working-class Whites consider discrimination against Whites to be as big a problem as discrimination against Blacks and other people of color. These beliefs are important, because they can undermine an organization’s efforts to address racism by weakening support for diversity policies. (Interestingly, surveys taken since the George Floyd murder indicate an increase in perceptions of systemic racism among Whites. But it’s too soon to tell whether those surveys reflect a permanent shift or a temporary uptick in awareness.)
Even managers who recognize racism in society often fail to see it in their own organizations. For example, one senior executive told me, “We don’t have any discriminatory policies in our company.” However, it is important to recognize that even seemingly “race neutral” policies can enable discrimination. Other executives point to their organizations’ commitment to diversity as evidence for the absence of racial discrimination. “Our firm really values diversity and making this a welcoming and inclusive place for everybody to work,” another leader remarked.
The real challenge for organizations is not figuring out “What can we do?” but rather “Are we willing to do it?”
Despite these beliefs, many studies in the 21st century have documented that racial discrimination is prevalent in the workplace, and that organizations with strong commitments to diversity are no less likely to discriminate. In fact, research by Cheryl Kaiser and colleagues has demonstrated that the presence of diversity values and structures can actually make matters worse, by lulling an organization into complacency and making Blacks and ethnic minorities more likely to be ignored or harshly treated when they raise valid concerns about racism.
Many White people deny the existence of racism against people of color because they assume that racism is defined by deliberate actions motivated by malice and hatred. However, racism can occur without conscious awareness or intent. When defined simply as differential evaluation or treatment based solely on race, regardless of intent, racism occurs far more frequently than most White people suspect. Let’s look at a few examples.
In a well-publicized résumé study by the economists Marianne Bertrand and Sendhil Mullainathan, applicants with White-sounding names (such as Emily Walsh) received, on average, 50% more callbacks for interviews than equally qualified applicants with Black-sounding names (such as Lakisha Washington). The researchers estimated that just being White conferred the same benefit as an additional eight years of work experience—a dramatic head start over equally qualified Black candidates.
Research shows that people of color are well-aware of these discriminatory tendencies and sometimes try to counteract them by masking their race. A 2016 study by Sonia Kang and colleagues found that 31% of the Black professionals and 40% of the Asian professionals they interviewed admitted to “Whitening” their résumés, either by adopting a less “ethnic” name or omitting extracurricular experiences (a college club membership, for instance) that might reveal their racial identities.
These findings raise another question: Does Whitening a résumé actually benefit Black and Asian applicants, or does it disadvantage them when applying to organizations seeking to increase diversity? In a follow-up experiment, Kang and her colleagues sent Whitened and non-Whitened résumés of Black or Asian applicants to 1,600 real-world job postings across various industries and geographical areas in the United States. Half of these job postings were from companies that expressed a strong desire to seek diverse candidates. They found that Whitening résumés by altering names and extracurricular experiences increased the callback rate from 10% to nearly 26% for Blacks, and from about 12% to 21% for Asians. What’s particularly unsettling is that a company’s stated commitment to diversity failed to diminish this preference for Whitened résumés.
A Road Map for Racial Equity
Organizations move through these stages sequentially, first establishing an understanding of the underlying condition, then developing genuine concern, and finally focusing on correcting the problem.
This is a very small sample of the many studies that have confirmed the prevalence of racism in the workplace, all of which underscore the fact that people’s beliefs and biases must be recognized and addressed as the first step toward progress. Although some leaders acknowledge systemic racism in their organizations and can skip step one, many may need to be convinced that racism persists, despite their “race neutral” policies or pro-diversity statements.
Understanding an ailment’s roots is critical to choosing the best remedy. Racism can have many psychological sources—cognitive biases, personality characteristics, ideological worldviews, psychological insecurity, perceived threat, or a need for power and ego enhancement. But most racism is the result of structural factors—established laws, institutional practices, and cultural norms. Many of these causes do not involve malicious intent. Nonetheless, managers often misattribute workplace discrimination to the character of individual actors—the so-called bad apples—rather than to broader structural factors. As a result, they roll out trainings to “fix” employees while dedicating relatively little attention to what may be a toxic organizational culture, for example. It is much easier to pinpoint and blame individuals when problems arise. When police departments face crises related to racism, the knee-jerk response is to fire the officers involved or replace the police chief, rather than examining how the culture licenses, or even encourages, discriminatory behavior.
Appealing to circumstances beyond one’s control is another way to exonerate deeply embedded cultural or institutional practices that are responsible for racial disparities. For example, an oceanographic organization I worked with attributed its lack of racial diversity to an insurmountable pipeline problem. “There just aren’t any Black people out there studying the migration patterns of the humpback whale,” one leader commented. Most leaders were unaware of the National Association of Black Scuba Divers, an organization boasting thousands of members, or of Hampton University, a historically Black college on the Chesapeake Bay, which awards bachelor’s degrees in marine and environmental science. Both were entities that could source Black candidates for the job, especially given that the organization only needed to fill dozens, not thousands, of openings.
A Fortune 500 company I worked with cited similar pipeline problems. Closer examination revealed, however, that the real culprit was the culture-based practice of promoting leaders from within the organization—which already had low diversity—rather than conducting a broader industry-wide search when leadership positions became available. The larger lesson here is that an organization’s lack of diversity is often tied to inadequate recruitment efforts rather than an empty pipeline. Progress requires a deeper diagnosis of the routine practices that drive the outcomes leaders wish to change.
To help managers and employees understand how being embedded within a biased system can unwittingly influence outcomes and behaviors, I like to ask them to imagine being fish in a stream. In that stream, a current exerts force on everything in the water, moving it downstream. That current is analogous to systemic racism. If you do nothing—just float—the current will carry you along with it, whether you’re aware of it or not. If you actively discriminate by swimming with the current, you will be propelled faster. In both cases, the current takes you in the same direction. From this perspective, racism has less to do with what’s in your heart or mind and more to do with how your actions or inactions amplify or enable the systemic dynamics already in place.
Workplace discrimination often comes from well-educated, well-intentioned, open-minded, kindhearted people who are just floating along, severely underestimating the tug of the prevailing current on their actions, positions, and outcomes. Anti-racism requires swimming against that current, like a salmon making its way upstream. It demands much more effort, courage, and determination than simply going with the flow.
In short, organizations must be mindful of the “current,” or the structural dynamics that permeate the system, not just the “fish,” or individual actors that operate within it.
Once people are aware of the problem and its underlying causes, the next question is whether they care enough to do something about it. There is a difference between sympathy and empathy. Many White people experience sympathy, or pity, when they witness racism. But what’s more likely to lead to action in confronting the problem is empathy—experiencing the same hurt and anger that people of color are feeling. People of color want solidarity—and social justice—not sympathy, which simply quiets the symptoms while perpetuating the disease.
If your employees don’t believe that racism exists in the company, then diversity initiatives will be perceived as the problem, not the solution.
One way to increase empathy is through exposure and education. The video of George Floyd’s murder exposed people to the ugly reality of racism in a visceral, protracted, and undeniable way. Similarly, in the 1960s, northern Whites witnessed innocent Black protesters being beaten with batons and blasted with fire hoses on television. What best prompts people in an organization to register concern about racism in their midst, I’ve found, are the moments when their non-White coworkers share vivid, detailed accounts of the negative impact that racism has on their lives. Managers can raise awareness and empathy through psychologically safe listening sessions—for employees who want to share their experiences, without feeling obligated to do so—supplemented by education and experiences that provide historical and scientific evidence of the persistence of racism.
For example, I spoke with Mike Kaufmann, CEO of Cardinal Health—the 16th largest corporation in America—who credited a visit to the Equal Justice Initiative’s National Memorial for Peace and Justice, in Montgomery, Alabama, as a pivotal moment for the company. While diversity and inclusion initiatives have been a priority for Kaufmann and his leadership team for well over a decade, their focus and conversations related to racial inclusion increased significantly during 2019. As he expressed to me, “Some Americans think when slavery ended in the 1860s that African Americans have had an equal opportunity ever since. That’s just not true. Institutional systemic racism is still very much alive today; it’s never gone away.” Kaufmann is planning a comprehensive education program, which will include a trip for executives and other employees to visit the museum, because he is convinced that the experience will change hearts, open eyes, and drive action and behavioral change.
Empathy is critical for making progress toward racial equity because it affects whether individuals or organizations take any action and if so, what kind of action they take. There are at least four ways to respond to racism: join in and add to the injury, ignore it and mind your own business, experience sympathy and bake cookies for the victim, or experience empathic outrage and take measures to promote equal justice. The personal values of individual employees and the core values of the organization are two factors that affect which actions are undertaken.
After the foundation has been laid, it’s finally time for the “what do we do about it” stage. Most actionable strategies for change address three distinct but interconnected categories: personal attitudes, informal cultural norms, and formal institutional policies.
To most effectively combat discrimination in the workplace, leaders should consider how they can run interventions on all three of these fronts simultaneously. Focusing only on one is likely to be ineffective and could even backfire. For example, implementing institutional diversity policies without any attempt to create buy-in from employees is likely to produce a backlash. Likewise, focusing just on changing attitudes without also establishing institutional policies that hold people accountable for their decisions and actions may generate little behavioral change among those who don’t agree with the policies. Establishing an anti-racist organizational culture, tied to core values and modeled by behavior from the CEO and other top leaders at the company, can influence both individual attitudes and institutional policies.
Just as there is no shortage of effective strategies for losing weight or promoting environmental sustainability, there are ample strategies for reducing racial bias at the individual, cultural, and institutional levels. The hard part is getting people to actually adopt them. Even the best strategies are worthless without implementation.
Fairness requires treating people equitably—which may entail treating people differently, but in a way that makes sense.
I’ll discuss how to increase commitment to execution in the final section. But before I do, I want to give a specific example of an institutional strategy that works. It comes from Massport, a public organization that owns Boston Logan International Airport and commercial lots worth billions of dollars. When its leaders decided they wanted to increase diversity and inclusion in real estate development in Boston’s booming Seaport District, they decided to leverage their land to do it. Massport’s leaders made formal changes to the selection criteria determining who is awarded lucrative contracts to build and operate hotels and other large commercial buildings on their parcels. In addition to evaluating three traditional criteria—the developer’s experience and financial capital, Massport’s revenue potential, and the project’s architectural design—they added a fourth criterion called “comprehensive diversity and inclusion,” which accounted for 25% of the proposal’s overall score, the same as the other three. This forced developers not only to think more deeply about how to create diversity but also to go out and do it. Similarly, organizations can integrate diversity and inclusion into managers’ scorecards for raises and promotions—if they think it’s important enough. I’ve found that the real barrier to diversity is not figuring out “What can we do?” but rather “Are we willing to do it?”
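The effect of adding an equally weighted fourth criterion can be seen in a simple weighted-score calculation. This is an illustrative sketch only: the criterion names, the 0–100 scale, and the sample scores are assumptions for illustration, not Massport’s actual evaluation rubric.

```python
# Illustrative sketch of an equally weighted four-criterion rubric.
# Criterion names and the 0-100 scoring scale are assumed for illustration.

WEIGHTS = {
    "experience_and_capital": 0.25,
    "revenue_potential": 0.25,
    "architectural_design": 0.25,
    "diversity_and_inclusion": 0.25,  # the added fourth criterion
}

def proposal_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-100) into a weighted total."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# A proposal that excels on the traditional criteria but neglects
# diversity can now lose to a solid proposal that takes it seriously.
a = proposal_score({"experience_and_capital": 95, "revenue_potential": 90,
                    "architectural_design": 92, "diversity_and_inclusion": 40})
b = proposal_score({"experience_and_capital": 85, "revenue_potential": 82,
                    "architectural_design": 84, "diversity_and_inclusion": 90})
# b > a: the diversity criterion changes which proposal wins
```

Because the fourth criterion carries the same 25% weight as each of the others, developers cannot treat it as a box to check; it moves the bottom line of the score.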
Many organizations that desire greater diversity, equity, and inclusion may not be willing to invest the time, energy, resources, and commitment necessary to make it happen. Actions are often inhibited by the assumption that achieving one desired goal requires sacrificing another desired goal. But that’s not always the case. Although nothing worth having is completely free, racial equity often costs less than people may assume. Seemingly conflicting goals or competing commitments are often relatively easy to reconcile—once the underlying assumptions have been identified.
As a society, are we sacrificing public safety and social order when police routinely treat people of color with compassion and respect? No. In fact, it’s possible that kinder policing will actually increase public safety. Famously, the city of Camden, New Jersey, witnessed a 40% drop in violent crime after it reformed its police department, in 2012, and put a much greater emphasis on community policing.
The assumptions of sacrifice have enormous implications for the hiring and promotion of diverse talent, for at least two reasons. First, people often assume that increasing diversity means sacrificing principles of fairness and merit, because it requires giving “special” favors to people of color rather than treating everyone the same. But consider the classic fence illustration: three people of equal height stand on ground of different levels, trying to watch a game over a fence, with wooden crates available to stand on. Which of the two scenarios appears more “fair”: the one on the left, in which each person receives an identical crate, or the one on the right, in which the crates are distributed according to need?
People often assume that fairness means treating everyone equally, or exactly the same—in this case, giving each person one crate of the same size. In reality, fairness requires treating people equitably, which may entail treating people differently, but in a way that makes sense. If you chose the scenario on the right, you subscribe to that notion.
Of course, what is “sensible” depends on the context and the perceiver. Does it make sense for someone with a physical disability to have a parking space closer to a building? Is it fair for new parents to have six weeks of paid leave to be able to care for their baby? Is it right to allow active-duty military personnel to board an airplane early to express gratitude for their service? My answer is yes to all three questions, but not everyone will agree. For this reason, equity presents a greater challenge to gaining consensus than equality. In the first panel of the fence scenario, everybody gets the same number of crates. That’s a simple solution. But is it fair?
In thinking about fairness in the context of American society, leaders must consider the unlevel playing fields and other barriers that exist—provided they are aware of systemic racism. They must also have the courage to make difficult or controversial calls. For example, it might make sense to have an employee resource group for Black employees but not White employees. Fair outcomes may require a process of treating people differently. To be clear, different treatment is not the same as “special” treatment—the latter is tied to favoritism, not equity.
There is no test or interview that can invariably identify the “best candidate.” Instead, hire good people and invest in their potential.
One leader who understands the difference is Maria Klawe, the president of Harvey Mudd College. She concluded that the only way to increase the representation of women in computer science was to treat men and women differently. Men and women tended to have different levels of computing experience prior to entering college—different levels of experience, not intelligence or potential. Society treats boys and girls differently throughout secondary school—encouraging STEM subjects for boys but liberal arts subjects for girls—creating gaps in experience. To compensate for this gap created by bias in society, the college designed two introductory computer-science tracks: one for students with no computing experience and one for students with some computing experience in high school. The no-experience course tended to be about 50% women, whereas the some-experience course was predominantly male. By the end of the semester, the students in both courses were on par with one another. Through this and other equity-based interventions, Klawe and her team were able to dramatically increase the representation of women and minority computer-science majors and graduates.
The second assumption many people have is that increasing diversity requires sacrificing high quality and standards. Consider again the fence scenario. All three people have the same height or “potential.” What varies is the level of the field and the fence—apt metaphors for privilege and discrimination, respectively. Because the person on the far left has lower barriers to access, does it make sense to treat the other two people differently to compensate? Do we have an obligation to do so when differences in outcomes are caused by the field and the fence, not someone’s height? Maria Klawe sure thought so. How much human potential is left unrealized within organizations because we do not recognize the barriers that exist?
Finally, it’s important to understand that quality is difficult to measure with precision. There is no test, instrument, survey, or interviewing technique that will enable you to invariably predict who the “best candidate” will be. The NFL draft illustrates the difficulty of predicting future job performance: Despite large scouting departments, plentiful video of prior performance, and extensive tryouts, almost half of first-round picks turn out to be busts. This may be true for organizations as well. Research by Sheldon Zedeck and colleagues on corporate hiring processes has found that even the best screening or aptitude tests predict only 25% of intended outcomes, and that candidate quality is better reflected by “statistical bands” than by a strict rank ordering. This means that there may be absolutely no difference in quality between the candidate who scored first out of 50 people and the candidate who scored eighth.
The big takeaway here is that “sacrifice” may actually involve giving up very little. If we look at people within a band of potential and choose the diverse candidate (for example, number eight) over the top scorer, we haven’t sacrificed quality at all—statistically speaking—even if people’s intuitions lead them to conclude otherwise.
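The banding idea described above can be made concrete with a short sketch: treat every candidate whose score falls within the test’s margin of error of the top score as statistically tied. This is an illustration of the general concept, not Zedeck’s actual method; the band width and sample scores are assumptions.

```python
# Illustrative sketch of "statistical banding" in candidate screening.
# Candidates within the measurement error of the top score are treated
# as statistically indistinguishable. The band width (sem) and the
# sample scores below are assumed values for illustration.

def top_band(scores: list[float], sem: float) -> list[int]:
    """Return indices of candidates within `sem` points of the best score."""
    best = max(scores)
    return [i for i, s in enumerate(scores) if best - s <= sem]

scores = [91, 90, 89, 88, 87, 86, 85, 84, 70, 62]  # rank-ordered test scores
band = top_band(scores, sem=7.0)
# The top eight candidates fall inside the band: choosing the eighth
# scorer over the first sacrifices nothing, statistically speaking.
```

The design point is that a strict rank ordering manufactures distinctions the instrument cannot actually support; selecting any candidate inside the band is consistent with the measurement.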
Managers should abandon the notion that a “best candidate” must be found. That kind of search amounts to chasing unicorns. Instead, they should focus on hiring well-qualified people who show good promise, and then should invest time, effort, and resources into helping them reach their potential.
The tragedies and protests we have witnessed this year across the United States have increased public awareness and concern about racism as a persistent problem in our society. The question we now must confront is whether, as a nation, we are willing to do the hard work necessary to change widespread attitudes, assumptions, policies, and practices. Unlike society at large, the workplace very often requires contact and cooperation among people from different racial, ethnic, and cultural backgrounds. Therefore, leaders should host open and candid conversations about how their organizations are doing at each of the five stages of the model—and use their power to press for profound and perennial progress.

A version of this article appeared in the September–October 2020 issue of Harvard Business Review.
“Your work is going to fill a large part of your life and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do.”
– Steve Jobs
“The big secret in life is that there is no big secret. Whatever your goal, you can get there if you’re willing to work.”
– Oprah Winfrey
Happy Labor Day #laborday #hardwork #greatwork #goals #chestereastside
We need a search engine/agent/service that analyzes and categorizes content and ranks and presents it based upon the veracity of what it contains. Today, Google ranks content based upon what’s popular rather than what’s true. Someone recently pointed out that if they googled “Sandy Hook,” a result like https://nymag.com/intelligencer/2016/09/the-sandy-hook-hoax.html would pop up at the top of the search. Is that what we want? I don’t think so. Content could instead be searched, analyzed, and categorized according to the truth or veracity of its content and authors. Presenting what’s true rather than what’s merely popular has never been more important in fighting the disinformation so prevalent on the Internet and in society. Surely the technical means exist to do this type of analysis, categorization, and presentation. We need to figure out how to do it and then do it. Just one viewpoint.
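The re-ranking step of such a system can at least be sketched, even though estimating veracity at web scale is the genuinely hard part. In this hypothetical sketch, each result already carries a popularity signal and a veracity score (both assumed inputs, perhaps from fact-checking or source-reliability data), and the ranker simply weights veracity far more heavily than popularity.

```python
# A minimal sketch of re-ranking search results by veracity rather than
# popularity alone. The fields and the 0.8/0.2 weighting are assumptions
# for illustration; producing the veracity scores themselves is the hard,
# unsolved problem the post is pointing at.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    popularity: float  # e.g., normalized click/link signal, 0-1
    veracity: float    # e.g., source-reliability / fact-check score, 0-1

def rank_by_veracity(results: list[Result], v_weight: float = 0.8) -> list[Result]:
    """Order results mostly by veracity, with popularity as a tiebreaker."""
    def key(r: Result) -> float:
        return v_weight * r.veracity + (1 - v_weight) * r.popularity
    return sorted(results, key=key, reverse=True)

results = [
    Result("hoax-theory.example", popularity=0.9, veracity=0.1),
    Result("news-report.example", popularity=0.5, veracity=0.9),
]
ranked = rank_by_veracity(results)
# The well-sourced but less popular report now outranks the viral hoax.
```

The weighting makes the trade-off explicit: with `v_weight=0.8`, a highly popular page with low veracity scores 0.26 while a moderately popular, well-sourced page scores 0.82.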
A lack of adequate staff training and communication is disrupting the way the VA is able to productively use its variety of health information exchanges.
– Editor’s Note 8/17/2020: This article has been updated with a statement from DirectTrust.
A recent report from the Office of Inspector General (OIG) identified training challenges, a need for more community partners, inconsistent use of community coordinators, and technology issues that must be addressed to improve the Department of Veterans Affairs’ (VA) ability to effectively use its health information exchanges and share patient data.
However, OIG found that respondents who utilized either VA Exchange or VA Direct cited successes and said VA’s goals would be more attainable with more community participation.
Patient data exchange and interoperability is crucial for the VA to enhance patient care. Not only does the Veterans Health Information Exchange (VHIE) program use the VA Exchange and VA Direct, which is directly connected to DirectTrust, but it also has community partnerships to promote patient data exchange and accessibility.
OIG interviewed and surveyed staff at the 48 Level 2 and 3 Veterans Health Administration (VHA) facilities. The group also interviewed the VHIE Program Office, while meeting with the Office of Information Technology, Office of Community Care, Office of Rural Health, Cerner Corporation, and two unnamed state HIEs to gain a variety of differing perspectives.
According to the VHIE Program Office, 140 VA facilities have access to VA Exchange and VA Direct, but only 28 have implemented VA Direct. Facilities that had not implemented VA Direct either had not been adequately trained, lacked community partners that used DirectTrust, or were using other HIE options.
“Expansion of VA Direct usage to all facilities would increase the instances of health information sharing and improve the timeliness of health information exchange while efforts continue with development of community partnerships through VA Exchange,” wrote OIG.
“The VA OIG report provides valuable insight and recommendations on how to enhance the Veterans Health Information Exchange program,” DirectTrust said in a statement.
“DirectTrust is a volunteer-driven membership organization and standards body serving as custodian of the Direct Standard, the foundation of Direct Secure Messaging. As such, DirectTrust is not set up to provide end-user training. Typically, vendors provide training on how to use Direct within their platform, as Direct Secure Messaging is implemented differently across vendor platforms. We’re pleased to report that the DirectTrust EHR Roundtable, in which the VA participates, recognizes the variability in utilization across vendors, and is creating ‘best practices’ guidelines to advance the usability and utilization of Direct Secure Messaging.”
“The VA provides a critical service to our veterans, and coordination of care with community partners is extremely important. DirectTrust values our partnership with the VA, and we look forward to continued expansion and improvement of health information exchange and interoperability throughout all of their facilities and with their community partners.”
But according to additional survey responses and interviews from the 48 VA facilities, OIG found 46 facilities use either VA Exchange or VA Direct, while two facilities do not use either. And of those 48 facilities, 22 said they exchange patient data by mail, fax machine, or scanner.
Respondents noted additional training, an increase in community partners, and an overall better understanding of how to use the health information exchanges as common ways to defeat these HIE challenges.
“In addition, facilities reported technology challenges to viewing community health information through VA Exchange, including the dual sign-on requirement for VHA providers to first sign into the electronic health record and then sign into the Joint Legacy Viewer (JLV) to access community partner patient information,” wrote OIG.
“The JLV data quality was not ideal, information naming and access was not user friendly, and facilities reported a cumbersome process that resulted in delays in finding needed information.”
Currently, VA has two separate contracts that establish community coordination for VHIE, and OIG found 56 community coordinators who work to enhance infrastructure, outreach, and training.
But while there are 56 community coordinators, OIG found that coordinator engagement varied widely, from high involvement to little or no participation. Respondents also noted that once a coordinator leaves the position, communication lapses and training issues typically occur.
“With the addition of more training, communication, and future planned technological changes, VHA could more effectively streamline the continuity of care received by veterans,” OIG wrote.
“Electronic Health Records Modernization should alleviate some of the technology challenges currently experienced with the use of VHIE,” OIG continued. “Cerner reported the implementation of Millennium/Power Chart would eliminate the need for dual sign-in to review community care documents and allow for exchange accesses between VHA, the Department of Defense, and community providers.”
In late April, OIG found major patient safety issues and EHR capability problems centered around the new Electronic Health Record Modernization (EHRM) system and a recent POLITICO report stated VA leaders have acknowledged those issues.
“The OIG made four recommendations to the Under Secretary for Health related to the need for increased utilization of VA Direct, education for staff and veterans on VA Exchange and VA Direct, expansion of community partnerships, and use of contract VHIE community coordinators,” wrote OIG.
OIG said it would follow up with VA until the recommendations are completed or acknowledged.
The Joint Common Foundation aims to help the Department to standardize and secure its data and make it easier to find.
The “factory” that pumps out AI tools for the Pentagon is about to get a new tool of its own, one that leaders of the Joint Artificial Intelligence Center, or JAIC, hope will streamline their production and boost output.
The JAIC has awarded a $100 million contract to Deloitte Consulting to create the Joint Common Foundation, or JCF — basically, a tool to help organize the factory, secure it against intruders, direct its workers, and test its products.
“The Joint Common Foundation will provide an AI development environment to test, validate, and field AI capabilities at scale across the Department of Defense,” the Center said in a statement.
Moreover, the JCF is supposed to help meld various existing programming setups and ease the tasks of building tools that can work across multiple service branches.
“What we currently have is a bunch of disparate AI and ML development environments, duplicative and siloed. A lot of these products in these environments are not interoperable,” said Col. Sang Han, the Center’s infrastructure chief.
The Joint Common Foundation will also catalogue all the data that the Defense Department has that a programmer might need to train an AI system on.
“We currently don’t have a central marketplace. We don’t have a central…repository or catalogue. That’s what we’re going to do so developers can easily search for data, easily search for AI source code, AI and ML [machine learning] models and products in the DoD,” said Han. The goal is to create a place where “developers can easily come in, pick out the part that they need to build their high-performance application.”
The data catalogue is particularly important. The Defense Department produces more data than a Fortune 500 business and does so in a dizzying array of formats, from video and voice feeds to images, spreadsheets, and text, all classified at different levels. But the center exists, in part, to make sure that AI innovations that occur in one part of DoD can be used elsewhere (so long as that use is ethically justified). Some of that data is what Don Bitner, strategy chief for the JAIC’s Joint Common Foundation and Infrastructure Team, describes as “value-add” data. “We’ve done something to it, we’ve ingested it. We’ve probably pared it down,” Bitner said.
The JCF allows the Department to take that data and put it into a place where anyone, with approval, can look it up and find it. “Being able to do that type of work and have standardized training sets to at least let components and services start the work of natural language processing in a chat room, alleviates a lot of the upfront work in presenting data.”
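The central catalogue described above can be sketched as a small searchable index of assets tagged with metadata. This is a hypothetical illustration of the concept, not the JCF’s actual schema: the entry fields (kind, format, classification, tags) and the sample entries are assumptions about what such a catalogue might track.

```python
# A minimal sketch of a searchable asset catalogue of the kind the JCF
# describes: datasets, models, and source code tagged with metadata so
# developers can find reusable parts. Fields and entries are illustrative
# assumptions, not the JCF's actual design.

from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    kind: str            # "dataset", "model", or "source"
    data_format: str     # e.g., "text", "video", "imagery"
    classification: str  # e.g., "unclassified", "secret"
    tags: set[str] = field(default_factory=set)

class Catalog:
    def __init__(self) -> None:
        self._entries: list[CatalogEntry] = []

    def add(self, entry: CatalogEntry) -> None:
        self._entries.append(entry)

    def search(self, kind: str = "", tags: set[str] = frozenset()) -> list[CatalogEntry]:
        """Return entries matching the given kind and containing all tags."""
        hits = self._entries
        if kind:
            hits = [e for e in hits if e.kind == kind]
        if tags:
            hits = [e for e in hits if set(tags) <= e.tags]
        return hits

catalog = Catalog()
catalog.add(CatalogEntry("chat-logs-nlp", "dataset", "text",
                         "unclassified", {"nlp", "chat"}))
catalog.add(CatalogEntry("drone-feed", "dataset", "video",
                         "secret", {"imagery"}))
hits = catalog.search(kind="dataset", tags={"nlp"})
# Finds the standardized chat-log training set and not the video feed.
```

Centralizing the index rather than the data itself is the lighter-weight design: each component keeps custody of its assets, while developers across the Department gain a single place to discover them.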
JAIC officials say that will allow the center to scale up the delivery of new AI tools to the services.