In response to increasing patient demand for telehealth, Kaiser Permanente this week announced the launch of a new “virtual-first” healthcare plan in Washington state.
The plan, which will be available January 1, 2021, directly to employer groups and consumers through Kaiser Foundation Health Plan of Washington, will make telehealth a foundational modality of care for patients with nonurgent issues.
“Virtual care is the health care of today and tomorrow,” said Dr. Paul Minardi, president and executive medical director of Washington Permanente Medical Group, in a statement.
“The pandemic has reinforced the need to provide care in the most convenient, accessible, and safe way for our members, and that’s what Virtual Plus does,” he said.
WHY IT MATTERS
As with other providers, Kaiser Permanente says it has seen a huge uptick in telehealth use since the start of the pandemic. According to the company, it was one of the first healthcare organizations to deliver the majority of care via telemedicine.
The system’s options include its Consulting Nurse Service, Care Chat online messaging, and video and phone visits. About 65% of appointments are now conducted virtually.
The new health plan will allow members to reach out via phone, online chat, video or email for nonurgent issues. According to the company, patients will see the same doctors and clinicians as they would at any Kaiser Permanente facility, with their data available through electronic health records.
Members can get in touch with their clinicians virtually, with the option to come in person for follow-up visits, says the organization.
They have access to Kaiser Permanente pharmacists via telehealth and can get medications delivered in one to two days.
“With this new plan, we are innovating to give our members more convenient care options at a more affordable cost, respecting their choices and preferences,” said Joseph Smith, vice president of sales and business development, Kaiser Foundation Health Plan of Washington, in a statement.
THE LARGER TREND
Although much attention has been paid of late to any upcoming physician fee schedules for telehealth under Medicare and Medicaid, some private payers have also taken strides to center virtual care in the future of their coverage.
This summer, Blue Cross Blue Shield of Tennessee announced that it would make in-network telehealth services permanent.
Telehealth “has opened up another opportunity for people to get care in a different way,” said BCBS Tennessee SVP and Chief Medical Officer Dr. Andrea Willis. “We are continuing this expansion for our commercial population because it feels like it’s the right thing to do.”
ON THE RECORD
“We are excited to offer another care option to our members that continues our commitment of providing high-quality, convenient, and affordable health care,” said Smith.
Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Healthcare IT News is a HIMSS Media publication.
The pandemic’s impact on frontline workers and recent incidents of police brutality have highlighted the urgent need to provide good jobs for people of color. For too long, millions of Americans have been left behind with low wages, few benefits, unstable schedules, and lack of respect and dignity. And for too long, American employers have treated these conditions as an inevitability of doing business rather than a deliberate choice. Based on her research, the author offers six steps that business leaders should take to help bring about economic and racial justice. They will help their firms’ competitiveness as well.
Americans are demanding a reckoning. Incidents of police brutality and structural inequities that have caused the pandemic to hit people of color especially hard are sparking calls for racial justice. The precarious conditions endured by poorly paid frontline workers who have continued to stay on the job during the pandemic have generated calls for economic justice. Each of these forms of injustice has distinct drivers, but they amplify each other and often fall hardest on the same people. As Martin Luther King, Jr. reminded us, economic and racial justice are inextricably linked.
To address this critical moment, we need today’s business leaders to emulate those who, in the midst of World War II, committed to creating good jobs. For too long, millions of Americans have been left behind with low wages, few benefits, unstable schedules, and lack of respect and dignity. And for too long, American employers have treated these conditions as an inevitability of doing business rather than a deliberate choice. But my research shows that bad jobs are a choice, not a necessity, and that offering good jobs is also a choice — even a profit-maximizing choice — yes, even for companies that compete on low cost.
Here I describe why business leaders should care about having too many employees with low-wage “bad jobs” and six steps to begin transforming those jobs into good jobs.
The Central Problem: Too Many People Left Behind
Even in the pre-Covid world of low unemployment, 32% of the U.S. workforce — that’s 46.5 million people — worked in occupations with a median wage of less than $15 an hour. With Covid-19, we started calling many of these workers “essential.” Yet one wouldn’t know it from their wages. The median wage for a meatpacker is $14 an hour; it’s only $12 for a health aide.
Imagine a single parent working as a bank teller and earning the median hourly wage for that job of $15.02. At 40 hours a week, she would make $2,580 a month — $494 less than her rent, child care, transportation, food, and medical expenses. That’s not even counting personal care, clothing, leisure, cable/phone bills, housekeeping supplies, and unexpected expenses such as a broken cell phone. And remember, many service workers don’t even get 40 hours a week, so their annual pay is much lower. The Brookings Institution estimates that 53 million Americans make less than $17,950 a year.
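As a rough check, the budget math above can be sketched in a few lines. The monthly expense figure used here is an assumption chosen only to illustrate a shortfall of roughly the size cited above; it is not a reported number.

```python
# Illustrative sketch of the monthly budget gap described above.
# The expense figure passed in below is hypothetical, not a reported number.

def monthly_budget_gap(hourly_wage: float, hours_per_week: float,
                       monthly_expenses: float) -> float:
    """Return the monthly shortfall (positive means expenses exceed pay)."""
    monthly_pay = hourly_wage * hours_per_week * 52 / 12  # average month
    return monthly_expenses - monthly_pay

# At the cited $15.02/hour and 40 hours/week, against assumed expenses of
# $3,100/month, the gap is roughly $500, the same order as the cited $494.
print(round(monthly_budget_gap(15.02, 40, 3100)))  # → 497
```

The point of the sketch is that the gap is structural: at these wage and hours levels, no amount of budgeting closes it.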
Many live so close to the edge that they rely on government assistance. Twenty-five million working people and families received $61 billion in Earned Income Tax Credits in 2019. Forty percent of Americans, disproportionately Black, cannot absorb an unexpected $400 expense. Such families had no cushion when the full impact of Covid-19 hit, as the long lines at food banks have made clear.
This problem won’t be solved by upskilling or improving education — the current focus of the President’s Workforce Advisory Board. Future job growth is expected largely in low-wage jobs such as health aides, food and cleaning services, and laborer occupations. Most of the top 20 fastest-growing occupations — 55% of projected job growth — pay below the median wage. Think of the U.S. economy as an enormous ship with a hole in its hull. Those in the lower decks are at risk of drowning. Upskilling may move some of them to a dry deck, but there isn’t room there for all, and, anyway, the ship is still sinking. We need to fix the hole right now so no one drowns.
In those lower decks, the people most at risk of drowning are Black and Hispanic Americans, who are disproportionately represented in low-wage jobs. Black workers make up 25% and 37% of two of the fastest-growing (but low-wage) occupations: personal care aides and home health aides. Company leaders who have spoken against racial injustice, committed funds to address racial injustice, or have hired chief diversity and inclusion officers should look at where people of color work in their company, as employees or contractors, and make sure their jobs allow them to live with dignity.
The Vicious Cycle of Poverty Hurts U.S. Society and Business
Low-wage workers live in a vicious cycle that prevents them from moving up. Many work multiple jobs. The associated stress undermines mental and physical health. Indeed, that stress lowers cognitive functioning, creating a “bandwidth tax” equal to a loss of 13 IQ points. Performance suffers as it becomes harder to keep up good attendance, focus on the job, be productive, and do one’s best for customers or coworkers. Unsurprisingly, these workers find it hard to climb the ladder of opportunity that this country has historically provided.
My research shows that the vicious cycle for low-wage workers is also a vicious cycle for their companies. Poor attendance and high turnover lead to operational problems that undermine sales and profits. Reduced profits, in turn, prevent companies from investing more in their workers, causing yet more instability for the companies and their workers. I’ve observed racial tensions to be worse at companies operating in this vicious cycle. People feel less respect and treat others, including customers, with less respect. Managers are busy fighting fires rather than leading their people. I’ve also observed once-successful companies go out of business at least partly on account of this vicious cycle.
If there ever was a lose-lose for workers and companies, bad jobs with low wages are it.
What we often fail to see is that wages are not just a neutral market valuation of what a job is worth. The wages themselves affect the quality of that work and therefore the worker’s productive value and career prospects. Put a capable person in a position where she must work two low-wage jobs — with uncertainty about her schedule, fear that she will not make rent, and little or no support from management to do a really good job — and she will likely resign herself to mediocre or poor performance.
Here are six steps that business leaders can and should take to address these inequities.
1. Recognize the problem and make a collective pledge to address it. When it comes to the role of businesses as employers, the focus tends to be on upskilling workers through education and public-private partnerships. That’s important, but the inherent assumption that the problem is on the supply side (too few qualified workers) misses the more important problem on the demand side (too few good jobs).
Business leaders need to publicly recognize the problem on the demand side and pledge to create well-paying jobs, without which we cannot maintain and expand a strong middle class. Part of what drove leaders of companies such as GM, GE, Coca-Cola, and Kodak to commit so effectively to creating well-paying jobs during World War II was fear of socialism and communism.
The stakes for capitalism are similar now. Many Americans are losing faith in capitalism and market economies, believing that capitalism inherently drives not only inequality, but also injustice. Even before the pandemic, 70% of Americans believed the economic system was rigged against them.
2. Commit to raise low wages. All large companies should calculate the distribution of annual frontline take-home pay and consider the budgetary needs of their workers. (With so many working part-time or irregular hours, the hourly wage itself doesn’t tell the story.) How many are below the living wage in their area? (When we at the nonprofit Good Jobs Institute share these data with executives, they are often surprised.) How many are single parents or students? How many families have two wage-earners? How many full-timers rely on welfare?
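The calculation suggested above can be pictured as a minimal sketch: bucket frontline take-home pay against each worker’s local living wage and count who falls below it. All figures and field layouts here are invented for illustration; nothing comes from real company data.

```python
# Hypothetical sketch of the take-home-pay analysis suggested above.
# All figures are invented for illustration, not drawn from real data.

# (annual take-home pay, local living wage) for each frontline worker
workers = [
    (21_000, 32_000),
    (35_000, 32_000),
    (18_000, 28_000),
    (29_000, 28_000),
    (41_000, 32_000),
]

below = sum(1 for pay, living_wage in workers if pay < living_wage)
share = below / len(workers)
print(f"{below} of {len(workers)} frontline workers ({share:.0%}) "
      "earn less than their local living wage")
```

In practice the same tally would be run over payroll data, broken out the way the text suggests: by hours worked, household type, and reliance on public assistance.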
With these data, a company can make commitments that are reasonable but bold — the most it can manage, not the least it can get away with. If profit margins are high and low-wage workers are only a small part of costs, doing the right thing is easy. That’s why, in 2015, after calculating the potential benefits from higher wages (e.g., lower turnover, better customer service from employees who can focus on the job), Aetna could raise its minimum wage from $12 an hour to $16 an hour. Last March, Bank of America raised its minimum wage to $20 an hour. For others, like Walmart, raising wages requires systemic change. But it can be done and, if done right, can help both employees and employers win.
During our current economic crisis, such a change may feel impossible. But these commitments can be made over time. What matters is that companies consider cost of living, not just what others pay. Even more effective would be for large companies (or industry associations such as the National Retail Federation) to encourage other firms in their communities/industries to follow their lead. New research shows that raising wages is less competitively costly than companies may realize because once a large company raises wages, others in the area follow suit.
3. Provide career paths for low-wage workers. Companies such as Costco and QuikTrip — which offer careers, not merely jobs — aim for all frontline managers to be promoted from within. Committing to such a policy forces them to invest in the development of their workers. Equally important is factoring gender and race into promotion decisions. In retail, for example, 18% of cashiers and 12% of salespeople but only 10% of frontline supervisors are Black.
4. Disclose pay and turnover data. Revealing annual turnover and the annual take-home pay distribution — not just the average or median — by race and gender might be uncomfortable but will help drive conversations with the board and investors. (Intel is one company that discloses annual take-home pay buckets by race and gender.) Such transparency will show who’s operating in the vicious cycle described above. Quantitative benchmarks can help create peer pressure and enable customers, communities, and investors to track change at different companies. Benchmarks can also help people such as my students, many of whom seek employers that take care of their workers, decide where to take their considerable talents when they graduate.
5. Involve workers in technology decisions that affect their work. Too often, the people who do the work are excluded from such decisions — to the detriment of companies. Workers with frontline knowledge can help companies be smarter about choosing, deploying, and scaling technologies. They can also help identify technologies that would complement people rather than “so-so” technologies that replace people but don’t even improve productivity. Companies that commit to Step 2 can justify higher wages by making better use of the talent that they already have.
6. Drive public policies that improve workers’ well-being and the economy. Some business leaders are already recommending higher minimum wages. Benefits such as paid sick leave, smart-scheduling legislation that can improve stability for workers and companies, and government-sponsored child care for low-income families could also improve workers’ well-being and the economy. Changes to the tax code that favor investing in workers over automation would reduce incentives to invest in “so-so” technologies. Business leaders should be seen and heard advocating for these policies.
Even so, history shows that we can’t rely on business leaders alone. They need a context that pushes for good jobs. Apart from worker power and smart public policy, investors and business schools have important roles.
Investors. Since I started working with companies, I’ve seen how infrequently their leaders take a long-term view. Even when they accept the financial and competitive case for offering good jobs, some shy away from investing in their employees. Why? They fear a dip in profitability. “Anything that doesn’t yield a return in a year doesn’t make it,” I hear.
This short-term focus is driven, in part, by executives’ short tenures (the median CEO tenure in public companies was five years in 2017, the most recent data available) and, in part, by their fear that investors will punish them. Indeed, when Walmart began investing in its workers in 2014, its stock price took a hit. If investors were less quick to punish a company for “profit-draining” labor investments and more interested in really profit-draining employee turnover (and in the drivers of turnover such as take-home pay, schedule stability, and career paths), we’d see more companies prioritizing investment in people — and doing better.
Business schools. For decades, schools taught that the corporation’s duty is to maximize shareholder value. It is time to teach how to make a decent profit decently. When it comes to exemplary CEOs, we should celebrate those like Costco’s Jim Sinegal, who created a competitive business without sacrificing the financial well-being of frontline employees.
We should also provide students with an opportunity to develop compassion for the people they will lead. While teaching at MIT’s Sloan School of Management and Harvard Business School, I have seen plenty of programs that allow students to spend time with business and political leaders but none that encourages them to spend time in a low-wage job. That would help them see past the stigmas of low-wage work and workers, witness how poor corporate decisions get in the way of delivering value to customers and good jobs to employees, and cultivate respect for the challenging work that takes place across all levels of the company.
The problem is vast, but the first step is for the job creators to reevaluate their assumptions about the jobs they create. It is easy to create a job that treats people like robots and justify it with the assumption that workers lack skills and abilities. But as we have seen, that attitude sows social unrest and puts a ceiling on the prospects of hardworking Americans and their communities.
If we are to live in a just world, we need to remember what Martin Luther King, Jr. told the sanitation workers in a crowded Memphis church in 1968: “So often we overlook the work and the significance of those who are not in professional jobs, of those who are not in the so-called big jobs. But let me say to you tonight, that whenever you are engaged in work that serves humanity and is for the building of humanity, it has dignity, and it has worth.”
Idea in Brief
Racial discrimination—defined as differential evaluation or treatment based solely on race, regardless of intent—remains prevalent in organizations and occurs far more frequently than most White people suspect.
Intractable as it seems, racism in the workplace can be effectively addressed. Because organizations are autonomous entities that afford leaders a high level of control over norms and policies, they are ideal places to promote racial equity.
The Way Forward
Effective interventions move through stages, from understanding the underlying condition, to developing genuine concern, to focusing on correction.
Intractable as it seems, the problem of racism in the workplace can be effectively addressed with the right information, incentives, and investment. Corporate leaders may not be able to change the world, but they can certainly change their world. Organizations are relatively small, autonomous entities that afford leaders a high level of control over cultural norms and procedural rules, making them ideal places to develop policies and practices that promote racial equity. In this article, I’ll offer a practical road map for making profound and sustainable progress toward that goal.
I’ve devoted much of my academic career to the study of diversity, leadership, and social justice, and over the years I’ve consulted on these topics with scores of Fortune 500 companies, federal agencies, nonprofits, and municipalities. Often, these organizations have called me in because they are in crisis and suffering—they just want a quick fix to stop the pain. But that’s akin to asking a physician to write a prescription without first understanding the patient’s underlying health condition. Enduring, long-term solutions usually require more than just a pill. Organizations and societies alike must resist the impulse to seek immediate relief for the symptoms, and instead focus on the disease. Otherwise they run the risk of a recurring ailment.
To effectively address racism in your organization, it’s important to first build consensus around whether there is a problem (most likely, there is) and, if so, what it is and where it comes from. If many of your employees do not believe that racism against people of color exists in the organization, or if feedback coming through various communication channels shows that Whites feel they are the real victims of discrimination, then diversity initiatives will be perceived as the problem, not the solution. This is one of the reasons such initiatives are frequently met with resentment and resistance, often by mid-level managers. Beliefs, not reality, are what determine how employees respond to efforts taken to increase equity. So, the first step is getting everyone on the same page as to what the reality is and why it is a problem for the organization.
But there’s much more to the job than just raising awareness. Effective interventions involve many stages, which I’ve incorporated into a model I call PRESS. The stages, which organizations must move through sequentially, are: (1) Problem awareness, (2) Root-cause analysis, (3) Empathy, or level of concern about the problem and the people it afflicts, (4) Strategies for addressing the problem, and (5) Sacrifice, or willingness to invest the time, energy, and resources necessary for strategy implementation. Organizations going through these stages move from understanding the underlying condition, to developing genuine concern, to focusing on correction.
Let’s now have a closer look at these stages and examine how each informs, at a practical level, the process of working toward racial equity.
To a lot of people, it may seem obvious that racism continues to oppress people of color. Yet research consistently reveals that many Whites don’t see it that way. For example, a 2011 study by Michael Norton and Sam Sommers found that on the whole, Whites in the United States believe that systemic anti-Black racism has steadily decreased over the past 50 years—and that systemic anti-White racism (an implausibility in the United States) has steadily increased over the same time frame. The result: As a group, Whites believe that there is more racism against them than against Blacks. Other recent surveys echo Sommers and Norton’s findings, one revealing, for example, that 57% of all Whites and 66% of working-class Whites consider discrimination against Whites to be as big a problem as discrimination against Blacks and other people of color. These beliefs are important, because they can undermine an organization’s efforts to address racism by weakening support for diversity policies. (Interestingly, surveys taken since the George Floyd murder indicate an increase in perceptions of systemic racism among Whites. But it’s too soon to tell whether those surveys reflect a permanent shift or a temporary uptick in awareness.)
Even managers who recognize racism in society often fail to see it in their own organizations. For example, one senior executive told me, “We don’t have any discriminatory policies in our company.” However, it is important to recognize that even seemingly “race neutral” policies can enable discrimination. Other executives point to their organizations’ commitment to diversity as evidence for the absence of racial discrimination. “Our firm really values diversity and making this a welcoming and inclusive place for everybody to work,” another leader remarked.
The real challenge for organizations is not figuring out “What can we do?” but rather “Are we willing to do it?”
Despite these beliefs, many studies in the 21st century have documented that racial discrimination is prevalent in the workplace, and that organizations with strong commitments to diversity are no less likely to discriminate. In fact, research by Cheryl Kaiser and colleagues has demonstrated that the presence of diversity values and structures can actually make matters worse, by lulling an organization into complacency and making Blacks and ethnic minorities more likely to be ignored or harshly treated when they raise valid concerns about racism.
Many White people deny the existence of racism against people of color because they assume that racism is defined by deliberate actions motivated by malice and hatred. However, racism can occur without conscious awareness or intent. When defined simply as differential evaluation or treatment based solely on race, regardless of intent, racism occurs far more frequently than most White people suspect. Let’s look at a few examples.
In a well-publicized résumé study by the economists Marianne Bertrand and Sendhil Mullainathan, applicants with White-sounding names (such as Emily Walsh) received, on average, 50% more callbacks for interviews than equally qualified applicants with Black-sounding names (such as Lakisha Washington). The researchers estimated that just being White conferred the same benefit as an additional eight years of work experience—a dramatic head start over equally qualified Black candidates.
Research shows that people of color are well-aware of these discriminatory tendencies and sometimes try to counteract them by masking their race. A 2016 study by Sonia Kang and colleagues found that 31% of the Black professionals and 40% of the Asian professionals they interviewed admitted to “Whitening” their résumés, either by adopting a less “ethnic” name or omitting extracurricular experiences (a college club membership, for instance) that might reveal their racial identities.
These findings raise another question: Does Whitening a résumé actually benefit Black and Asian applicants, or does it disadvantage them when applying to organizations seeking to increase diversity? In a follow-up experiment, Kang and her colleagues sent Whitened and non-Whitened résumés of Black or Asian applicants to 1,600 real-world job postings across various industries and geographical areas in the United States. Half of these job postings were from companies that expressed a strong desire to seek diverse candidates. They found that Whitening résumés by altering names and extracurricular experiences increased the callback rate from 10% to nearly 26% for Blacks, and from about 12% to 21% for Asians. What’s particularly unsettling is that a company’s stated commitment to diversity failed to diminish this preference for Whitened résumés.
A Road Map for Racial Equity
Organizations move through these stages sequentially, first establishing an understanding of the underlying condition, then developing genuine concern, and finally focusing on correcting the problem.
This is a very small sample of the many studies that have confirmed the prevalence of racism in the workplace, all of which underscore the fact that people’s beliefs and biases must be recognized and addressed as the first step toward progress. Although some leaders acknowledge systemic racism in their organizations and can skip step one, many may need to be convinced that racism persists, despite their “race neutral” policies or pro-diversity statements.
Understanding an ailment’s roots is critical to choosing the best remedy. Racism can have many psychological sources—cognitive biases, personality characteristics, ideological worldviews, psychological insecurity, perceived threat, or a need for power and ego enhancement. But most racism is the result of structural factors—established laws, institutional practices, and cultural norms. Many of these causes do not involve malicious intent. Nonetheless, managers often misattribute workplace discrimination to the character of individual actors—the so-called bad apples—rather than to broader structural factors. As a result, they roll out trainings to “fix” employees while dedicating relatively little attention to what may be a toxic organizational culture, for example. It is much easier to pinpoint and blame individuals when problems arise. When police departments face crises related to racism, the knee-jerk response is to fire the officers involved or replace the police chief, rather than examining how the culture licenses, or even encourages, discriminatory behavior.
Appealing to circumstances beyond one’s control is another way to exonerate deeply embedded cultural or institutional practices that are responsible for racial disparities. For example, an oceanographic organization I worked with attributed its lack of racial diversity to an insurmountable pipeline problem. “There just aren’t any Black people out there studying the migration patterns of the humpback whale,” one leader commented. Most leaders were unaware of the National Association of Black Scuba Divers, an organization boasting thousands of members, or of Hampton University, a historically Black college on the Chesapeake Bay, which awards bachelor’s degrees in marine and environmental science. Both were entities that could source Black candidates for the job, especially given that the organization only needed to fill dozens, not thousands, of openings.
A Fortune 500 company I worked with cited similar pipeline problems. Closer examination revealed, however, that the real culprit was the culture-based practice of promoting leaders from within the organization—which already had low diversity—rather than conducting a broader industry-wide search when leadership positions became available. The larger lesson here is that an organization’s lack of diversity is often tied to inadequate recruitment efforts rather than an empty pipeline. Progress requires a deeper diagnosis of the routine practices that drive the outcomes leaders wish to change.
To help managers and employees understand how being embedded within a biased system can unwittingly influence outcomes and behaviors, I like to ask them to imagine being fish in a stream. In that stream, a current exerts force on everything in the water, moving it downstream. That current is analogous to systemic racism. If you do nothing—just float—the current will carry you along with it, whether you’re aware of it or not. If you actively discriminate by swimming with the current, you will be propelled faster. In both cases, the current takes you in the same direction. From this perspective, racism has less to do with what’s in your heart or mind and more to do with how your actions or inactions amplify or enable the systemic dynamics already in place.
Workplace discrimination often comes from well-educated, well-intentioned, open-minded, kindhearted people who are just floating along, severely underestimating the tug of the prevailing current on their actions, positions, and outcomes. Anti-racism requires swimming against that current, like a salmon making its way upstream. It demands much more effort, courage, and determination than simply going with the flow.
In short, organizations must be mindful of the “current,” or the structural dynamics that permeate the system, not just the “fish,” or individual actors that operate within it.
Once people are aware of the problem and its underlying causes, the next question is whether they care enough to do something about it. There is a difference between sympathy and empathy. Many White people experience sympathy, or pity, when they witness racism. But what’s more likely to lead to action in confronting the problem is empathy—experiencing the same hurt and anger that people of color are feeling. People of color want solidarity—and social justice—not sympathy, which simply quiets the symptoms while perpetuating the disease.
If your employees don’t believe that racism exists in the company, then diversity initiatives will be perceived as the problem, not the solution.
One way to increase empathy is through exposure and education. The video of George Floyd’s murder exposed people to the ugly reality of racism in a visceral, protracted, and undeniable way. Similarly, in the 1960s, northern Whites witnessed innocent Black protesters being beaten with batons and blasted with fire hoses on television. What best prompts people in an organization to register concern about racism in their midst, I’ve found, are the moments when their non-White coworkers share vivid, detailed accounts of the negative impact that racism has on their lives. Managers can raise awareness and empathy through psychologically safe listening sessions—for employees who want to share their experiences, without feeling obligated to do so—supplemented by education and experiences that provide historical and scientific evidence of the persistence of racism.
For example, I spoke with Mike Kaufmann, CEO of Cardinal Health—the 16th largest corporation in America—who credited a visit to the Equal Justice Initiative’s National Memorial for Peace and Justice, in Montgomery, Alabama, as a pivotal moment for the company. While diversity and inclusion initiatives have been a priority for Kaufmann and his leadership team for well over a decade, their focus and conversations related to racial inclusion increased significantly during 2019. As he expressed to me, “Some Americans think when slavery ended in the 1860s that African Americans have had an equal opportunity ever since. That’s just not true. Institutional systemic racism is still very much alive today; it’s never gone away.” Kaufmann is planning a comprehensive education program, which will include a trip for executives and other employees to visit the memorial, because he is convinced that the experience will change hearts, open eyes, and drive action and behavioral change.
Empathy is critical for making progress toward racial equity because it affects whether individuals or organizations take any action and if so, what kind of action they take. There are at least four ways to respond to racism: join in and add to the injury, ignore it and mind your own business, experience sympathy and bake cookies for the victim, or experience empathic outrage and take measures to promote equal justice. The personal values of individual employees and the core values of the organization are two factors that affect which actions are undertaken.
After the foundation has been laid, it’s finally time for the “what do we do about it” stage. Most actionable strategies for change address three distinct but interconnected categories: personal attitudes, informal cultural norms, and formal institutional policies.
To most effectively combat discrimination in the workplace, leaders should consider how they can run interventions on all three of these fronts simultaneously. Focusing only on one is likely to be ineffective and could even backfire. For example, implementing institutional diversity policies without any attempt to create buy-in from employees is likely to produce a backlash. Likewise, focusing just on changing attitudes without also establishing institutional policies that hold people accountable for their decisions and actions may generate little behavioral change among those who don’t agree with the policies. Establishing an anti-racist organizational culture, tied to core values and modeled by behavior from the CEO and other top leaders at the company, can influence both individual attitudes and institutional policies.
Just as there is no shortage of effective strategies for losing weight or promoting environmental sustainability, there are ample strategies for reducing racial bias at the individual, cultural, and institutional levels. The hard part is getting people to actually adopt them. Even the best strategies are worthless without implementation.
I’ll discuss how to increase commitment to execution in the final section. But before I do, I want to give a specific example of an institutional strategy that works. It comes from Massport, a public organization that owns Boston Logan International Airport and commercial lots worth billions of dollars. When its leaders decided they wanted to increase diversity and inclusion in real estate development in Boston’s booming Seaport District, they decided to leverage their land to do it. Massport’s leaders made formal changes to the selection criteria determining who is awarded lucrative contracts to build and operate hotels and other large commercial buildings on their parcels. In addition to evaluating three traditional criteria—the developer’s experience and financial capital, Massport’s revenue potential, and the project’s architectural design—they added a fourth criterion called “comprehensive diversity and inclusion,” which accounted for 25% of the proposal’s overall score, the same as the other three. This forced developers not only to think more deeply about how to create diversity but also to go out and do it. Similarly, organizations can integrate diversity and inclusion into managers’ scorecards for raises and promotions—if they think it’s important enough. I’ve found that the real barrier to diversity is not figuring out “What can we do?” but rather “Are we willing to do it?”
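Massport’s equal weighting reduces to simple arithmetic, which a short sketch can make concrete. The criterion names, ratings, and 0–100 scale below are invented for illustration; they are not Massport’s actual rubric.

```python
# Hypothetical sketch of Massport-style proposal scoring: four criteria,
# each worth 25% of the total. Names and ratings are illustrative only.

CRITERIA = [
    "experience_and_capital",
    "revenue_potential",
    "architectural_design",
    "diversity_and_inclusion",  # the added fourth criterion
]

def proposal_score(ratings):
    """Average of the four criterion ratings (each 0-100): with equal
    25% weights, the weighted sum reduces to a simple mean."""
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

bid = {
    "experience_and_capital": 90,
    "revenue_potential": 85,
    "architectural_design": 80,
    "diversity_and_inclusion": 40,  # a weak D&I plan drags the total down
}
print(proposal_score(bid))  # 73.75
```

The point of the structure is visible in the numbers: a developer strong on the three traditional criteria can no longer win without a credible diversity and inclusion plan.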
Many organizations that desire greater diversity, equity, and inclusion may not be willing to invest the time, energy, resources, and commitment necessary to make it happen. Actions are often inhibited by the assumption that achieving one desired goal requires sacrificing another desired goal. But that’s not always the case. Although nothing worth having is completely free, racial equity often costs less than people may assume. Seemingly conflicting goals or competing commitments are often relatively easy to reconcile—once the underlying assumptions have been identified.
As a society, are we sacrificing public safety and social order when police routinely treat people of color with compassion and respect? No. In fact, it’s possible that kinder policing will actually increase public safety. Famously, the city of Camden, New Jersey, witnessed a 40% drop in violent crime after it reformed its police department, in 2012, and put a much greater emphasis on community policing.
The assumptions of sacrifice have enormous implications for the hiring and promotion of diverse talent, for at least two reasons. First, people often assume that increasing diversity means sacrificing principles of fairness and merit, because it requires giving “special” favors to people of color rather than treating everyone the same. But picture the classic fence illustration: three people of equal height stand on uneven ground behind a fence, trying to watch a game. In the scenario on the left, each person receives one crate of the same size; in the scenario on the right, the crates are distributed according to need so that everyone can see over the fence. Which of the two scenarios appears more “fair,” the one on the left or the one on the right?
People often assume that fairness means treating everyone equally, or exactly the same—in this case, giving each person one crate of the same size. In reality, fairness requires treating people equitably—which may entail treating people differently, but in a way that makes sense. If you chose the scenario on the right, then you subscribe to the notion that fairness can require treating people differently in a sensible way.
Of course, what is “sensible” depends on the context and the perceiver. Does it make sense for someone with a physical disability to have a parking space closer to a building? Is it fair for new parents to have six weeks of paid leave to be able to care for their baby? Is it right to allow active-duty military personnel to board an airplane early to express gratitude for their service? My answer is yes to all three questions, but not everyone will agree. For this reason, equity presents a greater challenge to gaining consensus than equality. In the first panel of the fence scenario, everybody gets the same number of crates. That’s a simple solution. But is it fair?
In thinking about fairness in the context of American society, leaders must consider the unlevel playing fields and other barriers that exist—provided they are aware of systemic racism. They must also have the courage to make difficult or controversial calls. For example, it might make sense to have an employee resource group for Black employees but not White employees. Fair outcomes may require a process of treating people differently. To be clear, different treatment is not the same as “special” treatment—the latter is tied to favoritism, not equity.
One leader who understands the difference is Maria Klawe, the president of Harvey Mudd College. She concluded that the only way to increase the representation of women in computer science was to treat men and women differently. Men and women tended to have different levels of computing experience prior to entering college—different levels of experience, not intelligence or potential. Society treats boys and girls differently throughout secondary school—encouraging STEM subjects for boys but liberal arts subjects for girls, creating gaps in experience. To compensate for this gap created by bias in society, the college designed two introductory computer-science tracks—one for students with no computing experience and one for students with some computing experience in high school. The no-experience course tended to be 50% women whereas the some-experience course was predominantly men. By the end of the semester, the students in both courses were on par with one another. Through this and other equity-based interventions, Klawe and her team were able to dramatically increase the representation of women and minority computer-science majors and graduates.
The second assumption many people have is that increasing diversity requires sacrificing high quality and standards. Consider again the fence scenario. All three people have the same height or “potential.” What varies is the level of the field and the fence—apt metaphors for privilege and discrimination, respectively. Because the person on the far left has lower barriers to access, does it make sense to treat the other two people differently to compensate? Do we have an obligation to do so when differences in outcomes are caused by the field and the fence, not someone’s height? Maria Klawe sure thought so. How much human potential is left unrealized within organizations because we do not recognize the barriers that exist?
Finally, it’s important to understand that quality is difficult to measure with precision. There is no test, instrument, survey, or interviewing technique that will enable you to invariably predict who the “best candidate” will be. The NFL draft illustrates the difficulty of predicting future job performance: Despite large scouting departments, plentiful video of prior performance, and extensive tryouts, almost half of first-round picks turn out to be busts. The same holds for organizations. Research by Sheldon Zedeck and colleagues on corporate hiring processes has found that even the best screening or aptitude tests predict only 25% of intended outcomes, and that candidate quality is better reflected by “statistical bands” than by a strict rank ordering. This means that there may be absolutely no difference in quality between the candidate who scored first out of 50 people and the candidate who scored eighth.
The big takeaway here is that “sacrifice” may actually involve giving up very little. If we look at people within a band of potential and choose the diverse candidate (for example, number eight) over the top scorer, we haven’t sacrificed quality at all—statistically speaking—even if people’s intuitions lead them to conclude otherwise.
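The banding logic can be made concrete with a small sketch. The band width and scores below are invented for illustration; they are not Zedeck’s actual method, only the general idea that scores within a band are treated as statistically indistinguishable.

```python
# Illustrative "statistical band" selection: every candidate within
# band_width of the top score is treated as equally qualified.
# The band width of 5 points is an invented example value.

def top_band(candidates, band_width=5):
    """Return the names of all candidates whose score falls within
    band_width of the highest score."""
    best = max(score for _, score in candidates)
    return [name for name, score in candidates if best - score <= band_width]

candidates = [("A", 94), ("B", 93), ("C", 91), ("D", 90), ("E", 82)]
print(top_band(candidates))  # ['A', 'B', 'C', 'D']
```

Under this view, choosing candidate D over candidate A is not a quality sacrifice: the measurement simply cannot distinguish them.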
Managers should abandon the notion that a “best candidate” must be found. That kind of search amounts to chasing unicorns. Instead, they should focus on hiring well-qualified people who show good promise, and then should invest time, effort, and resources into helping them reach their potential.
The tragedies and protests we have witnessed this year across the United States have increased public awareness and concern about racism as a persistent problem in our society. The question we now must confront is whether, as a nation, we are willing to do the hard work necessary to change widespread attitudes, assumptions, policies, and practices. Unlike society at large, the workplace very often requires contact and cooperation among people from different racial, ethnic, and cultural backgrounds. Therefore, leaders should host open and candid conversations about how their organizations are doing at each of the five stages of the model—and use their power to press for profound and perennial progress.

A version of this article appeared in the September–October 2020 issue of Harvard Business Review.
We need a search engine/agent/service that analyzes and categorizes content and ranks and presents it based upon the veracity of what it contains. Today, Google ranks content based upon what’s popular rather than what’s true. Someone recently pointed out that if they googled “Sandy Hook,” things like this would pop up at the top of the search: https://nymag.com/intelligencer/2016/09/the-sandy-hook-hoax.html. Is that what we want? Don’t think so. Content could instead be searched, analyzed, and categorized according to the truth or veracity of its content and authors. Presenting what’s true vs. what’s popular has never been more important in fighting the disinformation so prevalent on the Internet and in society. Surely the technical means exist to do this type of analysis, categorization, and presentation for everyone. We need to figure out how to do it and then do it. Just one viewpoint.
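As a rough sketch of what the commenter proposes, a re-ranker could blend a popularity signal with a veracity score. The scores, the 70/30 blend, and the example URLs below are entirely hypothetical; producing a trustworthy veracity score is the genuinely hard part this sketch takes for granted.

```python
# Hypothetical veracity-weighted re-ranking: results are sorted by a
# weighted mix of an (assumed) veracity score and a popularity score.
# All numbers and the 0.7 weight are invented for illustration.

def rerank(results, veracity_weight=0.7):
    """Sort results by a blend of veracity and popularity (both 0-1),
    favoring veracity over popularity."""
    def blended(r):
        return (veracity_weight * r["veracity"]
                + (1 - veracity_weight) * r["popularity"])
    return sorted(results, key=blended, reverse=True)

results = [
    {"url": "hoax-site.example", "popularity": 0.9, "veracity": 0.1},
    {"url": "factcheck.example", "popularity": 0.4, "veracity": 0.95},
]
print([r["url"] for r in rerank(results)])
# With veracity weighted at 70%, the fact-check outranks the popular hoax.
```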
A lack of adequate staff training and communication is disrupting the way the VA is able to productively use its variety of health information exchanges.
– Editor’s Note 8/17/2020: This article has been updated with a statement from DirectTrust.
A recent report from the Office of Inspector General (OIG) found training challenges, the need for increased community partners, the use of community coordinators, and technology issues that need to be addressed to enhance the Department of Veterans Affairs’ (VA) ability to effectively utilize its health information exchanges and the ability to exchange patient data.
However, OIG found that respondents who utilized either VA Exchange or VA Direct cited successes and said VA goals would be more attainable with more community participation.
Patient data exchange and interoperability are crucial for the VA to enhance patient care. Not only does the Veterans Health Information Exchange (VHIE) program use VA Exchange and VA Direct, which is directly connected to DirectTrust, but it also has community partnerships to promote patient data exchange and accessibility.
OIG interviewed and surveyed staff at the 48 Level 2 and 3 Veterans Health Administration (VHA) facilities. The group also interviewed the VHIE Program Office, while meeting with the Office of Information Technology, Office of Community Care, Office of Rural Health, Cerner Corporation, and two unnamed state HIEs to gain a variety of differing perspectives.
According to the VHIE Program Office, 140 VA facilities have access to VA Exchange and VA Direct, but only 28 have implemented VA Direct. Facilities that had not implemented VA Direct either were not adequately trained by DirectTrust, did not have community partners that used DirectTrust, or were using other HIE options.
“Expansion of VA Direct usage to all facilities would increase the instances of health information sharing and improve the timeliness of health information exchange while efforts continue with development of community partnerships through VA Exchange,” wrote OIG.
“The VA OIG report provides valuable insight and recommendations on how to enhance the Veterans Health Information Exchange program,” DirectTrust said in a statement.
“DirectTrust is a volunteer-driven membership organization and standards body serving as custodian of the Direct Standard, the foundation of Direct Secure Messaging. As such, DirectTrust is not set up to provide end-user training. Typically, vendors provide training on how to use Direct within their platform, as Direct Secure Messaging is implemented differently across vendor platforms. We’re pleased to report that the DirectTrust EHR Roundtable, in which the VA participates, recognizes the variability in utilization across vendors, and is creating ‘best practices’ guidelines to advance the usability and utilization of Direct Secure Messaging.”
“The VA provides a critical service to our veterans, and coordination of care with community partners is extremely important. DirectTrust values our partnership with the VA, and we look forward to continued expansion and improvement of health information exchange and interoperability throughout all of their facilities and with their community partners.”
But according to additional survey responses and interviews from the 48 VA facilities, OIG found 46 facilities use either VA Exchange or VA Direct, while two facilities do not use either. And of those 48 facilities, 22 said they exchange patient data by mail, fax machine, or scanner.
Respondents noted additional training, an increase in community partners, and an overall better understanding of how to use the health information exchanges as common ways to defeat these HIE challenges.
“In addition, facilities reported technology challenges to viewing community health information through VA Exchange, including the dual sign-on requirement for VHA providers to first sign into the electronic health record and then sign into the Joint Legacy Viewer (JLV) to access community partner patient information,” wrote OIG.
“The JLV data quality was not ideal, information naming and access was not user friendly, and facilities reported a cumbersome process that resulted in delays in finding needed information.”
Currently, VA has two separate contracts that establish community coordination for VHIE, and OIG found 56 community coordinators who work to enhance infrastructure, outreach, and training.
But while there are 56 community coordinators, the organization found that coordinator engagement varied widely, from high to little or no participation. Respondents also noted that when a coordinator leaves the position, communication gaps and training issues typically follow.
“With the addition of more training, communication, and future planned technological changes, VHA could more effectively streamline the continuity of care received by veterans,” OIG wrote.
“Electronic Health Records Modernization should alleviate some of the technology challenges currently experienced with the use of VHIE,” OIG continued. “Cerner reported the implementation of Millennium/Power Chart would eliminate the need for dual sign-in to review community care documents and allow for exchange accesses between VHA, the Department of Defense, and community providers.”
In late April, OIG found major patient safety issues and EHR capability problems centered around the new Electronic Health Record Modernization (EHRM) system and a recent POLITICO report stated VA leaders have acknowledged those issues.
“The OIG made four recommendations to the Under Secretary for Health related to the need for increased utilization of VA Direct, education for staff and veterans on VA Exchange and VA Direct, expansion of community partnerships, and use of contract VHIE community coordinators,” wrote OIG.
OIG said it would follow up with VA until the recommendations are completed or acknowledged.
The Joint Common Foundation aims to help the Department to standardize and secure its data and make it easier to find.
The “factory” that pumps out AI tools for the Pentagon is about to get a new tool of its own, one that leaders of the Joint Artificial Intelligence Center, or JAIC, hope will streamline their production and boost output.
The JAIC has awarded a $100 million contract to Deloitte Consulting to create the Joint Common Foundation, or JCF — basically, a tool to help organize the factory, secure it against intruders, direct its workers, and test its products.
“The Joint Common Foundation will provide an AI development environment to test, validate, and field AI capabilities at scale across the Department of Defense,” the Center said in a statement.
Moreover, the JCF is supposed to help meld various existing programming setups and ease the tasks of building tools that can work across multiple service branches.
“What we currently have is a bunch of disparate AI and ML development environments, duplicative and siloed. A lot of these products in these environments are not interoperable,” said Col. Sang Han, the Center’s infrastructure chief.
The Joint Common Foundation will also catalogue all the data that the Defense Department has that a programmer might need to train an AI system on.
“We currently don’t have a central marketplace. We don’t have a central…repository or catalogue. That’s what we’re going to do so developers can easily search for data, easily search for AI source code, AI and ML [machine learning] models and products in the DoD,” said Han. The goal is to create a place where “developers can easily come in, pick out the part that they need to build their high-performance application.”
The data catalogue is particularly important. The Defense Department produces more data than a Fortune 500 business and does so in a dizzying array of formats, from video and voice feeds to images to spreadsheets and text, all classified at different levels. But the center exists, in part, to make sure that AI innovations that occur in one part of DoD can be used elsewhere (so long as that use is ethically justified). Some of that data is what Don Bitner, strategy chief for the JAIC’s Joint Common Foundation and Infrastructure Team, describes as “value-add” data. “We’ve done something to it, we’ve ingested it. We’ve probably pared it down,” Bitner said.
The JCF allows the Department to take that data and put it into a place where anyone, with approval, can look it up and find it. “Being able to do that type of work and have standardized training sets to at least let components and services start the work of natural language processing in a chat room alleviates a lot of the upfront work in presenting data.”
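A minimal sketch can suggest what the searchable central catalogue Han describes might look like. The entry fields, names, and tags below are invented for illustration; a real JCF catalogue would also handle classification levels and access approval.

```python
# Hypothetical sketch of a central data/model catalogue: tagged entries
# that developers can search by keyword. All entries are invented.

catalog = [
    {"name": "chat-room-nlp-train", "kind": "dataset",
     "tags": ["nlp", "chat", "labeled"]},
    {"name": "vehicle-detector-v2", "kind": "model",
     "tags": ["vision", "detection"]},
]

def search(catalog, keyword):
    """Return entries whose name or tags mention the keyword."""
    keyword = keyword.lower()
    return [e for e in catalog
            if keyword in e["name"].lower() or keyword in e["tags"]]

print([e["name"] for e in search(catalog, "nlp")])  # ['chat-room-nlp-train']
```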
JAIC officials say that will allow the center to scale up the delivery of new AI tools to the services.
Ethics for urgency means making ethics a core part of AI rather than an afterthought, says Jess Whittlestone.
Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics for urgency.
For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.
Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.
This interview has been edited for length and clarity.
Why do we need a new kind of ethics for AI?
With this pandemic we’re suddenly in a situation where people are really talking about whether AI could be useful, whether it could save lives. But the crisis has made it clear that we don’t have robust enough ethics procedures for AI to be deployed safely, and certainly not ones that can be implemented quickly.
What’s wrong with the ethics we have?
I spent the last couple of years reviewing AI ethics initiatives, looking at their limitations and asking what else we need. Compared to something like biomedical ethics, the ethics we have for AI isn’t very practical. It focuses too much on high-level principles. We can all agree that AI should be used for good. But what does that really mean? And what happens when high-level principles come into conflict?
For example, AI has the potential to save lives but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven’t figured out how to deal with the inevitable disagreements.
AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions.
But ethics needs to be proactive and prepare for what could go wrong, not what has gone wrong already. Obviously, we can’t predict the future. But as these systems become more powerful and get used in more high-stakes domains, the risks will get bigger.
Do companies still need to hire a large contingent of data scientists to build machine learning models or can AutoML reduce the demand for this elusive talent?
In recent years, as the promise of artificial intelligence (AI) crystallized across industries, organizations revamped their talent strategies to gain the skills necessary to deploy and scale AI systems. They hired legions of data scientists and other data experts to build AI applications, trained analytics translators to connect the business and technical realms, and upskilled frontline staff to use AI applications effectively.
One role in particular, the data scientist, has been especially difficult for leaders to fill as competition for this elusive talent has increased. Last year, employment-related search engine Indeed.com reported that job postings on its site for data scientists had more than tripled since December 2013. McKinsey Global Institute research has also highlighted the talent shortage and the potential for hundreds of thousands of positions to go unfilled.
Incumbent companies found it especially hard to compete with start-ups and tech giants such as Google to attract or retain the best practicing data scientists and the newest crop of graduates. One multinational retail conglomerate, for example, put in place a highly attractive package last year, with education perks and salaries up to 20 percent higher than market rates, to attract the 30-plus data scientists it needed to support its strategic road map of priority AI use cases.
Certainly, some of this competition may soften as tech start-ups struggle to survive in the wake of the COVID-19 crisis, making it somewhat easier for incumbents to acquire these hard-to-get skills. But there are also new tools that have the potential to fill the data-science talent gap and increase the efficiency of analytics teams. Automated machine learning (ML) tools, commonly called AutoML, are designed to automate many steps in developing machine learning models. Business experts armed with AutoML can build some types of models that once would have needed a trained data scientist.
As one might imagine, there’s a great deal of discussion around what can or should be automated when it comes to model development. However, one thing is clear: the evolution of AutoML tools is driving a radically new way of thinking about data science, expanding its bench to include business experts with extensive domain knowledge, basic data-science skills or the willingness to learn them, and AutoML training, rather than solely filling the team with experienced data scientists.
To stay competitive, we believe companies will be best served by not putting all their resources into the fight for sparse technical talent, but instead focusing at least part of their attention on building up their troop of AutoML practitioners, who will become a substantial proportion of the talent pool for the next decade.
How AutoML tools change the data-science game
To understand this shift in AI talent needs, it’s helpful to grasp at a high level how models—the basic building blocks of AI systems—are created and where data scientists spend most of their time (exhibit).
There are typically six broad steps in the model-development workflow:
Understanding the business challenge and translating it into a mathematical one. This is arguably one of the most crucial steps, as the decisions data scientists make here (for example, how they account for the interplay of pricing and demand in an AI-driven price-optimization system) can determine the performance and ultimate success of the model.
Understanding the data, including assessing what data are available to support the business goal and the feasibility of leveraging that data to fuel an effective analytical model for the job.
Preparing the data, including cleansing the data and identifying the most important features. For example, average operating temperature of equipment and time between maintenance would be key features for helping to predict when maintenance is needed.
Developing the models using programming languages such as R and Python by either leveraging one of the many readily available algorithms on open-source platforms or, in much rarer instances, developing a new tailored approach for the problem at hand.
Testing and fine-tuning models for performance in meeting the original business goals as well as to address any risks, such as bias, fairness, production readiness, and so on.
Deploying the new models into production, embedding them into business and decision-making workflows, and monitoring their performance, making updates as needed.
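The six steps above can be sketched as a pipeline skeleton. Every function here is a trivial, invented stand-in, purely to show the shape of the workflow rather than real modeling code; in practice, step 4 alone would involve real algorithms and step 6 would run in production infrastructure.

```python
# Toy walk-through of the six-step model-development workflow.
# All functions are invented stand-ins for illustration only.

def translate_to_math(problem):          # 1. frame the business problem
    return {"target": "churn"}

def prepare_data(rows):                  # 2-3. assess, cleanse, select features
    return [r for r in rows if r.get("tenure") is not None]

def fit_model(rows, objective):          # 4. develop the model (toy rule)
    return lambda r: r["tenure"] < 6     # short-tenure customers churn

def tune_and_test(model, rows):          # 5. test & fine-tune against outcomes
    hits = sum(model(r) == r["churned"] for r in rows)
    return model, hits / len(rows)

rows = [{"tenure": 2, "churned": True}, {"tenure": 24, "churned": False}]
objective = translate_to_math("reduce customer churn")
model, acc = tune_and_test(fit_model(prepare_data(rows), objective), rows)
print(acc)  # 1.0 on this toy data; step 6 (deploy & monitor) would follow
```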
Many organizations have found that 60 to 80 percent of a data scientist’s time is spent preparing the data for modeling. Once the initial model is built, only a fraction of his or her time—4 percent, according to some analyses—is spent on testing and tuning code. In essence, tuning model parameters has become a commodity, and performance is driven by data selection and preparation.
The field of AutoML aims to automate all data preparation, as well as modeling and tuning steps, so that manual technical work is no longer required. While these tools don’t automate everything yet, they are currently able to produce machine learning models that perform well enough to deliver returns. In the telecom industry, for example, some companies have successfully leveraged AutoML to build profitable churn-management models that predict with sufficient accuracy which customers have a high risk of canceling their contracts.
It’s important to note here that the push to eliminate manual data tasks isn’t new. Today, most machine learning models, including powerful ones such as deep learning, are already fully integrated into programming languages, meaning that data scientists can apply these techniques using very little code. For example, one energy company was able, once it had prepared the data, to build a model that accurately predicted customer cancellations by applying just one line of code. An active and growing open-source community also provides “snippets” of code that data scientists can copy and paste into their models to make the data-preparation and modeling part of their work easier than ever.
It is unclear how far AutoML capabilities will go in automating modeling tasks; complete automation still seems far away. However, it seems certain that these capabilities will make data science ever more accessible to business experts, and, in some cases, business-domain understanding will enrich the quality of many models more than the technical skills of a data scientist.
We already see the transition happening: state-of-the-art tools are enabling AutoML practitioners to build reasonably high-performing ML pipelines that include all steps—from reading the data to tuning the parameters—without substantial knowledge of machine learning or statistics. One North American retailer, for example, retrained several hundred employees in its business-intelligence team to use an off-the-shelf AutoML platform to perform customer-segmentation tasks that were previously carried out by highly trained data scientists. The move has enabled the company to fill the talent gap between basic business-intelligence functions and very complex ML modeling tasks and save hundreds of thousands of dollars in data preparation.
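At its core, what an AutoML platform automates can be pictured as a search over candidate pipelines. The toy loop below mimics that idea with scikit-learn; it is a schematic of the concept only, not the API of any actual AutoML product, and real platforms also search over feature preparation and hyperparameters.

```python
# Toy model-selection loop mimicking what AutoML tools automate at much
# larger scale; this sketch only searches over model families.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=2)

candidates = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(max_depth=5),
    RandomForestClassifier(n_estimators=50, random_state=2),
]

# Score each candidate with cross-validation and keep the best one --
# the search an AutoML user triggers without writing this loop at all.
best_model = max(candidates, key=lambda m: cross_val_score(m, X, y, cv=3).mean())
best_model.fit(X, y)
```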
Certainly, not all data-science challenges can be solved using AutoML tools. At present, the technology is best suited to streamlining the development of common forecasting tasks, where the goal is to predict an outcome, given a few metrics, and the use of black-box models is permissible. Models that require statistical expertise to ensure fairness or build trust—for example, customer-engagement models that help salespeople understand what a prospect is likely to buy and why—still require the expertise of trained data scientists.
The impact on hiring strategies
Given the current limitations of AutoML tools, we don’t foresee demand for substantial, functional data-science expertise going away anytime soon. Over the long term, purely technical data scientists will still be needed, but in far fewer numbers than most currently predict. We estimate that over the next five years, demand for AutoML practitioners is likely to be twice as high as demand for data scientists as companies build out their talent strategies with both levels of expertise:
AutoML practitioners, such as biochemists in pharma research, will be able to perform simpler data-science tasks.
Data scientists with the statistical expertise to understand which tasks can safely be automated without risk will perform highly specialized tasks that can’t be automated, such as developing new algorithms or optimizing accuracy down to the last few percentage points.
How to get started
Where should organizations begin to rethink their data-science talent needs? We recommend companies take the following steps.
Reassess your requirements
The distinction between tasks that can be left to AutoML practitioners and those that require data scientists with deep statistical expertise is not trivial. It requires experienced analytics practitioners to take stock of all the initiatives on the AI road map and triage them based on the complexity of the data and modeling techniques and the necessary level of predictive accuracy. We find the following questions can serve as a useful guide to determine how to divvy up the work on any given task:
Is this a nonstandard data-science task as opposed to a standard predictive task, such as classification or regression?
Will we need to use rich and complex data to solve the business problem?
Is there potential bias in the data, such as in the case of a resume-screening model that may unintentionally reflect historical prejudices?
Will the problem likely require deeper understanding of statistical methods, such as causal inference?
Would a slight difference in model performance (for example, a 1 to 2 percent bump in predictive accuracy) significantly influence the value of the model?
To handle tasks for which the answer to any of these questions is “yes,” the organization will most certainly need highly trained data scientists in its talent mix.
Upskill domain experts
The best way to get started with AutoML tools is to train your existing business experts rather than recruit new hires. Training should cover both the use of AutoML tools and the fundamentals of data science. For example, business experts should understand how common modeling techniques work, what form of data (numeric or text fields) they require, and what patterns the data can (and cannot) reveal. To build out its AutoML team, a manufacturing company piloted a capability-building program for approximately 200 process engineers and line managers. The program consisted of five training days with exercises along the entire use-case life cycle, including standard coding tasks such as cleaning the data and running automated standard ML models, followed by on-the-job coaching as participants applied their new skills to their own projects. While education in a technical field such as engineering, physics, or mathematics was a plus, the only prerequisite for these business experts was an interest in and curiosity about data science.
Discuss the limitations—and the opportunities
As highlighted, there are clear limitations to AutoML technology and numerous pitfalls for companies that use it inappropriately—not least the potential for faulty outputs when it’s applied outside its intended scope, undetected biases, and a lack of explainability. These dangers have led to concerns in the data-science community. However, organizations that are mindful of the issues and engage in open discussions with their data scientists about the potential of AutoML will not only be better able to deal with current talent gaps but also free up their data scientists for the tasks that really interest them. At the manufacturing company mentioned earlier, data scientists were happy that they no longer needed to run every standardized task in the local plants and could instead focus on the work that truly required their deep, specialized knowledge.
The time is ripe for companies to adjust their talent strategy to take advantage of AutoML tools. These tools enable business experts to efficiently and cost-effectively complete many of today’s simpler data-science tasks and will be even more important in the future as they improve. At the same time, expert data scientists will be freed up for the technically most challenging tasks, enabling them to use their skill set more efficiently and innovate faster, while increasing their job satisfaction—benefits for both the data scientists and the companies that seek to maximize their outputs and retention.
Planning has long been one of the cornerstones of management. Early in the twentieth century, Henri Fayol identified the manager’s job as planning, organizing, commanding, coordinating, and controlling. The capacity and willingness of managers to plan developed throughout the century. Management by Objectives (MBO) became the height of corporate fashion in the late 1950s. The world appeared predictable. The future could be planned. It seemed sensible, therefore, for executives to identify their objectives. They could then focus on managing in such a way that these objectives were achieved.
This was the capitalist equivalent of the Communist system’s five-year plans. In fact, one management theorist of the 1960s suggested that the best managed organizations in the world were the Standard Oil Company of New Jersey, the Roman Catholic Church and the Communist Party. The belief was that if the future was mapped out, it would happen.
Later, MBO evolved into strategic planning. Corporations developed large corporate units dedicated to it. They were deliberately detached from the day-to-day realities of the business and emphasized formal procedures around numbers. Henry Mintzberg defined strategic planning as “a formalized system for codifying, elaborating and operationalizing the strategies which companies already have.” The fundamental belief was still that the future could largely be predicted.
Now, strategic planning has fallen out of favor. In the face of relentless technological change, disruptive forces in industry after industry, global competition, and so on, planning seems like pointless wishful thinking.
And yet, planning is clearly essential for any company of any size. Look around your own organization. The fact that you have a workplace equipped for the job, and that you and your colleagues are working on a particular project at a particular time and place, requires some sort of planning. The reality is that plans have to be made about the use of a company’s resources all the time. Some are short-term; others stretch into an imagined future.
Universally valuable, but desperately unfashionable, planning waits like a spinster in a Jane Austen novel for someone to recognize her worth.
But executives are wary of planning because it feels rigid, slow, and bureaucratic. The Fayol legacy lingers. A 2016 HBR Analytics survey of 385 managers revealed that most executives were frustrated with planning because they believed that speed was important and that plans frequently changed anyway. Why engage in a slow, painful planning exercise when you’re not even going to follow the plan?
The frustrations with current planning practices intersect with another fundamental managerial trend: organizational agility. Reorganizing around small self-managing teams — supported by agile methods like Scrum and LeSS — is emerging as the route to the organizational agility required to compete in a fast-changing business reality. One of the key principles underpinning team-based agility is that teams autonomously decide their priorities and where to allocate their own resources.
The logic of centralized long-term strategic planning (done once a year at a fixed time) is the antithesis of an organization redesigned around teams that define their own priorities and resource allocation on a weekly basis.
But if planning and agility are both necessary, organizations have to make them work together. They have to create a Venn diagram with planning on one side, agility on the other, and a practical, workable sweet spot in the middle. This is why the quest to rethink strategic planning has never been more urgent. Planning twenty-first-century style should be reconceived as agile planning.
Agile planning has a number of characteristics:
frameworks and tools able to deal with a future that will be different;
the ability to cope with more frequent and dynamic changes;
quality time invested in a true strategic conversation rather than a mere numbers game;
flexible availability of resources and funds for emerging opportunities.
The intersection of planning with organizational agility generates two other paramount requirements:
A process able to coordinate and align with agile teams
Agile organizations face the challenge of managing the local autonomy of squads (bottom-up input) in a way consistent with the bigger picture represented by the tribe’s goals, cross-tribe interdependencies, and the strategic priorities of the organization (top-down view). Governing this tension requires new processes and routines for planning and coordination.
Consider the Dutch financial services firm ING Bank. It restructured its operations in the Netherlands by reorganizing 3,500 employees into agile squads. These are autonomous multidisciplinary teams (up to nine people per team) able to define their work and make business decisions quickly and flexibly. Squads are organized into a Tribe (of no more than 150 people), a collection of squads working on related areas.
ING Bank revisited its process and introduced routine meetings and formats to create alignment between and within tribes. Each tribe develops a QBR (Quarterly Business Review), a six-page document outlining tribe-level priorities, objectives and key results. This is then discussed in a large alignment meeting (labeled the QBR Marketplace) attended by tribe leads and other relevant leaders. At this meeting one fundamental question is addressed: when we add up everything, does this contribute to our company’s strategic goals?
The alignment within a tribe happens at what is called a Portfolio Marketplace event: representatives of each of the squads which make up the tribe come together to agree on how the set goals are going to be achieved and to address opportunities for synergies.
The ING Bank example shows that a planning process is still essential in an agile company, albeit in a different fashion, with different processes, mechanisms and routines.
As more and more companies transform into agile organizations, agile planning will likely become the new normal, replacing the traditional centralized planning approach.
A process that makes use of both limitless hard data and human judgment
Planners have traditionally been obsessed with gathering hard data on their industry, markets, and competitors. Soft data — networks of contacts, conversations with customers, suppliers, and employees, intuition, and the grapevine — have all but been ignored.
From the 1960s onward, planning was built around analysis. Now, thanks to Big Data, the ability to generate data is practically limitless. But more data does not necessarily allow us to create better plans for the future.
Soft data is also vital. “While hard data may inform the intellect, it is largely soft data that generate wisdom. They may be difficult to ‘analyze’, but they are indispensable for synthesis — the key to strategy making,” says Henry Mintzberg.
Companies need first to imagine possibilities and second, pick the one for which the most compelling argument can be made. In deciding which is backed by the most compelling argument, they should indeed take into account all data that can be crunched. But in addition, they should use qualitative judgment.
In an agile organization, teams use design thinking and other exploratory techniques (plus data) to make rapid decisions and change course on a weekly basis. Decision making is done by a team of people, offsetting the potential biases of a single person deciding on individual judgment alone. To some extent, an agile, team-based organization makes it possible to leverage qualitative data and judgment — combined today with near-limitless hard data — for better decisions.
Relying solely on hard data has unquestionably killed many potential great businesses. Take Nespresso, the coffee pod pioneer developed by Nestle. Nespresso took off when it stopped targeting offices and started marketing itself to households. There was little data on how households would respond to the concept and whatever information was available suggested a perceived consumer value of just 25 Swiss centimes versus a company-wide threshold requirement of 40 centimes. The Nespresso team had to interpret the data skillfully to present a better case to top management. Because it believed strongly in the idea, it forced the company to take a bigger-than-usual risk. If Nestle had been guided solely by quantitative market research the concept would never have gotten off the ground.
The traditional planning approach needs to be revisited to better serve the agile enterprise of the twenty-first century. Agile planning is the future of planning. This new approach will require two fundamental elements. First, replacing the traditional obsession with hard data and the numbers game with a more balanced coexistence of hard and soft data, in which judgment also plays an important role. Second, introducing new mechanisms and routines to ensure alignment between the hundreds of self-organizing, autonomous local teams and the overarching goals and direction of the company.