Ethics for urgency means making ethics a core part of AI rather than an afterthought, says Jess Whittlestone.
Jess Whittlestone of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week. They argue that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics for urgency.
For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.
Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.
This interview has been edited for length and clarity.
Why do we need a new kind of ethics for AI?
With this pandemic we’re suddenly in a situation where people are really talking about whether AI could be useful, whether it could save lives. But the crisis has made it clear that we don’t have robust enough ethics procedures for AI to be deployed safely, and certainly not ones that can be implemented quickly.
What’s wrong with the ethics we have?
I spent the last couple of years reviewing AI ethics initiatives, looking at their limitations and asking what else we need. Compared to something like biomedical ethics, the ethics we have for AI isn’t very practical. It focuses too much on high-level principles. We can all agree that AI should be used for good. But what does that really mean? And what happens when high-level principles come into conflict?
For example, AI has the potential to save lives but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven’t figured out how to deal with the inevitable disagreements.
AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions.
But ethics needs to be proactive and prepare for what could go wrong, not what has gone wrong already. Obviously, we can’t predict the future. But as these systems become more powerful and get used in more high-stakes domains, the risks will get bigger.
Do companies still need to hire a large contingent of data scientists to build machine learning models or can AutoML reduce the demand for this elusive talent?
In recent years, as the promise of artificial intelligence (AI) crystallized across industries, organizations revamped their talent strategies to gain the skills necessary to deploy and scale AI systems. They hired legions of data scientists and other data experts to build AI applications, trained analytics translators to connect the business and technical realms, and upskilled frontline staff to use AI applications effectively.
One role in particular, the data scientist, has been especially difficult for leaders to fill as competition for this elusive expertise has increased. Last year, employment-related search engine Indeed.com reported that job postings on its site for data scientists had more than tripled since December 2013. McKinsey Global Institute research has also highlighted the talent shortage and the potential for hundreds of thousands of positions to go unfilled.
Incumbent companies found it especially hard to compete with start-ups and tech giants such as Google to attract or retain the best practicing data scientists and the newest crop of graduates. One multinational retail conglomerate, for example, put in place a highly attractive package last year, with education perks and salaries up to 20 percent higher than market rates, to attract the 30-plus data scientists it needed to support its strategic road map of priority AI use cases.
Certainly, some of this competition may soften as tech start-ups struggle to survive in the wake of the COVID-19 crisis, making it somewhat easier for incumbents to acquire these hard-to-get skills. But there are also new tools that have the potential to fill the data-science talent gap and increase the efficiency of analytics teams. Automated machine learning (ML) tools, commonly called AutoML, are designed to automate many steps in developing machine learning models. Business experts armed with AutoML can build some types of models that once would have needed a trained data scientist.
As one might imagine, there’s a great deal of discussion around what can or should be automated when it comes to model development. However, one thing is clear: the evolution of AutoML tools is driving a radically new way of thinking about data science, expanding its bench to include business experts with extensive domain knowledge, basic data-science skills or the willingness to learn them, and AutoML training, rather than solely filling the team with experienced data scientists.
To stay competitive, we believe companies will be best served by not putting all their resources into the fight for scarce technical talent, but instead focusing at least part of their attention on building up their bench of AutoML practitioners, who will become a substantial proportion of the talent pool for the next decade.
How AutoML tools change the data-science game
To understand this shift in AI talent needs, it’s helpful to grasp at a high level how models—the basic building blocks of AI systems—are created and where data scientists spend most of their time (exhibit).
There are typically six broad steps in the model-development workflow:
Understanding the business challenge and translating it into a mathematical one. This is arguably one of the most crucial steps, as the decisions data scientists make here (for example, how they account for the interplay of pricing and demand in an AI-driven price-optimization system) can determine the performance and ultimate success of the model.
Understanding the data, including assessing what data are available to support the business goal and the feasibility of leveraging that data to fuel an effective analytical model for the job.
Preparing the data, including cleansing the data and identifying the most important features. For example, average operating temperature of equipment and time between maintenance would be key features for helping to predict when maintenance is needed.
Developing the models using programming languages such as R and Python by either leveraging one of the many readily available algorithms on open-source platforms or, in much rarer instances, developing a new tailored approach for the problem at hand.
Testing and fine-tuning models for performance in meeting the original business goals as well as to address any risks, such as bias, fairness, production readiness, and so on.
Deploying the new models into production, embedding them into business and decision-making workflows, and monitoring their performance, making updates as needed.
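The six steps above can be sketched end to end in miniature. The following is an illustrative, dependency-free Python example, not any particular company's pipeline: the maintenance-prediction task, the synthetic data, and the hand-written nearest-centroid model are all hypothetical, chosen only to show where each workflow step sits.

```python
import random
import statistics

random.seed(0)

# Steps 1-2: a hypothetical business question -- "will this machine need
# maintenance soon?" -- framed as binary classification on the two features
# mentioned above: average operating temperature and time since last service.

# Step 3: prepare the data. In practice this step dominates a data
# scientist's time; here we simply simulate clean feature rows.
def make_row():
    temp = random.gauss(70, 10)                  # average operating temperature
    hours = random.uniform(0, 2000)              # hours since last maintenance
    needs_service = temp > 75 and hours > 1000   # hidden "true" rule
    return (temp, hours, needs_service)

train = [make_row() for _ in range(400)]
test = [make_row() for _ in range(100)]

# Step 4: "develop" a deliberately simple model -- a nearest-centroid
# classifier written by hand so the example has no dependencies.
def fit(rows):
    pos = [(t, h) for t, h, y in rows if y]
    neg = [(t, h) for t, h, y in rows if not y]
    centroid = lambda pts: (statistics.mean(p[0] for p in pts),
                            statistics.mean(p[1] for p in pts))
    return centroid(pos), centroid(neg)

def predict(model, temp, hours):
    (pt, ph), (nt, nh) = model
    # scale hours down so both features contribute comparably
    d_pos = (temp - pt) ** 2 + ((hours - ph) / 20) ** 2
    d_neg = (temp - nt) ** 2 + ((hours - nh) / 20) ** 2
    return d_pos < d_neg

# Step 5: test against held-out data before any talk of deployment (step 6).
model = fit(train)
accuracy = statistics.mean(predict(model, t, h) == y for t, h, y in test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Even in this toy version, most of the code concerns data generation and preparation rather than modeling, mirroring where real data scientists spend their time.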
Many organizations have found that 60 to 80 percent of a data scientist’s time is spent preparing the data for modeling. Once the initial model is built, only a fraction of his or her time—4 percent, according to some analyses—is spent on testing and tuning code. In essence, tuning model parameters has become a commodity, and performance is driven by data selection and preparation.
The field of AutoML aims to automate all data preparation, as well as modeling and tuning steps, so that manual technical work is no longer required. While these tools don’t automate everything yet, they are currently able to produce machine learning models that perform well enough to deliver returns. In the telecom industry, for example, some companies have successfully leveraged AutoML to build profitable churn-management models that predict with sufficient accuracy which customers have a high risk of canceling their contracts.
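The core mechanic behind these tools — automatically searching a space of candidate models and keeping the best performer on held-out data — can be illustrated in a few lines of plain Python. The churn data, the threshold-rule "models," and the feature names below are toy assumptions; real AutoML products run the same search over full learning algorithms and hyperparameters at vastly larger scale.

```python
import random
import statistics

random.seed(1)

# Toy churn data: (monthly_bill, support_calls) -> churned?
def sample():
    bill = random.uniform(20, 120)
    calls = random.randint(0, 8)
    churned = bill > 80 or calls >= 5   # hidden "true" behaviour
    return bill, calls, churned

train = [sample() for _ in range(300)]
valid = [sample() for _ in range(100)]

# Candidate "models": simple threshold rules over each feature, with
# thresholds taken at the training-data deciles.
def make_rule(feature_index, threshold):
    return lambda row: row[feature_index] > threshold

def deciles(i):
    return statistics.quantiles([r[i] for r in train], n=10)

candidates = [(i, th) for i in (0, 1) for th in deciles(i)]

def accuracy(rule, rows):
    return statistics.mean(rule(r) == r[2] for r in rows)

# The "auto" part: evaluate every candidate on validation data and keep
# the winner -- no manual tuning by a data scientist required.
best = max(candidates, key=lambda c: accuracy(make_rule(*c), valid))
print("best rule: feature", best[0], "threshold", round(best[1], 1),
      "accuracy", round(accuracy(make_rule(*best), valid), 2))
```

The design point is that the search loop, not any individual model, is what the tool automates — which is why domain experts who can frame the problem and supply good data can operate it without writing the models themselves.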
It’s important to note here that the push to eliminate manual data tasks isn’t new. Today, most machine learning models, including powerful ones such as deep learning, are already fully integrated into programming languages, meaning that data scientists can apply these techniques using very little code. For example, one energy company was able, once it had prepared the data, to build a model that accurately predicted customer cancellations by applying just one line of code. An active and growing open-source community also provides “snippets” of code that data scientists can copy and paste into their models to make the data-preparation and modeling part of their work easier than ever.
It is unclear how far AutoML capabilities will go in automating modeling tasks; complete automation still seems far away. However, it seems certain that these capabilities will make data science ever more accessible to business experts, and, in some cases, business-domain understanding will enrich the quality of many models more than the technical skills of a data scientist.
We already see the transition happening: state-of-the-art tools are enabling AutoML practitioners to build reasonably high-performing ML pipelines that include all steps—from reading the data to tuning the parameters—without substantial knowledge of machine learning or statistics. One North American retailer, for example, retrained several hundred employees in its business-intelligence team to use an off-the-shelf AutoML platform to perform customer-segmentation tasks that were previously carried out by highly trained data scientists. The move has enabled the company to fill the talent gap between basic business-intelligence functions and very complex ML modeling tasks and save hundreds of thousands of dollars in data preparation.
Certainly, not all data-science challenges can be solved using AutoML tools. At present, the technology is best suited to streamlining the development of common forecasting tasks, where the goal is to predict an outcome, given a few metrics, and the use of black-box models is permissible. Models that require statistical expertise to ensure fairness or build trust—for example, customer-engagement models that help salespeople understand what a prospect is likely to buy and why—still require the expertise of trained data scientists.
The impact on hiring strategies
Given the current limitations of AutoML tools, we don’t foresee demand for substantial, functional data-science expertise going away anytime soon. Over the long term, purely technical data scientists will still be needed, though far fewer than most currently predict. We estimate that over the next five years, demand for AutoML practitioners is likely to be twice as high as demand for data scientists as companies build out their talent strategies with both levels of expertise:
AutoML practitioners, such as biochemists in pharma research, will be able to perform simpler data-science tasks.
Data scientists with the statistical expertise to understand which tasks can safely be automated without risk will perform highly specialized tasks that can’t be automated, such as developing new algorithms or optimizing accuracy down to the last few percentage points.
How to get started
Where should organizations begin to rethink their data-science talent needs? We recommend companies take the following steps.
Reassess your requirements
The distinction between tasks that can be left to AutoML practitioners and those that require data scientists with deep statistical expertise is not trivial. It requires experienced analytics practitioners to take stock of all the initiatives on the AI road map and triage them based on the complexity of the data and modeling techniques and the necessary level of predictive accuracy. We find the following questions can serve as a useful guide to determine how to divvy up the work on any given task:
Is this a nonstandard data-science task as opposed to a standard predictive task, such as classification or regression?
Will we need to use rich and complex data to solve the business problem?
Is there potential bias in the data, such as in the case of a resume-screening model that may unintentionally reflect historical prejudices?
Will the problem likely require deeper understanding of statistical methods, such as causal inference?
Would a slight difference in model performance (for example, a 1 to 2 percent bump in predictive accuracy) significantly influence the value of the model?
To handle tasks for which the answer to any of these questions is “yes,” the organization will most certainly need highly trained data scientists in its talent mix.
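The any-“yes” triage rule above is simple enough to express directly. The sketch below is illustrative: the question keys and function names are our own shorthand for the five questions, not part of any product or methodology.

```python
# The five triage questions from the text, as shorthand keys.
TRIAGE_QUESTIONS = (
    "nonstandard_task",    # not plain classification/regression?
    "complex_data",        # rich, complex data needed?
    "potential_bias",      # e.g. resume screening reflecting prejudice
    "statistical_depth",   # e.g. causal inference required?
    "accuracy_critical",   # does a 1-2% accuracy shift change the value?
)

def assign_owner(answers):
    """Route a task: any 'yes' means it needs a trained data scientist."""
    if any(answers.get(q, False) for q in TRIAGE_QUESTIONS):
        return "data scientist"
    return "AutoML practitioner"

# A standard forecasting task with clean data and tolerant accuracy
# requirements can go to the AutoML bench:
print(assign_owner({"nonstandard_task": False, "potential_bias": False}))
# prints: AutoML practitioner
```

In practice the hard part is answering the questions honestly for each road-map initiative, which is why the text recommends experienced analytics practitioners do the triage.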
Upskill domain experts
The best way to get started with AutoML tools is to train your existing business experts, as opposed to recruiting new hires. Training should include education both in using AutoML tools and in the fundamentals of data science. For example, the business experts should be aware of how common modeling techniques work, what form of data (numeric or text fields) they require, and what patterns the data can (and cannot) reveal. To build out its AutoML team, a manufacturing company piloted a capability-building program for approximately 200 process engineers and line managers. The program consisted of five training days with exercises along the entire use-case life cycle, including standard coding tasks such as cleaning the data and running automated standard ML models, followed by on-the-job coaching when they applied their new skills on their own projects. While education in a technical field such as engineering, physics, or mathematics was a plus, the only prerequisite for these business experts was an interest in and curiosity about data science.
Discuss the limitations—and the opportunities
As highlighted, there are clearly limitations to AutoML technology and numerous pitfalls for companies that use it inappropriately—not least of which are the potential for faulty outputs when it’s used outside its realm of expertise, undetected biases, and lack of explainability. It’s these dangers that have led to concerns in the data-science community. However, organizations that are mindful of the issues and engage in open discussions with their data scientists about the potential of AutoML will not only be able to better deal with current talent gaps but also free up their data scientists for the tasks that really interest them. At the manufacturing company mentioned earlier, data scientists were happy they no longer needed to run every standardized task in the local plants and instead could focus on the tasks that really required their deep and specialized knowledge.
The time is ripe for companies to adjust their talent strategy to take advantage of AutoML tools. These tools enable business experts to efficiently and cost-effectively complete many of today’s simpler data-science tasks and will be even more important in the future as they improve. At the same time, expert data scientists will be freed up for the technically most challenging tasks, enabling them to use their skill set more efficiently and innovate faster, while increasing their job satisfaction—benefits for both the data scientists and the companies that seek to maximize their outputs and retention.
Planning has long been one of the cornerstones of management. Early in the twentieth century Henri Fayol identified the job of managers as to plan, organize, command, coordinate, and control. The capacity and willingness of managers to plan developed throughout the century. Management by Objectives (MBO) became the height of corporate fashion in the late 1950s. The world appeared predictable. The future could be planned. It seemed sensible, therefore, for executives to identify their objectives. They could then focus on managing in such a way that these objectives were achieved.
This was the capitalist equivalent of the Communist system’s five-year plans. In fact, one management theorist of the 1960s suggested that the best managed organizations in the world were the Standard Oil Company of New Jersey, the Roman Catholic Church and the Communist Party. The belief was that if the future was mapped out, it would happen.
Later, MBO evolved into strategic planning. Corporations developed large corporate units dedicated to it. They were deliberately detached from the day-to-day realities of the business and emphasized formal procedures around numbers. Henry Mintzberg defined strategic planning as “a formalized system for codifying, elaborating and operationalizing the strategies which companies already have.” The fundamental belief was still that the future could largely be predicted.
Now, strategic planning has fallen out of favor. In the face of relentless technological change, disruptive forces in industry after industry, global competition, and so on, planning seems like pointless wishful thinking.
And yet, planning is clearly essential for any company of any size. Look around your own organization. The fact that you have a place to work which is equipped for the job, and you and your colleagues are working on a particular project at a particular time and place, requires some sort of planning. The reality is that plans have to be made about the use of a company’s resources all of the time. Some are short-term, others stretch into an imagined future.
Universally valuable, but desperately unfashionable, planning waits like a spinster in a Jane Austen novel for someone to recognize her worth.
But executives are wary of planning because it feels rigid, slow, and bureaucratic. The Fayol legacy lingers. A 2016 HBR Analytics survey of 385 managers revealed that most executives were frustrated with planning because they believed that speed was important and that plans frequently changed anyway. Why engage in a slow, painful planning exercise when you’re not even going to follow the plan?
The frustrations with current planning practices intersect with another fundamental managerial trend: organizational agility. Reorganizing around small self-managing teams — enhanced by agility methods like Scrum and LeSS — is emerging as the route to the organizational agility required to compete in the fast-changing business reality. One of the key principles underpinning team-based agility is that teams autonomously decide their priorities and where to allocate their own resources.
The logic of centralized long-term strategic planning (done once a year at a fixed time) is the antithesis of an organization redesigned around teams that define their own priorities and resource allocation on a weekly basis.
But if planning and agility are both necessary, organizations have to make them work. They have to create a Venn diagram with planning on one side, agility on the other, and a practical and workable sweet-spot in the middle. This is why the quest to rethink strategic planning has never been more urgent and critical. Planning twenty-first century style should be reconceived as agile planning.
Agile planning has a number of characteristics:
frameworks and tools able to deal with a future that will be different;
the ability to cope with more frequent and dynamic changes;
quality time invested in a true strategic conversation rather than simply a numbers game;
resources and funds made available flexibly for emerging opportunities.
The intersection of planning with organizational agility generates two other paramount requirements:
A process able to coordinate and align with agile teams
Agile organizations face the challenge of managing the local autonomy of squads (bottom-up input) consistently with a bigger picture represented by the tribe’s goals and by cross-tribe interdependencies and the strategic priorities of the organization (top-down view). Governing this tension requires new processes and routines for planning and coordination.
Consider the Dutch financial services firm ING Bank. It restructured its operations in the Netherlands by reorganizing 3,500 employees into agile squads. These are autonomous multidisciplinary teams (up to nine people per team) able to define their work and make business decisions quickly and flexibly. Squads are organized into a Tribe (of no more than 150 people), a collection of squads working on related areas.
ING Bank revisited its process and introduced routine meetings and formats to create alignment between and within tribes. Each tribe develops a QBR (Quarterly Business Review), a six-page document outlining tribe-level priorities, objectives and key results. This is then discussed in a large alignment meeting (labelled the QBR Marketplace) attended by tribe leads and other relevant leaders. At this meeting one fundamental question is addressed: when we add up everything, does this contribute to our company’s strategic goals?
The alignment within a tribe happens at what is called a Portfolio Marketplace event: representatives of each of the squads which make up the tribe come together to agree on how the set goals are going to be achieved and to address opportunities for synergies.
The ING Bank example shows how the planning process is still necessary and essential to an agile company although in a different fashion with different processes, mechanisms and routines.
As more and more companies transform into agile organizations, agile planning will likely become the new normal replacing the traditional centralized planning approach.
A process that makes use of both limitless hard data and human judgment
Planners have traditionally been obsessed with gathering hard data on their industry, markets, competitors. Soft data — networks of contacts, talking with customers, suppliers and employees, using intuition and using the grapevine — have all but been ignored.
From the 1960s onwards, planning was built around analysis. Now, thanks to Big Data, the ability to generate data is pretty well limitless. This does not necessarily allow us to create better plans for the future.
Soft data is also vital. “While hard data may inform the intellect, it is largely soft data that generate wisdom. They may be difficult to ‘analyze’, but they are indispensable for synthesis — the key to strategy making,” says Henry Mintzberg.
Companies need first to imagine possibilities and second, pick the one for which the most compelling argument can be made. In deciding which is backed by the most compelling argument, they should indeed take into account all data that can be crunched. But in addition, they should use qualitative judgment.
In an agile organization, teams use design thinking and other exploratory techniques (plus data) to make rapid decisions and change course on a weekly basis. Decision making is done by a team of people, offsetting the potential biases of a single person deciding on her individual judgment alone. To some extent, an agile team-based organization makes it possible to leverage qualitative data and judgment — combined today with near-limitless hard data — for better decisions.
Relying solely on hard data has unquestionably killed many potential great businesses. Take Nespresso, the coffee pod pioneer developed by Nestle. Nespresso took off when it stopped targeting offices and started marketing itself to households. There was little data on how households would respond to the concept and whatever information was available suggested a perceived consumer value of just 25 Swiss centimes versus a company-wide threshold requirement of 40 centimes. The Nespresso team had to interpret the data skillfully to present a better case to top management. Because it believed strongly in the idea, it forced the company to take a bigger-than-usual risk. If Nestle had been guided solely by quantitative market research the concept would never have gotten off the ground.
The traditional planning approach needs to be revisited to better serve the purposes of the agile enterprise of the twenty-first century. Agile planning is the future of planning. This new approach will require two fundamental elements. First, replacing the traditional obsessions on hard data and playing the numbers-game with a more balanced co-existence of hard and soft data where judgment also plays an important role. Second, introducing new mechanisms and routines to ensure alignment between the hundreds of self-organizing autonomous local teams and the overarching goals and directions of the company.
Pentagon officials working on Defense Secretary Mark Esper’s cost-cutting review of the department have proposed slashing military health care by $2.2 billion, a reduction that some defense officials say could effectively gut the Pentagon’s health care system during a nationwide pandemic.
The proposed cut to the military health system over the next five years is part of a sweeping effort Esper initiated last year to eliminate inefficiencies within the Pentagon’s coffers. But two senior defense officials say the effort has been rushed and driven by an arbitrary cost-savings goal, and argue that the cuts to the system will imperil the health care of millions of military personnel and their families as the nation grapples with Covid-19.
Esper and his deputies have argued that America’s private health system can pick up the slack.
Roughly 9.5 million active-duty personnel, military retirees and their dependents rely on the military health system, which is the military’s sprawling government-run health care framework that operates hundreds of facilities around the world. The military health system also provides care through TRICARE, which enables military personnel and their families to obtain civilian health care outside of military networks.
Under the proposal in the latest version of Esper’s defense-wide review, the armed services, the defense health system and officials at the Office of the Secretary of Defense for Personnel and Readiness would be tasked to find savings in their budgets to the tune of $2.2 billion for military health. Officials arrived at that number recently after months of discussions with the impacted offices during the review, said a third defense official. A fourth added that the cuts will be “conditions-based and will only be implemented to the extent that the [military health system] can continue to maintain our beneficiaries access to quality care, be it through our military health care facilities or with our civilian health care provider partners.”
However, the first two senior defense officials said the cuts are not supported by program analysis or by warfighter requirements.
The department’s efforts to overhaul the military health system have recently come under scrutiny, as lawmakers pressed the Pentagon on whether the pandemic would affect those plans.
“A lot of the decisions were made in dark, smoky rooms, and it was driven by arbitrary numbers of cuts,” said one senior defense official with knowledge of the process. “They wanted to book the savings to be able to report it.”
“It imperils the ability to support our combat forces overseas,” added a second senior official, who argued that Esper’s moves are weakening the ability to protect the health of active-duty troops in military theaters abroad. “They’re actively pushing very skilled medical people out the door.”
However, a Pentagon spokesperson said the system “continually assesses how it can most effectively align its assets in support of the National Defense Strategy.”
“The MHS will not waver from its mission to provide a ready medical force and a medically ready force,” said Pentagon spokesperson Lisa Lawrence. “Any potential changes to the health system will only be pursued in a manner that ensures its ability to continue to support the Department’s operational requirements and to maintain our beneficiaries access to quality health care.”
Esper rolled out the results of the first iteration of the defense-wide review in February, revealing $5.7 billion in cost savings that he said would be put toward preparing the Pentagon to better compete with Russia and China, including research into hypersonic weapons, artificial intelligence, missile defense and more.
But the proposed health cuts, in the second iteration of the defense-wide review, would degrade military hospitals to the point that they will no longer be able to sustain the current training pipeline for the military’s medical force, potentially necessitating something akin to a draft of civilian medical workers into the military, the two defense officials said.
The second official noted the challenge in finding outside doctors given longstanding complaints from some U.S. hospitals and researchers that there aren’t enough physicians to serve civilians.
As a result, the proposed reductions would hurt combat medical capability without actually saving money, the officials argued. The Pentagon is already significantly overspending on private sector care and TRICARE because patients are being pushed out of undermanned military health facilities to the private health care network, they said. The cuts also would follow nearly a decade of the Pentagon holding military health spending flat, even as spending on care for veterans and civilians has ballooned.
The officials blamed the Pentagon’s Cost Assessment and Program Evaluation office, or CAPE, under the leadership of John Whitley, who has been acting director since August 2019, for the cuts. CAPE conducts analysis and provides advice to the secretary of defense on potential cuts to the defense budget.
During Whitley’s confirmation hearing to be the permanent CAPE director last week, Sen. Doug Jones (D-Ala.) pressed him on the health cuts.
“Folks in my state have expressed some concern and opposition to some of the policies, which allow only active-duty service members to visit military treatment facilities,” Jones said. “What do I tell those folks?”
“The department does have work to do on expanding choice and access to beneficiaries,” Whitley responded. “Sometimes that’s in an MTF, sometimes that’s in the civilian health care setting.”
Whitley has specifically tried to eliminate the Murtha Cancer Center as an unnecessary expense, said one senior official.
Last fall, Whitley and CAPE also sought to close the Uniformed Services University of the Health Sciences, which prepares graduates for the medical corps, as part of the defense-wide review, the people said. Although at the time Esper denied the proposal, CAPE is now seeking major cuts to USU as part of the $2.2 billion. The reductions include eliminating all basic research dollars for combat casualty care, infectious disease and military medicine for USU, as well as slicing operational funds.
“What’s been proposed would be devastating, and it’s coming right out of Whitley’s shop,” said the senior official. “Instead of a clean execution, USU would be bled to death.”
The officials pointed out that USU has contributed to the Covid-19 response in recent months by graduating 230 medical officers and Nurse Corps officers early from the class of 2020 School of Medicine, leading and participating in research clinical trials for virus countermeasures and contributing to the Operation Warp Speed effort to develop a vaccine.
The cuts to USU will leave the department ill-prepared “for the next pandemic,” the first defense official said.
Officials with the Department of Health and Human Services in January raised concerns about last year’s cuts to the Pentagon’s medical corps, which were unrelated to the defense-wide review. In a Jan. 14 memo sent to the Pentagon, the health department’s top emergency-response official stressed that the private health sector would not be able to accommodate “potential casualty estimates shared with HHS.”
The U.S. civilian health system “is unable to absorb and provide sustained care for large numbers of injured service members returning from combat,” wrote Robert Kadlec, the HHS assistant secretary for preparedness and response and a retired Air Force colonel.
Kadlec’s memo came in response to the Pentagon’s announced intent to cut the active duty medical force by about 20 percent, or roughly 17,000 personnel, over the next five years. “Explicit” in the move was the expectation that the civilian health care system would pick up the slack, according to Kadlec’s memo.
The heads of the U.S. military branches are calling on the Defense Department to stop the transfer of all medical facilities to the Defense Health Agency, saying the novel coronavirus pandemic has shown that the plan to convey the services’ hospitals and clinics to the agency is “not viable.”
In a memo sent to Defense Secretary Mark Esper on Aug. 5, the secretaries of the Army, Navy and Air Force, along with the branch chiefs of the Army, Navy, Air Force, Marine Corps and Space Force, called for the return of all military hospitals and clinics already transferred to the DHA and suspension of any planned moves of personnel or resources.
They said that the COVID-19 outbreak has demonstrated that the reform, which was proposed by Congress in the fiscal 2017 National Defense Authorization Act, “introduces barriers, creates unnecessary complexity and increases inefficiency and cost.”
“The proposed DHA end-state represents unsustainable growth with a disparate intermediate structure that hinders coordination of service medical response to contingencies such as a pandemic,” they wrote in the memo, first obtained by a reporter for Synopsis, a Capitol Hill newsletter that focuses on military and veterans health care.
The DoD launched major reforms of its health system in 2013 with the creation of the Defense Health Agency, an organization initially established to improve the quality of health care available to military personnel and family members and to consolidate services such as administration, IT, logistics and training that existed in triplicate across the three service medical commands.
But the initiatives ballooned in 2016, with Congress passing legislation that placed the DHA in charge of military hospitals and clinics worldwide, as well as research and development, public health agencies, medical logistics and other operations run by the service medical commands.
On Oct. 1, 2019, all military hospitals and clinics in the continental United States were transferred to the DHA, with those overseas expected to move over by October 2021.
But in December, Army Secretary Ryan McCarthy asked for a temporary halt of the transfers of Army facilities and requested that the Army Public Health Center and Army Medical Research and Development Command remain permanently under the service’s control.
McCarthy said he had concerns with what he viewed as a “lack of performance and planning with respect to the transition” by the DHA and Defense Department Health Affairs, according to a memo he sent Deputy Defense Secretary David Norquist.
McCarthy’s comments were the first public statements by a military service in opposition to the transformation, which also calls for cutting roughly 18,000 military medical personnel.
In early March, the Air Force and Army surgeons general weighed in, telling the House Appropriations defense subcommittee that the reorganization is an “extremely difficult” and “complicated merger of four cultures.” They suggested that the Defense Health Agency isn’t ready for some of the coming changes.
The DHA assumed management of all domestic military treatment facilities without the staff or management capabilities to actually run them. As part of the plan, the services were to provide support and guidance for the DHA to run the hospitals and clinics in the interim, until its personnel were ready to operate them.
But then the pandemic struck. And according to a source familiar with operations at several medical treatment facilities in the Washington, D.C., region, tensions that had been bubbling since the initial facility transfer erupted.
At one facility, commanders and DHA leadership argued over who was responsible for the COVID-19 screening tents in the parking lot.
“There are definitely turf battles going on,” said the source, a DoD civilian employee. “[The services] are making it very hard.”
The COVID-19 pandemic has delayed several elements of the military health system reform effort. In March, the DoD placed a 60-day hold on a step to establish administrative markets responsible for military treatment facilities in five regions in the U.S.
In April, the department paused the rollout of its MHS Genesis electronic medical records program to several new medical facilities, although it continued to modernize the IT infrastructure needed to support the system.
And in June, the Pentagon’s top health official announced that the DoD would delay some of the changes planned for this year, including an effort to begin closing or restructuring 48 hospitals and clinics and sending at least 200,000 patients to private care.
But Assistant Secretary of Defense for Health Affairs Thomas McCaffery, a former health industry executive who took office last August, has said he remains committed to reform, which he believes will improve quality of care while also saving taxpayer dollars.
“There’s been at least 12 times since World War II where there has been efforts to change our system,” McCaffery said during a visit to military health facilities in Washington last week. “All focused on the best way to organize and manage for the mission, have a ready medical force and a medically ready force. The mission is still the same, and having a more integrated system is the way to do it.”
In their letter to Esper, the service heads said the DHA has been helpful during the pandemic in developing standardized clinical practices for the coronavirus response.
But they still asked him to suspend any transfer activity and appoint a working group to explore different options for management of the hospitals.
They also asked that all military hospitals, including two that have operated under the DHA in the National Capital Region since 2013 — Walter Reed National Military Medical Center in Maryland and Fort Belvoir Community Hospital — be returned to their respective services.
They did not say which service Walter Reed would fall under; the medical center was created after a merger between the Army’s Walter Reed Medical Center in Washington, D.C., and the Navy’s National Naval Medical Center in Bethesda, Maryland. It remains housed at Bethesda, a Navy installation.
“We look forward to working together to achieve successful reform of the military health system,” they wrote.
Lisa Lawrence, a public affairs officer at the Pentagon, said the department plans to continue pursuing reforms as spelled out in the fiscal 2017 defense policy bill.
“The Department remains focused on ensuring the Services maintain a medically ready force and a ready medical force, as well as [ensuring] all eligible beneficiaries have continued access to quality health care,” Lawrence said.
A staff member for the National Military Family Association said that it “makes sense” the pandemic would lead to a reevaluation of the military health system reforms, adding that the organization hopes the DoD, DHA and military services will continue focusing on accountability, transparency and standardization across the system.
“Whatever the outcome, our priority is that service members and families have access to high-quality health care, wherever they happen to be stationed,” said Eileen Huck, deputy director for health care at NMFA.
Many strategy execution processes fail because the firm does not have something worth executing.
The strategy consultants come in, do their work, and document the new strategy in a PowerPoint presentation and a weighty report. Town hall meetings are organized, employees are told to change their behavior, balanced scorecards are reformulated, and budgets are set aside to support initiatives that fit the new strategy. And then nothing happens.
One major reason for the lack of action is that “new strategies” are often not strategies at all. A real strategy involves a clear set of choices that define what the firm is going to do and what it’s not going to do. Many strategies fail to get implemented, despite the ample efforts of hard-working people, because they do not represent a set of clear choices.
Many so-called strategies are in fact goals. “We want to be the number one or number two in all the markets in which we operate” is one of those. It does not tell you what you are going to do; all it does is tell you what you hope the outcome will be. But you’ll still need a strategy to achieve it.
Others may represent a couple of the firm’s priorities and choices, but they do not form a coherent strategy when considered in conjunction. For example, consider “We want to increase operational efficiency; we will target Europe, the Middle East, and Africa; and we will divest business X.” These may be excellent decisions and priorities, but together they do not form a strategy.
Let me give you a better example. About 15 years ago, the iconic British toy company Hornby Railways — maker of model railways and Scalextric slot car racing tracks — was facing bankruptcy. Under the new CEO, Frank Martin, the company decided to change course and focus on collectors and hobbyists instead. As a new strategy, Martin aimed (1) to make perfect scale models (rather than toys); (2) for adult collectors (rather than for children); (3) that appealed to a sense of nostalgia (because it reminded adults of their childhoods). The switch became a runaway success, increasing Hornby’s share price from £35 to £250 over just five years.
That’s because it represented a clear set of just three choices that fit together to form a clear strategic direction for the company. (Unfortunately, in recent years Hornby abandoned this set of choices, with quite disastrous consequences: it was forced to issue a string of profit warnings, and Martin was encouraged to take early retirement.) Without a clear strategic direction, any implementation process is doomed to fail.
Communicate your logic. Sly Bailey, at the time the CEO of UK newspaper publisher Trinity Mirror, once told me, “If there is one thing I have learned about communicating choices, it is that we always focus on what the choices are. I now realize you have to spend at least as much time on explaining the logic behind the choices.”
A set of a limited number of choices that fit together — such as Hornby’s “perfect-scale models for adult collectors that appeal to nostalgia” — is easy to communicate, which is one reason you need them. You cannot communicate a list of 20 choices; employees simply will not remember them. And if they don’t remember them, the choices cannot influence their behavior, in which case you do not have a strategy (but merely a PowerPoint deck). However, as Bailey suggested, communicating the choices is not enough.
Consider Hornby again. Its employees — product designers and technical engineers, for example — could all tell me their company’s new choices. But they could also tell me the rudimentary logic behind them: that their iconic brand names appealed more to adults, who remembered them from their childhoods; that the hobby market was less competitive, with more barriers to entry and less switching by consumers. It is because they understood the reasoning behind Frank Martin’s choices that they believed in them and followed up on them in their day-to-day work.
It’s not just a top-down process. Another reason many implementation efforts fail is that executives see it as a pure top-down, two-step process: “The strategy is made; now we implement it.” That’s unlikely to work. A successful strategy execution process is seldom a one-way trickle-down cascade of decisions.
Stanford professor Robert Burgelman said, “Successful firms are characterized by maintaining bottom-up internal experimentation and selection processes while simultaneously maintaining top-driven strategic intent.” This is quite a mouthful, but what Burgelman meant is that you indeed need a clear, top-down strategic direction (such as Hornby’s set of choices). But this will only be effective if, at the same time, you enable your employees to create bottom-up initiatives that fall within the boundaries set by that strategic intent.
Burgelman was speaking about Intel, when it was still a company focused on producing memory chips. Its top-down strategy was clear: (1) to be on the forefront of (2) semiconductor technology and (3) to be aimed at the memory business (not coincidentally a set of three clear choices!). But Intel implemented it by providing ample autonomy and decentralized budgets to its various groups and teams, for employees to experiment with initiatives that would bring this strategic intent to life and fruition.
Many of these experiments failed — they were “selected out,” in Burgelman’s terminology — but others became successes. One of them formed the basis of the Pentium microprocessor, which would turn Intel into one of the most successful technology companies the world has ever seen. It was the combination of a broad yet clear top-down strategic direction and ample bottom-up initiatives that made it work.
Let selection happen organically. A common mistake in the bottom-up implementation process is that many top managers cannot resist doing the selection themselves. They look at the various initiatives that employees propose as part of the strategy execution process and then they pick the ones they like best.
Instead, top executives should resist the temptation to decide which projects live and die within their firms. Strategy implementation requires top managers to design the company’s internal system that does the selection for them. Intel’s top management, for example, did not choose among the various initiatives in the firm personally, but used an objective formula to assign production capacity. They also gave division managers ample autonomy to decide what technology they wanted to work on, so projects that few people believed in automatically failed to get staffed.
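Burgelman’s account of Intel describes exactly such a designed selection system: scarce production capacity was assigned by a fixed, objective rule rather than by executive preference. The text doesn’t spell out the formula, so the sketch below simply assumes capacity is allocated in proportion to each product line’s margin per unit of capacity; the numbers and product names are purely illustrative.

```python
def allocate_capacity(projects, total_capacity):
    """Assign capacity in proportion to each project's margin per unit
    of capacity -- an objective rule, so no executive picks winners.

    projects: dict mapping project name -> margin per unit of capacity.
    Returns a dict mapping project name -> allocated capacity.
    """
    total_margin = sum(projects.values())
    return {
        name: total_capacity * margin / total_margin
        for name, margin in projects.items()
    }

# Illustrative figures only -- not Intel's actual margins.
projects = {"memory_chips": 1.0, "microprocessors": 3.0}
allocation = allocate_capacity(projects, total_capacity=1000)
```

Under a rule like this, a higher-margin line automatically attracts capacity as the market shifts, and no manager has to defend the decision to kill a fading business: the formula does the selecting.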
Be brave enough to resist making these bottom-up choices, but design a system that does it for you.
Make change your default. Finally, another reason many implementation efforts fail is that they usually require changing people’s habits. And habits in organizations are notoriously sticky and persistent. Habits certainly don’t change by telling people in a town hall meeting that they should act differently. People are often not even aware that they are doing things in a particular way and that there might be different ways to run the same process.
Identifying and countering the bad habits that keep your strategy from getting executed is not an easy process, but — as I elaborate on in my book Breaking Bad Habits — there are various practices you can build into your organization to make it work. Depending on your specific circumstances and strategy, this might involve taking on difficult clients or projects that fit your new strategy and that trigger learning throughout the firm. It may involve reshuffling people into different units, to disrupt and alter habitual ways of working and to expose people to alternative ways of doing things. It may also involve identifying key processes and explicitly asking the question “Why do we do it this way?” If the answer is a shrug of the shoulders and a proclamation of “That’s how we’ve always done it,” it may be a prime candidate for change.
There are usually different ways of doing things, and there is seldom one perfect solution, since all alternatives have advantages and disadvantages — whether it concerns an organization’s structure, incentive system, or resource allocation process. We often resist change unless it is crystal clear that the alternative is substantially better. For a successful strategy implementation process, however, it is useful to put the default the other way around: Change it unless it is crystal clear that the old way is substantially better. Execution involves change. Embrace it.
America’s health-care system, the most expensive in the world, is under renewed scrutiny because of the coronavirus pandemic. U.S. employers and households spend almost $4 trillion annually on medical care, yet America regularly lags its peers in key health metrics, and it registered the greatest number of confirmed coronavirus cases and deaths anywhere in the first six months of the global crisis. In advance of a presidential election in November, the virus’s assault shined a spotlight on the gaps and inequities in the market-driven approach of the only rich country not to have universal health care.
1. How is U.S. health care different?
Government involvement in health care goes against the libertarian streak that distinguishes the U.S. from, say, the U.K. and Canada, whose state-funded health systems guaranteeing care for all are derided by some Americans as “socialized medicine.” Only about 36% of Americans, mainly the elderly and poor, receive health-care coverage through the government, via the Medicare and Medicaid programs. More than half of Americans have health insurance as a benefit through work (and can lose coverage if laid off). The 2010 Affordable Care Act, more commonly called Obamacare, has helped about 20 million Americans get health coverage by expanding access to Medicaid and subsidizing purchases of individual plans. Still, as of 2018, about 9% of the population, or 28.3 million people, had no health insurance.
2. What do the uninsured do?
About 1 in 4 uninsured people put off seeking care in 2018 because of the expense, according to the Centers for Disease Control and Prevention. When emergencies arise, those lacking insurance often seek treatment at hospitals, which by law can’t turn them away. Even among those with insurance, about 29% were “underinsured” in 2018, meaning they faced high out-of-pocket costs when seeking care, according to the Commonwealth Fund, a private foundation. Those costs can total about $650 a year on average for the non-elderly and can reach into many thousands of dollars in the event of a “surprise billing,” when a patient receives care, often in an emergency, from a provider that’s not covered by the patient’s insurer. Medical expenses or health-related income loss resulted in an average of 530,000 bankruptcies each year in the U.S. from 2013 to 2016, according to the American Public Health Association.
3. What happened when the virus hit?
America’s patchwork system created confusion, muddling the response. Since prices are unregulated, concerns about out-of-pocket costs for coronavirus tests lingered even after federal officials assured Americans they’d pay nothing. The federal government also promised to pay the hospital bills of uninsured Covid-19 patients, and major insurers eventually pledged to waive out-of-pocket hospital expenses for their customers, but concerns about unexpected bills remained. The virus took a particularly harsh toll on Black Americans, who, as a result of income inequality and disparities in access to health care, are more likely to have underlying conditions such as diabetes, hypertension and lung disease. Some state governors bemoaned the lack of a coordinated federal response to challenges such as ensuring sufficient supplies of tests, ventilators and personal protective equipment for health-care workers, tasks that were left to underfunded state-level public-health departments. The U.S. spent 17% of gross domestic product on health care in 2019, double the average of the well-to-do members of the Organization for Economic Cooperation and Development. But it allots only 3 cents out of every dollar to public health, the field devoted to protecting entire populations by, among other things, responding to infectious disease.
4. Where does U.S. health spending go?
The system encourages more expensive, specialized treatment over primary care. This does make the U.S. a leader in many aspects of medicine. It’s long been at the forefront of research and has lower death rates for breast cancer, heart attacks and strokes. Patients face shorter wait times to see specialists and can have access to state-of-the-art procedures. U.S. doctors earn roughly twice as much as those in other wealthy countries. At the same time, the U.S. has some of the worst health outcomes, including the lowest life expectancy among its peers and the highest rate of “avoidable deaths” — those that could have been prevented with effective care.
5. How is health care an election issue?
President Donald Trump and the Republican Party generally regard Obamacare as government overreach and support a challenge before the Supreme Court that could gut it. Some Democrats want to abolish the existing system of private insurance in favor of a government program along the lines of the U.K.’s. Trump’s presumptive Democratic challenger, former Vice President Joe Biden, doesn’t go that far but does want to lower the age for Medicare eligibility to 60 from 65 and create a government-provided health plan — a “public option” — that Americans could buy into.
Sir Tim Berners-Lee is the CTO and co-founder of Inrupt and inventor of the world wide web.
A few months into the coronavirus pandemic, the web is more central to humanity’s functioning than I could have imagined 30 years ago. It’s now a lifeline for billions of people and businesses worldwide. But I’m more frustrated now with the current state of the web than ever before. We could be doing so much better.
COVID-19 underscores how urgently we need a new approach to organizing and sharing personal data. You only have to look at the limited scope and the widespread adoption challenges of the pandemic apps offered by various tech companies and governments.
Think of all the data about your life accumulated in the various applications you use – social gatherings, frequent contacts, recent travel, health, fitness, photos, and so on. Why is it that none of that information can be combined and used to help you, especially during a crisis?
It’s because you aren’t in control of your data. Most businesses, from big tech to consumer brands, have siphoned it for their own agendas. Our global reactions to COVID-19 should present us with an urgent impetus to rethink this arrangement.
For some years now, I, along with a growing number of dedicated engineers, have been working on a different kind of technology for the web. It’s called Solid. It’s an update to the web – a course-correction if you will – that provides you with a trusted place or places to store all your digital information about your life, at work and home, no matter what application you use that produces it. The data remains under your control, and you can easily choose who can access it, for what purpose, and for how long. With Solid, you can effectively decide how to share anything with anyone, no matter what app you or the recipient uses. It’s as if your apps could all talk to one another, but only under your supervision.
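Solid itself is a web specification (personal data stores, or “pods,” with a standardized authorization layer), but the control model described above — deciding who can access your data, for what purpose, and for how long, with the ability to revoke access later — can be sketched in miniature. Everything below is an illustrative toy model, not Solid’s actual API; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Grant:
    grantee: str       # who may read the data
    purpose: str       # what the access is for
    expires: datetime  # when the grant lapses on its own

@dataclass
class Pod:
    """Toy personal data store: the owner holds both the data and
    every access grant, so sharing is always explicit and revocable."""
    data: dict = field(default_factory=dict)
    grants: list = field(default_factory=list)

    def share(self, grantee, purpose, days):
        # Owner grants scoped, time-limited access.
        self.grants.append(
            Grant(grantee, purpose, datetime.now() + timedelta(days=days))
        )

    def revoke(self, grantee):
        # Owner withdraws access at any time.
        self.grants = [g for g in self.grants if g.grantee != grantee]

    def read(self, requester, key):
        # Data is released only against an unexpired grant.
        if any(g.grantee == requester and g.expires > datetime.now()
               for g in self.grants):
            return self.data.get(key)
        raise PermissionError(f"{requester} has no active grant")

pod = Pod(data={"temperature_log": [36.6, 36.8]})
pod.share("contact_tracing_app", purpose="outbreak response", days=30)
readings = pod.read("contact_tracing_app", "temperature_log")
pod.revoke("contact_tracing_app")  # crisis over: access ends immediately
```

The design choice worth noticing is that the app never holds a copy of the data by default; it holds a grant, which the owner can let expire or revoke, which is precisely the property the contact-tracing scenario below relies on.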
I think of all the possibilities this new relationship to our data could unlock, especially in the case of a pandemic.
Take virus infection detection and contact tracing apps: the pandemic hits and there is a call for people to share specific parts of their health data. These apps would be swift to develop and deploy, and more trusted by everyday citizens. Once the crisis passes, people would simply revoke permission for their data and the app would no longer have access to it.
There’s even more that could have been done to benefit the lives of people impacted by the crisis – simply by linking data between apps. For example:
What if you could safely share photos about your symptoms, your fitness log, the medications you’ve taken, and places you’ve been directly with your doctor? All under your control.
What if your whole family could automatically share location information and daily temperature readings with each other so you’d all feel assured when it was safe to visit your grandfather? And be sure no-one else would see it.
What if health providers could during an outbreak see a map of households flagged as immuno-compromised or at-risk, so they could organize regular medical check-ins? And once the crisis is over, their access to your data could be taken away, and privacy restored.
What if grocery delivery apps could prioritize homes based on whether elderly residents lived there? Without those homes or the people in them having their personal details known by the delivery service.
What if a suddenly unemployed person could, from one simple app, give every government agency access to their financial status and quickly receive a complete overview of all the services for which they’re eligible? Without being concerned that any agency could pry into their personal activity.
None of this is possible within the constructs of today’s web. But all of it and much more could be possible. I don’t believe we should accept the web as it currently is or be resigned to its shortcomings, just because we need it so much. It doesn’t have to be this way. We can make it better.
My goal has always been a web that empowers human beings, redistributes power to individuals, and reimagines distributed creativity, collaboration, and compassion.
Today, developers are creating exciting new applications and organizations are exploring new ways to innovate. The momentum for this new and vibrant web is already palpable, but we must not let the crisis distract us. We must be ready to hit the ground running once this crisis passes so we are better prepared to navigate the next one. To help make this a reality, I co-founded a company, called Inrupt, to support Solid’s evolution into a high-quality, reliable technology that can be used at scale by businesses, developers, and, eventually, by everyone.
Let’s free data from silos and put it to work for our personal benefit and the greater good. Let’s collaborate more effectively and innovate in ways that benefit humanity and revitalize economies. Let’s build these new systems with which people will work together more effectively. Let’s inspire businesses, governments, and developers to build powerful application platforms that work for us, not just for them.
Let’s focus on making the post-COVID-19 world much more effective than the pre-COVID-19 world. Our future depends on it.
In essence, agility at an enterprise level means moving strategy, structure, processes, people, and technology toward a new operating model by rebuilding an organization around hundreds of self-steering, high-performing teams supported by a stable backbone. On starting an agile transformation, many organizations emphasize and discuss tribes, squads, chapters, scrums, and DevOps pipelines. Our research shows, however, that the people dimension—culture especially—is the most difficult to get right. In fact, the challenges of culture change are more than twice as common as the average of the other top five challenges (Exhibit 1).
Shifting culture requires dedicated effort. Unfortunately, many organizations on this journey struggle to articulate their aspired agile culture and bring it to life. This article demystifies culture change in an agile world through four practical lessons drawn from real-life success stories from around the world.
Lesson 1: Define the from–tos
Each organization is unique. Accordingly, each needs its unique culture to power the new agile operating model. Organizations building an agile culture should base their approach on aspirational goals. They also need to understand their current culture, including the behavioral pain points that can be used as a starting point to articulate three to five specific mindset and behavior shifts that would make the biggest difference in delivering business results.
At New Zealand–based digital-services and telecommunications company Spark, one of the first steps the leadership team took in its agile transformation was to launch an effort to articulate the cultural from–tos. Spark boldly decided to go all in on agile across the entire organization in 2017—flipping the whole organization to an agile operating model in less than a year. From the beginning, Spark understood that the change needed to be a “hearts and minds” transformation if it was to successfully enable radical shifts to structure, processes, and technology.
Spark’s culture change started with its Sounding Board, a diverse group of 70 volunteers from across the organization. These were opinion leaders — the “water cooler” leaders and Spark’s “neural network” — not the usual suspects visible to management. The Sounding Board’s role was to create buy-in for, and comprehension of, the new model and to design enablers (behavioral shifts and new values) to help employees along the agile journey.
An early task for the Sounding Board was to identify the behavioral shifts teams would need to thrive in the new agile operating model. Members used their experiences, inspirational examples from other companies, and Spark’s work on culture and talent to define these shifts. And to help inform what changes were necessary, the Sounding Board sought to understand mindsets (those underlying thoughts, feelings, and beliefs that explain why people act the way they do) that were driving behaviors.
The from–to aspirations were then shared with different groups, including the top team, and distilled into four key themes. Each theme had to resonate with colleagues across the organization, be both practical and achievable, and be specific to the company (that is, not derived from general agile theory). The resulting articulation of from–to behaviors allowed Spark to understand and compare its existing cultural reality with the desired end state (Exhibit 2).
Finally, to set up its from–tos as more than words on paper, Spark made culture one of the agile transformation’s work streams, sponsored by a top team member and discussed weekly in transformation sessions. The work stream brought culture to life through action. The from–to changes were incorporated in all major design choices, events, and capability-building activities. The work stream aligned fully with other culture initiatives that would help to move the needle on cultural change, such as diversity and inclusion.
Melissa Anastasiou, the team member who led the company’s culture workstream, observed: “Like many organizations, the company’s experience has been that culture change is hard and does not happen overnight. It takes collective and consistent effort, as well as a genuine belief in and understanding of the ‘why’ at all levels of the organization. Setting a clear and purposeful vision for what great looks like — and ensuring that this vision is authentically bought in from bottom to top, that is, from shop floor to C-suite — put us in the best possible position to deliver the change to full business agile.”
Lesson 2: Make it personal
This lesson is about making the change personally meaningful to employees. To take change from the organizational to the personal frontier, leaders need to give their people the space and support to define what the agile mindset means to them. This will differ among senior leaders, middle managers, and frontline staff, and have different implications for each. Inviting colleagues to share personal experiences and struggles can build momentum and unlock transformational energy.
This was an approach adopted by Roche, a 122-year-old biotechnology company with 94,000 employees in more than 100 countries. In order to build an agile culture, Roche facilitated a deep, personal change process among senior leaders. More than 1,000 of these leaders were invited to learn a new, more agile approach to leadership through a four-day immersive program that introduced them to the mindsets and capabilities needed to lead an agile organization. The program, called Kinesis, focused on enabling leaders to shift from a limiting, reactive mindset to an enabling, creative one. It also started the journey of learning how to shift from a traditional organization designed for command, control, and value capture to an agile organization designed for innovation, collaboration, and value creation.
Throughout the program, leaders came to recognize the ways in which their individual mindsets, thoughts, and feelings manifested in the design architecture and culture of the organizations they led. This recognition highlights why change programs that start with personal transformation are more successful. Organizations are built and led by their leaders: the way they think, make decisions, and show up shapes every part of the organization. This dynamic is amplified in agile organizations, which have an unusually high degree of openness and transparency.
The Kinesis program focused on leading through example. Roche’s head of talent innovation (the primary architect of the initiative) heard dozens of stories of leaders coming back from Kinesis and showing up differently. Beyond its learning programs, Kinesis also helped make the change personal by catalyzing large-scale experimentation in organization and business models. Within six months of the senior leader programs, many participants had launched agile experiments with their own leadership teams, departments, and organizational units—engaging thousands of people in cocreating innovative ways to embed agility within the organization.
A core tenet of Kinesis was invitation, not expectation. Leaders were invited to apply lessons learned back to their own organizations. With the new mindset and the invitation, most participants did. Compared with the initial expectations of 5 to 10 percent of participants running a follow-up session with their teams, 95 percent chose to do so. Today, agility has been embraced and widely deployed within Roche in many forms and across many of its organizations, engaging tens of thousands of people in applying agile mindsets and ways of working.
Lesson 3: Culturally engineer the architecture
Even the best-designed culture programs can fail if the surrounding context does not support—or worse, hinders—new mindsets and behaviors. To sustain a new culture, the structures, processes, and technology must be redesigned to support behavioral expectations. To be successful, the desired culture change needs to be hardwired into all elements of the business-as-usual organization as well as the transformation.
Magyar Telekom of Hungary (a Deutsche Telekom subsidiary) invested to embed and ingrain agile mindsets and behaviors throughout the agile transformation it started in 2018. As with Spark and Roche, Magyar Telekom began with the foundational lesson of defining its from–to. The telco started with three core values that, as the transformation matured, eventually evolved into seven values and were translated into slogans for more effective communication:
Focus, becoming more focused by critically assessing the current tasks and saying no to things that are not worth the required effort
Ownership, encouraging ownership by nudging employees to think of their tasks as if being performed for their own company
Retrospection, emphasizing the need to review and assess, celebrating successes and learning from failures
To ensure formal mechanisms supported this agile mindset shift, Magyar Telekom used structural changes on an individual and organizational level, aligning the people, customer, and business processes as well as the physical and digital working environments to an agile culture.
Magyar Telekom’s people processes, for example, reflected four practical principles:
All messages employees receive from the company are consistent with its cultural values
The cultural values and themes of focus, ownership, and retrospection are embedded in all HR and people processes
The employer brand, recruitment process, and onboarding journey ensure every new employee understands the agile culture’s cornerstones
Criteria for career progression define and support agile mindsets and behavior shifts
Magyar Telekom’s business processes were also hardwired to support its culture values. One of several examples used to support the focus and retrospective themes was the quarterly business review (QBR), a common element of agile operating models for business planning and resource allocation. QBRs typically involve stakeholders from major areas of the organization to set priorities and manage organizational demand and dependencies.
To further emphasize focus, the telco committed to implementing and scaling the QBR in the whole organization, including nontribe areas such as customer care or field execution. This formal mechanism had strong cultural implications. First, it signaled that the organization was committed to its cultural theme of focus. Second, the company-wide QBR aligned the whole organization around clear priorities, helping employees focus only on activities that create value while explicitly recognizing and deprioritizing activities that do not. Third, the QBR cycle also included retrospectives to understand and learn from previous successes and failures in a formal, structured, and highly visible process.
Another powerful way to ingrain culture is to change the physical and digital environments. Floors and walls can, quite literally, create either collaboration or barriers between teams. Magyar Telekom altered its floor plans to create spaces for individual squads, as well as all squads in a tribe, to sit and work together. The new physical environment promoted collaboration and continuous interactions. Team-level tools were introduced—including spaces for squads’ ceremonies and writeable walls where teams could visualize priorities, track progress, and engage in real-time creative thinking. Similarly, the digital work environment was updated with agile tools such as Jira issue-tracking and Confluence collaboration software, enabling efficient handling of epics, features, and user stories. Within weeks, the Magyar Telekom workspaces turned from stereotypical offices into collaborative incubators of the new agile culture.
Lesson 4: Monitor and learn
Continuous learning and improvement is a core principle of agile working. It applies to agile culture as well. Successful agile transformations have shown the value of monitoring progress, evaluating behavioral change and its impact on performance, and running regular retrospectives to learn from successes and failures. However, measuring behavioral change has traditionally been a challenge.
ING, a well-known leader of agile transformations in banking, innovated here, using multiple approaches to track the impact of its agile transformation on productivity, several dimensions of performance (such as time to market and delivery volume), and employee engagement. As part of these tracking initiatives, ING also tracked the progress of culture change and its impact on the overall transformation. The bank even teamed with INSEAD’s Maria Guadalupe, a professor of economics, to study and improve the quality of tracking efforts and the resulting insights.
ING’s first tracking initiative was a 40-question survey with 1,000 respondents that ran five times between 2015 and 2017. The survey questions, including those related to culture, were linked to the bank’s objectives and key results. This correlation between the transformation’s soft and hard drivers and its performance metrics allowed ING to see which cultural factors led to results and were critical to the transformation’s success. According to Michel Zuidgeest, ING’s lead of Global Change Execution, the product-owner roles and their corresponding behaviors, for example, turned out to be one of the most important factors affecting outcomes. Skill sets for product owners, chapter leads, and agile coaches—as well as the way they work together—were not clearly defined at the start of the transformation, and individuals in these roles had to grow the right mindsets and behaviors before team performance improved.
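The core of this kind of tracking is simple: treat each survey wave as a data point, then correlate each cultural factor’s score with a hard performance metric to see which factors actually move results. The sketch below illustrates the idea with entirely hypothetical factor names and numbers (the survey scores, the releases-per-quarter metric, and the factor labels are all invented for illustration; ING’s actual instrument and metrics are not public in this detail):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical survey waves: average score per cultural factor (1-5 scale),
# one value per wave, alongside a hard outcome metric for the same waves.
survey = {
    "product_owner_clarity": [2.1, 2.6, 3.2, 3.8, 4.1],
    "cross_team_collaboration": [3.0, 3.1, 3.0, 3.2, 3.1],
}
okr_outcome = [4, 6, 9, 12, 14]  # hypothetical releases per quarter

# Rank cultural factors by strength of association with the outcome.
ranked = sorted(
    ((factor, pearson(scores, okr_outcome)) for factor, scores in survey.items()),
    key=lambda kv: abs(kv[1]),
    reverse=True,
)
for factor, r in ranked:
    print(f"{factor}: r = {r:+.2f}")
```

With data like this, a factor that climbs in step with the outcome (here, the invented "product_owner_clarity") surfaces at the top of the ranking, which mirrors ING’s finding that product-owner behaviors were among the most important drivers. Correlation across a handful of waves is only a signal, not proof of causation, which is why ING paired it with interviews and qualitative tracking.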
ING’s second tracking initiative, started in 2019, combined a 300-person “working floor” survey with senior leadership interviews across 15 countries. Once again, metrics on agile included culture-related questions on whether people on the floor felt more responsibility, whether they could collaborate better, and whether they were more able to learn from others in the company. In parallel, ING used qualitative methods to track the shift toward an agile culture. Updated performance frameworks and dialogues, for example, tracked whether employees were adopting desired behaviors while a continuous listening framework gave an ongoing pulse check of how people were doing.
ING used the data from its tracking initiatives to produce practical learning. Survey and interview results were used in QBRs, leadership dialogues, and improvement cycles. Outcomes were shared with tribes, the central works council, advisory groups, and others, and used in performance dialogues. ING also shared its findings with universities, sharpening both the company’s tracking efforts and university research. The value of tracking became very clear. ING managed to measure culture progress, establish the correlation between culture and performance, and use culture data to bring its agile operating model to life.
ING’s tracking initiatives produced insights on agile maturity, performance, and culture. Payam Djavdan, ING’s global head for One Agile Way of Working, explains that as the agile culture metrics improved—specifically the sense of belonging, motivation, purpose, and empowerment—employee engagement consistently increased. Similarly, several dimensions of team performance improved as the culture of credibility and clarity took hold while greater autonomy, a core principle of agile culture, allowed teams to take on their own challenges. In parallel, performance dialogues revealed that trust in tribe leads was a defining factor in employees’ engagement and their ability to share the tribe’s purpose.
Culture counts in all organizational transformations; it becomes critical in agile transformations. Organizations can do agile by changing their structure, processes, and technology. But they cannot be agile without changing the way people work and interact daily. Enabling a successful, agile transformation requires a fundamental shift in culture. Lessons from organizations that have successfully made this shift can give others a head start on their own transformation journeys.
Most major organizations today have embarked on transformation programs in response to changes in customer, competitive, and regulatory landscapes. Whether the transformations are labeled agile, digital, or DevOps, their fundamental premise is to build value by establishing short, iterative, and continuous feedback loops between product and customers that dramatically improve both the product and its time to market.
Technology has a crucial role in enabling this faster and more flexible approach. In our experience, however, technology does not get sufficient attention on the executive agenda. This is a serious flaw given the importance of technology in driving successful digital transformations.
There can be many reasons for this, but one of the main culprits is that technology is often viewed as a “specialist thing,” and IT leaders often have a hard time communicating about technology in a way that engages non-technologists. This reality often leads to “antipatterns”: ineffective solutions to recurring problems. Antipatterns have serious and sometimes fatal ramifications for technology transformations.
In this article, we synthesize ten of the most frequent technology antipatterns that we have observed in transformations across more than 50 major organizations. How many do you recognize in your organization?
1. Force-fitting technology solutions: Are you choosing technology out of context?
Watch out when technology decisions do not attract business scrutiny beyond cost and a cursory discussion of “scalability/strategic alignment.” For instance, we hear in many organizations about a “microservice-first” approach. While microservices are a critical component of many IT modernization journeys, they don’t fit the bill in all circumstances.
At one major corporation, the architects of the transformation suggested an approach that built microservices for a client-side application. But microservices is fundamentally a server-side architecture. The architects were simply responding to an organizational push to become technologically modern. Because installing and managing a client-side application built from multiple independent components that must work together seamlessly is much more cumbersome, the approach would have resulted in significant cost and complexity with no additional benefit.
Leaders need to raise their hands to ask “silly” questions to fully understand the rationale and purported benefit of the recommended technology choices. In the preceding example, an executive’s simple series of questions, starting with, “Explain microservices to me as if I were an eight-year-old,” could have saved the company millions of dollars!
2. Adopting cutting-edge tech that’s not fully mature: Are you adopting new technology that seems promising but doesn’t have a proven track record?
With stability and scalability as two core elements of any IT organization’s focus, very careful due diligence and decision making are needed to avoid adopting technologies that haven’t fully matured. Leaders should be cautious about any technology recommendation that is pitched primarily because it is in vogue or promises to attract new talent.
A leading bank launched a major redesign for its customer-facing application, using the latest web front-end framework as the software solution stack. It was touted as “future-proof” technology that would attract new talent. The project suffered serious setbacks and cost overruns because staff didn’t have the right capabilities to support it, requiring time and money to upskill them. It was finally delivered after two years—18 months behind schedule.
Another major bank decided to rewrite its core accounting systems, which were more than 20 years old. While the systems were clean and extensible, the bank wanted to use the latest data-store technology. The project was shelved after an investment of more than €100 million because the new technology was not stable and the architecture created to support it had several fatal flaws.
Choose simple, proven technologies with which your people are familiar.
3. Building out your own cloud infrastructure without sufficient capabilities: Have you let security and regulation block your adoption of public cloud?
Companies are looking to take advantage of new infrastructure platforms and technologies such as container orchestrators, serverless platforms, and analytics solutions. These are complex pieces of technology that major cloud providers are constantly evolving, both adding capabilities and reducing prices, to win the market.
If providing cloud-based infrastructure is not your core business, it will be impossible for you to match the cloud providers on the talent needed to build and run these platforms in a scalable, efficient, and secure way. Moreover, your (digital) competitors will be using these providers to enable them to operate at a completely different price point. At one major financial-services company, we identified more than four different private-cloud initiatives—converged infrastructure, OpenShift, Mesos/Marathon, and OpenStack—each struggling to achieve scale and competing for talent. After many months, the company halted those programs and rightly focused on public-cloud solutions. This is just one of more than 50 examples of private-cloud initiatives that have failed to deliver, often after the investment of millions of euros.
Focus your IT-for-IT investments in public cloud. Start by adopting only one of the major public-cloud providers for running your application workloads. Do not pursue a multicloud approach at first, as all major platforms differ significantly in setup and usage and require inordinate effort and investment for a similarly customized setup. For IT tooling, leverage SaaS solutions, such as workflow systems, source-code management, continuous integration, and collaboration platforms, as much as possible. For the workloads that you need to keep on the premises, use infrastructure patterns that you know and can operate safely and securely at scale.
4. Initiating big-system-replacement programs: Are you focusing on system replacement rather than improving existing systems in a way that is faster and more cost-effective?
System-replacement projects are fundamentally complex, cost intensive, and inherently risky. They also distract the organization from building customer-centric capabilities and features in the short-to-medium term. Consequently, big-system-replacement projects should be avoided unless all other paths have proven not to be viable.
A major bank considered a core-banking-system replacement primarily because it was running an old Unix-based system on legacy hardware. But very quickly into the replacement project, the bank realized that its existing system was readily portable to new platforms, including public cloud. Additionally, updating the original monolithic code would entail substantially less cost, effort, and risk than replacing the entire system. Hence, after initial exploration, the bank chose to achieve the target business outcome by gradually modernizing the existing system at a fraction of the estimated big-system-replacement cost.
Before embarking on big-system replacement, ask the following:
Can you incrementally improve the old system instead of replacing it?
If you need to build a new system, will it deliver incremental value to your customers as it scales up over time?
Can you gradually phase out the old system?
5. Focusing on architecture and tooling improvements without enhancing process and delivery discipline: Did you re-architect and implement new tooling but forget to adapt the delivery processes?
One of the biggest sources of impact in technology transformations comes from simplifying the path to production—the steps from defining requirements to releasing software—and executing it with disciplined repetition across teams. This requires a lot of organizational and executive patience, as the affected teams—app development, operations, security, support—can take weeks or months to perfect this coordinated dance. Tools and architecture changes can help, but to be effective, they need to be paired with changes to engineering practices, processes, and behaviors. Launching programs for large architecture and tooling changes often requires minimal effort, catches the fancy of the executive team and board, and signals that things are moving. In our experience, however, without changes to engineering practices, processes, and behaviors, such programs have minimal or no impact.
A major bank realized, after several years of significant investment in development, release, and collaboration tools, that it had no improvement in time to market and a low adoption rate. After months of futile top-down incentives and nudges for tools adoption, the bank refocused on how the tools enabled a new set of engineering practices and collaboration between teams. It showed how the new tools could simplify the path to production. At last check, more than 40 percent of the teams had been onboarded onto the new way of working, and there was dramatic improvement in both time to market and the practical adoption of tools. In a similar example, a major European bank dramatically increased delivery speed by focusing on a common and clear understanding of all components of the delivery process, establishing a strict cadence on how it would be executed, and simplifying some of the documents and approvals required before releasing software to production.
To improve speed of delivery, start with baselining the path to production to identify strengths and gaps. Follow this with simplification of process and delivery artifacts and addressing relevant gaps through tools and architectural changes, as required. Once the new process has been instituted, ensure that the teams adopt the cadence of disciplined repetition of this process.
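Baselining the path to production can be as simple as timestamping each stage a change passes through and measuring the gaps between them; the longest gap is where simplification pays off first. The sketch below illustrates this with invented stage names and dates (in practice the events would come from your ticketing and CI systems, and the stages would match your own delivery process):

```python
from datetime import datetime

# Hypothetical stage timestamps for one change moving to production.
stages = [
    ("requirements_defined", "2024-03-01T09:00"),
    ("code_complete",        "2024-03-04T17:00"),
    ("security_review_done", "2024-03-12T11:00"),
    ("released",             "2024-03-13T10:00"),
]

def stage_durations(events):
    """Hours spent between consecutive stages of the path to production."""
    parsed = [(name, datetime.fromisoformat(ts)) for name, ts in events]
    return [
        (nxt[0], (nxt[1] - cur[1]).total_seconds() / 3600)
        for cur, nxt in zip(parsed, parsed[1:])
    ]

for stage, hours in stage_durations(stages):
    print(f"{stage:22s} {hours:7.1f} h")
```

Run across many changes, this kind of baseline makes the bottleneck visible and measurable (in the invented data above, the security-review stage dominates), so process simplification and tooling investment can be aimed at the stage that actually gates delivery.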
6. Focusing on outputs rather than business outcomes: Are your technologists focused on output instead of business/technology outcome?
Are your technology targets too . . . tech-y? Does the technology organization clearly articulate and track customer-focused targets? If the answer to any of these questions is even a mild pause to reflect, you need to dig deeper.
Well-meaning technologists, even in customer-centric organizations, often default to focusing on tech output. It is easily measurable, all-consuming, and very “in control.” Examples include the number of screens delivered, functionalities deployed, and defects tackled. The tech-output metrics might be great, but unless you’re measuring the direct impact of technology on customers, they are not particularly relevant. At one major financial organization, the app-development group focused on 100 percent test automation as a key result and celebrated success—and closure—when they achieved it. However, testing took as many days as it had before, and there was no improvement in the product’s time to customer.
Business and technology leaders should define joint accountability for desired outcomes (aka “two in a box”):
Business outcomes: usage of your products (number of customers, daily usage) and customer satisfaction (net promoter scores, number of support requests)
Technology outcomes: functional availability/security of your product and efficiency of development (release frequency, toil)
Have the technologists and business articulate specific and measurable business and technical outcomes instead of technology outputs. The magic of product ownership and cross-functional teams lies in understanding the trade-offs between these different business and technology outcomes and making conscious choices about what to prioritize to balance short-term objectives with long-term product health.
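The "two in a box" idea can be made concrete as a shared scorecard that pairs output metrics with the business and technology outcomes listed above, plus a check that refuses to call a quarter a success unless an outcome actually moved. The sketch below is a minimal illustration with invented metric names and numbers, replaying the test-automation trap described earlier:

```python
from dataclasses import dataclass

@dataclass
class ProductScorecard:
    """Hypothetical joint scorecard: one output metric plus shared outcomes."""
    test_automation_pct: float        # output: easy to measure, not customer-facing
    weekly_active_users: int          # business outcome: product usage
    nps: int                          # business outcome: customer satisfaction
    releases_per_month: float         # technology outcome: release frequency
    test_cycle_days: float            # technology outcome: development efficiency

def outcome_moved(before: ProductScorecard, after: ProductScorecard) -> bool:
    """True only if at least one shared outcome improved, regardless of outputs."""
    return (
        after.weekly_active_users > before.weekly_active_users
        or after.nps > before.nps
        or after.releases_per_month > before.releases_per_month
        or after.test_cycle_days < before.test_cycle_days
    )

# The trap from the text: test automation reached 100 percent (output),
# but testing took just as long and customers saw no change (outcomes flat).
before = ProductScorecard(40.0, 12_000, 31, 1.0, 5.0)
after = ProductScorecard(100.0, 12_000, 31, 1.0, 5.0)
print("Celebrate?", outcome_moved(before, after))
```

The design point is that the output metric appears on the scorecard but carries no weight in the success check; business and technology leaders own the outcome fields jointly, which forces the trade-off conversation the text describes.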
7. Managing IT purely for cost: Are you sacrificing significant value by overindexing on price and cost?
Managing IT purely as a cost center is an outdated mindset that can have drastic repercussions, ranging from the inability to attract the right talent to discouraging use of critical and expensive technologies and platforms. For example, hiring or sourcing primarily on price typically results in subpar talent.
One financial-services company paid its vendor cheap day rates, and the vendor “recouped” the discounts by staffing novices and inflating estimates on what it would take to deliver certain features. The result was very low-quality code and long cycle times.
Instead of managing internal IT as a cost center, consider the following to think through how to set and align efficiency incentives:
Bring competence into the mix when talking about the cost of talent. Some companies have defined a unified competence model ranging from “novice” to “expert” for both internal engineers and vendors. Vendor pricing is discussed in terms of “novice equivalents,” recognizing that the productivity difference between novice and expert engineers can be a factor of eight to ten.
Create a true total-cost-of-ownership (TCO) view of your products to make sensible trade-off decisions on whether to invest in new features, automation, or infrastructure optimization.
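The novice-equivalent idea above is worth working through as arithmetic: if an expert is roughly eight to ten times as productive as a novice, a vendor with a much higher day rate can still be the cheaper way to deliver a unit of work. The sketch below uses invented rates and the lower bound of that productivity range:

```python
# Hypothetical 'novice equivalent' comparison. Rates are invented;
# the 8x factor is the lower bound of the 8-10x range cited in the text.
PRODUCTIVITY = {"novice": 1.0, "expert": 8.0}

def cost_per_novice_equivalent_day(day_rate: float, level: str) -> float:
    """Day rate divided by output expressed in novice-equivalent days."""
    return day_rate / PRODUCTIVITY[level]

cheap_vendor = cost_per_novice_equivalent_day(300.0, "novice")    # 300.0
expert_vendor = cost_per_novice_equivalent_day(1200.0, "expert")  # 150.0
print(f"novice-staffed vendor: {cheap_vendor:.0f} per unit of work")
print(f"expert-staffed vendor: {expert_vendor:.0f} per unit of work")
```

In this invented example the vendor charging four times the day rate delivers each unit of work at half the cost, which is exactly the trade-off the financial-services example illustrates in reverse: hiring on price alone bought cheap days, not cheap delivery.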
8. Investing in developing new platforms without involving the business: Is your primary focus platform development instead of platform adoption by the business?
IT organizations rightly invest significant effort and resources in building and deploying robust platforms. However, quite often business is not involved in platform design or development, leading to new platforms with minimal relevance for the business side and, hence, poor adoption.
A major US bank made a multimillion-dollar investment in a data lake, ostensibly to pivot to a data-first culture. As a data-organization project within IT, it was conceived, designed, and developed primarily without business engagement. The data lake was delivered slightly behind schedule. More than a year later, the bank was still struggling to find use cases the data lake could support. In addition to the unused platform, the data organization suffered significant staff turnover due to poor morale. The bank has now launched multiple programs to update the data lake to meet business needs.
Start and end the conversation on all technology platforms with the business problem that they will address. Focus on that relentlessly, and ensure that all the right stakeholders across business and technology have joint accountability for the platform’s delivery of customer value. When building your platform, focus on building use cases, and instead of spending time up front on putting enablers in place, accept that you might want to refactor pieces of the platform when you onboard more use cases.
9. Outsourcing your core value streams: Are vendors doing the work that creates the most value for your business?
If your core technology knowledge or intellectual property (IP) is outsourced (either through offshoring agreements or vendor-support contracts), you risk limiting the impact of any transformation and depending to an unhealthy degree on a third party. Because outsourcing has proven to be an effective tool to reduce cost for commoditized activities, some leaders have expanded its scope in response to cost pressures and outsourced entire subgroups or critical platforms. Such dependence has severely restricted organizations in making bold strategy and partner choices, mostly because they have little or no control over their own IP.
For instance, at a major insurer, 90 percent of the technology organization was outsourced across different vendors. During its transformation, it realized that a major reason for underperformance was that the knowledge of its three core systems was held by three individuals at different vendors. Not only that, but they had never spoken with each other.
Clearly demarcate the boundaries for outsourcing of business-critical technologies or activities. If you are currently heavily outsourced for critical activities, align with your peer group and the board on a plan to progressively bring them in-house.
10. Building up an army of managers rather than developing an engineering culture: Do you value your managers more than your engineers?
Career growth in most organizations usually entails people management. Gradually, talented employees who once showed great technology promise spend more time on managing people and administrative activities than on practicing the craft of engineering. They become full-time managers. Over time, they lose the ability to engage in deep technical conversations with their teams, to role-model technical problem solving and innovation, and—most damaging—to effectively manage team performance based on detailed technical merit. Consequently, organizations have a large IT group that consistently underperforms and has little technical guidance or accountability.
One major financial organization launched a program to overhaul its performance-management process and discovered that, on average, managers spent less than an hour a week in technical discussions with their teams. Does that sound familiar?
Give technology managers specific responsibilities for tech delivery. Encourage a culture of tech expertise for technology managers through monetary and nonmonetary incentives. To build an engineering culture, define granular performance-review criteria for technologists that are focused on both delivery and expertise.
Identifying and addressing these challenges require a concerted effort, with focus and ownership from both business and technology leadership. As more and more organizations launch and mature their digital transformations, executives must constantly probe for any evidence of the above antipatterns and urgently move to address them. Only then can the technology transformations evolve sufficiently to support the most vaunted three-part outcome of the digital transformation: faster customer-centric delivery, business growth, and happier employees.
About the author(s)
Sven Blumberg is a senior partner in McKinsey’s Istanbul office, Thomas Delaet is a partner in the Brussels office, and Kartikeya Swami is an alumnus of the New York office.