healthcarereimagined

Envisioning healthcare for the 21st century


THE MYTH OF CYBERWAR AND THE REALITIES OF SUBVERSION – MWI

Posted by timmreardon on 01/24/2022
Posted in: Uncategorized.

Lennart Maschmeyer | 01.24.22

The prospect of cyberwarfare continues to haunt defense planners, policymakers, and the public. Earlier visions of cyberwar, in which opponents hurled cyber weapons and logic bombs at each other at the speed of light, have mostly subsided. Yet fears of a strategic cyberattack causing a “cyber Pearl Harbor” remain acute. And even if cyberattacks remain below the intensity of armed conflict, many argue that their unrivaled effectiveness expands the value of “hybrid warfare,” opening a new space for strategic competition. Cyber operations, in this view, will allow states to shift the balance of power and attain strategic gains in ways that were previously impossible without going to war. In other words, by staying in the gray zone states can get more for less.

If true, this development would herald no less than a revolution in strategic competition. The reality is more prosaic. In a recent article in International Security, I show that current expectations about the strategic potential of cyber operations focus on the promise of technology, while neglecting key operational challenges. As has often been the case with new and ostensibly revolutionary military technologies, the way actors use them at the operational level determines their strategic utility. And a closer look at the operational challenges in cyber conflict—including Russia’s five major disruptive cyber operations in the Russo-Ukrainian conflict—suggests that its strategic value will be modest.

Cyber operations are not novel instruments of power, but instruments of subversion. Like all such instruments, cyber operations hold great strategic promise but falter all too often in practice. The reason is an operational trilemma between speed, intensity, and control: cyber operations cannot have all three properties at once. In theory, cyber operations offer rapid and stealthy options to sow mass disruption capable of shifting the balance of power. In practice, however, they tend to be too slow, weak, and volatile to deliver on that promise.

Subversion and its Promise

Subversion is a common but understudied mechanism of power familiar mostly to intelligence scholars and practitioners in the context of nonmilitary covert operations. The distinctive characteristic of subversion is its reliance on the secret exploitation of vulnerabilities in adversary systems. Exploitation involves identifying flaws in a system, and then using these flaws to infiltrate the system to produce unexpected outcomes for the victim.

Traditional subversion uses spies to infiltrate organizations or groups and manipulate them. For example, a spy could attain employment at an industrial facility under a false identity, exploiting insufficient background checks. The spy could then gain access to sensitive machinery, before sabotaging it by exploiting weaknesses in security protocols. Since humans are fallible, any human-made system of rules and practices is vulnerable in principle.

Subversion can produce a wide range of effects: it can influence policy and public opinion, sabotage infrastructure, disrupt the economy, and foment unrest—it can even overthrow governments. As a result, subversion is a nearly irresistible option: it is cheaper and lower risk than warfare, yet still capable of significantly weakening adversaries.

Subversion’s Pitfalls: An Operational Trilemma

But the same characteristics that enable this strategic promise also often prevent its fulfillment. Subversion promises low risks and low costs because of its secrecy and its exploitation of adversary systems. These operational characteristics are not a given, however, but require significant efforts to achieve and maintain. Secrecy requires stealth and adaptation. Exploitation requires reconnaissance of systems, identification of vulnerabilities, and development of means of manipulation—all under the constraints of secrecy. These challenges limit operational speed, intensity of effects, and control. Moreover, increasing one variable tends to create corresponding losses across the remaining ones.

First, speed is constrained because reconnaissance, identification of vulnerabilities, and development of exploitation techniques all take time. Since an increase in speed means less time to develop and refine exploitation techniques, it correspondingly tends to reduce the intensity of effects and the degree of control over an operation.

The second variable in the trilemma, intensity of effects, is constrained by adversary systems and the need for secrecy. The properties of the target system determine the maximum intensity of effects—for example, if economic disruption is the aim, the target system must in some way affect the relevant economic processes. Even if the target system is capable of such an effect, however, the process of manipulation must stay hidden until the effect is produced. Otherwise, the victim can neutralize it—typically, by arresting the spy involved.

Finally, subversive actors never fully control a target system, and usually have only incomplete knowledge about its design and functioning. Because of this limited control, manipulation may fail to produce the intended effect or lead to unintended consequences. This trilemma means that subversion is typically too slow, too weak, and too volatile to provide strategic value.

The Subversive Nature of Cyber Operations

Cyber operations share this operational trilemma. The core mechanism of cyber operations is hacking—exploiting vulnerabilities in computer systems to make them behave in ways not intended by their designers, owners, and users. These systems are of a different kind than the social systems targeted by traditional subversion, but the mechanism of exploitation involved follows the same functional logic: identifying flaws in a system and then using them to manipulate it.

Hacking targets two types of vulnerabilities. First, it can target flaws in the design of the technology itself, such as software code, to make systems behave in ways neither their designers nor users intended or expected. Usually, this means granting access and control to the hacker. But it can also exploit flaws in hardware design.
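To illustrate the first type of vulnerability in miniature, consider a hypothetical, deliberately flawed lookup routine (all names and data here are invented for illustration). The design flaw is that untrusted input is spliced directly into a database query, so a crafted input changes the query's meaning and the system grants access its designer never intended:

```python
import sqlite3

# A toy in-memory database standing in for a "target system."
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")
db.execute("INSERT INTO users VALUES ('bob', 'hunter2')")

def get_secret(username):
    # Design flaw: the input is concatenated into the SQL text instead of
    # being passed as a parameter, so the input can rewrite the query logic.
    query = f"SELECT secret FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

# Intended use returns one user's row.
normal = get_secret("alice")

# A crafted input turns the WHERE clause into a tautology and dumps every
# row -- the system behaves in a way neither designer nor user intended.
subverted = get_secret("' OR '1'='1")
```

The fix, in this sketch, is the same in spirit as patching any exploited flaw: pass the input as a bound parameter (`db.execute("SELECT secret FROM users WHERE name = ?", (username,))`) so it can no longer alter the query's structure.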

The second type of vulnerability targets users and security practices. Phishing emails offer a classic example, leveraging weaknesses in human psychology to trick users into installing malware or revealing access credentials. Regardless of the vulnerability exploited, cyber operations then use the targeted system to inflict damage upon an adversary. As in the case of traditional subversion, hackers turn these systems into instruments of the sponsor’s interests. In a second parallel, hackers also proceed stealthily, establishing access to, and assuming control over, targets without alerting the victim to their presence.

Hacking can achieve similarly diverse effects as traditional subversion, ranging from influencing public opinion to disrupting the economy to the sabotage of critical infrastructure. In modern societies a growing portion of social, economic, and physical processes are computerized. This computerization produces vast efficiency gains, but it also creates new liabilities. Current expectations about the strategic potential of cyber operations are correct in identifying this promise.

The Subversive Trilemma and the Strategic Limitations of Cyber Operations

Yet the exploitation required to fulfill this promise involves the same operational challenges as in the case of traditional subversion, and therefore produces the same trilemma. As a result, in practice cyber operations offer similarly limited strategic value.

Contrary to prevailing expectations, cyber operations face key constraints when it comes to speed. Hacking requires reconnaissance, identifying suitable vulnerabilities, and developing the means to exploit them, such as computer viruses. All of this takes time. If operational speed is required, there is less time for reconnaissance and development, which means that the tools and techniques deployed are less likely to achieve large effects and significant control over the target system. And hacking, like traditional subversion, also requires stealth. Upon discovery, victims can delete malware and patch vulnerabilities, so hackers must proceed with caution—constraining the intensity of effects that can be produced.

Conversely, increasing the intensity of an operation tends to slow down speed and decrease control. The greater the desired scale of impact, the more reconnaissance and development time will be required to achieve a corresponding degree of control over a target system capable of producing the desired effect. The more capable the system, the more likely it is to be well protected, raising the risk of discovery. With the increase in scale, the likelihood that something goes wrong also tends to increase—unless one invests even more time in reconnaissance and development.

Finally, as in the case of traditional subversion, control in cyber is also limited. Access to target systems usually remains incomplete, and some parts of these systems remain unfamiliar. Even those parts that hackers have access to may behave differently than expected in response to manipulation. The same fallibility that produces logical flaws that enable exploitation may also apply to the hackers themselves. For example, in the 2016 sabotage operation against Ukraine’s power grid, the infamous Sandworm hacking group had developed a program that was capable of physically damaging power circuits by overloading them. Yet the hackers had missed something: the industrial control systems they targeted reversed IP addresses. As a result, the malicious commands went nowhere, the capability failed to produce any effects, and the victims neutralized the outage in little more than an hour.

In sum, the trilemma predicts that an increase in one of speed, intensity, or control will tend to produce a decrease in the other two. And increasing two variables at once tends to produce corresponding “double losses” in the remaining variable. For example, high-speed and high-intensity operations will entail an extremely high risk of losing control.

The Strategic Value of Cyber Operations: Expectations versus Evidence

This subversive trilemma defangs cyber operations in most circumstances. Contrary to expectations, cyber operations cannot be fast, intense, and anonymous—or at least not all at once. In practice, cyber operations are usually too slow, too weak, or too volatile to contribute to strategic goals.

My research into the use of cyber operations in the Russo-Ukrainian conflict—a paradigmatic example of cyber-enabled gray-zone conflict—confirms these conclusions. In contrast to expectations about the integral role of cyber operations in hybrid warfare, cyber operations have been mostly irrelevant to the military dimension of the conflict. And Russia’s five major disruptive cyber operations against Ukraine failed to produce strategic value—in large part because of the operational constraints laid out above. Even the one operation that produced strategically significant effects, the 2017 NotPetya operation that disrupted businesses across much of the world, ultimately supports this theory: the reason for its wide spread was a loss of control. The hackers had no way to control the malware’s spread, and thus no control over the scale of its disruption—which, based on forensic evidence, spread far wider than intended. The operation had a measurable strategic impact since it reduced Ukraine’s GDP, but its uncontrolled spread also produced additional costs as several Western countries levied sanctions against Russia in response, reducing the attack’s net strategic benefit.

This last point highlights an important distinction between strategic impact and value. Cyber operations can produce significant impacts by spreading widely, but their uncontrolled spread limits their strategic value. And because of the trilemma, the greater the scale of effects, the greater the risk of losing control tends to become.

In most circumstances, then, the subversive trilemma significantly limits the value of cyber operations. Their track record in Ukraine confirms this assessment. Of course, actors may occasionally get lucky and manage to achieve strategic goals despite taking exceptional risks. Yet such rare scenarios should not dominate threat assessments and strategy development. In theory, it is possible to juggle three balls while sprinting one hundred meters at competitive pace without dropping a single ball. In practice, few—if any—will be able to achieve this feat.

Article link: https://mwi.usma.edu/the-myth-of-cyberwar-and-the-realities-of-subversion/

Lennart Maschmeyer is a senior researcher at the Center for Security Studies at ETH Zurich. You can follow him on Twitter @LenMaschmeyer.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

How a Russian cyberwar in Ukraine could ripple out globally – MIT Tech Review

Posted by timmreardon on 01/24/2022
Posted in: Uncategorized.

Soldiers and tanks may care about national borders. Cyber doesn’t.

By Patrick Howell O’Neill

January 21, 2022

Russia has sent more than 100,000 soldiers to the nation’s border with Ukraine, threatening a war unlike anything Europe has seen in decades. Though there hasn’t been any shooting yet, cyber operations are already underway. 

Last week, hackers defaced dozens of government websites in Ukraine, a technically simple but attention-grabbing act that generated global headlines. More quietly, they also placed destructive malware inside Ukrainian government agencies, an operation first discovered by researchers at Microsoft. It’s not clear yet who is responsible, but Russia is the leading suspect.

But while Ukraine continues to feel the brunt of Russia’s attacks, government and cybersecurity experts are worried that these hacking offensives could spill out globally, threatening Europe, the United States, and beyond. 

On January 18, the US Cybersecurity and Infrastructure Security Agency (CISA) warned critical infrastructure operators to take “urgent, near-term steps” against cyber threats, citing the recent attacks against Ukraine as a reason to be on alert for possible threats to US assets. The agency also pointed to two cyberattacks from 2017, NotPetya and WannaCry, which both spiraled out of control from their initial targets, spread rapidly around the internet, and impacted the entire world at a cost of billions of dollars. The parallels are clear: NotPetya was a Russian cyberattack targeting Ukraine during a time of high tensions.

“Aggressive cyber operations are tools that can be used before bullets and missiles fly,” says John Hultquist, head of intelligence for the cybersecurity firm Mandiant. “For that exact reason, it’s a tool that can be used against the United States and allies as the situation further deteriorates. Especially if the US and its allies take a more aggressive stance against Russia.”

That looks increasingly possible. President Joe Biden said during a press conference January 19 that the US could respond to future Russian cyberattacks against Ukraine with its own cyber capabilities, further raising the specter of conflict spreading. 

“My guess is he will move in,” Biden said when asked if he thought Russia’s President Vladimir Putin would invade Ukraine.

Unintentional consequences?

The knock-on effects for the rest of the world might not be limited to intentional reprisals by Russian operatives. Unlike old-fashioned war, cyberwar is not confined by borders and can more easily spiral out of control.

Ukraine has been on the receiving end of aggressive Russian cyber operations for the last decade and has suffered invasion and military intervention from Moscow since 2014. In 2015 and 2016, Russian hackers attacked Ukraine’s power grid and turned out the lights in the capital city of Kyiv—unparalleled acts that haven’t been carried out anywhere else before or since.

The 2017 NotPetya cyberattack, once again ordered by Moscow, was directed initially at Ukrainian private companies before it spilled over and destroyed systems around the world. 

NotPetya masqueraded as ransomware, but in fact it was a purely destructive and highly viral piece of code. The destructive malware seen in Ukraine last week, now known as WhisperGate, also pretended to be ransomware while aiming to destroy key data that renders machines inoperable. Experts say WhisperGate is “reminiscent” of NotPetya, down to the technical processes that achieve destruction, but that there are notable differences. For one, WhisperGate is less sophisticated and is not designed to spread rapidly in the same way. Russia has denied involvement, and no definitive link points to Moscow.

NotPetya incapacitated shipping ports and left several giant multinational corporations and government agencies unable to function. Almost anyone who did business with Ukraine was affected because the Russians secretly poisoned software used by everyone who pays taxes or does business in the country. 

The White House said the attack caused more than $10 billion in global damage and deemed it “the most destructive and costly cyberattack in history.”

Since 2017, there has been ongoing debate about whether the international victims were merely unintentional collateral damage or whether the attack targeted companies doing business with Russia’s enemies. What is clear is that it can happen again. 

Accident or not, Hultquist anticipates that we will see cyber operations from Russia’s military intelligence agency GRU, the organization behind many of the most aggressive hacks of all time, both inside and outside Ukraine. The GRU’s most notorious hacking group, dubbed Sandworm by experts, is responsible for a long list of greatest hits including the 2015 Ukrainian power grid hack, the 2017 NotPetya hacks, interference in US and French elections, and the Olympics opening ceremony hack in the wake of a Russian doping controversy that left the country excluded from the games. 

Hultquist is also looking out for another group, known to experts as Berserk Bear, that originates from the Russian intelligence agency FSB. In 2020, US officials warned of the threat the group poses to government networks. The German government said the same group had achieved “longstanding compromises” at companies as they targeted energy, water, and power sectors.

“These guys have been going after this critical infrastructure for a long, a long time now, almost a decade,” says Hultquist. “Even though we’ve caught them on many occasions, it’s reasonable to assume that they still have access in certain areas.”

A sophisticated toolbox

There is serious debate about the calculus inside Russia and what kind of aggression Moscow would want to undertake outside of Ukraine. 

“I think it’s pretty likely that the Russians will not target our own systems, our own critical infrastructure,” said Dmitri Alperovitch, a longtime expert on Russian cyber activity and founder of the Silverado Policy Accelerator in Washington. “The last thing they’ll want to do is escalate a conflict with the United States in the midst of trying to fight a war with Ukraine.”

No one fully understands what goes into Moscow’s math in this fast-moving situation. American leadership now predicts that Russia will invade Ukraine. But Russia has demonstrated repeatedly that, when it comes to cyber, they have a large and varied toolbox. Sometimes they use it for something as relatively simple but effective as a disinformation campaign, intended to destabilize or divide adversaries. They’re also capable of developing and deploying some of the most complex and aggressive cyber operations in the world.

In 2014, as Ukraine plunged into another crisis and Russia invaded Crimea, Russian hackers secretly recorded the call of a US diplomat frustrated with European inaction who said “Fuck the EU” to a colleague. They leaked the call online in an attempt to sow chaos in the West’s alliances as a prelude to intensifying information operations by Russia. 

Leaks and disinformation have continued to be important tools for Moscow. US and European elections have been plagued repeatedly by cyber-enabled disinformation at Russia’s direction. At a moment of more fragile alliances and complicated political environments in Europe and the United States, Putin can achieve important goals by shaping public conversation and perception as war in Europe looms.

“These cyber incidents can be nonviolent, they are reversible, and most of the consequences are in perception,” says Hultquist. “They corrode institutions, they make us look insecure, they make governments look weak. They often don’t rise to the level that would provoke an actual physical, military response. I believe these capabilities are on the table.”

Article link: https://www.technologyreview.com/2022/01/21/1043980/how-a-russian-cyberwar-in-ukraine-could-ripple-out-globally/

Nurses and the Great Attrition – McKinsey

Posted by timmreardon on 01/24/2022
Posted in: Uncategorized.

January 20, 2022 | Podcast

By David Baboolall and Gretchen Berlin

A recent McKinsey survey found that more than 30 percent of nurses are thinking of leaving direct patient care. What can be done to inspire them to stay?


Many nurses are reevaluating their commitment to direct patient care given the demands of the coronavirus. Now, during a time of unprecedented need, what can health systems and other employers of nurses do to prevent losing this backbone of the healthcare workforce to the Great Attrition? Hear from Gretchen Berlin, a registered nurse (RN) and McKinsey senior partner, on the state of nurses and on specific suggestions to improve their work experience, practically and emotionally. After, McKinsey associate partner David Baboolall joins us to discuss the recent findings of the McKinsey Quarterly article “Being transgender at work.” An edited version of the conversations follows.

The McKinsey Podcast is hosted by Roberta Fusaro and Lucia Rahilly.

https://player.backtracks.fm/mckinsey/the-mckinsey-podcast/m/how-to-preserve-and-grow-the-beleaguered-nurse-workforce?

Segment one: Nurses are under great strain

Lucia Rahilly: Today, we have Gretchen Berlin on the show, a registered nurse and senior partner in our healthcare practice. Gretchen, welcome to the podcast.

Gretchen Berlin: Thank you.

Lucia Rahilly: It’s great to have you here. Nurses have been on the front lines of the COVID-19 crisis for nearly two years now. Infection rates are surging. What are you hearing on the ground about how nurses are feeling now?

Gretchen Berlin: Nurses are not a monolithic group, and it varies quite significantly across the country. In general, demands on nurses were high even before COVID-19. Across the country and across the world, we have an aging population. We have a population that’s getting sicker and needs more care.

Now, fast-forward to today: a lot of those nurses are tired. In any crisis situation, you’re running on adrenaline and trying to get through to the other side. What has become clearer as the months have gone by—with Delta and now Omicron—is that there may not be a magical end of the tunnel, and that is a very different world to be facing.

It’s a lot of pressure, a lot of day-to-day, and then month-to-month, demands and potentially not a lot of relief in the near-term future.

A few factors driving nurse fatigue

Lucia Rahilly: And we see resignation generally in the zeitgeist in the wake of the pandemic. Quitting is up across the board in different industries, and McKinsey’s own “Great Attrition” research shows that the intent to quit continues to be heightened. How do nurses stand in that area? Are there factors that are specifically driving nurses out the door?

Gretchen Berlin: Nurses are no exception to our research on the likelihood of people leaving their professions. We ran a survey in early 2021 that showed about 20 percent of folks were looking to leave. [Editor’s note: This figure rose to 32 percent in a McKinsey survey conducted in November and December of 2021.]

What we’ve seen in the healthcare market in recent months is massive competition through things such as retention bonuses and attraction bonuses for new hires—frankly, in a way that is largely unsustainable.

I think the more troubling piece is that nurses are exiting the profession altogether. It’s not just a challenge that the US is facing; we hear it from health systems around the world.

To answer your question as to what drives them to leave: we see a lot around compensation, and, yes, we need to pay nurses adequately for the services and value that they’re delivering. But at the end of the day, a lot of it comes down to the support and recognition that they feel in their workplace, from their leaders, their managers, their team, and through ensuring there’s sufficient staffing, sufficient respite, and gratitude.

Lucia Rahilly: Presumably, there’s variability in care settings, but are you seeing extremities of workload, insufficient staffing, or an emotional toll on nurses right now as the pandemic drags on?

Gretchen Berlin: Yes, you can almost draw a timeline of the pandemic. It started at this crisis moment where many health systems were flexing staff in a variety of ways. You had nurses who were historically nurses in the OR [operating room] becoming ICU [intensive-care unit] nurses or nurses who were accustomed to running ventilators moving over onto COVID-19 units.

You had nurses in the outpatient settings moving into inpatient. We had nurses crossing state lines and operating in health systems they had never operated in before. All of that was happening, with most non-COVID-19 care being delayed.

Health systems have been doing everything they can to ensure sufficient staffing, but it has been a challenge to meet the need, to say the least. It’s had to be met by contract labor and additional support, which is extremely expensive and can often be challenging to integrate into the regular care team.

Because of those staffing challenges or the variability in the workload, we haven’t yet hit a new normal in the health system. We continue to see reports coming out about the impact of delayed care, and that still hasn’t fully run its course through the system.

We have done surveys of health systems every quarter, and they’re still projecting that surgical backlogs and preventative backlogs are not yet through the system.

The mental health of nurses

Lucia Rahilly: It feels like nurses have always been required to be incredibly resilient. They are expected to behave heroically, but nurses are also human beings, and we’re seeing a rise in clinician burnout across the board.

Is mental health a new issue for nurses, or have they been suffering under the radar for longer than many of us might have suspected?

Gretchen Berlin: I don’t think mental health is a new issue in nursing at all. In general, mental health is an underappreciated, underdiscussed issue in the entire population, and nurses are no exception to that.

Many parts of care have provided support and respite for clinical teams. For example, pediatric hospitals will allow rotations between cardiac ICU step-down units and outpatient settings, allowing nurses to avoid being in the most critical, upsetting care settings day in, day out, in perpetuity.

We haven’t really built in that decompression space for a lot of healthcare. And it’s interesting that you use the word “burnout”—there are a lot of sensitivities around that word in healthcare. And rightly so, as some believe that it implies that the clinicians themselves aren’t resilient enough to deal with what is happening, when in reality the situation is untenable for anyone to individually survive in, let alone thrive in.

We, as a society, need to lift up these professions. In the last two years, we’ve had probably ten different parades for different professional sports teams who have won championships. And, yes, these events bring great joy to society. But where is that kind of support and recognition at the community level for what our frontline heroes are doing day in and day out?

Lucia Rahilly: Right, it’s a really good point. I live in New York City, and at the beginning of the pandemic, we used to stop what we were doing and clap at seven in the evening for the essential workers. And it was such an amazing outpouring of gratitude.

But now, lo these many months later, that appreciation may reside in all of us, but it’s much less visible.

Gretchen Berlin: Exactly. The nurses and clinicians have not stopped seeing the patients, the firefighters and police have not stopped answering the calls for patients in respiratory distress that may or may not have COVID-19. The level of stress that individuals are dealing with is going to have massive implications on everyone’s well-being, which then will put more strain back on the healthcare system through mental-health needs, cardiac needs, et cetera.


Lucia Rahilly: Seems also that since family members are not allowed to visit bedside in many healthcare settings, this could add to the emotional work of nurses?

Gretchen Berlin: I think that’s absolutely right. Nurses are dealing with a lot at the bedside, in terms of helping patients die and helping families. To your point, many patients, especially at the start of this, had only the nurses with them for those final moments, and I’m not sure that we’ve provided the decompression space for what that does to an individual who has to see that and support people through that over and over again.

How to improve nurse working conditions

Lucia Rahilly: Let’s talk about what we should be doing to make this better. You’ve written that we should move away from thinking about a rebuild and shift instead toward an entirely new build of our nursing workforce. And specifically, Gretchen, you mentioned several areas: workforce health, workforce flexibility, reimagining care-delivery models, and strengthening talent pipelines.

Let’s start with workforce health and well-being, both of which feel exigent right now. How can those areas be improved?

Gretchen Berlin: I think the areas of workforce health and well-being can be improved in a couple of ways. Some of it is societal recognition and celebration. When you hear that someone is an astronaut, the reaction often is, “That is so cool; tell me about that.” How do we make that be the narrative for our frontline caregivers?

The second form of recognition in our society often comes financially through compensation for the role that nursing plays.

There’s other financial recognition that can be provided, too, and has happened over time in terms of loan forgiveness from states, from the federal government, from various nonprofits in support of these roles, which different parts of the community can get involved in.

And then I think there’s recognition in the workplace. A lot of health systems do it in spades, but genuinely doing it means doubling down on the basics of leadership recognition, being on the floor with nurses to understand the simple and the complicated fixes to make their lives easier—things such as making sure supplies are there on time, and eliminating unnecessary documentation so that they can spend more time at the bedside.

Lucia Rahilly: What about workforce flexibility? Many nurses must already work shifts. What does workforce flexibility look like in the nursing context?

Gretchen Berlin: Workforce flexibility comes in a few flavors. Some of it is flexibility in the care setting. So, a bit of what we discussed earlier: allowing folks the ability to have the intense experience in the ICU when they want it, but also the ability to go elsewhere to get different experiences—obviously all within appropriate licensure and clinical standards—depending on what’s going on with them individually or with the rest of their lives.

Health systems are often doing this through regional float pools or other team-based models, but more and more of this can and should happen.

Lucia Rahilly: The pandemic obviously accelerated digital adoption in all kinds of areas, including telehealth. How might telehealth affect future care delivery and nurses’ roles in it?

Gretchen Berlin: Well, I think telehealth is an example of flexibility, and more nurses now say that they would like to continue to participate in telehealth.

The other thing that happened during the pandemic that was interesting was the more digital ways of providing patient monitoring and care. Many facilities moved a lot of the patient monitors out into the hallway to avoid unnecessary donning of PPE [personal protective equipment] and going into the room. And that actually allows for more patient monitoring at any one time.

So how do you translate that into a new model? Some parts of patient care you’re never going to get rid of—for example, the human interaction. You need to do physical assessments. You need to administer medications. But how do we take what worked in a moment of crisis and institutionalize it further in our systems and in our technology?

Lucia Rahilly: Is there a possibility of hybrid work for nurses? And if so, what would that look like?

Gretchen Berlin: I think there is the option of hybrid working for nurses in the future.

Often when we think of telemedicine, we think of a parent at home worried about their kid’s fever, wondering whether to bring them in or not, and getting a telemedicine visit. But telemedicine and teleconsultations are used for far more complex things. Especially in rural hospitals—for example, if a patient comes in with a stroke, they’ll have more of a virtual consultation with a higher-specialty service elsewhere.

We’re doing that for tele-ICU, et cetera. And individuals could operate across these care settings. Again, of course, all within license requirements, to provide flexibility. And we are seeing that nurses are more interested in doing telemedicine going forward.

Lucia Rahilly: It’s interesting to think of telehealth not just as a convenience but also as potentially a model that improves the cadence and the quality of care through more frequent monitoring or monitoring for folks in rural settings who might not otherwise make it all the way into the doctor on a more routine basis.

Gretchen Berlin: I think it can be very effective, especially for more rural settings.

The promise of technology

Lucia Rahilly: You talked about fungibility in care settings and regional float pools and so forth. Gretchen, you yourself went to nursing school. Are the skills that nurses need to do their jobs successfully changing?

Gretchen Berlin: We continue to ride the curve of technology.

In the past 20–25 years, there has been a lot of technology adoption in care delivery. A lot of these technologies often don’t fully replace how something is done, which adds to nurses’ workloads. How do you then use technology to declutter what a nurse does and help get the signal through the noise of all of the alarms and all of the vitals and all of the documentation to actually help clinicians practice at the top of their license and focus on what truly matters?

I think that is very exciting, and that is the promise of redefining how the clinical workforce can go into the future. There are longer-term systemic things we can and should do in terms of strengthening the talent pipeline: encouraging students to engage in science, engage in medicine.

It will also require expanding schools and clinical-training spots. And we see health systems doing that directly because they recognize the need and aren’t willing to wait for others in the ecosystem to do it. And these things are all needed to rebuild our talent pipelines and skills for the workforce of the future.

But in the meantime, we need to flip the operating models that we have for our workforce now, so that we’re able to bridge the gap. Otherwise, I worry, we’re going to have more than a decade of pretty turbulent times, where we have a lot of clinical demand and a very turbulent workforce.

People find purpose in nursing

Lucia Rahilly: My niece is in high school, and she recently surprised me by raising the possibility of getting an RN degree.

It occurred to me when she was talking about this that we hear so much now about the importance of purpose, particularly vis-à-vis Gen Z. Do you think the pandemic has in any way created pull into the nursing field because it has surfaced as so vital and so high stakes?

Gretchen Berlin: Yeah. It’s a really interesting point.

In some ways, I think the pandemic has shone a light on the purpose, as you said. But also, we have seen in our own research that some nurses are more likely to stay in the profession now than they were before.

We haven’t surveyed to see if that translates into more folks interested, but we do see applications to schools going up. And I think some of that is because of the importance of purpose, and some of it is because the profession is changing, and nursing can be much more flexible than your traditional office job.

In a lot of ways, the criticality of the role has been elevated for people. A lot of people want nothing more than to support society and individuals on the biggest challenge of the day, which right now is COVID-19 and meeting the pent-up demand that it has caused.

An optimistic view of the future

Lucia Rahilly: Acknowledging that access to quality nursing care is, in part because of COVID-19, such a high-stakes and collectively vital issue, are you optimistic about the potential for positive change, both for the sake of nurses and for all of us?

Gretchen Berlin: I am quite optimistic. I think there are a lot of really bright minds trying to solve this. There are a lot of committed health systems, employers, and societies trying to invest and fix it. I think more than anything, there’s a really committed workforce who’s excited to innovate, who has shown tremendous flexibility and resilience already and will continue to do that going forward.

Lucia Rahilly: Any suggestions for keeping this issue on the front burner, assuming COVID-19 starts to recede?

Gretchen Berlin: I think there are ways we can continue to recognize nurses as a society. I think that’s part of the power of conversations like these. We have National Nurses Week in May. There are obviously national companies that run nurses campaigns. There are ways that each of us as individuals, or our small businesses, or our large businesses, can draw attention to our first responders and our clinicians in general, but our nurses especially, through celebrations, promotions, and accolades.

Lucia Rahilly: Let’s close there. Gretchen, that was a fascinating discussion. Thanks so much.

Gretchen Berlin: Thank you. I hope we all have a better 2022.

Lucia Rahilly: Roberta, so many of us have had the experience of relying elementally on nursing care. My daughter, who is now a happy, coltish, and—knock on wood—healthy six-and-a-half-year-old girl, had respiratory surgery at birth and spent almost two weeks in the surgical NICU [neonatal intensive-care unit]. And those NICU nurses were just invaluable.

Our family will never forget them. They were vital not just to her survival but also to our own emotional stability and well-being during that time.

Roberta Fusaro: I feel the same way, Lucia, and I’ve had the complete opposite life cycle experience of dealing with in-home nurses for my mother when she refused to move out of the house that she had been in for, you know, some 70 years.

The fact that we felt comfortable enough to have people come into our house to take care of my mother made the final months of her life that much more comfortable, which gave us a lot of comfort too.

It’s horrible to see such flux within the nursing workforce. And there’s another cohort of people that are at risk of quitting, in part because they don’t feel valued at work. It’s our transgender colleagues. We’re about to hear from David Baboolall about our recent article: “Being transgender at work.”

Segment two: Being transgender at work

Lucia Rahilly: David, thanks for joining us today.

David Baboolall: Lucia, thank you so much for having me. I’m very excited that we’re having this conversation.

Lucia Rahilly: Acknowledging the range and the variety of experience within the trans demographic, what has our research taught us about what it’s like to be trans in today’s workplace?

David Baboolall: I’d like to start off with just a few facts around unemployment, if that works with you. Unfortunately, I’ll start quite stark.

Only 73 percent of transgender adults are actually in the workforce, compared with 82 percent of cisgender adults. Our survey, which tracked a number of trends over this past year, shows that trans individuals are two times more likely to be unemployed than cisgender people, which is kind of crazy.

And in the US alone, almost two times as many trans people report being recently out of work. The scarcity and precarity of transgender employment can lead to loneliness, instability, and alienation from the rest of the workplace. When we look at wages, candidly, the situation is equally stark. Transgender people make far less money than cisgender people do.

The average household income of a transgender adult is about $17,000 less than that of a cisgender one. And our survey showed that transgender individuals are almost 2.5 times more likely to work in sectors such as retail or food service, which, in large proportion, are entry-level jobs paying the minimum wage in the US.

Then, when we take it one step further to intersectionality—when we look at folks who are marginalized in addition to being trans, for example, people of color—the figures are worse. Seventy-five percent of Native American trans people and 43 percent of Hispanic trans people make less than $25,000, compared with just 17 percent of White cisgender people.

Lucia Rahilly: That’s a dire picture, and it suggests an urgent need to take action. The stakes are high. What are some examples of the specific challenges that trans employees confront daily in the workplace?

David Baboolall: In the corporate push for more diverse workplaces, especially since the racial reckoning last year, the transgender population remains unsupported. And this is more than just a matter of career progression, promotion, or climbing to the top of the ladder.

Whereas other populations strive to feel included in the workplace, transgender workers want to feel safe. For members of the trans and gender-nonconforming community, safety is top of mind—safety from physical harm, mental harm, or emotional harm. And what we’re seeing in our data is that less than half of transgender adults are comfortable being fully open about their gender identity at work.

And when we take that a step further, two-thirds are uncomfortable being out with their customers and their clients. And being in a client-facing role myself, Lucia, the inability to be out with folks that I’m talking to not only on a monthly but also a weekly—if not hourly—basis is a lot to grapple with, especially if you’re in client service.

Lucia Rahilly: Safety is obviously fundamental. I mean, that’s Maslow’s hierarchy of needs, right? That’s basic. Besides ensuring that employees feel safe and are able to bring their full selves to work, what can leaders do to help at the enterprise level?

David Baboolall: I think step one is education and awareness. That’s my biggest goal with this report and this research. I’m hoping corporate leaders and leaders of different sectors will take this report and say, “I have a reference to learn from.”

There’s a glossary that we put together to explain different terms used in the trans community. And then I think you can go across the employee life cycle and be intentional in recruiting. How can you connect with potential trans new hires or participate in specific recruiting events? Signal to those who are coming to your firm that you are excited to be a workplace where trans individuals thrive.

I think the second step is to think about offering trans-affirming benefits. And this doesn’t just mean medical benefits such as gender-affirming surgery or hormone therapy. It also involves thinking about whether you have mental-healthcare support for a community that is disproportionately affected by mental-health issues.

The third step is about other policies and programs, such as reviewing company dress codes, eliminating gender-specific language, and offering diversity trainings that are nuanced to gender identity. And I think the last step is adopting an overall inclusive culture—are the forms and documents that you ask your employees to fill out on a weekly, semiannual, or annual basis asking for personal pronouns? Are they asking for preferred names? Does your office have gender-neutral bathrooms?

Lucia Rahilly: You mentioned language and a glossary in the report, which seems vital, particularly because fear and confusion over language can hold colleagues back from talking about some of these issues. What are some small steps that all of us in the workforce might take to signal support for our transgender colleagues and potentially improve their daily experience directly?

David Baboolall: My teams actually practice this. When we kick off a new project at McKinsey, we do team introductions. And among those simple five to ten introductory questions—What’s your name? Where are you from? Where did you grow up? How did you join McKinsey?—there’s the question of what your personal pronouns are. What is your preferred name? Those two simple questions signal to any person in the trans community, “Hey, this person seems like an ally.”

Lucia Rahilly: Are there any examples from your own career of allyship?

David Baboolall: I think personally there have been a number of instances over the last year where folks have noticed my pronoun change—clients have come to me, I’ve had senior individuals at McKinsey come to me, I’ve had people in my building when I would wear work name tags back home. They see my pronouns, and they’re inquisitive, asking, “Hey, is everything OK? How can I be supportive? Is there anything that you’d like to talk through? How can I become educated?” And that’s been great. I think that that has been an opportunity for folks to engage because I’m very open about my personal pronouns, and I use those for every introduction that I actually have.

Lucia Rahilly: Any thoughts for leaders on the best way to know that they’re making progress?

David Baboolall: I think the more it comes up in conversation, the more you’re likely doing things right. We tend to avoid conversations when it comes to topics of diversity that we’re not used to, that make us uncomfortable, that we’re nervous about getting wrong.

So the more that these topics are being brought up, the more ideas that are being brought to senior leaders—such as “Hey, maybe we should do this for our trans colleagues? Maybe we should do this in terms of gender identity? Have we thought about offering this healthcare benefit? Have we thought about changing this policy? Have we really thought through placing a gender-neutral bathroom at our factory site?”—the better it is. As those ideas are flourishing, as folks are being more vocal about it, you’re doing something right. And they’re open to having the conversation to advance change.

Lucia Rahilly: David, fascinating. Thanks so much for being with us today.

David Baboolall: Thank you so much for having me.

Article link: https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/nurses-and-the-great-attrition?cid=app

ABOUT THE AUTHOR(S)

David Baboolall is an associate partner in McKinsey’s New York office, and Gretchen Berlin is a senior partner in the Washington, DC, office. Roberta Fusaro is an executive editor in the Waltham, Massachusetts, office; and Lucia Rahilly, global editorial director of McKinsey Global Publishing, is based in the New York office.

THE MIRAGE OF THE INTERCONNECTED BATTLEFIELD – MWI

Posted by timmreardon on 01/24/2022
Posted in: Uncategorized. Leave a comment

During an exercise in the California desert in October 2021, a special operations forces team hit the jackpot. Beneath the team’s observation post were almost a hundred enemy vehicles rolling through a refueling point. The team had eyes on target and fires on call. It should have been a decisive moment in the exercise, the kind of opportunity that so much modern doctrine strives to capitalize upon. Alas, it was not to be.

Unlike the decades-long wars following 9/11, in which NATO forces fought in small formations with few constraints imposed by enemy fires threatening supporting infrastructure and little interference from electronic warfare, this force-on-force exercise replicated a congested battlespace and a contested electromagnetic spectrum. In the face of insufficient nodes in their communications network, saturated headquarters, and enemy jamming, the kill chain for the fire mission took four hours to complete. It killed some enemy logisticians, but the opportunity had long passed.

Today, staff officers the world over are heralding the dawn of an interconnected battlefield in which data can move seamlessly between air, land, maritime, space, and cyber forces in real time. PowerPoint and CGI presentations promise commanders continual access to pervasive and perpetually relevant situational awareness. Senior officers lap it up because it is what they have always dreamed of. The ability to access the data from any battlefield sensor across a force and share it with the most appropriate shooter holds out the prospect of maximizing a force’s lethality and efficiency while denying the enemy the opportunity to achieve surprise.

But for the technicians trying to build these architectures and the soldiers, sailors, aviators, guardians, and marines trying to maintain and use them in the field, the gap between theory and practice remains wide—and risks becoming wider still. The problem is not that an interconnected battlefield is impossible, or that it isn’t advantageous. The problem is that so much of the conceptual bloviation on the subject evades any serious appreciation for the friction involved, which is all too often dismissed with the claim that artificial intelligence will function as a cure-all. By pursuing the goal of connecting everything, all of the time, military leaders are avoiding hard choices about what data to prioritize and who on the battlefield should have the most assured access to it under pressure. Policymakers need to start thinking harder about these decisions if the pursuit of connectivity is to bear fruit.

The Limits of Convergence

The conventional narrative of a connected battlefield is of an any-sensor-to-any-shooter network. In this vision of combat, data is transferred seamlessly between units, command posts have real-time situational awareness from every available source, artificial intelligence rapidly generates optimized courses of action, and the fog of war dissipates. To realize this vision, sensors from all of the military services must be connected, with data able to flow through any available path.

The ability to transfer data between units from different services, so that aircraft, ships, and armored vehicles can communicate with each other, will improve the effectiveness of the joint force. But there are limits to this vision of an interconnected command-and-control system spanning every domain of war. This is because of fundamental differences between the domains. Ships and aircraft tend to have access to a great deal of power and large directional antennas, and they operate in formations comprising a comparatively small number of nodes with which they must exchange data. What’s more, they tend to operate within line of sight of one another. The result is that through free-space optical links and other high-bandwidth transmissions naval and air forces can transfer large volumes of data in real time.

These conditions do not pertain to land forces. Where a naval task force might comprise up to a dozen vessels, a division consists of thousands of vehicles, each of which is highly constrained in its available power and can generally only carry a small antenna without sacrificing mobility. Moreover, sensors and shooters will rarely be in line of sight of one another, so passing data around the force often requires transiting key bottlenecks in a network. Nodes that are elevated or dedicated transmitters with more energy will stand out in the electromagnetic spectrum and risk being targeted. There is therefore a practical and tactical emphasis on minimizing signature to maximize survivability.

Given these disparities, the seamless transmission of data between domains along any available route must have one of two consequences. Either it must restrict air and naval forces to sharing only data packets of a size that land forces can support, or air forces in particular will perpetually saturate the available network of the land forces beneath them. The first approach would massively restrict the performance of key naval and air systems, including cooperative engagement capabilities. The latter approach would suppress land forces’ access to their own communications.

There are examples of multi-domain networks that are often touted as proving that the concept works. Many are in Israel. Others sit on testing sites in various NATO countries. Often the reports that emerge from these testing sites describe single engagements in which a single platform in one domain passes data to a platform in a different domain. The problem with these tests is twofold. First, many of the testing sites—and Israel—have access to a huge amount of fixed infrastructure, including the civilian internet, that bypasses military networks. The United States and its allies are unlikely to have access to such infrastructure in a future confrontation with a peer adversary. Second, single tests of multi-domain data transfer fail to replicate what happens when large formations are trying to utilize a network. Simply avoiding fratricide with one’s own communications becomes difficult given the number of nodes trying to share data simultaneously, even without the effects of enemy jamming.

The Bandwidth Bottleneck

The challenge is getting harder, not easier. To be sure, bandwidth across military networks has steadily improved over the past three decades. The exact capacity of military systems is classified, but transfer rates have advanced generation by generation. Unfortunately—with the exception of some niche and not universally usable systems—the gap between available bandwidth and data is expanding as the size of files and the number of transmitters increase geometrically.

A high-resolution image likely comprises several megabytes of data. A multispectral image comprising electro-optical and thermal layers, radar overlays, and topographical information becomes orders of magnitude larger. Military sensors have massively improved in their fidelity over recent years. The result is that platforms now hoover up terabytes of information. Further exacerbating the pressure on networks is that as sophisticated sensors are added to more and more platforms there is also a higher volume of high-fidelity, multispectral data points, all competing for bandwidth.
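To get a feel for the arithmetic, a rough calculation with illustrative, assumed figures (none of these numbers describe real systems) shows why a layered multispectral product chokes a narrow tactical link while barely registering on a high-bandwidth optical one:

```python
# Back-of-envelope transfer times over contested-battlefield links.
# All sizes and link rates below are illustrative assumptions.

def transfer_seconds(size_bytes: float, link_bps: float) -> float:
    """Time to push a payload over a link, ignoring protocol overhead."""
    return size_bytes * 8 / link_bps

MB = 1_000_000
GB = 1_000_000_000

photo = 5 * MB                 # a single high-resolution image
multispectral = 2 * GB         # layered EO/IR + radar + terrain product

tactical_link = 2_000_000      # 2 Mbps land-forces data link (assumed)
optical_link = 1_000_000_000   # 1 Gbps free-space optical link (assumed)

print(f"photo over tactical link: {transfer_seconds(photo, tactical_link):.0f} s")
print(f"multispectral over tactical link: "
      f"{transfer_seconds(multispectral, tactical_link) / 3600:.1f} h")
print(f"multispectral over optical link: "
      f"{transfer_seconds(multispectral, optical_link):.0f} s")
```

Under these assumptions the multispectral product takes on the order of hours over the tactical link but seconds over the optical one, which is the disparity between domains the article describes.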

The Department of Defense has heralded space-based communications as a way of circumventing the constraints imposed by a lack of line of sight between units. The problem with space-based communications is that the infrastructure is exceedingly expensive, often visible to the enemy and therefore able to be suppressed, and in any case imposes significant delays on the network. Since most satellites are not geostationary, they can receive data only while passing above a unit wishing to transmit and cannot then push the data down until they are above the desired receiving base station. Sharing data between satellites is possible, but every additional link in the network imposes more delays between transmission and receipt.
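The store-and-forward delay can be sketched with a toy model; the 90-minute orbital period and the per-hop crosslink cost are assumptions for illustration only, not the parameters of any real constellation:

```python
# Toy store-and-forward delay model for a low-Earth-orbit relay.
# ORBIT_PERIOD_S (a ~90-minute orbit) is an assumed figure.

ORBIT_PERIOD_S = 90 * 60

def store_and_forward_delay(track_fraction: float, hops: int = 1,
                            per_hop_s: float = 0.0) -> float:
    """Delay from waiting for the satellite to traverse the gap between
    transmitter and receiver (as a fraction of its orbit), plus a fixed
    cost per inter-satellite hop."""
    return track_fraction * ORBIT_PERIOD_S + hops * per_hop_s

# A quarter-orbit gap with no crosslinks:
print(store_and_forward_delay(0.25))  # 1350 seconds, i.e. 22.5 minutes
```

Even this crude sketch shows why a single non-geostationary relay adds minutes, not milliseconds, and why each extra hop only compounds the lag.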

As a result, there is no conceivable manner in which all of this data can be accumulated in real time. Aircraft and other systems can plug in and download what they have captured upon landing, but as more and more data is generated it will take a long time to sift and disseminate it; the tempo of distribution will remain a long way from the promised panopticon. Indeed, the volume of data gathered vastly exceeds the capacity of the crews collecting it to monitor, meaning that there is little effective means of identifying incidental detections and manually prioritizing their transfer to interested parties. Bandwidth constraints are not just a reality of networks; the sheer volume of data is saturating human capacity to monitor it, let alone analyze and understand what is being captured.

AI Is No Panacea

It is at this point that the phrase artificial intelligence inevitably enters the discussion. All too often it is with this ritual incantation that the discussion ends. Humans may not be able to work their way through the data, but the computer can, and by only selecting what is relevant the computer will thereby only transmit what is needed, alleviating the pressure on the network.

This is true, insofar as artificial intelligence, when integrated into the platforms (often described as being at the “edge” of the force to distinguish it from AI analyzing data in a central headquarters), will allow systems to identify specific kinds of return within the vast quantity of data they are collecting. The relevance of what these systems find, however, will depend entirely on what they have been programmed to look for, and so long as there is a constraint upon bandwidth there are only so many returns that a system can offload. The key question, therefore, becomes defining what is relevant.

The problem with the vision of a commander’s data-driven panopticon is that it conveys the aspiration that commanders and analysts do not need to prioritize. Although mission data files can be updated periodically the reality is that edge-based processing systems will have a set of mission data files with which they operate when deployed. Those files will include the priority stack—the preprogrammed order in which information is transmitted—that in a system with limited available bandwidth will determine what gets through and what does not. Further mission data files in each point within a network will need to sort incoming data and prioritize what to pass on if there is too much to be transmitted immediately.

The building of priority stacks is therefore the fundamental prerequisite for moving the desired data quickly around the multi-domain battlespace. This requires commanders to determine what is important, when it is important, and to whom it is relevant. To understand this, it is necessary to understand how the force wants to fight, where it seeks advantage, and where it will accept vulnerability. Commanders need to understand the vulnerabilities generated by the priorities—and therefore blind spots—they have programmed into their systems and develop training, tactics, and procedures for how to mitigate these inbuilt risks.
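A priority stack of this kind can be sketched as a small scheduler; the detection types, priority ordering, and payload sizes below are hypothetical examples for illustration, not doctrine:

```python
import heapq

# A minimal sketch of a "priority stack": a preprogrammed ordering that
# decides which detections get through a bandwidth-limited link.
# The categories, priorities, and sizes are hypothetical.

PRIORITY = {  # lower number = transmit first (assumed ordering)
    "ballistic_missile_track": 0,
    "enemy_radar_position": 1,
    "artillery_fire_detection": 2,
    "vehicle_sighting": 3,
}

def plan_transmissions(detections, budget_bytes):
    """Return the detections that fit the budget, highest priority first.

    detections: list of (kind, size_in_bytes) tuples.
    Anything that does not fit waits for the next transmission window;
    those deferred items are exactly the commander's programmed blind spot.
    """
    heap = [(PRIORITY[kind], size, kind) for kind, size in detections]
    heapq.heapify(heap)
    sent, used = [], 0
    while heap:
        _prio, size, kind = heapq.heappop(heap)
        if used + size <= budget_bytes:
            sent.append(kind)
            used += size
    return sent

detections = [("vehicle_sighting", 400_000),
              ("ballistic_missile_track", 50_000),
              ("artillery_fire_detection", 150_000)]
print(plan_transmissions(detections, budget_bytes=250_000))
# the missile track and artillery detection fit; the sighting waits
```

The design choice the sketch makes concrete: whoever writes the `PRIORITY` table decides, in advance, whose data arrives and whose does not.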

Some priorities are easy. Ballistic missile track data is likely to be high on anyone’s list. But when analysts start to consider the trade-off between an F-35 prioritizing the transmission of detected artillery fires versus the position of an enemy tactical radar, they run into very different risks and rewards between the services, and questions regarding who is dependent upon the F-35 as opposed to the other assets at their disposal. The priority stack therefore drives where the force needs resilient or redundant deployed capability. It literally shapes force design.

If an interconnected battlefield is going to be realized, then commanders must accept that while the aspiration is any sensor to any shooter, the reality in the field will always be some sensors to some shooters, some of the time. If commanders refuse to accept this, they will avoid the critical decisions that need to be made to deliver genuine advances in capability—and the interconnected battlefield will remain little more than a mirage.

Article link: https://mwi.usma.edu/the-mirage-of-the-interconnected-battlefield/

Dr. Jack Watling is research fellow for land warfare at the Royal United Services Institute in London.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Staff Sgt. Clay Lancaster, US Air Force

The race to understand the exhilarating, dangerous world of language AI – MIT Tech Review

Posted by timmreardon on 01/24/2022
Posted in: Uncategorized. Leave a comment

Hundreds of scientists around the world are working together to understand one of the most powerful emerging technologies before it’s too late.

by Karen Hao May 20, 2021

On May 18, Google CEO Sundar Pichai announced an impressive new tool: an AI system called LaMDA that can chat to users about any subject.

To start, Google plans to integrate LaMDA into its main search portal, its voice assistant, and Workspace, its collection of cloud-based work software that includes Gmail, Docs, and Drive. But the eventual goal, said Pichai, is to create a conversational interface that allows people to retrieve any kind of information—text, visual, audio—across all Google’s products just by asking.

LaMDA’s rollout signals yet another way in which language technologies are becoming enmeshed in our day-to-day lives. But Google’s flashy presentation belied the ethical debate that now surrounds such cutting-edge systems. LaMDA is what’s known as a large language model (LLM)—a deep-learning algorithm trained on enormous amounts of text data.

Studies have already shown how racist, sexist, and abusive ideas are embedded in these models. They associate categories like doctors with men and nurses with women; good words with white people and bad ones with Black people. Probe them with the right prompts, and they also begin to encourage things like genocide, self-harm, and child sexual abuse. Because of their size, they have a shockingly high carbon footprint. Because of their fluency, they easily confuse people into thinking a human wrote their outputs, which experts warn could enable the mass production of misinformation.
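The mechanism behind these embedded associations is statistical: a model absorbs whatever co-occurrence patterns its training text contains. A toy sketch (with a made-up five-line corpus) makes the point:

```python
# Toy illustration of how associations in training text become model
# "knowledge": count which pronouns co-occur with which professions.
# The corpus here is invented purely for illustration.
from collections import Counter
from itertools import product

corpus = [
    "the doctor said he was busy",
    "the doctor said he would operate",
    "the nurse said she was busy",
    "the nurse said she would help",
    "the doctor said she was busy",
]

def association_counts(corpus, roles=("doctor", "nurse"),
                       pronouns=("he", "she")):
    """Count sentences in which each (role, pronoun) pair co-occurs."""
    counts = Counter()
    for line in corpus:
        words = line.split()
        for role, pron in product(roles, pronouns):
            if role in words and pron in words:
                counts[(role, pron)] += 1
    return counts

print(association_counts(corpus))
# the skew in these counts is what a model trained on this text absorbs
```

Real LLMs learn far subtler statistics than raw co-occurrence, but the principle is the same: skewed text in, skewed associations out.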

In December, Google ousted its ethical AI co-lead Timnit Gebru after she refused to retract a paper that made many of these points. A few months later, after wide-scale denunciation of what an open letter from Google employees called the company’s “unprecedented research censorship,” it fired Gebru’s coauthor and co-lead Margaret Mitchell as well.

It’s not just Google that is deploying this technology. The highest-profile language models so far have been OpenAI’s GPT-2 and GPT-3, which spew remarkably convincing passages of text and can even be repurposed to finish off music compositions and computer code. Microsoft now exclusively licenses GPT-3 to incorporate into yet-unannounced products. Facebook has developed its own LLMs for translation and content moderation. And startups are creating dozens of products and services based on the tech giants’ models. Soon enough, all of our digital interactions—when we email, search, or post on social media—will be filtered through LLMs.

Unfortunately, very little research is being done to understand how the flaws of this technology could affect people in real-world applications, or to figure out how to design better LLMs that mitigate these challenges. As Google underscored in its treatment of Gebru and Mitchell, the few companies rich enough to train and maintain LLMs have a heavy financial interest in declining to examine them carefully. In other words, LLMs are increasingly being integrated into the linguistic infrastructure of the internet atop shaky scientific foundations.

More than 500 researchers around the world are now racing to learn more about the capabilities and limitations of these models. Working together under the BigScience project led by Huggingface, a startup that takes an “open science” approach to understanding natural-language processing (NLP), they seek to build an open-source LLM that will serve as a shared resource for the scientific community. The goal is to generate as much scholarship as possible within a single focused year. Their central question: How and when should LLMs be developed and deployed to reap their benefits without their harmful consequences?

“We can’t really stop this craziness around large language models, where everybody wants to train them,” says Thomas Wolf, the chief science officer at Huggingface, who is co-leading the initiative. “But what we can do is try to nudge this in a direction that is in the end more beneficial.”

Stochastic parrots

In the same month that BigScience kicked off its activities, a startup named Cohere quietly came out of stealth. Started by former Google researchers, it promises to bring LLMs to any business that wants one—with a single line of code. It has developed a technique to train and host its own model with the idle scraps of computational resources in a data center, which holds down the costs of renting out the necessary cloud space for upkeep and deployment.

Among its early clients is the startup Ada Support, a platform for building no-code customer support chatbots, which itself has clients like Facebook and Zoom. And Cohere’s investor list includes some of the biggest names in the field: computer vision pioneer Fei-Fei Li, Turing Award winner Geoffrey Hinton, and Apple’s head of AI, Ian Goodfellow.

Cohere is one of several startups and initiatives now seeking to bring LLMs to various industries. There’s also Aleph Alpha, a startup based in Germany that seeks to build a German GPT-3; an unnamed venture started by several former OpenAI researchers; and the open-source initiative Eleuther, which recently launched GPT-Neo, a free (and somewhat less powerful) reproduction of GPT-3.

But it’s the gap between what LLMs are and what they aspire to be that has concerned a growing number of researchers. LLMs are effectively the world’s most powerful autocomplete technologies. By ingesting millions of sentences, paragraphs, and even samples of dialogue, they learn the statistical patterns that govern how each of these elements should be assembled in a sensible order. This means LLMs can enhance certain activities: for example, they are good for creating more interactive and conversationally fluid chatbots that follow a well-established script. But they do not actually understand what they’re reading or saying. Many of the most advanced capabilities of LLMs today are also available only in English.
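The "autocomplete" framing can be made concrete with a toy bigram model. This is a deliberately tiny stand-in for an LLM (the corpus and function names are invented for illustration), but the principle of learning from raw text which token tends to follow which is the same:

```python
from collections import defaultdict, Counter

# A miniature training corpus; a real LLM ingests billions of words.
corpus = "the doctor said the nurse said the doctor is here".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # -> doctor ("doctor" follows "the" most often)
```

An actual LLM does essentially this with billions of parameters over vast corpora, conditioning on long stretches of context rather than a single preceding word—which is what makes its output fluent without making it understood.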

Among other things, this is what Gebru, Mitchell, and five other scientists warned about in their paper, which calls LLMs “stochastic parrots.” “Language technology can be very, very useful when it is appropriately scoped and situated and framed,” says Emily Bender, a professor of linguistics at the University of Washington and one of the coauthors of the paper. But the general-purpose nature of LLMs—and the persuasiveness of their mimicry—entices companies to use them in areas they aren’t necessarily equipped for.

In a recent keynote at one of the largest AI conferences, Gebru tied this hasty deployment of LLMs to consequences she’d experienced in her own life. Gebru was born and raised in Ethiopia, where an escalating war has ravaged the northernmost Tigray region. Ethiopia is also a country where 86 languages are spoken, nearly all of them unaccounted for in mainstream language technologies.

Despite LLMs having these linguistic deficiencies, Facebook relies heavily on them to automate its content moderation globally. When the war in Tigray first broke out in November, Gebru saw the platform flounder to get a handle on the flurry of misinformation. This is emblematic of a persistent pattern that researchers have observed in content moderation. Communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.

Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of LLMs. And those models, parroting back what they’re trained on, end up regurgitating these toxic linguistic patterns on the internet.

In many cases, researchers haven’t investigated thoroughly enough to know how this toxicity might manifest in downstream applications. But some scholarship does exist. In her 2018 book Algorithms of Oppression, Safiya Noble, an associate professor of information and African-American studies at the University of California, Los Angeles, documented how biases embedded in Google search perpetuate racism and, in extreme cases, perhaps even motivate racial violence.

“The consequences are pretty severe and significant,” she says. Google isn’t just the primary knowledge portal for average citizens. It also provides the information infrastructure for institutions, universities, and state and federal governments.

Google already uses an LLM to optimize some of its search results. With its latest announcement of LaMDA and a recent proposal it published in a preprint paper, the company has made clear it will only increase its reliance on the technology. Noble worries this could make the problems she uncovered even worse: “The fact that Google’s ethical AI team was fired for raising very important questions about the racist and sexist patterns of discrimination embedded in large language models should have been a wake-up call.”

BigScience

The BigScience project began in direct response to the growing need for scientific scrutiny of LLMs. In observing the technology’s rapid proliferation and Google’s attempted censorship of Gebru and Mitchell, Wolf and several colleagues realized it was time for the research community to take matters into its own hands.

Inspired by open scientific collaborations like CERN in particle physics, they conceived of an idea for an open-source LLM that could be used to conduct critical research independent of any company. In April of this year, the group received a grant to build it using the French government’s supercomputer.

At tech companies, LLMs are often built by only half a dozen people who have primarily technical expertise. BigScience wanted to bring in hundreds of researchers from a broad range of countries and disciplines to participate in a truly collaborative model-construction process. Wolf, who is French, first approached the French NLP community. From there, the initiative snowballed into a global operation encompassing more than 500 people.

The collaborative is now loosely organized into a dozen working groups and counting, each tackling different aspects of model development and investigation. One group will measure the model’s environmental impact, including the carbon footprint of training and running the LLM and factoring in the life-cycle costs of the supercomputer. Another will focus on developing responsible ways of sourcing the training data—seeking alternatives to simply scraping data from the web, such as transcribing historical radio archives or podcasts. The goal here is to avoid toxic language and nonconsensual collection of private information.

Other working groups are dedicated to developing and evaluating the model’s “multilinguality.” To start, BigScience has selected eight languages or language families, including English, Chinese, Arabic, Indic (including Hindi and Urdu), and Bantu (including Swahili). The plan is to work closely with every language community to map out as many of its regional dialects as possible and ensure that its distinct data privacy norms are respected. “We want people to have a say in how their data is used,” says Yacine Jernite, a Huggingface researcher.

The point is not to build a commercially viable LLM to compete with the likes of GPT-3 or LaMDA. The model will be too big and too slow to be useful to companies, says Karën Fort, an associate professor at the Sorbonne. Instead, the resource is being designed purely for research. Every data point and every modeling decision is being carefully and publicly documented, so it’s easier to analyze how all the pieces affect the model’s outcomes. “It’s not just about delivering the final product,” says Angela Fan, a Facebook researcher. “We envision every single piece of it as a delivery point, as an artifact.”

The project is undoubtedly ambitious—more globally expansive and collaborative than any the AI community has seen before. The logistics of coordinating so many researchers is itself a challenge. (In fact, there’s a working group for that, too.) What’s more, every single researcher is contributing on a volunteer basis. The grant from the French government covers only computational, not human, resources.

But researchers say the shared need that brought the community together has galvanized an impressive level of energy and momentum. Many are optimistic that by the end of the project, which will run until May of next year, they will have produced not only deeper scholarship on the limitations of LLMs but also better tools and practices for building and deploying them responsibly.

The organizers hope this will inspire more people within industry to incorporate those practices into their own LLM strategy, though they are the first to admit they are being idealistic. If anything, the sheer number of researchers involved, including many from tech giants, will help establish new norms within the NLP community.

In some ways the norms have already shifted. In response to conversations around the firing of Gebru and Mitchell, Cohere heard from several of its clients that they were worried about the technology’s safety. Its website now includes a page featuring a pledge to continuously invest in technical and non-technical research to mitigate the possible harms of its model. It says it will also assemble an advisory council made up of external experts to help it create policies on the permissible use of its technologies.

“NLP is at a very important turning point,” says Fort. That’s why BigScience is exciting. It allows the community to push the research forward and provide a hopeful alternative to the status quo within industry: “It says, ‘Let’s take another pass. Let’s take it together—to figure out all the ways and all the things we can do to help society.’”

“I want NLP to help people,” she says, “not to put them down.”

Article link: https://www.technologyreview.com/2021/05/20/1025135/ai-large-language-models-bigscience-project/

Management Time: Who’s Got the Monkey? – HBR

Posted by timmreardon on 01/23/2022
Posted in: Uncategorized. Leave a comment

Editor’s Note: This article was originally published in the November–December 1974 issue of HBR and has been one of the publication’s two best-selling reprints ever.

For its reissue as a Classic, the Harvard Business Review asked Stephen R. Covey to provide a commentary.

Why is it that managers are typically running out of time while their subordinates are typically running out of work? Here we shall explore the meaning of management time as it relates to the interaction between managers and their bosses, their peers, and their subordinates.

Specifically, we shall deal with three kinds of management time:

Boss-imposed time—used to accomplish those activities that the boss requires and that the manager cannot disregard without direct and swift penalty.

System-imposed time—used to accommodate requests from peers for active support. Neglecting these requests will also result in penalties, though not always as direct or swift.

Self-imposed time—used to do those things that the manager originates or agrees to do. A certain portion of this kind of time, however, will be taken by subordinates and is called subordinate-imposed time. The remaining portion will be the manager’s own and is called discretionary time. Self-imposed time is not subject to penalty since neither the boss nor the system can discipline the manager for not doing what they didn’t know he had intended to do in the first place.

To accommodate those demands, managers need to control the timing and the content of what they do. Since what their bosses and the system impose on them are subject to penalty, managers cannot tamper with those requirements. Thus their self-imposed time becomes their major area of concern.

Managers should try to increase the discretionary component of their self-imposed time by minimizing or doing away with the subordinate component. They will then use the added increment to get better control over their boss-imposed and system-imposed activities. Most managers spend much more time dealing with subordinates’ problems than they even faintly realize. Hence we shall use the monkey-on-the-back metaphor to examine how subordinate-imposed time comes into being and what the superior can do about it.

Where Is the Monkey?

Let us imagine that a manager is walking down the hall and that he notices one of his subordinates, Jones, coming his way. When the two meet, Jones greets the manager with, “Good morning. By the way, we’ve got a problem. You see….” As Jones continues, the manager recognizes in this problem the two characteristics common to all the problems his subordinates gratuitously bring to his attention. Namely, the manager knows (a) enough to get involved, but (b) not enough to make the on-the-spot decision expected of him. Eventually, the manager says, “So glad you brought this up. I’m in a rush right now. Meanwhile, let me think about it, and I’ll let you know.” Then he and Jones part company. 

Let us analyze what just happened. Before the two of them met, on whose back was the “monkey”? The subordinate’s. After they parted, on whose back was it? The manager’s. Subordinate-imposed time begins the moment a monkey successfully leaps from the back of a subordinate to the back of his or her superior and does not end until the monkey is returned to its proper owner for care and feeding. In accepting the monkey, the manager has voluntarily assumed a position subordinate to his subordinate. That is, he has allowed Jones to make him her subordinate by doing two things a subordinate is generally expected to do for a boss—the manager has accepted a responsibility from his subordinate, and the manager has promised her a progress report.

The subordinate, to make sure the manager does not miss this point, will later stick her head in the manager’s office and cheerily query, “How’s it coming?” (This is called supervision.)

Or let us imagine in concluding a conference with Johnson, another subordinate, the manager’s parting words are, “Fine. Send me a memo on that.”

Let us analyze this one. The monkey is now on the subordinate’s back because the next move is his, but it is poised for a leap. Watch that monkey. Johnson dutifully writes the requested memo and drops it in his out-basket. Shortly thereafter, the manager plucks it from his in-basket and reads it. Whose move is it now? The manager’s. If he does not make that move soon, he will get a follow-up memo from the subordinate. (This is another form of supervision.) The longer the manager delays, the more frustrated the subordinate will become (he’ll be spinning his wheels) and the more guilty the manager will feel (his backlog of subordinate-imposed time will be mounting).

Or suppose once again that at a meeting with a third subordinate, Smith, the manager agrees to provide all the necessary backing for a public relations proposal he has just asked Smith to develop. The manager’s parting words to her are, “Just let me know how I can help.”

Now let us analyze this. Again the monkey is initially on the subordinate’s back. But for how long? Smith realizes that she cannot let the manager “know” until her proposal has the manager’s approval. And from experience, she also realizes that her proposal will likely be sitting in the manager’s briefcase for weeks before he eventually gets to it. Who’s really got the monkey? Who will be checking up on whom? Wheel spinning and bottlenecking are well on their way again.

A fourth subordinate, Reed, has just been transferred from another part of the company so that he can launch and eventually manage a newly created business venture. The manager has said they should get together soon to hammer out a set of objectives for the new job, adding, “I will draw up an initial draft for discussion with you.”

Let us analyze this one, too. The subordinate has the new job (by formal assignment) and the full responsibility (by formal delegation), but the manager has the next move. Until he makes it, he will have the monkey, and the subordinate will be immobilized. 

Why does all of this happen? Because in each instance the manager and the subordinate assume at the outset, wittingly or unwittingly, that the matter under consideration is a joint problem. The monkey in each case begins its career astride both their backs. All it has to do is move the wrong leg, and—presto!—the subordinate deftly disappears. The manager is thus left with another acquisition for his menagerie. Of course, monkeys can be trained not to move the wrong leg. But it is easier to prevent them from straddling backs in the first place.

Who Is Working for Whom?

Let us suppose that these same four subordinates are so thoughtful and considerate of their superior’s time that they take pains to allow no more than three monkeys to leap from each of their backs to his in any one day. In a five-day week, the manager will have picked up 60 screaming monkeys—far too many to do anything about them individually. So he spends his subordinate-imposed time juggling his “priorities.”

Late Friday afternoon, the manager is in his office with the door closed for privacy so he can contemplate the situation, while his subordinates are waiting outside to get their last chance before the weekend to remind him that he will have to “fish or cut bait.” Imagine what they are saying to one another about the manager as they wait: “What a bottleneck. He just can’t make up his mind. How anyone ever got that high up in our company without being able to make a decision we’ll never know.”

Worst of all, the reason the manager cannot make any of these “next moves” is that his time is almost entirely eaten up by meeting his own boss-imposed and system-imposed requirements. To control those tasks, he needs discretionary time that is in turn denied him when he is preoccupied with all these monkeys. The manager is caught in a vicious circle. But time is a-wasting (an understatement). The manager calls his secretary on the intercom and instructs her to tell his subordinates that he won’t be able to see them until Monday morning. At 7 PM, he drives home, intending with firm resolve to return to the office tomorrow to get caught up over the weekend. He returns bright and early the next day only to see, on the nearest green of the golf course across from his office window, a foursome. Guess who?

That does it. He now knows who is really working for whom. Moreover, he now sees that if he actually accomplishes during this weekend what he came to accomplish, his subordinates’ morale will go up so sharply that they will each raise the limit on the number of monkeys they will let jump from their backs to his. In short, he now sees, with the clarity of a revelation on a mountaintop, that the more he gets caught up, the more he will fall behind.

He leaves the office with the speed of a person running away from a plague. His plan? To get caught up on something else he hasn’t had time for in years: a weekend with his family. (This is one of the many varieties of discretionary time.)

Sunday night he enjoys ten hours of sweet, untroubled slumber, because he has clear-cut plans for Monday. He is going to get rid of his subordinate-imposed time. In exchange, he will get an equal amount of discretionary time, part of which he will spend with his subordinates to make sure that they learn the difficult but rewarding managerial art called “The Care and Feeding of Monkeys.”

The manager will also have plenty of discretionary time left over for getting control of the timing and the content not only of his boss-imposed time but also of his system-imposed time. It may take months, but compared with the way things have been, the rewards will be enormous. His ultimate objective is to manage his time. 

Getting Rid of the Monkeys

The manager returns to the office Monday morning just late enough so that his four subordinates have collected outside his office waiting to see him about their monkeys. He calls them in one by one. The purpose of each interview is to take a monkey, place it on the desk between them, and figure out together how the next move might conceivably be the subordinate’s. For certain monkeys, that will take some doing. The subordinate’s next move may be so elusive that the manager may decide—just for now—merely to let the monkey sleep on the subordinate’s back overnight and have him or her return with it at an appointed time the next morning to continue the joint quest for a more substantive move by the subordinate. (Monkeys sleep just as soundly overnight on subordinates’ backs as they do on superiors’.)

As each subordinate leaves the office, the manager is rewarded by the sight of a monkey leaving his office on the subordinate’s back. For the next 24 hours, the subordinate will not be waiting for the manager; instead, the manager will be waiting for the subordinate.

Later, as if to remind himself that there is no law against his engaging in a constructive exercise in the interim, the manager strolls by the subordinate’s office, sticks his head in the door, and cheerily asks, “How’s it coming?” (The time consumed in doing this is discretionary for the manager and boss imposed for the subordinate.)

When the subordinate (with the monkey on his or her back) and the manager meet at the appointed hour the next day, the manager explains the ground rules in words to this effect:

“At no time while I am helping you with this or any other problem will your problem become my problem. The instant your problem becomes mine, you no longer have a problem. I cannot help a person who hasn’t got a problem.

“When this meeting is over, the problem will leave this office exactly the way it came in—on your back. You may ask my help at any appointed time, and we will make a joint determination of what the next move will be and which of us will make it.

“In those rare instances where the next move turns out to be mine, you and I will determine it together. I will not make any move alone.”

The manager follows this same line of thought with each subordinate until about 11 am, when he realizes that he doesn’t have to close his door. His monkeys are gone. They will return—but by appointment only. His calendar will assure this. 

Transferring the Initiative

What we have been driving at in this monkey-on-the-back analogy is that managers can transfer initiative back to their subordinates and keep it there. We have tried to highlight a truism as obvious as it is subtle: namely, before developing initiative in subordinates, the manager must see to it that they have the initiative. Once the manager takes it back, he will no longer have it and he can kiss his discretionary time good-bye. It will all revert to subordinate-imposed time.

Nor can the manager and the subordinate effectively have the same initiative at the same time. The opener, “Boss, we’ve got a problem,” implies this duality and represents, as noted earlier, a monkey astride two backs, which is a very bad way to start a monkey on its career. Let us, therefore, take a few moments to examine what we call “The Anatomy of Managerial Initiative.”

There are five degrees of initiative that the manager can exercise in relation to the boss and to the system:

1. wait until told (lowest initiative);

2. ask what to do;

3. recommend, then take resulting action;

4. act, but advise at once;

5. act on own, then routinely report (highest initiative). 

Clearly, the manager should be professional enough not to indulge in initiatives 1 and 2 in relation either to the boss or to the system. A manager who uses initiative 1 has no control over either the timing or the content of boss-imposed or system-imposed time and thereby forfeits any right to complain about what he or she is told to do or when. The manager who uses initiative 2 has control over the timing but not over the content. Initiatives 3, 4, and 5 leave the manager in control of both, with the greatest amount of control being exercised at level 5.

In relation to subordinates, the manager’s job is twofold. First, to outlaw the use of initiatives 1 and 2, thus giving subordinates no choice but to learn and master “Completed Staff Work.” Second, to see that for each problem leaving his or her office there is an agreed-upon level of initiative assigned to it, in addition to an agreed-upon time and place for the next manager-subordinate conference. The latter should be duly noted on the manager’s calendar.

The Care and Feeding of Monkeys

To further clarify our analogy between the monkey on the back and the processes of assigning and controlling, we shall refer briefly to the manager’s appointment schedule, which calls for five hard-and-fast rules governing the “Care and Feeding of Monkeys.” (Violation of these rules will cost discretionary time.)

Rule 1.

Monkeys should be fed or shot. Otherwise, they will starve to death, and the manager will waste valuable time on postmortems or attempted resurrections.

Rule 2.

The monkey population should be kept below the maximum number the manager has time to feed. Subordinates will find time to work as many monkeys as he or she finds time to feed, but no more. It shouldn’t take more than five to 15 minutes to feed a properly maintained monkey.

Rule 3.

Monkeys should be fed by appointment only. The manager should not have to hunt down starving monkeys and feed them on a catch-as-catch-can basis.

Rule 4.

Monkeys should be fed face-to-face or by telephone, but never by mail. (Remember—with mail, the next move will be the manager’s.) Documentation may add to the feeding process, but it cannot take the place of feeding.

Rule 5.

Every monkey should have an assigned next feeding time and degree of initiative. These may be revised at any time by mutual consent but never allowed to become vague or indefinite. Otherwise, the monkey will either starve to death or wind up on the manager’s back.

“Get control over the timing and content of what you do” is appropriate advice for managing time. The first order of business is for the manager to enlarge his or her discretionary time by eliminating subordinate-imposed time. The second is for the manager to use a portion of this newfound discretionary time to see to it that each subordinate actually has the initiative and applies it. The third is for the manager to use another portion of the increased discretionary time to get and keep control of the timing and content of both boss-imposed and system-imposed time. All these steps will increase the manager’s leverage and enable the value of each hour spent in managing management time to multiply without theoretical limit.

Article link: https://hbr.org/1999/11/management-time-whos-got-the-monkey

  • William Oncken, Jr., was chairman of the William Oncken Corporation until his death in 1988. His son, William Oncken III, now heads the company.
  • Donald L. Wass was president of the William Oncken Company of Texas when the article first appeared. He now heads the Dallas–Fort Worth region of The Executive Committee (TEC), an international organization for presidents and CEOs.

Six practical actions for building the cloud talent you need – McKinsey

Posted by timmreardon on 01/22/2022
Posted in: Uncategorized. Leave a comment

January 19, 2022 | Article

Despite a shortage of cloud talent, top companies are finding ways to get past table stakes and build the capabilities needed.

Cloud has emerged as one of the most important battlegrounds for tech talent. Although more than $1 trillion of new value is at stake in the cloud, organizations are struggling to capture those benefits because they don’t have the right talent in place.

That reality became clear when a global mining company shifted to cloud. While the business achieved a two-thirds reduction in infrastructure provisioning time, it wasn’t able to take advantage of the change, because it didn’t have experienced cloud professionals and leaders who could pinpoint issues or knew what changes were really required to unlock value. The result was that product delivery took longer than before.

While investment in cloud transformations tripled between 2017 and 2021,1 this example underscores a prevailing issue: companies have not matched that pace on the talent front. Companies with high cloud aspirations often don’t have the right talent or culture to help them navigate complex cloud economics, operating-model changes, and the technical requirements needed to make cloud value a reality. In fact, 95 percent of respondents in a recent McKinsey survey cited lack of cloud talent and capabilities as one of the biggest roadblocks they face.2

Companies that have prioritized cloud talent, on the other hand, have seen a profound impact. A utility company experiencing a 40 percent surge in contract volumes, for example, managed to launch a new digital customer-service channel on cloud six times faster than originally planned because technology leaders knew how to spot and remove key cloud delivery hurdles.

An airline reported it was able to save more than half its IT cost when it was hit by a sudden business downturn after cloud engineers built controls to scale back underutilized workloads.

When a telecom’s product team changed some offering terms and conditions to increase revenues, service utilization spiked. The cloud team’s FinOps experts did a rapid cost attribution, which revealed expenses that negated the incremental revenue gain—something that the business quickly addressed.

In our experience, companies that put in place the right cloud-talent approach as part of the development of a comprehensive cloud transformation engine can capture similar benefits (Exhibit 1).

The cloud transformation engine is made up of three mutually reinforcing elements.

Drawing on our research and experience working with hundreds of organizations on their cloud transformations, we have identified six practical actions that can help companies build a top cloud-talent bench and operating model.

1. Find engineering talent with broad experience and skills

Effective talent management starts with a clear understanding of the cloud skills companies need. There are three broad categories of talent: engineers, who typically determine which cloud services can be consumed and how to do so safely and reliably; developers, who focus on piecing together those services to deliver innovative outcomes; and nontechnical staff, who typically focus on enabling maximum cloud benefit while retaining the core value their function typically brings, such as how to manage risk.

More important than understanding the types of talent needed is hiring those with experience. The most valuable cloud engineers and developers in many established enterprises don’t necessarily have loads of certifications. Instead, they bring extensive experience in IT-infrastructure organizations and have at least five years of experience working in cloud, hands-on development skills, and a habit of lifelong learning. For industries with complex technical requirements, such as defense or telecommunications, practitioners need additional specialist skills and knowledge.

In fact, more than 80 percent of cloud professionals in the United States and Australia have held at least five technology roles during careers spanning ten years or more. Their skills profiles are likely to be "M-shaped"—that is, generalists with some deep specializations—and they are unlikely to have occupied a hyperspecialized role immediately before shifting to cloud, though they may well have held one or more such roles earlier in their careers.

Experience working in traditional IT-infrastructure organizations is important because it gives cloud professionals an understanding of the range of fundamental design choices (for example, eventual data consistency across application instances, infrastructure platforms, and geographies) that need to be addressed to develop an application or platform. Blending this traditional expertise with deep cloud-specific knowledge and experience is critical to achieve robust, scalable, and secure solutions. Furthermore, top cloud service providers (CSPs) have developed so many capabilities and services that cloud developers and engineers can do many tasks themselves. Cloud developers and engineers don’t need the specific hyperspecialized technical knowledge—such as in hardware or different configuration languages—that is needed to manage on-premises environments.

2. Balance talent maturity levels and team composition

As many CIOs can tell you, cloud engineers with broad experience, knowledge, and multiple areas of specialization are scarce. What’s more, companies often spend too much time trying to land engineering stars, which delays getting the actual work done—or worse, drives them to rely too heavily on the talent they have.

Organizations can address this talent shortfall by balancing engineers with different maturity levels, sourcing strategies, and team compositions. When it comes to maturity levels, McKinsey research indicates that many successful IT organizations have about 30 percent of their engineers in the top "expert" and "proficient" tiers, 50 percent in the middle tier ("capable"), and 20 percent in the junior tiers ("novice" and "advanced beginner") (Exhibit 2). Leading companies focus on recruiting for anchor leadership roles and entry-level positions, as well as on training and working with partners (see more on these topics in the sections that follow).

With talent in short supply, organizations can aim for a pragmatic balance of engineers with different maturity levels.

Engineers with less experience can still bring value to a cloud team as long as the team has a balance of engineers with complementary backgrounds and skills (Exhibit 3). By having a clear view of the specific skills gaps at the team level, companies can better target how to build up their overall cloud capabilities more quickly.

A cloud-proficient team can be assembled from members with complementary talent profiles.

3. Build an upskilling program that is extensive, mandatory, and focused on need

New hires are not only hard to find but also two to three times more likely to leave than those already on the payroll. For this reason, most of the cloud workforce will need to come from within the existing organization, though not without upgrading their skills. With the right approach, upskilling in-house talent can close the gap and cost less than half as much as hiring. But companies aren't approaching their cloud training programs with enough urgency. While nine out of ten organizations are training their tech talent on cloud, for example, most of this training is voluntary and confined to engineers building core cloud platforms. Furthermore, our analysis shows that 44 percent of nontechnical functions are materially affected by cloud, yet only 25 percent of companies are training the individuals in these roles to respond.

Engineers need to learn new coding techniques, engineering approaches, and design patterns. They also need to understand how to optimize costs, create value, and manage both risk and security. Given the increasing need to work closely with business leaders, engineers will also need to learn how to collaborate effectively and be intimately familiar with business priorities. The best companies handpick their high-potential talent and create specific opportunities for them to accelerate their path to technical leadership. To ensure broad participation in upskilling programs, they also build in incentives, including mandatory and tailored cloud learning journeys supported by individual performance evaluations (Exhibit 4).

An experienced app developer new to cloud will need to go through a tailored learning journey.

This cloud education needs to extend to senior leaders as well as to nontechnical functions such as finance, procurement, product management, and risk. At the senior-leadership level, training should be focused on developing a deeper understanding of the tech and business implications of cloud to deliver on a company’s strategy. Targeted cloud training can help nontechnical people understand how their function should adapt to the speed and flexibility of cloud. Upskilling programs for procurement teams, for example, should focus on how cloud pricing works, what drives demand, how to assess it, and how to bring cost-management efforts to teams using cloud.

4. Build an engineering culture that optimizes the developer experience

Executives in a recent series of in-depth discussions said that nearly 95 percent of their IT processes got in the way of unleashing cloud talent and contributed to that talent leaving (Exhibit 5).

On average, nearly 95 percent of IT processes need updating, including 62 percent in need of a major overhaul.

For instance, traditional IT architecture often requires every solution to be reviewed and approved. Typically, however, the standards for approval aren’t clear. Architects use the equivalent of “case law,” or precedent, to provide guidance, which ultimately may not be approved by a review board and can take months to get to an outcome. Many architects focus on why new solutions are problematic and clash with their carefully crafted intentional architectures rather than on the value potential in doing things differently. As one public-sector executive noted, “The hardest part wasn’t getting teams to understand what’s possible with cloud but changing the mindset from ‘why you can’t do this’ to ‘how you can.’”

Building a culture where cloud talent thrives requires supporting them with a new operating model in which teams have autonomy to work continually on discrete products and platforms and in which automation reduces toil. Engineers need to build platforms that prioritize the developer experience by providing easy-to-consume self-service capabilities that automatically take care of default configurations and security controls as table stakes. Keeping cloud talent focused on important work, away from toil and meetings, and valuing hands-on work also helps retention. When one large bank found engineers were spending as little as 30 percent of their time on tools, it set up a dashboard to measure "engineering toil" and introduced a productivity team. By reducing process waste, increasing automation, and improving the developer experience, the bank achieved a 12-point improvement in engineer satisfaction.


A cultural shift is also needed among supporting functions (such as risk, cyber, architecture, and procurement) to provide the collaboration and tools needed to deliver products safely and quickly. Advanced organizations define cybersecurity policies and standards programmatically so they can be implemented automatically in the code used to provision cloud systems (security as code), and compliance is automatically checked when code is committed. Cloud talent can then get on with the job of building business innovations rather than waiting for and chasing approvals. Psychological safety (such as instituting “blameless” post-mortems, implementing mechanisms for feedback and input, and providing safe environments for experimentation) has emerged as particularly important in building an engineering culture, as has “inner sourcing” (the ability for developers to freely examine cloud-platform code, make adjustments if they see opportunity for improvement, and create pull requests so the code owners can review).
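As a sketch of what "security as code" can look like, policies can be expressed as named predicates evaluated against a resource definition before it is provisioned; production setups typically use dedicated tools such as Open Policy Agent, and the rule names and resource fields below are hypothetical:

```python
# Minimal policy-as-code sketch: each policy is a named predicate over a
# resource definition; a commit hook would run check() and block on failures.
# Field names ("encrypted", "public_access") are illustrative assumptions.

POLICIES = {
    "storage-must-be-encrypted": lambda r: r.get("encrypted", False),
    "no-public-access": lambda r: not r.get("public_access", False),
}

def check(resource):
    """Return the list of policy names the resource violates."""
    return [name for name, rule in POLICIES.items() if not rule(resource)]

bucket = {"type": "object-storage", "encrypted": True, "public_access": True}
print(check(bucket))  # ['no-public-access']
```

Because the rules are code, they can run automatically on every commit, which is what lets cloud teams build rather than chase approvals.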

That improvement in the developer experience should be complemented with a mindset of shared responsibility. Developers should understand that their objective isn't just to get new features delivered but to do so securely and within regulatory bounds. They also need to understand the business context, the user personas, customer journeys (both current and future state), and product adoption so that the products and platforms they deliver support the organization's broader digital strategy.

5. Consider using partners to accelerate development, and assign your best cloud leaders as owners

A third of today's cloud talent is employed by professional-services companies, so most organizations will struggle to meet their cloud-talent needs without partnering. Our recent analysis indicates, however, that many partnerships don't work well. The challenges range from partners driving simplistic "lift and shift" migrations to partners delivering complex policy as code that can't be maintained with in-house skills.

The starting point for effectively working with partners is to move past vendor-management table-stakes practices, such as bringing organizational context, providing support to navigate internal politics, and collaborating on third-party vendor negotiations. Instead, top companies work with a partner to not only benefit from its cloud capabilities but also build up their own capabilities along the way. Since working with partners has proven to be one of the most effective ways to train talent, in fact, capability building should be a core element in commercial arrangements.

With this in mind, leading organizations favor working with fewer, more-expert cloud-partner resources and set higher productivity and capability expectations for them. In the most successful relationships, partners help to inform key decisions and provide context and insight from other organizations. Both parties are equally invested in collaboration and reducing provider churn to maintain delivery consistency.


An important element of the relationship is assigning a cloud-specific leader as the owner. This differs from traditional partner-management roles in that the cloud leader not only ensures partners meet their commitments but also leads the cloud efforts and makes certain that internal capabilities are developed at the right pace to keep up with what the partners are delivering.

This can entail, for example, influencing work allocation to ensure that the engineers most in need are coached by partner experts, and managing a schedule of knowledge-sharing sessions. This role also owns key decisions that underpin cloud value (for example, cloud operating model, talent development, architecture, security policy, deployment patterns, and guardrail exceptions).

Given the important role cloud partners play, it pays to have the best IT talent work with them. One insurance provider noted, “When we prioritized our top IT talent to work with the partner, we ended up with a reliable environment, scaled benefits, and our talent upskilled to be cloud proficient.”

6. To keep top talent from leaving, focus on what motivates them

Fewer than half of cloud professionals occupy their roles for more than two years. So how do you keep them longer? Our analysis shows that compensation, access to cutting-edge tech, the work environment, and professional development are their biggest motivators.


For this reason, top companies ensure that their top talent can not only work with the most advanced technologies, languages, frameworks, and tools to evolve their skills but also develop the necessary experience to continue to be successful in the organization and industry. In the same vein, they give their leading people the freedom to experiment with new services and solutions by reducing managerial red tape and providing funding (within clear guidelines). One telecommunications executive explained, “When sales and marketing teams dictated the choice of tech vendors and tools, our cloud engineers constantly battled to marshal suboptimal solutions through antiquated processes. Morale was terrible, and our best people kept leaving. But now that our engineers choose our tools and technologies, they are deeply engaged.”

Finally, with remote work an increasing fact of life, organizations should institutionalize an operating model that allows employees to work remotely. That includes scaling collaboration tools, security protocols, and remote-friendly teamwork approaches. This can not only help drive retention but also be a tool in attracting talent.


Without cloud talent, the value that cloud offers is simply unattainable. Companies can win the war for cloud talent, however, when they combine a clear understanding of what talent they really need, a culture where that talent can thrive, and a commitment to practical changes so they can capture cloud value quickly.

Article link: https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/six-practical-actions-for-building-the-cloud-talent-you-need?cid=app

ABOUT THE AUTHOR(S)

Brant Carson and Dorian Gärtner are partners in McKinsey's Sydney office, where Keerthi Iyengar is an associate partner; Anand Swaminathan is a senior partner in the Singapore office; and Wayne Vest is an expert in the Melbourne office.

The authors wish to thank Oliver Bossert, Jayne Giemzo, Martin Harrysson, Lawrence Hunt, James Kaplan, Ranja Reda-Kouba, Angelika Reich, Pamela Simon, Megha Sinha, and Suman Thareja for their contributions to this article.

The Pentagon’s new cybersecurity model is better, but still an incremental solution to a big challenge – Federal News Network

Posted by timmreardon on 01/18/2022
Posted in: Uncategorized. Leave a comment

The Pentagon announced in November a new "strategic direction" for its Cybersecurity Maturity Model Certification, calling it CMMC 2.0 and essentially admitting the first iteration was overly complex and costly. The new version better aligns with existing federal standards and requirements but falls well short of being the "bold change" President Biden called for in his much-touted May cybersecurity executive order.

Prior to the creation of CMMC, federal acquisition regulations required all defense contractors that interacted with controlled unclassified information (CUI) to implement the basic cyber hygiene safeguards listed in the National Institute of Standards and Technology guidelines, NIST Special Publication (SP) 800-171. Companies would then conduct self-assessments of their compliance. Predictably, not all companies assessed themselves equally or honestly, or addressed the issues they self-identified.

In November 2020, after nearly two years of development, the Defense Department introduced the original CMMC. Its most significant change was a new requirement that a third party conduct the assessment for all organizations seeking contracts, including universities applying for grants. The Association of American Universities and other associations warned the "potentially burdensome and harmful requirements" of CMMC would "have a chilling effect" on fundamental research. CMMC had no flexibility and required all organizations, regardless of size, to meet all requirements. Thus, CMMC's mandates that companies pay third-party assessors and implement potentially unnecessary security controls created significant expenses for small and medium-sized businesses. Meanwhile, in the race to roll out CMMC, DoD apparently disregarded industry concerns about the lack of clarity regarding its implementation.

The starkest example of CMMC's failings was the disastrous establishment of the CMMC Accreditation Body (AB), a volunteer organization that was supposed to certify hundreds of companies as "certified third-party assessor organizations." Instead, the CMMC-AB created a pay-to-play assessment ecosystem in the form of a partnership program that enabled a company to become a "recognized leader in cybersecurity and an early supporter of CMMC-AB" for $500,000. Cybersecurity standards consulting firm Oxebridge Quality Resources International also accused the CMMC-AB of fraud, money laundering and federal bribery, and ethics violations. Oxebridge filed a formal complaint alleging felony fraud and later reported that the DoD Inspector General is conducting an investigation.

DoD launched a CMMC review in March in part to "reinforce trust and confidence in the maturing CMMC assessment ecosystem," explained Jesse Salazar, deputy assistant secretary of defense for industrial policy. After two years of scandals focused more on profit and power than on advancing the cybersecurity posture of the defense industrial base, this is a welcome goal.

To fix the ecosystem, CMMC 2.0 reduces the security certification tiers from five to three and removes the third-party assessment requirement for level one and part of level two, allowing contractors to return to self-attestation. For the other part of level two (advanced), CMMC 2.0 requires a third-party assessment, meaning the new industry of CMMC assessors still has work, just a smaller market. Level three certification requires government assessors, who are already in short supply and high demand.

In addition to giving industry the flexibility it will need to meet requirements and establish an effective foundation of cybersecurity, CMMC 2.0 removes extra requirements that went beyond those included in NIST SP 800-171. David McKeown, DoD’s deputy chief information officer, explained in a town hall on November 9 that the Pentagon will “not invent a whole bunch of extra controls on our own. If additional controls are needed, we are going to work with NIST to get those added in.” Hopefully, the new leadership and direction will also leverage industry expertise and recommendations to improve CMMC 2.0’s efficacy.

To establish baseline cyber hygiene practices that protect all CUI and not just that relevant to the Defense Department, the Biden administration should consider government-wide implementation of CMMC rather than the development by each department and agency of its own separate model. At the same time, the administration should be honest about the limitations of CMMC to solve the cybersecurity crisis that the government and private sector face.

DoD has touted CMMC as a solution to supply chain risk management. It is not. The cybersecurity safeguards in NIST SP 800-171 are basic cyber hygiene practices. Separate NIST guidelines, NIST SP 800-161, identify supply chain controls, and CMMC makes no reference to them.

Days after the Pentagon’s CMMC 2.0 announcement, CNN reported that hackers breached companies across the defense industrial base by exploiting vulnerabilities that bypassed the authentication process. Which controls in NIST SP 800-171 would have stopped this? None. How would CMMC 1.0 or CMMC 2.0 have protected organizations from a SolarWinds-type attack? They would not have. SolarWinds provided a commercial, off-the-shelf product not subject to CMMC.

President Biden's May executive order states, "Incremental improvements will not give us the security we need; instead, the federal government needs to make bold changes and significant investments to defend the vital institutions that underpin the American way of life."

Unfortunately, CMMC 2.0 is just such an incremental improvement.

If the Biden administration wants to make bold changes, its priority must be securing the supply chain, not just policing companies' basic cyber hygiene practices. In a recent conversation, John Weiler of the IT Acquisition Advisory Council, a founding member of the AB, said that to make CMMC 2.0 effective, the Pentagon needs a "very robust supply chain risk management public and private partnership to rapidly assess the technologies and architectures that government and industry rely upon."

One way to achieve this would be to require software vendors to provide a software bill of materials (SBOM), a list of nested software components designed to enable supply chain transparency. The government should also create a single, central capability that continuously monitors SBOMs. Analyzing SBOMs can reveal otherwise hidden dependencies of components built by foreign nationals of adversarial countries and other leading indicators of risk. With this knowledge, the government and industry partners can take appropriate risk mitigations.
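In code, continuously monitoring an SBOM reduces to walking its component list and flagging entries that match risk criteria. A hedged sketch against a CycloneDX-style JSON fragment (the component names, suppliers, and the supplier-trust criterion are invented for illustration; real SBOM analysis would check many more signals):

```python
import json

# Hypothetical CycloneDX-style SBOM fragment; names and suppliers are invented.
sbom_json = """
{
  "components": [
    {"name": "libfoo", "version": "1.2.3", "supplier": {"name": "ExampleCorp"}},
    {"name": "netutil", "version": "0.9.1", "supplier": {"name": "UnknownOrigin Ltd"}}
  ]
}
"""

def flag_components(sbom, trusted_suppliers):
    """Return names of components whose supplier is not on the trusted list."""
    return [
        c["name"]
        for c in sbom.get("components", [])
        if c.get("supplier", {}).get("name") not in trusted_suppliers
    ]

sbom = json.loads(sbom_json)
print(flag_components(sbom, {"ExampleCorp"}))  # ['netutil']
```

A central monitoring capability would run checks like this against every vendor's SBOM on each software update, surfacing hidden dependencies for risk review.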

CMMC 2.0 may fix some of the flaws of its predecessor, but the hard work to strengthen cybersecurity still lies ahead.

Article link: https://federalnewsnetwork.com/commentary/2022/01/the-pentagons-new-cybersecurity-model-is-better-but-still-an-incremental-solution-to-a-big-challenge/amp/

Dr. Georgianna Shea is the chief technologist of the Transformative Cyber Innovation Lab and Center on Cyber and Technology Innovation at the Foundation for Defense of Democracies (FDD) and previously served as a subject matter expert and consultant to the Office of the Secretary of Defense on cyber resiliency. FDD is a Washington, DC-based, nonpartisan research institute focusing on national security and foreign policy.

Categories: Commentary, DefenseTags: CMMC 2.0, cyber maturity model, Foundation for Defense of Democracies, Georgianna Shea

Navy Cryptologic Warfare Officers Cannot Do Cyber – USNI

Posted by timmreardon on 01/18/2022
Posted in: Uncategorized. Leave a comment

By Lieutenant Commander Derek S. Bernsen, U.S. Navy Reserve

January 2022 Proceedings, Vol. 148/1/1,427, Now Hear This


Navy cyber is a ship without a rudder. While every other service has one cyber designator, the Navy's cyber expertise resides in three separate communities. As a result, the three communities are each plagued with unnecessary problems, and none is fully empowered or capable of leading the domain. To solve this issue, the Navy must consolidate responsibility for cyber, invest in the cyber warfare engineer community, and require deep technical experience for all cyber roles.

Leadership and management for Navy cyber is currently divided among cryptologic warfare officers (CWOs), information professionals (IPs), and cyber warfare engineers (CWEs). CWOs are ostensibly responsible for offensive and defensive cyber operations, IPs for operating the information technology systems, and CWEs for the technical engineering work that enables cyber operations (e.g., conducting vulnerability research and developing exploits and capabilities). This model may appear reasonable to those not versed in cyber operations, but it significantly inhibits the service from realizing a potent cyber warfighting capability.

CWOs are spread too thin—forced to juggle five different areas of expertise without the focus or depth each requires. IPs have some technical depth but not enough for the more intricate cyber defense tasks, such as malware reverse engineering. CWEs have the expertise to do everything cyber, but are too few in number (currently only 68 personnel). Because of this division of responsibility, major decisions regarding cyber are made by individuals without technical expertise, and these communities are not aligned to capture the value each brings.

Navy cyber also suffers from undervaluation and apathy. Navy leaders hold a misconception that cyber is a purely joint endeavor from which the Navy receives no benefit. Yet each service has specific uses for cyber professionals and the Navy has done little to invest in maritime cyber. This undervaluation creates a negative feedback loop exacerbated by a lack of demonstrated benefit from the CWO community. Continuing to allow responsibility for cyber to be fragmented will turn this misconception into reality. If the Navy continues down its current path, it will have nothing to contribute to the cyber fight in the next war.

Cryptologic Warfare Officers Cannot Do Cyber, Too

The CWO community (roughly 900 officers), the de facto primary cyber community, also provides expertise for signals intelligence and all information operations missions (electronic warfare, operational security, military deception, and military information support operations). It is difficult enough for one officer community to develop expertise in each of these missions, let alone to add the cyber mission. CWO community leaders have failed to grasp both the importance and the unique requirements of cyber warfare, resulting in no path to develop cyber experts within their ranks and causing cyber to be undervalued.

Often, CWOs are deemed cyber experts and put in charge of Cyber Mission Force teams after a single course at the Naval Postgraduate School (NPS) in Monterey, California, or a one-month basic course in Pensacola, Florida. A cyber expert must be able to solve hard technical problems related to computer security with minimal support because they understand the underlying technology and have sufficient breadth and depth in the many subfields of cyber. Cyber is far larger than many realize: cryptography, forensics, vulnerability research, penetration testing, exploitation, (cyber) operations, steganography, malware, cyber-threat intelligence, reverse engineering, networking, and development (including exploits, payloads, and effects). Given all their information warfare missions, the idea that CWOs can also become cyber experts is a fallacy. Even with NPS's cyber systems and operations master's degree, CWOs without several years of focused work in technical cyber areas are nothing more than cyber dilettantes.
Furthermore, the CWO community does not have a mechanism to screen for technical talent, nor does it value technical ability. While CWOs claim to value technical talent by encouraging applicants to have STEM degrees, a STEM degree is not a requirement, nor is knowledge gained in academia put into practice. This is obvious from the community's published values.

Needed: A Unified Cyber Warfare Community

The CWO community fails to grasp the scale, potential, and diverse skills within the cyber domain. Seeing it as a unitary skill in which one can get sufficient “cyber-stink” after completing a single assignment at the National Security Agency (NSA), the CWO community continues to undervalue the unique challenges associated with conducting operations, developing capabilities, and managing personnel. Most CWOs view cyber as a black box and not the vast domain it is. Nicolas Chaillan, the Air Force’s first chief software officer decried this issue: “Please stop putting a major or [lieutenant colonel] (despite their devotion, exceptional attitude, and culture) in charge [of technical projects affecting millions of users] when they have no previous experience in that field,” he wrote. “We would not put a pilot in the cockpit without extensive flight training; why would we expect someone with no IT experience to be close to successful?”

At present, there is no incentive for the CWO and IP communities to develop genuine cyber expertise and a viable cyber officer career path, because doing so would require prioritizing cyber above their traditional areas of expertise. Because of this, each community invests far too little in cyber professionalization, which further undervalues cyber. This negative feedback cycle causes the NSA and the other military services to view the Navy as a poor performer on all things cyber. Even U.S. Cyber Command recognizes this and does not list Navy CWO as a cyber officer community on its careers page (the Navy enlisted cryptologic technician rating is included).

The CWO and IP communities cannot retain genuine cyber talent, as those officers with the aptitude and desire to be cyber experts feel underused and unappreciated. Why become a cyber expert when your community forces you to continually take unrelated jobs? Even if CWOs with cyber talent were to continue their service, they must check boxes in the five CWO disciplines or risk failing to promote. If they are lucky, they may get a cyber job every third or fourth tour, for a total of two or three tours in a 25–30 year career.

Apparently, the CWO community has failed to grasp the lessons of the book Range: sample and gain diverse experience in the first few years of a career, then specialize for excellence rather than dabbling in various things forever. The Army, Air Force, and Marine Corps do not ask their officers to be cyber professionals on a part-time basis, instead devoting large communities (the Air Force has nearly 3,000 cyber officers) to the problem and structuring career paths around creating cyber leaders. These services do not have cyber officer career paths that involve random assignments in infantry battalions or aircraft maintenance squadrons just so their officers can experience "the real military." Yet the Navy's CWO community does just that.

According to the Secretary of the Navy's March 2019 Cybersecurity Readiness Review, the Navy's "culture, processes, structure, and resources are ill-suited for this new era" and "a real appreciation of the cyber threat continues to be absent from the fabric of [Navy] culture." The Navy frequently talks about the importance of cyber, but its actions clearly do not match its words. CWOs should not be the ones making decisions about cyberspace, and their lack of cyber expertise has left the Navy unprepared for many incidents. The Navy's Cybersecurity Task Force and Operation Rolling Tide, for example, were created only after Iranian hackers were already in Navy networks.

Empower the Cyber Warfare Engineer Community

The cyber warfare engineer community was created as the home of the true cyber experts. Unfortunately, CWEs face many challenges that the other communities are unwilling to help resolve. The CWE community currently is small and does not have the manpower to take on the entire cyber mission, nor does it have billets in some of the most important cyber jobs in the Navy. Most CWE billets are at Navy Cyber Warfare Development Group in Suitland, Maryland, and at NSA up the road. The CWE community thus far has failed to obtain billets at numerous Navy and joint commands. Growing a community in a zero-sum Department of Defense (DoD) manpower environment is a slow, cumbersome process; with few exceptions, growth means taking billets from other communities, something no community in the Navy is keen to allow, even when billets go unfilled for years.

What makes the CWE community capable in cyberspace is its focus on developing deep technical expertise. CWEs have major advantages over other communities at accession thanks to the technical interview process and strict STEM degree requirements. To be competitive, applicants must already have deep technical expertise in security. Candidates compete in a 48-hour capture-the-flag screener, followed by challenging programming assignments, and, finally, a rigorous technical interview. Once in the community, CWEs are sent through a grueling six-month training pipeline in which failure is not tolerated. This is like the Navy’s Basic Underwater Demolition/SEAL (BUD/S) training or Navy Nuclear Power School, only for hackers. Only top talent is selected, with less than 30 percent of applicants passing the capture-the-flag screener and less than 40 percent passing the interview. This leads to more than 95 percent of CWEs completing the training pipeline, significantly better than what Navy civilian and enlisted cryptologic technician–network (CTN) developers achieve.
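Those screening figures compound into a very narrow funnel. A rough back-of-the-envelope illustration, treating the quoted bounds as point estimates and assuming the stages are independent (an assumption the article does not state):

```python
# Back-of-the-envelope CWE selection funnel.
# The percentages below are the bounds quoted in the article, treated as
# point estimates; stage independence is an assumption for illustration.

ctf_pass = 0.30        # < 30% pass the capture-the-flag screener
interview_pass = 0.40  # < 40% of those pass the technical interview
pipeline_pass = 0.95   # > 95% of selectees complete the training pipeline

selected = ctf_pass * interview_pass     # fraction of applicants selected
graduated = selected * pipeline_pass     # fraction who finish the pipeline

print(f"Selected:  {selected:.1%} of applicants")   # ~12.0%
print(f"Graduated: {graduated:.1%} of applicants")  # ~11.4%
```

Under these assumptions, roughly one applicant in nine ends up a trained CWE, which puts the community's selectivity in the same neighborhood the author claims for BUD/S-style pipelines.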

Aside from rigorous screening and training, CWEs have more in common with SEALs than is readily apparent. Like SEALs, CWEs are required to become proficient in a wide range of cyber skills, allowing them to do what others cannot. CWEs possess the skills to develop the most cutting-edge offensive and defensive cyber capabilities, similar to those required to win a competition like the Zero Day Initiative’s Pwn2Own. China takes Pwn2Own and its own competition, the Tianfu Cup, seriously as a show of force.

No matter where the nation and DoD go regarding building a cyber force, the Navy will always need its own cyber professionals to operate from and defend naval platforms and target maritime adversaries. The CWEs are the only community that has the requisite technical depth, experience, and focus to lead in cyberspace.

The ultimate solution for the Navy will be to turn over full responsibility for the cyber domain to the CWEs—a course of action that will benefit not just the three communities currently involved, but the entire service. CWEs will be empowered to grow, lead, and provide domain superiority. The CWOs will be freed to focus on their traditional areas of expertise. If the IPs continue to expand their own defensive role in cyber operations, the CWE community will not need to grow to the same scale as the Air Force’s or Army’s cyber officer communities, but it certainly will need more than 1,000 CWEs. This would allow the Navy not only to become a top-tier remote cyber operations organization, but also to integrate CWEs across the fleet and with Naval Special Warfare, in addition to providing direct support to the fleet when needed.

Empowering the CWEs with full control of the cyber domain also will go a long way toward improving retention, resolving billeting issues, and creating more opportunities for impact. But it is an incomplete solution. Empowerment must also involve other actions to attract and retain high performers with some of the most in-demand skills in one of the most in-demand industries in the world. There currently are no incentives, bonus pay, or accession bonus options that can be offered to CWEs. As NSA has repeatedly learned, it needs skilled cyber people more than those people need NSA. The same is true for the Navy. NSA has created numerous special pay bands and incentive programs to retain personnel. The Army recognizes the importance of technical leaders and offers software developers and cyber personnel its maximum retention bonus and proficiency pay. The Navy should provide bonus pay for proficiency in programming languages, special duty pay, and better career opportunities, and it should establish a CWE reserve component.

Adversary nations are on near-equal footing in cyberspace and cyber provides a mechanism for outsized impact, even to those that are less sophisticated. The Navy cannot keep untrained and unfocused CWOs operating in this domain. The impact of well-trained CWEs, focused cyber experts, will always be greater than CWOs with only a basic understanding of cyber. Removing cryptologic warfare officers from the cyber domain is a critical move the Navy must take.

Empowering the community that has the intense screening process, deep technical expertise, and focus on the cyber domain is the only way the Navy can regain its cyber footing. No amount of time in roles that claim to be cyber but provide little technical depth will change the fact that the Navy currently has unqualified personnel in every cyber role. This problem is solvable, but it requires a major restructuring of responsibility for the cyber domain, and it requires the Navy to put its money and effort where its mouth is and take cyber seriously.

Article link: https://www.usni.org/magazines/proceedings/2022/january/navy-cryptologic-warfare-officers-cannot-do-cyber

The federal government doesn’t sound like it understands cyber risk management

Posted by timmreardon on 01/18/2022
Posted in: Uncategorized.

How to not deploy securely (or possibly at all).

Walter Haydock

“We’re at the point where the federal government simply can’t bear the risk of buying insecure software anymore.”

– Jeff Greene, acting Senior Director for Cybersecurity at the National Security Council (NSC), April 2021.

“Security should be table stakes at the end of the day.”

– Jenn Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA), December 2021.

“You should not have to pay extra for security, I’m sorry, that is immoral for companies [to charge for]…I’d love to see an executive order that any cloud product that is bought by a federal agency has to support [multi factor authentication], [single sign on] and basic audit in the most base paid package.”

– Alex Stamos, member of the CISA Cybersecurity Advisory Committee, December 2021.

“The FTC intends to use its full legal authority to pursue companies that fail to take reasonable steps to protect consumer data from exposure as a result of Log4j, or similar known vulnerabilities [sic] in the future.”

– Unsigned Federal Trade Commission (FTC) blog post, January 2022.

Given how behind the curve the United States federal government has been in terms of information technology broadly, and cybersecurity in particular, these comments might seem like steps in the right direction. Unfortunately, and in contrast to other observers, I’m going to say that they are not.

The problem with the attitudes expressed above is that they make no acknowledgment of any sort of risk/reward tradeoff being necessary. Furthermore, they all allude to some (apparently universally accepted) standard of “security” below which no organization may reasonably operate. Finally, despite this haziness, a regulatory agency is threatening legal action against those who don’t comply with this unclear mandate.

Don’t get me wrong; I write a blog about cybersecurity, and, if all other things are equal, more security is better. Unfortunately, all other things are rarely equal, and tradeoffs and sacrifices are invariably necessary. As I wrote in my first post, organizations exist to deliver value, not to simply be secure. Thus, cybersecurity advisors should strive to communicate with precision about the risks and rewards of various courses of action so that business (or mission, in the case of the government) leaders can make the best-informed decisions possible.

None of the above comments suggest even an acknowledgment of this dynamic, and at least to me, these types of attitudes have become even more common in government circles recently. Furthermore, these statements represent broad platitudes that make taking decisive action more, rather than less, difficult.

Unclear risk tolerances

Although I have previously made recommendations for how the government should communicate about cyber risk, I have yet to see any comprehensive standard that gives either its employees or its contractors a clear way to understand what levels are acceptable and what are not. The vast majority of government-mandated requirements, such as FedRAMP and the Cybersecurity Maturity Model Certification (CMMC), focus on controls, rather than the likelihood and severity of potential adverse outcomes. Even these control requirements are often counterproductive, and the only risk-based standards that I have seen are vague and not actionable.

Thus, the only way I can interpret the first two quotations above is that any chance of a malicious party impacting data confidentiality, integrity, or availability represents an unacceptable risk and that the government should pull the plug on any system presenting such a risk.

Obviously, this is an extreme position, as America – and much of the world – would rapidly descend into chaos if this happened. Thus, the statements from Greene and Easterly do not appear to provide any value for those implementing the government’s cybersecurity programs. Furthermore, they set the tone that “no risk is an acceptable risk,” suggesting that anyone on the front lines who takes such a risk to accomplish the mission will be hung out to dry. This creates a paradoxical double bind that is common to many bureaucracies.

Something for nothing

To Stamos’ point, he seems to also imply that federal contractors should not be able to make certain types of tradeoff analyses when designing and selling their software. He further suggests that vendors who sell software to the government should only be allowed to market a product that meets his stated requirements, regardless of the resulting cost to the taxpayer or even whether the real-world use case requires that level of security. Additionally, he does not seem to acknowledge any second-order consequences of such a policy, such as essentially conceding all federal software business to established players who can “check all the boxes.” I am nearly certain that these contractors would simply raise their prices for their base offerings accordingly, eliminating the ability for individual mission owners to select a cheaper option when it is “secure enough.” Such a proposed executive order would also likely exclude from consideration any startups that have a very narrowly-focused product with game-changing capabilities that can deliver massive value, but don’t have the enterprise features Stamos demands.

Again, I’m all for multi-factor authentication and auditing features. The thing is, though, no matter how much one might wish it to be the case, these things are not free to develop and maintain. Implementing this functionality requires time, money, and manpower. There always exists a range of other initiatives to which software companies – or the government – might dedicate these resources, and weighing the available choices is the only logical course of action for those entrusted by the taxpayers to protect them and their data.

You better watch out…

The final statement – by the FTC – is perhaps the most alarming one, as it is clearly driven by one or more government staffers reading the headlines of a major newspaper and then deciding that “we need to do something about this.” Aside from the use of the word “reasonable,” this legal threat from a powerful regulatory agency provides little context on how to balance competing priorities.

For example, what if CVE-2021-44228 (known informally as “log4shell”) allowed an attacker to siphon protected health information (PHI) from a popular internet-connected insulin pump (or even kill individual targets)? Should the manufacturer push an update for log4j as soon as humanly possible? How much quality assurance (QA) testing is it “reasonable” to conduct before doing so? There are potentially severe consequences for introducing a breaking code change into such pumps (e.g. devices malfunctioning across the entire user base and a large number of people going into diabetic shock) as a result of insufficient QA, but the countervailing risks of physical harm and lost privacy are also significant. What does a “reasonable” person do in this situation? It’s not immediately clear, and deciding how to proceed would require a detailed risk/reward analysis. The FTC provides no guidelines for how to conduct it.
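The kind of risk/reward analysis the post is asking for can at least be made explicit. A minimal expected-loss sketch of the patch-timing dilemma, in which every probability and dollar figure is an invented placeholder rather than a real estimate for any device or vulnerability:

```python
# Illustrative expected-loss comparison for the insulin-pump patch dilemma.
# ALL probabilities and dollar impacts are hypothetical placeholders.

def expected_loss(p_event: float, impact: float) -> float:
    """Classic risk formula: likelihood times severity."""
    return p_event * impact

# Option A: push the log4j fix immediately, with minimal QA.
rushed_patch = expected_loss(p_event=0.05, impact=9_000_000)        # breaking-change risk
exposure_a   = expected_loss(p_event=0.01, impact=5_000_000)        # short exposure window

# Option B: two weeks of QA before patching.
tested_patch = expected_loss(p_event=0.005, impact=9_000_000)       # lower breakage risk
exposure_b   = expected_loss(p_event=0.10, impact=5_000_000)        # longer exposure window

option_a = rushed_patch + exposure_a
option_b = tested_patch + exposure_b
print(f"Patch now:   expected loss ${option_a:,.0f}")
print(f"Patch later: expected loss ${option_b:,.0f}")
```

With these placeholder numbers, patching immediately carries the lower expected loss (about $500,000 versus $545,000), but flipping any single input can reverse the answer. The point is the structure of the comparison, not the numbers: a “reasonable” standard only becomes actionable once someone supplies the likelihoods and impacts, which is precisely what the FTC’s guidance omits.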

Even more confusingly, the Food and Drug Administration (FDA), which regulates medical devices, issued the following guidance:

Manufacturers should assess whether they are affected by the [log4shell] vulnerability, evaluate the risk, and develop remediation actions…[m]anufacturers who may be affected by this most recent issue should communicate with their customers and coordinate with CISA.

Like the FTC, the FDA’s communication makes no mention of timelines, risk tolerances, or (non-)acceptability of compensating controls. From the perspective of medical device makers, it is thankfully less ominous and prescriptive with respect to the required steps. But from a societal perspective, I don’t think it’s a good thing to have two agencies with overlapping jurisdictions issuing vague, threatening, and potentially conflicting guidance regarding acceptable cyber risk tolerances.

Finally, it is not clear that the FTC staffers who wrote this post actually know what they are talking about. For example, in the quotation above, they conflate the software library (log4j) with a vulnerability in it. Additionally, they seem to limit their potential enforcement actions to companies that don’t take reasonable action in the face of “similar known vulnerabilities.” This phrasing to me would seem to exclude unknown (to anyone except an attacker prior to their exploitation) vulnerabilities. This distinction is not merely academic, as different types of security bugs require different tools to surface them. Thus, would companies who neglect to safeguard consumer data because of their failure to identify unknown vulnerabilities – by not conducting static code analysis or penetration testing, for example – get a free pass? Considering that most identified malicious exploitations are of this latter type of flaw, that would be a strange position for the FTC to take.

Conclusion

The four statements at the beginning of this post all sound like something an elected official would say at a campaign rally. Easterly herself seemed to realize this in acknowledging “moral outrage” on the part of Stamos. Frankly, if it were a politician saying these things, I would be okay with it. Elected officials are generalists who need to appeal broadly to a wide constituency, and as long as they hire capable folks to implement policies in line with broad strategic guidance, this can lead to good outcomes.

The problem here is that the people saying the above things are the implementers of such broad strategic guidance. What they are saying – at least publicly – is not actionable or useful, and it potentially reveals a lack of deep thinking with respect to the unavoidable tradeoffs involved with using technology.

Frankly, and this might be cynical, but I believe these statements represent preemptive CYA efforts. If another serious breach occurs that gets national attention (e.g. like what happened with SolarWinds), these folks will be able to say “we told you so.” As the below passage from the essay “How Complex Systems Fail” highlights, this tendency is common:

Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure.

Although widespread, such bureaucratic techniques are not acceptable on the part of the nation’s most senior cybersecurity leaders. Moral outrage and CYA don’t help improve the functionality and security of our nation’s networks. Even more importantly, if the government is going to threaten companies with legal liability, it will need to provide much clearer standards regarding which risks are acceptable – compared to the potential rewards – and which are not.

And, in case anyone is wondering, yes, I am willing to help solve this problem! NSC, CISA, FTC, FDA, or any other alphabet-soup agency that wants to tackle this: drop me a line and I’ll do my best to assist.

Article link: https://haydock.substack.com/p/the-federal-government-doesnt-sound
