WASHINGTON — The chief digital innovation officer for the U.S. Navy this week hailed 5G as a “great enabler” of future operations, as the service experiments with the technology and focuses on greater connectivity through Project Overmatch.
Fifth-generation wireless gear is being considered for a range of applications, Michael Galbraith suggested April 19, from pier-side and shipboard links to smart warehouses and other logistical feats.
“Think of a ship, think of a carrier group — we need to work in the run quiet, run deep kind of thing, can’t use SATCOM,” Galbraith said at the Cloudera Government Forum. “I still need to communicate from the first deck to the third deck, I still need to communicate from that carrier to the destroyer, and 5G and other millimeter wave technologies allow that to happen.”
Exactly how 5G, the so-called internet of things and data collection interact with the Navy’s major networks is an “issue that we are actively working on,” Galbraith said. “You hear about Project Overmatch, communication as a service. That is a part of that work that the team that [Rear Adm. Doug Small’s] group is doing. That is vitally important.”
Project Overmatch is the Navy’s clandestine contribution to Joint All-Domain Command and Control, a broader Pentagon effort to better connect sensors and shooters and dissolve communication barriers between the services. Small is the leader of the Naval Information Warfare Systems Command, a key JADC2 player.
Small in early April told C4ISRNET his team was “working across systems commands, warfare centers, services and with industry to provide the architecture, or framework, for how the various components are stitched together, including the networks, infrastructure, data architecture, tools and analytics to improve on our decision advantage.”
“Ultimately,” Small said at the time, “this will aid our ability to provide synchronized effects near and far in all domains, ensuring a more lethal and better-connected fleet now and far into the future.”
Fifth-generation wireless technology promises faster speeds, lower latency and other improvements compared with its predecessors. Alone, 5G is “more, better, faster,” Galbraith said. But when synced with other capabilities, he added, the potential really shines.
“We in the Navy, you know, we work at the edge, have been working at the edge since the 1700s,” he said. “In that information domain, there are other network capabilities, and 5G just is, again, a great enabler.”
The Department of Defense has selected a dozen military installations as test beds for 5G, including sites in California, Georgia and Virginia. This month, the department unveiled a multimillion-dollar challenge to accelerate the growth and adoption of a fifth-generation open ecosystem.
AT&T Inc. this year claimed initial success in setting up a 5G network experiment that could realize smart warehouses for the Navy, Defense News reported. The service believes smart warehouses could boost the efficiency and fidelity of its logistics.
“When we first started experimenting and piloting in 5G,” Galbraith said Tuesday, “we looked at what our priorities were.”
The Defense Department received nearly $338 million for 5G and microelectronics in fiscal year 2022. It requested $250 million for fiscal year 2023, budget documents show.
Colin Demarest is a reporter at C4ISRNET, where he covers networks and IT. Colin previously covered the Department of Energy and its National Nuclear Security Administration — namely nuclear weapons development and Cold War cleanup — for a daily newspaper in South Carolina.
The White House announced new plans to promote quantum technology research and development while helping U.S. computer networks transition to post-quantum cryptography standards.
The Biden administration announced two new directives that focus on incorporating quantum computing technology into the U.S. cybersecurity infrastructure and policy landscape, the latest federal action regarding the emerging technology.
Announced on Tuesday afternoon, the directives establish a government oversight board to advance quantum science and technology development, with an emphasis on innovations in quantum computing as they relate to cryptography.
“While quantum itself is not new, recent developments in the field have shown the potential to drive innovation across the American economy, from energy to medicine, through advancements in computing and networking, much like the earlier technological revolutions brought about by the Internet, GPS, and even the combustion engine,” a senior administration official said on a background call.
The first directive establishes an independent advisory body for quantum information science and technology. The board will work to disseminate knowledge about the latest developments in quantum to lawmakers and the general public and to guide policymaking. It will sit directly beneath the White House.
The second directive focuses on quantum’s intersection within national security. Dubbed the National Security Memorandum, it will act as a guidebook for agencies and other institutions to transition their systems to become quantum-resistant.
Officials are pushing for digital networks to adopt post-quantum cryptography standards, since a functioning quantum computer could use advanced algorithms to break standard encryption.
“Given the complexity, costs and time required to fully transition to quantum resistance standards, the (National Security Memorandum) provides a roadmap for agencies to inventory their IT systems for quantum-vulnerable cryptography,” the official said.
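For illustration only, here is a minimal sketch of what one step of such an inventory might look like: flagging TLS endpoints whose certificates carry RSA or elliptic-curve public keys, the algorithms a sufficiently large quantum computer running Shor’s algorithm could break. This is our sketch, not any agency’s tooling; it assumes Python with the third-party cryptography package, and the hostnames are placeholders.

```python
# Sketch: inventory TLS endpoints for quantum-vulnerable public-key algorithms.
# Assumes the third-party `cryptography` package; hostnames are placeholders.
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

# RSA and elliptic-curve keys are both breakable by Shor's algorithm on a
# sufficiently large quantum computer.
QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey)

def check_host(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    key = x509.load_der_x509_certificate(der_cert).public_key()
    status = "quantum-vulnerable" if isinstance(key, QUANTUM_VULNERABLE) else "review manually"
    return f"{host}: {type(key).__name__} -> {status}"

if __name__ == "__main__":
    for host in ("example.gov", "example.org"):  # placeholder inventory list
        print(check_host(host))
```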
Quantum technology has been gaining mainstream attention from governments around the world in recent years. While harnessing the technology stands to affect a myriad of sectors, such as telecommunications and touch-based sensors, officials are concerned that a functioning quantum computer could expose sensitive data by breaking the encryption that protects it.
Experts estimate that a functioning quantum computer could be operational within five to 10 years.
Biden’s directives aim to jumpstart the broad transition to post-quantum cryptography to safeguard national infrastructure ahead of time.
“There’s a particular need for our national security community to [move to] quantum-resistant cryptography,” a spokesperson said.
Earlier this year, the Biden administration unveiled a new education initiative to help foster a quantum-ready workforce, building on the national quantum strategy plans the Trump administration published in 2018.
Congress has followed suit, introducing a slew of new legislation aimed at promoting quantum research and cryptography standards within the U.S. and protecting databases and infrastructure networks.
“The process to transition America’s most vulnerable IT systems to these new standards will take substantial time, resources and commitment,” a Biden administration spokesperson said. “Accordingly, America must start the lengthy process of updating our IT infrastructure today to protect against this quantum computing threat tomorrow.”
“We strongly believe that the ability for others to scrutinize your work is an important part of research. We really invite that collaboration,” says Joelle Pineau, a longtime advocate for transparency in the development of technology, who is now managing director at Meta AI.
Meta’s move is the first time that a fully trained large language model will be made available to any researcher who wants to study it. The news has been welcomed by many concerned about the way this powerful technology is being built by small teams behind closed doors.
“I applaud the transparency here,” says Emily M. Bender, a computational linguist at the University of Washington and a frequent critic of the way language models are developed and deployed.
“It’s a great move,” says Thomas Wolf, chief scientist at Hugging Face, the AI startup behind BigScience, a project in which more than 1,000 volunteers around the world are collaborating on an open-source language model. “The more open models the better,” he says.
Large language models—powerful programs that can generate paragraphs of text and mimic human conversation—have become one of the hottest trends in AI in the last couple of years. But they have deep flaws, parroting misinformation, prejudice, and toxic language.
In theory, putting more people to work on the problem should help. Yet because language models require vast amounts of data and computing power to train, they have so far remained projects for rich tech firms. The wider research community, including ethicists and social scientists concerned about their misuse, has had to watch from the sidelines.
Meta AI says it wants to change that. “Many of us have been university researchers,” says Pineau. “We know the gap that exists between universities and industry in terms of the ability to build these models. Making this one available to researchers was a no-brainer.” She hopes that others will pore over their work and pull it apart or build on it. Breakthroughs come faster when more people are involved, she says.
Meta is making its model, called Open Pretrained Transformer (OPT), available for non-commercial use. It is also releasing its code and a logbook that documents the training process. The logbook contains daily updates from members of the team about the training data: how it was added to the model and when, what worked and what didn’t. In more than 100 pages of notes, the researchers log every bug, crash, and reboot in a three-month training process that ran nonstop from October 2021 to January 2022.
With 175 billion parameters (the values in a neural network that get tweaked during training), OPT is the same size as GPT-3. This was by design, says Pineau. The team built OPT to match GPT-3 both in its accuracy on language tasks and in its toxicity. OpenAI has made GPT-3 available as a paid service but has not shared the model itself or its code. The idea was to provide researchers with a similar language model to study, says Pineau.
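The smaller OPT checkpoints are hosted publicly on Hugging Face (the 175-billion-parameter model itself is gated behind a research-access request). As a minimal sketch of what studying the model looks like in practice, assuming the transformers library and the facebook/opt-125m checkpoint:

```python
# Minimal sketch: load a small OPT checkpoint and generate text with it.
# Uses the Hugging Face `transformers` library and the publicly hosted
# "facebook/opt-125m" model; the full OPT-175B weights are access-gated.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Open science matters because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```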
OpenAI declined an invitation to comment on Meta’s announcement.
So why is Meta doing this? After all, Meta is a company that has said little about how the algorithms behind Facebook and Instagram work and has a reputation for burying unfavorable findings by its own in-house research teams. A big reason for the different approach by Meta AI is Pineau herself, who has been pushing for more transparency in AI for a number of years.
Pineau helped change how research is published in several of the largest conferences, introducing a checklist of things that researchers must submit alongside their results, including code and details about how experiments are run. Since she joined Meta (then Facebook) in 2017, she has championed that culture in its AI lab.
“That commitment to open science is why I’m here,” she says. “I wouldn’t be here on any other terms.”
Ultimately, Pineau wants to change how we judge AI. “What we call state-of-the-art nowadays can’t just be about performance,” she says. “It has to be state-of-the-art in terms of responsibility as well.”
Still, giving away a large language model is a bold move for Meta. “I can’t tell you that there’s no risk of this model producing language that we’re not proud of,” says Pineau. “It will.”
Weighing the risks
Margaret Mitchell, one of the AI ethics researchers Google forced out in 2020, who is now at Hugging Face, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the foreseeable benefits outweigh the foreseeable harms—such as the generation of misinformation, or racist and misogynistic language?
“Releasing a large language model to the world where a wide audience is likely to use it, or be affected by its output, comes with responsibilities,” she says. Mitchell notes that this model will be able to generate harmful content not only by itself, but through downstream applications that researchers build on top of it.
Meta AI audited OPT to remove some harmful behaviors, but the point is to release a model that researchers can learn from, warts and all, says Pineau.
“There were a lot of conversations about how to do that in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She dismisses the idea that you should not release a model because it’s too dangerous—which is the reason OpenAI gave for not releasing GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.
Bender, who coauthored the study at the center of the Google dispute with Mitchell, is also concerned about how the potential harms will be handled. “One thing that is really key in mitigating the risks of any kind of machine-learning technology is to ground evaluations and explorations in specific use cases,” she says. “What will the system be used for? Who will be using it, and how will the system outputs be presented to them?”
Some researchers question why large language models are being built at all, given their potential for harm. For Pineau, these concerns should be met with more exposure, not less. “I believe the only way to build trust is extreme transparency,” she says.
“We have different opinions around the world about what speech is appropriate, and AI is a part of that conversation,” she says. She doesn’t expect language models to say things that everyone agrees with. “But how do we grapple with that? You need many voices in that discussion.”
Finding the right balance between continuity and change can help leaders better manage the cultural changes that occur during a digital transformation.
Digital transformation transcends technology and business models. Organizational culture also plays a critical role in successfully leading an organization into the digital era; indeed, the success of a digital transformation relies on a deep understanding of the intricacies of culture. But few business leaders fully understand how a company’s culture changes during a digital transformation — and, more importantly, how it doesn’t change.
Consider the case of Maersk, the Danish shipping and integrated logistics company, which has been undergoing a significant digital transformation. Those efforts, which took off when Jim Hagemann Snabe became chairman of the board in 2016, entailed collaborating on blockchain with IBM and providing digital platforms to customers. Internal cultural tensions at Maersk became apparent late in 2021, when Søren Vind, a senior engineering manager and head of forecasting, had a public disagreement about the company’s identity with a veteran Maersk sea captain who serves as the employee representative on the board. In an interview with a Danish publication, Vind argued that Maersk “used to be an industrial company that had technology on [the] side” but had become “a technology company where we have some physical devices we need to move around.” Captain Thomas Lindegaard Madsen responded in a public LinkedIn post — subsequently edited to defuse his criticism — that read, “I am very sorry, but I will have to correct you.” Pointing out that the maritime business contributed 78% of group revenue and that the group has 12,000 seafarers, the captain declared, “We are NOT a tech company who ‘happens’ to operate ships.”
Of course, both the captain and the executive are right — and wrong — in their statements, which merely represent different perspectives on how digital transformation and culture interact. Where the IT executive views digital transformation through a cultural change lens, the sea captain views it through a cultural continuity lens. Both lenses are valid, both can coexist, and both need to be jointly managed for a transformation effort to thrive.
The Culture-Transformation Matrix
As the names imply, cultural change refers to how a digital transformation may alter an organization’s culture, and cultural continuity refers to elements of the culture that remain stable. In short, during any organizational transformation there’s always an interplay between cultural change and continuity as cultures evolve, but most digital transformation initiatives focus on change alone and overlook what should stay the same.
Carsten Lund Pedersen is associate professor in digital transformation at the IT University of Copenhagen in Denmark and coeditor of the book Big Data in Small Business: Data-Driven Growth in Small and Medium-Sized Enterprises (Edward Elgar Publishing, 2021).
References
1. For related work on strategies entailing continuity and change, see D. Ravasi and M. Schultz, “Responding to Organizational Identity Threats: Exploring the Role of Organizational Culture,” Academy of Management Journal 49, no. 3 (June 2006): 433-458; S. Nasim and Sushil, “Revisiting Organizational Change: Exploring the Paradox of Managing Continuity and Change,” Journal of Change Management 11, no. 2 (June 2011): 185-206; S. Nasim and Sushil, “Flexible Strategy Framework for Managing Continuity and Change in E-Government,” in “The Flexible Enterprise,” eds. Sushil and E.A. Stohr (Delhi: Springer India, 2014), 47-66; and Sushil, “A Flexible Strategy Framework for Managing Continuity and Change,” International Journal of Global Business and Competitiveness 1, no. 1 (2005): 22-32.
Major contending forces, says this expert on business strategy, determine the state of competition in an industry: the threat of new entrants, the bargaining power of customers and of suppliers, the intense rivalry of competitors, and the threat of substitute services or products. Once the corporate strategist has assessed these forces, he can identify his own company’s strengths and weaknesses and act accordingly to put up the best defense against competitive assaults.
Editor’s note (2022): To mark our 100th anniversary, we’re highlighting 12 of our very favorite articles. Subscribers can access all 12 at any time and, for nonsubscribers, we’re unlocking one per month this year. For February, we’re sharing Harvard Business School professor Michael E. Porter’s 1979 article that first introduced his groundbreaking five forces framework.
The essence of strategy formulation is coping with competition. Yet it is easy to view competition too narrowly and too pessimistically. While one sometimes hears executives complaining to the contrary, intense competition in an industry is neither coincidence nor bad luck.
Moreover, in the fight for market share, competition is not manifested only in the other players. Rather, competition in an industry is rooted in its underlying economics, and competitive forces exist that go well beyond the established combatants in a particular industry. Customers, suppliers, potential entrants, and substitute products are all competitors that may be more or less prominent or active depending on the industry.
The state of competition in an industry depends on five basic forces, which are described below. The collective strength of these forces determines the ultimate profit potential of an industry. It ranges from intense in industries like tires, metal cans, and steel, where no company earns spectacular returns on investment, to mild in industries like oil field services and equipment, soft drinks, and toiletries, where there is room for quite high returns.
In the economists’ “perfectly competitive” industry, jockeying for position is unbridled and entry to the industry very easy. This kind of industry structure, of course, offers the worst prospect for long-run profitability. The weaker the forces collectively, however, the greater the opportunity for superior performance.
Whatever their collective strength, the corporate strategist’s goal is to find a position in the industry where his or her company can best defend itself against these forces or can influence them in its favor. The collective strength of the forces may be painfully apparent to all the antagonists; but to cope with them, the strategist must delve below the surface and analyze the sources of each. For example, what makes the industry vulnerable to entry? What determines the bargaining power of suppliers?
Knowledge of these underlying sources of competitive pressure provides the groundwork for a strategic agenda of action. They highlight the critical strengths and weaknesses of the company, animate the positioning of the company in its industry, clarify the areas where strategic changes may yield the greatest payoff, and highlight the places where industry trends promise to hold the greatest significance as either opportunities or threats. Understanding these sources also proves to be of help in considering areas for diversification.
Contending Forces
The strongest competitive force or forces determine the profitability of an industry and so are of greatest importance in strategy formulation. For example, even a company with a strong position in an industry unthreatened by potential entrants will earn low returns if it faces a superior or a lower-cost substitute product—as the leading manufacturers of vacuum tubes and coffee percolators have learned to their sorrow. In such a situation, coping with the substitute product becomes the number one strategic priority.
Different forces take on prominence, of course, in shaping competition in each industry. In the ocean-going tanker industry the key force is probably the buyers (the major oil companies), while in tires it is powerful OEM buyers coupled with tough competitors. In the steel industry the key forces are foreign competitors and substitute materials.
Every industry has an underlying structure, or a set of fundamental economic and technical characteristics, that gives rise to these competitive forces. The strategist, wanting to position his or her company to cope best with its industry environment or to influence that environment in the company’s favor, must learn what makes the environment tick.
This view of competition pertains equally to industries dealing in services and to those selling products. To avoid monotony in this article, I refer to both products and services as “products.” The same general principles apply to all types of business.
A few characteristics are critical to the strength of each competitive force. I shall discuss them in this section.
Threat of entry.
New entrants to an industry bring new capacity, the desire to gain market share, and often substantial resources. Companies diversifying through acquisition into the industry from other markets often leverage their resources to cause a shake-up, as Philip Morris did with Miller beer.
The seriousness of the threat of entry depends on the barriers present and on the reaction from existing competitors that entrants can expect. If barriers to entry are high and newcomers can expect sharp retaliation from the entrenched competitors, obviously the newcomers will not pose a serious threat of entering.
There are six major sources of barriers to entry:
1. Economies of scale. These economies deter entry by forcing the aspirant either to come in on a large scale or to accept a cost disadvantage. Scale economies in production, research, marketing, and service are probably the key barriers to entry in the mainframe computer industry, as Xerox and GE sadly discovered. Economies of scale can also act as hurdles in distribution, utilization of the sales force, financing, and nearly any other part of a business.
2. Product differentiation. Brand identification creates a barrier by forcing entrants to spend heavily to overcome customer loyalty. Advertising, customer service, being first in the industry, and product differences are among the factors fostering brand identification. It is perhaps the most important entry barrier in soft drinks, over-the-counter drugs, cosmetics, investment banking, and public accounting. To create high fences around their businesses, brewers couple brand identification with economies of scale in production, distribution, and marketing.
3. Capital requirements. The need to invest large financial resources in order to compete creates a barrier to entry, particularly if the capital is required for unrecoverable expenditures in up-front advertising or R&D. Capital is necessary not only for fixed facilities but also for customer credit, inventories, and absorbing start-up losses. While major corporations have the financial resources to invade almost any industry, the huge capital requirements in certain fields, such as computer manufacturing and mineral extraction, limit the pool of likely entrants.
4. Cost disadvantages independent of size. Entrenched companies may have cost advantages not available to potential rivals, no matter what their size and attainable economies of scale. These advantages can stem from the effects of the learning curve (and of its first cousin, the experience curve), proprietary technology, access to the best raw materials sources, assets purchased at preinflation prices, government subsidies, or favorable locations. Sometimes cost advantages are legally enforceable, as they are through patents. (For an analysis of the much-discussed experience curve as a barrier to entry, see the sidebar “The Experience Curve as an Entry Barrier.”)
The Experience Curve as an Entry Barrier
In recent years, the experience curve has become widely …
5. Access to distribution channels. The newcomer on the block must, of course, secure distribution of its product or service. A new food product, for example, must displace others from the supermarket shelf via price breaks, promotions, intense selling efforts, or some other means. The more limited the wholesale or retail channels are and the more that existing competitors have these tied up, obviously the tougher that entry into the industry will be. Sometimes this barrier is so high that, to surmount it, a new contestant must create its own distribution channels, as Timex did in the watch industry in the 1950s.
6. Government policy. The government can limit or even foreclose entry to industries with such controls as license requirements and limits on access to raw materials. Regulated industries like trucking, liquor retailing, and freight forwarding are noticeable examples; more-subtle government restrictions operate in fields like ski-area development and coal mining. The government also can play a major indirect role by affecting entry barriers through controls such as air and water pollution standards and safety regulations.
The potential rival’s expectations about the reaction of existing competitors also will influence its decision on whether to enter. The company is likely to have second thoughts if incumbents have previously lashed out at new entrants or if:
The incumbents possess substantial resources to fight back, including excess cash and unused borrowing power, productive capacity, or clout with distribution channels and customers.
The incumbents seem likely to cut prices because of a desire to keep market shares or because of industrywide excess capacity.
Industry growth is slow, affecting its ability to absorb the new arrival and probably causing the financial performance of all the parties involved to decline.
Changing conditions.
From a strategic standpoint there are two important additional points to note about the threat of entry.
First, it changes, of course, as these conditions change. The expiration of Polaroid’s basic patents on instant photography, for instance, greatly reduced its absolute cost entry barrier built by proprietary technology. It is not surprising that Kodak plunged into the market. Product differentiation in printing has all but disappeared. Conversely, in the auto industry economies of scale increased enormously with post–World War II automation and vertical integration—virtually stopping successful new entry.
Second, strategic decisions involving a large segment of an industry can have a major impact on the conditions determining the threat of entry. For example, the actions of many U.S. wine producers in the 1960s to step up product introductions, raise advertising levels, and expand distribution nationally surely strengthened the entry roadblocks by raising economies of scale and making access to distribution channels more difficult. Similarly, decisions by members of the recreational vehicle industry to vertically integrate in order to lower costs have greatly increased the economies of scale and raised the capital cost barriers.
Powerful suppliers and buyers.
Suppliers can exert bargaining power on participants in an industry by raising prices or reducing the quality of purchased goods and services. Powerful suppliers can thereby squeeze profitability out of an industry unable to recover cost increases in its own prices. By raising their prices, soft drink concentrate producers have contributed to the erosion of profitability of bottling companies because the bottlers, facing intense competition from powdered mixes, fruit drinks, and other beverages, have limited freedom to raise their prices accordingly. Customers likewise can force down prices, demand higher quality or more service, and play competitors off against each other—all at the expense of industry profits.
The power of each important supplier or buyer group depends on a number of characteristics of its market situation and on the relative importance of its sales or purchases to the industry compared with its overall business.
A supplier group is powerful if:
It is dominated by a few companies and is more concentrated than the industry it sells to.
Its product is unique or at least differentiated, or if it has built up switching costs. Switching costs are fixed costs buyers face in changing suppliers. These arise because, among other things, a buyer’s product specifications tie it to particular suppliers, it has invested heavily in specialized ancillary equipment or in learning how to operate a supplier’s equipment (as in computer software), or its production lines are connected to the supplier’s manufacturing facilities (as in some manufacture of beverage containers).
It is not obliged to contend with other products for sale to the industry. For instance, the competition between the steel companies and the aluminum companies to sell to the can industry checks the power of each supplier.
It poses a credible threat of integrating forward into the industry’s business. This provides a check against the industry’s ability to improve the terms on which it purchases.
The industry is not an important customer of the supplier group. If the industry is an important customer, suppliers’ fortunes will be closely tied to the industry, and they will want to protect the industry through reasonable pricing and assistance in activities like R&D and lobbying.
A buyer group is powerful if:
It is concentrated or purchases in large volumes. Large-volume buyers are particularly potent forces if heavy fixed costs characterize the industry—as they do in metal containers, corn refining, and bulk chemicals, for example—which raise the stakes to keep capacity filled.
The products it purchases from the industry are standard or undifferentiated. The buyers, sure that they can always find alternative suppliers, may play one company against another, as they do in aluminum extrusion.
The products it purchases from the industry form a component of its product and represent a significant fraction of its cost. The buyers are likely to shop for a favorable price and purchase selectively. Where the product sold by the industry in question is a small fraction of buyers’ costs, buyers are usually much less price sensitive.
The industry’s product is unimportant to the quality of the buyers’ products or services. Where the quality of the buyers’ products is very much affected by the industry’s product, buyers are generally less price sensitive. Industries in which this situation obtains include oil field equipment, where a malfunction can lead to large losses, and enclosures for electronic medical and test instruments, where the quality of the enclosure can influence the user’s impression about the quality of the equipment inside.
The industry’s product does not save the buyer money. Where the industry’s product or service can pay for itself many times over, the buyer is rarely price sensitive; rather, he is interested in quality. This is true in services like investment banking and public accounting, where errors in judgment can be costly and embarrassing, and in businesses like the logging of oil wells, where an accurate survey can save thousands of dollars in drilling costs.
The buyers pose a credible threat of integrating backward to make the industry’s product. The Big Three auto producers and major buyers of cars have often used the threat of self-manufacture as a bargaining lever. But sometimes an industry engenders a threat to buyers that its members may integrate forward.
Most of these sources of buyer power can be attributed to consumers as a group as well as to industrial and commercial buyers; only a modification of the frame of reference is necessary. Consumers tend to be more price sensitive if they are purchasing products that are undifferentiated, expensive relative to their incomes, and of a sort where quality is not particularly important.
The buying power of retailers is determined by the same rules, with one important addition. Retailers can gain significant bargaining power over manufacturers when they can influence consumers’ purchasing decisions, as they do in audio components, jewelry, appliances, sporting goods, and other goods.
Strategic action.
A company’s choice of suppliers to buy from or buyer groups to sell to should be viewed as a crucial strategic decision. A company can improve its strategic posture by finding suppliers or buyers who possess the least power to influence it adversely.
Most common is the situation of a company being able to choose whom it will sell to—in other words, buyer selection. Rarely do all the buyer groups a company sells to enjoy equal power. Even if a company sells to a single industry, segments usually exist within that industry that exercise less power (and that are therefore less price sensitive) than others. For example, the replacement market for most products is less price sensitive than the overall market.
As a rule, a company can sell to powerful buyers and still come away with above-average profitability only if it is a low-cost producer in its industry or if its product enjoys some unusual, if not unique, features. In supplying large customers with electric motors, Emerson Electric earns high returns because its low cost position permits the company to meet or undercut competitors’ prices.
If the company lacks a low cost position or a unique product, selling to everyone is self-defeating because the more sales it achieves, the more vulnerable it becomes. The company may have to muster the courage to turn away business and sell only to less potent customers.
Buyer selection has been a key to the success of National Can and Crown Cork & Seal. They focus on the segments of the can industry where they can create product differentiation, minimize the threat of backward integration, and otherwise mitigate the awesome power of their customers. Of course, some industries do not enjoy the luxury of selecting “good” buyers.
As the factors creating supplier and buyer power change with time or as a result of a company’s strategic decisions, naturally the power of these groups rises or declines. In the ready-to-wear clothing industry, as the buyers (department stores and clothing stores) have become more concentrated and control has passed to large chains, the industry has come under increasing pressure and suffered falling margins. The industry has been unable to differentiate its product or engender switching costs that lock in its buyers enough to neutralize these trends.
Substitute products.
By placing a ceiling on the prices it can charge, substitute products or services limit the potential of an industry. Unless it can upgrade the quality of the product or differentiate it somehow (as via marketing), the industry will suffer in earnings and possibly in growth.
Manifestly, the more attractive the price-performance trade-off offered by substitute products, the firmer the lid placed on the industry’s profit potential. Sugar producers confronted with the large-scale commercialization of high-fructose corn syrup, a sugar substitute, are learning this lesson today.
Substitutes not only limit profits in normal times; they also reduce the bonanza an industry can reap in boom times. In 1978 the producers of fiberglass insulation enjoyed unprecedented demand as a result of high energy costs and severe winter weather. But the industry’s ability to raise prices was tempered by the plethora of insulation substitutes, including cellulose, rock wool, and styrofoam. These substitutes are bound to become an even stronger force once the current round of plant additions by fiberglass insulation producers has boosted capacity enough to meet demand (and then some).
Substitute products that deserve the most attention strategically are those that (a) are subject to trends improving their price-performance trade-off with the industry’s product, or (b) are produced by industries earning high profits. Substitutes often come rapidly into play if some development increases competition in their industries and causes price reduction or performance improvement.
Jockeying for position.
Rivalry among existing competitors takes the familiar form of jockeying for position—using tactics like price competition, product introduction, and advertising slugfests. Intense rivalry is related to the presence of a number of factors:
Competitors are numerous or are roughly equal in size and power. In many U.S. industries in recent years foreign contenders, of course, have become part of the competitive picture.
Industry growth is slow, precipitating fights for market share that involve expansion-minded members.
The product or service lacks differentiation or switching costs, which lock in buyers and protect one combatant from raids on its customers by another.
Fixed costs are high or the product is perishable, creating strong temptation to cut prices. Many basic materials businesses, like paper and aluminum, suffer from this problem when demand slackens.
Capacity is normally augmented in large increments. Such additions, as in the chlorine and vinyl chloride businesses, disrupt the industry’s supply-demand balance and often lead to periods of overcapacity and price-cutting.
Exit barriers are high. Exit barriers, like very specialized assets or management’s loyalty to a particular business, keep companies competing even though they may be earning low or even negative returns on investment. Excess capacity remains functioning, and the profitability of the healthy competitors suffers as the sick ones hang on. If the entire industry suffers from overcapacity, it may seek government help—particularly if foreign competition is present.
The rivals are diverse in strategies, origins, and “personalities.” They have different ideas about how to compete and continually run head-on into each other in the process.
As an industry matures, its growth rate changes, resulting in declining profits and (often) a shakeout. In the booming recreational vehicle industry of the early 1970s, nearly every producer did well; but slow growth since then has eliminated the high returns, except for the strongest members, not to mention many of the weaker companies. The same profit story has been played out in industry after industry—snowmobiles, aerosol packaging, and sports equipment are just a few examples.
An acquisition can introduce a very different personality to an industry, as has been the case with Black & Decker’s takeover of McCullough, the producer of chain saws. Technological innovation can boost the level of fixed costs in the production process, as it did in the shift from batch to continuous-line photofinishing in the 1960s.
While a company must live with many of these factors—because they are built into industry economics—it may have some latitude for improving matters through strategic shifts. For example, it may try to raise buyers’ switching costs or increase product differentiation. A focus on selling efforts in the fastest-growing segments of the industry or on market areas with the lowest fixed costs can reduce the impact of industry rivalry. If it is feasible, a company can try to avoid confrontation with competitors having high exit barriers and can thus sidestep involvement in bitter price-cutting.
Formulation of Strategy
Once having assessed the forces affecting competition in an industry and their underlying causes, the corporate strategist can identify the company’s strengths and weaknesses. The crucial strengths and weaknesses from a strategic standpoint are the company’s posture vis-à-vis the underlying causes of each force. Where does it stand against substitutes? Against the sources of entry barriers?
Then the strategist can devise a plan of action that may include (1) positioning the company so that its capabilities provide the best defense against the competitive force; and/or (2) influencing the balance of the forces through strategic moves, thereby improving the company’s position; and/or (3) anticipating shifts in the factors underlying the forces and responding to them, with the hope of exploiting change by choosing a strategy appropriate for the new competitive balance before opponents recognize it. I shall consider each strategic approach in turn.
Positioning the company.
The first approach takes the structure of the industry as given and matches the company’s strengths and weaknesses to it. Strategy can be viewed as building defenses against the competitive forces or as finding positions in the industry where the forces are weakest.
Knowledge of the company’s capabilities and of the causes of the competitive forces will highlight the areas where the company should confront competition and where avoid it. If the company is a low-cost producer, it may choose to confront powerful buyers while it takes care to sell them only products not vulnerable to competition from substitutes.
The success of Dr Pepper in the soft drink industry illustrates the coupling of realistic knowledge of corporate strengths with sound industry analysis to yield a superior strategy. Coca-Cola and Pepsi-Cola dominate Dr Pepper’s industry, where many small concentrate producers compete for a piece of the action. Dr Pepper chose a strategy of avoiding the largest-selling drink segment, maintaining a narrow flavor line, forgoing the development of a captive bottler network, and marketing heavily. The company positioned itself so as to be least vulnerable to its competitive forces while it exploited its small size.
In the $11.5 billion soft drink industry, barriers to entry in the form of brand identification, large-scale marketing, and access to a bottler network are enormous. Rather than accept the formidable costs and scale economies in having its own bottler network—that is, following the lead of the Big Two and of Seven-Up—Dr Pepper took advantage of the different flavor of its drink to “piggyback” on Coke and Pepsi bottlers who wanted a full line to sell to customers. Dr Pepper coped with the power of these buyers through extraordinary service and other efforts to distinguish its treatment of them from that of Coke and Pepsi.
Many small companies in the soft drink business offer cola drinks that thrust them into head-to-head competition against the majors. Dr Pepper, however, maximized product differentiation by maintaining a narrow line of beverages built around an unusual flavor.
Finally, Dr Pepper met Coke and Pepsi with an advertising onslaught emphasizing the alleged uniqueness of its single flavor. This campaign built strong brand identification and great customer loyalty. Helping its efforts was the fact that Dr Pepper’s formula involved lower raw materials cost, which gave the company an absolute cost advantage over its major competitors.
There are no economies of scale in soft drink concentrate production, so Dr Pepper could prosper despite its small share of the business (6%). Thus Dr Pepper confronted competition in marketing but avoided it in product line and in distribution. This artful positioning combined with good implementation has led to an enviable record in earnings and in the stock market.
Influencing the balance.
When dealing with the forces that drive industry competition, a company can devise a strategy that takes the offensive. This posture is designed to do more than merely cope with the forces themselves; it is meant to alter their causes.
Innovations in marketing can raise brand identification or otherwise differentiate the product. Capital investments in large-scale facilities or vertical integration affect entry barriers. The balance of forces is partly a result of external factors and partly in the company’s control.
Exploiting industry change.
Industry evolution is important strategically because evolution, of course, brings with it changes in the sources of competition I have identified. In the familiar product life-cycle pattern, for example, growth rates change, product differentiation is said to decline as the business becomes more mature, and the companies tend to integrate vertically.
These trends are not so important in themselves; what is critical is whether they affect the sources of competition. Consider vertical integration. In the maturing minicomputer industry, extensive vertical integration, both in manufacturing and in software development, is taking place. This very significant trend is greatly raising economies of scale as well as the amount of capital necessary to compete in the industry. This in turn is raising barriers to entry and may drive some smaller competitors out of the industry once growth levels off.
Obviously, the trends carrying the highest priority from a strategic standpoint are those that affect the most important sources of competition in the industry and those that elevate new causes to the forefront. In contract aerosol packaging, for example, the trend toward less product differentiation is now dominant. It has increased buyers’ power, lowered the barriers to entry, and intensified competition.
The framework for analyzing competition that I have described can also be used to predict the eventual profitability of an industry. In long-range planning the task is to examine each competitive force, forecast the magnitude of each underlying cause, and then construct a composite picture of the likely profit potential of the industry.
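Porter frames this as structured judgment rather than computation, but purely to illustrate the mechanics of building such a composite picture, here is a toy sketch (our illustration, not Porter’s method); the one-to-five scores, the unweighted average, and the inversion are all assumptions.

```python
# Toy sketch, not Porter's method: rate each force from 1 (weak) to 5 (intense)
# and invert the average into a rough profit-potential index. All numbers here
# are illustrative assumptions, not empirical data.
from dataclasses import dataclass

@dataclass
class FiveForces:
    threat_of_entry: int
    supplier_power: int
    buyer_power: int
    threat_of_substitutes: int
    rivalry: int

    def profit_potential(self) -> float:
        """Stronger collective forces imply lower profit potential."""
        scores = (self.threat_of_entry, self.supplier_power, self.buyer_power,
                  self.threat_of_substitutes, self.rivalry)
        return round(6 - sum(scores) / len(scores), 2)  # 5 = mild, 1 = intense

# Example: one stylized reading of the solar heating industry discussed below.
solar = FiveForces(threat_of_entry=5, supplier_power=2, buyer_power=3,
                   threat_of_substitutes=4, rivalry=5)
print(solar.profit_potential())  # low index: intense forces, weak profit outlook
```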
The outcome of such an exercise may differ a great deal from the existing industry structure. Today, for example, the solar heating business is populated by dozens and perhaps hundreds of companies, none with a major market position. Entry is easy, and competitors are battling to establish solar heating as a superior substitute for conventional methods.
The potential of this industry will depend largely on the shape of future barriers to entry, the improvement of the industry’s position relative to substitutes, the ultimate intensity of competition, and the power captured by buyers and suppliers. These characteristics will in turn be influenced by such factors as the establishment of brand identities, significant economies of scale or experience curves in equipment manufacture wrought by technological change, the ultimate capital costs to compete, and the extent of overhead in production facilities.
The framework for analyzing industry competition has direct benefits in setting diversification strategy. It provides a road map for answering the extremely difficult question inherent in diversification decisions: “What is the potential of this business?” Combining the framework with judgment in its application, a company may be able to spot an industry with a good future before this good future is reflected in the prices of acquisition candidates.
Multifaceted Rivalry
Corporate managers have directed a great deal of attention to defining their businesses as a crucial step in strategy formulation. Theodore Levitt, in his classic 1960 article in HBR, argued strongly for avoiding the myopia of narrow, product-oriented industry definition. Numerous other authorities have also stressed the need to look beyond product to function in defining a business, beyond national boundaries to potential international competition, and beyond the ranks of one’s competitors today to those that may become competitors tomorrow. As a result of these urgings, the proper definition of a company’s industry or industries has become an endlessly debated subject.
One motive behind this debate is the desire to exploit new markets. Another, perhaps more important motive is the fear of overlooking latent sources of competition that someday may threaten the industry. Many managers concentrate so single-mindedly on their direct antagonists in the fight for market share that they fail to realize that they are also competing with their customers and their suppliers for bargaining power. Meanwhile, they also neglect to keep a wary eye out for new entrants to the contest or fail to recognize the subtle threat of substitute products.
The key to growth—even survival—is to stake out a position that is less vulnerable to attack from head-to-head opponents, whether established or new, and less vulnerable to erosion from the direction of buyers, suppliers, and substitute goods. Establishing such a position can take many forms—solidifying relationships with favorable customers, differentiating the product either substantively or psychologically through marketing, integrating forward or backward, establishing technological leadership.
A version of this article appeared in the March–April 1979 issue of Harvard Business Review.
Michael E. Porter is the Bishop William Lawrence University Professor at Harvard Business School. He has served as an adviser to governments and campaigns around the world on the advancement of social policy and economic policy, including Mitt Romney’s presidential campaign. His latest paper is The Role of Business in Society. He is an academic adviser to the Leadership Now Project.
Editor’s note: This article is part of the series “Compete and Win: Envisioning a Competitive Strategy for the Twenty-First Century.” The series endeavors to present expert commentary on diverse issues surrounding US competitive strategy and irregular warfare with peer and near-peer competitors in the physical, cyber, and information spaces. The series is part of the Competition in Cyberspace Project (C2P), a joint initiative by the Army Cyber Institute and the Modern War Institute. Read all articles in the series here.
Special thanks to series editors Capt. Maggie Smith, PhD, C2P director, and Dr. Barnett S. Koven.
After two decades of low-intensity conflict defined by technological overmatch and asymmetric warfare, the US military has adopted a new term of art: JADC2. Joint All-Domain Command and Control is the theoretically simple (and practically complicated) idea of linking everything to everything else, at all times, and using artificial intelligence to achieve information advantage and decision dominance in conflict. Essentially, in concept, all US military sensors would be connected to all shooters and weapons platforms, across all the services, and in all domains to empower decisive victory in a future multi-domain conflict. Despite sounding impressive and promising complete interoperability, there is little evidence that JADC2 can achieve its stated goals, or that the underlying technologies will be resilient in combat. More critically, there is (or should be) concern that JADC2’s drawbacks could make the system more of a liability than an advantage—especially if US military doctrine and strategic planning do not evolve in parallel with the technology’s employment.
The recent publication of the Pentagon’s “Summary of the Joint All-Domain Command and Control (JADC2) Strategy” was preceded by a steady stream of articles and presentations on JADC2, its proposed suite of capabilities, and an accompanying body of literature rich in vendor materials—a quick internet search returns links to JADC2 marketing sites hosted by the biggest names in defense contracting, like Raytheon and Boeing. Additionally, several military leaders involved in the project have penned op-eds and articles extolling JADC2’s many virtues and the progress being made toward its completion. However, it is difficult to find a thoughtful discussion on the role of security and resiliency in, or their criticality to, JADC2’s ability to deliver on its promises. If JADC2’s implementation and employment are projected to provide accurate, timely, and actionable information to decision makers, what happens if, or when, the system fails? And what will reliance on an integrated, AI-enabled platform designed to improve decision-making do to military thinking and planning across echelons? We can make technology do some pretty amazing things—and we are definitely technology optimists—but, because JADC2 is going to affect the entire defense enterprise, it is also important to consider, and be honest about, its limits. By using examples and lessons learned from the February 24 Russian invasion of Ukraine, we highlight some of JADC2’s underlying assumptions to investigate the challenges posed by an integrated and AI-enabled command-and-control system.
What Happens When Uber Fails?
Traditionally, each military service has developed and maintained its own tactical network, often incompatible with those of the other services. Department of Defense officials make a compelling argument that future conflicts will require quick decisions, and that the era of distinct, service-specific networks is over. Interestingly, DoD offers the rideshare application Uber as an example of the functionality JADC2 is being designed to deliver. By combining the user ride-request application with the driver acceptance application, the two information systems interact seamlessly to generate the most efficient outcome for driver and rider: the fastest and cheapest transportation option to a desired end point, at a desired time and place. While incomplete, the analogy has helped DoD sell the concept of JADC2 to Congress by providing an easy-to-understand example of information integration to achieve a desired end state. However, the comparison between Uber and an integrated military command-and-control system relies on a set of assumptions that DoD has not thoroughly investigated. Namely, what happens when Uber is unavailable (e.g., due to a lack of Uber services in a specific area, an internet or cellular service dead spot or outage, or a dead mobile device battery)?
To access the service, Uber users need a cellular or Wi-Fi connection, and DoD imagines a similar cloud-based environment for the joint force to access JADC2 and its information capabilities. JADC2 will connect the thousands of military sensors spread around the globe into a single system and, with the help of AI-enabled processing, provide a conceptually complete operational picture from any location or command, at any time. However, a lesson learned from the Internet-of-Things explosion is that more interconnected devices are not necessarily better—today, even a networked coffee machine is susceptible to ransomware. As more connection points, devices, and users are added to JADC2’s data cloud, the number of vulnerabilities introduced to the system will also increase. It should be considered that a decentralized system may actually be more secure because it presents an adversary with a more complicated task: access to one system does not mean access to the whole system of systems. Just as Uber is unavailable in a Wi-Fi dead spot or when your phone battery dies, it is dangerous to assume that a single, integrated system will provide better command and control in a contested information environment.
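To make that intuition concrete, here is a back-of-the-envelope simulation, our illustration only, assuming an arbitrary 10 percent chance that any one network is compromised (not a DoD figure): it compares the probability of losing all command and control under one centralized network versus four independent ones.

```python
# Back-of-the-envelope sketch of the single-point-of-failure argument.
# The 10% per-network compromise probability is an arbitrary illustration.
import random

P_COMPROMISE = 0.10
TRIALS = 100_000

def centralized_total_loss() -> float:
    """One network carries everything: a single breach loses it all."""
    hits = sum(random.random() < P_COMPROMISE for _ in range(TRIALS))
    return hits / TRIALS

def decentralized_total_loss(n_networks: int = 4) -> float:
    """Independent networks: total loss requires breaching every one."""
    hits = sum(all(random.random() < P_COMPROMISE for _ in range(n_networks))
               for _ in range(TRIALS))
    return hits / TRIALS

print(f"centralized total loss:   {centralized_total_loss():.4f}")    # ~0.1
print(f"decentralized total loss: {decentralized_total_loss():.5f}")  # ~0.0001
```

The gap narrows when failures are correlated, which is why independence across systems, and not just redundancy, is the property that matters in a contested environment.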
The Russian invasion of Ukraine provides an example of how a single communications system can also be a single point of failure. Viasat provides internet service to people across Europe via the KA-SAT, a telecommunications satellite in geosynchronous orbit above the continent. Just as Russian forces prepared to invade Ukraine on February 24, ground-based modems tied to the KA-SAT network were suddenly rendered useless. Among the affected users were parts of Ukraine’s defense establishment. Since modems are broadband hardware that receive centrally pushed updates, and because officials have stated that the hack did not target the exposed signal in space, the most likely scenario is that the hack was a corrupted modem update—an attack predicted by Ruben Santamarta’s research in 2018. Viktor Zhora, a senior official at Ukraine’s State Service of Special Communication and Information Protection, reportedly said that the KA-SAT hack that crippled Ukrainian military communications was “a really huge loss in communications in the very beginning of [the] war.”
What is important about the Viasat hack is its elegance and sophistication—something we should expect from near-peer and peer adversaries in future conflict. Despite being a long-standing technology, satellite internet and communications remain remarkably vulnerable, especially given the distributed ground-based modems and a reliance on a centrally controlled maintenance and update structure—the weakness Russia likely exploited. Satellite internet is composed of three integrated components: 1) the satellite in orbit that sends internet signal, or “spot beams,” to Earth; 2) the satellite dishes positioned to receive that signal within the regions served by the spot beams; and 3) a collection of dispersed earth stations, or modems, connected to the internet, and to each other, by fiber-optic cables. If the hardware used to connect to internet service is destroyed, internet access will remain unavailable until a new modem is acquired—something particularly difficult to do during a military invasion. Even though attribution to Russia remains tentative, the hack wiped out the Ukrainian military’s ability to communicate just prior to Russia’s invasion and resulted in Elon Musk’s now-famous tweet: “Starlink service is now active in Ukraine. More terminals en route.” What we should learn from the Viasat hack is that modern conflict demands resilient and reliable communications, and presently, JADC2’s proposals do not seriously consider the security and resiliency risks inherent to a centralized data and information system.
Should I Trust You?
Two additional assumptions of JADC2 also deserve careful consideration: confidence and flexibility—namely, a user’s confidence in the system and the information it provides, and the user’s ability to react to that information in a timely manner. In this context, confidence has a twofold meaning: it is the user’s confidence in knowing how to employ JADC2 effectively and the user’s confidence in the data, algorithms, and connections that JADC2 relies upon to deliver options. Flexibility refers to the ability of an individual (e.g., a decision maker, commander, or service member) to adapt to battlefield conditions and pursue JADC2’s recommended course of action in a dynamic and contested environment. Both assumptions are necessary for JADC2 to be effective, and neither is guaranteed.
During Russia’s invasion of Ukraine, Russian communications systems have exhibited a high failure rate, leaving troops to rely on insecure but trusted platforms to communicate. Their cell phone and radio use has enabled Ukrainian intelligence and ground forces to pinpoint Russian locations and to intercept or jam their tactical communications. The Russian experience shows how communications platforms need to be flexible enough to enable warfighting in any condition—when Russia’s inflexible and sensitive modes of communication failed, Russian troops tossed equipment aside for their insecure, but working, cell phones. A single data and information platform—in an ideal world—would provide all decentralized elements and echelons with timely battlefield information to drive tactical operations. But if the platform is not resilient enough to sustain a single hiccup—like the delayed dissemination of encryption keys in Russia’s case—confidence in the technology will immediately decline. Together, these lessons underscore how more technology is not always better. For battlefield technology to be effective, troops need to be skilled in its use and maintenance to enable tactical operations and have confidence that the technology will work for them when needed. If flexibility and confidence are lacking, the technology could prove to be more of a hindrance than an advantage.
Too Many Cooks in the Kitchen
Another assumption is that JADC2 will improve decision-making speed and quality, implying the presence of ongoing command-and-control issues that an integrated system will fix. However, the militaries of our near peers—namely, China and Russia—are characterized by quantity, not quality. Their numbers are staggering, both in personnel and in (mostly older) weapons systems—and these systems are supplemented by a relatively small number of sophisticated platforms, maintained mostly for their deterrent effect. And, as we have seen with Russia’s lackluster performance in Ukraine, these numbers do not always amount to much in a tough fight. Being faster and more skillful than China and Russia is crucial for national security, and competing with them requires a US military that can innovate and evolve to maintain strategic and tactical advantage. Yet, the technological sophistication and prowess of the US military are not currently in question, and we should be asking if the resources flowing toward JADC2 could be more advantageously focused elsewhere. Ultimately, JADC2 could be a very expensive solution looking for a problem to solve.
The technological overmatch promised by JADC2 also runs the risk of empowering headquarters elements at the expense of the tactical warfighter. An inherent risk of a fully integrated battlefield information system is the potential for tactical micromanagement by command elements that are far removed from the physical battlefield. With the tendency of military leaders to focus on metrics, JADC2 could have a deleterious effect on tactical decision-making, resulting in decision haste to avoid decision delay, or the opposite, decision inertia from having to wait for approvals to filter through multiple levels of command. And there is evidence that senior commanders are already shaping their thinking around a system that has yet to be tested in a strategic conflict—the risk is eroded critical thinking and the ceding of human judgment and reason to JADC2-derived options. Similarly, it is unclear what will happen when commanders begin deferring to AI-derived courses of action over the recommendations of the people they command, or the impact that will have on critical-thinking skills across the force. Of course, it is not a foregone conclusion that JADC2 will minimize and displace human ingenuity on the battlefield. However, the platform is currently being portrayed as the ultimate solution to conflict and the determining factor in any future war.
Russian communications and equipment debacles, taken at face value, make a solid case for a JADC2-like system, but they also present a major lesson the US military should learn from as it moves toward a centralized command-and-control platform. The real irony of Russia’s performance in Ukraine is that President Vladimir Putin spent much of the last decade modernizing his military, investing billions in new tanks, armor, and weapons while neglecting spare parts, training, and the basic machinery required for extended supply lines. As Russia spent its billions to modernize its military, no one questioned—as often happens in an autocracy—whether allocating that money toward acquiring the newest technology was the best use of those resources. Ultimately, Russia’s latest technology quickly became deadweight on the battlefield because matching investments in training and maintenance were never made.
You Complete Me, I Think . . .
Another central tenet of the JADC2 concept is the assumption that in great power competition the military needs to be prepared to fight a single, decisive battle. However, as the military historian Cathal Nolan reminds us, “victory in battle rarely determines the outcome of war.” A tactical win is not a strategic victory and, even granting the supposition that JADC2-derived courses of action will almost always deliver a decisive outcome, JADC2’s theoretical underpinnings could encourage battles with no clear strategic objective. In the end, a decisive outcome absent a strategic goal is a pointless act that further conflates successful battles with strategic victories.
Furthermore, as a concept, JADC2 is designed to achieve decision dominance but, as the system matures, its AI-derived courses of action may not translate to strategic objectives. For example, if casualties or the loss of equipment are too heavily weighted, the algorithm could arrive at a conclusion where the only winning move is to not engage the enemy. Presently, JADC2 is lauded as capable of providing an appropriate solution for any combat scenario, but because algorithms are inscrutable, commanders will have little means or incentive to argue against a JADC2 course of action, which could restrict their freedom of movement or thought. For example, in 1979 and again in 1983, nuclear early-warning systems frantically urged operators to launch missiles in retaliation for what the systems thought were attacks—essentially, in both scenarios the system designed to improve decision-making nearly caused nuclear war. Also important is the recent analysis that found the targeting processes of precision strike missions—even those processes that are mostly manual and subject to multilevel review—still result in numerous cases of civilian casualties and mistaken identities. We should, therefore, be asking if it is correct to assume that JADC2 will produce more accurate outcomes when it will rely on the same intelligence data and information that currently drive operational decisions.
Ultimately, the Ukrainian conflict has been an exposé of Russian hubris and miscalculation. Russia’s false assumption that it could quickly depose the Ukrainian government drove the Russian planning effort. But Russia’s inability to recover quickly after this plan failed is an example of how a highly centralized and secretive organization has difficulty executing complex operations—in this case, coordinated and synchronized multi-domain operations. In practice, over-tech-ing, or adopting technology that is too sophisticated and sensitive to meet the needs of a tactical element, is easy to do. For example, the Army has long sought to build an exoskeleton for its special operations forces. But current research on exoskeleton technology has failed to deliver on the concept’s intended purpose—namely, improving a soldier’s performance and safety on the battlefield. Projects that have pulled together sensors and physical components to create an exoskeleton suit have been bulky, cumbersome, and full of security concerns. Again, over-tech-ing battlefield technology is easy, but delivering the right amount of functionality, paired with the flexibility required to shoot, move, and communicate in high-stress combat scenarios, is hard. In the end, the technology simply needs to work on demand because, as Mick Mulroy, a former US DoD official and CIA paramilitary operations officer, said, “If you can’t communicate in the field . . . all you’re doing is camping.”
There Is No Panacea for War, but We Can Make It Less Costly
Retired Admiral James Stavridis recently cautioned that “you can become utterly dependent on a new, glamorous technology, be it cyber, space, artificial intelligence. . . . It’ll enable you. It’ll move you forward. But does it create a potential Achilles’ heel? Often it does.” A related perspective is that soon, “soldiers on the battlefield may depend more on artificial intelligence than their own comrades.” No matter what the future holds, the character of warfare will continue to be changed by technology. And, to be fair, some of the discrete elements of JADC2 are worthy goals, and many of its associated initiatives with cloud computing, sensor deployment, and more advanced weapons platforms will result in better outcomes on the battlefield. However, if we fail to ask the tough questions about JADC2 now, we will continue down a pathway paved in assumptions and a belief that JADC2 will be the critical technology in any future multi-domain battle. Moving forward, emphasis should be simultaneously placed on alternative or complementary initiatives and technologies to improve command and control, bolster security, and establish information resiliency to mitigate the risks of miscalculating JADC2’s potential. Ultimately, a tactical solution at strategic scale, designed to move fast and strike hard, risks being decisive only in its ability to undermine the cognitive functions and human ingenuity necessary to win our future wars.
Capt. Maggie Smith, PhD, is a US Army cyber officer currently assigned to the Army Cyber Institute at the United States Military Academy where she is a scientific researcher, an assistant professor in the Department of Social Sciences, and an affiliated faculty of the Modern War Institute. She is also the coeditor of this series and director of the Competition in Cyberspace Project.
Jason P. Atwell is the senior intelligence analysis and national security subject matter expert with Mandiant’s Global Intelligence and Advanced Practices business group and is the co-lead for Mandiant’s military veteran and reserve component employee resource group. Jason is also a US Army Reserve officer serving within United States Cyber Command.
The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Senior Airman Daniel Hernandez, US Air Force
The Defense Department (DOD) is taking a step-by-step approach to zero trust with its Zero Trust Portfolio Management Office, which the DOD Office of the CIO established in January. The department expects to release a copy of its zero trust strategy, with measurable outcomes, in the next couple of months, according to Randy Resnick, senior advisor of the Zero Trust Portfolio Management Office at DOD CIO/Cybersecurity.
DOD created the office after identifying several opportunities for improvement in its cybersecurity strategy.
Resnick said money and resources had been thrown at cybersecurity for years, yet ransomware attacks continued to happen. After experimenting with zero trust, DOD’s CIO office discovered that the approach could not only significantly slow cyber incidents but stop them altogether.
During FedInsider’s Action Steps to Zero Trust event, Resnick explained how the portfolio office will synchronize all DOD cyber efforts, bringing them together into a cohesive single “belly button” that lets the DOD CIO make sense of what is happening with zero trust across the department.
“The office will keep everybody in sync so we’re not going to have this issue of non-interoperability and non-standard implementations of zero trust to prioritize and align all of the efforts in zero trust,” Resnick said. “We’re going to do this at an enterprise level. We believe the enterprise approach to zero trust is the answer for DOD rather than doing it project by project.”
Resnick described zero trust as a cybersecurity framework and a strategy; it is not something you can buy. Implementing zero trust means creating a user inventory of who and what is allowed on the network.
“Each user and each device has to pass through two tests. They have to be authorized to get onto the network and have to be authenticated to get on the network — both have to happen. If one or the other doesn’t happen or fails, they aren’t allowed on the network,” Resnick said.
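In code, the rule Resnick describes is a simple conjunction: authentication and authorization are separate tests, and both must pass. The sketch below is illustrative only; the names (AccessRequest, admit_to_network) are invented for this example and do not reflect any actual DOD system.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str
    authenticated: bool  # the credential check passed
    authorized: bool     # the identity appears in the approved inventory

def admit_to_network(req: AccessRequest) -> bool:
    # Both tests must pass; if either one fails, access is denied.
    return req.authenticated and req.authorized

# An authenticated but unauthorized device is still kept off the network.
print(admit_to_network(AccessRequest("laptop-042", True, False)))  # False
```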
Resnick also discussed the difference between having the need to know and having the right to know when attempting to access data.
“Because just having the need to know doesn’t mean you have the right to know. You may have the need to know to get to a folder, but you may not have the right to know to get into a specific file in that folder, so if you’re asking for access to a file both have to occur,” Resnick said.
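The folder-versus-file distinction amounts to two layered access-control lists. A minimal sketch, with hypothetical users and file names chosen purely for illustration:

```python
# "Need to know" gates the folder; "right to know" gates each file inside it.
FOLDER_ACL = {"case-files": {"alice", "bob"}}            # need to know
FILE_ACL = {("case-files", "report-07.docx"): {"alice"}} # right to know

def can_read(user: str, folder: str, filename: str) -> bool:
    # Both layers must grant access before the file can be read.
    return (user in FOLDER_ACL.get(folder, set())
            and user in FILE_ACL.get((folder, filename), set()))

print(can_read("bob", "case-files", "report-07.docx"))  # False: folder yes, file no
```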
Zero trust requires a list of checks, balances and tests throughout the entire process before granting data access.
“Once you sign out your session and you go back in five minutes later, the whole process continues from the beginning again. There is no assumption you are good for the day, you are only good for the session,” Resnick said. “Zero trust really tests the access rights to data, making sure the data is being protected from users that are not supposed to have any rights or access to that data.”
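Session-scoped trust can be modeled as an expiring grant: nothing carries over once the session ends. The sketch below assumes a fixed session lifetime; the 15-minute value is a placeholder, not a DOD setting.

```python
import time

SESSION_TTL_SECONDS = 15 * 60  # illustrative lifetime only

class Session:
    def __init__(self, user: str):
        self.user = user
        self.issued = time.monotonic()

    def is_active(self) -> bool:
        return time.monotonic() - self.issued < SESSION_TTL_SECONDS

def request_data(session: Session) -> bytes:
    # No standing trust: an expired or signed-out session forces the whole
    # authentication and authorization process to start over.
    if not session.is_active():
        raise PermissionError("Session ended; re-verify user and device.")
    return b"requested data"
```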
Resnick also discussed whether multi-factor authentication (MFA) will be a good component for zero trust in the future.
One concern with MFA is that it is only directed toward the user and completely ignores device security. Device security, such as software checks and patch updates, is critical to a robust cybersecurity strategy.
“The device has to be checked for hardware, firmware, software to make sure that nothing was modified or changed,” Resnick said. “The device has to be enrolled in the system to even know that it can get onto the system in the first place, otherwise you’re not allowed at all. It really is MFA connected to the big ‘yes’ for the device that will let you get onto the system. I’m a big proponent of MFA, but it has to come along with something else, otherwise you’re not really completing the picture here.”
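Resnick’s point reduces to another conjunction: the user factor (MFA) must pair with a device posture check. A hedged sketch, with the posture fields invented for illustration rather than drawn from any DOD specification:

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    enrolled: bool     # the device is registered with the system at all
    hardware_ok: bool  # hardware integrity verified
    firmware_ok: bool  # firmware unmodified since enrollment
    software_ok: bool  # software unchanged and patches current

def grant_access(mfa_passed: bool, device: DevicePosture) -> bool:
    device_yes = (device.enrolled and device.hardware_ok
                  and device.firmware_ok and device.software_ok)
    # MFA alone does not complete the picture; it needs the device's big "yes."
    return mfa_passed and device_yes
```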
DOD is working on a major plan that spans the zero-trust spectrum. Resnick wants the department to break down all seven pillars of zero trust into actionable, outcomes-based activities.
“We grouped each pillar into three chunks. We threw chunk one into fiscal 2023, we put chunk two into fiscal 2024 and put chunk three into fiscal 2025, and we felt it was a doable and achievable effort,” Resnick said. “So, we think we cracked the code on how to step through zero trust with measurable outcomes. It answers the question of where do I start? No one has been able to answer that question, and we believe we made great strides in trying to answer that question.”
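The structure Resnick describes maps naturally to a small table of pillars and fiscal-year chunks. The pillar names below follow DOD’s commonly published zero trust pillars; the chunk labels are placeholders standing in for the actual activities, which had not been released:

```python
# Seven pillars, each split into three chunks across three fiscal years.
PILLARS = [
    "User", "Device", "Network & Environment", "Application & Workload",
    "Data", "Visibility & Analytics", "Automation & Orchestration",
]

ROADMAP = {
    pillar: {"FY2023": "chunk one", "FY2024": "chunk two", "FY2025": "chunk three"}
    for pillar in PILLARS
}

for pillar, schedule in ROADMAP.items():
    print(f"{pillar}: {schedule}")
```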
A copy of this step-by-step process, which also measures desired outcomes, will be released in about six to eight weeks, he added.
“Staying under the present system we have today, I believe, is allowing the network to remain in a vulnerable state, and the faster we move to zero trust it becomes less vulnerable,” Resnick said.
Federal Acquisition Service Commissioner Sonny Hashmi said the new buyer experience tool “was built using human-centered design to address pain points in the acquisition process.”
The General Services Administration is launching a new tool to help simplify the federal buying process, providing the acquisition community with streamlined market research, searchable templates and interactive resources.
Sonny Hashmi, commissioner of the Federal Acquisition Service, announced the launch of buy.gsa.gov on Tuesday afternoon and said in a blog post that the buyer experience tool “was built using human-centered design to address pain points in the acquisition process.”
The platform provides a four-step process for anyone seeking information about the federal buying experience: planning; developing documents; researching products, services, and pricing; and requesting a quote or purchase. Users can browse samples, templates and tips for performance work statements, statements of work and other documents required for services purchases.
Hashmi acknowledged the years-long criticisms surrounding the federal buying process in his post, writing: “For years, the federal acquisition community has been asking for a simpler way to get the information it needs to make smarter purchases while saving taxpayer dollars.”
He added that buy.gsa.gov was the result of a government-wide user research and usability testing effort that involved GSA acquisition experts, federal agencies and vendors.
GSA first previewed the new buyer experience tool earlier this month while highlighting key elements of its Federal Marketplace Spring 2022 release.
“With the launch of our new buyer experience, we highlight GSA’s commitment to our customers, suppliers, and workforce while improving the buying process,” Hashmi said in a statement at the time. “I am excited for our users to see what they helped develop and look forward to watching it grow and expand in the years to come.”
The agency said its focus for the latest Federal Marketplace Strategy was to reduce the burden on suppliers of goods and services to the federal government.
GSA’s Digital Innovation Division – a team within the Federal Acquisition Service – is tasked with managing the development of the site. The team said it focused on the eight requirements of the 21st Century Integrated Digital Experience Act (IDEA) for modernized websites, including accessibility, mobile-friendly capabilities, user-centered experiences and secure, searchable functionality.
The site also includes resources for vendors, including a support center, a contractor start-up kit and a forecast of contracting opportunities.
Cybersecurity in healthcare involves protecting electronic information and assets from unauthorized access, use and disclosure. Cybersecurity has three goals: protecting the confidentiality, integrity and availability of information, also known as the “CIA triad.”
In today’s electronic world, cybersecurity in healthcare and protecting information is vital for the normal functioning of organizations. Many healthcare organizations have various types of specialized hospital information systems such as EHR systems, e-prescribing systems, practice management support systems, clinical decision support systems, radiology information systems and computerized physician order entry systems. Additionally, thousands of devices that comprise the Internet of Things must be protected as well. These include smart elevators, smart heating, ventilation and air conditioning (HVAC) systems, infusion pumps, remote patient monitoring devices and others. These are some of the assets healthcare organizations typically have, in addition to those discussed below.
Email
Email is a primary means of communication within healthcare organizations. Information of all kinds is transacted, created, received, sent and maintained within email systems. Mailbox storage tends to grow as individuals store valuable information such as intellectual property, financial information and patient records. As a result, email security is a very important part of cybersecurity in healthcare.
Phishing is a top threat: most significant security incidents are caused by it. Unwitting users may click on a malicious link or open a malicious attachment within a phishing email and infect their computer systems with malware. In certain instances, that malware may spread via the network to other computers. A phishing email may also elicit sensitive or proprietary information directly from the recipient. Phishing is highly effective because it fools the recipient into taking the attacker’s desired action. Accordingly, regular security awareness training is key to thwarting phishing attempts.
Physical Security
Unauthorized physical access to a computer or device may lead to its compromise. For example, there are physical techniques that may be used to hack a device. Physical exploitation of a device may defeat technical controls that are otherwise in place. Physically securing a device, then, is important to safeguard its operation, proper configuration and data.
One example is leaving a laptop unattended while traveling or while working in another location. Careless actions may lead to the theft or loss of the laptop. Another example is an evil maid attack, in which a device is altered in an undetectable way so that a cybercriminal can access it later, for instance by installing a keylogger to capture credentials and other sensitive information.
Legacy Systems
Legacy systems are systems that are no longer supported by the manufacturer; they may include applications, operating systems, or other software and hardware. One challenge for cybersecurity in healthcare is that many organizations have a significant legacy system footprint. Because the manufacturer no longer supports these systems, security patches and other updates are generally unavailable for them.
Legacy systems may exist within organizations because they are too expensive to upgrade or because an upgrade is not available. Operating system manufacturers may sunset their products, and healthcare organizations may lack the cybersecurity budget to upgrade to currently supported versions. Medical devices typically run legacy operating systems, and legacy operating systems may also persist to support legacy applications for which there is no replacement.
Healthcare Stakeholders
Patients
Patients need to understand how to securely communicate with their healthcare providers. Additionally, if patients engage virtually with their healthcare providers, whether through a telehealth platform, e-visits, secure messaging, or otherwise, patients need to understand the applicable privacy and security policies and how to keep their information private and secure.
Workforce Members
Workforce members need to understand the privacy and security policies of the healthcare organization. Regular security awareness training is essential to cybersecurity in healthcare so that workforce members are aware of threats and what to do in case of actual security incidents. Workforce members also need to know who to contact in the event of a question or problem. In essence, workforce members can be the eyes and ears for the cybersecurity team. This will help the cybersecurity team understand what is working and what is not working in an effort to secure the information technology infrastructure and information.
C-Suite
More healthcare organizations now have a chief information security officer (CISO) in place to make executive decisions about the cybersecurity program. CISOs typically set strategy, while the cybersecurity team members who report to the CISO execute it. The CISO is an executive who ideally sits at the same level as other C-suite executives, such as the chief financial officer and chief information officer. The greater the executive-level buy-in, the stronger the top-down support for the organization’s cybersecurity program.
Vendors/Market Suppliers
A major retailer was breached as a result of a cyberattack on its HVAC vendor. Stolen credentials from the HVAC vendor were used to break into the retailer’s systems. In essence, this was a supply chain attack, since the cyberattackers compromised the HVAC vendor to ultimately target the retailer. Similar supply chain attacks have since compromised healthcare information systems through vendors’ stolen credentials.
Some large organizations have fairly robust healthcare cybersecurity programs. However, many of these organizations also rely upon tens of thousands of vendors, and to the extent that those vendors have lax or inferior security policies, they create a problem for the healthcare organization. Stolen vendor credentials or compromised vendor accounts may result in a compromise of the healthcare organization itself, such as through phishing or other means. A vendor may also hold elevated privileges within a healthcare organization’s information technology environment, so a compromised vendor account or stolen credentials may give an unauthorized third party (a cyberattacker) elevated access to the organization’s information technology resources.
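One mitigation implied here is strict scoping of vendor access. A minimal sketch, assuming a hypothetical scope map; the vendor and system names are invented for illustration:

```python
# Each vendor identity can reach only the systems its contract requires.
VENDOR_SCOPE = {"hvac-vendor": {"building-controls"}}

def vendor_may_access(vendor: str, system: str) -> bool:
    # Even valid (or stolen) credentials stop at the scope boundary.
    return system in VENDOR_SCOPE.get(vendor, set())

print(vendor_may_access("hvac-vendor", "building-controls"))  # True
print(vendor_may_access("hvac-vendor", "ehr-database"))       # False
```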