VA sounds confident enough to give a timeline for when it will make selections for the T4NG2 contract, even as one protest remains active in the courts.
In a one-sentence notice, the Veterans Affairs Department has announced its plan to make awards for the recompete of a $60 billion IT services contract vehicle within the next few weeks.
The Transformation Twenty-one Total Technology Next Generation 2 contract is the next iteration of T4NG, which is VA’s main vehicle for buying the services it needs to run its systems as well as modernize those systems.
VA plans to make 30 awards, with half reserved for veteran-owned small businesses and the remainder competed on a full-and-open basis.
Many of the filings are still under seal, but we do know that the judge issued a stay on Oct. 15. It isn’t clear what the stay covers; a stay generally is an order to stop something from happening.
But if VA plans to make awards in the next couple of weeks, the stay likely isn’t meant to stop that from happening. Attempts to get clarification from attorneys representing Booz Allen have been unsuccessful.
It is worth noting that Booz Allen is the largest prime contractor under the current T4NG iteration that opened for business in 2016.
Booz Allen has received approximately $2.9 billion in task order spend, according to GovTribe data. That translates to about 20% of VA’s total $14 billion in obligations against T4NG.
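As a quick sanity check of that percentage (my arithmetic, using only the two GovTribe figures cited above):

```python
# Booz Allen's share of VA's T4NG obligations, per the figures above.
booz_allen = 2.9e9    # dollars in task orders to Booz Allen
total_t4ng = 14e9     # total VA obligations against T4NG

print(f"{booz_allen / total_t4ng:.1%}")  # 20.7%, consistent with "about 20%"
```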
In its filing with the court, Booz Allen objected to VA’s evaluation criteria, claiming they didn’t allow the department to draw meaningful distinctions between bidders.
VA uses the T4NG program for a whole host of IT requirements, including program management, strategy, enterprise architecture, software engineering, operations, maintenance and training.
The new version will have a $60 billion ceiling and a potential 10-year period of performance, beginning with a five-year base and one option for five additional years.
Like the current iteration, T4NG2 will have an on-ramp process in later years to keep offerings and competition fresh.
FBI Director Christopher Wray called China the ‘defining threat of this generation’ in ’60 Minutes’ panel
Published 10/22/23 11:00 PM ET
Mary Papenfuss
In a chilling, riveting warning Sunday, the so-called “Five Eyes” intelligence chiefs from across the globe laid bare the unprecedented threat of China’s historically massive theft of intellectual property, trade secrets and personal data.
FBI Director Christopher Wray warned on CBS’ 60 Minutes that he believes China represents the “defining threat of this generation.”
“There is no country that presents a broader, more comprehensive threat to our ideas, our innovation, our economic security, and ultimately, our national security,” Wray told correspondent Scott Pelley.
“We have seen efforts by the Chinese government … trying to steal intellectual property, trade secrets, personal data — all across the country,” he added, revealing that there are currently 2,000 active U.S. investigations related to Chinese government efforts to steal information.
“We’re talking about agriculture, biotech, health care, robotics, aviation, academic research,” said Wray, who noted it’s not just a “Wall Street problem,” it’s a “Main Street problem” that costs workers their jobs.
“You have [in China] the biggest hacking program in the world by far, bigger than every other major nation combined,” he noted.
In 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) made history with the first direct detection of gravitational waves—ripples in space and time—produced by a pair of colliding black holes.
Since then, LIGO and its sister detector in Europe, Virgo, have detected gravitational waves from dozens of mergers between black holes as well as from collisions between a related class of stellar remnants called neutron stars. At the heart of LIGO’s success is its ability to measure the stretching and squeezing of the fabric of space-time on scales 10 thousand trillion times smaller than a human hair.
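To put that comparison in rough numbers (a back-of-the-envelope gloss, not a figure from the LIGO team): a human hair is on the order of $10^{-4}$ meters across, and “10 thousand trillion” is $10^{16}$, so the displacements in question are roughly

$$\frac{10^{-4}\,\text{m}}{10^{16}} \approx 10^{-20}\,\text{m},$$

far smaller than even a single proton (about $10^{-15}$ m across).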
As incomprehensibly small as these measurements are, LIGO’s precision has continued to be limited by the laws of quantum physics. At very tiny, subatomic scales, empty space is filled with a faint crackling of quantum noise, which interferes with LIGO’s measurements and restricts how sensitive the observatory can be.
Now, writing in a paper accepted for publication in Physical Review X, LIGO researchers report a significant advance in a quantum technology called “squeezing” that allows them to skirt around this limit and measure undulations in space-time across the entire range of gravitational frequencies detected by LIGO.
This new “frequency-dependent squeezing” technology, in operation at LIGO since it resumed operation in May 2023, means that the detectors can now probe a larger volume of the universe and are expected to detect about 60% more mergers than before. This greatly boosts LIGO’s ability to study the exotic events that shake space and time.
“We can’t control nature, but we can control our detectors,” says Lisa Barsotti, a senior research scientist at MIT who oversaw the development of the new LIGO technology, a project that originally involved research experiments at MIT led by Matt Evans, professor of physics, and Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics and the dean of the School of Science. The effort now includes dozens of scientists and engineers based at MIT, Caltech, and the twin LIGO observatories in Hanford, Washington, and Livingston, Louisiana.
“A project of this scale requires multiple people, from facilities to engineering and optics—basically the full extent of the LIGO Lab with important contributions from the LIGO Scientific Collaboration. It was a grand effort made even more challenging by the pandemic,” Barsotti says.
“Now that we have surpassed this quantum limit, we can do a lot more astronomy,” explains Lee McCuller, assistant professor of physics at Caltech and one of the leaders of the new study. “LIGO uses lasers and large mirrors to make its observations, but we are working at a level of sensitivity that means the device is affected by the quantum realm.”
The results also have ramifications for future quantum technologies such as quantum computers and other microelectronics as well as for fundamental physics experiments. “We can take what we have learned from LIGO and apply it to problems that require measuring subatomic-scale distances with incredible accuracy,” McCuller says.
“When NSF first invested in building the twin LIGO detectors in the late 1990s, we were enthusiastic about the potential to observe gravitational waves,” says NSF Director Sethuraman Panchanathan. “Not only did these detectors make possible groundbreaking discoveries, they also unleashed the design and development of novel technologies. This is truly exemplary of the DNA of NSF—curiosity-driven explorations coupled with use-inspired innovations. Through decades of continuing investments and expansion of international partnerships, LIGO is further poised to advance rich discoveries and technological progress.”
The laws of quantum physics dictate that particles, including photons, will randomly pop in and out of empty space, creating a background hiss of quantum noise that brings a level of uncertainty to LIGO’s laser-based measurements. Quantum squeezing, which has roots in the late 1970s, is a method for hushing quantum noise, or more specifically, for pushing the noise from one place to another with the goal of making more precise measurements.
The term squeezing refers to the fact that light can be manipulated like a balloon animal. To make a dog or giraffe, one might pinch one section of a long balloon into a small precisely located joint. But then the other side of the balloon will swell out to a larger, less precise size. Light can similarly be squeezed to be more precise in one trait, such as its frequency, but the result is that it becomes more uncertain in another trait, such as its power. This limitation is based on a fundamental law of quantum mechanics called the uncertainty principle, which states that you cannot know both the position and momentum of objects (or the frequency and power of light) at the same time.
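In standard quantum-optics notation (my gloss; the article itself stays qualitative), the balloon tradeoff is the uncertainty relation between the two quadratures of light, which roughly correspond to its phase (timing) and amplitude (power):

$$\Delta X_1 \, \Delta X_2 \;\ge\; \frac{1}{4},$$

in units where ordinary vacuum has $\Delta X_1 = \Delta X_2 = \tfrac{1}{2}$. Squeezing pushes one uncertainty below its vacuum value only by inflating the other; the product can never drop below the quantum bound.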
Since 2019, LIGO’s twin detectors have been squeezing light in such a way as to improve their sensitivity to the upper frequency range of gravitational waves they detect. But, in the same way that squeezing one side of a balloon results in the expansion of the other side, squeezing light has a price. By making LIGO’s measurements more precise at the high frequencies, the measurements became less precise at the lower frequencies.
“At some point, if you do more squeezing, you aren’t going to gain much. We needed to prepare for what was to come next in our ability to detect gravitational waves,” Barsotti explains.
Now, LIGO’s new frequency-dependent optical cavities—long tubes about the length of three football fields—allow the team to squeeze light in different ways depending on the frequency of gravitational waves of interest, thereby reducing noise across the whole LIGO frequency range.
“Before, we had to choose where we wanted LIGO to be more precise,” says LIGO team member Rana Adhikari, a professor of physics at Caltech. “Now we can eat our cake and have it too. We’ve known for a while how to write down the equations to make this work, but it was not clear that we could actually make it work until now. It’s like science fiction.”
Uncertainty in the quantum realm
Each LIGO facility is made up of two 4-kilometer-long arms connected to form an “L” shape. Laser beams travel down each arm, hit giant suspended mirrors, and then travel back to where they started. As gravitational waves sweep by Earth, they cause LIGO’s arms to stretch and squeeze, pushing the laser beams out of sync. This causes the light in the two beams to interfere with each other in a specific way, revealing the presence of gravitational waves.
However, the quantum noise that lurks inside the vacuum tubes that encase LIGO’s laser beams can alter the timing of the photons in the beams by minutely small amounts. McCuller likens this uncertainty in the laser light to a can of BBs.
“Imagine dumping out a can full of BBs. They all hit the ground and click and clack independently. The BBs are randomly hitting the ground, and that creates a noise. The light photons are like the BBs and hit LIGO’s mirrors at irregular times,” he said in a Caltech interview.
The squeezing technologies that have been in place since 2019 make “the photons arrive more regularly, as if the photons are holding hands rather than traveling independently,” McCuller said. The idea is to make the frequency, or timing, of the light more certain and the amplitude, or power, less certain as a way to tamp down the BB-like effects of the photons.
This is accomplished with the help of specialized crystals that essentially turn one photon into a pair of entangled (connected) photons with lower energy. The crystals don’t directly squeeze light in LIGO’s laser beams; rather, they squeeze stray light in the vacuum of the LIGO tubes, and this light interacts with the laser beams to indirectly squeeze the laser light.
“The quantum nature of the light creates the problem, but quantum physics also gives us the solution,” Barsotti says.
An idea that began decades ago
The concept for squeezing itself dates back to the late 1970s, beginning with theoretical studies by the late Russian physicist Vladimir Braginsky; Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus at Caltech; and Carlton Caves, professor emeritus at the University of New Mexico.
The researchers had been thinking about the limits of quantum-based measurements and communications, and this work inspired one of the first experimental demonstrations of squeezing in 1986 by H. Jeff Kimble, the William L. Valentine Professor of Physics, Emeritus at Caltech. Kimble compared squeezed light to a cucumber; the certainty of the light measurements is pushed into only one direction, or feature, turning “quantum cabbages into quantum cucumbers,” he wrote in an article in Caltech’s Engineering & Science magazine in 1993.
In 2002, researchers began thinking about how to squeeze light in the LIGO detectors, and in 2008, the first experimental demonstration of the technique was achieved at the 40-meter test facility at Caltech. In 2010, MIT researchers developed a preliminary design for a LIGO squeezer, which they tested at LIGO’s Hanford site. Parallel work done at the GEO600 detector in Germany also convinced researchers that squeezing would work. Nine years later, in 2019, after many trials and careful teamwork, LIGO began squeezing light for the first time.
“We went through a lot of troubleshooting,” says Sheila Dwyer, who has been working on the project since 2008, first as a graduate student at MIT and then as a scientist at the LIGO Hanford Observatory beginning in 2013. “Squeezing was first thought of in the late 1970s, but it took decades to get it right.”
Too much of a good thing
However, as noted earlier, there is a tradeoff that comes with squeezing. By moving the quantum noise out of the timing, or frequency, of the laser light, the researchers put the noise into the amplitude (power) of the laser light. The more powerful laser beams then push LIGO’s heavy mirrors around, causing a rumbling of unwanted noise corresponding to lower frequencies of gravitational waves. These rumbles limit the detectors’ ability to sense low-frequency gravitational waves.
“Even though we are using squeezing to put order into our system, reducing the chaos, it doesn’t mean we are winning everywhere,” says Dhruva Ganapathy, a graduate student at MIT and one of four co-lead authors of the new study. “We are still bound by the laws of physics.” The other three lead authors of the study are MIT graduate student Wenxuan Jia, LIGO Livingston postdoc Masayuki Nakano, and MIT postdoc Victoria Xu.
Unfortunately, this troublesome rumbling becomes even more of a problem when the LIGO team turns up the power on its lasers. “Both squeezing and the act of turning up the power improve our quantum-sensing precision to the point where we are impacted by quantum uncertainty,” McCuller says. “Both cause more pushing of photons, which leads to the rumbling of the mirrors. Laser power simply adds more photons, while squeezing makes them more clumpy and thus rumbly.”
A win-win
The solution is to squeeze light in one way for high frequencies of gravitational waves and another way for low frequencies. It’s like going back and forth between squeezing a balloon from the top and bottom and from the sides.
This is accomplished by LIGO’s new frequency-dependent squeezing cavity, which controls the relative phases of the light waves in such a way that the researchers can selectively move the quantum noise into different features of light (phase or amplitude) depending on the frequency range of gravitational waves.
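A minimal toy model may help make the logic concrete. The curves and the squeezing factor below are invented for illustration (LIGO’s real noise budget is far more complicated): shot noise is taken to dominate at high frequencies and radiation-pressure noise at low frequencies, and squeezing one quadrature suppresses one noise term while amplifying the other.

```python
import numpy as np

# Toy quantum-noise budget in arbitrary units; LIGO's real curves differ.
f = np.logspace(1, 3, 200)   # frequency band, 10 Hz to 1 kHz
shot = f / 100.0             # shot (phase) noise: dominates at high frequency here
radiation = 100.0 / f        # radiation-pressure (amplitude) noise: dominates at low frequency
r = 1.0                      # squeezing strength: e^-r suppression, e^+r penalty

def total_noise(squeeze_phase: bool) -> np.ndarray:
    """Quadrature sum of both noise terms for one fixed squeezing angle.

    Squeezing the phase quadrature suppresses shot noise but amplifies
    radiation-pressure noise, and vice versa (the balloon tradeoff).
    """
    if squeeze_phase:
        return np.sqrt((shot * np.exp(-r)) ** 2 + (radiation * np.exp(r)) ** 2)
    return np.sqrt((shot * np.exp(r)) ** 2 + (radiation * np.exp(-r)) ** 2)

phase_squeezed = total_noise(True)        # one fixed choice: wins at high frequencies
amplitude_squeezed = total_noise(False)   # the opposite fixed choice: wins at low frequencies
# Idealized frequency-dependent squeezing: the better angle at every frequency,
# so noise is reduced across the whole band rather than in only one regime.
frequency_dependent = np.minimum(phase_squeezed, amplitude_squeezed)
```

A single fixed squeezing angle wins in one band and loses in the other; the frequency-dependent scheme, idealized here as choosing the better angle at every frequency, improves on both everywhere.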
“It is true that we are doing this really cool quantum thing, but the real reason for this is that it’s the simplest way to improve LIGO’s sensitivity,” Ganapathy says. “Otherwise, we would have to turn up the laser, which has its own problems, or we would have to greatly increase the sizes of the mirrors, which would be expensive.”
LIGO’s partner observatory, Virgo, will likely also use frequency-dependent squeezing technology within the current run, which will continue until roughly the end of 2024. Next-generation larger gravitational-wave detectors, such as the planned ground-based Cosmic Explorer, will also reap the benefits of squeezed light.
With its new frequency-dependent squeezing cavity, LIGO can now detect even more black hole and neutron star collisions. Ganapathy says he’s most excited about catching more neutron star smashups. “With more detections, we can watch the neutron stars rip each other apart and learn more about what’s inside.”
“We are finally taking advantage of our gravitational universe,” Barsotti says. “In the future, we can improve our sensitivity even more. I would like to see how far we can push it.”
The study is titled “Broadband quantum enhancement of the LIGO detectors with frequency-dependent squeezing.” Many additional researchers contributed to the development of the squeezing and frequency-dependent squeezing work, including Mike Zucker of MIT and GariLynn Billingsley of Caltech, the leads of the “Advanced LIGO Plus” upgrades that include the frequency-dependent squeezing cavity; Daniel Sigg of LIGO Hanford Observatory; Adam Mullavey of LIGO Livingston Laboratory; and David McClelland’s group from the Australian National University.
As the government’s prime contracting goal for Small Disadvantaged Businesses continues to climb, the U.S. Small Business Administration’s Office of Inspector General is sounding the alarm, saying that many of those contracts may be going to ineligible companies.
The SBA OIG’s concern: a big chunk of the SDB dollars the government takes credit for each year goes to self-certified SDBs. In a recent report, the SBA OIG points out that in Fiscal Year 2022 “as much as $16.5 billion in prime contracts was awarded to small, disadvantaged businesses without a certification overseen by SBA.” The SBA OIG says that counting awards to self-certified SDBs is “inherently risky” and questions whether the SBA has effective measures in place to identify unqualified self-certified SDBs.
This is a complicated topic, but here are a few of my thoughts.
First, I think it is very likely that a significant percentage of the self-certified SDBs in SAM don’t meet the eligibility criteria.
The criteria to qualify as a self-certified SDB are essentially the same as for the 8(a) Program, which means they are complex and can be quite confusing. I think that many businesses either misunderstand what it takes to qualify, or don’t bother taking the time to do their due diligence before checking the SDB box in SAM. I have spoken to plenty of folks who didn’t realize that SDB status includes income and net worth tests; they believed that they could check the SDB box simply because their company was minority-owned. I even had one gentleman several years ago tell me he self-certified because he “felt” disadvantaged.
Second, I think that when it comes to SDB oversight, SBA has largely acted like Charlie Brown’s parents–that is, SBA has been almost completely out of the picture. For instance, SBA has told self-certified SDBs that the 8(a) Program eligibility criteria apply, but hasn’t provided guidance about how to fit the square SDB peg into the round 8(a) hole in cases where the 8(a) rules don’t seem to make sense for self-certified companies. SBA’s SDB website simply links to the 8(a) regulations with an implicit “good luck!” for companies trying to figure out how the rules apply to them.
Where could this sort of 8(a)/SDB confusion arise? Well, for example, in the wake of the Ultima federal court decision, most 8(a) Program applicants must provide written narratives demonstrating that they qualify as socially disadvantaged. SBA has provided fairly extensive guidance to 8(a) Program applicants and participants regarding the narratives. But how are self-certified SDBs supposed to meet the requirement to demonstrate their social disadvantage? Do they need to write a narrative and keep it in a desk drawer or in the cloud somewhere? Just ignore the whole discussion about narratives and hope that if they “feel” disadvantaged, that’s good enough? As far as I know, SBA has been completely silent.
Likewise, SBA hasn’t explained how a self-certified company is supposed to address 13 C.F.R. 124.106(a)(4), which says:
Any disadvantaged manager who wishes to engage in outside employment must notify SBA of the nature and anticipated duration of the outside employment and obtain the prior written approval of SBA. SBA will deny a request for outside employment which could conflict with the management of the firm or could hinder it in achieving the objectives of its business development plan.
SBA presumably wouldn’t process an “outside employment” request for approval from a self-certified SDB, but the regulation certainly seems to suggest that SBA’s approval is required. After all, nothing in the regulations exempts self-certified firms from this requirement–and again, to my knowledge, SBA hasn’t explained what SDBs are supposed to do to comply.
And what about all the 8(a) rules that call for something of a subjective judgment call? How does a self-certified company appropriately assess its own “potential for success” under 13 C.F.R. 124.107? How does it decide if the top officer has “managerial experience of the extent and complexity needed to run the concern” or if “[b]usiness relationships exist with non-disadvantaged individuals or entities which cause such dependence that the applicant or Participant cannot exercise independent business judgment without great economic risk” under 13 C.F.R. 124.106? And on and on.
The fact is that when it comes to complying with a set of eligibility rules that weren’t written for them, SDBs have been left without any concrete guidance–at least, that I’m aware of–from SBA. Without SBA guidance, even the SDBs that do their best due diligence have no choice but to guess how some of these 8(a) rules apply to them.
And that brings me to my third and final point. I think it’s almost inevitable that at some point in the not-too-distant future, the GAO or another watchdog will audit the self-certified component of the SDB program.
The report following this audit will make headlines and give SBA a black eye. The government’s SDB achievement on the annual scorecard will (appropriately) be called into question. Some unlucky and essentially random self-certified SDBs will be proposed for debarment and/or assessed other penalties to make an example of them and show that the government is super serious about getting tough on SDB misrepresentation. False Claims Act attorneys will suggest that anyone who won a federal contract while improperly certifying as an SDB should be liable for three times the contract’s value in damages.
Maybe for some companies, the SDB self-certification is worth it–perhaps because large primes they work with need SDB credit for their subcontracting goals. In my experience, though, many companies check the SDB box because there doesn’t seem to be any downside, and why not add “small disadvantaged business” to your marketing materials if you can?
If I were a self-certified SDB, though, I’d think twice about keeping that box checked. Are you really sure you qualify? And are the limited upsides of this self-certification really worth the risk, particularly given that the government no longer offers set-aside contracts for self-certified SDBs?
Stay tuned–I’m quite confident we haven’t heard the end of this one.
The phrase “cognitive warfare” doesn’t often appear in news stories, but it’s the crucial concept behind China’s latest efforts to use social media to target its foes.
Recent stories have ranged from Meta’s “Biggest Single Takedown” of thousands of false-front accounts on Facebook, Instagram, TikTok, X, and Substack to an effort to spread disinformation about the Hawaii fires to a campaign that used AI-generated images to amplify divisive U.S. political topics. Researchers and officials expect similar efforts to target the 2024 U.S. election, as well as any Taiwan conflict.
Chinese government and military writings say cognitive operations aim to “capture the mind” of one’s foes, shaping an adversary’s thoughts and perceptions and consequently their decisions and actions. Unlike U.S. defense documents and strategic thinkers, the People’s Liberation Army puts cognitive warfare on par with the other domains of warfare like air, sea, and space, and believes it key to victory—particularly victory without war.
Social media platforms are viewed as the main battlefield of this fight. China, through extensive research and development of its own platforms, understands the power of social media to shape narratives and perceptions of events and actions. When a typical user spends 2.5 hours a day on social media—36 full days out of the year, 5.5 years in an average lifespan—it is perhaps no surprise that the Chinese Communist Party believes it can, over time, shape and even control the cognition of individuals and whole societies.
A recent PLA Daily article lays out four social-media tactics, dubbed “confrontational actions”: Information Disturbance, Discourse Competition, Public Opinion Blackout, and Block Information. The goal is to achieve an “invisible manipulation” and “invisible embedding” of information production “to shape the target audience’s macro framework for recognizing, defining, and understanding events,” write Duan Wenling and Liu Jiali, professors of the Military Propaganda Teaching and Research Department of the School of Political Science at China’s National Defense University.
Information Disturbance (信息扰动). The authors describe it as “publishing specific information on social media to influence the target audience’s understanding of the real combat situation, and then shape their positions and change their actions.” Information Disturbance uses official social media accounts (such as CGTN, Global Times, and Xinhua News) to push and shape a narrative in specific ways.
While these official channels have recently taken on a more strident “Wolf Warrior” tone, Information Disturbance is not just about appearing strong, the analysts advise. Indeed, they cite how during 2014’s “Twitter War” between the Israel Defense Forces and the Palestinian Qassam Brigade, the Palestinians managed to “win international support by portraying an image of being weak and the victim.” The tactic, which predates social media, is reminiscent of Deng Xiaoping’s Tao Guang Yang Hui (韬光养晦)—literally translated as “Hide brightness, nourish obscurity.” China crafted a specific message to target the United States (and the West more broadly) under the official messaging of the CCP: that China was a humble nation focused on economic development and friendly relationships with other countries. This narrative was very powerful for decades; it shaped U.S. and other nations’ policy toward China.
Discourse Competition (话语竞争). The second type is a much more subtle and gradual approach to shaping cognition. The authors describe a “trolling strategy” [拖钓]: “spreading narratives through social media and online comments, gradually affecting public perception, and then helping achieve war or political goals.”
Here, the idea is to “fuel the flames” of existing biases and manipulate emotional psychology to influence and deepen a desired narrative. The authors cite the incredible influence that “invisible manipulation” and “invisible embedding” can have on social media platforms such as Facebook and Twitter during international events, and recommend using recommendation algorithms to push more and more information to target audiences with the desired biases. Over time, the emotion and bias will grow, and the targeted users will reject information that does not align with their perspective.
Public Opinion Blackout (舆论遮蔽). This tactic aims to flood social media with a specific narrative to influence the direction of public opinion. The main tool for “blacking out” public opinion is bots that drive the narrative viral, stamping out alternative views and news. Notably, given the growing use of AI in Chinese influence operations, the authors reference studies showing that a common and effective method of exerting cognitive influence is to use machine learning to mine user emotions and prejudices, screen and target the most susceptible audiences, and then quickly and intensively “shoot” customized “spiritual ammunition” at the target group.
This aligns with another PLA article, “How ChatGPT will Affect the Future of Warfare,” in which the authors write that generative AI can “efficiently generate massive amounts of fake news, fake pictures, and even fake videos to confuse the public” at an overall societal level. The idea is to create, in their words, a “flooding of lies,” disseminated by Internet trolls to create “altered facts,” sowing confusion in the target audience’s cognition regarding the truth of “facts” and playing on emotions of fear, anxiety and suspicion. The end-state for the targeted society is an atmosphere of insecurity, uncertainty, and mistrust.
Block Information (信息封锁). The fourth type focuses on “carrying out technical attacks, blockades, and even physical destruction of the enemy’s information communication channels.” The goal is to monopolize and control information flow by preventing an adversary from disseminating information. In this tactic, and in none of the others, the Chinese analysts believe the United States has a huge advantage. They cite, for example, the U.S. government’s 2009 authorization for Microsoft to cut off the Internet instant-messaging ports of Syria, Iran, Cuba and other countries, paralyzing their networks and trying to “erase” them from the world Internet. The authors also mention that in 2022 Facebook announced restrictions on some media in Russia, Iran, and other countries, but they falsely claim that the company did so to delete posts negative toward the United States and to help the U.S. gain an advantage in “cognitive confrontation.”
However, this disparity in power over the network is changing. With the rise in popularity of TikTok, it is conceivable that China has the ability to shape narratives and block negative information. For example, in 2019 TikTok reportedly suspended the account of a 17-year-old user in New Jersey after she posted a viral video criticizing the Chinese government’s treatment of the Uyghur ethnic minority. China has also demonstrated its influence over the Silicon Valley owners of popular social media platforms. Examples range from Mark Zuckerberg literally asking Xi what he should name his daughter to Elon Musk’s financial dependence on Communist China’s willingness to manufacture and sell Tesla cars. Indeed, NewsGuard has found that since Musk purchased Twitter, engagement with Chinese, Russian, and Iranian disinformation sources has soared by roughly 70 percent.
China has also begun to seek greater influence over the next versions of the Internet, where its analysts describe incredible potential to better control how the CCP’s story is told. While the U.S. lacks an overall strategy or policy for the metaverse (which uses augmented and virtual reality technologies), the Chinese Ministry of Industry and Information Technology released a five-year action plan in 2022 to lead in this space. The plan includes investing in 100 “core” companies and forming 10 public service platforms by 2026.
China did not invent the internet, but it seeks to be at the forefront of its future as a means of not just communication and commerce but conflict. Its own analysts openly discuss the potential power of this space to achieve regime goals not previously possible. The question is not whether China will wage cognitive warfare, but whether its targets’ minds and networks are ready.
Opinions, conclusions, and recommendations expressed or implied within are solely those of the author(s) and do not necessarily represent the views of the Air University, the Department of the Air Force, the Department of Defense, or any other U.S. government agency.
Rather than flip on the TV when major newsworthy events happen, like Hamas’ attack on Israel on Oct. 7 and the subsequent retaliation by Israeli forces in Gaza, we open up social media to get up-to-the-minute information. However, while television is still bound to regulations that require a modicum of truthful content, social media is a battleground of facts, lies, and deception, where governments, journalists, law enforcement, and activists are on an uneven playing field.
It is a massive understatement to use the term “fog of war” to describe what is happening in discussions of Hamas and Israel on social media. It’s a torrent of true horror, violent pronouncements, sadness, and disinformation. Some have capitalized on this moment to inflame tensions or gain clout by posting video game clips or older images of war recontextualized. Many governments, including the U.S., were shocked that Israeli intelligence failed to see the land, sea, and air attack coming. Israel is known for its controversial cyber defense and spyware used to tap into journalists’ and adversaries’ networks. How could this have happened?
It may come as a surprise to some that we are involved in an information war playing out across all social media platforms every day. But it’s one thing to see disinformation, and it’s another to be an active (or unwitting) participant in battle.
Different from individuals, states conduct warfare operations using the DIME model—diplomacy, information, military, and economics. Most states do everything they can to inflict pain and confusion on their enemies before deploying the military. In fact, attacking vectors of information is a well-worn tactic of war, and they are usually the first target when the charge begins. It’s common for telecom data and communications networks to be routinely monitored by governments, which is why the open data policies of the web are so concerning to many advocates of privacy and human rights.
With the worldwide adoption of social media, more governments are getting involved in low-grade information warfare through the use of cyber troops. According to a study by the Oxford Internet Institute in 2020, cyber troops are “government or political party actors tasked with manipulating public opinion online.” The Oxford research group was able to identify 81 countries with active cyber troop operations utilizing many different strategies to spread false information, including spending millions on online advertising. Importantly, this situation is vastly different from utilizing hacking or other forms of cyber warfare to directly attack opponents or infrastructure. Cyber troops typically utilize social media and the internet as it is designed, while employing social engineering techniques like impersonation, bots, and growth hacking.
Data on cyber troops is still limited because researchers rely heavily on takedown reports by social media companies. But the Oxford researchers were able to identify that, in 2020, Palestine was a target of Iranian information operations on Facebook and Israel was a target of Iran on Twitter, which indicates that disinformation campaigns know no borders. Researchers also noted that Israel developed high-capacity cyber troop operations internally, using tactics like botnets and human accounts to spread pro-government and anti-opposition narratives and to suppress anti-Israel content. The content Israeli cyber troops produced or engaged with included disinformation campaigns, trolling, amplification of favored narratives, and data-driven strategies to manipulate public opinion on social media.
Of course, there is no match for the cyber troops deployed by the U.S. government and the ancillary corporations hired to smear political opponents, foreign governments, and anyone who gets in the way. Even companies like Facebook have employed PR firms to use social media to trash the reputation of competing companies. It’s open warfare—and you’ve likely participated.
As for who runs influence operations online, researchers found evidence of a blurry boundary between government operatives and private firms contracted to conduct media manipulation campaigns online. This situation suggests that contemporary cyber operations are best characterized as fourth generation warfare, which blurs the lines between civilians and combatants.
It also has called into question the validity of the checks that platforms have built to separate fact from fiction. For instance, a graphic video of the war posted by Donald Trump Jr.—images he claimed came from a “source within Israel”—was flagged as fake through X’s Community Notes fact-checking feature. The problem, though, was that the video was real. This would not be the first time we have seen fact-checkers spread disinformation, as pro-Russian accounts did something similar in 2022.
Time and time again, we have seen social media used to shape public opinion, defame opponents, and leak government documents using tactics that involve deception by creating fake engagement, using search engine optimization, cloaked and imposter accounts, as well as cultural interventions through meme wars. Now more than ever we need politicians to verify what they are saying and arm themselves with facts. Even President Biden was fact-checked on his claim to have seen images of beheaded babies, when he had only read news reports.
Today, as we witness more and more attacks across Israel and Palestine, influential people—politicians, business people, athletes, celebrities, journalists, and folks just like me and you—are engaged in fourth generation warfare using networks of information as a weapon. The networks are key factors here, as engagement is what distributes bytes of information—like viral videos, hashtags, or memes—across vast distances.
If we have all been drafted into this war, here are some things that information scientist and professor Amelia Acker and I developed to gauge whether an online post might be disinformation. Ask yourself: Is it a promoted post or ad? This is a shortcut to massive audiences and can be a very cheap way to go viral. Is there authentic engagement on the post, or do all of the replies seem strange or unrelated? If you suspect the account is an imposter, conduct a reverse image search of profile pics and account banners, and look to see if the Wayback Machine has snapshots of the account from prior months or years. Lastly, to spot spam, view attached media (pictures, videos, links), look for duplicates, and see if the account engages in spam posting, for example, replying to lots of posts with innocuous comments.
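For illustration only, here is that checklist rendered as a rough script. The signal names and the simple flag count are my own framing of the questions above; no script can actually adjudicate truth, and a high count means “be skeptical,” not “this is false.”

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Observations a reader collects by hand; field names are illustrative."""
    is_promoted: bool                   # promoted post or ad?
    replies_seem_unrelated: bool        # engagement looks strange or inauthentic?
    profile_image_reused: bool          # reverse image search hits on other accounts?
    account_history_inconsistent: bool  # Wayback Machine snapshots don't match?
    media_duplicated_elsewhere: bool    # attached media appears in older posts?
    spammy_reply_pattern: bool          # mass replies with innocuous comments?

def red_flag_count(signals: PostSignals) -> int:
    """Count red flags; more flags warrants more skepticism, not proof of falsity."""
    return sum([
        signals.is_promoted,
        signals.replies_seem_unrelated,
        signals.profile_image_reused,
        signals.account_history_inconsistent,
        signals.media_duplicated_elsewhere,
        signals.spammy_reply_pattern,
    ])
```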
While my hope is for peace, we all must bear witness to these atrocities. In times of war, truth needs an advocate.
Formulas may be road-tested approaches to business challenges, but formulas have flaws. What worked yesterday might not be applicable or even plausible today. There are three primary weaknesses to relying on formulas to address business issues in a constantly changing environment: 1) they don’t work the same in all contexts; 2) they can be replicated by the competition; and 3) they can have hidden risks. To manage a fluctuating business climate, companies need a different toolkit. Instead of relying on static formulas that worked in the past, organizations need to focus on changing the way people think. This requires focusing on refining people’s cognitive skills, so they can better identify, assess, and solve unique problems in unique ways. This article covers three ways that companies can sharpen cognitive skills in their own organizations. Because when we know how to adapt, we can position ourselves for future success in an unknown environment.
The only constant in business is change, and it has recently accelerated to light-speed. If the rising rates of employee burnout are any indicator, there are no signs of all this turbulence slowing down. People will continue to face increasing stress and anxiety from persistent uncertainty and ambiguity. To mitigate this barrage, organizations often turn to the familiar — those formulas that have had a track record of success in the past. However, what worked in the past can’t fully address today’s challenges, as those formulas were spawned in a wholly different environment.
For example, one of our financial services clients was facing a chronic decline in membership. The attrition had reached a tipping point and was at risk of moving into a free fall. The sales team was tasked with the turnaround and moved to double the size of the sales force in the field. When membership had dropped before, an influx of new personnel had brought it back up, so it seemed like a reliable move. Yet this formula didn’t produce the anticipated results, as consumers’ engagement preferences had changed to conducting more business online.
Why You Can’t Rely on Former Formulas of Success
Formulas may be road-tested approaches to business challenges, but formulas have flaws. What worked yesterday might not be applicable or even plausible today. There are three primary reasons why you can’t rely on former formulas of success:
1. Formulas don’t work the same in all contexts.
McDonald’s is famous for using a formula of granting geographically nonexclusive licenses to franchisees. Because it is perceived as successful, many new franchisers adopted this same formula. However, research has shown that granting nonexclusive licenses increases the likelihood that a new franchiser will fail. Prospective franchisees fear their business will suffer if another unit of the same brand opens nearby. New franchisers need franchisees to grow, and the non-exclusive formula isn’t well suited to an up-and-coming brand.
Or consider JCPenney. In June 2011, Ron Johnson, the man in charge of Apple’s wildly profitable retail stores and a former Target executive, took the helm of the flailing retailer. He focused on implementing the formula he had found successful with his previous employers — eliminating constant markdowns, turning stores into destinations filled with branded merchandise, and reducing the number of private-label brands. Sixteen months later, Johnson was fired. Same-store sales fell by 25%, the company recorded a $1 billion loss, and its stock fell 19.72%. What worked at Target and Apple didn’t translate to JCPenney because its customers weren’t looking for experiences — they were looking for consistent deals, hard-to-find specialty sizes, and an unpretentious environment with high-quality house brands. Ron Johnson’s formula ended up being the exact opposite of what the JCPenney consumer was seeking.
2. Formulas also have a limited shelf life because they can be replicated by the competition.
When Bill Walsh became head coach of the National Football League’s San Francisco 49ers in 1979, he implemented a formula known as “The West Coast Offense.” With the West Coast offense, Walsh led the 49ers to Super Bowl championships during the 1981, 1984, and 1988 seasons. While the team went on to win two more Super Bowls, the benefits of the West Coast offense declined after other coaches began implementing similar formulas with their teams.
Another example is the famed “Toyota Way,” the legendary management system popularized by Jeffrey K. Liker in his 2004 book. The framework centered on a set of principles around organizational culture and continuous improvement (Kaizen), focusing on the root cause of problems, and engaging in ongoing innovation. This formula enabled Toyota to gain significant market share in the American market between 1986 and 1997. However, since then, competitors including Ford, Honda, General Motors, and Stellantis all have adopted the system, significantly diminishing the competitive advantage the formula afforded.
3. Formulas can have hidden risks.
Research on technology startups in Silicon Valley found that startups using the “High-Commitment Management Model,” which focuses on hiring employees based on cultural fit and developing strong emotional bonds with them, are less likely to fail and more likely to go public than startups that used other hiring approaches. However, the same study found that changing the hiring structure after a startup launches triples the likelihood of failure. While this may be an effective model for a small, flat organization, once the company begins to scale, the formula isn’t sustainable. This forces a major change in hiring practices, which, in turn, puts the organization at risk.
The Just-In-Time production system was another unique approach formulated by Toyota. Focused on making manufacturing as efficient as possible, Just-In-Time reduces the waiting time between work-in-progress procedures and lowers supply chain costs by delivering raw materials only when needed, rather than holding excess inventory. Yet when the 2020 pandemic hit, manufacturers had little inventory to meet production demand and no capability to resupply. As a result, global shortages are still reverberating across supply chains and manufacturers today.
To manage a fluctuating business climate, companies need a different toolkit. Instead of relying on static formulas that worked in the past, organizations need to focus on changing the way people think. This requires focusing on refining people’s cognitive skills, so they can better identify, assess, and solve unique problems in unique ways.
How to Sharpen Cognitive Skills in Your Organization
Cognitive skills are the mental processes that allow us to perceive, understand, and analyze information, and are essential for problem-solving, decision-making, and critical thinking. Psychologists Daniel Kahneman and Amos Tversky pioneered research into these higher-order processes, which Kahneman later described in his best-selling book, Thinking, Fast and Slow. He found that cognitive skills — which he deemed “slow” thinking — require more time and energy to effectively evaluate and apply reasoning to a problem. On the other hand, “fast” thinking is a more automatic and reactive response. Essentially, we need to apply our “slow” thinking skills to make better decisions and more effectively solve complex challenges. Fortunately, cognitive thinking skills can be expanded and improved with practice and training. Here are three basic ways to sharpen your organization’s cognitive skills:
1. Analyze known unknowns
Donald Rumsfeld, George W. Bush’s secretary of defense, became well known for his famous statement, “As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”
While Rumsfeld didn’t invent the concept, what’s deemed the Rumsfeld Matrix is a cognitive method for surfacing the things you think you know that, it turns out, you do not. There are four categories of thinking in this matrix: 1) Known knowns: things we are aware of and understand; 2) Known unknowns: things we are aware of but don’t understand; 3) Unknown knowns: things we understand but are not aware of; and 4) Unknown unknowns: things we are neither aware of nor understand. Using this approach can help to better identify blind spots, false assumptions, and information gaps.
For instance, an organization may know there’s a risk of losing 10% of its customers to a new competitor (a known known) and can easily manage and quantify the impact. It may also know that rain could affect business operations but not know how much rain will fall (a known unknown). That scenario requires multiple action plans for the most probable outcomes, and the readiness to switch to the right plan once more information is available.
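The matrix is just the cross product of two yes/no questions: are we aware of the thing, and do we understand it? A hypothetical sketch (the quadrant labels come from the matrix above; the code and annotations are mine):

```python
def rumsfeld_category(aware: bool, understood: bool) -> str:
    """Map the two yes/no questions onto the four quadrants of the matrix."""
    if aware and understood:
        return "known known"      # manage and quantify directly
    if aware:
        return "known unknown"    # prepare multiple contingency plans
    if understood:
        return "unknown known"    # surface tacit knowledge and blind spots
    return "unknown unknown"      # can't plan specifically; build resilience

# The competitor example above is a known known; the rain example, where the
# risk is visible but its magnitude isn't, is a known unknown.
assert rumsfeld_category(aware=True, understood=False) == "known unknown"
```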
2. Encourage divergent thinking
Divergent thinking is a thought process used to generate creative ideas by exploring many possible solutions. It involves breaking a problem down into its components to gain insight into each of them. Done in a spontaneous, free-flowing manner, it generates ideas in a random, unorganized fashion.
An example of divergent thinking would be generating as many uses as possible for a normal, everyday object: using a coin as a flathead screwdriver, say, or using a fork to dig a hole. Looking at a situation from a unique perspective can give rise to a unique solution. Organizations can apply divergent thinking in a variety of ways, from something as simple as bringing together groups of employees who normally don’t engage with one another to practicing synectics — the act of stimulating thought processes to uncover alternative ways to overcome obstacles. For instance, instead of tasking employees with finding ways to retain customers, they could be challenged to develop a list of ways to lose them, uncovering ideas that might never have been considered if approached the typical way.
3. Apply first-principles thinking
First-principles thinking is the idea of breaking down complicated problems into basic elements and then reassembling them from the ground up. Every play we see in the NBA was at some point created by someone who thought, “What would happen if the players did this?” and went out and tested the idea. Since then, thousands, if not millions, of plays have been created.
Coaches reason from first principles. The rules of basketball are the first principles: they govern what you can and can’t do. Everything is possible as long as it’s not against the rules. First-principles thinking allows for keeping better focus on the root components of problems rather than simply reacting to the symptoms.
For instance, Elon Musk, in his mission to transform space travel with his company SpaceX, tried to buy rockets so he could launch them into orbit. However, the costs of buying a rocket outright were too high to make SpaceX a successful company. Instead, he applied first-principles thinking to boil down a rocket to its most fundamental components and materials. He realized the price of the materials to build a rocket was much lower than the price of buying one outright. In other words, building rockets made more sense for the business model he was creating.
As is the case with so many things, success is found in moderation. Formulas can be a helpful guide, but complex challenges typically require unique insights and perspectives which only come from applying cognitive thinking skills. Focusing on these skills also helps employees to be more independent learners. If an individual knows how to learn, they will grow abilities and behaviors that are transferable to all kinds of contexts and problems thrown their way, which is inherent to the art of effectively navigating change. When we know how to adapt, we can position ourselves for future success in an unknown environment.
Andrea Belk Olson is a differentiation strategist, speaker, author, and customer-centricity expert. She is the CEO of Pragmadik, a behavioral science driven change agency, and has served as an outside consultant for EY and McKinsey. She is the author of 3 books, a 4-time ADDY® award winner, and a contributing author for Entrepreneur Magazine, Rotman Management Magazine, Chief Executive Magazine, and Customer Experience Magazine.
In recent years, much attention has been drawn to the potential for social media manipulation to disrupt democratic societies. The U.S. Intelligence Community’s 2023 Annual Threat Assessment predicts that “foreign states’ malign use of digital information … will become more pervasive, automated, targeted … and probably will outpace efforts to protect digital freedoms.” Chinese Communist Party (CCP) disinformation networks are known to have been active since 2019—exploiting political polarization, the COVID-19 pandemic, and other issues and events to support its soft power agenda.
Despite the growing body of publicly available technical evidence demonstrating the threat posed by the CCP’s social media manipulation efforts, there is currently a lack of policy enforcement to target commercial actors that benefit from their involvement in Chinese influence operations (IO). However, there are existing policy options that could address this issue.
To better address the business of Chinese IO, the U.S. government can work to topple the disinformation-for-hire industry through sanctions, enact platform transparency legislation to better document influence operations across social media platforms, and push the Federal Trade Commission (FTC) to act against deceptive business practices.
Commercial entities, from Chinese state-owned enterprises to Western AI companies, have had varying degrees of involvement in the business of Chinese influence campaigns. Chinese IO does not occur in a vacuum; it employs various tools and tactics to spread strategically favorable CCP content. For example, as reported in Meta’s Q1 2023 Adversarial Threat Report, Xi’an Tianwendian Network Technology built its own infrastructure for content dissemination by establishing a shell company and running a blog and website populated with plagiarized news articles, along with fake pages and accounts.
Chinese IO efforts have also utilized Western companies. Synthesia, a UK-based technology company, was used to create AI avatars and spread pro-CCP content via a fake news outlet called “Wolf News.” Another example is Shanghai Haixun, a Chinese public relations firm that pushed IO both online and offline when it financed two protests in Washington, D.C., in 2022 and then amplified content about those protests on Haixun-controlled social media accounts and fake-media websites.
The role of private companies in Chinese IO can be expected to expand as they provide sophisticated and tailor-made generative AI services to amplify reach and improve tradecraft. Though the Chinese IO machine is widely known to lack sophistication, it has continued to mature and adapt to technological developments, evidenced by its use of deepfakes and AI-generated content. Most recently, Microsoft’s Threat Analysis Center discovered that a recent Chinese IO campaign was using AI-generated images of popular U.S. symbols (such as the Statue of Liberty) to besmirch American democratic ideals. The use of generative AI will introduce new challenges to countering the business of Chinese IO, and the U.S. government needs to act fast to curtail it.
Our first recommendation is for the U.S. government to slowly dismantle the disinformation-for-hire industry by calling out the Chinese companies involved and imposing sanctions or financial costs on them. The Chinese government utilizes its gray propaganda machine to conduct overt influence operations through real media channels such as CGTN, Xinhua News, The Global Times, and others, and uses fake accounts to spread content from these media channels in covert influence operations.
With the attribution of IO to specific private entities such as Shanghai Haixun and others, the U.S. government could build a public case against covert Chinese IO and impose financial costs on Chinese companies, especially if they also provide legitimate products and/or services. The U.S. government has the jurisdiction to sanction private entities that directly pose a threat to U.S. national security through the Treasury Department’s Office of Foreign Assets Control (OFAC). There are currently OFAC sanctions in place for Chinese military companies, but not Chinese companies involved in influence operations targeting individuals in the United States.
There is also some potential historical precedent for sanctioning Chinese IO given that it is a type of malicious cyber activity; in 2021 the Biden Administration sanctioned Russian and Russian-affiliated entities involved in “malicious cyber-enabled activities” through an executive order. If the executive branch were to direct a policy focus towards known Chinese entities involved in malign covert influence operations, it could signal a first step toward naming and sanctioning Chinese companies.
Furthermore, if these entities were sanctioned, social media companies would be more inclined to remove the sanctioned companies’ content from their platforms to avoid liability risks. When the European Union imposed sanctions on the media outlets Russia Today and Sputnik after the recent Russian invasion of Ukraine, Facebook and TikTok complied and removed content from these outlets to avoid liability issues, though they had not taken sweeping action on overt state media before. The U.S. government could use this approach to identify Chinese private companies bolstering IO directed at the American public, name them, and impose transaction costs on them through sanctions.
Our second recommendation is to mandate that large social media companies, or Very Large Online Platforms (VLOPs), adhere to universal transparency reporting on influence operations and to external independent research requirements. Large social media platforms currently face the challenge of deplatforming influence operations at scale, which grants them the ability to choose what to report in the absence of government regulations. Regulation that mandates universal transparency reporting on IO would be a meaningful first step toward prodding platforms to devote greater attention to that challenge.
Implementing this recommendation could prove challenging given that transparency reporting currently operates on a voluntary basis, and policymakers’ efforts could be stymied by First Amendment and Section 230 protections. Recently, a bipartisan group of U.S. senators proposed the Platform Accountability and Transparency Act, under which social media platforms would have to comply with data access requests from external researchers; failure to comply would result in the removal of Section 230 immunity.
Initiatives such as these are essential to promoting platform transparency. If policymakers can mandate transparency reporting on influence operations for VLOPs, including specific parameters of interest (companies involved, number of inauthentic and authentic accounts in the network, generative AI content identified, malicious domains used, and political content/narratives), the U.S. government could gain further insight into the nature of IO at scale. A universal transparency effort could also empower the open source intelligence capabilities of intelligence agencies, lead to more principled moderation decisions, increase knowledge about malign actors’ use of generative AI, and enable external researchers to investigate all forms of IO.
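To make the idea concrete, here is a minimal sketch of what a structured, machine-readable IO transparency report could look like, written as a TypeScript interface. Every field name is illustrative, derived from the parameters listed above rather than from any proposed statute or actual platform schema.

```typescript
// Illustrative only: a minimal shape for a per-campaign IO transparency
// report, mirroring the parameters discussed above. All field names are
// hypothetical, not drawn from any legislation or platform policy.
interface InfluenceOperationReport {
  campaignId: string;               // platform-assigned identifier
  companiesInvolved: string[];      // attributed private entities, if any
  inauthenticAccountCount: number;  // fake accounts in the network
  authenticAccountCount: number;    // real accounts amplifying the content
  generativeAiContentCount: number; // items flagged as AI-generated
  maliciousDomains: string[];       // domains used to seed or launder content
  narratives: string[];             // political content/narrative summaries
  reportingPeriod: { start: string; end: string }; // ISO 8601 dates
}
```

A common schema along these lines is what would let regulators, researchers and intelligence analysts compare reports across platforms rather than reconciling each company's ad hoc disclosures.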
Our third and final recommendation is for the FTC to continue to pursue, and expand, its focus on both domestic and foreign companies engaging in deceptive business practices that bolster Chinese influence operations. In 2019, the FTC imposed a $2.5 million fine on Devumi, a company that engaged in social media fraud by selling fake indicators of influence (retweets, Twitter followers, etc.). Though this action was a helpful first step, it is unlikely to be a long-term deterrent for every company engaged in these harmful practices. The FTC should continue to pursue such cases and work with its international partners via its Office of International Affairs. The challenges of increased FTC involvement are considerable; the agency has been under-resourced and must choose its cases carefully to achieve maximum impact. However, a sharper FTC focus on the business of Chinese IO could reduce deceptive practices online, protect consumers against the harmful use of generative AI and other technologies, and raise the issue’s visibility among social media companies.
Holding private sector actors accountable for Chinese influence operations will not be a straightforward process for the U.S. government, given the need for transparency regulation for social media platforms, the political capital needed for the executive branch to sanction Chinese private entities involved in IO, and the FTC’s resource constraints. However, these policy options are necessary to impose costs and help dismantle the disinformation business behind Chinese influence operations.
Bilva Chandra is an adjunct technology and security policy fellow and Lev Navarre Chao was previously a policy analyst at the nonprofit, nonpartisan RAND Corporation.
Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.
Thousands of users across the Defense Department’s “fourth estate” will get their first chance to use modern collaboration tools on classified IT networks over the next several weeks as DoD continues its push to deploy Office 365 across the military departments, Defense agencies and field activities.
The Defense Information Systems Agency has been piloting the new service — called DOD365-Secret — since January. But officials are now fully deploying it for users across the 17 components of the Office of the Secretary of Defense (OSD), mainly in the Pentagon itself and in the nearby Mark Center in Alexandria, Virginia.
It’s a major shift, not only because it’s one of DoD’s first large-scale forays into cloud computing at the secret level, but also because it will consolidate an aging patchwork of tools senior leaders and their support staff have used to discuss classified information for years, said Danielle Metz, OSD’s chief information officer.
“Over the past 10 to 15 years, those who live on our classified environment to do their mission have had to really figure out how to stitch together some collaboration capabilities using really old-school chat services that aren’t very effective and aren’t well used across the board,” she said during an interview for Federal News Network’s On DoD. “Effectively what this does is it brings everybody together — we’re all on Teams and getting the same collaborative experience where we’re able to do chat, we’re able to do video, we’re able to collaborate on documents all at the same time, we’re able to store it in a cloud-based environment. None of that exists right now on the classified side, but we are at the precipice of having all of this at our fingertips.”
The implementation of DOD365-Secret across those 17 components will be one of the first major accomplishments for Metz’s new office, which marks its first anniversary this month. Before that, each of the OSD sub-offices, known as “principal staff assistants,” operated somewhat independently when it came to IT governance and planning. As a result, the networks they use are still fragmented and complex. Cloud helps solve part of that problem.
“A cloud-based approach allows us to look and feel and act as if we’re on the same environment, because we are — we’re in the cloud,” Metz said. “The networks are still going to be what the networks are, and there are some modernization activities associated with bringing those up to a better standardized and consistent digital experience. But I think we’re showcasing the importance of being able to all be on in the same environment to be able to work more jointly together to collaborate. It reduces the need for the workforce to figure out how to do it themselves — that’s what I don’t want them to do. I want them to use their creativity to actually do their job. Our job is to ensure that they have the right capabilities and tools to do their job better.”
Aside from modernizing and simplifying those networks, other near-term goals for Metz’s new office include updating end-user devices and laying the groundwork for other significant moves to the cloud. In the early days, the focus is on treating the collection of OSD offices as a single IT enterprise and building out common IT services.
“One of the things that we were able to do is to build that governance structure, create an identity, so that we can have a community of practice,” she said. “We’ve also identified a number of PSAs that are the pockets of excellence: forging ahead, failing fast, and pushing the envelope. They’ve been able to figure out what their business processes are to get to the technical makeup of moving to cloud adoption.”
Over the long term, Metz said, OSD will rely heavily on DoD’s new Joint Warfighting Cloud Computing (JWCC) contract, but those task orders will likely be organized along functional lines once the office is ready to lean in to supporting mission-specific IT needs. For now, the objective is to map out OSD’s cloud requirements and build the support services to help those offices migrate.
“We want to be able to do something similar to what the Army did with their Enterprise Cloud Management Agency: create a corporate playbook for OSD,” she said. “What I don’t want is for each individual PSA to fail on their own and do it in a vacuum. We want to be able to least standardize what we think the business processes are, to help inform the technical processes to determine which systems and workloads need to be moved to a targeted cloud environment … The other side of the coin that we’ve struggled with for OSD is that we don’t have an authorizing official (AO) for cloud, which makes it extremely difficult to do anything. And so we’re working on testing out and piloting AO as a service. That and some other basic elements need to be in place and available to OSD in order for us to even start moving the needle for cloud adoption.”
Jared Serbu is deputy editor of Federal News Network and reports on the Defense Department’s contracting, legislative, workforce and IT issues.
This CISA-NSA guidance reveals concerning gaps and deficits in the multifactor authentication and Single Sign-On industry and calls for vendors to make investments and take additional steps.
The National Security Agency and the Cybersecurity and Infrastructure Security Agency published a document on October 4, 2023, titled Identity and Access Management: Developer and Vendor Challenges. This new CISA-NSA IAM guidance focuses on the challenges and technology gaps that are limiting the adoption and secure use of multifactor authentication and Single Sign-On technologies within organizations.
The document was authored by a panel of public- and private-sector partners working across industries under the CISA-NSA-led Enduring Security Framework. The ESF is tasked with investigating risks to critical infrastructure and national security systems. The guidance builds on the panel’s previous report, Identity and Access Management Recommended Best Practices Guide for Administrators.
In an email interview with TechRepublic, Jake Williams, faculty member at IANS Research and former NSA offensive hacker, said, “The publication (it’s hard to call it guidance) highlights the challenges with comparing the features provided by vendors. CISA seems to be putting vendors on notice that they want vendors to be clear about what standards they do and don’t support in their products, especially when a vendor only supports portions of a given standard.”
IAM-related challenges and gaps affecting vendors and developers
The CISA-NSA document details the technical IAM challenges affecting developers and vendors. Looking specifically at the deployment of multifactor authentication and Single Sign-On, the report highlights several gaps.
Definitions and policy
According to CISA and the NSA, the definitions and policies covering the different variations of MFA are unclear and confusing. The report notes a need for clarity to drive interoperability and standardization across different types of MFA systems. The confusion impairs the ability of companies and developers to make well-informed decisions about which IAM solutions to integrate into their environments.
Lack of clarity regarding MFA security properties
The CISA-NSA report notes that vendors are not offering clear definitions when it comes to the level of security that different types of MFAs provide, as not all MFAs offer the same security.
For example, SMS-based MFA is more vulnerable than MFA backed by hardware key storage, and while some MFA methods resist phishing, such as those based on public key infrastructure or FIDO, others do not.
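To illustrate what phishing-resistant MFA looks like in practice, here is a minimal browser-side sketch of FIDO2/WebAuthn credential registration using the standard Web Authentication API. The relying-party identifiers and user fields are placeholders; a real deployment fetches the challenge from, and returns the attestation response to, a server for verification.

```typescript
// Minimal FIDO2/WebAuthn registration sketch (browser-side). The RP id,
// user fields and challenge handling are placeholders; in production the
// challenge is server-generated and the response is verified server-side.
async function registerFidoCredential(
  challenge: Uint8Array,
): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge, // server-generated, single-use random value
      rp: { id: "example.com", name: "Example RP" },
      user: {
        id: new TextEncoder().encode("user-1234"), // opaque user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { userVerification: "preferred" },
    },
  });
}
```

The phishing resistance comes from scoping: the resulting credential is cryptographically bound to the relying party’s origin, so a look-alike domain cannot exercise it, unlike an SMS code a user can be tricked into retyping anywhere.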
Lack of understanding leading to integration deficits
CISA and the NSA say the architectures for using open standards-based SSO with legacy applications are not always widely understood. The report calls for a shared, open-source repository of open standards-based modules and patterns that solve these integration challenges and aid adoption.
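As a sketch of what one such reusable module might contain, the function below builds the first leg of a standard OIDC authorization-code flow, the redirect a gateway could place in front of a legacy application. The issuer, client ID and redirect URI are placeholder values, not taken from any specific product.

```typescript
// Sketch of the first leg of an OIDC authorization-code flow, the kind
// of logic a reusable SSO module could supply in front of a legacy app.
// The issuer, client id and redirect URI below are placeholders.
function buildAuthorizationUrl(state: string, nonce: string): string {
  const params = new URLSearchParams({
    response_type: "code",            // authorization-code flow
    client_id: "legacy-app-client",   // registered with the IdP (placeholder)
    redirect_uri: "https://legacy.example.com/callback",
    scope: "openid profile email",
    state,                            // CSRF protection, echoed back by the IdP
    nonce,                            // binds the ID token to this request
  });
  return `https://idp.example.com/authorize?${params.toString()}`;
}
```

The rest of such a module would exchange the returned code for tokens and map identity claims onto whatever header or session mechanism the legacy application already understands, which is exactly the pattern the report says is too rarely shared.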
SSO features and pricing plans
SSO capabilities are often bundled with other high-end enterprise features, making them inaccessible to small and medium organizations. Addressing this would require vendors to offer organizational SSO in pricing plans that reach businesses of all sizes.
MFA governance and workers
Another main gap identified is maintaining the integrity of MFA governance over time as workers join or leave organizations. Available MFA solutions often lack support for this process, known as “credential lifecycle management,” the CISA-NSA report stated.
The overall confusion regarding MFA and SSO, the lack of specifics and standards, and the gaps in support and available technologies all affect the security of companies that must deploy IAM systems with the information and services available to them.
“An often-bewildering list of options is available to be combined in complicated ways to support diverse requirements,” the report noted. “Vendors could offer a set of predefined default configurations, that are pre-validated end to end for defined use cases.”
Key takeaways from the CISA-NSA’s IAM report
Williams told TechRepublic that the biggest takeaway from this new publication is that IAM is extremely complex.
“There’s little for most organizations to do themselves,” Williams said, referring to the new CISA-NSA guidance. “This (document) is targeted at vendors and will certainly be a welcome change for CISOs trying to perform apples-to-apples comparisons of products.”
Deploying hardware security modules
Williams said another key takeaway is the acknowledgment that some applications will require users to implement hardware security modules to achieve acceptable security. HSMs are usually plug-in cards or external devices that connect to computers or other devices. These security devices protect cryptographic keys, perform encryption and decryption and create and verify digital signatures. HSMs are considered a robust authentication technology, typically used by banks, financial institutions, healthcare providers, government agencies and online retailers.
“In many deployment contexts, HSMs can protect the keys from disclosure in a system memory dump,” Williams said. “This is what led to highly sensitive keys being stolen from Microsoft by Chinese threat actors, ultimately leading to the compromise of State Department email.”
“CISA raises this in the context of usability vs. security, but it’s worth noting that nothing short of an HSM will adequately meet many high-security requirements for key management,” Williams warned.
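The sketch below illustrates the property Williams describes, using a deliberately hypothetical PKCS#11-style interface (HsmSession and KeyHandle are invented names for this illustration, not a real binding): the application only ever holds an opaque handle, and signing happens inside the device.

```typescript
// Hypothetical PKCS#11-style interface for illustration only: the
// application holds an opaque key handle, signing happens inside the
// HSM, and the private key never reaches host memory (so it cannot
// appear in a system memory dump).
type KeyHandle = { readonly id: string }; // opaque reference, not the key itself

interface HsmSession {
  generateKeyPair(label: string): Promise<KeyHandle>; // key material stays on-device
  sign(key: KeyHandle, data: Uint8Array): Promise<Uint8Array>;
  verify(key: KeyHandle, data: Uint8Array, sig: Uint8Array): Promise<boolean>;
}

async function signWithHsm(
  session: HsmSession,
  payload: Uint8Array,
): Promise<Uint8Array> {
  const key = await session.generateKeyPair("token-signing-key");
  return session.sign(key, payload); // computed entirely inside the HSM
}
```

The design choice to return only a handle is the whole point: software that never possesses the key cannot leak it, which is why HSMs set the bar for high-assurance key management.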
Conclusions and key recommendations for vendors
The CISA-NSA document ends with a detailed section of key recommendations for vendors, which, as Williams said, “puts them on notice” about the issues they need to address. Williams highlighted the need to standardize terminology so it’s clear what a vendor supports.
Chad McDonald, chief information security officer of Radiant Logic, also talked to TechRepublic via email and agreed with Williams. Radiant Logic is a U.S.-based company that focuses on solutions for identity data unification and integration, helping organizations manage, use and govern identity data.
“Modern-day workforce authentication can no longer fit one certain mold,” McDonald said. “Enterprises, especially those with employees coming from various networks and locations, require tools that allow for complex provisioning and do not limit users in their access to needed resources.”
For this to happen, a collaborative approach amongst all solutions is essential, added McDonald. “Several of CISA’s recommendations for vendors and developers not only push for a collaborative approach but are incredibly feasible and actionable.”
McDonald said the industry would welcome standard MFA terminology that allows equitable comparison of products, the prioritization of user-friendly MFA solutions on both mobile and desktop platforms to drive wider adoption, and broader support for and development of identity standards in the enterprise ecosystem.
Recommendations for vendors
Create standard MFA terminology
To address ambiguous MFA terminology, the report recommended creating standard MFA terminology with clear, interoperable and standardized definitions and policies, allowing organizations to make value comparisons and integrate these solutions into their environments.
Create phishing-resistant authenticators and then standardize their adoption
In response to the lack of clarity about the security properties of certain MFA implementations, CISA and the NSA recommended additional investment by the vendor community in phishing-resistant authenticators that provide greater defense against sophisticated attacks.
The report also concludes that simplifying and standardizing the security properties of MFA and phishing-resistant authenticators, including their form factors embedded into operating systems, “would greatly enhance the market.” CISA and NSA called for more investment to support high-assurance MFA implementations for enterprise use. These investments should be designed in a user-friendly flow, on both mobile and desktop platforms, to promote higher MFA adoption.
Develop more secure enrollment tooling
Regarding governance and self-enrollment, the report said it’s necessary to develop more secure enrollment tooling to support the complex provisioning needs of large organizations. These tools should also automatically discover and purge enrolled MFA authenticators that have not been used for a set period of time or whose usage is anomalous.
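A minimal sketch of that purge logic, with an invented Authenticator shape and revoke callback rather than any vendor’s actual API, might look like this:

```typescript
// Illustrative sketch of the purge logic described above: enrolled
// authenticators unused past a cutoff are revoked. The Authenticator
// shape and revoke() callback are hypothetical, not from a vendor API.
interface Authenticator {
  id: string;
  userId: string;
  lastUsed: Date; // updated on each successful authentication
}

async function purgeStaleAuthenticators(
  authenticators: Authenticator[],
  revoke: (id: string) => Promise<void>,
  maxIdleDays = 90, // assumed policy window, not from the report
): Promise<string[]> {
  const cutoff = Date.now() - maxIdleDays * 24 * 60 * 60 * 1000;
  const stale = authenticators.filter(a => a.lastUsed.getTime() < cutoff);
  await Promise.all(stale.map(a => revoke(a.id)));
  return stale.map(a => a.id); // purged ids, e.g. for audit logging
}
```

In practice this kind of job would run on a schedule and feed an audit trail, so that departed employees’ authenticators do not linger as quiet entry points.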
CISA and the NSA stated that vendors “have a real opportunity to lead the industry and build trust with product consumers” through additional investments to bring such phishing-resistant authenticators to more use cases, and that simplifying and further standardizing their adoption, “including in form factors embedded into operating systems, would greatly enhance the market.”