Europe’s new Digital Services Act opens a new era in access to justice for disinformation and online hate. Americans should take a close look.
When victims face disinformation or online hate speech today, they enjoy little hope of redress. They are more likely to try to look away – closing their browser windows or leaving online spaces altogether – than to seek justice. Even when faced with illegal content such as defamation or harassment – situations in which legal grounds to go to court exist – few possess the financial or emotional resources to hire a lawyer and put pressure on a powerful platform.
Europe’s Digital Services Act (DSA) changes this equation.
It represents the first sustained attempt to modernize rules for online content in a democratic and rule-of-law environment. While preserving free speech, the DSA reinforces access to justice in content moderation decisions, imposing on platforms a variety of scaffolded transparency and accountability requirements.
Among other changes, platforms must open an “internal complaint handling system” where users can submit complaints about content moderation decisions, both in cases of over-moderation (take downs) and under-moderation (when platforms decide not to act on reported content). When complaints contain sufficient grounds, the platform will need to reverse its decision.
While many platforms already have procedures for reporting content and appealing content moderation decisions, their formulation, scope, and accountability vary, and our research shows that all have ample room for improvement. Disinformation and hate speech proliferate, while it is prohibitively challenging for people to appeal wrongful content moderation decisions.
Consider LGBTQ+ hate. In its most recent report on ‘LGBTI-phobies’, the French NGO SOS Homophobie found once again that online environments remain the most prevalent spaces for anti-LGBTQ+ sentiment. At the same time, the NGO notes a decline in reporting, which it attributes to victims’ and witnesses’ frustration with ineffective moderation and the insufficient resources devoted to answering their complaints.
Content moderation is particularly lacking in languages other than English. During her testimony before the US Congress, Facebook whistle-blower Frances Haugen revealed that 87% of misinformation spending by Facebook is on English-language content, but only about 9% of its users are English speakers. Even globally spoken languages like Spanish are neglected. In March 2021, a coalition of organizations launched the campaign #YaBastaFacebook, calling on Facebook to better protect Spanish speakers.
Facebook isn’t alone. Spanish YouTube and TikTok influencer Naim Darrechi recently claimed in a viral video that he had legally become a woman by filling out a simple form and that he could change sex every six months to increase his welfare payments. This is false: Spanish law offers no such right. Even now, this misogynistic and transphobic disinformation remains available online.
In such cases – or when initial reporting and appeals processes fail – the DSA will provide the option of “out-of-court dispute settlement.” Independent bodies in different European countries will be established to deal with content moderation disputes. This option will be especially useful for dealing with ‘lawful but awful’ content such as LGBTQ+ hate, much of which would not have grounds to be brought to court under national laws. At the same time, the independent bodies’ decisions will not be legally binding, so the justice system can still have the final say.
Another DSA innovation allows representation. Specialized organizations will be able to exercise the rights of individuals under the DSA on their behalf. This will be critical since victims and witnesses of hateful content may be hesitant to appeal content moderation decisions given the emotional or traumatic engagement they have suffered.
The DSA is not just about Europe. It’s reasonable to expect that it will raise the bar globally: once a platform improves redress mechanisms in Europe, it would be logical to harmonize these policies and implement them for all of its users.
The DSA is set to enter into force in 2024 for all online platforms. It will come into force even earlier for so-called Very Large Online Platforms, those with more than 45 million European users. This gives platforms plenty of time to design and implement accessible, intuitive appeals systems – and to begin offering Europeans real access to justice online.
The purpose of this Perspective is to provide policymakers with an overview of the deepfake threat. It first reviews the technology undergirding deepfakes and associated artificial intelligence (AI)–driven technologies that provide the foundation for deepfake videos, voice cloning, deepfake images, and generative text. It highlights the threats deepfakes pose, as well as factors that could mitigate such threats. The paper then reviews the ongoing efforts to detect and counter deepfakes and concludes with an overview of recommendations for policymakers. This Perspective is based on a review of published literature on deepfake and AI-driven disinformation technologies. Moreover, leading experts in the disinformation field contributed valuable insights that helped shape the work.
During the four years between the 2016 and 2020 presidential elections, many cyber experts and pundits were concerned that Russia or other nation-states would evolve their ability to manipulate U.S. voter perceptions by migrating from social media memes to deepfakes. That emerging technology uses computer-altered images and video footage to denigrate candidates, hurting their chances at the polls.
The RAND Corporation has released a new report, “Artificial Intelligence, Deepfakes, and Disinformation: A Primer,” that concludes “the potential for havoc is yet to be realized. For example, some commentators expressed confidence that the 2020 election would be targeted and potentially upended by a deepfake video. Although the deepfakes did not come, that does not eliminate the risk for future elections.”
The report identifies several reasons why deepfakes have not yet lived up to their threatening reputation, in particular that “well-crafted deepfakes require high-end computing resources, time, money and skill.”
Matthew Stamm, Ph.D., associate professor of electrical and computer engineering at Drexel University, has been working in the field of detecting fake media for about 15 years, and has contributed to the Defense Advanced Research Projects Agency’s programs on detection algorithms. He agrees that making outstanding deepfakes is not easy or cheap, but isn’t sure how much that matters in regard to their effectiveness.
“Over time [they] will improve,” he says. “But we also have to contend with another factor—we are predisposed to believe something that confirms our prior beliefs.”
Stamm cites as an example the doctored Nancy Pelosi video that ran rampant over social media during the summer of 2020, which made it appear that her speech was slurred—Speaker Pelosi is a teetotaler. It was not a sophisticated deepfake, he says—and he helped debunk it. The sound was just slowed down a little bit to make it seem as if she was impaired. But no matter how often it’s disproved by fact checkers or flagged by social media, plenty of people accept it as “fact.”
“You can often get the same result by using ‘cheapfakes,’” Stamm said. “You have to think about what’s your goal? You can spend a lot of time and money to create a very good deepfake that will last a long time, but is that your goal?”
While videos have, to date, garnered the most attention, the RAND report identifies other types of deepfakes that are easier to execute and have already shown their potential to cause harm. Voice cloning is one technique.
“In one example, the CEO of a UK-based energy firm reported receiving a phone call from someone who sounded like his boss at a parent company. At the instruction of the voice on the phone, which was allegedly the output of voice-cloning software, the CEO executed a wire transfer of €220,000—approximately $243,000—to the bank account of a Hungarian supplier.”
Another technique is creating deepfake images—the headshots that people use everywhere on social media. The report outlines the case of such an image placed on LinkedIn, “one that was part of a state-run espionage operation.” The deepfake was discovered in 2019, when it “was connected to a small but influential network of accounts, which included an official in the Trump administration who was in office at the time of the incident.”
The value of creating such deepfake images is that they aren’t detected by a reverse image search that looks for matches to original, verified images, the report observes.
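To make that mechanism concrete, here is a minimal sketch (assuming the open-source Pillow and imagehash libraries, and hypothetical file names) of how reverse image search can be approximated with perceptual hashes: verified photos are indexed, and a query image only returns matches whose hashes are close to one already in the index. A face generated from scratch typically has no close neighbor, so the search comes back empty.

```python
from PIL import Image
import imagehash  # pip install pillow imagehash

# Hypothetical index of verified photos; real systems index billions of images.
verified_photos = ["press_photo_1.jpg", "press_photo_2.jpg"]
index = {path: imagehash.phash(Image.open(path)) for path in verified_photos}

def reverse_search(query_path: str, max_distance: int = 10):
    """Return indexed photos whose perceptual hash is within max_distance bits."""
    query_hash = imagehash.phash(Image.open(query_path))
    return [path for path, h in index.items() if query_hash - h <= max_distance]

# A GAN-generated headshot ("fake_profile.jpg", hypothetical) usually returns an
# empty list: there is no original photograph for it to match against.
print(reverse_search("fake_profile.jpg"))
```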
The fourth form of deepfake identified in the report is generative text—using natural language computer models and AI to generate fake, but human-like, text. This would be valuable for operating social media bot networks that would not need human operators to create content. It also could be used to mass-produce fake news stories that would flood social media networks.
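As a rough illustration of how little infrastructure generative text requires (a sketch using the openly available Hugging Face transformers library and the small GPT-2 model, not any tool described in the RAND report), a few lines of code are enough to produce fluent, human-like posts on demand:

```python
from transformers import pipeline, set_seed  # pip install transformers torch

# Small, freely downloadable model used purely for illustration; larger models
# produce far more convincing text.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sketch reproducible

prompt = "Breaking news from the city council meeting:"  # hypothetical prompt
outputs = generator(prompt, max_length=60, num_return_sequences=3, do_sample=True)
for out in outputs:
    print(out["generated_text"], "\n---")
```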
The report lists four key ways that deepfakes can be weaponized by adversaries or bad actors—manipulating elections; exacerbating social divisions; weakening trust in government institutions and authorities; and undermining journalism and other trustworthy information sources.
“These are large-scale social threats, but others are economic,” Stamm notes. “There have already been [cases] of audio deepfakes. There are a lot of highly criminal opportunities” that any of these deepfake methods could be used for.
The report identifies several approaches to mitigate the threat that deepfakes of all kinds pose to information integrity: detection, provenance, regulatory initiatives, open-source intelligence techniques, journalistic approaches and citizen media literacy.
Detection often gets the most attention, and most of those efforts are aimed at building automated tools. DARPA has invested heavily in this area, first with its Media Forensics program, which ended in 2021, and currently with its Semantic Forensics program.
But this is where the parallels to other kinds of escalation apply most clearly. Think of cybersecurity’s constant battle to get ahead of hostile actors.
“Although detection capabilities have significantly improved over the past several years, so has the development of deepfake videos. The result is an arms race, which is decidedly in favor of those creating the deepfake content,” the report states. “One challenge is that as AI programs learn the critical cues associated with deepfake video content, those lessons are quickly absorbed into the creation of new deepfake content.”
Provenance is already being used in some ways. If you have ever noticed a small “i” enclosed in a circle on an image, it means the photographer used a secure mode on their smartphone to take the picture, which embeds critical information into the digital image’s metadata. “Although this technology is not a panacea for deepfakes, it does provide a way for the viewers of a photograph (or a video or recording) to gain confidence that an image has not been synthetically altered,” the report observes, but “the technology only works if it is enabled at the time the photo is taken, so promoting effective adoption of the technology will be critical.”
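Conceptually, and greatly simplified relative to real content-credential standards, the provenance idea works by having the capture device sign a fingerprint of the image at the moment it is taken, so that any later alteration breaks the check. A minimal sketch of that idea, using a hypothetical device key and symmetric signing for brevity (real schemes use public-key signatures embedded in the image’s metadata):

```python
import hashlib, hmac

SECRET_KEY = b"device-private-key"  # hypothetical; real schemes use asymmetric keys

def sign_at_capture(image_bytes: bytes) -> str:
    """Camera-side: produce a signed fingerprint stored alongside the image."""
    return hmac.new(SECRET_KEY, hashlib.sha256(image_bytes).digest(), "sha256").hexdigest()

def verify(image_bytes: bytes, signature: str) -> bool:
    """Viewer-side: confirm the pixels are unchanged since capture."""
    return hmac.compare_digest(sign_at_capture(image_bytes), signature)

original = b"...raw image bytes..."      # placeholder content
tag = sign_at_capture(original)
print(verify(original, tag))             # True: untouched image
print(verify(original + b"edit", tag))   # False: any alteration breaks the check
```

As the report notes, the scheme only helps if signing happens at capture time; an unsigned image proves nothing either way.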
Stamm suggests that the Department of Defense and the Intelligence Community should consider wargaming different scenarios that could be brought about by successful deepfakes. The report suggests the same thing, recommending that government “conduct wargames and identify deterrence strategies that could influence the decision-making of foreign adversaries.”
“Dealing with fake information is one of the big challenges of the 21st century,” Stamm says. “In the 20th century it was cryptography, but now we live in a world where information sources are decentralized … We may not trust, but we still need to consume.”
In a June 9 column in the Washington Post, longtime Federal Insider columnist Joe Davidson put it plainly: Uncle Sam isn’t a trustworthy dude. The column appeared just about a week before the country marked the 50th anniversary of the Watergate break-in, a scandal that forced the resignation of a president, that is taught in schools and universities—even in Florida and Texas, at least for now—that affected our politics and our governmental system and that shaped citizen attitudes towards government ever since. Watergate represented a demarcation in the nation’s history—between a time in which citizens trusted their government and a period in which trust was broken. And it has yet to be restored. Davidson’s column focused on a report from the Pew Research Center released that week in early June.
That report features a chart that tracks the decline of trust between citizens and government. The graphic begins in 1958, near the end of the Eisenhower administration—Eisenhower’s vice president was Richard Nixon. Then, 73% of Americans and majorities from both parties said they trusted the government to do what is right “just about always” or “most of the time.” Trust peaked at 77% in 1964, shortly after Lyndon Johnson ascended to the presidency, after the assassination of John Kennedy.
And then both Martin Luther King and Robert Kennedy were assassinated and Johnson declined to run in 1968, the Watergate break-in occurred in 1972, Nixon resigned under threat of impeachment in 1974, and the Fall of Saigon ended the Vietnam War in 1975.
By the end of that year, after Nixon had flown off to exile in San Clemente, California, just 36% of Americans said they trusted their government. The recent report released by Pew showed that public trust has fallen to a “disturbing” and “near historic low” of just 20%.
Alarms have sounded from multiple good-government groups and leaders concerned about this decline. Trust in government is higher not only when government works better, but also when people have a better understanding of what government is doing, according to Teresa Gerton, President of the National Academy of Public Administration. President Biden’s Management Agenda addresses the trust issue in its focus on improving citizen services as well as government performance.
Max Stier, CEO of the Partnership for Public Service, believes the negative slide in trust can be turned around if the government communicates better and promotes its successes and “the great work of career civil servants,” according to Davidson’s column.
Robert Shea, a former OMB official and now a Grant Thornton executive, pointed to evidence-based policymaking as a key factor in rebuilding trust when speaking with the Technology Policy Institute.
For Rajiv Desai of 3Di, writing in Route Fifty, the key ingredients are transparency, efficiency and accountability, or TEA. His argument resonated with me because it contained an acronym, which we all know is at the heart of the work of government. So who is right? Or are they all right? If only some are, what should be done to rebuild and restore? Let’s explore the topic a bit more.
On July 5th, another venerable public polling organization, Gallup, reported that just 27% of Americans expressed confidence in their institutions—the lowest level of trust since the questions were first asked over 50 years ago. And that lack of confidence was widespread across U.S. institutions—Congress, the presidency, the Supreme Court, the military, business, police, media, churches, schools and more—14 institutions in all. The average confidence level—27%, as noted above—has declined from 46% in 1989. Americans also report having more animosity towards one another than they used to.
In 2019 political scientists Nathan Kalmoe and Lilliana Mason found—based on the Cooperative Congressional Election Study—that nearly half of registered voters think that the opposing party is not just bad but “downright evil”; nearly a quarter concur that, if that party’s members are “going to behave badly, they should be treated like animals.” So the issue of “trust” is not reserved for just government. Levels of trust in this country—in our institutions, in our politics and in one another—are all in decline. And while opinions of individual institutions do vary among groups, the overall distrust of institutions is universal, with little variation by gender, age, race, education or even party.
Explanations are hard to come by. One factor may be economic stagnation. Social scientists tell us that 90% of Americans born in 1940 could expect to make more than their parents; for those born in the 1980’s, the rate has dropped to only 50%. Across the developed world, the poorer and less educated you are, the less trusting you tend to be. Another factor identified by Professor Benjamin Ho of Vassar College is an increase in ethnic diversity. In the U.S., he suggests, the prospect of a nonwhite majority in a country that once enslaved Black people may be intensifying tribalism. Tribalism can promote trust internally and mistrust externally—high trust within certain groups or clans and very low trust among them. And technology has made it easier for media outlets to cater to niche audiences. Doesn’t it make sense in fact to place more trust in news and news sources that confirm what you already “know”?
The partisan rancor that is so common in our country today actually makes it much harder to measure trust. Survey questions that have been part of surveys for decades (e.g., an approval rating of the president) seem much less useful nowadays, when public sentiment hinges almost entirely on partisanship. One has to wonder how trustworthy our indicators of trust are. Finally, in another take on the impact of technology, author Rachel Botsman argues that advances in IT have created a new paradigm—that of “distributed trust”. In Who Can You Trust?, she suggests that the old hierarchical model in which trust was transmitted from institution to individual—think the media: CBS Evening News with Walter Cronkite—has been replaced by a lateral model in which trust flows from individual to individual.
So what can we learn from the varied research on declining trust in government and how it might be restored? My conclusion is that while there are many good reasons to make government work better and to focus on improving citizen service—and we should continue to strive to do so—there isn’t much evidence that we should count on any of these initiatives to change the decline noted by Pew and Gallup.
None of the nearly 80 organizations that the Cyber Safety Review Board canvassed for its first report, including many federal agencies, used software bills of materials to find vulnerable Log4j deployments.
The CSRB found that not every organization even had software bills of materials (SBOMs), machine-readable inventories of software components and how they relate to one another, because data formats haven’t been standardized.
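For readers unfamiliar with the format, the sketch below shows the kind of automated check an SBOM makes possible: given a CycloneDX-style JSON inventory, a few lines of code can flag log4j-core components that look unpatched. The file name sbom.json and the version cutoff are illustrative assumptions, not drawn from the CSRB report; real scanners consult vulnerability databases rather than a hard-coded rule.

```python
import json

def vulnerable_log4j(sbom_path: str):
    """Scan a CycloneDX-style SBOM for log4j-core components that look unpatched.

    Simplified rule of thumb for illustration: flag 2.x versions below 2.17.
    """
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):
        name = component.get("name", "")
        version = component.get("version", "")
        parts = version.split(".")
        if name == "log4j-core" and parts and parts[0] == "2":
            try:
                minor = int(parts[1])
            except (IndexError, ValueError):
                continue
            if minor < 17:
                findings.append((name, version, component.get("purl", "")))
    return findings

if __name__ == "__main__":
    for name, version, purl in vulnerable_log4j("sbom.json"):  # hypothetical path
        print(f"Potentially vulnerable: {name} {version} ({purl})")
```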
The Department of Homeland Security tapped CSRB to review the U.S. response to the Log4j vulnerability, one of the most serious to date, publicly disclosed on Dec. 10, 2021. In its report released Thursday, CSRB recommended that SBOM tooling and adoptability be improved to support faster software supply chain vulnerability response.
“Generally our observation is that the entities who are using open source software really should be looking to help support that community directly in getting them access to training programs, developing the tools that will make things like SBOMs adoptable and being able to measure the efficacy of the security of objects,” said Heather Adkins, CSRB deputy chair, on a press call. “And we think that’s a whole-of-community approach that’s going to be needed.”
In the meantime, developers should generate and ship SBOMs with their software, with plans to upgrade tooling and processes as they become available, according to the report. The recommendation aligns with the Cybersecurity and Infrastructure Security Agency’s May solicitation for open-source software libraries and other tools foundational to SBOMs, which many federal contractors hope will become the standard for proving government-mandated compliance with the Secure Software Development Framework.
CSRB recommended that agencies prepare to “champion and adopt” SBOMs as the technology matures, and that the Office of Management and Budget, the Office of the National Cyber Director, and CISA consider issuing guidance on using software inventories and metadata to improve vulnerability detection and response.
The report further recommends that the government require software transparency from vendors, with OMB and the Federal Acquisition Regulatory Council leading the way by discouraging the use of products without provenance or dependency information. OMB and the FAR Council should issue procurement requirements and guidance, and make automation and tooling investments, that set expectations for baseline SBOM information and an implementation timeframe, according to CSRB.
Board officials maintained that the Log4j event is not over, with vulnerable versions of the free, Java-based logging framework likely to remain in systems for a decade — offering even unsophisticated attackers access. Many companies can’t quickly identify where their vulnerable code is, said Robert Silvers, CSRB chair.
“The rate at which cyber incidents occur is rapidly increasing,” said Homeland Security Secretary Alejandro Mayorkas. “And we’re at a pivotal moment for the department and our public and private sector partners to achieve a more secure cyber ecosystem.”
The Navy is deploying a shipboard metal-producing 3D printer to test during RIMPAC exercises.
The advantages of adopting 3D printing across the federal government could be huge, although the technology has been slow to reach that potential. That could finally be changing, as the Navy has deployed onboard a warship the first 3D printer capable of printing reliable metal parts while underway at sea.
The government has been interested in 3D printing for a very long time. Back in 2015, I wrote an explainer-type story for Nextgov where experts talked about the many advantages that government would eventually gain from investing in 3D printing technology. But while those early printers were extremely interesting, they had limited use because of the substrate they used to create physical objects.
Early 3D printers only used a plastic-based substrate, which was generally fed into them on long spools, which would be melted and then repurposed into whatever object the creator wanted. The early printers were capable of producing some amazingly advanced projects, with some of them able to accept computer-aided design plan files for extreme precision. However, because of the substrate used, the final product was made of plastic, so it was of limited use. Yes, you could print a small combustion engine, a gear for a machine or a work of art, but trying to use the finished product in any practical way would probably cause it to melt or break.
The killer application for 3D printing in government came not from finding something useful that the government could print in plastic, but from improving the printers to be able to handle more durable raw materials, especially metals. Called additive manufacturing, certain 3D printers now allow the government to print products using everything from metals to composite fibers to concrete.
I recently hosted a roundtable discussion with experts in the field of additive manufacturing. They explained the many advances that 3D printing has made over the years, and how those advances are opening up new possibilities for government service.
“We have a system that prints stainless steel, metal tools and copper,” said Tony Higgins, Federal Leader for Markforged, one of the new leaders in additive manufacturing. “A lot of our customers are using this type of technology to create functional tools, custom parts, work holdings and fixtures.”
So far, the Army has been one of the biggest proponents of 3D printing for complex construction jobs. “We have a number of different systems that can print everything from concrete, to foams, to other types of materials,” said Megan Kreider, Mechanical Engineer for the U.S. Army Engineer Research and Development Center at the Construction Engineering Research Laboratory. Kreider recently worked on an Army project in which an entire bridge was constructed of 3D-printed parts made from concrete and other heavy materials.
“You have to go through a structural engineer, and they outline what the reinforcement needs to be, how it’s going to be printed, and it’s highly interdisciplinary,” Kreider said. But after that, the parts are printed, normally right on the job site, and then fitted together to form the structure.
On land, 3D printing structures in the military can save time and money for big projects. But at sea, having the ability to print a critical part on demand might be the difference between a ship being able to continue its mission and having to return to port for repairs. That is why the Navy has been so interested in additive manufacturing as it evolved from simpler 3D printing.
Last year, the Navy installed a liquid metal printer manufactured by Xerox at the Naval Postgraduate School in Monterey, California. Called the ElemX Liquid Metal Additive Manufacturing machine, it is being used to test out manufacturing during deployments, and to reduce the long supply chains needed to support ships at sea.
Apparently, the testing on land went well, as the Navy announced that an additive manufacturing 3D printer is now installed on the Wasp-class amphibious assault ship USS Essex. The printer is being tested during the massive Rim of the Pacific—or RIMPAC—2022 combat exercises taking place over the summer. The Essex is the first ship to participate in the initial testing and evaluation of an additive manufacturing 3D printer during underway conditions at sea.
During RIMPAC, the 3D printer on the Essex will be tasked with printing many of the parts that Navy ships routinely require while on maneuvers. This includes training sailors how to quickly manufacture heat sinks, housings, bleed air valves, fuel adapters, valve covers and much more. The printer on the Essex can manufacture metal parts as large as 10-by-10 inches.
According to Lt. Cmdr. Nicolas Batista, the Aircraft Intermediate Maintenance Department (AIMD) officer aboard Essex, “Additive manufacturing has become a priority and it’s evident that it will provide a greater posture in warfighting efforts across the fleet, and will enhance expeditionary maintenance that contributes to our surface competitive edge.”
If the printer performs well during RIMPAC, the Navy could expand the role of those devices. Batista said in a Navy press release that the “Commander Naval Air Force, U.S. Pacific Fleet and Commander, Naval Air Systems Command have also initiated efforts to establish an AIMD work center, solely designed for the additive manufacturing concept, and are striving towards the capability of fabricating needed aircraft parts with a 3D printer.”
So if all goes well for the 3D printer during RIMPAC, we may soon see more heavy metal manufacturing on the high seas, and more complex and larger parts constructed by sailors without any assistance or materials from back on land.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys
The agency does not have the necessary data to evaluate its IT staffing practices, according to the watchdog.
The U.S. Department of State has implemented some leading recruitment and retention practices to address its IT workforce challenges, but more is needed to strengthen and measure the effectiveness of these steps, according to a watchdog report released to the general public on July 12.
The U.S. Government Accountability Office examined 15 recruitment and retention best practices for the agency’s IT workforce and found that State has fully implemented one, partially implemented 11, and failed to implement the remaining three. The leading practices that GAO analyzed focused on issues surrounding strategic workforce planning, talent acquisition, talent management, engaging employees and employee morale.
As GAO noted, State depends on skilled IT staff in order to counter cyber threats and maintain operations across four different agency components—Foreign Service, Civil Service, contractors and locally employed staff. A lack of efficient tools for measuring the success of its current practices, however, is having a detrimental impact on State’s ability to attract and retain skilled IT professionals across these four components.
“State’s policy calls for access to timely and accurate data to set performance metrics and for a plan to monitor and evaluate progress toward achieving goals,” GAO said in the report. “However, State does not have such IT workforce data needed to set performance metrics, nor does it have a plan to monitor and evaluate progress toward achieving its goals. Consequently, State does not know if its actions are improving its recruitment and retention, and achieving its goals.”
The report identified 10 specific challenges affecting State’s ability to recruit and retain IT staff, including: low entry-level pay with no recruitment incentives, low entry-level pay raises with limited retention incentives, a lengthy hiring and security clearance process, a narrowly-focused marketing and recruitment strategy, a lack of detailed information in job postings, inaccurate job postings and the performance of non-IT work, and limited opportunities for advancement.
“For example, State has collected training performance data, but has not recruited continuously year-round for most of its IT positions or regularly assessed staffing needs,” the report noted. “If State increases its focus on recruitment and retention practices, the department can better compete with other employers for critical IT staff with key skills and abilities.”
Citing the sensitivity of the analysis, the GAO’s public-facing report did not include three of the recruitment and retention challenges that were identified. In response to State officials’ request, GAO also removed its evaluation of IT workforce vacancies and the impact of those vacancies on the agency from the public report.
GAO offered 16 recommendations for State to improve its IT workforce management and strengthen its recruitment and retention practices. These recommendations included updating competency and staffing monitoring tools, obtaining and tracking data to better align employee expectations with agency goals and developing strategies for recruitment and retention incentives.
While State agreed with the majority of GAO’s recommendations, it disagreed with the suggestion that the agency expand the number of Foreign Service IT positions available to external applicants year-round, citing the fact that information management technical specialist positions, for example, are more specialized roles and are only posted when there are openings.
In comments to GAO, State said that it is working to address many of the other deficiencies identified in the report. State noted, for example, that it is drafting an IT strategic plan for the Civil Service and the Foreign Service, and that it also plans to design and implement a Foreign Service Applicant Tracking System to determine the effectiveness of the department’s recruitment efforts.
New research in internet of things (IoT) technologies may help with national security issues, but it will also offer a wealth of benefits for business.
The Department of Homeland Security’s Sensors and Platforms Technology Center (SP-TC) studies and improves applications for IoT. While much of the focus centers on public safety and natural disasters, early warning systems and sensor technology also have applications in the private sector.
“We try to take a holistic approach to R&D, where we look at the technical, commercial and policy aspects to maximize the impact and the benefit of our investments,” SP-TC Director Jeff Booth said on Federal Monthly Insights — IoT Security.
One prime example of a public-private partnership in IoT research is the Capital One Arena. In the wake of COVID-19 ventilation issues, SP-TC worked with the arena’s management to install sensors that tracked changes in air flow and air quality that occurred during different sports and entertainment events. The team developed a three-dimensional digital model of the arena to analyze and improve air quality. While air quality analysis had a direct use for arena management, DHS also had an interest in using the technology for public safety. Sensors installed in the building give advance information about any public safety crisis to building security and law enforcement. That data could include everything from where a fire might be to the presence of an active shooter.
“They can provide that before the Blue Force arrives,” Booth told Federal News Network’s Jared Serbu on the Federal Drive with Tom Temin.
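As a schematic example of how such a sensor network can feed building security (a simplified sketch with invented sensor names and thresholds, not a description of DHS’s actual system), streaming readings can be compared against baselines so that an abrupt anomaly in one zone raises an alert before responders arrive:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str   # e.g. "concourse-north-07" (hypothetical)
    zone: str
    co2_ppm: float
    pm25: float      # fine particulate matter, micrograms per cubic meter

# Invented thresholds for illustration only.
THRESHOLDS = {"co2_ppm": 1500.0, "pm25": 35.0}

def alerts(readings):
    """Yield a human-readable alert for any reading that exceeds its threshold."""
    for r in readings:
        if r.co2_ppm > THRESHOLDS["co2_ppm"]:
            yield f"{r.zone}: CO2 spike ({r.co2_ppm:.0f} ppm) from {r.sensor_id}"
        if r.pm25 > THRESHOLDS["pm25"]:
            yield f"{r.zone}: particulate spike ({r.pm25:.0f} ug/m3) from {r.sensor_id}"

sample = [Reading("concourse-north-07", "North Concourse", 900, 12),
          Reading("suite-level-02", "Suite Level", 2100, 80)]
for alert in alerts(sample):
    print(alert)
```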
Booth said the Capital One Arena offers a model of what SP-TC wants to achieve. Providing the arena with IoT technology to track air quality and public safety events benefits both law enforcement and business, enough to provide needed cooperation on funding.
“The federal government can’t afford to fund everything, it’s just not practical,” he said.
Efforts by DHS to recover costs by applying its research in IoT to operations and public safety have involved numerous partners in several fields.
“In addition to the Cap One arena test that I mentioned, we’re also involved in a public-private partnership with the Commonwealth of Virginia, as well as Stafford County, Virginia, and several industrial partners such as Verizon, Cisco and a large number of small businesses. The Stafford County community testbed already has in place test infrastructure including IoT Edge devices, connectivity, 5G transmission, zero trust architectures, appliances, as well as data monitoring and collection capabilities.”
Wildfires have also provided SP-TC with a platform to partner with private companies. Last year, the center announced a partnership with Breeze Technologies UG of Hamburg, Germany, and N5 Sensors, Inc. of Rockville, Maryland, for wildfire detection and air quality monitoring. The project involved putting sensors in remote areas to create an early warning system. The problem: How do you power it?
Booth said powering sensors creates a major challenge for IoT technology. The power needs to last, it needs to be sturdy, and it needs to be cost effective.
“We’re looking now at the wildland fire sensors. And we’ll be looking at different solar power configurations. The traditional flat screen if you will, solar is just really to test to see the efficacy of the sensors and their algorithms,” he said. “But ultimately, we’re going to need to look at solar panels and batteries that are more flexible, longer-life durations. You don’t want to set up fire sensors and with Santa Ana winds that all of a sudden the solar panels become more like wind sail. So we’re looking at flexible panels that could wrap around a pole as an example, as one mechanism to try to address that.”
Could Gen. William Westmoreland see the future? In a 1970 issue of Army Aviation Digest, Westmoreland, then chief of staff of the U.S. Army, offered his view on the future of command decision-making. In the print version of a speech he had delivered the year before, he predicted the practice of command would take place on a highly surveilled, automated, and interconnected battlefield which, if such a system were realized, could “assist the tactical commander in making sound and timely decisions” as well as permitting “commanders to be continually aware of the entire battlefield panorama.” Although Westmoreland was certainly not a clairvoyant, in discussions of contemporary dreams of command his language would not appear out of place. This is especially the case surrounding efforts to implement Joint All Domain Command and Control, which, in broad terms, is the U.S. military’s effort to link decision-makers across land, sea, air, space, and cyber using advanced communications and data analysis technologies.
The release of the Department of Defense’s Joint All Domain Command and Control strategy in early 2022 offered some additional information regarding the goals of the department’s supposed reimagining of the technological elements supporting command in the U.S. military. The document is couched in the rhetoric of significant, technologically inspired changes that will allow the military to analyze and process data from a range of sources as well as decide faster, based on advanced analysis capabilities. According to the declassified summary, the strategy is structured around three pillars: “sense,” “make sense,” and “act.” The strategy’s drafters envision these core components being enabled by the information and decision-support capabilities of artificial intelligence, machine learning, and advanced sensor systems, of which “information and decision advantage” at the “speed of relevance” is intended to be the result. Westmoreland’s “sound and timely” decisions, as well as his notion of a commander being aware of the “entire battlefield panorama” would fit well into this schema. So too would Adm. William Owens’ desire to “lift the fog of war” through the integration of new information and communication technologies 30 years later.
As articulated in the strategy, such changes are proposed as a response to novel threats. Yet, as both Westmoreland and Owens’s perspectives attest, a review of past imaginations of command-related technologies in the U.S. military makes one thing clear: The vision for a contemporary advanced command system is yet another instantiation of a longstanding dream related to the orchestration of military decisions, rather than something entirely original. In such visions, the U.S. military risks what B.A. Friedman and Olivia A. Garard suggest is the tendency to conflate “technological capacity with command.” Thus, in light of the recent release of the strategy, it is worth exploring the rhetoric that sustains these reoccurring visions of technologically enabled command. Furthermore, we should consider the practical implications of technological roadblocks facing current efforts as well as possible perils related to its promises of enabling faster and better decisions.
Echoes from the Past
While such visions have roots that trace at least to missile defense programs like the Semi-Automatic Ground Environment in the 1950s — as well as early Pentagon-supported computing research initiatives for purposes of command and control during the 1960s — notions of command systems approximating the current intentions begin to coalesce more firmly during the latter decades of the Cold War. We can touch on a few examples that resonate with current efforts.
Driven by military competition between the Soviet Union and the United States, as well as fears over the implications of Japanese fifth-generation computing, the Defense Advanced Research Projects Agency undertook a decade-long, billion-dollar effort known as the Strategic Computing Initiative. Starting in the early 1980s, it featured a multi-pronged effort to explore the implications of AI for military purposes — a key element of which was command and control. This came particularly in the form of a proposed battle management system for the U.S. Navy. Like current efforts, Strategic Computing’s contribution to command-related research talked about AI-enabled expert systems helping commanders quickly sense and decide what to do. The program’s founding document noted the goal of developing human-like, “intelligent capabilities for planning and reasoning,” the desire to augment human judgment with AI-enabled expert systems, and the need to help make sense of “unpredictable” military situations in a rapid fashion. The plan was to create a decision-support system that went beyond the era’s existing options (principally the Worldwide Military Command and Control System, itself predicated on “accurate and timely decisions”) by leveraging intelligent computing that could assist in planning, formulating decision options, and managing uncertainty in quickly changing combat environments. In the end, as Emma Salisbury points out, the program had mixed results and generally failed to live up to its big AI-related promises due to technological roadblocks as well as funding shortfalls.
Yet almost as soon as Strategic Computing fizzled out, similar high-tech approaches to command systems popped up again. In the late 1990s, the U.S. Army’s Future Combat System proposed a modernized command structure linking manned and unmanned systems over wireless networks. The initiative emerged out of assumptions about war associated with the Revolution in Military Affairs, in which defense officials believed smaller, faster forces linked by advanced communications networks would prove decisive in future conflicts. While the modernization effort also intended to replace some military hardware such as the M1 Abrams tank, it was the command network that bonded the program together. The desire for speed and the notion that information dominance might prove critical in future conflicts drove the Future Combat System. In terms of command, as laid out in a 2007 Congressional Research Service report, this manifested in technological artifacts such as ‘Battle Command Software’, the desire for automated mission planning capabilities for rapid response, as well as the intent to improve “situation understanding” through the use of maps and databases that could track enemy locations. In fact, during congressional budgetary hearings, defense officials cited the supposed advantages gained by soldiers during testing, principally stemming from “increased soldier awareness and battlefield understanding.”
Similar to Strategic Computing, the Future Combat System endeavored to help commanders assess, make sense of information, and act quicker than was previously possible. However, also akin to Strategic Computing, the program struggled to live up to its promises. A 2012 RAND assessment of the program largely considered the project a failure due, at least in part, to over-aggressive timelines and shifting goal posts. While RAND’s report on the Future Combat System only took on aspects of command and control in a limited fashion, it did point to performance issues in the command system’s ability to complete tasks such as automated data fusion to generate a common operational picture. The report noted that such issues degraded “a key operational linchpin” of the program.
Although the Future Combat System and ongoing plans to develop a high-tech command system are not precisely the same, the parallels between the initiatives are at least close enough for defense officials to contend that current efforts do not repeat past mistakes. Even amongst such claims, there are notable similarities between the Future Combat System’s intentions to help the Army “see first, understand first, act first” and contemporary efforts to “sense, make sense, and act” at a quicker pace.
Apparently, dreams of technologically enabled sensemaking and fast decisions die hard. Ongoing justifications of new technologically advanced command systems commonly rely on rhetoric that coheres with the desires of their predecessors in both Strategic Computing and the Future Combat System. For example, members of the Joint All Domain Command and Control development team have argued that conflict in the future will feature decision timelines that are reduced to “milliseconds,” necessitating the integration of advancements in computing, AI, and machine learning as the “linchpin” of command requirements. As the strategy argues, such technological changes are necessary for processing and delivering information at the speeds required in modern conflict.
Furthermore, with respect to AI and the technological elements supporting military decision-making, similar rhetoric is deployed outside of discussions specific to current command visions. The 2021 National Security Commission’s Report on Artificial Intelligence, co-chaired by former Deputy Secretary of Defense Robert Work, asserts that, in military contexts, AI will help to “find needles in haystacks” and “enhance all-domain awareness” leading to “tighter and more informed decision cycles.” Similarly, the Defense Advanced Research Projects Agency’s notion of Mosaic Warfare is itself a metaphor for the intent to dynamically link weapons, sensors, and decision-makers. Such arguments appear all around us and reflect the degree of technological optimism surrounding AI-enabled systems. Interestingly, while these reoccurring desires for technologically enabled command systems are framed in terms of “rapid changes” to the security ecosystems and “significant new challenges” facing the United States, what seems to be happening is that the Department of Defense has turned again to its decades-long aspiration. For justification, it leans on rhetorical assertions and problematic assumptions that also have decades-long histories. And while strategic and political factors have changed in the time spanning from the Cold War to contemporary security challenges, similar tech-optimistic framings persist.
Inertia and Implications
Today, Joint All Domain Command and Control certainly has a degree of institutional momentum. As Gen. John Murray (now retired) stated in a 2020 congressional hearing, “nobody is arguing with the concept.” Furthermore, as recently as this year, Gen. Mark Milley proclaimed the “irreversible momentum” toward the program’s implementation. Whether that momentum pushes in a coherent direction or not is unclear, even considering the release of the recent strategy. What should be noted, however, is that in the days since initiatives such as Strategic Computing and the Future Combat System, AI/machine learning, the hardware and data availability needed to train algorithms, and other digital technologies have become more advanced and capable. Nonetheless, AI/machine learning is still afflicted by serious problems such as bias, issues with training data, trust in machine-human interaction, and difficulties related to explainability, or the desire to know why an AI system came to the decisions it did, though the latter is an issue the Department of Defense is working on. As an example, as suggested in a 2022 report by Stanford’s Human-Centered Artificial Intelligence, even in pristine research environments of universities and private sector companies, where datasets can be labeled and curated and algorithms trained and tested, there are still problems related to language models reproducing the bias in their training datasets at increasingly high rates. Thus, with respect to military AI, the problem is that even in ideal settings machine learning models can make unexpected and undesirable errors.
Furthermore, many models are trained with performance on benchmarking datasets as the ultimate goal, not their functionality in the real world. As machine learning researchers have suggested, this is because models are frequently “taught to the test”of the benchmark data, leading to outputs that can’t be replicated outside of controlled research environments. Therefore, major questions persist when it comes to AI/machine learning’s place in military command processes. For example, what data will the elements of command systems relying on the prediction and decision support capabilities of AI be trained on? Additionally, how certain should officials be of system performance in the complexity of war, particularly if AI-enabled command systems are offering up recommendations or possible courses of action? Avi Goldfarb and Jon Lindsay have recently pointed to the issues with data in military environments. There is a major risk to not solving such problems, particularly in the case of AI-enabled command decisions.
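The “taught to the test” problem can be made concrete with a toy experiment (a sketch using scikit-learn with synthetic data, not drawn from any cited study): a classifier that scores well on held-out data from its own benchmark distribution degrades sharply once the deployment distribution shifts, which is precisely the gap between laboratory performance and battlefield conditions at issue here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two-class Gaussian data; `shift` moves the deployment distribution."""
    X = rng.normal(size=(n, 5)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)  # the boundary moves too
    return X, y

X_train, y_train = make_data(2000)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_bench, y_bench = make_data(500)              # same distribution as training
X_field, y_field = make_data(500, shift=2.0)   # shifted "real world" conditions

print("benchmark accuracy:      ", model.score(X_bench, y_bench))
print("shifted-domain accuracy: ", model.score(X_field, y_field))
```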
That said, time horizons matter. Current efforts surrounding Joint All Domain Command and Control are mostly focused on initiatives such as cloud capabilities and improving interoperability and data sharing across the services and other partners. The implementation plan remains classified, so efforts related to using AI for prediction or planning are murky. Still, it is worth assessing what the military may turn to next in the context of the recent public strategy document, particularly if AI-enabled systems are envisioned as providing, as the strategy suggests, the “technical means to perceive, understand, and predict the actions and intentions of adversaries, and take action.”
Apart from concerns regarding technological functionality, we should also assess what the dominant assumptions contributing to current efforts might mean in practical terms. As addressed above, the desires to act faster and to reduce confusion on the battlefield are longstanding in military thought, and are frequently linked to the supposed capabilities of advanced computational systems. The implication of these long-term problems — if they are considered solvable through command-related technologies — is worrying. This is particularly the case if such technologies are envisioned as the path to achieving a rapid, decisive victory.
Scholars have documented the perils of assuming, and pursuing, fast, decisive war. In fact, Dave Johnson recently explored this issue in War on the Rocks, touching on and critiquing the “belief that future wars will be short, decisive affairs.” Furthermore, Antoine Bousquet argues that militaries seeking such decisiveness have repeatedly turned to science and technology. While it is important not to conflate general strategic considerations with tactical decisions on the battlefield, the decisions commanders make should support overall political objectives. And as Paul Brister suggests in a recent Brookings edited volume, there are tough lessons to be learned from assuming technologically enabled tactical speed will lead to short or easily won wars. In the same volume, Nina Kollars argues a parallel point, suggesting that “the lure of faster war leading to faster victory is not only questionable but also a persistent techno-pathological obsession.” Thus, speedy, AI-enabled decisions should not be seen as undeniably beneficial, particularly in the face of current discussions of so-called “hyperwar.”
Notably, Joint All Domain Command and Control by itself is not a theory of victory or a warfighting concept. And it does not advocate a speedy end to war as such. Rather, its proponents envision it as an enabler of other warfighting functions through establishing, at least in its initial phases, a more interoperable set of information systems. The proposed result of such endeavors is, as the strategy contends, to “directly and dramatically improve a commander’s ability to gain and maintain information and decision advantage.” That said, its proponents risk forwarding advanced command systems as the enablers of rapid — decisive — victory, thus overly buying into technological optimism as a solution to achieving political or strategic goals. As the vision for a new command system continues to emerge, we would be better served by being wary of any “techno-pathological obsession.”
Furthermore, as opposed to some conclusions from the National Security Commission on AI, it is not entirely apparent that AI-enabled systems will be successful in clarifying very much in the conduct of warfare. As Sam Tangredi notes, AI systems still struggle with problems that are not well-structured, and war is anything but a well-structured problem. Moreover, deep-learning AI models have proven to be relatively easy to trick through, for example, altering pixels in images fed to the algorithm. Significant elements of war are related to diversionary tactics and stratagems meant to confuse and mislead adversaries. This combination of a technical problem and the common practice of deception in war may lead to further confusion rather than clarity for those operating within any fully realized version of an AI-enabled command system.
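The pixel-level attacks mentioned above are simple to express. The sketch below shows the fast gradient sign method (FGSM) against a small, untrained PyTorch model, purely to illustrate the mechanics; real attacks target trained models, where a perturbation this small is often enough to flip a correct prediction.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier; an untrained model is used only to show the mechanics.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "input image"
label = torch.tensor([3])                              # its (assumed) true class

# Forward and backward pass to get the gradient of the loss w.r.t. the pixels.
loss = loss_fn(model(image), label)
loss.backward()

# FGSM: nudge every pixel a small step in the direction that increases the loss.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```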
Conclusion
Accordingly, we should hesitate to conclude that new command-related technologies will suddenly illuminate the battlefield for commanders leading to rapidly executed better decisions, or necessarily lead to anticipated positive military outcomes. Further, we should be skeptical that such desires are all that new, and, as Martin Van Creveld’s work on command suggests, they will remain very hard to perfect. Justifications for developing a new, more advanced, command system echo past projects such as the Strategic Computing Initiative and the Future Combat System, both of which were generally unsuccessful in hitting their ambitious marks. It is important to at least consider the rhetoric, and the outcomes, of these past projects in debates over the merits and possibilities of future technologically advanced command systems.
Furthermore, the rhetoric surrounding the development of new high-tech command systems is potentially risky if it is substantively linked to assumptions about fast-acting, technologically decisive war. There are historical cases which demonstrate the consequences of instantiating similar assumptions into military practice. Azar Gat’s work on the history of military thought leading up to WWII, while too complex to delve into in detail here, demonstrates that myth-like ideas about technological artifacts such as motor vehicles and airplanes shaped how theories of fast, mechanized war were put into use. Thus, a reflective consideration of the historical similarities related to linking fast decisions and enhanced situational awareness with advanced computation or AI-enabled systems, as well as foregrounding the very real technological hurdles facing the integration of AI-related technologies into decision processes, will help to avert the worst outcomes.
Ian Reynolds is a Ph.D. candidate in International Relations at the American University School of International Service studying the history and cultural politics of military artificial intelligence. He is also a doctoral research fellow at the Internet Governance Lab and a research associate at the Center for Security Innovation and New Technology, both housed at American University. During the 2022/23 academic year, Ian will be a pre-doctoral fellow at Stanford’s Center for International Security and Cooperation as well as the Institute for Human Centered Artificial Intelligence.
Service leaders will boost research into synthetic blood, quantum computing, and more
The Army wants to dramatically change the way it provides health care to soldiers by accelerating research in a variety of emerging technologies, from quantum computing that could better detect and treat chronic illnesses to synthetic blood, according to newly released plans.
Army Futures Command, charged with synchronizing the service’s modernization efforts, outlined how the Army plans to update its health system and overall approach to medicine across six critical areas, from training to policy, by 2035.
The goal of the strategy, which is dated May but was released Thursday, is to “fundamentally transform” the Army Health System to one that integrates autonomous technologies and predictive artificial intelligence tools to improve decision making, Lt. Gen. James Richardson, acting commanding general for Army Futures Command, wrote in the document’s introduction.
Richardson wrote that Army medicine has “continuously placed new technology on top of existing doctrine” for decades but that is “no longer adequate.”
In the document, Army leaders name research priority areas, including humanistic intelligence, which would build on artificial intelligence and machine-learning research to make human-machine teaming a more “seamless integration.”
Medical research was only one of six areas–including governance, policy and authorities, facilities, training, and global health–in which the Army expects to see significant changes as modern warfare incorporates more technological advancements.
According to the document, the Army also plans to focus resources in biotechnologies that “improve human form or functioning in excess of what is necessary to restore or sustain health.” That includes investments in genetic manipulation, such as genome editing, as well as developing methods that disrupt biological processes, biosensors and informatics, and robotics.
Another research priority listed is synthetic biology, which aims to improve medical care through the development of synthetic blood products and improved burn treatments, and through exploring uses for additive manufacturing to create on-demand pharmacies and biologically derived medicines.
The document feeds into the Army’s overall modernization strategy. Ideas laid out in the medical modernization strategy are expected to trickle into other areas, such as Army Health System doctrine, policy, training and education, materiel, organization, facilities, and personnel.