Story by Adelia Henderson, ODNI Office of Strategic Communications
The Office of the Director of National Intelligence turns 17 today! Since its founding in 2005, ODNI has led intelligence integration across the U.S. Intelligence Community in a variety of ways. In honor of ODNI’s 17th anniversary, here are some notable moments from our history.
Founding of ODNI
After the September 11th terrorist attacks, Congress passed the Intelligence Reform and Terrorism Prevention Act of 2004. This created the Office of the Director of National Intelligence to oversee a 17-organization Intelligence Community by improving integration and information sharing.
On April 21, 2005, Ambassador John Negroponte and Gen. Michael Hayden were sworn in as the first Director of National Intelligence and Principal Deputy Director of National Intelligence, respectively, following Senate confirmation. ODNI began operations on April 22, 2005. The first year consisted of several office moves: ODNI was first located in the Old Executive Office Building, then moved to the New Executive Office Building, before landing at present-day Joint Base Anacostia-Bolling in 2006.
The National Counterproliferation Center is Created
Under the IRTPA and based on recommendations from the Weapons of Mass Destruction Commission, DNI Negroponte announced the creation of the National Counterproliferation Center (NCPC) in December 2005. Today, NCPC helps counter threats to the United States from the proliferation of chemical, biological, radiological and nuclear weapons, and the missiles capable of delivering them.
Founding of IARPA
The Intelligence Advanced Research Projects Activity (IARPA) was established under ODNI in 2007. IARPA invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges facing the agencies and disciplines of the Intelligence Community. Learn more about IARPA here.
ODNI Moves to Liberty Crossing
In 2008, ODNI moved to its current headquarters in McLean, Virginia. A new building was constructed next to the National Counterterrorism Center to serve as ODNI’s headquarters. The two buildings are shaped like an “L” and “X”, and the site is collectively referred to as Liberty Crossing. NCTC is aligned under ODNI and plays a vital role in protecting our country and U.S. interests around the world from the threat of terrorism.
ODNI Employee Resource Groups Begin in 2014
As it approached the end of its first decade, ODNI’s approach to cultivating a diverse workforce evolved to establish Employee Resource Groups in 2014, consistent with the Principles of Professional Ethics for the Intelligence Community. These groups foster unified culture across ODNI and advocate for a more inclusive workplace aligned with ODNI’s mission, vision and values. Currently, there are 10 active, employee-led ERGs.
The National Counterintelligence and Security Center is Created
DNI Clapper established the National Counterintelligence and Security Center (NCSC) in 2014. The existing Office of the National Counterintelligence Executive was combined with the Center for Security Evaluation, the Special Security Center and the National Insider Threat Task Force to integrate and align counterintelligence and security mission areas.
Intelligence Community Campus – Bethesda Opens
As ODNI matured over the years, so did its facilities. The Intelligence Community Campus – Bethesda, which originally served as a facility for the National Geospatial-Intelligence Agency, was acquired by ODNI in 2012. The site underwent renovations and officially opened in 2015.
The U.S. Space Force Joins the Intelligence Community
The country’s newest Armed Forces branch officially became the IC’s 18th agency on January 8, 2021. Space Force was the first new organization to join the IC since 2006, when the Drug Enforcement Administration joined. Space Force’s addition advances strategic change across the national security space enterprise.
Avril Haines Sworn in as DNI
On January 21, 2021, Avril Haines became the seventh Senate-confirmed Director of National Intelligence. She is the first woman to lead the IC and previously served as deputy director of CIA. Read her confirmation hearing testimony here.
The National Intelligence University Joins ODNI
In June 2021, the National Intelligence University transitioned from the Defense Intelligence Agency to ODNI. NIU is the leading institution for intelligence education, in-depth research and interagency engagement, offering classified courses at the undergraduate and graduate levels. Learn more about NIU’s transition to ODNI here.
Dr. Stacey Dixon Sworn in as PDDNI
Dr. Stacey Dixon began serving as the sixth Senate-confirmed Principal Deputy Director of National Intelligence on August 4, 2021. She is the first person of color confirmed to this position and the highest-ranking African American woman in the IC.
This story is the fourth and final part of MIT Technology Review’s series on AI colonialism, the idea that artificial intelligence is creating a new colonial world order. It was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. Read the rest of the series here.
In the back room of an old and graying building in the northernmost region of New Zealand, one of the most advanced computers for artificial intelligence is helping to redefine the technology’s future.
Te Hiku Media, a nonprofit Māori radio station run by life partners Peter-Lucas Jones and Keoni Mahelona, bought the machine at a 50% discount to train its own algorithms for natural-language processing. It’s now a central part of the pair’s dream to revitalize the Māori language while keeping control of their community’s data.
Mahelona, a native Hawaiian who settled in New Zealand after falling in love with the country, chuckles at the irony of the situation. “The computer is just sitting on a rack in Kaitaia, of all places—a derelict rural town with high poverty and a large Indigenous population. I guess we’re a bit under the radar,” he says.
The project is a radical departure from the way the AI industry typically operates. Over the last decade, AI researchers have pushed the field to new limits with the dogma “More is more”: Amass more data to produce bigger models (algorithms trained on said data) to produce better results.
The approach has led to remarkable breakthroughs—but to costs as well. Companies have relentlessly mined people for their faces, voices, and behaviors to enrich bottom lines. And models built by averaging data from entire populations have sidelined minority and marginalized communities even as they are disproportionately subjected to the technology.
Over the years, a growing chorus of experts have argued that these impacts are repeating the patterns of colonial history. Global AI development, they say, is impoverishing communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires.
Peter-Lucas Jones (left) and Keoni Mahelona (right) attend an Indigenous AI Workshop in 2019.
This has been particularly apparent for artificial intelligence and language. “More is more” has produced large language models with powerful autocomplete and text analysis capabilities now used in everyday services like search, email, and social media. But these models, built by hoovering up large swathes of the internet, are also accelerating language loss, in the same way colonization and assimilation policies did previously.
Only the most common languages have enough speakers—and enough profit potential—for Big Tech to collect the data needed to support them. Relying on such services in daily work and life thus coerces some communities to speak dominant languages instead of their own.
“Data is the last frontier of colonization,” Mahelona says.
In turning to AI to help revive te reo, the Māori language, Mahelona and Jones, who is Māori, wanted to do things differently. They overcame resource limitations to develop their own language AI tools, and created mechanisms to collect, manage, and protect the flow of Māori data so it won’t be used without the community’s consent, or worse, in ways that harm its people.
Now, as many in Silicon Valley contend with the consequences of AI development today, Jones and Mahelona’s approach could point the way to a new generation of artificial intelligence—one that does not treat marginalized people as mere data subjects but reestablishes them as co-creators of a shared future.
Like many Indigenous languages globally, te reo Māori began its decline with colonization.
After the British laid claim to Aotearoa, the te reo name for New Zealand, in 1840, English gradually took over as the lingua franca of the local economy. In 1867, the Native Schools Act then made it the only language in which Māori children could be taught, as part of a broader policy of assimilation. Schools began shaming and even physically beating Māori students who attempted to speak te reo.
In the following decades, urbanization broke up Māori communities, weakening centers of culture and language preservation. Many Māori also chose to leave in search of better economic opportunities. Within a generation, the proportion of te reo speakers plummeted from 90% to 12% of the Māori population.
In the 1970s, alarmed by this rapid decline, Māori community leaders and activists fought to reverse the trend. They created childhood language immersion schools and adult learning programs. They marched in the streets to demand that te reo have equal status with English.
In 1987, 120 years after actively supporting its erasure, the government finally passed the Māori Language Act, declaring te reo an official language. Three years later, it began funding the creation of iwi, or tribal, radio stations like Te Hiku Media, to publicly broadcast in te reo to increase the language’s accessibility.
Many Māori I speak to today identify themselves in part by whether or not their parents or grandparents spoke te reo Māori. It’s considered a privilege to have grown up in an environment with access to intergenerational language transmission.
This is the gold standard for language preservation: learning through daily exposure as a child. Learning as a teen or adult in an academic setting is not only harder; a textbook often teaches only a single, or “standard,” version of te reo, when each iwi, or tribe, has unique accents, idiomatic expressions, and embedded regional histories.
Language, in other words, is more than just a tool for communication. It encodes a culture as it’s passed from parent to child, from child to grandchild, and evolves through those who speak it and inhabit its meaning. It also influences as much as it is influenced, shaping relationships, worldviews, and identities. “It’s how we think and how we express ourselves to each other,” says Michael Running Wolf, another Indigenous technologist who’s using AI to revive a rapidly disappearing language.
“Data is the last frontier of colonization.”
Keoni Mahelona
To preserve a language is thus to preserve a cultural history. But in the digital age especially, it takes constant vigilance to yank a minority language out of its downward trajectory. Every new communication space that doesn’t support it forces speakers to choose between using a dominant language and forgoing opportunities in the larger culture.
“If these new technologies only speak Western languages, we’re now excluded from the digital economy,” says Running Wolf. “And if you can’t even function in the digital economy, it’s going to be really hard for [our languages] to thrive.”
With the advent of artificial intelligence, language revitalization is now at a crossroads. The technology can further codify the supremacy of dominant languages, or it can help minority languages reclaim digital spaces. This is the opportunity that Jones and Mahelona have seized.
Long before Jones and Mahelona embarked on this journey, they met over barbecue at their swimming club’s member gathering in Wellington. The two instantly hit it off. Mahelona took Jones on a long bike ride. “The rest is history,” Mahelona says.
In 2012, the pair moved back to Jones’s hometown of Kaitaia, where Jones became CEO of Te Hiku Media. Because of its isolation, the region remains one of the most economically impoverished of Aotearoa, but by the same token, its Māori population is among the country’s best protected.
Over its 20-odd years of broadcasting history, Te Hiku had amassed a rich archive of te reo audio materials. It includes gems like a recording of Jones’s own grandmother Raiha Moeroa, born in the late 19th century, whose te reo remained largely untouched by colonial influence.
Jones saw an opportunity to digitize the archive and create a more modern equivalent of intergenerational language transmission. Most Māori no longer live with their iwis and can’t rely on nearby kin for daily te reo exposure. With a digital library, however, they’d be able to listen to te reo from bygone elders whenever and wherever they wanted.
The local Māori tribes granted him permission to proceed, but Jones needed a place to host the materials online. Neither he nor Mahelona liked the idea of uploading them to Facebook or YouTube. It would give the tech giants license to do what they wanted with the precious data.
(A few years later, companies would indeed begin working with Māori speakers to acquire such data. Duolingo, for example, sought to build language-learning tools that could then be marketed back to the Māori community. “Our data would be used by the very people that beat that language out of our mouths to sell it back to us as a service,” Jones says. “It’s just like taking our land and selling it back to us,” Mahelona adds.)
The only alternative was for Te Hiku to build its own digital hosting platform. With his engineering background, Mahelona agreed to lead the project and joined as CTO.
The digital platform became Te Hiku’s first major step to establishing data sovereignty—a strategy in which communities seek control over their own data in an effort to ensure control over their future. For Māori, the desire for such autonomy is rooted in history, says Tahu Kukutai, a cofounder of the Māori data sovereignty network. During the earliest colonial censuses, after a series of devastating wars in which they killed thousands of Māori and confiscated their land, the British collected data on tribal numbers to track the success of the government’s assimilation policies.
Data sovereignty is thus the latest example of Indigenous resistance—against colonizers, against the nation-state, and now against big tech companies. “The nomenclature might be new, the context might be new, but it builds on a very old history,” Kukutai says.
In 2016, Jones embarked on a new project: to interview native te reo speakers in their 90s before their language and knowledge was lost to future generations. He wanted to create a tool that would display a transcription alongside each interview. Te reo learners would then be able to hover on words and expressions to see their definitions.
But few people had enough mastery of the language to manually transcribe the audio. Inspired by voice assistants like Siri, Mahelona began looking into natural-language processing. “Teaching the computer to speak Māori became absolutely necessary,” Jones says.
But Te Hiku faced a chicken-and-egg problem. To build a te reo speech recognition model, it needed an abundance of transcribed audio. To transcribe the audio, it needed the advanced speakers whose small numbers it was trying to compensate for in the first place. There were, however, plenty of beginning and intermediate speakers who could read te reo words aloud better than they could recognize them in a recording.
So Jones and Mahelona, along with Te Hiku COO Suzanne Duncan, devised a clever solution: rather than transcribe existing audio, they would ask people to record themselves reading a series of sentences designed to capture the full range of sounds in the language. To an algorithm, the resulting data set would serve the same function. From those thousands of pairs of spoken and written sentences, it would learn to recognize te reo syllables in audio.
The team announced a competition. Jones, Mahelona, and Duncan contacted every Māori community group they could find, including traditional kapa haka dance troupes and waka ama canoe-racing teams, and revealed that whichever one submitted the most recordings would win a $5,000 grand prize.
The entire community mobilized. Competition got heated. One Māori community member, Te Mihinga Komene, an educator and advocate of using digital technologies to revitalize te reo, recorded 4,000 phrases alone.
Money wasn’t the only motivator. People bought into Te Hiku’s vision and trusted it to safeguard their data. “Te Hiku Media said, ‘What you give us, we’re here as kaitiaki [guardians]. We look after it, but you still own your audio,’” says Te Mihinga. “That’s important. Those values define who we are as Māori.”
Within 10 days, Te Hiku amassed 310 hours of speech-text pairs from some 200,000 recordings made by roughly 2,500 people, an unheard-of level of engagement among researchers in the AI community. “No one could’ve done it except for a Māori organization,” says Caleb Moses, a Māori data scientist who joined the project after learning about it on social media.
The amount of data was still small compared with the thousands of hours typically used to train English language models, but it was enough to get started. Using the data to bootstrap an existing open-source model from the Mozilla Foundation, Te Hiku created its very first te reo speech recognition model with 86% accuracy.
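The article doesn’t detail Te Hiku’s actual training pipeline, so the sketch below is a rough illustration only of how paired recordings and written sentences like those collected in the campaign are typically fed to a CTC-based speech recognizer in PyTorch. The alphabet, model size, file name, and training loop are hypothetical choices, not Te Hiku’s or Mozilla’s code.

```python
# Illustrative sketch only: NOT Te Hiku Media's or Mozilla's pipeline.
# It shows the general shape of CTC training on (audio, transcript) pairs.
import torch
import torch.nn as nn
import torchaudio

# Hypothetical character set; index 0 is reserved for the CTC "blank" symbol.
ALPHABET = " aeghikmnoprtuwāēīōū"
char_to_idx = {c: i + 1 for i, c in enumerate(ALPHABET)}

def encode_transcript(text: str) -> torch.Tensor:
    """Map a written sentence to a sequence of character indices."""
    return torch.tensor([char_to_idx[c] for c in text.lower() if c in char_to_idx])

class TinySpeechModel(nn.Module):
    """A deliberately small acoustic model: mel-spectrogram frames -> per-frame character logits."""
    def __init__(self, n_mels=80, hidden=256, n_classes=len(ALPHABET) + 1):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, num_layers=2, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(hidden * 2, n_classes)

    def forward(self, mels):            # mels: (batch, time, n_mels)
        out, _ = self.rnn(mels)
        return self.proj(out)           # (batch, time, n_classes)

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=80)
model = TinySpeechModel()
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(waveform: torch.Tensor, transcript: str) -> float:
    """One gradient update from a single (spoken sentence, written sentence) pair."""
    feats = mel(waveform).squeeze(0).transpose(0, 1).unsqueeze(0)   # (1, time, n_mels)
    log_probs = model(feats).log_softmax(dim=-1).transpose(0, 1)    # (time, 1, n_classes)
    target = encode_transcript(transcript).unsqueeze(0)
    loss = ctc_loss(log_probs, target,
                    torch.tensor([log_probs.size(0)]), torch.tensor([target.size(1)]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with one hypothetical pair (16 kHz mono audio):
# waveform, _ = torchaudio.load("korero_0001.wav")
# train_step(waveform, "tēnā koe")
```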
From there, it branched out into other language AI technologies. Mahelona, Moses, and a newly assembled team created a second algorithm for auto-tagging complex te reo phrases, and a third for giving real-time feedback to te reo learners on the accuracy of their pronunciation. The team even experimented with voice synthesis to create the te reo equivalent of a Siri, though it ultimately didn’t clear the quality bar to be deployed.
Along the way, Te Hiku established new data sovereignty protocols. Māori data scientists like Moses are still few and far between, but those who join from outside the community cannot just use the data as they please. “If they want to try something out, they ask us, and we have a decision-making framework based on our values and our principles,” Jones says.
It can be challenging. The open-source, free-wheeling culture of data science is often antithetical to the practice of data sovereignty, as is the culture of AI. There have been times when Te Hiku has let data scientists go because they “just want access to our data,” Jones says. It now seeks to cultivate more Māori data scientists through internship programs and junior positions.
Te Hiku has since made most of its tools available as APIs through its new digital language platform, Papa Reo. It’s also working with Māori-led organizations like the educational company Afed Limited, which is building an app to help te reo learners practice their pronunciation. “It’s really a game changer,” says Cam Swaison-Whaanga, Afed’s founder, who is also on his own te reo learning journey. Students no longer have to feel shy about speaking aloud in front of teachers and peers in a classroom.
Te Hiku has begun working with smaller Indigenous populations as well. In the Pacific region, many share the same Polynesian ancestors as the Māori, and their languages have common roots. Using the te reo data as a base, a Cook Islands researcher was able to train an initial Cook Islands language model to reach roughly 70% accuracy using only tens of hours of data.
“It’s no longer just about teaching computers to speak te reo Māori,” Mahelona says. “It’s about building a language foundation for Pacific languages. We’re all struggling to keep our languages alive.”
“Regardless of how widely spoken they are, languages belong to a people.”
Kathleen Siminyu
But Jones and Mahelona know there will come a time when they will have to work with more than Indigenous communities and organizations. If they want te reo to truly be ubiquitous—to the point of having te reo–speaking voice assistants on iPhones and Androids—they’ll need to partner with big tech companies.
“Even if you have the capacity in the community to do really cool speech recognition or whatever, you have to put it in the hands of the community,” says Kevin Scannell, a computer scientist helping to revitalize the Irish language, who has grappled with the same trade-offs in his research. “Having a website where you can type in some text and have it read to you is important, but it’s not the same as making it available in everybody’s hand on their phone.”
Jones says Te Hiku is preparing for this inevitability. It created a data license that spells out the ground rules for future collaborations based on the Māori principle of kaitiakitanga, or guardianship. It will only grant data access to organizations that agree to respect Māori values, stay within the bounds of consent, and pass on any benefits derived from its use back to the Māori people.
The license has yet to be used by an organization other than Te Hiku, and there remain questions around its enforceability. But the idea has already inspired other AI researchers, like Kathleen Siminyu of Mozilla’s Common Voice project, which gathers voice donations to build public data sets for speech recognition in different languages. Right now those data sets can be downloaded for any purpose. But last year, Mozilla began exploring a license more similar to Te Hiku’s that would give greater control to language communities that choose to donate their data. “It would be great if we could tell people that part of contributing to a data set leads to you having a say as to how the data set is used,” she says.
Margaret Mitchell, the former co-lead of Google’s ethical AI team who conducts research on data governance and ownership practices, agrees. “This is exactly the kind of license we want to be able to develop more generally for all different kinds of technology. I would really like to see more of it,” she says.
In some ways, Te Hiku got lucky. Te reo can take advantage of English-centric AI technologies because it has enough similarity to English in key features like its alphabet, sounds, and word construction. The Māori are also a fairly large Indigenous community, which allowed them to amass enough language data and find data scientists like Moses to help make their vision a reality.
“Most other communities are not big enough for those happy accidents to occur,” says Jason Edward Lewis, a digital technologist and artist who co-organizes the Indigenous AI Network.
At the same time, he says, Te Hiku has been a powerful demonstration that AI can be built outside the wealthy profit centers of Silicon Valley—by and for the people it’s meant to serve.
Te Hiku Media receives a New Zealand innovation award for its language revitalization work.
The example has already motivated others. Michael Running Wolf and his wife, Caroline, also an Indigenous technologist, are working to build speech recognition for the Makah, an Indigenous people of the Pacific Northwest coast, whose language has only around a dozen remaining speakers. The task is daunting: the Makah language is polysynthetic, which means a single word, composed of multiple building blocks like prefixes and suffixes, can express an entire English sentence. Existing natural-language processing techniques may not be applicable.
Before Te Hiku’s success, “we didn’t even consider looking into it,” Caroline says. “But when we heard the amazing work they’re doing, it was just fireworks going off in our head: ‘Oh my God, it’s finally possible.’”
Mozilla’s Siminyu says Te Hiku’s work also carries lessons for the rest of the AI community. In the way the industry operates today, it’s easy for individuals and communities to be disenfranchised; value is seen to come not from the people who give their data but from the ones who take it away. “They say, ‘Your voice isn’t worth anything on its own. It actually needs us, someone with a capacity to bring billions together, for each to be meaningful,’” she says.
In this way, then, natural-language processing “is a nice segue into starting to figure out how collective ownership should work,” she adds. “Because regardless of how widely spoken they are, languages belong to a people.”
The Cybersecurity Maturity Model Certification (CMMC) program at the Department of Defense has gone through its share of changes and “evolutions” over the past year. Despite another looming regulatory process, DoD officials and contracting experts are indicating that the program is unlikely to undergo another major overhaul.
The CMMC 2.0 framework, released late last year, is currently going through a rulemaking process under Title 32 of U.S. law, which outlines rules and regulations for national defense. The program is also due for another regulatory cycle later this year under Title 48, which governs the Federal Acquisition Regulations System, but DoD’s Stacy Bostjanick said officials hope that any further changes will be minor or done in the context of a real, operational program, not a theoretical concept.
“My prayer is that once we get through this round [of rulemaking], CMMC will be a thing. Our anticipation is that we will be allowed to have another interim rule like last time. We’re hoping that that interim rule will go into effect by May,” said Bostjanick, the director of CMMC policy in the Office of the Under Secretary of Defense for Acquisition and Sustainment, during a panel discussion with SC Media at the AFCEA DC Cyber Mission Summit this week. “Once we get through this rulemaking process, we hope there will only be one more aspect that we’ll have to address and that will be international partners.”
The biggest change that came out of CMMC 2.0 was a concerted effort to recalibrate who would (and would not) require a third-party cybersecurity assessment.
Faced with a shortage of trained assessors and feedback in the form of hundreds of public comments from the contracting industry about the scope of the program, the Pentagon simplified the different levels of certification from five to three and specified that defense contractors who do not handle controlled unclassified information would be able to self-attest that they are meeting the government’s cybersecurity requirements.
Bostjanick pointed to the roughly 80,000 companies that DoD estimates will qualify for Level 2 maturity (which merged many of the requirements from Levels 2-4 in the previous plan), saying that change is “where there’s been a lot of conversation” with the contracting community.
However, defense contracting experts say that contractors are often unaware of whether they even handle CUI, or misunderstand how the government classifies protected information. Even contracts for non-technical equipment, supplies and services end up being classified as controlled information, because those requirements sometimes come in a package of information that includes documents detailing sensitive designs or layouts for military facilities. If they’re not flagged, those same documents can end up flowing to subcontractors and other third parties.
“Unfortunately, we haven’t done a good job within the department training our program managers and contracting officers to identify [controlled unclassified information],” said Bostjanick.
Defense-contract watchers predict few CMMC changes
Some observers are predicting that the core elements of the program and its requirements are unlikely to change drastically. Jacob Horne, who works on CMMC compliance issues at Summit7, told SC Media that despite the handwringing and looming regulatory process, the program’s requirements remain tied to the National Institute of Standards and Technology’s Special Publication 800-171, which covers how contractors must handle and secure controlled unclassified information.
“The overall takeaway is that — much like the changes from 1.0 to 2.0 — there’s actually much less that can change than people think,” Horne said in an interview. “The majority of the burden and cost and impact that are facing companies with CUI stem from NIST, not the CMMC program.”
Additionally, he said many of the same complaints raised by industry around CMMC (namely that the cost burden and impact associated with the cybersecurity requirements will hurt the ability of small businesses to compete and create new barriers to entry) were similarly levied in previous regulatory processes around controlled unclassified information in 2013 and 2017.
To be clear, shrinking participation from smaller businesses (including startups that DoD often taps for innovation) is a real problem. An analysis by Amanda and Alex Bresler of PW Communications found that the total number of small businesses in the defense market shrank by nearly a quarter, 23%, over the last six years, from an estimated 68,000 companies in 2015 to about 52,000 in 2021. Bostjanick said the department is looking at developing a “cybersecurity-as-a-service” model that could provide smaller companies with higher end defensive capabilities, though such services wouldn’t replace the need for a dedicated cybersecurity program at those companies.
Even still, Horne said that the government has consistently dismissed those complaints in the past and has little reason to back off that position now or make broad exceptions.
“I don’t see anything in the language, in the indirect language of the DoD’s webinars and industry presentations, that indicate to me a fundamental policy shift from previous rulemaking on this subject,” said Horne. “If anything, I think the nature of the threat and the lack of implementation by the DIB and how bad this problem has languished, the government will be even less inclined to give out waivers and [plans of action and milestones].”
The National Cybersecurity Center of Excellence (NCCoE) announces the release of three related publications on trusted cloud and hardware-enabled security. The foundation of any data center or edge computing security strategy should be securing the platform on which data and workloads will be executed and accessed. The physical platform represents the first layer for any layered security approach and provides the initial protections to help ensure that higher-layer security controls can be trusted.
NIST Special Publication (SP) 1800-19 presents an example of a trusted hybrid cloud solution that demonstrates how trusted compute pools leveraging hardware roots of trust can provide the necessary security capabilities for cloud workloads in addition to protecting the virtualization and application layers. View the document.
Each of the reports below, NISTIR 8320B and NISTIR 8320C, is intended to serve as a blueprint or template that the general security community can use as an example proof-of-concept implementation.
NISTIR 8320B explains an approach based on hardware-enabled security techniques and technologies for safeguarding container deployments in multi-tenant cloud environments. View the document.
Draft NISTIR 8320C presents an approach for overcoming security challenges associated with creating, managing, and protecting machine identities, such as cryptographic keys, throughout their lifecycle. View the document.
We Want to Hear from You!
Review the draft NISTIR 8320C and submit comments online on or before June 6, 2022. You can also contact us at hwsec@nist.gov. We value and welcome your input and look forward to your comments.
NIST Cybersecurity and Privacy Program | NIST Applied Cybersecurity Division (ACD) | National Cybersecurity Center of Excellence (NCCoE)
Questions/Comments about this notice: hwsec@nist.gov
Preston Dunlap, the Department of the Air Force’s first-ever chief architect officer, is set to leave the Pentagon in the coming weeks, he confirmed in a lengthy LinkedIn post on April 18—and he has a long list of recommendations for those coming after him on how to combat Defense Department bureaucracy.
Dunlap’s departure, first reported by Bloomberg, marks the latest exit by a high-ranking Air Force official tasked with modernizing the department’s approach to software and technology. In September 2021, Nicolas M. Chaillan, the first-ever chief software officer of the Air Force, announced his resignation, also on LinkedIn and also offering a candid assessment of the challenges facing DAF.
Dunlap first came to the Air Force in 2019, primarily tasked with overseeing the development and organization of the Advanced Battle Management System, the Air Force’s contribution to joint all-domain command and control—the so-called military Internet of Things that will connect sensors and shooters into one massive network.
Under Dunlap, progress on ABMS proceeded with numerous experiments. Dunlap also oversaw the development of an “integrated warfighting network” to allow small teams of Airmen serving in far-flung locations to use their work laptops on deployments.
“It’s been my honor to help our nation get desperately needed technology into the hands of our service members who place their lives on the line every day,” Dunlap wrote on LinkedIn. “Some of that technology was previously unimaginable before we developed new capabilities, and at other times it was previously unattainable—available commercially, yet beyond DOD’s grasp.”
Initially, Dunlap wrote, he signed on for two years in the Pentagon, before agreeing to extend his stay for a third year. Now, as he departs, he is joining Chaillan in pointing out the DOD’s shortcomings when it comes to innovation and pushing the department to revolutionize its approach, especially for adopting new technologies, so that it can, in his words, “defy gravity.”
“Not surprisingly to anyone who has worked for or with the government before, I arrived to find no budget, no authority, no alignment of vision, no people, no computers, no networks, a leaky ceiling, even a broken curtain,” Dunlap wrote.
In looking to break through bureaucracy, Dunlap wrote, he followed four key steps that he urged his successor to follow: shock the system, flip the acquisition script, just deliver already, and slay the valley of death and scale.
In doing so, he said, he sought to operate more like SpaceX, the aerospace company founded by Elon Musk that has earned plaudits for its fast-moving, innovative practices.
“By the time the government manages to produce something, it is too often obsolete; no business would ever survive this way, nor should it. Following a commercial approach, just like SpaceX, allowed me to accomplish a number of ‘firsts’ in DOD in under two years,” Dunlap wrote.
Among those firsts, Dunlap referenced the integration of artificial intelligence into military kill chains, interoperability of data and communications across different satellites and aircraft, the deployment of zero trust architecture, and the promotion of security in software development, known as DevSecOps.
In addition, Dunlap argued for a “reformatting” of the Pentagon’s acquisition enterprise, an oft-criticized process seen by many as outdated. By leveraging commercial technologies, shifting focus to outcomes instead of detailed requirements, putting more investment into outside innovators, and pushing forward at a concerted, rapid pace, the Pentagon can start to “regrow its thinning technological edge,” Dunlap wrote.
To help drive innovation and progress, Dunlap also pushed for flexibility—both in how the department works and connects, and in how it develops new systems. In particular, he argued for open systems and open architectures that allow new systems to rapidly adapt to and integrate new capabilities as they are developed, pointing to the B-21 Raider and Next-Generation Air Dominance programs as examples of that approach.
“We should never be satisfied,” Dunlap closed by writing. “We need this kind of progress at scale now, not tomorrow. So let’s be careful to not…
“Lull ourselves into complacency, when we should be running on all cylinders.
“Do things the same way, when we should be doing things better.
“Distract ourselves with process, when we should be focused on delivering product.
“Compete with each other, when we should be competing with China.
“Defend our turf, when we should be defending our country.
“Focus on input metrics, when we should be focused on output metrics.
“Buy the same things, when we should be investing in what we need.
“Be comfortable with the way things are, when we should be fighting for the way things should be.”
Dunlap’s departure comes at a seeming inflection point for the ABMS program he was tasked with overseeing. Air Force Secretary Frank Kendall has indicated he wants to take a different approach to the program, focusing more on specific operational impacts delivered quickly and less on experiments showing advanced capabilities.
“We can’t invest in everything, and we shouldn’t invest in improvements that don’t have clear operational benefit,” Kendall said March 3 at the AFA Warfare Symposium in Orlando, Fla. “We must be more focused on specific things with measurable value and operational impact.”
As part of that approach, Kendall has made it one of his organizational imperatives to more fully define the goals and impacts the ABMS program is aiming to achieve.
While concerns about AI and ethical violations have become common in companies, turning these anxieties into actionable conversations can be tough. Given the complexities of machine learning, ethics, and their points of intersection, there are no quick fixes, and conversations around these issues can feel nebulous and abstract. Getting to the desired outcomes requires learning to talk about these issues differently. First, companies must decide who needs to be part of these conversations. Then, they should: 1) define their organization’s ethical standards for AI, 2) identify the gaps between where the organization is now and what those standards call for, and 3) understand the complex sources of the problems and operationalize solutions.
Over the past several years, concerns around AI ethics have gone mainstream. The concerns, and the outcomes everyone wants to avoid, are largely agreed upon and well documented. No one wants to push out discriminatory or biased AI. No one wants to be the object of a lawsuit or regulatory investigation for violations of privacy. But once we’ve all agreed that biased, black box, privacy-violating AI is bad, where do we go from here? The question most every senior leader asks is: How do we take action to mitigate those ethical risks?
Acting quickly to address concerns is admirable, but given the complexities of machine learning, ethics, and their points of intersection, there are no quick fixes. To implement, scale, and maintain effective AI ethical risk mitigation strategies, companies should begin with a deep understanding of the problems they’re trying to solve. A challenge, however, is that conversations about AI ethics can feel nebulous. The first step, then, should consist of learning how to talk about the topic in concrete, actionable ways. Here’s how you can set the table to have AI ethics conversations in a way that makes next steps clear.
Who Needs to Be Involved?
We recommend assembling a senior-level working group that is responsible for driving AI ethics in your organization. Its members should have the right skills, experience, and knowledge so that the conversations are well informed about business needs, technical capacities, and operational know-how. At a minimum, we recommend involving four kinds of people: technologists, legal/compliance experts, ethicists, and business leaders who understand the problems you’re trying to solve using AI. Their collective goal is to understand the sources of ethical risks generally, for the industry of which they are members, and for their particular company. After all, there are no good solutions without a deep understanding of the problem itself and the potential obstacles for proposed solutions.
You need the technologist to assess what is technologically feasible, not only at a per product level but also at an organizational level. That is because, in part, various ethical risk mitigation plans require different tech tools and skills. Knowing where your organization is from a technological perspective can be essential to mapping out how to identify and close the biggest gaps.
Legal and compliance experts are there to help ensure that any new risk mitigation plan is compatible and not redundant with existing risk mitigation practices. Legal issues loom particularly large in light of the fact that it’s neither clear how existing laws and regulations bear on new technologies, nor what new regulations or laws are coming down the pipeline.
Ethicists are there to help ensure a systematic and thorough investigation into the ethical and reputational risks you should attend to, not only by virtue of developing and procuring AI, but also those risks that are particular to your industry and/or your organization. Their importance is particularly relevant because compliance with outdated regulations does not ensure the ethical and reputational safety of your organization.
Finally, business leaders should help ensure that all risk is mitigated in a way that is compatible with business necessities and goals. Zero risk is an impossibility so long as anyone does anything. But unnecessary risk is a threat to the bottom line, and risk mitigation strategies also should be chosen with an eye towards what is economically feasible.
Three Conversations to Push Things Forward
Once the team is in place, here are three crucial conversations to have. One conversation concerns coming to a shared understanding of what goals an AI ethical risk program should be striving for. The second conversation concerns identifying gaps between where the organization is now and where it wants to be. The third conversation is aimed at understanding the sources of those gaps so that they are comprehensively and effectively addressed.
1) Define your organization’s ethical standard for AI.
Any conversation should recognize that legal compliance (e.g. anti-discrimination law) and regulatory compliance (with, say, GDPR and/or CCPA) are table stakes. The question to address is: Given that the set of ethical risks is not identical to the set of legal/regulatory risks, what do we identify as the ethical risks for our industry/organization and where do we stand on them?
There are a lot of tough questions that need answers here. For instance, what, by your organization’s lights, counts as a discriminatory model? Suppose, for instance, your AI hiring software discriminates against women but it discriminates less than they’ve been historically discriminated against. Is your benchmark for sufficiently unbiased “better than humans have done in the last 10 years”? Or is there some other benchmark you think is appropriate? Those in the self-driving car sector know this question well: “Do we deploy self-driving cars at scale when they are better than the average human driver or when they are at least as good as (or better than) our best human drivers?”
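To make the benchmark question concrete, here is a minimal sketch in Python of how a team might compare a model’s selection rates across groups against a chosen baseline. The toy data, group labels, baseline ratio, and both thresholds are hypothetical illustrations, not recommendations from the authors.

```python
# Illustrative only: turning "is the model less biased than the historical baseline?"
# into a number. The data, groups, and thresholds below are hypothetical.

def selection_rate(decisions, groups, group):
    """Fraction of applicants in `group` receiving a positive decision (1 = hired)."""
    total = sum(1 for g in groups if g == group)
    hired = sum(1 for d, g in zip(decisions, groups) if g == group and d == 1)
    return hired / total if total else 0.0

def disparate_impact_ratio(decisions, groups, protected="women", reference="men"):
    """Ratio of selection rates; 1.0 means parity, lower values disadvantage the protected group."""
    return (selection_rate(decisions, groups, protected) /
            selection_rate(decisions, groups, reference))

# Hypothetical model outputs on ten applicants.
model_decisions  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]
applicant_groups = ["men", "women", "men", "men", "women",
                    "women", "men", "women", "men", "women"]

model_ratio = disparate_impact_ratio(model_decisions, applicant_groups)
historical_ratio = 0.62   # assumed baseline measured from past human hiring decisions

print(f"model ratio: {model_ratio:.2f}")
print("better than the historical human baseline:", model_ratio > historical_ratio)
print("meets a stricter four-fifths-style benchmark:", model_ratio >= 0.8)
```

Whichever benchmark the organization picks, writing it down as an explicit check like this makes the standard something product teams can actually test against rather than debate case by case.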
Similar questions arise in the context of black box models. Where does your organization stand on explainability? Are there cases in which you find using a black box acceptable (e.g. so long as it tests well against your chosen benchmark)? What are the criteria for determining whether an AI with explainable outputs is otiose, a nice-to-have, or a need-to-have?
Going deep on these questions allows you to develop frameworks and tools for your product teams and the executives who green light deployment of the product. For instance, you may decide that every product must go through an ethical risk due diligence process before being deployed or even at the earliest stages of product design. You may also settle on guidelines regarding when, if at any time, black box models may be used. Getting to a point where you can articulate what the minimum ethical standards are that all AI must meet is a good sign that progress has been made. They are also important for gaining the trust of customers and clients, and they demonstrate your due diligence has been performed should regulators investigate whether your organization has deployed a discriminatory model.
2) Identify the gaps between where you are now and what your standards call for.
There are various technical “solutions” or “fixes” to AI ethics problems. A number of software products from big tech to startups to non-profits help data scientists apply quantitative metrics of fairness to their model outputs. Tools like LIME and SHAP aid data scientists in explaining how outputs are arrived at in the first place. But virtually no one thinks these technical solutions, or any technological solution for that matter, will sufficiently mitigate the ethical risk and transform your organization into one that meets its AI ethics standards.
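As a small, hedged example of the kind of tooling referred to above, the sketch below runs SHAP’s tree explainer over a toy model. The synthetic dataset and model choice are assumptions made for illustration; they are not tied to any specific product, and they only show what per-feature attributions look like, not a complete fairness or explainability program.

```python
# Illustrative only: a minimal SHAP workflow on a synthetic model, assuming the
# shap, xgboost, scikit-learn, and numpy packages are installed.
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

# Toy classification problem standing in for a real decision model.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = xgboost.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)    # attributions for tree ensembles
shap_values = explainer.shap_values(X)   # one contribution per feature per prediction

# Rank features by average absolute contribution across the dataset.
mean_impact = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(mean_impact)[::-1]:
    print(f"feature_{idx}: mean |SHAP| = {mean_impact[idx]:.3f}")
```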
Your AI ethics team should determine where those tools’ respective limits lie and how the team’s skills and knowledge can complement them. This means asking:
What, exactly, is the risk we’re trying to mitigate?
How does software/quantitative analysis help us mitigate that risk?
What gaps do the software/quantitative analyses leave?
What kinds of qualitative assessments do we need to make, when do we need to make them, on what basis do we make them, and who should make them, so that those gaps are appropriately filled?
These conversations should also include a crucial piece that is typically left out: what level of technological maturity is needed to satisfy (some) ethical demands (e.g., whether you have the technological capacity to provide the explanations that are needed in the context of deep neural networks). Having productive conversations about what AI ethical risk management goals are achievable requires keeping an eye on what is technologically feasible for your organization.
Answers to these questions can provide clear guidance on next steps: assessing what quantitative solutions can be dovetailed with product teams’ existing practices, assessing the organization’s capacity for the qualitative assessments, and assessing how, in your organization, these things can be married effectively and seamlessly.
3) Understand the complex sources of the problems and operationalize solutions.
Many conversations around bias in AI start with giving examples and immediately talking about “biased data sets.” Sometimes this will slide into talk about “implicit bias” or “unconscious bias,” which are terms borrowed from psychology that lack a clear and direct application to “biased data sets.” But it’s not enough to say, “the models are trained on biased data sets” or “the AI reflects our historical societal discriminatory actions and policies.”
The issue isn’t that these things aren’t (sometimes, often) true; it’s that they cannot be the whole picture. Understanding bias in AI requires, for instance, talking about the various sources of discriminatory outputs. Those can be the result of the training data; but how, exactly, those data sets can be biased is important, if for no other reason than that how they are biased informs how you determine the optimal bias-mitigation strategy. Other issues abound: how inputs are weighted, where thresholds are set, and what objective function is chosen. In short, the conversation around discriminatory algorithms has to go deep around the sources of the problem and how those sources connect to various risk-mitigation strategies.
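To illustrate just one of those non-data sources, the toy sketch below shows how the choice of decision threshold alone can change selection rates between two groups whose model scores differ slightly. The score distributions and thresholds are invented for illustration and are not drawn from the authors.

```python
# Illustrative only: bias can enter through where a threshold is set, not just the data.
# The score distributions and thresholds below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.normal(0.60, 0.15, 10_000)   # group A's model scores
scores_b = rng.normal(0.55, 0.15, 10_000)   # group B's scores, slightly lower on average

for threshold in (0.50, 0.55, 0.60, 0.65):
    rate_a = (scores_a >= threshold).mean()
    rate_b = (scores_b >= threshold).mean()
    print(f"threshold={threshold:.2f}  "
          f"selected A={rate_a:.1%}  B={rate_b:.1%}  gap={rate_a - rate_b:.1%}")
```

The point of the toy is simply that a team debating "biased data" while ignoring threshold placement, input weighting, or the objective function will miss mitigation levers it actually controls.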
. . .
Productive conversations on ethics should go deeper than the broad-stroke examples decried by specialists and non-specialists alike. Your organization needs the right people at the table so that its standards can be defined and deepened. Your organization should fruitfully marry its quantitative and qualitative approaches to ethical risk mitigation so it can close the gaps between where it is now and where it wants to be. And it should appreciate the complexity of the sources of its AI ethical risks. At the end of the day, AI ethical risk isn’t nebulous or theoretical. It’s concrete. And it deserves and requires a level of attention that goes well beyond the repetition of scary headlines.
Reid Blackman, Ph.D., is the author of “Ethical Machines: Your concise guide to totally unbiased, transparent, and respectful AI” (Harvard Business Review Press, July 2022) and Founder and CEO of Virtue, an ethical risk consultancy. He is also a Senior Advisor to the Deloitte AI Institute, previously served on Ernst & Young’s AI Advisory Board, and volunteers as the Chief Ethics Officer to the non-profit Government Blockchain Association. Previously, Reid was a professor of philosophy at Colgate University and the University of North Carolina, Chapel Hill.
Beena Ammanath is the Executive Director of the global Deloitte AI Institute, author of the book “Trustworthy AI,” founder of the non-profit Humans For AI, and also leads Trustworthy and Ethical Tech for Deloitte. She is an award-winning senior executive with extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains with companies such as HPE, GE, Thomson Reuters, British Telecom, Bank of America, and e*trade.
Our monthly Marine Corps Systems Command Tech Talk for Industry and the Acquisition community is 3 days away!
Hosted by the CTO of MARCORSYSCOM, Luis E. Velazquez.
Date and Time: Thursday (4/21) @ 1300
Location: #QuanticoCyberHub – 1010 Corporate Dr., Stafford, Va 22554
The #MARCORSYSCOM Tech Talk is open to everyone, and we strongly encourage all from #government and #industry to attend and see the technologies the United States Marine Corps is exploring to enhance the warfighting capabilities of our troops.
This month will highlight the technologies from the following organizations:
– Marine Corps Systems Command
– Cubic Mission & Performance Solutions
– 4 Horsemen Solutions™
– Epic Games
– Defense Innovation Unit (DIU)
– Perspective
– CORAS
– Checkmarx
Matthew Weaver | ♦️Amanda S. | Joel Scharlat | Kaleb Hunter | Dr Aaron J Miller | Jonathan Payton | John Barker | Matthew Wiltshire | Cesar Nader (USMC, Ret.) | Brooke B. | Luis E. Velazquez | Jourdan Davis | Gabriella Mathieu
And with that, 14 new military hospital and clinic commands across Waves BRAGG and HOOD deployed MHS GENESIS, the new federal electronic health record.
MHS GENESIS replaces several Department of Defense legacy health care systems supporting the availability of electronic health records at 66 military hospital and clinic commands across the United States.
The latest waves added hospitals and clinics at Ft. Bragg, Seymour Johnson Air Force Base, Marine Corps Base Camp Lejeune, Marine Corps Air Station Cherry Point, Second Dental Battalion, Ft. Hood, Ft. Sill, Ft. Polk, Barksdale AFB, Little Rock AFB, Tinker AFB, Altus AFB, Vance AFB, and Naval Air Station Belle Chasse.
This new electronic health record will integrate beneficiary data across healthcare teams in the DOD, the Department of Veterans Affairs and the Department of Homeland Security’s U.S. Coast Guard, as all are using the same system.
“On Saturday morning a mother in labor arrived at our Labor, Delivery and Postpartum department, we had a patient in our mixed medical surgical department and normal weekend trauma and emergencies gave our staff the opportunity to use the system live in real time,” he said. “As we prepared for “go live” we also invited patients to come through our outpatient clinics. We had patients arrive at different departments for periodic health assessments, knee pain, immunizations, pharmacy and lab work. It went pretty smoothly minus a few bumps in the road which our information management department immediately fixed.”
At the Carl R. Darnall Army Medical Center at Fort Hood in Texas, Arrington, a clinical workflow analyst, added her take on the new system. “[MHS] GENESIS is putting every system together, and staff members will not have to switch between multiple programs, which will make it easy for hospital staff to serve our beneficiaries better.”
“We trained over 4,000 users within the hospital, outlying clinics, and operational forces over a five-month period. We received millions of dollars of new equipment–enough to fill up a warehouse–and issued it to thousands of users across Fort Bragg,” Jarvis said. “The training of personnel and administration of new equipment alone, required thousands of manpower hours and an inordinate amount of coordination and resources.”
Army Lt. Col. Daniel Cash, deputy commander for clinical services at Bayne-Jones Army Community Hospital (BJACH), explained how rebuilding patient records is the biggest challenge in the deployment of MHS GENESIS.
“Not all information from the legacy systems is pulled over into MHS GENESIS so we are updating medical records at each patient’s initial visit post ‘go live’,” he said. “Patients and beneficiaries should come to their first appointments prepared to give a little historical background on their medical history in order to populate the new system.”
MHS GENESIS enables the application of standardized workflows, integrated healthcare delivery, and data standards for the improved and secure electronic exchange of medical and patient data.
By the end of next year, MHS GENESIS will be deployed across the entire enterprise, serving all 9.6 million beneficiaries.
Many organizations have started their journey towards edge computing to take advantage of data produced at the edge. The definition of edge computing is quite broad. Simply stated, it is moving compute power physically closer to where data is generated, usually an edge device or IoT sensor.
This encompasses far edge scenarios like mobile devices and smart sensors, as well as more near edge use cases like micro-data centers and remote office computing. In fact, this definition is so broad that it is often talked about as anything outside of the cloud or main data center.
With such a wide variety of use cases, it is important to understand the different types of edge computing and how they are being used by organizations today.
Provider edge
The provider edge is a network of computing resources accessed via the Internet. It is mainly used for delivering services from telcos, service providers, media companies, or other content delivery network (CDN) operators. Examples of use cases include content delivery, online gaming, and AI as a service (AIaaS).
One key example of the provider edge that is expected to grow rapidly is augmented reality (AR) and virtual reality (VR). Service providers want to find ways to deliver these use cases, commonly known as eXtended Reality (XR), from the cloud to end user edge systems.
In late 2021, Google partnered with NVIDIA to deliver high-quality XR streaming from NVIDIA RTX-powered servers on Google Cloud to lightweight mobile XR displays. By using NVIDIA CloudXR to stream from the provider edge, users can securely access data from the cloud at any time and easily share high-fidelity, full-graphics immersive XR experiences with other teams or customers.
Enterprise edge
The enterprise edge is an extension of the enterprise data center, consisting of things like data centers at remote office sites, micro-data centers, or even racks of servers sitting in a compute closet on a factory floor. This environment is generally owned and operated by IT as they would a traditional centralized data center, though there may be space or power limitations at the enterprise edge that change the design of these environments.
Figure 1. Enterprises across all industries use edge AI to drive more intelligent use cases on site.
Looking at examples of the enterprise edge, you can see workloads like intelligent warehouses and fulfillment centers. Improved efficiency and automation of these environments requires robust information, data, and operational technologies to enable AI solutions like real-time product recognition.
Kinetic Vision helps customers build AI for these enterprise edge environments using a digital twin, or photorealistic virtual version, of a fulfillment or distribution center to train and optimize a classification model that is then deployed in the real world. This powers faster, more agile product inspections and order fulfillment.
Industrial edge
The industrial edge, sometimes called the far edge, generally has smaller compute instances that can be one or two small, ruggedized servers or even embedded systems deployed outside of any sort of data center environment.
Industrial edge use cases include robotics, autonomous checkout, smart city capabilities like traffic control, and intelligent devices. These use cases run entirely outside of the normal data center structure, which means there are a number of unique challenges for space, cooling, security, and management.
BMW is leading the way at the industrial edge by adopting robotics to redefine its factory logistics. Different robots handle different parts of the process: they take boxes of raw parts from the line and transport them to shelves to await production, move them to manufacturing, and finally return the empty boxes to the supply area.
Robotics use cases require compute power both in the autonomous machine itself and in compute systems that sit on the factory floor. To optimize the efficiency and accelerate deployment of these solutions, NVIDIA introduced the NVIDIA Isaac Autonomous Mobile Robot (AMR) platform.
Accelerating edge computing
Each of these edge computing scenarios has different requirements, benefits, and deployment challenges. To understand whether your use case would benefit from edge computing, download the Considerations for Deploying AI at the Edge whitepaper.
Sign up for Edge AI News to stay up to date with the latest trends, customer use cases, and technical walkthroughs.
Like zero trust, the cybersecurity mesh re-envisions the perimeter at the identity layer and centers upon unifying disparate security tools into a single, interoperable ecosystem.
The past two years’ events have taught us all just how important it is to stay agile and flexible. We’ve experienced a more challenging threat landscape as well as expanding attack surfaces, driven by accelerated cloud transformation, the dissolution of traditional corporate network perimeters, and increasingly distributed workforces. As a result, there’s growing interest in security strategies that emphasize controls spanning widely distributed assets, including multicloud ecosystems.
One such strategy that’s currently generating quite a bit of buzz is cybersecurity mesh architecture (CSMA).
The term “cybersecurity mesh” was coined by analyst firm Gartner, which called CSMA one of the top strategic technology trends of 2022. Gartner defines cybersecurity mesh architecture as a “common, broad and unified approach … [that] extend[s] security beyond enterprise perimeters.” In Gartner’s view, CSMA focuses on composability, scalability, and interoperability to create a collaborative ecosystem of security tools. Somewhat optimistically, Gartner predicts that “organizations adopting a cybersecurity mesh architecture to integrate security tools to work as a cooperative ecosystem will reduce the financial impact of individual security incidents by an average of 90% by 2024.”
Like zero trust, the cybersecurity mesh model is well suited for today’s cloud applications and workloads since it re-envisions the perimeter at the identity layer and centers upon unifying disparate security tools into a holistic, interoperable ecosystem.
The emphasis on composability, scalability, and interoperability means that CSMA can move security teams from managing fragmented, individually configured services to deploying best-of-breed solutions that work together to mature the organization’s security posture. To achieve this end, though, multiple vendors will need to adopt open, standards-based approaches to interoperability.
As the concept of CSMA becomes more and more popular, however, questions remain. Will organizations invest in zero trust and CSMA side by side as they advance along the path to modernization? Both approaches are, after all, complementary, and building a resilient CSMA will enable an organization to achieve zero trust objectives. And do enough best-of-breed solutions exist that can integrate successfully to deliver the outcomes enterprises want from CSMA in the real world?
The cybersecurity mesh concept rests on assumptions about how widely available truly composable security services really are. These solutions are architected to scale in an agile fashion through an API-first approach, enabling flexibility and management across multicloud ecosystems. CSMA also calls for common frameworks for everything from analytics to threat intelligence, along with security controls that can communicate via APIs.
An effective mesh architecture will also demand stronger, centralized policy management and governance. It’ll be essential to orchestrate better least-privilege access policies, which organizations can achieve by using a centralized policy management engine in conjunction with distributed enforcement. Security leaders must apply artificial intelligence/machine learning-based policies at the identity layer and extend these policies across the entirety of the access path — from device or endpoint to workload or application — to create integrated security out of an array of individual components.
Although CSMA remains more of a concept than a concrete architecture at this point, there are three ways security leaders can begin thinking about how to derive value from it.
Look to Deploy Composable Cybersecurity Technologies
On average, every large organization runs 47 different cybersecurity tools within its environment, leaving security teams to spend unsustainable amounts of time and effort managing complex, unwieldy integrations. By taking an API-first and standards-based approach, organizations can make everything a service. This way, security tools can talk to one another, sharing context and risk intelligence.
While open standards have seen increased adoption in many other areas of IT, the cybersecurity industry has lagged behind. Stakeholders across the industry need to work together to ensure that risk, identity context, usage, and other telemetry are easily consumable across different solutions. This way, for instance, the secure email gateway can “talk” to the network firewall, and both can inform authentication decisions.
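To make that idea concrete, below is a hedged sketch of what a shared risk-signal format might look like; the field names, threshold, and consuming logic are assumptions for illustration, not an existing standard.

```python
# Hypothetical sketch: a shared, JSON-serializable risk-signal schema that any
# security tool (email gateway, firewall, IdP) could publish or consume.
# Field names and thresholds are illustrative, not a published standard.
import json
from dataclasses import dataclass, asdict


@dataclass
class RiskSignal:
    source: str        # which tool observed the event
    subject: str       # user or device identity the signal is about
    category: str      # e.g. "phishing", "malware", "anomalous-login"
    score: float       # normalized 0.0 (benign) to 1.0 (critical)


def publish(signal: RiskSignal) -> str:
    """Serialize the signal so other tools can consume it over an API or bus."""
    return json.dumps(asdict(signal))


def should_step_up_auth(signals: list[RiskSignal], threshold: float = 0.7) -> bool:
    """An identity provider could require MFA when aggregated risk is high."""
    return max((s.score for s in signals), default=0.0) >= threshold


if __name__ == "__main__":
    gateway_signal = RiskSignal("email-gateway", "alice@example.com", "phishing", 0.85)
    print(publish(gateway_signal))
    print("step-up auth required:", should_step_up_auth([gateway_signal]))
```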
Centralize Policy Management Across All Your Security Tools
This isn’t simple. It will take a concerted effort to consolidate all security policies, including identity and access policies, in your environment and additional work to streamline this across multiple security tools. You’ll need to incorporate a central policy engine that can decide whether to grant, deny, or revoke access to resources for entities across the organization. And you’ll need to ensure that your organization administers and enforces these policies across every device and resource in the environment, no matter how widely distributed they may be.
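Purely to illustrate the grant/deny/revoke idea, the sketch below shows a single central decision function that distributed enforcement points (a VPN gateway, an app proxy, an identity provider) might all call; the roles, resources, and risk threshold are invented for the example.

```python
# Hypothetical sketch of a central policy decision point consulted by
# distributed enforcement points. Roles, resources, and the risk threshold
# are illustrative only.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    GRANT = "grant"
    DENY = "deny"
    REVOKE = "revoke"


@dataclass
class AccessRequest:
    subject: str       # user or workload identity
    resource: str      # resource being accessed
    role: str          # role asserted for the subject
    risk_score: float  # aggregated risk, e.g. from shared telemetry


# Central policy: which roles may reach which resources.
POLICY = {
    "hr-database": {"hr-analyst", "hr-admin"},
    "build-server": {"engineer"},
}


def decide(request: AccessRequest, session_active: bool = False) -> Decision:
    """Single decision function that every enforcement point calls."""
    allowed_roles = POLICY.get(request.resource, set())
    if request.risk_score >= 0.8:
        # High risk: cut off even an already-granted session.
        return Decision.REVOKE if session_active else Decision.DENY
    if request.role in allowed_roles:
        return Decision.GRANT
    return Decision.DENY


if __name__ == "__main__":
    req = AccessRequest("alice@example.com", "hr-database", "hr-analyst", 0.2)
    print(decide(req))   # Decision.GRANT
```

Because every enforcement point defers to the same decision function, a policy change or a revocation takes effect everywhere at once rather than tool by tool.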
Establish KPIs and Track Them
This is the only way to ensure that the components of your CSMA genuinely work well together and deliver the intended results. Your organization should identify which metrics are essential to track and report, while keeping in mind that there may be multiple levels of KPIs to address. For example, a CISO may wish to report specific KPIs at the board level to show that the CSMA strategy is impacting business outcomes, while individual teams will need to measure separate KPIs to assess security posture and overall cyber resiliency.
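As a rough example of those two KPI levels, the sketch below computes a board-level view and a team-level view from the same set of incident records; every field and metric name here is a placeholder chosen for illustration.

```python
# Hypothetical sketch: compute a few CSMA-related KPIs from incident records.
# The record fields and chosen metrics are assumptions for illustration.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Incident:
    detected_minutes: float    # time from compromise to detection
    contained_minutes: float   # time from detection to containment
    tools_involved: int        # how many integrated tools contributed context


def board_level_kpis(incidents: list[Incident]) -> dict:
    """High-level numbers a CISO might report to the board."""
    return {
        "incident_count": len(incidents),
        "mean_time_to_detect_min": mean(i.detected_minutes for i in incidents),
        "mean_time_to_contain_min": mean(i.contained_minutes for i in incidents),
    }


def team_level_kpis(incidents: list[Incident]) -> dict:
    """Operational numbers a security team might track for itself."""
    return {
        "avg_tools_sharing_context": mean(i.tools_involved for i in incidents),
        "pct_contained_under_1h": 100 * sum(
            1 for i in incidents if i.contained_minutes <= 60
        ) / len(incidents),
    }


if __name__ == "__main__":
    history = [Incident(42, 55, 3), Incident(120, 30, 5), Incident(15, 75, 2)]
    print(board_level_kpis(history))
    print(team_level_kpis(history))
```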
Current trends like the large-scale adoption of remote work and increasing reliance on hybrid and multicloud infrastructures won’t reverse themselves anytime soon. To meet the modern enterprise’s ever-growing requirements for agility, security leaders must carefully examine their existing security infrastructure to find opportunities to bring previously siloed solutions together. Whether this will become known as CSMA or simply “enhanced interoperability and efficiency” in the months and years to come remains to be seen, but the need is pressing.