Shortly after its central bank completed its first CBDC tests, Turkey announced a blockchain-based digital identity application.
Turkey plans to use blockchain technology during the login process for online public services. E-Devlet, Turkey’s digital government portal used to access a wide range of public services, will use a blockchain-based digital identity to verify Turkish citizens during login.
Fuat Oktay, the vice president of Turkey, announced during the Digital Turkey 2023 event that citizens will be able to use blockchain-based digital identity to access e-wallet applications, Cointelegraph Turkey reported.
Oktay called the blockchain-based application a revolution for e-government efforts, adding that online services will be more secure and accessible with blockchain. Users will be able to keep their digital information on their mobile phones.
“With the login system that will work within the scope of the e-wallet application, our citizens will be able to enter the e-Devlet with a digital identity created in the blockchain network,” the vice president said.
As of January 2020, Turkey’s cultural hub of Konya was developing a “City Coin” project to be used by citizens to pay for public services, but no further updates have been shared with the public in the last two years.
A look at OWASP’s Software Assurance Maturity Model (SAMM)
By Chris Hughes
We’ve spoken a fair bit about secure software development and efforts to improve the overall security maturity of an organization’s applications and software. In this article, we will look in particular at OWASP’s SAMM.
We’ve discussed emerging requirements in the U.S. Federal sector such as the Office of Management and Budget (OMB) calling for third-party software vendors to self-attest to aligning with NIST’s Secure Software Development Framework (SSDF).
As covered in that article, SSDF utilizes a myriad of existing industry sources, such as OWASP’s ASVS, BSIMM and OWASP’s SAMM when it comes to its specific practices, tasks and examples for secure software development.
For an example of the synergy between NIST’s SSDF and OWASP SAMM, let’s take a look at a specific SSDF practice, such as “Identify and Confirm Vulnerabilities on an Ongoing Basis” (RV.1) in the Respond to Vulnerabilities group. Among its references, it cites OWASP SAMM IM1-A, IM2-B and EH1-B.
Looking at models such as Building Security in Maturity Model (BSIMM) or Software Assurance Maturity Model (SAMM) can be very effective ways to document the maturity of a supplier’s software program or even internal software development efforts across organizational development teams.
When it comes to third-party software vendors, a self-attestation of adherence should be taken cautiously, as it isn’t concrete evidence of true alignment with requirements. Consider the DoD Defense Industrial Base (DIB), which has seen several years of notable security incidents among vendors that had historically self-attested to NIST 800-171 security controls. That track record has led to the push for third-party attestation through the emerging Cybersecurity Maturity Model Certification (CMMC) framework, much like FedRAMP’s use of a 3PAO.
However, as we discussed in our article on FedRAMP, 3PAO compliance schemes come with their own challenges, most notably scalability and severely limiting the body of vendors an organization or industry gets access to, which isn’t always desirable.
This is also where other artifacts, such as SBOMs, which show the software component inventory of applications and their associated vulnerabilities, can be valuable, along with traditional technical measures such as vulnerability scanning and penetration testing.
All of that aside, the focus of this article is OWASP’s SAMM, so let’s dive in.
OWASP SAMM
OWASP SAMM (https://owasp.org/www-project-samm/), known as OpenSAMM in prior versions, aims to help organizations formulate and implement strategies for software security. The project cites four key ways it can help organizations:
Evaluate an organization’s existing software security practices
Build a balanced software security assurance program in well-defined iterations
Demonstrate concrete improvements to a security assurance program
Define and measure security-related activities throughout an organization
BSIMM is also widely used, but as it is a proprietary model, we will opt to examine open-source approaches, such as OWASP’s SAMM.
SAMM defines five business functions: Governance, Design, Implementation, Verification and Operations. Each business function contains three security practices, and each security practice involves two streams of activities that complement and build upon one another. Since SAMM is an open model, it can be used internally to assess an organization, or by a third party.
Like all OWASP projects, SAMM is a community-driven effort and aims to be measurable, actionable, and versatile. Unlike BSIMM, SAMM is prescriptive, meaning it prescribes specific actions and practices organizations can take to improve their software assurance. SAMM is, as the name states, a maturity model.
It ranges from levels 1-3 across the security practices it specifies, with an implicit starting point of 0. While SAMM is a maturity model, it does not state that all organizations must achieve the highest level of maturity across all practices. Maturity requirements and goals will depend on the organization’s resources, compliance requirements, and mission sets. Each “Stream” has a corresponding maturity level ranging from 1-3, with higher levels building on lower maturity levels and activities.
Let’s dive into some of SAMM’s business functions and their associated security practices.
Governance
The first business function is Governance, which focuses on the processes and activities related to how an organization manages its software development activities. The practices involved are Strategy & Metrics, Policy & Compliance, and Education & Guidance.
This involves creating and promoting strategies and metrics and then measuring and improving them over time. On the policy and compliance front, it involves creating policies and standards and then managing their implementation and adherence across the organization. Underneath Education & Guidance you have streams such as Training and Awareness, and Organization and Culture.
Training and Awareness focuses on improving knowledge around software security among an organization’s various stakeholders, while Organization and Culture is oriented around promoting a culture of security within the organization.
Design
The second business function is Design, which focuses on processes and activities for how organizations create and design software. The security practices include Threat Assessment, Security Requirements and Security Architecture. Threat Assessment focuses on streams such as application risk profiling and threat modeling.
As part of profiling, organizations determine which applications pose serious threats to the organization if compromised. Threat modeling, as we have discussed elsewhere, helps teams understand what is being built, what can go wrong, and how to mitigate those risks.
Security Requirements involves requirements for how software is built and protected as well as requirements for relevant supplier organizations that may be involved in the development context of an organization’s applications, such as outsourced developers. Security Architecture deals with the various components and technologies involved in the architecture design of a firm’s software.
This includes the architecture design to ensure secure design as well as technology management which involves understanding risks associated with the various technologies, frameworks, tools, and integrations that applications use.
Implementation
The third business function is Implementation, which covers how an organization builds and deploys software components and manages their associated defects. The security practices involved are Secure Build, which covers consistently repeatable build processes and accounting of dependencies; Secure Deployment, which increases the security of software deployments to production; and Defect Management, which involves managing security defects in deployed software.
The streams within the Secure Build practice are Build Process and Software Dependencies. Build Process ensures you are running predictable, repeatable, secure build processes. Software Dependencies focuses on ensuring that the security posture of external libraries matches the organization’s requirements and risk tolerance. The Secure Deployment security practice focuses on the final stages of delivering software to production environments and ensuring its integrity and security during that process.
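The Software Dependencies stream described above can be partially automated. Below is a minimal, hypothetical sketch (the package names and pinned versions are invented for illustration) that flags installed packages drifting from an approved baseline; a real program would normally use a dedicated software composition analysis tool rather than a hand-rolled check like this:

```python
# Hypothetical sketch: compare installed dependencies against a pinned
# allow-list and report anything missing or drifted from policy.
from importlib import metadata

# Assumed policy: exact versions the organization has reviewed and approved.
APPROVED = {
    "requests": "2.31.0",
    "urllib3": "2.0.7",
}

def audit_dependencies(approved: dict) -> list:
    """Return human-readable findings for packages that drift from policy."""
    findings = []
    for name, pinned in approved.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            findings.append(f"{name}: not installed")
            continue
        if installed != pinned:
            findings.append(f"{name}: installed {installed}, approved {pinned}")
    return findings

if __name__ == "__main__":
    for finding in audit_dependencies(APPROVED):
        print(finding)
```

An empty findings list means every approved dependency is present at its reviewed version; anything else is a policy deviation worth investigating.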
The streams associated with this practice are the Deployment Process and Secrets Management. The deployment process ensures organizations have a repeatable and consistent deployment process to push software artifacts to production as well as the requisite test environments.
Secrets Management is focused on the proper handling of sensitive data such as credentials, API keys and other secrets, which can be abused by malicious actors to compromise environments and systems involved in software development. Defect Management is the last security practice in this business function and focuses on collecting, recording, and analyzing software security defects to make data-driven decisions.
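As a minimal illustration of the secrets-handling discipline described above, the sketch below (the key name `EXAMPLE_API_KEY` is a hypothetical placeholder) reads a credential from the environment, where a vault agent or CI secret store would inject it, and fails fast rather than falling back to a hardcoded value:

```python
import os

class MissingSecretError(RuntimeError):
    """Raised when a required secret is absent from the environment."""

def get_secret(name: str) -> str:
    # Secrets are injected at runtime (e.g., by a vault agent or a CI
    # secret store), never committed to source control or baked into images.
    value = os.environ.get(name)
    if not value:
        raise MissingSecretError(f"required secret {name!r} is not set")
    return value

# Example usage with a hypothetical key name:
# api_key = get_secret("EXAMPLE_API_KEY")
```

Failing loudly on a missing secret surfaces misconfiguration at startup, instead of letting an application run with an empty or default credential.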
The streams involved include defect tracking and metrics and feedback. Both involve managing the collection and follow-up of defects as well as driving improvement of security through these activities.
Verification
The Verification business function is the processes and activities for how organizations check and test artifacts throughout software development. The security practices associated with verification are architecture assessment, requirements-driven testing, and security testing.
Architecture assessment validates the security and compliance of the software and its supporting architecture, while requirements-driven testing and security testing use artifacts such as user stories to detect and resolve security issues through automation. Architecture assessment involves two streams: validation and mitigation.
This means validating that security objectives and requirements are met in the supporting architecture, and mitigating identified threats in the existing architecture. The testing streams under these practices ensure organizations perform activities such as misuse/abuse testing, using methods such as fuzzing to identify functionality that can be abused to attack an application.
Security testing involves both a broad baseline of automated testing and deeper, manual testing of high-risk components and complex attack vectors that automated testing cannot cover.
Operations
The Operations business function ensures the Confidentiality, Integrity and Availability of applications and their associated data is maintained throughout their lifecycle, including in runtime environments.
Security practices include incident, environment, and operational management. Going further, streams encompass various areas such as incident detection and response as well as configuration hardening and patching.
Lastly, Operational Management ensures that data protection occurs throughout the lifecycle of creation, handling, storage, and processing, and that legacy management retires end-of-life services and software so they are no longer actively deployed or supported. This reduces organizations’ attack surface and removes potentially vulnerable components from systems and applications.
By utilizing SAMM and covering the various Business Functions, Security Practices and Streams, organizations can get more assurance around their application security maturity, and the same goes for their software consumers who benefit from software suppliers maturing their software development practices.
OWASP provides a set of useful resources for organizations looking to use SAMM, such as a How-To Guide, Quick-Start Guide and a SAMM Tool Box. If you’re interested in digging deeper into the practices and associated details, be sure to check out the web or PDF versions of the full SAMM model. This will help you understand each Business Function, Security Practice, their associated Streams and Maturity Levels.
For example, the OWASP SAMM Toolkit provides a structured worksheet where organizations can capture and document their existing OWASP SAMM maturity levels across the various Business Functions and Security Practices and produce correlating scores to see how they measure up and where they have gaps they want to address.
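To make the worksheet idea concrete, here is a small, hypothetical sketch of the kind of arithmetic such a scoring sheet performs: each stream gets a maturity level from 0-3, stream levels average up to a practice score, and practice scores average up to a business-function score. The sample answers are invented, and this is not the official toolkit’s format:

```python
from statistics import mean

# Assessment results: {business function: {security practice: {stream: level}}}
# Levels are 0-3, matching SAMM's implicit 0 starting point and levels 1-3.
assessment = {
    "Governance": {
        "Strategy & Metrics": {"A": 2, "B": 1},
        "Policy & Compliance": {"A": 1, "B": 1},
        "Education & Guidance": {"A": 2, "B": 0},
    },
}

def practice_score(streams):
    """Average a practice's stream levels into a practice score."""
    return mean(streams.values())

def function_score(practices):
    """Average practice scores into a business-function score."""
    return mean(practice_score(s) for s in practices.values())

for function, practices in assessment.items():
    print(f"{function}: {function_score(practices):.2f}")
```

Comparing these rolled-up scores against target levels per practice is one straightforward way to surface the gaps an organization wants to address.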
PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels.
They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed.
Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained.
BLOOM was created over the last year by over 1,000 volunteer researchers in a project called BigScience, which was coordinated by AI startup Hugging Face using funding from the French government. It officially launched on July 12. The researchers hope developing an open-access LLM that performs as well as other leading models will lead to long-lasting changes in the culture of AI development and help democratize access to cutting-edge AI technology for researchers around the world.
The model’s ease of access is its biggest selling point. Now that it’s live, anyone can download it and tinker with it free of charge on Hugging Face’s website. Users can pick from a selection of languages and then type in requests for BLOOM to do tasks like writing recipes or poems, translating or summarizing texts, or writing programming code. AI developers can use the model as a foundation to build their own applications.
At 176 billion parameters (variables that determine how input data is transformed into the desired output), it is bigger than OpenAI’s 175-billion-parameter GPT-3, and BigScience claims that it offers similar levels of accuracy and toxicity as other models of the same size. For languages such as Spanish and Arabic, BLOOM is the first large language model of this size.
But even the model’s creators warn it won’t fix the deeply entrenched problems around large language models, including the lack of adequate policies on data governance and privacy and the algorithms’ tendency to spew toxic content, such as racist or sexist language.
Out in the open
Large language models are deep-learning algorithms that are trained on massive amounts of data. They are one of the hottest areas of AI research. Powerful models such as GPT-3 and LaMDA, which produce text that reads as if a human wrote it, have huge potential to change the way we process information online. They can be used as chatbots or to search for information, moderate online content, summarize books, or generate entirely new passages of text based on prompts. But they are also riddled with problems. It takes only a little prodding before these models start producing harmful content.
The models are also extremely exclusive. They need to be trained on massive amounts of data using lots of expensive computing power, which is something only large (and mostly American) technology companies such as Google can afford.
Most big tech companies developing cutting-edge LLMs restrict their use by outsiders and have not released information about the inner workings of their models. This makes it hard to hold them accountable. The secrecy and exclusivity are what the researchers working on BLOOM hope to change.
Meta has already taken steps away from the status quo: in May 2022 the company released its own large language model, Open Pretrained Transformer (OPT-175B), along with its code and a logbook detailing how the model was trained.
But Meta’s model is available only upon request, and it has a license that limits its use to research purposes. Hugging Face goes a step further. The meetings detailing its work over the past year are recorded and uploaded online, and anyone can download the model free of charge and use it for research or to build commercial applications.
A big focus for BigScience was to embed ethical considerations into the model from its inception, instead of treating them as an afterthought. LLMs are trained on tons of data collected by scraping the internet. This can be problematic, because these data sets include lots of personal information and often reflect dangerous biases. The group developed data governance structures specifically for LLMs that should make it clearer what data is being used and who it belongs to, and it sourced different data sets from around the world that weren’t readily available online.
The group is also launching a new Responsible AI License, which is something like a terms-of-service agreement. It is designed to act as a deterrent from using BLOOM in high-risk sectors such as law enforcement or health care, or to harm, deceive, exploit, or impersonate people. The license is an experiment in self-regulating LLMs before laws catch up, says Danish Contractor, an AI researcher who volunteered on the project and co-created the license. But ultimately, there’s nothing stopping anyone from abusing BLOOM.
The project had its own ethical guidelines in place from the very beginning, which worked as guiding principles for the model’s development, says Giada Pistilli, Hugging Face’s ethicist, who drafted BLOOM’s ethical charter. For example, it made a point of recruiting volunteers from diverse backgrounds and locations, ensuring that outsiders can easily reproduce the project’s findings, and releasing its results in the open.
All aboard
This philosophy translates into one major difference between BLOOM and other LLMs available today: the vast number of human languages the model can understand. It can handle 46 of them, including French, Vietnamese, Mandarin, Indonesian, Catalan, 13 Indic languages (such as Hindi), and 20 African languages. Just over 30% of its training data was in English. The model also understands 13 programming languages.
This is highly unusual in the world of large language models, where English dominates. That’s another consequence of the fact that LLMs are built by scraping data off the internet: English is the most commonly used language online.
The reason BLOOM was able to improve on this situation is that the team rallied volunteers from around the world to build suitable data sets in other languages even if those languages weren’t as well represented online. For example, Hugging Face organized workshops with African AI researchers to try to find data sets such as records from local authorities or universities that could be used to train the model on African languages, says Chris Emezue, a Hugging Face intern and a researcher at Masakhane, an organization working on natural-language processing for African languages.
Including so many different languages could be a huge help to AI researchers in poorer countries, who often struggle to get access to natural-language processing because it uses a lot of expensive computing power. BLOOM allows them to skip the expensive part of developing and training the models in order to focus on building applications and fine-tuning the models for tasks in their native languages.
“If you want to include African languages in the future of [natural-language processing] … it’s a very good and important step to include them while training language models,” says Emezue.
Handle with caution
BigScience has done a “phenomenal” job of building a community around BLOOM, and its approach of involving ethics and governance from the beginning is a thoughtful one, says Percy Liang, director of Stanford’s Center for Research on Foundation Models.
However, Liang doesn’t think it will lead to significant changes to LLM development. “OpenAI and Google and Microsoft are still blazing ahead,” he says.
Ultimately, BLOOM is still a large language model, and it still comes with all the associated flaws and risks. Companies such as OpenAI have not released their models or code to the public because, they argue, the sexist and racist language that has gone into them makes them too dangerous to use that way.
BLOOM is also likely to incorporate inaccuracies and biased language, but since everything about the model is out in the open, people will be able to interrogate the model’s strengths and weaknesses, says Margaret Mitchell, an AI researcher and ethicist at Hugging Face.
BigScience’s biggest contribution to AI might end up being not BLOOM itself, but the numerous spinoff research projects its volunteers are getting involved in. For example, such projects could bolster the model’s privacy credentials and come up with ways to use the technology in different fields, such as biomedical research.
“One new large language model is not going to change the course of history,” says Teven Le Scao, a researcher at Hugging Face who co-led BLOOM’s training. “But having one good open language model that people can actually do research on has a strong long-term impact.”
When it comes to the potential harms of LLMs, “Pandora’s box is already wide open,” says Le Scao. “The best you can do is to create the best conditions possible for researchers to study them.”
What is the nature of the critical materials problem?
How did the current rare earth element (REE) supply chain form and what lessons learned are applicable to other critical materials, such as those found in LIB materials?
What are the potential risks of a disruption to the critical material supply chain?
What should be the aim of policies used to prevent or mitigate the effects of shocks to critical material supply chains?
The ongoing coronavirus disease 2019 pandemic and Russian invasion of Ukraine highlight the vulnerabilities of supply chains that lack diversity and are dependent on foreign inputs. This report presents a short, exploratory analysis summarizing the state of critical materials — materials essential to economic and national security — using two case studies and policies available to the U.S. Department of Defense (DoD) to increase the resilience of its supply chains in the face of disruption.
China is the largest producer and processor of rare earth oxides (REOs) worldwide and a key producer of lithium-ion battery (LIB) materials and components. China’s market share of REO extraction has decreased, but it still has a large influence over the downstream supply chain: processing and magnet manufacturing. Chinese market share of the LIB supply chain mirrors REO supply bottlenecks. If it desired, China could effectively cut off 40 to 50 percent of global REO supply, affecting U.S. manufacturers and suppliers of DoD systems and platforms.
Although a deliberate disruption is unlikely, resilience against supply disruption and building domestic competitiveness are important. The authors discuss plausible REO disruption scenarios and their hazards and synthesize insights from a “Day After . . .” exercise and structured interviews with stakeholders to identify available policy options for DoD and the U.S. government to prevent or mitigate the effects of supply disruptions on the defense industrial base (DIB) and broader U.S. economy. They explore these policies’ applicability to another critical material supply chain — LIB materials — and make recommendations for policy goals.
Key Findings
China has used a variety of economic practices to capture a large portion of the REE supply chain. It has used this disproportionate market share to manipulate the availability and pricing of these materials outside China.
Economic coercion by China has usually been executed by denying access to its domestic markets. REOs have been the lone exception: China threatened to restrict access to Chinese exports to pressure U.S. partner nations (namely Japan) toward geopolitical ends.
Projects planned to increase the extraction and processing capacity of REOs fall short of meeting estimated future demand outside China; there is not enough supply outside China to mitigate a disruption event.
China could effectively cut off 40–50 percent of global REO supply, which would affect manufacturers and suppliers of advanced components used in DoD systems and platforms. The DIB has a limited time frame in which to respond to a disruption before industrial readiness suffers.
The potential risks associated with a disruption could affect the broader U.S. economy and the DIB’s ability to procure critical materials, as well as interrupt military operations in some cases.
DoD has a variety of policies available to mitigate the effects of disruption. They can be categorized as proactive or reactive policies, or both. These policies have an effective time to impact, or the time needed for implementation and benefits to materialize.
Policy options used thus far have yielded mixed results.
Recommendations
Proactive policies should aim to diversify critical material supply chains away from Chinese industry by expanding extraction capacity or increasing material recycling efforts.
Proactive efforts should aim to co-locate the upstream and downstream sectors to better leverage industrial efficiencies.
Reactive policies should aim to increase the DIB’s resiliency in the face of supply disruption by reducing its time to recover and increasing its time to survive.
Both proactive and reactive policy options with the longest time to impact should be implemented sooner rather than later to realize benefits.
Policies, planning, and coordination should also aim to reduce the time to impact for both proactive and reactive policies.
Both proactive and reactive policies should leverage U.S. ally and partner capabilities, or build relationships with nontraditional countries, to establish free access to critical materials at a fair market price wherever possible.
Working with nontraditional partners will be necessary because these countries have geographic access to critical materials and extraction capacity. Depending only on traditional allies and partners may divert only part of the supply chain away from Chinese industry.
Chinese disinformation campaigns should be expected in other critical material supply chains. This sector should work with cybersecurity experts and the U.S. intelligence community to educate executives and local governments about risks. Businesses should communicate their plans — and any influence operations underway — to local communities. The intelligence community should educate policymakers, U.S. allies and partners, and the public about the extent of Chinese interference in critical material supply chains.
You would not be reading this if you did not realize that it is important for the Department of Defense (DoD) to get software right. There are two sides to the coin of ubiquitous software for military systems. On one side lies untold headaches—new cyber vulnerabilities in our weapons and supporting systems, long development delays and cost overruns, endless upgrade requirements for software libraries and underlying infrastructure, challenges in modernizing legacy systems, and unexpected and undesirable bugs that emerge even after successful operational testing and evaluation. On the other side lies a vast potential for future capability, with surprising new military capabilities deployed to aircraft during and between sorties, seamless collaboration between military systems from different services and domains, and rich data exchange between allies and partners in pursuit of military goals. This report offers advice to help maximize the benefits and minimize the liabilities of the software-based aspects of acquisition, largely through structuring acquisition to enable rapid changes across diverse software forms.
This report features a narrower focus and more technical depth than typical policy analysis. We believe this detail is necessary to achieve our objectives and reach our target audience. We intend this to be a useful handbook for the DoD acquisition community and, in particular, the program executive officers (PEOs)1 and program managers as they navigate a complex landscape under great pressure to deliver capability in an environment of strategic competition. All of the 83 major defense acquisition programs and the many smaller acquisition category II and III efforts that make up the other 65 percent of defense investment scattered across the 3,112 program, project, and activity (PPA) line items found in the president’s budget request now include some software activity by our accounting.2 We would be thrilled if a larger community—contracting officers, industry executives, academics, engineers and programmers, policy analysts, legislators, staff, and operational military service members—also gleaned insight from this document. But we know that some terms may come across as jargon and that not everyone is familiar with the names of common software development tools or methods. We encourage them to read this nonetheless and are confident that the core principles and insights we present are still accessible to a broader audience.
While other recent analyses have focused on the imperative for software and offered high-level visions for a better future,3 we believe most of the acquisition community already recognizes the potential of a digital future and is engaged in a more tactical set of battles and decisions: how to structure their organizations, how to manage expectations, how to structure their deliverables, and how to write solicitations and let contracts meet requirements and strive for useful outcomes. We attempt to present background and principles that can assist them in navigating this complex landscape.
The real motivation for getting military software right is not to make the DoD more like commercial industry through digital modernization. Instead, it is to create a set of competitive advantages for the United States that stem from the embrace of distributed decision-making and mission command. The strategic context of this work stems from the observation that advantage in future military contests is less likely to come from the mere presence of robotics or artificial intelligence in military systems and is more likely to come from using these (and other) components effectively as part of a force design, and specifically from maintaining sufficient adaptability in the creation and implementation of tactics. Software, software systems, and information processing systems (including artificial intelligence and machine learning, or AI/ML) that leverage software-generated data are the critical ingredients in future force design, force employment, tactics generation, and execution monitoring. Software will bind together human decision-makers and automation support in future conflict.
This report aims to accomplish two goals:
Elucidate a set of principles that can help the acquisition community navigate the software world—a turbulent sea of buzzwords, technological change, bureaucratic processes, and external pressures.
Recruit this community to apply energy to a handful of areas that will enable better decisions and faster actions largely built around an alternative set of processes for evolutionary development.
The report is structured in chapters, each built around a core insight. Chapter 1 notes that the speed and ubiquity of digital information systems is driving military operations ever closer to capability development, and that this trend brings promise for capability and peril for bureaucratic practices.
Chapter 2 offers the military framing for software development, pointing out that it can enable greater adaptability in force employment and that the DoD cannot derive future concepts of more distributed forces and greater disaggregation of capability from requirements and specifications alone. Instead, software should enable employment tactics to evolve, especially as the elements of future combat are outside the control of any one program manager or office.
Chapter 3 introduces a framing analogy between software delivery and logistics. As logistics practitioners recognize, there is no one-size-fits-all solution to the problem of moving physical goods, but a set of principles that helps navigate many gradients of logistics systems. Similarly, the world of software is full of diversity—different use cases, computing environments, and security levels—and this heterogeneity is an intrinsic feature that the DoD needs to embrace.
Chapter 4 introduces the essential tool of the modern program office—the software factory—the tooling and processes via which operational software is created and delivered. All existing operational software was made and delivered somehow and, whether we like it or not, we will have to update it in the future. Thus, the production process matters. In many ways, the most important thing a PEO can do is identify, establish, or source an effective software factory suited to the particular needs of their unique deliverables.
Chapter 5 seeks to break down the ideas introduced earlier into actionable atomic principles that the DoD can use to navigate the tough decisions that the acquisition community faces; we refer to these ideas as ACTS. To aid the reader in understanding these, we also provide a fictitious vignette.
We recognize the inherent tension between making a report compact and accessible and covering the myriad facets of a broad and fast-changing scene. To balance this, we believe that we have focused this report on the most impactful aspects of software acquisition.
Most contractors the Department of Defense hired in the last five years failed to meet the required minimum cybersecurity standards, posing a significant risk to U.S. national security.
Managed service vendor CyberSheath on Nov. 30 released a report showing that 87% of the Pentagon supply chain fails to meet basic cybersecurity minimums. Those security gaps are subjecting sizeable prime defense contractors and their subcontractors to cyberattacks from a range of threat actors, putting U.S. national security at risk.
Those risks have been well known for some time, with little done to fix them. This independent study of the Defense Industrial Base (DIB) is the first to show that federal contractors are not properly securing military secrets, according to CyberSheath.
The DIB is a complex supply chain composed of 300,000 primes and subcontractors. The government allows these approved companies to share sensitive files and communicate securely to get their work done.
Defense contractors will soon be required to meet Cybersecurity Maturity Model Certification (CMMC) compliance to keep those secrets safe. Meanwhile, the report warns that nation-state hackers are actively and specifically targeting these contractors with sophisticated cyberattack campaigns.
“Awarding contracts to federal contractors without first validating their cybersecurity controls has been a complete failure,” Eric Noonan, CEO at CyberSheath, told TechNewsWorld.
Defense contractors have been mandated to meet cybersecurity compliance requirements for more than five years. Those conditions are embedded in more than one million contracts, he added.
Dangerous Details
The Merrill Research Report 2022, commissioned by CyberSheath, revealed that 87% of federal contractors have a sub-70 Supplier Performance Risk System (SPRS) score. The metric shows how well a contractor meets Defense Federal Acquisition Regulation Supplement (DFARS) requirements.
DFARS has been law since 2017 and requires a score of 110 for full compliance. Critics of the system have anecdotally deemed 70 to be “good enough.” Even so, the overwhelming majority of contractors still come up short.
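The scoring mechanics can be illustrated with a minimal sketch. Under the DoD Assessment Methodology for NIST SP 800-171, an assessment starts from a perfect score of 110 and subtracts a weighted deduction for each unimplemented control; the control identifiers and weights below are illustrative examples, not an actual assessment.

```python
# Illustrative sketch of SPRS-style scoring: start from a perfect
# score of 110 and subtract a weighted deduction for each
# unimplemented NIST SP 800-171 control. Control IDs and weights
# below are hypothetical; real assessments can go negative.
PERFECT_SCORE = 110

def sprs_score(unimplemented_controls: dict[str, int]) -> int:
    """Return the assessment score, given {control_id: deduction}
    for every control that is NOT implemented."""
    return PERFECT_SCORE - sum(unimplemented_controls.values())

# A hypothetical contractor missing MFA (5 pts) and FIPS-validated
# cryptography (3 pts) scores well below full compliance:
gaps = {"3.5.3-multifactor-auth": 5, "3.13.11-fips-crypto": 3}
print(sprs_score(gaps))  # 102
print(sprs_score({}))    # 110: full compliance
```

A sub-70 score, the threshold the article cites, therefore implies many weighted controls left unimplemented at once.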
“The report’s findings show a clear and present danger to our national security,” said Eric Noonan. “We often hear about the dangers of supply chains that are susceptible to cyberattacks.”
The DIB is the Pentagon’s supply chain, and we see how woefully unprepared contractors are despite being in threat actors’ crosshairs, he continued.
“Our military secrets are not safe, and there is an urgent need to improve the state of cybersecurity for this group, which often does not meet even the most basic cybersecurity requirements,” warned Noonan.
More Report Findings
The survey data came from 300 U.S.-based DoD contractors, with accuracy tested at the 95% confidence level. The study was completed in July and August 2022, with CMMC 2.0 on the horizon.
Roughly 80% of DIB users failed to monitor their computer systems around the clock and lacked U.S.-based security monitoring services. Other deficiencies were evident in the following categories that will be required to achieve CMMC compliance:
80% lack a vulnerability management solution
79% lack a comprehensive multi-factor authentication (MFA) system
73% lack an endpoint detection and response (EDR) solution
70% have not deployed security information and event management (SIEM)
These security controls are legally required of the DIB, and since they are not met, there is a significant risk facing the DoD and its ability to conduct armed defense. In addition to being largely non-compliant, 82% of contractors find it “moderately to extremely difficult to understand the governmental regulations on cybersecurity.”
Confusion Rampant Among Contractors
Some defense contractors across the DIB have focused on cybersecurity only to be stalled by obstacles, according to the report.
When asked to rate DFARS reporting challenges on a scale from one to 10 (with 10 being extremely challenging), about 60% of all respondents rated “understanding requirements” a seven out of 10 or higher. Also high on the list of challenges were routine documentation and reporting.
The primary obstacles contractors listed are challenges in understanding the necessary steps to achieve compliance, the difficulty with implementing sustainable CMMC policies and procedures, and the overall cost involved.
Unfortunately, those results closely paralleled what CyberSheath expected, admitted Noonan. He noted that the research confirmed that even fundamental cybersecurity measures like multi-factor authentication had been largely ignored.
“This research, combined with the False Claims Act case against defense giant Aerojet Rocketdyne, shows that both large and small defense contractors are not meeting contractual obligations for cybersecurity and that the DoD has systemic risk throughout their supply chain,” Noonan said.
No Big Surprise
Noonan believes the DoD has long known that the defense industry is not addressing cybersecurity. News reporting of seemingly never-ending nation-state breaches of defense contractors, including large-scale incidents like the SolarWinds and False Claims Act cases, proves that point.
“I also believe the DoD has run out of patience after giving contractors years to address the problem. Only now is the DoD going to make cybersecurity a pillar of contract acquisition,” said Noonan.
He noted the planned new DoD principle would be “No cybersecurity, no contract.”
Noonan admitted that some of the struggles that contractors voiced about difficulties in understanding and meeting cyber requirements have merit.
“It is a fair point because some of the messaging from the government has been inconsistent. In reality, though, the requirements have not changed since about 2017,” he offered.
What’s Next
Perhaps the DoD will pursue a get-tougher policy with contractors. If contractors complied with what the law required in 2017, the entire supply chain would be in a much better place today. Despite some communication challenges, the DoD has been incredibly consistent on what is required for defense contractor cybersecurity, Noonan added.
The current research now sits atop a mountain of evidence that proves federal contractors have a lot of work to do to improve cybersecurity. It is clear that work will not be done without enforcement from the federal government.
“Trust without verification failed, and now the DoD appears to be moving to enforce verification,” he said.
DoD Response
TechNewsWorld submitted written questions to the DoD about the supply chain criticism in the CyberSheath report. A spokesperson for CYBER/IT/DOD CIO for the Department of Defense replied, stating that it would take a few days to dig into the issues. We will update this story with any response we receive.
Update: Dec. 9, 2022 – 3:20 PM PT DoD Spokesperson and U.S. Navy Commander Jessica McNulty provided this response to TechNewsWorld:
CyberSheath is a company that has been evaluated by the Cyber Accreditation Body (Cyber AB) and met the requirements to become a Registered Practitioner Organization, qualified to advise and assist Defense Industrial Base (DIB) companies with implementing CMMC. The Cyber AB is a 501(c)(3) that authorizes and accredits third-party companies conducting assessments of companies within the DIB, according to U.S. Navy Commander Jessica McNulty, a Department of Defense spokesperson.
McNulty confirmed that the DoD is aware of this report and its findings. The DoD has not taken any action to validate the findings, nor does the agency endorse this report, she said.
However, the report and its findings are generally not inconsistent with other prior reports (such as the DoD Inspector General’s Audit of Protection of DoD Controlled Unclassified Information on Contractor-Owned Networks and Systems (ref. DODIG-2019-105) or with results of compliance assessments performed by the DoD, as allowed/required by DFARS clause 252.204-7020 (when applicable), she noted.
“Upholding adequate cybersecurity standards, such as those defined by the National Institute of Standards and Technology (NIST) and levied as contractual requirements through application of DFARS 252.204-7012, is of the utmost importance for protecting DoD’s controlled unclassified information. DoD has long recognized that a mechanism is needed to assess the degree to which contract performers comply with these standards, rather than taking it on faith that the standards are met,” McNulty told TechNewsWorld.
For this reason, the DoD’s Cybersecurity Maturity Model Certification (CMMC) program was initiated, and the DoD is working to codify its requirements in part 32 of the Code of Federal Regulations, she added.
“Once implemented, CMMC assessment requirements will be levied as pre-award requirements, where appropriate, to ensure that DoD contracts are awarded to companies that do, in fact, comply with underlying cybersecurity requirements,” McNulty concluded.
Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open-source technologies. He is an esteemed reviewer of Linux distros and other open-source software. In addition, Jack extensively covers business technology and privacy issues, as well as developments in e-commerce and consumer electronics.
WASHINGTON — Without working software, the F-35 stealth fighter is a trillion-dollar lawn ornament.
Called “a computer that happens to fly” by one former Air Force chief, with algorithms running everything from basic flight controls to long-range targeting, the F-35 runs on eight million lines of code. That’s actually less than a late-model luxury car like the 2020 Mercedes S-Class, which has over 30 million lines, notes a forthcoming report from a national security think tank, the Hudson Institute.
Yet, co-authors Jason Weiss and Dan Patt told Breaking Defense that even as private-sector software surges ahead, a Pentagon bureaucracy built to handle industrial-age hardware still struggles to get all the fighter’s code to work.
And that’s in peacetime. What if war broke out tomorrow — say a desperate Vladimir Putin lashes out at NATO, or Xi Jinping decides to seize Taiwan — and the stress of combat reveals some unexpected glitch? Imagine, for instance, that enemy anti-aircraft radars reveal a new wartime-only mode that doesn’t register in the F-35’s threat-detection algorithms. In that nightmare scenario, every day without a software update means people die.
But the Pentagon bureaucracy has simply not kept pace with the evolution of military hardware from heavy metal to software-dependent systems, let alone with lightning progress in the private sector. As a result, program managers cannot update warfighting software in everything from command posts to combat vehicles at the speed that will be required in a fast-moving future conflict, according to an exclusive preview of a Hudson Institute report to be published later today.
Cover of the forthcoming Hudson Institute report. (Hudson Institute graphic, courtesy of the authors)
What’s needed, Weiss and Patt argue, is a system that updates software so rapidly, implementing lessons learned and enabling new tactics, that commanders can “shift on demand with the ease of sliding a finger across the capacitive glass screen of a mobile phone.”
When done right, rapid software updates can save lives. In their war with Russia, for example, Ukrainian troops have come to depend on Elon Musk’s Starlink satellite network. So Russian electronic warfare jammed its transmissions — for a few hours, until SpaceX updated its software to bypass the interference. “When Starlink came under attack in the Ukraine, they didn’t deploy new satellites at a speed of months or years,” said Weiss, a Navy veteran and entrepreneur who served as the Pentagon’s chief software officer. “They merely reconfigured the behavior of the software, in hours.”
But even hours may be too slow sometimes, Weiss and Patt went on in a preview of their report for Breaking Defense. When reconfiguring a well-designed, zero-trust cybersecurity system to stop a newly detected intrusion, they said, or changing how a drone shares data with Low-Earth Orbit satellites, changes can happen “in minutes.”
As the armed services seek to coordinate their disparate forces through an all-encompassing, AI-enabled meta-network, rapid software updates become even more important. What’s called Joint All-Domain Command & Control (JADC2) aims to pass targeting data so quickly from “sensors” to “shooters” that the time between target detected and shots fired is only seconds. In a battle between two such highly networked militaries trying to outguess each other — China is developing what it calls an “informationized” force of its own — the winner may be the one that updates its code the quickest.
Jason Weiss (courtesy of the author)
Unfortunately, the Defense Department mostly handles major software updates the same way it handles hardware procurements: through a time-honored, time-consuming Planning, Programming, Budgeting, and Execution (PPBE) process that locks in major decisions eight years in advance. (The annual Future Years Defense Plan, or FYDP, sets spending levels five years out, but it takes two years for the Pentagon to build a FYDP and another year for Congress to review, amend, and enact it).
“It is impossible to predict out seven or eight years where software technology will be,” Weiss said. What’s worse, the formal requirements for how a piece of equipment must perform — covering everything from armor protection on a tank to cybersecurity in software — can be locked in a decade or more before it’s delivered to actual troops.
So, the report snarks, “the program manager must defend program execution to an obsolete baseline, an obsolete requirement, and an obsolete prediction of the future.”
Dan Patt (Hudson Institute photo)
Here lies what Weiss and Patt call Pentagon procurement’s “original sin”: a “predilection for prediction.” Projecting eight years out is workable, sort of, for old-school, industrial-age, heavy-metal hardware. It really does take years to build physical prototypes, field-test them, set up assembly lines, and ramp up mass production, and the pace of all that happening is predictable. Even America’s “arsenal of democracy” in World War II, a miracle of industrial mobilization, was only possible because far-sighted planners got to work years before Pearl Harbor, and it still took, for example, until 1944 to build enough landing craft for D-Day.
But software update cycles spin much faster — if you set up the proper architecture. That requires what techies call a “software factory,” but it’s really “a process,” the report says, “that provides software development teams with a repeatable, well-defined path to create and update production software.” If you have a team of competent coders, if they can get rapid feedback from users on what works and what glitches, and if they can push out software patches rapidly over a long-range wireless network, then you can write upgraded code ASAP and push it out over the network quickly at negligible marginal cost.
The Pentagon has some promising factories already, Weiss and Patt point out, like the Air Force’s famous Kessel Run. “Over the last few years, the number of cleverly named software factories across the DoD has rapidly grown to at least 30 today,” they write. “Others across the Air Force, then across each of the other services, and finally across the DoD’s fourth estate quickly followed the trail that Kessel Run blazed.”
But how does DoD scale up these promising start-ups to the massive scale required for major war?
The cycle of developing and updating weapons increasingly resembles the Development-Operations (DevOps) cycle used for software. (Hudson Institute graphic, courtesy of the authors)
Cycles Converge: DevOps And The OODA Loop
What’s crucial is that connection between coders and users, between the people who develop the technology and the people who operate it. The faster you go, the more you automate the exchange of data, the more the boundary between these two groups dissolves. The industry calls this fusion “DevOps” – developmental operations – or sometimes “DevSecOps” – emphasizing the need for cybersecurity updates to be part of the same cycle.
Cybersecurity updates are a crucial part of the cycle because software is under constant attack, even on civilian products as innocuous as baby monitors. Coders need diagnostic data in near-real-time on the latest hack.
Military equipment, however, comes under all sorts of other attacks as well: not just cyber, but also electronic warfare (jamming and spoofing), and of course physical attack. Even a new form of camouflage or decoy — like the sun-warmed pans of water the Serbs used to draw NATO infrared sensors away from their tanks in 1999 — is a form of “attack” that military systems must take into account.
The need for constant adaptation to lessons from combat — of measure, countermeasure, and counter-countermeasures — is as old as warfare itself. Alexander the Great drilled his pikemen to outmaneuver Persia’s scythed chariots, Edward III combined longbowmen and dismounted knights to crush French heavy cavalry, and a peasant visionary named Joan of Arc convinced aristocratic French commanders to listen to common-born cannoneers about how to bring down castle walls.
Historically, however, it was much quicker to adapt your tactics than to update your technology. In 1940, for instance, when the Germans realized their anti-tank shells just bounced off the heavy armor of many French and British tanks, Rommel’s immediate response was to repurpose existing 88 mm anti-aircraft guns, which were lethal but highly vulnerable to enemy fire. It took two years to deploy an 88 mm cannon on an armored vehicle, the notorious Tiger.
The German 88 mm anti-aircraft gun proved equally capable as an anti-tank weapon. (Bundesarchiv photo)
Col. John Boyd called this process of adaptation the OODA loop: A combatant must Observe, Orient, Decide, and Act — then observe the results of their action, re-orient themselves to the changed situation, decide what to do next, and take further action. Boyd, a fighter pilot, developed the OODA concept to describe how American mental quickness won dogfights in Korea against technologically superior MiGs. But military theorists soon applied the OODA loop to larger scales of combat. In essence, conflicts consist of nested OODA loops, with speed decreasing as size increases: a sergeant might change his squad’s scheme of fire in seconds, a major might redeploy his battalion in hours, a general might bring up supplies and reinforcements over days and weeks, a government might train new troops in months and develop new weapons over years.
But with the software-driven weapons of the 21st century, such as the F-35, these different OODA loops begin to merge. When you can push out a software patch in hours or minutes, technological updates, once the slowest kind of military change, become part of tactical adaption — the fastest. The OODA loop becomes a DevOps cycle, where the latest combat experience drives rapid changes in technology.
“Exercise of military capability bears increasing resemblance to DevOps cycle,” Weiss and Patt write in their report. “The speed and ubiquity of digital information systems is driving military operations ever closer to capability development… This trend brings promise for capability and peril for bureaucratic practices.”
So how does the Pentagon overcome that peril and get its bureaucracy up to speed?
Six principles, or “Acquisition Competency Targets,” for Pentagon software development (Hudson Institute graphic, courtesy of the authors)
Acquisition Reform For The Digital Age
First, Weiss and Patt warn, do no harm: “Reforms” that impose some top-down, one-size-fits-all standard will just make software acquisition worse. So, instead of trying to merge “redundant” software development teams, toolkits, and platforms, as some reformers have proposed, they argue the Pentagon needs to build a zoo of diverse approaches, able to handle problems ranging from financial management algorithms to F-35 code.
Future warfare is so complex, involving such a wide array of different systems working together — from submarines to jets, tanks to satellites, all increasingly connected by what may one day evolve into a JADC2 meta-network — that no “single contractor or … program office” can solve the problem on its own, the report says. Likewise, it argues, “modern software is no longer some monolithic thing. It exists in no singular repository… It is no longer built from a singular programming language [and] is rarely designed for a singular type of hardware.”
In fact, Weiss and Patt calculate that, looking at all the possible choices from what programming language to use, what contract structure, what mix of government and contractor coders, to whether to optimize to run on tablets or desktops or to assume cloud-based connectivity or enemy jamming, etcetera ad (nearly) infinitum, “there are more than six billion combinations.”
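That kind of combination count comes from simple multiplication: if each design decision is independent, the sizes of the option sets multiply. The dimensions and option counts below are invented for illustration and are not the report’s actual enumeration.

```python
import math

# How independent acquisition choices multiply. Every dimension and
# option count here is hypothetical, chosen only to show the shape of
# the arithmetic behind a ">6 billion combinations" style of claim.
dimensions = {
    "programming language": 20,
    "contract structure": 8,
    "gov/contractor staffing mix": 5,
    "target hardware": 10,
    "deployment environment": 6,
    "connectivity assumption": 4,
    "security classification": 5,
    "release cadence": 6,
    "hosting model": 4,
    "data rights regime": 5,
}
total = math.prod(dimensions.values())
print(f"{total:,}")  # 115,200,000 from just ten dimensions; a few
                     # more dimensions push the product past billions
```

The takeaway matches the report’s argument: with the product growing multiplicatively, no single one-size-fits-all prescription can cover the decision space.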
Yet certain principles still apply across the board, they say.
First of all, “the essential tool of the modern program office” is a software factory. Again, that’s not a physical place, but a process and a team, bringing together coders, users, the latest operational data, and a capability to push out rapid updates. While program managers should use existing software factories where possible, rather than reinvent the wheel, the sheer variety of different programs means the Pentagon must maintain a variety of different ones.
A “digital twin” of AFRL’s Gray Wolf prototype cruise missile. (Air Force Research Laboratory photo)
What’s more, and more controversially, Weiss and Patt argue that the government shouldn’t simply contract these factories out, nor try to run them entirely in-house. Instead, they recommend a hybrid called Government-Owned, Contractor-Operated. This GOCO model, already in widespread use for everything from ammunition plants to nuclear labs, allows the government to keep the lights on and sustain software support, even as workloads and profit margins ebb and flow, but still draw on contractor talent to ramp up rapidly when needed, for instance, during a war.
The Pentagon should also make extensive use of a new, streamlined process called the Software Pathway (SWP), created by then-acquisition undersecretary Ellen Lord in 2019. Some programs may exist entirely as SWPs, but even hardware-heavy programs like Army vehicles can use SWP for design and simulation functions such as digital twins. “The traditional acquisition process can execute in parallel for the bent metal and hardware aspects,” Weiss said, “[but] every new program should employ the SWP for its software needs.”
The full report goes into six broad principles and dozens of specific recommendations for everything from recruiting talent to defining requirements. But the ultimate message is simple: The key to victory is adaptability, the secret to adaptability is software, and how the Pentagon procures it needs to change.
The U.S. government’s mandates around the creation and delivery of SBOMs (software bills of materials) to help mitigate supply chain attacks have run into strong objections from big-name technology vendors.
A lobbying outfit representing big tech is calling on the federal government’s Office of Management and Budget (OMB) to “discourage agencies” from requiring SBOMs, arguing that “it is premature and of limited utility” for vendors to accurately provide a nested inventory of the ingredients that make up software components.
The trade group, called ITI (Information Technology Industry Council), counts Amazon, Microsoft, Apple, Intel, AMD, Lenovo, IBM, Cisco, Samsung, TSMC, Qualcomm, Zoom and Palo Alto Networks among its prominent members.
In a recent letter to the OMB, the group argues that SBOMs are not currently scalable or consumable.
“We recognize and appreciate the value of flexibility built into the OMB process. Given the current level of (im-)maturity, we believe that SBOMs are not suitable contract requirements yet. The SBOM conversation needs more time to move towards a place where standardized SBOMs are scalable for all software categories and can be consumed by agencies,” the ITI letter read.
“At this time, it is premature and of limited utility for software producers to provide an SBOM. We ask that OMB discourage agencies from requiring artifacts until there is a greater understanding of how they ought to be provided and until agencies are ready to consume the artifacts that they request,” the group added.
At its core, an SBOM is meant to be a definitive record of the supply chain relationships between components used when building a software product. It is a machine-readable document that lists all components in a product, including all open source software, much like the mandatory ingredient list seen on food packaging.
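As a concrete sketch, a minimal SBOM in the style of the CycloneDX JSON format might look like the following. The component names and versions are hypothetical, and real SBOMs generated by tooling also carry hashes, licenses, and dependency relationships.

```python
import json

# A minimal, CycloneDX-style SBOM: a machine-readable "ingredient
# list" naming each component and version in a product. The two
# components below are hypothetical examples, not a real inventory.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.7",
         "purl": "pkg:generic/openssl@3.0.7"},
        {"type": "library", "name": "zlib", "version": "1.2.13",
         "purl": "pkg:generic/zlib@1.2.13"},
    ],
}
print(json.dumps(sbom, indent=2))
```

Because the document is structured data rather than prose, a consumer can mechanically match each listed component against a vulnerability database when a new flaw is disclosed.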
The National Telecommunications and Information Administration (NTIA) has been busy issuing technical documentation, corralling industry feedback, and proposing the use of existing formats for the creation, distribution and enforcement of SBOMs.
In their objections, the big vendors are adamant that SBOMs are not yet suitable contract requirements. “Currently available industry tools create SBOMs of varying degrees of complexity, quality, completeness. The presence of multiple, at times inconsistent or even contradictory, efforts suggests a lacking maturity of SBOMs,” the group said.
The ITI letter cautioned that this is evident in a series of practical challenges related to implementation, including naming, identification, scalability, delivery and access, the linking to vulnerability information, as well as the applicability to cloud services, platforms and legacy software.
“These challenges make it difficult to effectively deploy and utilize SBOMs as a tool to foster transparency. The SBOM conversation needs more time to mature and move towards a place where SBOMs are scalable and consumable,” the group added.
The tech vendors also flagged concerns around the security of sensitive proprietary information that may be collected via SBOMs and held by federal agencies and called for clarifications around the definition of artifacts and what protections will be afforded to safeguard sensitive information.
The U.S. Commerce Department’s NTIA has been out front advocating for SBOMs with a wide range of new documentation including:
SBOM at a glance – an introduction to the practice of SBOM, supporting literature, and the pivotal role SBOMs play in providing much-needed transparency for the software supply chain.
A detailed FAQ document that outlines information, benefits, and commonly asked questions.
A two-page overview that provides high-level information on SBOM’s background and ecosystem-wide solution, the NTIA process, and an example of an SBOM.
Separately, the open-source Linux Foundation has released a batch of new industry research, training, and tools aimed at accelerating the use of a Software Bill of Materials (SBOM) in secure software development.
Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. Ryan is a veteran cybersecurity strategist who has built security engagement programs at major global brands, including Intel Corp., Bishop Fox and GReAT. He is a co-founder of Threatpost and the global SAS conference series. Ryan’s past career as a security journalist included bylines at major technology publications including Ziff Davis eWEEK, CBS Interactive’s ZDNet, PCMag and PC World. Ryan is a director of the Security Tinkerers non-profit, an advisor to early-stage entrepreneurs, and a regular speaker at security conferences around the world. Follow Ryan on Twitter @ryanaraine.
Who remembers when floppy disks provided a new level of capability for the Department of Defense to support the operations of strategic forces across the world?
It was only in the last few years that DoD retired those 8-inch floppies in favor of modern computer capabilities, and the push is on to accelerate modernization of systems that rely on older technology in order to address modern threats.
This accumulation of older software and infrastructure, known as technical debt, takes a substantial amount of budget and resources to maintain, which puts pressure on new innovation required for enterprise functions such as cybersecurity.
The Pentagon recognizes the impact of technical debt on protecting systems and data from cyberattacks. It intends to deploy a zero trust strategy across the whole department by 2027. By assuming every application, network, connection, and user can become a threat, a zero trust framework validates access through control policies for specific functions – a significant advancement from perimeter-based security that can allow unrestricted access once an attacker gets inside the network.
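The contrast with perimeter-based security can be sketched in a few lines: under zero trust, every request is evaluated against explicit policy for the specific function requested, and originating “inside” the network grants nothing by itself. All names and rules below are hypothetical.

```python
# Toy illustration of zero-trust access control: default-deny,
# per-function policy, no implicit trust for internal traffic.
# Function names, roles, and rules are all hypothetical.
POLICIES = {
    # function -> role and device posture required to invoke it
    "read_logistics_data": {"role": "analyst", "device_compliant": True},
    "push_software_update": {"role": "release_engineer", "device_compliant": True},
}

def authorize(request: dict) -> bool:
    policy = POLICIES.get(request["function"])
    if policy is None:
        return False  # default-deny: unknown functions are refused
    return (request["role"] == policy["role"]
            and request["device_compliant"] == policy["device_compliant"])

# Coming from the internal network does not help an under-privileged user:
print(authorize({"function": "push_software_update", "role": "analyst",
                 "device_compliant": True, "network": "internal"}))  # False
print(authorize({"function": "read_logistics_data", "role": "analyst",
                 "device_compliant": True}))  # True
```

A perimeter model would effectively replace `authorize` with a single check on the `network` field, which is exactly the unrestricted interior access the article describes.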
Technical Debt Hinders Cybersecurity
The Biden Administration’s executive order on cybersecurity in May 2021 jump started the movement toward a zero trust architecture. It called for modernized cyber defenses, improved information sharing and stronger responses to attacks, all of which start with zero trust and depend on modern technologies such as identity management, cloud, artificial intelligence, machine learning and data analytics.
To accelerate this implementation of zero trust, DoD must continue to pay down its technical debt.
The Pentagon can look to the Department of Labor as a model for balancing legacy system needs and innovation. As CIO Gundeep Ahluwalia explained in a recent interview, “When I joined the department six years ago, we invested only 10% of funds in modernization and development. Now we allocate 25% of our overall funds, and ideally this will increase to 40% in the near future for modernization and development. Modernization is a continuum, and this investment is essentially paying down that technical debt while preventing it from building up again.”
Technical debt has taken on new importance with its reference in the National Defense Authorization Act of 2022, which directs the DoD to study it and make recommendations on its impact on software-intensive systems. DoD CIO John Sherman recently emphasized the connection between technical debt and zero trust as a cyber defense, and he confirmed the department’s commitment to addressing it as adversarial threats increase.
The scope of the technical debt problem is difficult to calculate, but if historical patterns hold, roughly 75 percent of the federal IT budget goes to operations and maintenance of legacy systems. That is older technology that is not prepared or hardened for today’s cyberattacks.
Technology That Stays Ahead of Evolving Threats
There are ways to surround legacy systems with newer, resilient technologies and provide a layered defense of mission critical applications and data.
Identity management, for example, establishes a digital identity, while authentication confirms that identity to allow permission-based access to networks, applications, and data. Moreover, operating within a zero trust environment prevents users from gaining access unless they have the appropriate authentication and authorization.
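The distinction between authentication (confirming the identity) and authorization (granting permission to that identity) can be made concrete with a short sketch. The user name, password, and permission strings here are hypothetical, and the credential store is a stand-in for a real identity management system.

```python
import hashlib
import hmac
import os

def _hash(password: str, salt: bytes) -> bytes:
    # Salted key derivation; real systems would use a managed identity
    # provider rather than an in-memory table like this one.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

SALT = os.urandom(16)

# Hypothetical identity store (established by identity management)
# and permission table (consulted during authorization).
USERS = {"analyst1": _hash("correct horse", SALT)}
PERMISSIONS = {"analyst1": {"read:threat-feed"}}

def authenticate(user: str, password: str) -> bool:
    """Authentication: confirm the claimed identity against stored credentials."""
    stored = USERS.get(user)
    return stored is not None and hmac.compare_digest(stored, _hash(password, SALT))

def authorize(user: str, permission: str) -> bool:
    """Authorization: check a specific permission for an already-verified identity."""
    return permission in PERMISSIONS.get(user, set())
```

Keeping the two steps separate is what allows a zero trust environment to deny access when either one fails: a valid password without the needed permission, or the right permission claimed by an unverified identity, both result in denial.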
The sheer amount of data from cyberattacks requires innovations in artificial intelligence, machine learning and analytics so that data quickly gets aggregated and filtered, patterns are detected, and threats are elevated for further review.
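The aggregate-filter-elevate pipeline described above can be illustrated with a deliberately simple example: counting failed logins per source and elevating sources that cross a threshold. The event data and threshold are hypothetical; production analytics would apply far richer models, but the shape of the pipeline is the same.

```python
from collections import Counter

# Hypothetical event stream: (source_ip, event_type) pairs as they
# might arrive from aggregated sensor logs.
events = [
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("192.168.1.9", "login_failed"),
    ("192.168.1.9", "login_ok"),
]

def elevate_threats(events, threshold=3):
    """Aggregate failures per source, filter out the noise, and elevate
    sources past the threshold for analyst review: the simplest form
    of pattern detection over security event data."""
    failures = Counter(ip for ip, kind in events if kind == "login_failed")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

print(elevate_threats(events))  # ['10.0.0.5']
```

The point is the funnel: raw events in, a short prioritized list out, so human analysts review only what the automated layer has elevated.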
Because the Pentagon is such a large target for cyberattacks, the Defense Department needs these technologies and a zero trust methodology to phase out both aging technology and manual processes, which will shrink its attack surface and fend off intrusions.
While perimeter security has become standard for many networks, the executive order on cybersecurity and the DoD’s strategy for zero trust represent the government’s intention to change that. It will require more than technology to nullify existing threats and prevent unknown ones from becoming attacks.
For DoD to reach its full potential for cybersecurity, it must instill a shift in thinking that goes beyond perimeter defenses. Users need to embrace requirements such as multiple logins for enterprise-wide zero trust as well as information sharing that allows for better threat detection and response.
As the Pentagon tackles its technical debt to improve cybersecurity and overall asset protection, it can take the opportunity to renew users’ passion for the mission and good cyber behavior.
Kynan Carver is Defense Cybersecurity Lead at Maximus, an IT services management company focused on the federal government.