How not to deploy securely (or possibly at all).
Walter Haydock
“We’re at the point where the federal government simply can’t bear the risk of buying insecure software anymore.”
– Jeff Greene, acting Senior Director for Cybersecurity at the National Security Council (NSC), April 2021.
“Security should be table stakes at the end of the day.”
– Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA), December 2021.
“You should not have to pay extra for security, I’m sorry, that is immoral for companies [to charge for]…I’d love to see an executive order that any cloud product that is bought by a federal agency has to support [multi factor authentication], [single sign on] and basic audit in the most base paid package.”
– Alex Stamos, member of the CISA Cybersecurity Advisory Committee, December 2021.
“The FTC intends to use its full legal authority to pursue companies that fail to take reasonable steps to protect consumer data from exposure as a result of Log4j, or similar known vulnerabilities [sic] in the future.”
– Unsigned Federal Trade Commission (FTC) blog post, January 2022.
Given how behind the curve the United States federal government has been in terms of information technology broadly, and cybersecurity in particular, these comments might seem like steps in the right direction. Unfortunately, and in contrast to other observers, I’m going to say that they are not.
The problem with the attitudes expressed above is that they make no acknowledgment of any sort of risk/reward tradeoff being necessary. Furthermore, they all allude to some (apparently universally accepted) standard of “security” below which no organization may reasonably operate. Finally, despite this haziness, a regulatory agency is threatening legal action against those who don’t comply with this unclear mandate.
Don’t get me wrong; I write a blog about cybersecurity, and, if all other things are equal, more security is better. Unfortunately, all other things are rarely equal, and tradeoffs and sacrifices are invariably necessary. As I wrote in my first post, organizations exist to deliver value, not to simply be secure. Thus, cybersecurity advisors should strive to communicate with precision about the risks and rewards of various courses of action so that business (or mission, in the case of the government) leaders can make the best-informed decisions possible.
None of the above comments suggest even an acknowledgment of this dynamic, and at least to me, these types of attitudes have become even more common in government circles recently. Furthermore, these statements represent broad platitudes that make taking decisive action more, rather than less, difficult.
Unclear risk tolerances
Although I have previously made recommendations for how the government should communicate about cyber risk, I have yet to see any comprehensive standard that gives either its employees or its contractors a clear way to understand which levels of risk are acceptable and which are not. The vast majority of government-mandated requirements, such as FedRAMP and the Cybersecurity Maturity Model Certification (CMMC), focus on controls rather than the likelihood and severity of potential adverse outcomes. Even these control requirements are often counterproductive, and the only risk-based standards that I have seen are vague and not actionable.
Thus, the only way I can interpret the first two quotations above is that any chance of a malicious party impacting data confidentiality, integrity, or availability represents an unacceptable risk and that the government should pull the plug on any system presenting such a risk.
Obviously, this is an extreme position, as America – and much of the world – would rapidly descend into chaos if this happened. Thus, the statements from Greene and Easterly do not appear to provide any value for those implementing the government’s cybersecurity programs. Furthermore, they set the tone that “no risk is an acceptable risk,” suggesting that anyone on the front lines who accepts such a risk to accomplish the mission will be hung out to dry. This creates a paradoxical double bind that is common to many bureaucracies.
Something for nothing
To Stamos’ point, he seems to also imply that federal contractors should not be able to make certain types of tradeoff analyses when designing and selling their software. He further suggests that vendors who sell software to the government should only be allowed to market a product that meets his stated requirements, regardless of the resulting cost to the taxpayer or even whether the real-world use case requires that level of security. Additionally, he does not seem to acknowledge any second-order consequences of such a policy, such as essentially conceding all federal software business to established players who can “check all the boxes.” I am nearly certain that these contractors would simply raise their prices for their base offerings accordingly, eliminating the ability for individual mission owners to select a cheaper option when it is “secure enough.” Such a proposed executive order would also likely exclude from consideration any startups that have a very narrowly focused product with game-changing capabilities that can deliver massive value, but don’t have the enterprise features Stamos demands.
Again, I’m all for multi-factor authentication and auditing features. The thing is, though, no matter how much one might wish it to be the case, these things are not free to develop and maintain. Implementing this functionality requires time, money, and manpower. There always exists a range of other initiatives to which software companies – or the government – might dedicate these resources, and weighing the available choices is the only logical course of action for those entrusted by the taxpayers to protect them and their data.
You better watch out…
The final statement – by the FTC – is perhaps the most alarming one, as it is clearly driven by one or more government staffers reading the headlines of a major newspaper and then deciding that “we need to do something about this.” Aside from the use of the word “reasonable,” this legal threat from a powerful regulatory agency provides little context on how to balance competing priorities.
For example, what if CVE-2021-44228 (known informally as “log4shell”) allowed an attacker to siphon protected health information (PHI) from a popular internet-connected insulin pump (or even kill individual targets)? Should the manufacturer push an update for log4j as soon as humanly possible? How much quality assurance (QA) testing is it “reasonable” to conduct before doing so? There are potentially severe consequences for introducing a breaking code change into such pumps (e.g. devices malfunctioning across the entire user base and a large number of people going into diabetic shock) as a result of insufficient QA, but the countervailing risks of physical harm and lost privacy are also significant. What does a “reasonable” person do in this situation? It’s not immediately clear, and deciding how to proceed would require a detailed risk/reward analysis. The FTC provides no guidelines for how to conduct it.
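The kind of risk/reward analysis I have in mind can be as simple as a back-of-the-envelope expected-loss comparison. The sketch below is purely illustrative: every probability and dollar figure is invented, and a real medical-device manufacturer would need actual incident and reliability data.

```python
# Back-of-the-envelope expected-loss comparison for the hypothetical
# insulin-pump scenario above. All probabilities and costs are made up
# for illustration only.

def expected_loss(p_exploit, exploit_cost, p_bad_patch, patch_cost):
    """Expected loss = chance of exploitation during the exposure window
    times its cost, plus chance the patch itself breaks devices times
    the cost of a fleet-wide malfunction."""
    return p_exploit * exploit_cost + p_bad_patch * patch_cost

# Option A: ship the log4j fix after minimal QA (short exposure window,
# but a higher chance the rushed patch breaks devices).
rush = expected_loss(p_exploit=0.01, exploit_cost=50_000_000,
                     p_bad_patch=0.05, patch_cost=200_000_000)

# Option B: run full QA first (two weeks of exposure, safer patch).
wait = expected_loss(p_exploit=0.10, exploit_cost=50_000_000,
                     p_bad_patch=0.001, patch_cost=200_000_000)

print(f"Rush patch expected loss: ${rush:,.0f}")   # $10,500,000
print(f"Full QA expected loss:    ${wait:,.0f}")   # $5,200,000
```

Under these made-up numbers, rushing the patch is actually the worse option, because the risk of a breaking change across the whole fleet dominates. Change the assumptions and the answer flips; that sensitivity is exactly why blanket “patch immediately” mandates without stated risk tolerances are unhelpful.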
Even more confusingly, the Food and Drug Administration (FDA), which regulates medical devices, issued the following guidance:
Manufacturers should assess whether they are affected by the [log4shell] vulnerability, evaluate the risk, and develop remediation actions…[m]anufacturers who may be affected by this most recent issue should communicate with their customers and coordinate with CISA.
Like the FTC, the FDA’s communication makes no mention of timelines, risk tolerances, or (non-)acceptability of compensating controls. From the perspective of medical device makers, it is thankfully less ominous and prescriptive with respect to the required steps. But from a societal perspective, I don’t think it’s a good thing to have two agencies with overlapping jurisdictions issuing vague, threatening, and potentially conflicting guidance regarding acceptable cyber risk tolerances.
Finally, it is not clear that the FTC staffers who wrote this post actually know what they are talking about. For example, in the quotation above, they conflate the software library (log4j) with a vulnerability in it. Additionally, they seem to limit their potential enforcement actions to companies that don’t take reasonable action in the face of “similar known vulnerabilities.” This phrasing to me would seem to exclude unknown (to anyone except an attacker prior to their exploitation) vulnerabilities. This distinction is not merely academic, as different types of security bugs require different tools to surface them. Thus, would companies who neglect to safeguard consumer data because of their failure to identify unknown vulnerabilities – by not conducting static code analysis or penetration testing, for example – get a free pass? Considering that most identified malicious exploitations are of this latter type of flaw, that would be a strange position for the FTC to take.
The four statements at the beginning of this post all sound like something an elected official would say at a campaign rally. Easterly herself seemed to realize this in acknowledging “moral outrage” on the part of Stamos. Frankly, if it were a politician saying these things, I would be okay with it. Elected officials are generalists who need to appeal broadly to a wide constituency, and as long as they hire capable folks to implement policies in line with broad strategic guidance, this can lead to good outcomes.
The problem here is that the people saying the above things are the implementers of such broad strategic guidance. What they are saying – at least publicly – is not actionable or useful, and it potentially reveals a lack of deep thinking with respect to the unavoidable tradeoffs involved with using technology.
Frankly, this might be cynical, but I believe these statements represent preemptive CYA efforts. If another serious breach occurs that gets national attention (e.g. like what happened with SolarWinds), these folks will be able to say “we told you so.” As the below passage from the essay “How Complex Systems Fail” highlights, this tendency is common:
Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure.
Although widespread, such bureaucratic techniques are not acceptable on the part of the nation’s most senior cybersecurity leaders. Moral outrage and CYA don’t help improve the functionality and security of our nation’s networks. Even more importantly, if the government is going to threaten companies with legal liability, it will need to provide much clearer standards regarding which risks are acceptable – compared to the potential rewards – and which are not.
And, in case anyone is wondering, yes, I am willing to help solve this problem! NSC, CISA, FTC, FDA, or any other alphabet soup agency that wants to tackle this problem: drop me a line and I’ll do my best to assist.