Officials noted that identity action items could still be included in later iterations of the national cybersecurity strategy implementation plan.
The White House’s implementation plan for the national cybersecurity strategy has 69 initiatives for agencies to carry out, but lacks any action items on digital identity, a shock to stakeholders who want action.
“The Brits invented the word ‘gobsmacked’ to describe things like this,” Jeremy Grant, longtime digital identity expert who has worked at the National Institute of Standards and Technology and now runs the Better Identity Coalition interest group, told Nextgov/FCW.
The strategy, released in March, had “[supporting] development of a digital identity ecosystem” as a strategic objective and previewed government investments in digital identity solutions, like providing attribute validation services, updating standards and developing digital identity platforms.
But the new “roadmap,” as acting national cyber director Kemba Walden called it in a briefing with reporters, doesn’t include anything on digital identity, the only objective left out other than data privacy.
When asked about the absence, a senior administration official told reporters: “This is an iterative document, so just because you’re not seeing an initiative tied to a strategic objective today doesn’t mean it won’t be there for the next go-round.”
The White House’s current work on digital identity, the official said, will come from other ongoing efforts around identity fraud.
“You’ve heard… the administration several times talk about work on digital identity actions in the context of identity fraud and combating that, and that is fundamentally what is holding that space right now,” the official said. “The administration is committed to action in that space, and that is still pre-decisional activity, but we would expect that follow-on actions from the identity fraud work would come into future iterations of the implementation plan.”
President Biden first promised an executive order on the topic in his 2022 State of the Union address, although it has yet to materialize.
“When you have this many departments and you actually have an executive order that is requiring an actual change of behavior, it requires a level of vetting from Justice Department, the [Office of Legal Counsel] and each counsel’s office that is a bit more from other executive orders,” White House senior advisor Gene Sperling told reporters in March during a call about a $1.6 billion anti-fraud proposal, including $600 million for identity theft and fraud prevention.
“It should be out soon, but I don’t want to predict what I can’t control, which is the time it takes to get everybody to completely — every counsel’s office to completely sign off,” said Sperling.
As for the impact of the omission, the original strategy itself noted that “the lack of secure, privacy-preserving, consent-based digital identity solutions allows fraud to flourish, perpetuates exclusion and inequity and adds inefficiency to our financial activities and daily life.”
There were 14,817 identity crimes reported in 2022, according to the nonprofit Identity Theft Resource Center. The center’s president and CEO Eva Velasquez told Nextgov/FCW via email that “the U.S. needs a digital identity strategy to help reduce identity fraud and the number of people who are victims of identity crimes.”
“I think it was a huge oversight,” Linda Miller — former deputy executive director of the government’s Pandemic Response Accountability Committee and founder and CEO of boutique consultancy Audient Group — told Nextgov/FCW via email about the lack of digital identity in the implementation plan. “It is very surprising and disappointing to me.
“Identity theft is a primary threat vector for government fraud at the federal, state and local levels of government,” she said. “We saw historic levels of identity theft-based fraud in unemployment assistance during the pandemic, and nation state actors used stolen identities to defraud many other pandemic programs as well.”
Jordan Burris, former chief of staff in the Office of the Federal CIO from 2017 to 2021 and current vice president and head of public sector strategy at digital identity company Socure, said via email that although “it is understood that the plan may not encompass all the activities underway by the administration,” the omission of digital identity is “deeply concerning.
“I fear that the failure to include digital identity in the cybersecurity implementation plan signals this critical work is being deferred, all the while we continue to see an uptick in identity theft, synthetic fraud and AI-driven attacks targeting government programs,” he said. “There appears to be little momentum at the federal level to change the status quo.”
A draft of the promised executive order obtained by Nextgov/FCW in February focused heavily on scaling the General Services Administration’s identity service, Login.gov — although nothing is official policy until an actual order is issued. Since then, GSA’s inspector general issued a report in March which found that GSA had misled agencies about the level of identity proofing standards Login.gov met.
Grant’s identity-focused trade group, meanwhile, has urged the White House to focus on digitizing identity credentials with a task force, noting that “White House leadership is essential,” given the dispersion of issuers of identity documents across levels of government.
Grant told Nextgov/FCW that the “hope is that in the weeks ahead, we’ll see additional details from the administration outlining how they will protect millions of Americans from identity-related cybercrime and identity theft, by strengthening the security and privacy of digital credentials.”
As for when action items on digital identity might be added to the implementation plan, Walden said that the document will “evolve” over time, with a 2.0 version coming next year as the first of annual updates.
“The implementation plan does not capture all of the cybersecurity activities in the federal government, nor does it intend to,” she said. “What it does do is capture key initiatives that we must get done in the near term.”
The White House declined to comment on the promised executive order.
Editor’s note: This article has been updated to include comment from Eva Velasquez and White House response.
“These challenges limit the degree of transparency into the use of pandemic relief funds,” a new report from the Pandemic Response Accountability Committee says.
Even government watchdogs run into data gaps when they try to track pandemic spending, according to a new report by the Pandemic Response Accountability Committee and 10 inspectors general offices.
“You may want to know how much pandemic relief money your community received from the federal government,” the PRAC website reads. “But getting the answers can be difficult because of the quality and availability of federal spending data.”
The PRAC — a group of inspectors general set up by Congress to coordinate and support oversight of pandemic relief — picked six random communities across the country to try to find out how much pandemic spending they received from the over $5 trillion doled out through grants, loans, contracts, direct assistance and other forms. Future reporting will focus on how the six communities used the funding.
Over 40 federal agencies and hundreds of programs were responsible for disbursing pandemic relief spending, the report states, but the watchdogs zeroed in on funding given by 10 agencies specifically. The group found a topline of about $2.65 billion in pandemic relief funding went to these six communities via 89 programs and subprograms in those agencies from March 2020 to September 2021.
But getting that information was no easy task: “Tracking pandemic funds to the community level required the use of multiple federal, state and local data systems, and ultimately we had to contact state and local entities directly to gain a better understanding and fill data gaps,” the report said.
Shortcomings in data collection and systems, along with differences in data formats, occurred across the federal, state and local levels of government.
“Sometimes data was either unreliable or unavailable. In other cases, we had to use data sources that the public can’t access,” the website said. “One partner had to access five federal non-public databases to determine the recipients in a single program.”
For example, the primary source of federal government spending data, USAspending.gov, “does not definitively track COVID-19 supplemental spending at the subrecipient level,” the PRAC previously reported.
The newest report notes that “attempts to only use USAspending.gov would not enable full identification of pandemic funding due to differences among USAspending.gov and other non-public data sources not accounted for on the publicly available USAspending.gov website.”
The group is “continuing to explore” how these differences affect the quality of data on the site, it says, and for now, advises that “continuing to disclose data limitations in USAspending.gov or pursuing other efforts around data reporting can help increase transparency of federal spending for the public.”
The latest report didn’t come with new recommendations, but it did point to previously made recommendations to the Office of Management and Budget about federal spending data gaps, including creating a feasibility study on how to better track subrecipient funding and engaging with lawmakers to consider “extending independent oversight of USAspending.gov data submissions.”
“While it may not be practical for every federal spending dollar to be integrated into USAspending.gov, opportunities exist to increase federal spending data transparency, a shared interest for all stakeholders, especially when the federal government administers large emergency spending programs to address nationwide challenges and respond to new disasters,” the report said.
For now, “we believe these challenges limit the degree of transparency into the use of pandemic relief funds,” the report said.
“If the PRAC Oversight Team — comprised of auditing and data experts from across the accountability community — had such difficulty tracking the funds, then the general public and even members of Congress will likely also experience challenges in understanding how much of taxpayer funds were provided to communities,” the report concluded.
House members today grilled officials from the Department of Defense (DoD) on a long-standing and vexing issue for Congress – the Pentagon’s inability to produce a clean audit opinion on its financial statements since that requirement went into effect for Federal agencies in 1990.
In a May 2023 report, the Government Accountability Office (GAO) explained that although DoD is responsible for about half of the Federal government’s discretionary spending, “DOD remains the only major federal agency that has never been able to receive a clean audit opinion on its financial statements.”
Since 1995, GAO has designated DOD financial management as high risk “because of pervasive deficiencies in its financial management systems, business processes, internal controls, and financial reporting,” the watchdog agency said.
“DOD should not get a free pass from the law, especially when other agencies that invest in priorities like health care, education, climate change, or economic growth are under high scrutiny and take important and difficult steps to comply with statutory auditing requirements,” stated Rep. Robert Garcia, D-Calif., today during a joint subcommittee hearing of the House Oversight and Accountability Committee.
John Tenaglia, principal director of Defense Pricing and Contracting at the DoD, blamed the current procurement system as one factor that’s stopping DoD from keeping better tabs on its finances.
“The standard procurement system needs to be retired. It’s a legacy system. It works, but we’re working with the comptroller on the standards to make sure that it communicates with the financial system, so it’s really all about the data standards behind that,” stated Tenaglia.
Asif Khan, director of Financial Management and Assurance at the GAO, discussed some of the reasons that DoD has not been able to produce a clean audit, including its use of older financial systems.
“It is helping that DoD is embracing technology,” including robotic process automation in its financial work, Khan said. “It is freeing up the resources to be able to do more analytical work, but the problems are very pervasive across DoD and that’s why it’s taking them time” to develop the ability to produce a clean audit, he said.
Rep. Gerry Connolly, D-Va., said the antiquated systems DoD uses to manage its finances can also introduce cybersecurity risks.
“This directly affects the security of our missions, of our intelligence, of our defense planning on our new weapons systems development,” said Rep. Connolly.
Earlier this year, GAO released a new set of recommendations for the DoD on steps that the department needs to take to produce a clean audit.
Microsoft says hackers somehow stole a cryptographic key, perhaps from its own network, that let them forge user identities and slip past cloud defenses.
FOR MOST IT professionals, the move to the cloud has been a godsend. Instead of protecting your data yourself, let the security experts at Google or Microsoft protect it instead. But when a single stolen key can let hackers access cloud data from dozens of organizations, that trade-off starts to sound far more risky.
Late Tuesday evening, Microsoft revealed that a China-based hacker group, dubbed Storm-0558, had done exactly that. The group, which is focused on espionage against Western European governments, had accessed the cloud-based Outlook email systems of 25 organizations, including multiple government agencies.
Those targets encompass US government agencies including the State Department, according to CNN, though US officials are still working to determine the full scope and fallout of the breaches. An advisory from the US Cybersecurity and Infrastructure Security Agency says the breach, which was detected in mid-June by a US government agency, resulted in the theft of unclassified email data “from a small number of accounts.”
China has been relentlessly hacking Western networks for decades. But this latest attack uses a unique trick: Microsoft says hackers stole a cryptographic key that let them generate their own authentication “tokens”—strings of information meant to prove a user’s identity—giving them free rein across dozens of Microsoft customer accounts.
“We put trust in passports, and someone stole a passport-printing machine,” says Jake Williams, a former NSA hacker who now teaches at the Institute for Applied Network Security in Boston. “For a shop as large as Microsoft, with that many customers impacted—or who could have been impacted by this—it’s unprecedented.”
In web-based cloud systems, users’ browsers connect to a remote server and, when they enter credentials like a username and password, they’re given a bit of data, known as a token, from that server. The token serves as a kind of temporary identity card that lets users come and go as they please within a cloud environment while only occasionally reentering their credentials. To ensure that the token can’t be spoofed, it’s cryptographically signed with a unique string of data known as a certificate or key that the cloud service possesses, a kind of unforgeable stamp of authenticity.
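To make the mechanics concrete, here is a minimal sketch of that sign-and-verify flow in Python. It uses a symmetric HMAC as a stand-in for the asymmetric signatures real identity platforms use, and every name in it (the key, the claims) is hypothetical rather than anything from Microsoft’s implementation.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical server-side signing material; whoever holds this key can
# mint tokens for any user, which is why a stolen key is so damaging.
SIGNING_KEY = b"cloud-provider-secret"

def issue_token(username: str) -> str:
    """After credentials check out, return a signed token: payload.signature."""
    payload = json.dumps({"sub": username, "exp": time.time() + 3600}).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "."
            + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str) -> dict | None:
    """Accept the token only if the signature matches and it hasn't expired."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None  # signature forged or payload tampered with
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None
```

The server never needs to see the credentials again while the token is valid; everything rests on the secrecy of the signing key.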
Microsoft, in its blog post revealing the Chinese Outlook breaches, has described a kind of two-stage breakdown of that authentication system. First, hackers were somehow able to steal a key that Microsoft uses to sign tokens for consumer-grade users of its cloud services. Second, the hackers exploited a bug in Microsoft’s token validation system, which allowed them to sign consumer-grade tokens with the stolen key and then use them to instead access enterprise-grade systems. All of this occurred despite Microsoft’s checks, which were meant to require different signing keys for those different grades of token.
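Microsoft hasn’t published the exact flaw, but the class of bug it describes, validating a token’s signature without checking that the signing key belongs to the right tier of service, can be sketched like this (all keys and scopes here are hypothetical stand-ins):

```python
import hashlib
import hmac

# Hypothetical stand-ins for the distinct consumer and enterprise signing keys.
TRUSTED_KEYS = {"consumer-key": b"consumer-secret",
                "enterprise-key": b"enterprise-secret"}
KEY_SCOPE = {"consumer-key": "consumer", "enterprise-key": "enterprise"}

def sign(key_id: str, payload: bytes) -> bytes:
    return hmac.new(TRUSTED_KEYS[key_id], payload, hashlib.sha256).digest()

def validate_buggy(payload: bytes, sig: bytes, key_id: str) -> bool:
    # BUG (the class of flaw described): the signature is checked against
    # *some* trusted key, but the key's scope is never compared with the
    # tier of the resource being accessed.
    key = TRUSTED_KEYS.get(key_id)
    return key is not None and hmac.compare_digest(
        hmac.new(key, payload, hashlib.sha256).digest(), sig)

def validate_fixed(payload: bytes, sig: bytes, key_id: str, tier: str) -> bool:
    # FIX: reject tokens whose signing key belongs to the wrong tier, e.g. a
    # consumer-grade key presented to an enterprise service.
    return KEY_SCOPE.get(key_id) == tier and validate_buggy(payload, sig, key_id)

# A token minted with the stolen consumer key passes the buggy check on an
# enterprise service, but fails once key scope is enforced.
token = b'{"sub": "someone@agency.example.gov"}'
forged = sign("consumer-key", token)
assert validate_buggy(token, forged, "consumer-key")
assert not validate_fixed(token, forged, "consumer-key", "enterprise")
```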
Microsoft says it has now blocked all tokens that were signed with the stolen key and replaced the key with a new one, preventing the hackers from accessing victims’ systems. The company adds that it has also worked to improve the security of its “key management systems” since the theft occurred.
But exactly how such a sensitive key, allowing such broad access, could be stolen in the first place remains unknown. WIRED contacted Microsoft, but the company declined to comment further.
In the absence of more details from Microsoft, one theory of how the theft occurred is that the token-signing key wasn’t in fact stolen from Microsoft at all, according to Tal Skverer, who leads research at the security firm Astrix, which earlier this year uncovered a token security issue in Google’s cloud. In older setups of Outlook, the service is hosted and managed on a server owned by the customer rather than in Microsoft’s cloud. That might have allowed the hackers to steal the key from one of these “on-premises” setups on a customer’s network.
Then, Skverer suggests, hackers might have been able to exploit the bug that allowed the key to sign enterprise tokens to gain access to an Outlook cloud instance shared by all 25 organizations hit by the attack. “My best guess is that they started from a single server that belonged to one of these organizations,” says Skverer, “and made the jump to the cloud by abusing this validation error, and then they got access to more organizations that are sharing the same cloud Outlook instance.”
But that theory doesn’t explain why an on-premises server for a Microsoft service inside an enterprise network would be using a key that Microsoft describes as intended for signing consumer account tokens. It also doesn’t explain why so many organizations, including US government agencies, would all be sharing one Outlook cloud instance.
Another theory, and a far more troubling one, is that the token-signing key used by the hackers was stolen from Microsoft’s own network, obtained by tricking the company into issuing a new key to the hackers, or even somehow reproduced by exploiting mistakes in the cryptographic process that created it. In combination with the token validation bug Microsoft describes, such a key could have been used to sign tokens for any Outlook cloud account, consumer or enterprise—a skeleton key for a large swath, or even all, of Microsoft’s cloud.
The well-known web security researcher Robert “RSnake” Hansen says he read the line in Microsoft’s post about improving the security of “key management systems” to suggest that Microsoft’s “certificate authority”—its own system for generating the keys for cryptographically signing tokens—was somehow hacked by the Chinese spies. “It’s very likely there was either a flaw in the infrastructure or configuration of Microsoft’s certificate authority that led an existing certificate to be compromised or a new certificate to be created,” Hansen says.
If the hackers did in fact steal a signing key that could be used to forge tokens broadly across consumer accounts—and, thanks to Microsoft’s token validation issue, on enterprise accounts, too—the number of victims could be far greater than the 25 organizations Microsoft has publicly accounted for, warns Williams.
To identify enterprise victims, Microsoft could look for which of their tokens had been signed with a consumer-grade key. But that key could have been used to generate consumer-grade tokens, too, which might be far harder to spot given that the tokens might have been signed with the expected key. “On the consumer side, how would you know?” Williams asks. “Microsoft hasn’t discussed that, and I think there’s a lot more transparency that we should expect.”
Microsoft’s latest Chinese spying revelation isn’t the first time state-sponsored hackers have exploited tokens to breach targets or spread their access. The Russian hackers who carried out the notorious SolarWinds supply chain attack also stole Microsoft Outlook tokens from victims’ machines that could be used elsewhere on the network to maintain and expand their reach into sensitive systems.
For IT administrators, those incidents—and particularly this latest one—suggest some of the real-world trade-offs of migrating to the cloud. Microsoft, and most of the cybersecurity industry, has for years recommended the move to cloud-based systems to put security in the hands of tech giants rather than smaller companies. But centralized systems can have their own vulnerabilities—with potentially massive consequences.
“You’re handing over the keys to the kingdom to Microsoft,” says Williams. “If your organization is not comfortable with that now, you don’t have good options.”
President Biden has made clear that all Americans deserve the full benefits and potential of our digital future. The Biden-Harris Administration’s recently released National Cybersecurity Strategy calls for two fundamental shifts in how the United States allocates roles, responsibilities, and resources in cyberspace:
Ensuring that the biggest, most capable, and best-positioned entities – in the public and private sectors – assume a greater share of the burden for mitigating cyber risk
Increasing incentives to favor long-term investments into cybersecurity
Today, the Administration is announcing a roadmap to realize this bold, affirmative vision. It is taking the novel step of publishing the National Cybersecurity Strategy Implementation Plan (NCSIP) to ensure transparency and a continued path for coordination. This plan details more than 65 high-impact Federal initiatives, from protecting American jobs by combatting cybercrimes to building a skilled cyber workforce equipped to excel in our increasingly digital economy. The NCSIP, along with the Bipartisan Infrastructure Law, CHIPS and Science Act, Inflation Reduction Act, and other major Administration initiatives, will protect our investments in rebuilding America’s infrastructure, developing our clean energy sector, and re-shoring America’s technology and manufacturing base.
Eighteen agencies are leading initiatives in this whole-of-government plan demonstrating the Administration’s deep commitment to a more resilient, equitable, and defensible cyberspace. The Office of the National Cyber Director (ONCD) will coordinate activities under the plan, including an annual report to the President and Congress on the status of implementation, and partner with the Office of Management and Budget (OMB) to ensure funding proposals in the President’s Budget Request are aligned with NCSIP initiatives. The Administration looks forward to implementing this plan in continued collaboration with the private sector, civil society, international partners, Congress, and state, local, Tribal, and territorial governments. As an example of the Administration’s commitment to public-private collaboration, ONCD is also working on a request for information regarding cybersecurity regulatory harmonization that will be published in the near future.
The NCSIP is not intended to capture all Federal agency activities in support of the NCS. The following are sample initiatives from the plan, which is organized by the NCS pillars and strategic objectives.
Pillar One | Defending Critical Infrastructure
Update the National Cyber Incident Response Plan (1.4.1): During a cyber incident, it is critical that the government acts in a coordinated manner and that private sector and SLTT partners know how to get help. The Cybersecurity and Infrastructure Security Agency (CISA) will lead a process to update the National Cyber Incident Response Plan to more fully realize the policy that “a call to one is a call to all.” The update will also include clear guidance to external partners on the roles and capabilities of Federal agencies in incident response and recovery.
Pillar Two | Disrupting and Dismantling Threat Actors
Combat Ransomware (2.5.2 and 2.5.4): Through the Joint Ransomware Task Force, which is co-chaired by CISA and the FBI, the Administration will continue its campaign to combat the scourge of ransomware and other cybercrime. The FBI will work with Federal, international, and private sector partners to carry out disruption operations against the ransomware ecosystem, including virtual asset providers that enable laundering of ransomware proceeds and web fora offering initial access credentials or other material support for ransomware activities. A complementary initiative, led by CISA, will include offering resources such as training, cybersecurity services, technical assessments, pre-attack planning, and incident response to high-risk targets of ransomware, like hospitals and schools, to make them less likely to be affected and to reduce the scale and duration of impacts if they are attacked.
Pillar Three | Shaping Market Forces and Driving Security and Resilience
Software Bill of Materials (3.3.2): Increasing software transparency allows market actors to better understand their supply chain risk and to hold their vendors accountable for secure development practices. CISA continues to lead work with key stakeholders to identify and reduce gaps in software bill of materials (SBOM) scale and implementation. CISA will also explore requirements for a globally-accessible database for end of life/end of support software and convene an international staff-level working group on SBOM.
Pillar Four | Investing in a Resilient Future
Drive Key Cybersecurity Standards (4.1.3, 4.3.3): Technical standards are foundational to the Internet, and U.S. leadership in this area is essential to the vibrancy and security of cyberspace. Consistent with the National Standards Strategy, the National Institute of Standards and Technology (NIST) will convene the Interagency International Cybersecurity Standardization Working Group to coordinate major issues in international cybersecurity standardization and enhance U.S. federal agency participation in the process. NIST will also finish standardization of one or more quantum-resistant public-key cryptographic algorithms.
Pillar Five | Forging International Partnerships to Pursue Shared Goals
International Cyberspace and Digital Policy Strategy (5.1.1 and 5.1.2): Cyberspace is inherently global, and policy solutions must reflect close collaboration with our partners and allies. The Department of State will publish an International Cyberspace and Digital Policy Strategy that incorporates bilateral and multilateral activities. State will also work to catalyze the development of staff knowledge and skills related to cyberspace and digital policy that can be used to establish and strengthen country and regional interagency cyber teams to facilitate coordination with partner nations.
The White House’s Office of the National Cyber Director (NCD) today released its much-awaited marching orders to implement the National Cybersecurity Strategy (NCS) that it published in March.
On the Federal agency front, the implementation plan enlists the additional efforts of several agencies that already are doing some of the heavy lifting on many aspects of cybersecurity work and policy.
Similarly, key issues that will get more attention include some that are already well-known in policy circles – including creation of software bills of material, fighting ransomware and other cybercrime, improving incident response work, and pushing harder for international cybersecurity harmonization.
And high atop the initiatives featured in the plan released today, the NCD said it is preparing a request for information on “cybersecurity regulatory harmonization” for critical infrastructure that it plans to publish “in the near future.”
When it rolled out the NCS earlier this year, the National Cyber Director keyed on multiple focus points – including continuing efforts to improve security in already-regulated critical infrastructure sectors, a high-level goal of shifting more security responsibility onto providers of tech products and services, and a robust focus on using “all tools of national power” to go after attackers.
In the implementation plan released today, the NCD shaped its focus around two goals: “Ensuring that the biggest, most capable, and best-positioned entities – in the public and private sectors – assume a greater share of the burden for mitigating cyber risk,” and “increasing incentives to favor long-term investments into cybersecurity.”
Orders to Agencies
The implementation plan features “more than 65 high-impact initiatives requiring executive visibility and interagency coordination that the Federal government will carry out to achieve the strategy’s objectives,” NCD said.
Each of the 69 initiatives has been tasked to a Federal agency, with a timeline for completion, NCD said.
Taking on many of those initiatives will be 18 Federal agencies, NCD said. Agencies making prominent showings on NCD’s initiatives list include the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, Justice Department, State Department, and the Commerce Department’s National Institute of Standards and Technology (NIST).
NCD added that while the implementation plan represents a “whole of government” approach, the entire effort also will encompass Congress, state, local, Tribal, and territorial governments, the private sector, civil society, and international partners.
NCD said it will coordinate activities under the plan, and work with the Office of Management and Budget (OMB) “to ensure funding proposals in the President’s Budget Request are aligned” with the implementation plan.
NCD also pledged today that the implementation plan will be a “living document,” and will be updated annually.
Steps Already Taken
Some of the 69 tasks on the list have already been completed – including the White House’s release in June of its fiscal year 2025 cybersecurity priorities, the Defense Department’s delivery to Congress in May of its 2023 cyber strategy, and the creation in June of the Justice Department’s National Security Cyber Division.
Big To-Do List
Among the 69 initiatives listed in the plan, NCD highlighted some of the major steps in a fact sheet released by the White House today:
CISA will lead an effort to update the National Cyber Incident Response Plan “to more fully realize the policy that ‘a call to one is a call to all,’” NCD said, and to offer “clear guidance to external partners on the roles and capabilities of Federal agencies in incident response and recovery.”
CISA and the FBI through the existing Joint Ransomware Task Force will lead efforts to combat ransomware and other cybercrime. The FBI will work with Federal, international, and private sector partners to “carry out disruption operations against the ransomware ecosystem, including virtual asset providers that enable laundering of ransomware proceeds and web fora offering initial access credentials or other material support for ransomware activities.” At the same time, CISA will lead efforts to offer resources “such as training, cybersecurity services, technical assessments, pre-attack planning, and incident response to high-risk targets of ransomware, like hospitals and schools, to make them less likely to be affected and to reduce the scale and duration of impacts if they are attacked,” NCD said.
CISA will continue to lead work with stakeholders on identifying and reducing “gaps in software bill of materials (SBOM) scale and implementation,” and also will “explore requirements for a globally-accessible database for end of life/end of support software and convene an international staff-level working group on SBOM.”
NIST will “convene the Interagency International Cybersecurity Standardization Working Group to coordinate major issues in international cybersecurity standardization and enhance U.S. federal agency participation in the process,” and finish standardization of one or more quantum-resistant public key cryptographic algorithms, NCD said. “Technical standards are foundational to the Internet, and U.S. leadership in this area is essential to the vibrancy and security of cyberspace,” it said.
The State Department will create an International Cyberspace and Digital Policy Strategy “that incorporates bilateral and multilateral activities,” and will also work to “catalyze the development of staff knowledge and skills related to cyberspace and digital policy that can be used to establish and strengthen country and regional interagency cyber teams to facilitate coordination with partner nations.” Cyberspace, said NCD, “is inherently global, and policy solutions must reflect close collaboration with our partners and allies.”
The full scope of directives to agencies is listed in the plan released today.
VA Secretary Denis McDonough said the department is poised “to deliver a completely new, fully integrated experience” for veterans as officials expect a further increase in PACT Act claims.
The Department of Veterans Affairs has launched numerous modernization efforts in recent years to enhance its veteran-facing services, but more must be done to meet the coming deluge of PACT Act-related benefits claims — including using more artificial intelligence tools to bolster existing processes — VA Secretary Denis McDonough said at the DigitalVA Expo Thursday.
The PACT Act, which was signed into law by President Joe Biden last August, increased the number of veterans and their family members eligible to receive benefits as a result of exposure to burn pits and other toxic substances. A VA official said during a media roundtable last month that over 560,000 disability benefits claims had already been filed since the law’s enactment.
While McDonough noted the department’s progress in modernizing and enhancing many of its digital and telehealth services, he said conducting a simple task — such as having a veteran update their personal information — “in too many cases still requires visits to multiple outdated websites.”
As the VA experienced a surge in telehealth visits as a result of the coronavirus pandemic, the department worked to streamline many of its disparate claims, benefits and health services into a centralized mobile app that launched in July 2021.
With VA’s backlog of benefits claims expected to peak later this year as a result of the PACT Act, McDonough said the department must continue “building on the innovation that our providers have led” over the past several years, since “we don’t have a choice.”
He said this includes prioritizing technologies that will allow the department to speed up its claims processing timeframe while also reducing system outages and downtimes — efforts that have been bolstered by VA partnering with private sector companies “to leverage best practices in AI, natural language processing, machine learning [and] robotic process automation technology.
“That means using automation to help us make better, faster, more informed decisions, improving veteran health outcomes and benefits decisions, while eliminating redundant administrative tasks and workflows and promoting jointness across VA — including with our federal and community partners,” McDonough added.
A report released by the VA Office of Inspector General earlier this month noted that the Veterans Benefits Administration, which oversees veterans’ benefits programs within the department, plans to use IT system modernization tools and automation to help meet the influx of PACT Act claims.
The report’s release came after lawmakers on two House Veterans’ Affairs subcommittees also called earlier this month for the department to speed up its deployment of advanced automation tools to help reduce menial tasks that could prolong the processing of veterans’ claims.
McDonough said VA is “early in this journey” when it comes to using more advanced technologies and automation, but that the department is poised “to deliver a completely new, fully integrated experience in which the support vets need is just a few clicks away.
“These emerging technologies promise to serve more veterans more efficiently, more effectively, more equitably and more accurately,” he added.
Lou Gerstner will always be known as the man who saved IBM after resuscitating, then reinvigorating, the near-bankrupt company when he took over as chairman and CEO in 1993. Gerstner’s career, though, spanned 43 years, including more than a decade at McKinsey, senior positions at American Express, and four years as chairman and CEO of RJR Nabisco. Since stepping down from IBM in 2002 he has continued to lead an active “portfolio” life in education, healthcare, and private equity. In this conversation with former McKinsey managing director Ian Davis, he reflects on the DNA of companies that keep on creating value.
Lou Gerstner biography
Vital statistics
Born March 1, 1942, in Mineola, New York
Education
Graduated with a bachelor’s degree in engineering from Dartmouth College in 1963 and an MBA from Harvard Business School in 1965
Career highlights
The Carlyle Group (2003–present)
Senior adviser (2008–present)
Chairman (2003–08)
IBM (1993–2002)
Chairman and CEO
RJR Nabisco (1989–93)
Chairman and CEO
American Express (1978–89)
Various executive positions
McKinsey & Company (1965–78)
Director
Fast facts
Awarded the designation of honorary Knight of the British Empire from Queen Elizabeth II, in June 2001, for his work in education and business
Chairman of the board of directors at the Broad Institute of MIT and Harvard
Vice chairman of Memorial Sloan Kettering Cancer Center’s boards of overseers and managers
Received the American Museum of Natural History’s Distinguished Service to Science and Education Award
The Quarterly: How do you think about corporate longevity? Does it help executives if their companies explicitly aim to be around a long time, by which I mean a generation or more?
Lou Gerstner: I don’t think so. It seems to me that companies should focus on trying to be successful five years from now, perhaps ten. If your business has already been around, say, 20 years, I don’t see how it can help the management team if one of the primary objectives is getting to 100. It’s not something they can execute on. I’m not sure what you can do to guarantee success in that time frame, or even on a 20- to 30-year view.
The Quarterly: So why do some businesses last much longer than others?
Lou Gerstner: A lot of it has to do with the industry. Many companies that have made it over many years have been in slow-changing industries that haven’t been much affected by the external environment, that are characterized by significant scale economies, or that are heavily regulated.
Take food production. The big global players in this sector are not, typically, huge profit generators, and their turnover only increases modestly—say, by 1 to 2 percent a year, in line with demographic trends. But those businesses are in a nice place: there’s not much new competition, and the changes they’re up against, whether technological or otherwise, tend to be relatively small. In the automobile industry, it’s long cycle times and scale economies that deter others. And in banking, it’s been regulation. You see a lot of small bank start-ups in the US but the reason that so many of the large players have been around a long time is that state and federal laws make it difficult to start a national bank.
The Quarterly: Conversely, the entry and exit barriers are much lower in, say, software or technology, where capital requirements for new entrants can be relatively light.
Lou Gerstner: That’s true, and it’s in those sectors that companies are most often subject to strong competition, technological innovation, and regulatory change. The question, at the end of the day, is whether leaders in these and other industries can adjust. I would argue that more often than not they can’t. Think about all those companies in the computer or consumer-electronics industries, like Control Data or RCA. Corporate longevity is either driven by the leadership team that is there or by a new one that comes in from the outside and is able to manage the transition to a significantly different competitive environment. There was nothing that said American Express or IBM couldn’t go out of business, and IBM very nearly did. For a long time, American Express wouldn’t go into credit cards, because it thought that would cannibalize its Travelers Cheques business. When I arrived at IBM in 1993, there was no inheritable or even extendable platform. The company was dying.
The Quarterly: Is there something in the DNA of those firms that have endured—perhaps a willingness to respond to a change of direction—that enables them to survive?
Lou Gerstner: In anything other than a protected industry, longevity is the capacity to change, not to stay with what you’ve got. Too many companies build up an internal commitment to their existing businesses, and there’s the problem: it’s very, very difficult to “eat your seed corn,” go into other activities, or radically change something fundamental about what you’ve been doing, like the pricing structure or distribution system. Rather than changing, they find it easier to just keep doing the same things that brought them success. They codify why they’re successful. They write guidebooks. They create teaching manuals. They create whole cultures around sustaining the model. That’s great until the model gets threatened by external change; then, all too often, the adjustment is discontinuous. It requires a wrench, often from an outside force. Andy Grove put it well when he said “only the paranoid survive.”
Remember that the enduring companies we see are not really companies that have lasted for 100 years. They’ve changed 25 times or 5 times or 4 times over that 100 years, and they aren’t the same companies as they were. If they hadn’t changed, they wouldn’t have survived. If you could take a snapshot of the values and processes of most companies 50 years ago—and did the same with a surviving company in 2014—you would say it’s a different company other than, perhaps, its name and maybe its purpose and maybe its industry. The leadership that really counts is the leadership that keeps a company changing in an incremental, continuous fashion. It’s constantly focusing on the outside, on what’s going on in the marketplace, what’s changing there, noticing what competitors are doing.
The Quarterly: How important are values in sustaining companies, even those that change? And can values be an enemy of change?
Lou Gerstner: I think values are really, really important, but I also think that too many values are just words. When I teach at the IBM School, I use the annual reports of about ten major companies that invariably announce, on the back page or inside back page, “These are our values.” What’s striking to me is that almost all the values are the same. “We focus on our customers; we value teamwork; we respect the dignity of our workforce.”
But when you go inside those companies, you often see that the words don’t translate into practices. When I arrived at IBM, one of my first questions was, “Do we have teamwork?,” because the new strategy crucially depended on our ability to provide an integrated approach to our customers. “Oh, yes, Lou, we have teamwork,” I was told. “Look at those banners up there. Mr. Watson put them up in 1938; they’re still there. Teamwork!” “Oh, good,” I responded. “How do we pay people?” “Oh, we pay on individual performance.” The rewards system is a powerful driver of behavior and therefore culture. Teamwork is hard to cultivate in a world where employees are paid solely on their individual performance.
I found a similar problem at American Express, where our stated distinguishing capability was the quality of the service we delivered versus that of our competitors Visa and MasterCard, which were owned by a diverse group of bank holding companies. It turned out that on a quarterly basis, we only measured financial performance and that the assessment of our service quality, on crucial customer-satisfaction matters such as statement clarity or phone-call wait times, was only done once a year. People do what you inspect—not what you expect.
If the practices and processes inside a company don’t drive the execution of values, then people don’t get it. The question is, do you create a culture of behavior and action that really demonstrates those values and a reward system for those who adhere to them? At American Express, we had an annual award for people, all over the world, who delivered great service. One winner I’ll never forget was a young chauffeur whose car windscreen had smashed and hit him in the head while he was driving an American Express client to the airport. Bleeding profusely, he continued the journey and got the client to the plane on time. By explicitly recognizing through worldwide communications the incredible commitment of people like this (and the rewards they receive), you can get people to behave in a certain way. Simply talking about it as part of your values isn’t enough.
The Quarterly: Some companies with reputedly strong values still find it hard to change. Do values ever get in the way of the adjustment you are talking about?
Lou Gerstner: I find it hard to think about bad values per se. The problem, as I say, comes when values are simply ignored and not reinforced every day by the internal processes of the company. The fault lies in not demanding adherence to the important values: sensitivity to the marketplace, awareness of competitors, and a willingness to deliver to the customer whatever he or she wants, regardless of what your internal historical assets have been.
In that sort of situation, it’s very hard to change. IBM was enamored with mainframes because mainframes made all the money. But if we were going to change, we had to find a way to take the money away from mainframes and allocate it to something else. So it isn’t what companies say; it’s what they do. Do you think Eastman Kodak didn’t see the move from analog to digital photography? Of course they did. They invented it. But if they had a value—I’m sure they did—of being market sensitive and following the customer, they didn’t follow it. They didn’t make the shift.
The Quarterly: Are there any relevant lessons from your post-IBM experience in the private-equity industry?
Lou Gerstner: I think that private-equity activity tends to come at the end of the corporate cycle, when a company is already in trouble, has been mismanaged, or is an orphan in need of new leadership. So private equity is another outside agent that comes in when management has failed to do what it needs to do.
The Quarterly: Is the management of generational change within a company an important component of adaptability and staying sensitive to the market? Does involving younger people meaningfully in routine decisions help create the right conditions for change?
Lou Gerstner: The problem with all of these things is that there’s a ditch on both sides of the road. I’ve known times in my career when older and wiser heads restrained younger people carried away by short-term dollar signs. So it’s hard to generalize, but certainly you have to listen to all the executive team. Organizations, in my experience, tend to be healthiest where there is a supremacy of ideas, where people are willing to listen to the youngest person in the room—provided, of course, that he or she has the facts.
My successor at IBM has embraced what we called an IBM jam. It goes on for several days; every IBMer can dial in and discuss important topics like cloud computing or mobile computing. That represents a real effort at IBM to tap the ideas of the younger, newer employees, not just the senior executives. Always listening to the younger folks won’t guarantee you the best strategy. But if you don’t listen to them at all, you won’t get it either.
The Quarterly: Do you think ownership structure makes a difference? We’ve noticed that a large proportion of enduring companies have been privately owned.
Lou Gerstner: There are obviously many more private companies than public companies, certainly in the United States, so you would probably expect this outcome. One thing I would say, though, is that the preoccupation with short-term earnings in the public-company environment—not something private companies are so concerned with—is quite destructive of longevity.
And that’s a bad thing. Who says the analysts are right when they mark down a company’s stock just because it makes 89 cents in the first quarter rather than the 93 cents forecast by the market? Are they thinking about the long-term competitiveness of the company? Are they thinking this would have been a good time to reinvest, or are they just churning out numbers and saying they want earnings per share to go up every quarter? This kind of short-term pressure on current earnings can lead to underinvestment in the long-term competitiveness of a business.
It’s very interesting to me that a company like Amazon has been able to convince the world that it doesn’t have to make meaningful earnings, because it’s investing for the future—building warehouses and building distribution and building hardware and software applications. It’s so rare to see that happening. It’s like they’re acting like a private company. It could be that private companies can operate without the pressure of trade-offs of short- and long-term investment and performance.
ABOUT THE AUTHOR(S)
This interview was conducted by Ian Davis, former managing director of McKinsey, and McKinsey Publishing’s Tim Dickson.
The latest quantum computer-adjacent device is part of NIST’s broader plan to help agencies move forward as they develop more emerging technology infrastructure.
Researchers affiliated with the National Institute of Standards and Technology unveiled a new device last week that aims to cut down the “noise” generated by quantum computing systems’ information processing, part of a larger agency agenda to meet private sector innovation in the quantum information science and technology field.
The device, a programmable switch that can improve connectivity between two qubits, or quantum units of information, was initially introduced in a research paper. One of the authors, NIST physicist Ray Simmonds, told Nextgov/FCW that with these devices, NIST aims to enable future advances in quantum computing infrastructure.
“NIST is trying to…say, ‘what do we need to do to make that technology happen?’” he said. “Right now, Google and IBM have been working on scaling something up, and they’re using a technology that can work on a larger scale…but it’s not going to have all the functionality you need necessarily to make a robust machine at this point. There’s other things that have to get sorted.”
Making a strong switch between these qubits to help them perform calculations and other operations more accurately is one of those external components key to emerging QIST architectures.
“What we’ve been doing is to try to push the envelope and say, ‘Can we make a system where all these individual elements can be turned on and off?’” he said.
As quantum computing is still an emerging technology field — where universally operable quantum computers are an estimated decade away — experts are uncertain as to the exact system architecture it will require.
Devices like NIST’s switch intend to promote a level of flexibility in operating a future quantum computer, especially as the field continues to evolve.
“We feel like this is the kind of thing that’s going to need to happen if you want to make a machine that’s more flexible, and to push to the next level to make it have more connections and have more ways of operating the device,” Simmonds said. ”We don’t know exactly what is the best architecture to make a quantum machine yet.”
The toggle tackles a fundamental element required for precise quantum computing — noise reduction between qubits — but also can turn on the measurement of both qubits simultaneously, a feature that can help reduce quantum computational errors.
“One of NIST’s missions is to not only figure out how to measure things better but more efficiently and with higher fidelity; and making those measurements better but figuring out what are the next measurements that need to happen to enable technologies,” Simmonds said.
He added that as technologies continue to advance, specific measurements and standards applicable to them will change. In this vein, NIST is “really trying to push the whole U.S. forward and help bring those new ideas to bear,” he said.
The research mainly examined VPNs, firewalls, access points, routers, and other remote server management appliances used by top government agencies in the United States.
Cybersecurity researchers at Censys referred to publicly accessible management interfaces as “low-hanging fruit” for cybercriminals, who can use them to easily gain unauthorized access to crucial assets.
Researchers at Censys, an attack surface management company, have discovered hundreds of devices linked to federal networks that have remotely accessible management interfaces. These interfaces can allow for the controlling and configuring of federal agency networks through the public internet.
Shocking Details Emerge about Federal Network Devices
According to a blog post from the Censys Research Team, published on June 26, an examination of the attack surfaces of approximately fifty sub-organizations within the federal civilian executive branch (FCEB) revealed 13,000 different hosts spread across 100 autonomous systems.
Further probing of services running on a subset of 1,300 FCEB hosts accessible via IPv4 addresses uncovered hundreds of devices with publicly accessible management interfaces. This finding falls within the scope of CISA’s Binding Operational Directive (BOD) 23-02.
What is BOD 23-02?
CISA’s BOD 23-02 helps federal agencies eliminate risks associated with remotely accessible management interfaces. Within fourteen days of discovering one, federal civilian agencies must either remove the networked management interface from the internet or deploy Zero Trust Architecture capabilities that enforce access control on it.
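The directive’s first practical question is simply whether a management interface answers from the public internet at all. A minimal sketch of that reachability check, with a documentation-range placeholder address and a port list assumed from the protocols named in the reporting, might look like this:

```python
import socket

# Assumed management ports drawn from the protocols the report describes
# (FTP, SSH, Telnet, SMB, and web-based admin consoles).
MANAGEMENT_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 445: "SMB",
                    443: "HTTPS admin console", 8443: "HTTPS admin console (alt)"}

def check_exposure(host: str, timeout: float = 2.0) -> list[str]:
    """Return the management services reachable on `host` from this vantage point."""
    exposed = []
    for port, name in MANAGEMENT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                exposed.append(f"{name} (tcp/{port})")
        except OSError:
            pass  # closed, filtered, or unreachable
    return exposed

if __name__ == "__main__":
    # 198.51.100.10 is a placeholder from the TEST-NET-2 documentation range.
    for service in check_exposure("198.51.100.10"):
        print("publicly reachable:", service)
```

Anything such a check turns up from an outside vantage point is, under the directive, supposed to be removed from the internet or placed behind access-control enforcement.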
What are the Dangers of Internet-Exposed Interfaces?
Researchers at Censys referred to publicly-accessible interfaces as “low-hanging fruit” for cybercriminals, as they can easily gain unauthorized access to crucial assets. CISA notes that threat actors are taking a keen interest in targeting certain classes of devices, especially those supporting network infrastructures, as it helps them evade detection.
After compromising these devices, attackers can obtain full access to the network. Misconfigurations, insufficient or outdated security measures, and unpatched software make devices vulnerable to exploitation. When a device’s management interface is directly reachable from the public internet, a compromise is far more damaging for the organization.
Which Devices Are Impacted?
Researchers mainly examined VPNs, firewalls, access points, routers, and other remote server management appliances. They found around 250 instances of web interfaces for hosts exposing network appliances, many of which were also running remote protocols such as SSH and Telnet.
Most of the impacted devices were Cisco network appliances with a publicly accessible Adaptive Security Device Manager interface; researchers also discovered enterprise Cradlepoint router interfaces revealing wireless network details. Other impacted products include Fortinet FortiGuard, SonicWall, and other popular firewalls.
In addition, researchers observed exposed remote access protocols, including NetBIOS, FTP, SNMP, and SMB, out-of-band remote server management devices like Lantronix SLC console server, physical Barracuda Email Security Gateway appliances, Nessus vulnerability scanning servers, HTTP services that exposed directory listings, managed file transfer protocols such as GoAnywhere, MOVEit, and SolarWinds Serv-U, and more than 150 instances of end-of-life software.
Fifteen instances of remote access protocols with multiple known vulnerabilities exploitable by threat actors were running on exposed FCEB hosts. The report highlights the need for federal agencies to be more proactive in safeguarding their digital assets and to improve security mechanisms across all systems in order to bring devices into compliance with CISA’s BOD 23-02.