A studious adversary may be hellbent on destruction, and a comprehensive approach is needed to successfully govern the protection of critical infrastructure, specialists say.
The discovery of a malware tool targeting the operational technology in critical infrastructure like power plants and water treatment facilities is highlighting issues policymakers are grappling with in efforts to establish a regulatory regime for cybersecurity.
The tool enables the adversary to move laterally across industrial control system environments by effectively targeting their crucial programmable logic controllers.
“There are only a few places that can build something like this,” said Bryson Bort, CEO and founder of cybersecurity firm Scythe. “This is not the kind of thing that the script kiddie—the amateur—can all of a sudden gin up and be like, ‘look, I’m doing things against PLCs.’ These are very complicated machines.”
“These are not protocols you can just go up, and, like, do against, like [web application penetration testing],” Bort said. “So the complexity of this cannot be [overstated], the comprehensive nature of this particular malware cannot be [overstated]. This thing, I think calling it a shopping mall doesn’t quite capture it right. This was Mall of America. This thing had almost everything in it and the ability to add even more.”
Bort said the design of the tool suggests a switch in the mindset of the adversary—likely the Kremlin in the estimation of cyber intelligence analysts, although U.S. officials have not attributed the tool’s origin.
He connected the tool’s emergence to “what we’re seeing here in phase three on the ground in the Ukraine, which is the Russians seem to be going almost with a scorched earth approach. They are killing civilians, they are destroying the infrastructure. And that’s a complete, almost, 180 from what we saw within the first few days of the war where it looked like … they thought they were gonna kind of stroll into the country, take everything. And you don’t want to destroy what you’re about to take. And now it seems to be just to cause destruction.”
In response to a question about the role global vendors play in the industrial control systems community, and about potentially limiting their production to trustworthy partner nations, Bort argued that if regulations are needed, the focus should be on the owners and operators of the critical infrastructure.
“This isn’t a vendor problem,” he said. “This is about ICS asset owners, and asset owners are working closely with their respective governments … and different countries of course, have different levels of regulation or partial regulation. We’re in a kind of partially regulated area with likely more regulation coming in these sectors. But I would say it’s the asset owners, not the vendors that I’d be looking to.”
But connected industrial control system environments are complicated, with many different vendors in the supply chain, including commercial information technologies like cloud services, which adversaries are increasingly targeting for their potential to create an exponential effect.
“Security matters on all of these sides,” Trey Herr, director of the Atlantic Council’s Cyber Statecraft Initiative, told Nextgov. “The vendors are the point of greatest regulatory leverage so addressing cybersecurity at the design stage can have the widest impact but with least understanding of the specific environments in which they’ll be used. Asset owners have the best picture about how they use this technology and security matters here in how they deploy and manage the security of these devices. Vendors might be OT focused or IT focused, like cloud vendors, so regulators need to keep focused on both communities.”
That is something lawmakers are currently deliberating on with the goal of introducing legislation this summer. But Herr said more of the community’s attention is currently on the asset owners than on the IT supply-chain elements that are also involved.
“We have a lot more effort and energy on the asset owner level with the Sector Risk Management Agencies at the moment than other parties, especially the IT vendors,” he said.
AI is the most impactful technological advance of our time, transforming every aspect of the global economy.
Five waves of growth have carried AI from inception to ubiquity: the big bang of AI, cloud services, enterprise AI, edge AI and autonomy.
Like other technical breakthroughs — such as industrial machinery, transistors, the internet and mobile computing — AI was conceived in academia and commercialized in successive phases. It first took hold in large, well-resourced organizations before spreading over years to smaller organizations, professionals and consumers.
Since the term “AI” was coined at Dartmouth College in 1956, people in this field have explored many approaches to solving the world’s toughest problems. One of the most popular, deep learning, exploits data structures called neural networks that are loosely modeled on how human brain cells operate.
Data scientists using deep learning configure a neural network with the parameters that work best for a particular problem, and then feed the AI up to millions of sample questions and answers. With each sample answer, the AI adjusts its neural weights until it can answer the questions on its own — even new ones it hasn’t seen before.
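To make that concrete, here is a minimal sketch in Python with NumPy of the training loop just described: a tiny two-layer neural network that adjusts its weights from sample inputs and answers (the classic XOR problem stands in for the "sample questions and answers") until it can answer on its own. The network size, learning rate, and task are illustrative choices, not taken from any particular system.

```python
import numpy as np

# Toy "questions" (inputs) and "answers" (labels): the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the error.
    # This is the "adjusts its neural weights" step described above.
    grad_p = (p - y) * p * (1 - p)
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print(np.round(p, 2))  # converges toward [[0], [1], [1], [0]]
```

Production deep learning differs mainly in scale: more layers, millions to billions of weights, and GPUs to run the same arithmetic fast.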
Learn more about the five waves of modern AI, determine which wave your organization is in and gear up for what comes next.
The Big Bang of AI
The first wave of AI computing was its “big bang,” which started with the discovery of deep neural networks.
Three fundamental factors fueled this explosion: academic breakthroughs in deep learning, the widespread availability of big data, and the novel application of GPUs to accelerate deep learning development and training.
Where computer scientists used to specify each AI instruction, algorithms can now write other algorithms, software can write software, and computers can learn on their own. This marked the beginning of the machine learning era.
And over the last decade, deep learning has migrated from academia to commerce, carried by the next four waves of growth.
The Cloud
The first businesses to use AI were large tech companies with the scientific know-how and computing resources to adapt neural networks to benefit their customers. They did so using the cloud — the second wave of AI computing.
Google, for example, applied deep learning to natural language processing to offer Google Translate. Facebook applied AI to identify consumer goods from images to make them shoppable. Through these types of cloud applications, Google, Amazon and Microsoft introduced many of AI’s first real-world applications.
Soon, these large tech companies created infrastructure-as-a-service platforms, unleashing the power of public clouds for enterprises and startups alike, and driving AI adoption further.
Now, companies of all sizes rely on the cloud to get started with AI quickly and affordably. It offers an easy onramp for companies to deploy AI, allowing them to focus on developing and training models, instead of building underlying infrastructure.
Enterprise AI
As tools are developed to make AI more accessible, large enterprises are embracing the technology to improve the quality, safety and efficiency of their workflows — and leading the third wave of AI computing. Data scientists in finance, healthcare, environmental services, retail, entertainment and other industries started training neural networks in their own data centers or the cloud.
For example, conversational AI chatbots enhance call centers, and fraud-detection AI monitors unusual activity in online marketplaces. Computer vision acts as a virtual assistant for mechanics, doctors and pilots, providing them with information to make more accurate decisions.
While this wave of AI computing has widespread applications and garners headlines each week, it’s just getting started. Companies are investing heavily in data scientists who can prepare data to train models and machine learning engineers who can create and automate AI training and deployment pipelines.
The Edge
The fourth wave pushes AI from the cloud or data center to the edge, to places like factories, hospitals, airports, stores, restaurants and power grids. The advent of 5G makes it easier to deploy and manage edge computing devices almost anywhere. This has created an explosive opportunity for AI to transform workplaces and for enterprises to realize the value of data from their end users.
With the adoption of IoT devices and advances in compute infrastructure, the proliferation of big data allows enterprises to create and train AI models to be deployed at the edge, where end users are located.
This wave requires machine learning engineers and data scientists to consider the design constraints of AI inference at the edge. Such limits include connectivity, storage, battery power, compute power and physical access for maintenance. Designs must also align with the needs of business owners, IT teams and security operations to better ensure the success of deployments.
Edge AI is also in its early days, but already used across many industries. Computer vision monitors factory floors for safety infractions, scans medical images for anomalous growths and drives cars safely down the freeway. The potential for new applications is limitless.
Autonomy
The fifth wave of AI will be the rise of autonomy—the evolution of AI to the point where it can navigate mobile machinery without human intervention. Cars, trucks, ships, planes, drones and other robots will operate without human piloting. For this to unfold, the network connectivity of 5G, the power of accelerated computing, and continued innovation in the capabilities of neural networks are necessary.
Autonomous AI is making headway, driven by the pandemic, global supply chain constraints and the related need for automation for efficiency in business processes.
Incorporating domains of engineering beyond deep learning, autonomous AI requires machine learning engineers to collaborate with robotics engineers. Together, they work to fulfill the four pillars of a robotics system workflow: collecting and generating ground-truth data, creating the AI model, simulating with a digital twin and operating the robot in the real world.
For robotics, simulation capabilities are especially important in modeling and testing all possible corner cases to mitigate the safety risks of deploying robots in the real world.
Autonomous machines also face novel challenges around deployment, management and security that require coordination across teams in engineering, operations, manufacturing, networking, security and compliance.
Getting Started With AI
Starting with the big bang of AI, the industry has grown quickly and spawned further waves of computing, including cloud services, enterprise AI, edge AI and autonomous machines. These advancements are carrying AI from laboratories to living rooms, improving businesses and the daily lives of consumers.
NVIDIA has spent decades building the computational products and software necessary to enable the AI ecosystem to drive these waves of growth. In addition to developing and implementing AI within the company, NVIDIA has helped countless enterprises, startups, factories, healthcare firms and more to adopt, implement and scale their own AI initiatives.
Whether starting an initial AI project, transitioning a team into AI workloads or looking at infrastructure blueprints and expansions, set your AI projects up for success.
The 2021 hack of Colonial Pipeline, the biggest fuel pipeline in the United States, ended with thousands of panicked Americans hoarding gas and a fuel shortage across the eastern seaboard. Basic cybersecurity failures let the hackers in, and then the company made the unilateral decision to pay a $5 million ransom and shut down much of the east coast’s fuel supply without consulting the US government until it was time to clean up the mess.
From across the Atlantic, Ciaran Martin looked on in baffled amazement.
“The brutal assessment of the Colonial hack is that the company made decisions off of narrow commercial self-interest, everything else is for the federal government to pick up,” says Martin, previously the United Kingdom’s top cybersecurity official.
Now some of the US’s top cybersecurity officials—including the White House’s current national cyber director—say the time has come for a stronger government role and regulation in cybersecurity so that fiascos like Colonial don’t happen again.
The change in tack comes just as the war in Ukraine, and the heightened threat of new cyberattacks from Russia, is forcing the White House to rethink how it keeps the nation safe.
“We’re at an inflection point,” Chris Inglis, the White House’s national cyber director and Biden’s top advisor on cybersecurity, tells MIT Technology Review in his first interview since Russia’s invasion of Ukraine. “When critical functions that serve the needs of society are at issue, some things are just not discretionary.”
The White House’s new cybersecurity strategy consists of stronger government oversight, rules mandating that organizations meet minimum cybersecurity standards, closer partnerships with the private sector, a move away from the current market-first approach, and enforcement to make sure any new rules are followed. It will take its cue from some of the nation’s most famous regulatory landmarks, such as the Clean Air Act or the formation of the Food and Drug Administration.
With looming threats from Russian hackers, the FCC is planning for the prospect of Russians hijacking internet traffic, a tactic they’ve seen Moscow employ in the past. A new FCC initiative, announced March 11, aims to investigate whether US telecom companies are doing enough to secure themselves against the threat. It’s a real test for the agency, however, because it doesn’t have the power to force companies to comply; it is relying on the prospect of a national security crisis to get them to toe the line.
For many officials, this almost total reliance on the goodwill of the market to keep citizens safe cannot continue.
“The purely voluntary approach [to cybersecurity] simply has not gotten us to where we need to be, despite decades of effort,” says Suzanne Spaulding, previously a senior Obama administration cybersecurity official. “Externalities have long justified regulation and mandates such as with pollution and highway safety.”
Crucially, the White House’s top officials concur. “I’m a strong fan of what Suzanne says and I agree with her,” says Inglis.
Without a dramatic change, advocates argue, history will repeat itself.
“It’s no secret that companies don’t want strong cybersecurity rules,” says Senator Ron Wyden, one of Congress’s loudest voices on cybersecurity and privacy issues. “That’s how our country got where it is on cybersecurity. So I’m not going to pretend that changing the status quo is going to be easy. But the alternative is to let hackers from Russia and China and even North Korea run wild in critical systems all across America. I sincerely hope the next hack doesn’t cause more damage than the Colonial Pipeline breach, but unless Congress gets serious it’s almost inevitable.”
A shift won’t be easy. Many experts, both inside and outside government, worry that poorly written regulation could do more harm than good and some officials have misgivings about regulators’ lack of cybersecurity expertise. For example, the Transportation Security Administration’s recent cyber regulations on pipelines were criticized loudly by some as “screwed up” due to what several critics say are inflexible, inaccurate rules that cause more problems than they solve. The detractors point to it as the result of a regulator with a huge remit but not nearly enough time, resources, and expert staff to do the job right.
“TSA maintains regular and frequent contact with owners and operators, and many of these pipeline companies appreciate the significance and pace of this public-private endeavor for improvements in protection and resilience against future cyberattacks,” says R. Carter Langston, a TSA spokesperson, who disputes critics of the pipeline regulation.
Glenn Gerstell, who was general counsel at the National Security Agency until 2020, argues that the current scattershot approach, in which a host of different regulators each work on their own specific sectors, doesn’t work, and that the US needs one central cybersecurity authority with the expertise and resources to scale across different critical industries.
Pushback against the pipeline regulations signals how difficult the process might be. But despite that, there is a growing consensus that the status quo—a litany of security failures and perverse incentives—is unsustainable.
Landmark law
The Colonial Pipeline incident proved what many cyber experts already know: most attacks are the result of opportunistic hackers exploiting years-old problems that companies fail to invest in and solve.
“The good news is that we actually know how to solve these problems,” says Gerstell. “We can fix cybersecurity. It may be expensive and difficult but we know how to do it. This is not a technology problem.”
Another major recent cyberattack proves the point again: the SolarWinds campaign, in which Russian hackers compromised the US government and major companies through the software supply chain, could have been neutralized if the victims had followed well-known cybersecurity standards.
“There’s a tendency to hype the capabilities of the hackers responsible for major cybersecurity incidents, practically to the level of a natural disaster or other so-called acts of God,” Wyden says. “That conveniently absolves the hacked organizations, their leaders, and government agencies of any responsibility. But once the facts come out, the public has seen repeatedly that the hackers often get their initial foothold because the organization failed to keep up with patches or correctly configure their firewalls.”
It’s clear to the White House that many businesses do not and will not invest enough in cybersecurity on their own. In the past six months, the administration has enacted new cybersecurity rules for banks, pipelines, rail systems, airlines, and airports. Biden signed a cybersecurity executive order last year to bolster federal cybersecurity and impose security standards on any company making sales to the government. Changing the private sector has always been the more challenging task and, arguably, the more important one. The vast majority of critical infrastructure and technology systems belong to the private sector.
Most of the new rules have amounted to very basic requirements and a light government touch—yet they’ve still received pushback from the companies. Even so, it’s clear that more is coming.
“There are three major things that are needed to fix the ongoing sorry state of US cybersecurity,” says Wyden. “Mandatory minimum cybersecurity standards enforced by regulators; mandatory cybersecurity audits, performed by independent auditors who are not picked by the companies they are auditing, with the results delivered to regulators; and steep fines, including jail time for senior execs, when a failure to practice basic cyber hygiene results in a breach.”
The new mandatory incident reporting regulation, which became law on Tuesday, is seen as a first step. The law requires private companies to quickly share information about threats that they used to keep secret—even though that exact information can often help build a stronger collective defense.
Previous attempts at regulation have failed but the latest push for a new reporting law gained steam due to key support from corporate giants like Mandiant CEO Kevin Mandia and Microsoft president Brad Smith. It’s a sign that private sector leaders now see regulation as both inevitable and, in key areas, beneficial.
Inglis emphasizes that crafting and enforcing new rules will require close collaboration at every step between government and the private companies. And even from inside the private sector, there is agreement that change is needed.
“We’ve tried purely voluntary for a long time now,” says Michael Daniel, who leads the Cyber Threat Alliance, a collection of tech companies sharing cyber threat information to form a better collective defense. “It’s not going as fast or as well as we need.”
The view from across the Atlantic
From the White House, Inglis argues that the United States has fallen behind its allies. He points to the UK’s National Cyber Security Centre (NCSC) as a pioneering government cybersecurity agency that the US needs to learn from. Ciaran Martin, the founding CEO of the NCSC, views the American approach to cyber with confused amazement.
“If a British energy company had done to the British government what Colonial did to the US government, we’d have torn strips off them verbally at the highest level,” he says. “I’d have had the prime minister calling the chairman to say, ‘What the fuck do you think you’re doing paying a ransom and switching off this pipeline without telling us?’”
Under the UK’s cyber regulations, banks must be resilient against both a global financial shock and cyber stresses. The UK has also focused stronger regulation on telecoms after a major British telecom was “completely owned” by Russian hackers, says Martin, who adds that the new security rules make that telecom’s previous security failures illegal.
On the other side of the Atlantic, the situation is different. The Federal Communications Commission, which oversees telecommunications and broadband in the US, had its regulatory power significantly rolled back during the Trump presidency and relies mostly on voluntary cooperation from internet giants.
The UK’s approach of tackling specific industries one at a time by building on the regulatory powers they already have, as opposed to a single new centralized law that covers everything, is similar to how the Biden White House strategy on cyber will work.
“We have to exhaust the [regulation] authorities we already have,” Inglis says.
For Wyden, the White House strategy signals a much needed change.
“Federal regulators, across the board, have been afraid to use the authority they have or to ask Congress for new authorities to regulate industry cybersecurity practices,” he says. “It’s no wonder that so many industries have atrocious cybersecurity. Their regulators have essentially let the companies regulate themselves.”
Why the cybersecurity market fails
There are three fundamental reasons why the cybersecurity market, worth hundreds of billions of dollars and growing globally, falls short.
First, companies have not figured out how cybersecurity makes them money, Daniel says. The market fails at measuring cybersecurity and, more importantly, often cannot connect it to a company’s bottom line, so companies often can’t justify spending the necessary money.
The second reason is secrecy. Companies have not had to report hacks, so crucial data about big hacks has been kept locked away to protect companies from bad press, lawsuits, and lawmakers.
Third is the problem of scale. The price that the government and society paid for the Colonial hack went well beyond the costs the company itself bore. Just like with pollution, “the costs don’t show up on your bottom line as a business,” Spaulding says, so the market incentives to fix the problems are weak.
Advocates for reform say that a stronger government hand can change the equation on all of that, exactly the way reform has in dozens of industries over the last century.
Gerstell sees pressure building slowly to do something different than the status quo.
“I have never seen such near unanimity and awareness ever before,” says Gerstell. “This looks and feels different. Whether it’s enough to really push change is not yet clear. But the temperature is increasing.”
Inglis points to the nearly $2 billion in cybersecurity money from Biden’s 2021 $1 trillion infrastructure bill as a “once in a generation opportunity” for the government to step up on cybersecurity and privacy.
“We have to make sure we don’t overlook the stunning opportunities we have to invest in the resilience and robustness of digital infrastructure,” Inglis argues. “We have to ask, what are the systemically critical functions that our society depends on? Will market forces alone attend to that? And when that falls short, how do we determine what we should do? That’s the course ahead for us. It doesn’t need to be a process that lasts years. We can do this with a sense of urgency.”
WASHINGTON: Flashy programs like hypersonic systems or altering human skin to be less attractive to mosquitoes may get more public attention, but figures released by the military’s fringe R&D department today reveal that its money is really pouring into microelectronics.
The Defense Advanced Research Projects Agency (DARPA) plans to spend some $896 million on microelectronics, a total that is more than the combined figures for its second and third big money investment areas — biotech and artificial intelligence, respectively, at about $410 million each — in fiscal 2023, according to slides presented today by DARPA Director Stefanie Tompkins. Cyber projects come in fourth at $184 million, followed by hypersonics at $143 million and quantum research at $90 million.
The investment in microelectronics is part of DARPA’s now five-year-old Electronics Resurgence Initiative, which Tompkins said was about “essentially bringing back US leadership in microelectronics.” Tompkins presented the budget figures today to industry officials during a webinar hosted by the National Defense Industrial Association.
That initiative, which Tompkins said was on the cusp of being updated to ERI 2.0, began in 2017 after DARPA said the military was suffering “limited […] access to leading-edge electronics, challenging U.S. economic and security advantages.”
The concern over microelectronics — and its vulnerability to supply chain disruption — was only made more dire in the wake of the COVID-19 pandemic. In response, the Biden administration aggressively pushed US investment in domestic chip production as a priority area, with the military, already concerned about the issue, happy to go along.
In February, Pentagon Undersecretary for Research and Engineering Heidi Shyu announced a different effort by the Defense Department to pursue “lab-to-fab” testing and prototyping hubs for microelectronics technology.
Shyu said microelectronics “support nearly all DoD activities, enabling capabilities such as GPS, radar, and command, control and communications systems.”
Just today the DoD publicized a Pentagon-led “microelectronics commons” that “aims to close the gaps that exist now which prevent the best ideas in technology from reaching the market.”
“The context was an understanding from really top-tier academics that investments that we were making in early-stage microelectronics research could not be proven in the facilities that we have here at home. We had to go instead off to overseas places, in particular [Asia], to do the work that is necessary to prove out the innovation,” Air Force chief scientist and DARPA alum Victoria Coleman is quoted as saying in the announcement. “That kind of blew my mind.”
In all, Tompkins said DARPA has about 250 “active” programs running across its areas of interest — closing and starting programs at about one per week. Sadly, she did not mention the mosquito bite research program in the presentation.
The COVID-19 pandemic has accelerated trends in the healthcare industry. A survey of Latin American physicians reveals how they believe the industry will change as the pandemic evolves and the future unfolds.
Healthcare systems in Latin America have played a critical role in protecting human lives during the COVID-19 pandemic. They are also among those industries most affected by the crisis due to unanticipated costs, the suspension of nonurgent healthcare services, emotional exhaustion of the healthcare workforce, and the untimely deaths of many healthcare workers who lost their lives in the line of duty.
About the authors
This article is a collaborative effort by Felipe Child, Roberto García, Laura Medford-Davis, Robin Roark, and Jorge Torres, representing views from McKinsey’s Healthcare Systems & Services Practice.
Moreover, the healthcare sector will experience a tremendous amount of long-term, disruptive change—sparked or accelerated by the pandemic—in the months and years ahead. Frontline healthcare providers can offer a unique perspective on the changes occurring in the industry and how the future of healthcare is likely to unfold.
McKinsey surveyed physicians across Latin America to understand their perceptions of how the COVID-19 crisis is reshaping healthcare systems in the region (see sidebar, “Our methodology”).
Physicians expect healthcare volumes to rebound in 2022, and supporting telehealth is their preferred strategy for improving patient access
Our methodology
To help our clients understand responses to COVID-19, McKinsey surveyed 517 general and specialty physicians across five countries in Latin America from October 14 to November 17, 2021, to gather insights on how the pandemic is shaping healthcare delivery in the region. The survey tested perspectives on five main topics: financial health, return of elective care, site of care delivery, telemedicine, and value-based care.
Participants included 517 physicians from Brazil (122 physicians), Chile (90 physicians), Colombia (94 physicians), Mexico (121 physicians), and Peru (90 physicians). Physician specialties included anesthesiology, cardiology, emergency medicine, general practice, general surgery, geriatrics, immunology, nephrology, obstetrics and gynecology, oncology, ophthalmology, orthopedics, otorhinolaryngology, and pediatrics.
COVID-19 caused the volume of medical services delivered to drop dramatically during 2020 and 2021: in all countries surveyed, 80 percent of physicians reported seeing fewer patients since the onset of the pandemic. However, physicians surveyed predict an approximately 25 percent increase in outpatient medical consultations, hospitalizations, and surgery volumes in 2022. Across the five countries surveyed, the most popular strategy for helping patients return to care is offering virtual options such as telehealth (Exhibit 1).
Telehealth is now a core offering that physicians expect to continue even after in-person visits return
The survey found that 60 to 84 percent of physicians were actively offering telehealth services. Among these physicians, 20 percent on average introduced telehealth during the pandemic, and 80 percent reported they plan to continue doing so in the future as part of a hybrid care model for at least a few hours a day or for one day a week (Exhibit 2).
In addition, more than half of all physicians surveyed consider telehealth to be less costly for their practice than in-person visits, and two-thirds view telehealth visits as effective, especially for follow-up visits and nonurgent primary-care consultations.
Physicians expect more care to shift out of hospitals, primarily into patients’ homes
The shift from hospitals to other locations, including patient homes, for in-person care has also accelerated since the start of the COVID-19 pandemic. Survey respondents expect that by 2025, care will take place in patients’ homes an average of 1.5 times to 2.5 times more often than it does today (Exhibit 3). They expect an average of 35 percent of all palliative-care, mental-health, primary-care, and physical-therapy services to be delivered in the home by 2025. For three services—intensive care, emergency care, and dialysis—they expect home care to more than double from today’s levels while remaining a smaller absolute share of services delivered in the home.
Physicians surveyed for the most part do not consider pharmacies to be a preferred location for most care services—although, in Mexico, 14 percent view them as appropriate for vaccine administration, and 7 percent consider them appropriate for physical therapy.
The pandemic has reaffirmed physicians’ commitment to their clinical careers, although their preferred practice model is shifting in some countries
Despite personal health risks associated with providing care to patients during the pandemic, the physicians surveyed in all countries are still committed to their careers; in fact, 90 percent say they are now less likely to leave medicine than they were before the pandemic. However, their interest in switching from independent practice to working for a healthcare system has increased (Exhibit 4).
Physicians anticipate a shift to value-based care in the future
Nearly half of the physicians surveyed believe value-based care improves the quality of patient care. Meanwhile, an average of 55 percent think their revenues will increase under value-based care, and an average of 20 percent believe it will have no effect on their revenues (Exhibit 5). However, more than one-third of physicians surveyed overall, and more than half in Mexico and Peru, also believe fee-for-service models improve quality of care. Physicians reported they are now more willing to use outcome-based payer contract agreements and believe that in two to three years, 15 percent more patients will be treated under these models than are today.
COVID-19 trends will affect healthcare stakeholders across the value chain, including payers, providers, patients, pharmacies, and investors
COVID-19 has accelerated change in the healthcare industry and catalyzed new trends across Latin America. Telehealth and the shift to at-home care have met or exceeded patient expectations, and they may unlock lower healthcare costs in the future, although potentially at the expense of traditional hospital revenue streams. Stakeholders across the healthcare value chain likely need to accelerate their digital transformations to reflect new care models while boosting efforts to enhance the patient experience and incorporating new care sites into their operations.
More physicians now see the benefit of shifting from fee-for-service to value-based care to improve quality of care and their future financial stability. Healthcare ecosystem participants will need to develop the necessary capabilities to implement true value-based care models that benefit all stakeholders of the system. Furthermore, those most likely to succeed in this shifting environment will adopt hybrid ecosystems that embrace telehealth and at-home care when clinically appropriate, adding value to the healthcare system while providing patients with affordable care.
Felipe Child is a partner in McKinsey’s Bogotá office; Roberto García is an associate partner in the Mexico City office, where Jorge Torres is a consultant; Laura Medford-Davis, MD, is an associate partner in the Houston office; and Robin Roark, MD, is a partner in the Miami office.
The authors wish to thank Andrés Arboleda, Sebastian Gonzalez, Pollyana Lima, Romina Mendoza, Claudia Mones, Tiago Sanfelice, and Javier Valenzuela for their contributions to this article.
Story by Adelia Henderson, ODNI Office of Strategic Communications
The Office of the Director of National Intelligence turns 17 today! Since its founding in 2005, ODNI has led intelligence integration across the U.S. Intelligence Community in a variety of ways. In honor of ODNI’s 17th anniversary, here are some notable moments from our history.
Founding of ODNI
After the September 11th terrorist attacks, Congress passed the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA). The act created the Office of the Director of National Intelligence to oversee the Intelligence Community by improving integration and information sharing.
On April 21, 2005, Ambassador John Negroponte and Gen. Michael Hayden were sworn in as the first Director of National Intelligence and Principal Deputy Director of National Intelligence, respectively, following Senate confirmation. ODNI began operations on April 22, 2005. The first year consisted of several office moves, as ODNI was first located in the Old Executive Office Building. The agency then moved to the New Executive Office Building before landing at present-day Joint Base Anacostia-Bolling in 2006.
The National Counterproliferation Center is Created
Under the IRTPA and based on recommendations from the Weapons of Mass Destruction Commission, DNI Negroponte announced the creation of the National Counterproliferation Center (NCPC) in December 2005. Today, NCPC helps counter threats against the United States from the proliferation of chemical, biological, radiological and nuclear weapons and the missiles capable of delivering them.
Founding of IARPA
The Intelligence Advanced Research Projects Activity (IARPA) was established under ODNI in 2007. IARPA invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges facing the agencies and disciplines of the Intelligence Community.
ODNI Moves to Liberty Crossing
In 2008, ODNI moved to its current headquarters in McLean, Virginia. A new building was constructed next to the National Counterterrorism Center to serve as ODNI’s headquarters. The two buildings are shaped like an “L” and “X”, and the site is collectively referred to as Liberty Crossing. NCTC is aligned under ODNI and plays a vital role in protecting our country and U.S. interests around the world from the threat of terrorism.
ODNI Employee Resource Groups Begin in 2014
As it approached the end of its first decade, ODNI’s approach to cultivating a diverse workforce evolved to establish Employee Resource Groups in 2014, consistent with the Principles of Professional Ethics for the Intelligence Community. These groups foster a unified culture across ODNI and advocate for a more inclusive workplace aligned with ODNI’s mission, vision and values. Currently, there are 10 active, employee-led ERGs.
The National Counterintelligence and Security Center is Created
DNI Clapper established the National Counterintelligence and Security Center (NCSC) in 2014. The existing Office of the National Counterintelligence Executive was combined with the Center for Security Evaluation, the Special Security Center and the National Insider Threat Task Force to integrate and align counterintelligence and security mission areas.
Intelligence Community Campus – Bethesda Opens
As ODNI matured over the years, so did its facilities. The campus originally served as a facility for the National Geospatial-Intelligence Agency; ODNI acquired the Intelligence Community Campus – Bethesda in 2012. The site underwent renovations and officially opened in 2015.
The U.S. Space Force Joins the Intelligence Community
The country’s newest Armed Forces branch officially became the IC’s 18th agency on January 8, 2021. Space Force was the first new organization to join the IC since 2006, when the Drug Enforcement Administration joined. Space Force’s addition advances strategic change across the national security space enterprise.
Avril Haines Sworn in as DNI
On January 21, 2021, Avril Haines became the seventh Senate-confirmed Director of National Intelligence. She is the first woman to lead the IC and previously served as deputy director of the CIA.
The National Intelligence University Joins ODNI
In June 2021, the National Intelligence University transitioned from the Defense Intelligence Agency to ODNI. NIU is the leading institution for intelligence education, in-depth research and interagency engagement, offering classified courses at the undergraduate and graduate levels.
Dr. Stacey Dixon Sworn in as PDDNI
Dr. Stacey Dixon began serving as the sixth Senate-confirmed Principal Deputy Director of National Intelligence on August 4, 2021. She is the first person of color confirmed to this position and the highest-ranking African American woman in the IC.
This story is the fourth and final part of MIT Technology Review’s series on AI colonialism, the idea that artificial intelligence is creating a new colonial world order. It was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. Read the rest of the series here.
In the back room of an old and graying building in the northernmost region of New Zealand, one of the most advanced computers for artificial intelligence is helping to redefine the technology’s future.
Te Hiku Media, a nonprofit Māori radio station run by life partners Peter-Lucas Jones and Keoni Mahelona, bought the machine at a 50% discount to train its own algorithms for natural-language processing. It’s now a central part of the pair’s dream to revitalize the Māori language while keeping control of their community’s data.
Mahelona, a native Hawaiian who settled in New Zealand after falling in love with the country, chuckles at the irony of the situation. “The computer is just sitting on a rack in Kaitaia, of all places—a derelict rural town with high poverty and a large Indigenous population. I guess we’re a bit under the radar,” he says.
The project is a radical departure from the way the AI industry typically operates. Over the last decade, AI researchers have pushed the field to new limits with the dogma “More is more”: Amass more data to produce bigger models (algorithms trained on said data) to produce better results.
The approach has led to remarkable breakthroughs—but to costs as well. Companies have relentlessly mined people for their faces, voices, and behaviors to enrich bottom lines. And models built by averaging data from entire populations have sidelined minority and marginalized communities even as they are disproportionately subjected to the technology.
Over the years, a growing chorus of experts have argued that these impacts are repeating the patterns of colonial history. Global AI development, they say, is impoverishing communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires.
Peter-Lucas Jones (left) and Keoni Mahelona (right) attend an Indigenous AI Workshop in 2019.
This has been particularly apparent for artificial intelligence and language. “More is more” has produced large language models with powerful autocomplete and text analysis capabilities now used in everyday services like search, email, and social media. But these models, built by hoovering up large swathes of the internet, are also accelerating language loss, in the same way colonization and assimilation policies did previously.
Only the most common languages have enough speakers—and enough profit potential—for Big Tech to collect the data needed to support them. Relying on such services in daily work and life thus coerces some communities to speak dominant languages instead of their own.
“Data is the last frontier of colonization,” Mahelona says.
In turning to AI to help revive te reo, the Māori language, Mahelona and Jones, who is Māori, wanted to do things differently. They overcame resource limitations to develop their own language AI tools, and created mechanisms to collect, manage, and protect the flow of Māori data so it won’t be used without the community’s consent, or worse, in ways that harm its people.
Now, as many in Silicon Valley contend with the consequences of AI development today, Jones and Mahelona’s approach could point the way to a new generation of artificial intelligence—one that does not treat marginalized people as mere data subjects but reestablishes them as co-creators of a shared future.
Like many Indigenous languages globally, te reo Māori began its decline with colonization.
After the British laid claim to Aotearoa, the te reo name for New Zealand, in 1840, English gradually took over as the lingua franca of the local economy. In 1867, the Native Schools Act then made English the only language in which Māori children could be taught, as part of a broader policy of assimilation. Schools began shaming and even physically beating Māori students who attempted to speak te reo.
In the following decades, urbanization broke up Māori communities, weakening centers of culture and language preservation. Many Māori also chose to leave in search of better economic opportunities. Within a generation, the proportion of te reo speakers plummeted from 90% to 12% of the Māori population.
In the 1970s, alarmed by this rapid decline, Māori community leaders and activists fought to reverse the trend. They created childhood language immersion schools and adult learning programs. They marched in the streets to demand that te reo have equal status with English.
In 1987, 120 years after actively supporting its erasure, the government finally passed the Māori Language Act, declaring te reo an official language. Three years later, it began funding the creation of iwi, or tribal, radio stations like Te Hiku Media, to publicly broadcast in te reo to increase the language’s accessibility.
Many Māori I speak to today identify themselves in part by whether or not their parents or grandparents spoke te reo Māori. It’s considered a privilege to have grown up in an environment with access to intergenerational language transmission.
This is the gold standard for language preservation: learning through daily exposure as a child. Learning as a teen or adult in an academic setting is not only harder; it is also narrower. A textbook often teaches only a single, or “standard,” version of te reo, when each iwi, or tribe, has unique accents, idiomatic expressions, and embedded regional histories.
Language, in other words, is more than just a tool for communication. It encodes a culture as it’s passed from parent to child, from child to grandchild, and evolves through those who speak it and inhabit its meaning. It also influences as much as it is influenced, shaping relationships, worldviews, and identities. “It’s how we think and how we express ourselves to each other,” says Michael Running Wolf, another Indigenous technologist who’s using AI to revive a rapidly disappearing language.
To preserve a language is thus to preserve a cultural history. But in the digital age especially, it takes constant vigilance to yank a minority language out of its downward trajectory. Every new communication space that doesn’t support it forces speakers to choose between using a dominant language and forgoing opportunities in the larger culture.
“If these new technologies only speak Western languages, we’re now excluded from the digital economy,” says Running Wolf. “And if you can’t even function in the digital economy, it’s going to be really hard for [our languages] to thrive.”
With the advent of artificial intelligence, language revitalization is now at a crossroads. The technology can further codify the supremacy of dominant languages, or it can help minority languages reclaim digital spaces. This is the opportunity that Jones and Mahelona have seized.
Long before Jones and Mahelona embarked on this journey, they met over barbecue at their swimming club’s member gathering in Wellington. The two instantly hit it off. Mahelona took Jones on a long bike ride. “The rest is history,” Mahelona says.
In 2012, the pair moved back to Jones’s hometown of Kaitaia, where Jones became CEO of Te Hiku Media. Because of its isolation, the region remains one of the most economically impoverished of Aotearoa, but by the same token, its Māori population is among the country’s best protected.
Over its 20-odd years of broadcasting history, Te Hiku had amassed a rich archive of te reo audio materials. It includes gems like a recording of Jones’s own grandmother Raiha Moeroa, born in the late 19th century, whose te reo remained largely untouched by colonial influence.
Jones saw an opportunity to digitize the archive and create a more modern equivalent of intergenerational language transmission. Most Māori no longer live with their iwis and can’t rely on nearby kin for daily te reo exposure. With a digital library, however, they’d be able to listen to te reo from bygone elders whenever and wherever they wanted.
The local Māori tribes granted him permission to proceed, but Jones needed a place to host the materials online. Neither he nor Mahelona liked the idea of uploading them to Facebook or YouTube. It would give the tech giants license to do what they wanted with the precious data.
(A few years later, companies would indeed begin working with Māori speakers to acquire such data. Duolingo, for example, sought to build language-learning tools that could then be marketed back to the Māori community. “Our data would be used by the very people that beat that language out of our mouths to sell it back to us as a service,” Jones says. “It’s just like taking our land and selling it back to us,” Mahelona adds.)
The only alternative was for Te Hiku to build its own digital hosting platform. With his engineering background, Mahelona agreed to lead the project and joined as CTO.
The digital platform became Te Hiku’s first major step to establishing data sovereignty—a strategy in which communities seek control over their own data in an effort to ensure control over their future. For Māori, the desire for such autonomy is rooted in history, says Tahu Kukutai, a cofounder of the Māori data sovereignty network. During the earliest colonial censuses, after a series of devastating wars in which they killed thousands of Māori and confiscated their land, the British collected data on tribal numbers to track the success of the government’s assimilation policies.
Data sovereignty is thus the latest example of Indigenous resistance—against colonizers, against the nation-state, and now against big tech companies. “The nomenclature might be new, the context might be new, but it builds on a very old history,” Kukutai says.
In 2016, Jones embarked on a new project: to interview native te reo speakers in their 90s before their language and knowledge was lost to future generations. He wanted to create a tool that would display a transcription alongside each interview. Te reo learners would then be able to hover on words and expressions to see their definitions.
But few people had enough mastery of the language to manually transcribe the audio. Inspired by voice assistants like Siri, Mahelona began looking into natural-language processing. “Teaching the computer to speak Māori became absolutely necessary,” Jones says.
But Te Hiku faced a chicken-and-egg problem. To build a te reo speech recognition model, it needed an abundance of transcribed audio. To transcribe the audio, it needed the advanced speakers whose small numbers it was trying to compensate for in the first place. There were, however, plenty of beginning and intermediate speakers who could read te reo words aloud better than they could recognize them in a recording.
So Jones and Mahelona, along with Te Hiku COO Suzanne Duncan, devised a clever solution: rather than transcribe existing audio, they would ask people to record themselves reading a series of sentences designed to capture the full range of sounds in the language. To an algorithm, the resulting data set would serve the same function. From those thousands of pairs of spoken and written sentences, it would learn to recognize te reo syllables in audio.
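Speech recognition pipelines typically consume such pairs as a manifest that maps each audio clip to its transcript. As a rough illustration, here is what assembling that kind of data set might look like in Python; the directory layout, file names, and CSV columns are hypothetical, not Te Hiku's actual format.

```python
import csv
import wave
from pathlib import Path

def clip_seconds(path: Path) -> float:
    """Duration of a WAV clip in seconds."""
    with wave.open(str(path), "rb") as w:
        return w.getnframes() / w.getframerate()

# Hypothetical layout: one .wav per recorded sentence, with the
# prompted sentence stored alongside it in a matching .txt file.
rows, total = [], 0.0
for audio in sorted(Path("recordings").glob("*.wav")):
    transcript = audio.with_suffix(".txt").read_text(encoding="utf-8").strip()
    secs = clip_seconds(audio)
    rows.append((str(audio), secs, transcript))
    total += secs

# A CSV manifest like this is the usual input to speech-model training.
with open("manifest.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["wav_filename", "wav_seconds", "transcript"])
    writer.writerows(rows)

print(f"{len(rows)} clips, {total / 3600:.1f} hours of speech")
```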
The team announced a competition. Jones, Mahelona, and Duncan contacted every Māori community group they could find, including traditional kapa haka dance troupes and waka ama canoe-racing teams, and revealed that whichever one submitted the most recordings would win a $5,000 grand prize.
The entire community mobilized. Competition got heated. One Māori community member, Te Mihinga Komene, an educator and advocate of using digital technologies to revitalize te reo, recorded 4,000 phrases alone.
Money wasn’t the only motivator. People bought into Te Hiku’s vision and trusted it to safeguard their data. “Te Hiku Media said, ‘What you give us, we’re here as kaitiaki [guardians]. We look after it, but you still own your audio,’” says Te Mihinga. “That’s important. Those values define who we are as Māori.”
Within 10 days, Te Hiku amassed 310 hours of speech-text pairs from some 200,000 recordings made by roughly 2,500 people, an unheard-of level of engagement among researchers in the AI community. “No one could’ve done it except for a Māori organization,” says Caleb Moses, a Māori data scientist who joined the project after learning about it on social media.
The amount of data was still small compared with the thousands of hours typically used to train English language models, but it was enough to get started. Using the data to bootstrap an existing open-source model from the Mozilla Foundation, Te Hiku created its very first te reo speech recognition model with 86% accuracy.
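Under the hood, speech recognition models of this kind are commonly trained with a connectionist temporal classification (CTC) loss, which lets the network learn on its own how audio frames line up with written characters. Here is a heavily simplified PyTorch sketch of a single training step; the stand-in model, alphabet, and tensor shapes are illustrative assumptions, not Te Hiku's or Mozilla's actual code.

```python
import torch
import torch.nn as nn

# Toy alphabet for illustration; index 0 is the special CTC "blank".
alphabet = ["<blank>", " ", "a", "e", "h", "i", "k", "m", "n", "o", "r", "t", "u", "w"]

# A stand-in acoustic model: audio features in, per-frame character scores out.
model = nn.Sequential(
    nn.Linear(80, 128),              # 80 mel features per frame (a common choice)
    nn.ReLU(),
    nn.Linear(128, len(alphabet)),
)
ctc = nn.CTCLoss(blank=0)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One fake batch: 4 clips, 200 feature frames each, transcripts of 30 characters.
feats = torch.randn(4, 200, 80)
targets = torch.randint(1, len(alphabet), (4, 30))
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

log_probs = model(feats).log_softmax(dim=-1)  # (batch, time, chars)
log_probs = log_probs.transpose(0, 1)         # CTCLoss expects (time, batch, chars)
loss = ctc(log_probs, targets, input_lengths, target_lengths)

opt.zero_grad()
loss.backward()
opt.step()
print(f"CTC loss: {loss.item():.2f}")
```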
From there, it branched out into other language AI technologies. Mahelona, Moses, and a newly assembled team created a second algorithm for auto-tagging complex te reo phrases, and a third for giving real-time feedback to te reo learners on the accuracy of their pronunciation. The team even experimented with voice synthesis to create the te reo equivalent of a Siri, though it ultimately didn’t clear the quality bar to be deployed.
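At its simplest, a pronunciation feedback tool of the kind described can compare what a recognizer heard against the phrase the learner was supposed to say and score the difference. The toy Python sketch below uses word-level edit distance; it is purely illustrative, since Te Hiku has not published its algorithm at this level of detail.

```python
def edit_distance(a: list[str], b: list[str]) -> int:
    """Minimum insertions, deletions, and substitutions to turn a into b."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def pronunciation_score(expected: str, recognized: str) -> float:
    """Crude 0-to-1 score: 1.0 means the recognizer heard exactly the target."""
    ref, hyp = expected.split(), recognized.split()
    return max(0.0, 1.0 - edit_distance(ref, hyp) / max(len(ref), 1))

# Hypothetical learner attempt at a te reo greeting, with one word missed.
print(pronunciation_score("kia ora koutou katoa", "kia ora katoa"))  # 0.75
```

A real system would score at the level of syllables or phonemes rather than whole words, but the shape of the feedback loop is the same.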
Along the way, Te Hiku established new data sovereignty protocols. Māori data scientists like Moses are still few and far between, but those who join from outside the community cannot just use the data as they please. “If they want to try something out, they ask us, and we have a decision-making framework based on our values and our principles,” Jones says.
It can be challenging. The open-source, free-wheeling culture of data science is often antithetical to the practice of data sovereignty, as is the culture of AI. There have been times when Te Hiku has let data scientists go because they “just want access to our data,” Jones says. It now seeks to cultivate more Māori data scientists through internship programs and junior positions.
Te Hiku has since made most of its tools available as APIs through its new digital language platform, Papa Reo. It’s also working with Māori-led organizations like the educational company Afed Limited, which is building an app to help te reo learners practice their pronunciation. “It’s really a game changer,” says Cam Swaison-Whaanga, Afed’s founder, who is also on his own te reo learning journey. Students no longer have to feel shy about speaking aloud in front of teachers and peers in a classroom.
Te Hiku has begun working with smaller Indigenous populations as well. In the Pacific region, many share the same Polynesian ancestors as the Māori, and their languages have common roots. Using the te reo data as a base, a Cook Islands researcher was able to train an initial Cook Islands language model to reach roughly 70% accuracy using only tens of hours of data.
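Cross-lingual transfer of this kind usually means keeping the acoustic layers learned from te reo and re-fitting only the parts tied to the new language's alphabet, which is why tens of hours of data can be enough. Here is a minimal PyTorch sketch of the idea, reusing the toy model shape from the earlier example; the checkpoint name, alphabet, and layer sizes are all made-up assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical Cook Islands Māori alphabet, including the glottal stop.
ci_alphabet = ["<blank>", " ", "a", "e", "i", "k", "m", "n", "o", "p", "r", "t", "u", "v", "ʻ"]

# Same toy architecture as before: the lower layers learned general
# Polynesian acoustics from te reo data, so they are worth keeping.
model = nn.Sequential(
    nn.Linear(80, 128),
    nn.ReLU(),
    nn.Linear(128, 14),  # te reo output layer (toy size)
)
model.load_state_dict(torch.load("te_reo_toy.pt"))  # hypothetical checkpoint

# Swap the output layer for the new alphabet and freeze everything else,
# so a small Cook Islands data set is enough to adapt the model.
model[2] = nn.Linear(128, len(ci_alphabet))
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("2.")

opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
# ...then run the same CTC training loop as above on the new language's data.
```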
“It’s no longer just about teaching computers to speak te reo Māori,” Mahelona says. “It’s about building a language foundation for Pacific languages. We’re all struggling to keep our languages alive.”
But Jones and Mahelona know there will come a time when they will have to work with more than Indigenous communities and organizations. If they want te reo to truly be ubiquitous—to the point of having te reo–speaking voice assistants on iPhones and Androids—they’ll need to partner with big tech companies.
“Even if you have the capacity in the community to do really cool speech recognition or whatever, you have to put it in the hands of the community,” says Kevin Scannell, a computer scientist helping to revitalize the Irish language, who has grappled with the same trade-offs in his research. “Having a website where you can type in some text and have it read to you is important, but it’s not the same as making it available in everybody’s hand on their phone.”
Jones says Te Hiku is preparing for this inevitability. It created a data license that spells out the ground rules for future collaborations based on the Māori principle of kaitiakitanga, or guardianship. It will only grant data access to organizations that agree to respect Māori values, stay within the bounds of consent, and pass on any benefits derived from its use back to the Māori people.
The license has yet to be used by an organization other than Te Hiku, and there remain questions around its enforceability. But the idea has already inspired other AI researchers, like Kathleen Siminyu of Mozilla’s Common Voice project, which gathers voice donations to build public data sets for speech recognition in different languages. Right now those data sets can be downloaded for any purpose. But last year, Mozilla began exploring a license more similar to Te Hiku’s that would give greater control to language communities that choose to donate their data. “It would be great if we could tell people that part of contributing to a data set leads to you having a say as to how the data set is used,” she says.
Margaret Mitchell, the former co-lead of Google’s ethical AI team who conducts research on data governance and ownership practices, agrees. “This is exactly the kind of license we want to be able to develop more generally for all different kinds of technology. I would really like to see more of it,” she says.
In some ways, Te Hiku got lucky. Te reo can take advantage of English-centric AI technologies because it has enough similarity to English in key features like its alphabet, sounds, and word construction. The Māori are also a fairly large Indigenous community, which allowed them to amass enough language data and find data scientists like Moses to help make their vision a reality.
“Most other communities are not big enough for those happy accidents to occur,” says Jason Edward Lewis, a digital technologist and artist who co-organizes the Indigenous AI Network.
At the same time, he says, Te Hiku has been a powerful demonstration that AI can be built outside the wealthy profit centers of Silicon Valley—by and for the people it’s meant to serve.
[Photo: Te Hiku Media receives a New Zealand innovation award for its language revitalization work.]
The example has already motivated others. Michael Running Wolf and his wife, Caroline, also an Indigenous technologist, are working to build speech recognition for the Makah, an Indigenous people of the Pacific Northwest coast, whose language has only around a dozen remaining speakers. The task is daunting: the Makah language is polysynthetic, which means a single word, composed of multiple building blocks like prefixes and suffixes, can express an entire English sentence. Existing natural-language processing techniques may not be applicable.
Before Te Hiku’s success, “we didn’t even consider looking into it,” Caroline says. “But when we heard the amazing work they’re doing, it was just fireworks going off in our head: ‘Oh my God, it’s finally possible.’”
Mozilla’s Siminyu says Te Hiku’s work also carries lessons for the rest of the AI community. In the way the industry operates today, it’s easy for individuals and communities to be disenfranchised; value is seen to come not from the people who give their data but from the ones who take it away. “They say, ‘Your voice isn’t worth anything on its own. It actually needs us, someone with a capacity to bring billions together, for each to be meaningful,’” she says.
In this way, then, natural-language processing “is a nice segue into starting to figure out how collective ownership should work,” she adds. “Because regardless of how widely spoken they are, languages belong to a people.”
The Cybersecurity Maturity Model Certification (CMMC) program at the Department of Defense has gone through its share of changes and “evolutions” over the past year. Despite another looming regulatory process, DoD officials and contracting experts are indicating that the program is unlikely to undergo another major overhaul.
The CMMC 2.0 framework, released late last year, is currently going through a rulemaking process under Title 32 of the Code of Federal Regulations, which covers national defense. The program is also due for another regulatory cycle later this year under Title 48, which governs the Federal Acquisition Regulations System, but DoD’s Stacy Bostjanick said officials hope that any further changes will be minor or done in the context of a real, operational program, not a theoretical concept.
“My prayer is that once we get through this round [of rulemaking], CMMC will be a thing. Our anticipation is that we will be allowed to have another interim rule like last time. We’re hoping that that interim rule will go into effect by May,” said Bostjanick, the director of CMMC policy in the Office of the Undersecretary of Defense for Acquisition and Sustainment, during a panel discussion with SC Media at the AFCEA DC Cyber Mission Summit this week. “Once we get through this rulemaking process, we hope there will only be one more aspect that we’ll have to address and that will be international partners.”
The biggest change that came out of CMMC 2.0 was a concerted effort to recalibrate who would (and would not) require a third-party cybersecurity assessment.
Faced with a shortage of trained assessors and feedback in the form of hundreds of public comments from the contracting industry about the scope of the program, the Pentagon simplified the different levels of certification from five to three and specified that defense contractors who do not handle controlled unclassified information would be able to self-attest that they are meeting the government’s cybersecurity requirements.
Bostjanick said DoD estimates that roughly 80,000 companies will have to meet Level 2 maturity (which merged many of the requirements from Levels 2-4 in the previous plan). That change is “where there’s been a lot of conversation” with the contracting community.
However, defense contracting experts say that often contractors are unaware of whether they even handle CUI or misunderstand how the government classifies protected information. Even contracts for non-technical equipment, supplies and services end up being classified as controlled information because sometimes those requirements come in a package of information that include documents detailing sensitive designs or layouts for military facilities. If they’re not flagged, those same documents can end up flowing to subcontractors and other third parties.
“Unfortunately, we haven’t done a good job within the department training our program managers and contracting officers to identify [controlled unclassified information],” said Bostjanick.
Defense-contract watchers predict few CMMC changes
Some observers are predicting that the core elements of the program and its requirements are unlikely to change drastically. Jacob Horne, who works on CMMC compliance issues at Summit7, told SC Media that despite the handwringing and looming regulatory process, the program’s requirements remain tied to the National Institute of Standards and Technology’s Special Publication 800-171, which covers how contractors must handle and secure controlled unclassified information.
“The overall takeaway is that — much like the changes from 1.0 to 2.0 — there’s actually much less that can change than people think,” Horne said in an interview. “The majority of the burden and cost and impact that are facing companies with CUI stem from NIST, not the CMMC program.”
Additionally, he said many of the same complaints raised by industry around CMMC (namely that the cost burden and impact associated with the cybersecurity requirements will hurt the ability of small businesses to compete and create new barriers to entry) were similarly levied in previous regulatory processes around controlled unclassified information in 2013 and 2017.
To be clear, shrinking participation from smaller businesses (including startups that DoD often taps for innovation) is a real problem. An analysis by Amanda and Alex Bresler of PW Communications found that the total number of small businesses in the defense market shrank by nearly a quarter, 23%, over the last six years, from an estimated 68,000 companies in 2015 to about 52,000 in 2021. Bostjanick said the department is looking at developing a “cybersecurity-as-a-service” model that could provide smaller companies with higher-end defensive capabilities, though such services wouldn’t replace the need for a dedicated cybersecurity program at those companies.
Still, Horne said that the government has consistently dismissed those complaints in the past and has little reason to back off that position now or make broad exceptions.
“I don’t see anything in the language, in the indirect language of the DoD’s webinars and industry presentations, that indicate to me a fundamental policy shift from previous rulemaking on this subject,” said Horne. “If anything, I think the nature of the threat and the lack of implementation by the DIB and how bad this problem has languished, the government will be even less inclined to give out waivers and [plans of action and milestones].”
The National Cybersecurity Center of Excellence (NCCoE) announces the release of three related publications on trusted cloud and hardware-enabled security. The foundation of any data center or edge computing security strategy should be securing the platform on which data and workloads will be executed and accessed. The physical platform represents the first layer for any layered security approach and provides the initial protections to help ensure that higher-layer security controls can be trusted.
NIST Special Publication (SP) 1800-19 presents an example of a trusted hybrid cloud solution that demonstrates how trusted compute pools leveraging hardware roots of trust can provide the necessary security capabilities for cloud workloads in addition to protecting the virtualization and application layers. View the document.
Each of the reports below, NISTIR 8320B and NISTIR 8320C, is intended to be used as a blueprint or template that the general security community can use as an example proof-of-concept implementation.
NISTIR 8320B explains an approach based on hardware-enabled security techniques and technologies for safeguarding container deployments in multi-tenant cloud environments. View the document.
Draft NISTIR 8320C presents an approach for overcoming security challenges associated with creating, managing, and protecting machine identities, such as cryptographic keys, throughout their lifecycle. View the document.
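NISTIR 8320C’s point is to anchor that lifecycle (generate, protect at rest, rotate, retire) in hardware, which the sketch below deliberately does not attempt. As a purely software-level illustration of the same steps, here is a minimal sketch using the widely used Python cryptography package; the function names and passphrase handling are invented for the example.

```python
# Minimal machine-identity key lifecycle sketch: generate, encrypt at
# rest, rotate. Purely illustrative of the lifecycle NISTIR 8320C
# addresses; the report's approach roots these steps in hardware,
# which this software-only sketch does not.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def generate_key() -> rsa.RSAPrivateKey:
    """Create a fresh 3072-bit RSA machine-identity key."""
    return rsa.generate_private_key(public_exponent=65537, key_size=3072)

def protect_at_rest(key: rsa.RSAPrivateKey, passphrase: bytes) -> bytes:
    """Serialize the key encrypted under a passphrase (stand-in for an HSM/TPM)."""
    return key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(passphrase),
    )

def rotate(old_pem: bytes, passphrase: bytes) -> tuple[bytes, bytes]:
    """Replace an old key with a new one; callers must re-issue credentials."""
    old = serialization.load_pem_private_key(old_pem, password=passphrase)
    assert isinstance(old, rsa.RSAPrivateKey)  # sanity check before retiring
    new_pem = protect_at_rest(generate_key(), passphrase)
    return new_pem, old_pem  # retire old_pem once services cut over

pem = protect_at_rest(generate_key(), b"example-passphrase")
pem, retired = rotate(pem, b"example-passphrase")
print("rotated; new encrypted key is", len(pem), "bytes")
```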
We Want to Hear from You!
Review the draft NISTIR 8320C and submit comments online on or before June 6, 2022. You can also contact us at hwsec@nist.gov. We value and welcome your input and look forward to your comments.
Preston Dunlap, the Department of the Air Force’s first-ever chief architect officer, is set to leave the Pentagon in the coming weeks, he confirmed in a lengthy LinkedIn post on April 18—and he has a long list of recommendations for those coming after him on how to combat Defense Department bureaucracy.
Dunlap’s departure, first reported by Bloomberg, marks the latest exit by a high-ranking Air Force official tasked with modernizing the department’s approach to software and technology. In September 2021, Nicolas M. Chaillan, the first-ever chief software officer of the Air Force, announced his resignation, also on LinkedIn and also offering a candid assessment of the challenges facing DAF.
Dunlap first came to the Air Force in 2019, primarily tasked with overseeing the development and organization of the Advanced Battle Management System, the Air Force’s contribution to joint all-domain command and control—the so-called military Internet of Things that will connect sensors and shooters into one massive network.
Under Dunlap, progress on ABMS proceeded with numerous experiments. Dunlap also oversaw the development of an “integrated warfighting network” to allow small teams of Airmen serving in far-flung locations to use their work laptops on deployments.
“It’s been my honor to help our nation get desperately needed technology into the hands of our service members who place their lives on the line every day,” Dunlap wrote on LinkedIn. “Some of that technology was previously unimaginable before we developed new capabilities, and at other times it was previously unattainable—available commercially, yet beyond DOD’s grasp.”
Initially, Dunlap wrote, he signed on for two years in the Pentagon, before agreeing to extend his stay for a third year. Now, as he departs, he is joining Chaillan in pointing out the DOD’s shortcomings when it comes to innovation and pushing the department to revolutionize its approach, especially for adopting new technologies, so that it can, in his words, “defy gravity.”
“Not surprisingly to anyone who has worked for or with the government before, I arrived to find no budget, no authority, no alignment of vision, no people, no computers, no networks, a leaky ceiling, even a broken curtain,” Dunlap wrote.
In looking to break through bureaucracy, Dunlap wrote, he followed four key steps that he urged his successor to follow: shock the system, flip the acquisition script, just deliver already, and slay the valley of death and scale.
In doing so, he said, he sought to operate more like SpaceX, the aerospace company founded by Elon Musk that has earned plaudits for its fast-moving, innovative practices.
“By the time the government manages to produce something, it is too often obsolete; no business would ever survive this way, nor should it. Following a commercial approach, just like SpaceX, allowed me to accomplish a number of ‘firsts’ in DOD in under two years,” Dunlap wrote.
Among those firsts, Dunlap referenced the integration of artificial intelligence into military kill chains, interoperability of data and communications across different satellites and aircraft, the deployment of zero trust architecture, and the promotion of security in software development, known as DevSecOps.
In addition, Dunlap argued for a “reformatting” of the Pentagon’s acquisition enterprise, an oft-criticized process seen by many as antiquated. By leveraging commercial technologies, shifting the focus to outcomes instead of detailed requirements, investing more in outside innovators, and pushing forward at a concerted, rapid pace, the Pentagon can start to “regrow its thinning technological edge,” Dunlap wrote.
To help drive innovation and progress, Dunlap also pushed for flexibility—both in how the department works and connects, and in how it develops new systems. In particular, he argued for open systems and open architectures that allow new systems to rapidly adapt to and integrate new capabilities as they are developed, pointing to the B-21 Raider and Next-Generation Air Dominance programs as examples of that approach.
“We should never be satisfied,” Dunlap closed by writing. “We need this kind of progress at scale now, not tomorrow. So let’s be careful to not…
“Lull ourselves into complacency, when we should be running on all cylinders.
“Do things the same way, when we should be doing things better.
“Distract ourselves with process, when we should be focused on delivering product.
“Compete with each other, when we should be competing with China.
“Defend our turf, when we should be defending our country.
“Focus on input metrics, when we should be focused on output metrics.
“Buy the same things, when we should be investing in what we need.
“Be comfortable with the way things are, when we should be fighting for the way things should be.”
Dunlap’s departure comes at a seeming inflection point for the ABMS program he was tasked with overseeing. Air Force Secretary Frank Kendall has indicated he wants to take a different approach to the program, focusing more on specific operational impacts delivered quickly and less on experiments showing advanced capabilities.
“We can’t invest in everything, and we shouldn’t invest in improvements that don’t have clear operational benefit,” Kendall said March 3 at the AFA Warfare Symposium in Orlando, Fla. “We must be more focused on specific things with measurable value and operational impact.”
As part of that approach, Kendall has made it one of his organizational imperatives to more fully define the ABMS program’s goals and intended impacts.