Story updated 4/22/2022 at 10:25 am ET to include a clarification that Bostjanick was referring to May 2023 for when a new CMMC interim rule might go into effect.
WASHINGTON: The Pentagon is assessing whether to develop cloud service offerings to help contractors meet requirements for its cyber certification program, according to the Defense Department’s deputy chief information officer.
The Cybersecurity Maturity Model Certification (CMMC) program aims to strengthen the cybersecurity of the defense industrial base by holding contractors accountable for following best practices to protect their networks, but it can be an onerous undertaking for both the companies and their assessors. The Pentagon last November rolled out CMMC version 2.0, streamlining the program’s security tiers from five to three and changing some requirements for its first two levels.
David McKeown, DoD deputy chief information officer and senior information security officer, whose office leads the CMMC effort, said Tuesday at the AFCEA Cyber Mission Summit he’s looking for “innovative solutions” to help contractors meet at least 85 out of 110 controls in NIST Special Publication 800-171 in order to achieve certification required for Level 2 of CMMC.
“For instance, in the CMMC realm, rather than go out and assess each and every network of our industry partners, I’m kind of keen on establishing some sort of cloud services that either achieve many of the 110 controls in [NIST SP] 800-171 or all of them that industry partners can consume to store our data and safeguard our data without us having to go out onto your network,” McKeown said.
Pentagon CIO John Sherman in February said he hoped the upgraded CMMC program would raise the cybersecurity “waterline” across DoD to keep potential adversaries away from critical data.
“This is basic hygiene to raise the water level to make sure we can protect our sensitive data so that when our service members have to go into action, they’re not going to have an unfair position because our adversary’s already stolen key data and technologies that’ll put them at an advantage,” Sherman said at the AFCEA Space Force IT conference.
Meanwhile, CMMC’s policy director said Wednesday another interim rule for the program could come in May next year. The Pentagon released its first interim rule, which defined some mandatory compliance requirements, in September 2020 for the first version of CMMC, prompting hundreds of comments and criticism from industry regarding the timeframe and complexity of the program.
“Our anticipation is that we will be allowed to have another interim rule like we did last time,” Stacy Bostjanick, CMMC policy director for the Office of the Undersecretary of Defense for Acquisition and Sustainment, said. “We’re hoping that the interim rule will go into effect by May. In fact, my team is very frustrated with me today because I’m sitting here with you guys and they’re stuck in a room going through a rule that’s like hundreds of pages long.”
Once the rulemaking process is over, she said she hopes “there will be only one more aspect that we’ll have to address and that will be the international partners.”
“That will probably take some rulemaking effort,” Bostjanick said. “We’re working through how that’s going to work in getting that laying flat today.”
Markus Bernhardt is the Chief Evangelist at Obrizum, pioneering deeptech AI digital learning solutions for corporate learning.
Google Director of Research Peter Norvig famously stated in his keynote speech for the Association for Learning Technology Conference in 2007 that if you only had to read one research paper to learn about learning, it would be Benjamin Bloom’s The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.
That brilliant piece of advice had been given to Norvig by his friend Hal Abelson, an educator and professor at MIT, to aid his preparation for the keynote. In Bloom’s well-known and often-cited paper, he reports the outcome of an experiment comparing the efficacy of three types of teaching: the conventional lecture, the conventional lecture with regular testing and feedback, and one-to-one tuition.
Using the “straight lecture” as the mean, Bloom found an 84% increase in mastery above the mean for a “formative feedback” approach to teaching and an astonishing 98% increase in mastery for one-to-one tuition. While intuitively one might of course have expected one-to-one tuition to stand out in such an experiment, the measured impact compared to the other two methods must surely be considered staggering, and we can immediately see why Hal would have chosen to recommend this paper to Peter.
Highly personalized tuition for individuals or small groups, as well as many forms of training and practice, had of course been the preferred modality long before this piece of research, albeit extraordinarily expensive and thus not widely available. Bloom’s research, however, provided the data that cemented one-to-one tuition, and all forms of personalization, as the gold standard in learning.
Lecturing is a form of one-size-fits-all learning, and it was invented in 1350, a time when books were extremely expensive and not available to the masses since they were arduously written by hand, traditionally by monks. The lecture arose because it allowed one person to read the material out loud to a large group of learners, giving everyone in the group access to the content. Like all one-size-fits-all learning, lecturing aims to cater to the “average” learner in a group. This idea of an “average” person turns out to be woefully inadequate once one realizes that, given human complexity, the average will almost never adequately describe any given individual, an idea explored in great detail by Todd Rose in his book The End of Average: How We Succeed in a World That Values Sameness.
In recent years, the term one-size-fits-all has mainly been used to denounce click-through e-learning, where such a one-size-fits-all approach and delivery has been the norm—and the pain for employees across the globe and across sectors. Most notably, this has been the case for annual compliance training, delivered digitally and as a tick-box exercise to allow organizations to present completion certificates to regulators.
This is already changing, and fast gathering momentum, as artificial intelligence (AI) changes not only the way we learn but also the way trainers train, coaches coach and teachers teach. Through AI technology, the individual learner can now access fully personalized programs of learning, in their own time, and thus cover almost any topic via self-guided learning. This granular personalization is possible because AI can measure and take into account prior knowledge, and can continuously measure learner progress, in both competence and confidence, throughout the learning journey, for each topic or learning objective. AI can thus continuously adapt to the learner, their individual pace of progress and their individual learning needs.
Furthermore, AI can deliver learning that is not only fully personalized but also in line with evidence-based theory, employing variation (including multiple-media and multiple-model content), worked examples, retrieval practice, spaced practice, low-stakes quizzes and interleaving of topics.
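As a rough illustration of how an adaptive engine might combine the competence and confidence signals described above with spaced practice, here is a minimal sketch. The weights and thresholds are invented purely for illustration and are not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    competence: float  # fraction of recent retrieval questions answered correctly (0..1)
    confidence: float  # learner's self-reported confidence (0..1)

def next_review_days(state: LearnerState, last_interval_days: float) -> float:
    """Pick the next spaced-practice interval for one topic.

    High competence and high confidence widen the interval (spacing effect);
    low competence or overconfidence pulls the next review forward.
    """
    if state.confidence - state.competence > 0.3:
        # Overconfident: schedule an early retrieval-practice check.
        return max(1.0, last_interval_days * 0.5)
    mastery = state.competence * state.confidence
    if mastery > 0.7:
        return last_interval_days * 2.0   # well learned: space further out
    if mastery > 0.4:
        return last_interval_days * 1.3
    return 1.0                            # struggling: review tomorrow

# A confident, competent learner has the four-day interval doubled.
print(next_review_days(LearnerState(competence=0.9, confidence=0.9), 4))  # -> 8.0
```

A real adaptive system would of course estimate these quantities from many noisy observations per learning objective, but the shape of the loop, measure, compare, reschedule, is the same.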
Being able to deliver fully personalized learning journeys through AI is in itself a huge game changer for digital learning. It will drastically change the way we learn, and it will play a major role in the way employees will reskill and upskill in the future of work.
As we enter an era of more effective and efficient learning, asynchronously delivered, individually personalized and at scale, we are looking at the AI learning revolution.
However, there is even more to this. Often overlooked, and not covered in nearly as much depth, is the exciting and positive impact this will have on in-person tutoring, training, coaching, mentoring and workshops. We are looking at a whole new realm of learning analytics that AI will deliver as an automatic output: strengths, weaknesses, competence, confidence and learner self-awareness.
For human-led sessions, this data from the AI will allow organizations to plan and deliver far more personalized in-person sessions than previously possible. Organizations will be able to identify learning needs and gaps and, where necessary, group participants accordingly, greatly increasing the potential learning and performance impact of these in-person sessions.
Where previously these in-person sessions, trainings and workshops would have been delivered with the same approach, and often the same slide deck and supporting materials and content, we will see a higher degree of personalization to the individual or learning group and their respective learning needs.
Utilizing the power of AI for learning is already revolutionizing the level of personalization learners receive across industries and sectors. Learning impact and learning efficiency are being lifted to exciting new, previously inaccessible levels, both digitally and in person. Learners, designers, trainers, coaches, and educators all benefit as we reach into our toolbox and deploy the new AI learning and automation tools available.
A few years from now, the Department of Defense’s Joint Warfighter Cloud Capability will be in place and operational as the foundation for the department’s JADC2 initiative, according to John Sherman, chief information officer of DOD and a 2022 Wash100 Award winner.
The department’s cloud adoption efforts have been difficult and remain difficult, Sherman said during his keynote address at the Potomac Officers Club’s 3rd Annual CIO Summit. Over the last year, DOD has pivoted from its first single-source cloud program, the Joint Enterprise Defense Infrastructure contract, to its new multi-cloud, multi-award JWCC.
Although JWCC has seen delays recently, the potential $9 billion effort aims to award contracts by the end of 2022.
“Within a few years, not only will the contract be in place, we’re going to be running workloads top secret, secret and unclassified across multiple clouds all the way from CONUS out to the very tactical edge,” Sherman forecasted.
Sherman also shared that the DOD’s Cybersecurity Maturity Model Certification program, which was recently rolled under his office, is expected to undergo further changes in the coming years to reduce the barriers for businesses to achieve compliance.
Of the CMMC updates, Sherman said, “I’m going to expect you to see something that is understandable, that makes maybe a little more sense of where we started – looking at maybe three versus five levels – but very importantly, for small and medium sized businesses, that we have taken steps to make this 800-171 implementation a bit more palatable and doable.”
Sherman’s priority in amending the CMMC program extends beyond the nation’s capital and is driven by his own family’s history of small-business ownership. “We all live in the Beltway,” he said, addressing the in-person audience. “I’m thinking about the companies that are out in the Midwest, the West Coast, the Northeast, the Southeastern United States, you name it – folks who aren’t near the 495 here and how CMMC and DIB security looks to them.”
Cybersecurity, and CMMC in particular, are areas in which we’re “going to have to move the needle” as adversarial threats continue to escalate, he said. “This is where the Chinese and the Russians and non-state actors are trying to lift our information.”
As for other focus areas, buying down the department’s technical debt, getting zero trust on good footing, electromagnetic spectrum operations, 5G and PNT – position, navigation and timing – all remain top priorities for the department in the near future, Sherman noted.
For many companies moving to the cloud, a focus on short-term gain leads to long-term pain and prevents them from capturing a much bigger portion of cloud’s estimated $1 trillion of value potential. It happens when IT departments, usually with the assistance of systems integrators, migrate a set of applications to the cloud as quickly as possible to capture initial gains. That short-term focus frequently has significant consequences.
What is a cloud foundation?
A cloud foundation is a set of design decisions that is implemented in code files and defines how the cloud is used, secured, and operated. We have found that the ideal cloud foundation is split into three layers to reduce risk, accelerate change, and provide appropriate levels of isolation (exhibit):
Application patterns: Code artifacts that automate the secure, compliant, and standardized configuration and deployment of applications with similar functional and nonfunctional requirements through the use of infrastructure as code (IaC), pipeline as code (PiaC), policy as code (PaC), security as code (SaC), and compliance as code (CaC):
Policy as code: The translation of an organization’s standards and policies into actual executable code that secures the infrastructure and environment of the organization automatically in accordance with the policy
Compliance as code: A composed set of rules interpreted by a software-based policy engine that enforces compliance policy for a specific cloud environment
Isolation zones: A set of separate CSP-specific zones (sometimes called landing zones) that isolate application environments to prevent concentration risk. Each zone contains CSP services, identity and access management (IAM), network isolation, capacity management, shared services scoped to the isolation zone, and change control where one or more related applications run
Base: A set of CSP-agnostic capabilities that are provided to a set of isolation zones, including network connectivity and routing; centralized firewall and proxy capabilities; identity standardization; enterprise logging, monitoring, and analytics (ELMA); shared enterprise services; golden-image (or primary-image) pipelines; and compliance enforcement
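The “as code” layers above can be pictured with a toy policy-as-code check, in which an organization’s standards are executable rules evaluated automatically against every proposed cloud resource. The policy rules and resource shape below are invented for illustration; real implementations typically use a dedicated policy engine such as Open Policy Agent:

```python
# A toy policy-as-code check: policies are data plus executable rules,
# applied automatically to every proposed cloud resource before deployment.
# These two rules and the resource fields are hypothetical examples.

POLICY = {
    "require_encryption": True,
    "allowed_regions": {"us-east-1", "eu-west-1"},
}

def violations(resource: dict) -> list[str]:
    """Return the list of policy violations for a proposed resource."""
    found = []
    if POLICY["require_encryption"] and not resource.get("encrypted", False):
        found.append("storage must be encrypted at rest")
    if resource.get("region") not in POLICY["allowed_regions"]:
        found.append(f"region {resource.get('region')!r} is not approved")
    return found

bucket = {"name": "analytics-data", "region": "ap-south-1", "encrypted": False}
for v in violations(bucket):
    print("DENY:", v)

# A compliant resource passes with no violations.
assert violations({"name": "ok", "region": "us-east-1", "encrypted": True}) == []
```

The point is not the specific rules but that the checks are code: versioned, reviewed, and enforced identically everywhere, which is what makes compliance as code auditable.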
The culprit? Lack of attention to the cloud foundation, that unsexy but critical structural underpinning that determines the success of a company’s entire cloud strategy (see sidebar, “What is a cloud foundation?”). Several large banks are paying that price, resulting in the need to hire hundreds of cloud engineers because they did not put the right foundational architecture in place at the beginning.
Building a solid cloud foundation as part of a transformation engine does not mean delaying financial returns or investing significant resources. It just requires knowing what critical steps to take and executing them well. Our experience shows that companies that put in place a solid cloud foundation reap benefits in the form of a potential eightfold acceleration in the pace of cloud migration and adoption and a 50 percent reduction in migration costs over the long term—without delaying their cloud program.
A top consumer-packaged-goods (CPG) company was running into significant delays with its cloud-migration program—each application was taking up to two months to migrate. With portions of the business being divested, finance and legal were pressuring the company to isolate those units quickly. Realizing that deficiencies in its cloud foundation were causing the delays, the company made the counterintuitive decision to pause the migration and focus on strengthening the foundation.
For example, it automated critical infrastructure capabilities, deployed security software to automate compliance, deployed reusable application patterns, and created isolation zones to insulate workloads from one another and prevent potential problems in one zone from spreading. Once these improvements were in place, the company was able to migrate applications quickly, safely, and securely, with single applications taking days rather than weeks.
Ten actions to get your cloud foundation right
Building a strong cloud foundation is not the “cost of doing business.” It’s a critical investment that will reap significant rewards in terms of speed and value. The following ten actions are the most important in building this foundation.
1. Optimize technology to enable the fastest ‘idea-to-live’ process
Whether their workloads are in the cloud or in traditional data centers, many companies have outdated and bureaucratic work methods that introduce delays and frustrations. Your cloud foundation should be constructed to enable an idea’s rapid progression from inception to up and running in a production environment, without sacrificing safety and security.
In practice, that means automating as many steps of the production journey as possible, including sandbox requests, firewall changes, on-demand creation of large numbers of isolated networks, identity and access management (IAM), application registration, certificate generation, compliance, and so on. Automating these steps is as valuable in traditional data centers as it is in the cloud. But since the cloud offers unique tools that make automation easier, and because the move to cloud leads organizations to rethink their entire strategy, the beginning of the migration process is often the right time to change how IT operates.
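One way to picture the automated “idea-to-live” path is a self-service request that is validated and provisioned by code rather than a ticket queue. The sketch below is hypothetical (the quota, team names, and function are invented); in a real pipeline the final step would call the CSP’s API through infrastructure as code:

```python
# Hypothetical automation of one idea-to-live step: a sandbox request that is
# checked against policy and provisioned without a manual approval chain.

SANDBOX_QUOTA_GB = 100  # illustrative per-team quota

def handle_sandbox_request(team: str, size_gb: int) -> str:
    """Auto-provision a sandbox when within quota; escalate otherwise."""
    if size_gb > SANDBOX_QUOTA_GB:
        # Only exceptional requests reach a human reviewer.
        return f"escalate: {team} asked for {size_gb} GB (> {SANDBOX_QUOTA_GB} GB quota)"
    # In practice this would invoke IaC templates against the CSP API.
    return f"provisioned: isolated {size_gb} GB sandbox for {team}"

print(handle_sandbox_request("payments", 20))
```

The same pattern, declarative request in, policy check, automated execution, applies to firewall changes, network creation, certificate generation, and the other steps listed above.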
2. Design the cloud architecture so it can scale
If companies do it right, they can build a cloud architecture with a team of five people that can scale up to support 500 or more without significant changes. As the cloud footprint grows, a well-designed architecture should be able to accommodate more components, including more application patterns, isolation zones, and capabilities. Support for this scaling requires simple, well-designed interfaces between components. Because this is difficult to get right the first time, cloud-architecture engineers who have done it before at scale are a big advantage.
3. Build an organization that mirrors the architecture
According to Conway’s law, the way teams are organized will determine the shape of the technology they develop. IT organizations have a set structure for teams, and that can lead them to build things that don’t fit the shape of the cloud architecture.
For example, some companies have a separate cloud team for each of their business units. This can lead to each team building different cloud capabilities for its respective business unit without architecting them for reuse by other business units. That can create slowdowns and even failures when changes made by one team affect another team’s usage.
IT needs to design its cloud architecture first and then build an organization based on that structure. That means building out an organization that has a base team, isolation-zone teams, and application-pattern teams in order to reduce dependencies and redundancies between groups and ultimately deliver well-architected components at a lower cost.
4. Use the cloud that already exists
Many companies operate in fear of being locked into a specific cloud service provider (CSP), so they look for ways to mitigate that risk. A common pattern is an overreliance on containers, which can be expensive and time consuming and keep businesses from realizing the genuine benefits available from CSPs. One example of this was a company that created a container platform in the cloud as opposed to using the cloud’s own resiliency tools. When there was an outage, the impact was so large that it took multiple days to get its systems back online because the fault was embedded in the core of its non-cloud tooling.
There are other ways to mitigate CSP lock-in, such as defining a limited lock-in time frame and putting practices and systems in place that enable a rapid shift if necessary. By attempting to build non-native resiliency capabilities, companies are essentially competing with CSPs without having their experience, expertise, or resources. The root of this issue is that companies still tend to treat CSPs as if they were hardware vendors rather than software partners.
5. Offer cloud products, not cloud services
It is common for companies to create internal cloud-service teams to help IT and the business use the cloud. Usually these service teams operate like fulfillment centers, responding to requests for access to approved cloud services. The business ends up using dozens of cloud services independently and without a coherent architecture, resulting in complexity, defects, and poor transparency into usage.
Instead, companies need dedicated product teams staffed with experienced cloud architects and engineers to create and manage simple, scalable, and reusable cloud products for application teams. The constraints imposed by aligning around cloud products can help to ensure that the business uses the correct capabilities in the correct way.
Once the product team has an inventory of cloud products, it can encourage application teams to use them to fast-track their cloud migration. The aptitude and interest of each application team, however, will influence how quickly and easily it adopts the new cloud products. Teams with little cloud experience, skill, or interest will need step-by-step assistance, while others will be able to move quickly with little guidance. The product team, therefore, needs to have an operating model that can support varying levels of application-team involvement in the cloud-migration journey.
One effective route offers three levels of engagement (exhibit):
Concierge level: The engagement team builds everything needed by an application team.
Embedded level: Architects from the central cloud team are embedded into application teams to help them build the right application patterns.
Partner level: A partner team builds and runs its own isolation zone using the core capabilities from the base foundation, such as networking, logging, and identity.
By establishing the cloud products, the teams to support them, and the model by which application teams can engage with product teams, the business has the mechanisms in place to thoughtfully scale its cloud strategy.
6. Application teams should not reinvent how to design and deploy applications in the cloud
When organizations give free rein to application teams to migrate applications to the cloud provider, the result is a menagerie of disparate cloud capabilities and configurations that makes ongoing maintenance of the entire inventory difficult.
Instead, organizations should treat the deployment capabilities of an application as a stand-alone product, solving common problems once using application patterns. Application patterns can be responsible for configuring shared resources, standardizing deployment pipelines, and ensuring quality and security compliance. The number of patterns needed to support the inventory of applications can be small, therefore maximizing ROI. For example, one large bank successfully used just ten application patterns to satisfy 95 percent of its necessary use cases.
7. Provide targeted change management by using isolation zones
Isolation zones are cloud environments where applications live. In an effort to accelerate cloud migration, CSPs and systems integrators usually start with a single isolation zone to host all applications. That’s a high-risk approach, because configuration changes to support one application can unintentionally affect others. Going to the other extreme—one isolation zone for each application—prevents the efficient deployment of configuration changes, requiring the same work to be carried out across many isolation zones.
As a rule of thumb, a company should have from five to 100 isolation zones, depending on the size of the business and how it answers the following questions:
Does the application face the internet?
What level of resiliency is required?
What is the risk-assurance level or security posture required for applications running in the zone?
Which business unit has decision rights on how the zone is changed for legal purposes?
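One hedged way to operationalize those four questions is to derive zone assignments directly from the answers, so that applications with identical answers share an isolation zone and the zone count emerges from the portfolio. The attribute names and sample applications below are illustrative only:

```python
from collections import defaultdict

def zone_key(app: dict) -> tuple:
    """Applications with identical answers share an isolation zone."""
    return (
        app["internet_facing"],   # Does the application face the internet?
        app["resiliency_tier"],   # What level of resiliency is required?
        app["risk_level"],        # Required risk-assurance / security posture
        app["business_unit"],     # Who holds decision rights over zone changes?
    )

apps = [
    {"name": "web-store", "internet_facing": True,  "resiliency_tier": "high",
     "risk_level": "high",   "business_unit": "retail"},
    {"name": "checkout",  "internet_facing": True,  "resiliency_tier": "high",
     "risk_level": "high",   "business_unit": "retail"},
    {"name": "hr-portal", "internet_facing": False, "resiliency_tier": "low",
     "risk_level": "medium", "business_unit": "corp"},
]

zones = defaultdict(list)
for app in apps:
    zones[zone_key(app)].append(app["name"])

print(len(zones), "isolation zones for", len(apps), "applications")  # -> 2 zones
```

Grouping this way keeps related applications together (web-store and checkout land in one zone) while isolating workloads with different risk and ownership answers, which is exactly the middle ground between one zone for everything and one zone per application.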
8. Build base capabilities once to use across every CSP
Most companies will be on multiple clouds. The mix often breaks down to about 60 percent of workloads in one, 30 percent in another, and the rest in a third. Rather than building the same base capabilities (for example, network connectivity and routing, identity services, logging, and monitoring) across all the CSPs, companies should build them once and reuse the capabilities across all isolation zones, even those that reside in a different CSP from the base.
9. Speed integration of acquisitions by putting in place another instance of the base foundation
During an acquisition, merging IT assets is difficult and time consuming. The cloud can speed the merger process and ease its complexity if the acquiring company creates an “integration-base foundation” that can run the assets of the company being acquired. This enables the IAM, security, network, and compliance policies already in place at the acquired company to continue, allowing its existing workloads to continue to function as designed. Over time, those workloads can be migrated from the integration base to the main base at a measured and predictable pace.
Using this approach, companies can efficiently operate their core cloud estate as well as the acquisition’s using the same software with a different configuration. This typically can reduce integration time from two to three years to closer to three to nine months.
10. Make preventative and automated cloud security and compliance the cornerstone
All software components and systems must go through a security layer. Traditional cybersecurity mechanisms are dependent on human oversight and review, which cannot match the tempo required to capture the cloud’s full benefits of agility and speed. For this reason, companies must adopt new security architectures and processes to protect their cloud workloads.
Security as code (SaC) has been the most effective approach to securing cloud workloads with speed and agility. The SaC approach defines cybersecurity policies and standards programmatically so they can be referenced automatically in the configuration scripts used to provision cloud systems. Systems running in the cloud can be evaluated against security policies to prevent changes that move the system out of compliance.
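The preventive character of SaC, rejecting a non-compliant change before it is applied rather than detecting it afterward, can be sketched as a gate in the provisioning pipeline. The standards and configuration fields below are invented examples, not a real security baseline:

```python
# Sketch of preventive security as code: every proposed configuration change
# is evaluated against codified security standards *before* it is applied.
# The two standards below are illustrative only.

SECURITY_STANDARDS = {
    "min_tls_version": 1.2,
    "public_access_allowed": False,
}

def change_allowed(proposed: dict) -> bool:
    """Return True only if the proposed configuration stays compliant."""
    if proposed.get("tls_version", 0) < SECURITY_STANDARDS["min_tls_version"]:
        return False
    if proposed.get("public_access") and not SECURITY_STANDARDS["public_access_allowed"]:
        return False
    return True

bad_change = {"tls_version": 1.3, "public_access": True}  # would expose data publicly
print(change_allowed(bad_change))  # -> False: change is blocked, not merely flagged
```

Because the standards are expressed as code, the same check can run in the deployment pipeline and as a continuous evaluation of systems already in the cloud, keeping human review for the exceptions rather than every change.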
“Start small and grow” is a viable cloud strategy only if the fundamental building blocks are created from the start. Companies need to design and build their cloud foundation to provide a reusable, scalable platform that supports all the IT workloads destined for the cloud. This approach unlocks the benefits that the cloud offers and ultimately captures its full value.
The Kessel Run group is currently developing a playbook that would make it easier for organizations across the federal government to adopt engineering and security best practices.
The Air Force’s Kessel Run software factory wants to share its recipes for success with the whole federal government when it comes to engineering and security best practices.
“We’re talking to other software factories and part of our initiative is to release all these templates and playbooks that not just [Defense Department] entities can use, right, from a software factory perspective or just a program office perspective, but any agency can just grab them off our site and say, hey, this is how Kessel Run does chaos engineering, this is how we do performance engineering,” said Omar Marrero, Kessel Run’s deputy test chief and the chaos and performance tech lead.
Marrero told FCW that Kessel Run, which is part of the Air Force Life Cycle Management Center and focuses on software development and acquisitions, is routinely looking for partnerships, consulting with organizations that are looking to “start their own chaos engineering journey” by sharing Kessel Run’s templates, playbooks, or tech stacks.
The goal, he said, is to bring industry best practices across engineering, security, and performance to an organization “like a vaccine that you’re injecting into a system” to bolster preparedness.
That means working to prove or test assumptions and develop a process to address the worst-case scenarios: “just think of it as pre-emptive practicing where you can practice fire drills: like we know what happens if all of a sudden we have a surge, does it happen, did we get an alert to put resources on it, that kind of stuff.”
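The “fire drill” Marrero describes, inject a surge and then verify that alerting actually fired, can be sketched in miniature. The service model and thresholds below are simulated for illustration and have no connection to Kessel Run’s actual tooling:

```python
class Service:
    """A simulated service with fixed capacity and a latency alert rule."""

    def __init__(self, capacity_rps: int, alert_threshold_ms: float):
        self.capacity_rps = capacity_rps
        self.alert_threshold_ms = alert_threshold_ms
        self.alerts = []

    def handle_load(self, rps: int) -> float:
        # Latency degrades in proportion to overload beyond capacity.
        latency_ms = 50.0 if rps <= self.capacity_rps else 50.0 * (rps / self.capacity_rps)
        if latency_ms > self.alert_threshold_ms:
            self.alerts.append(f"latency {latency_ms:.0f} ms at {rps} rps")
        return latency_ms

def surge_drill(service: Service, surge_rps: int) -> bool:
    """The drill: inject a surge, then check whether an alert fired."""
    service.handle_load(surge_rps)
    return bool(service.alerts)

svc = Service(capacity_rps=1000, alert_threshold_ms=100.0)
print("alert fired during drill:", surge_drill(svc, 5000))
```

The drill passes not when the service survives the surge but when the monitoring notices it, which is the "did we get an alert to put resources on it" question in testable form.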
It might sound routine, but in practice, the testing concept has already proven beneficial through a partnership with the General Services Administration, which recently teamed up with Kessel Run to make sure the Cloud.gov service could handle a surge in users.
Lindsay Young, the acting director of Cloud.gov, told FCW that the partnership was a “fantastic opportunity.”
“It was so much fun,” Young said, “really spending the time to understand each other’s setups and things like that, and then figure out bigger and bigger ways to make trouble and see if we could stand up to it.”
Young said the aim was to ensure Cloud.gov users could get a seamless experience and that testing out extreme scenarios like scaling ability was “invaluable” because “you don’t know you can do something until you can prove you can do something.”
The next challenge, Young said, is finding and then partnering with other federal civilian agencies that have similar problems. Kessel Run will be doing the same as it is currently partnering with other software factories across the Defense Department, including the Navy’s Black Pearl, but is also planning to release its playbooks to broaden its impact.
The Air Force’s software factory darling has long been heralded as a Defense Department success story and a model for addressing emerging technology needs, with plans to expand its influence in the Air Force and beyond.
But the release of the guidebooks, which are still being drafted, could speed that along. Marrero said there isn’t a hard release date and that many of the materials are being developed jointly with other software factories, including the Army’s. The newer concept of security chaos engineering will also be included.
“That’s what we do. We just share what we learn. And if another opportunity comes like this one where we’re going to collaborate again, we’ll jump on that,” he said. “And hopefully, we can spread this chaos engineering thing to the rest of the government and that initiative helps us deliver more resilient stuff.”
A studious adversary may be hellbent on destruction, and a comprehensive approach is needed to successfully govern the protection of critical infrastructure, specialists say.
The discovery of a malware tool targeting the operational technology in critical infrastructure like power plants and water treatment facilities is highlighting issues policymakers are grappling with in efforts to establish a regulatory regime for cybersecurity.
The tool enables the adversary to move laterally across industrial control system environments by effectively targeting their crucial programmable logic controllers.
“There are only a few places that can build something like this,” said Bryson Bort, CEO and founder of cybersecurity firm Scythe. “This is not the kind of thing that the script kiddie—the amateur—can all of a sudden gen up and be like, ‘look, I’m doing things against PLCs.’ These are very complicated machines.”
“These are not protocols you can just go up, and, like, do against, like [web application penetration testing,]” Bort said. “So the complexity of this cannot be [overstated], the comprehensive nature of this particular malware cannot be [overstated]. This thing, I think calling it a shopping mall doesn’t quite capture it right. This was Mall of America. This thing had almost everything in it and the ability to add even more.”
Bort said the design of the tool suggests a switch in the mindset of the adversary—likely the Kremlin in the estimation of cyber intelligence analysts, although U.S. officials have not attributed the tool’s origin.
He connected the tool’s emergence to “what we’re seeing here in phase three on the ground in the Ukraine, which is the Russians seem to be going almost with a scorched earth approach. They are killing civilians, they are destroying the infrastructure. And that’s a complete, almost, 180 from what we saw within the first few days of the war where it looked like … they thought they were gonna kind of stroll into the country, take everything. And you don’t want to destroy what you’re about to take. And now it seems to be just to cause destruction.”
In response to a question about the role of global vendors in the industrial control systems community, and about potentially limiting their production to trustworthy partner nations, Bort argued that if regulations are needed, the focus should be on the owners and operators of the critical infrastructure.
“This isn’t a vendor problem,” he said. “This is about ICS asset owners, and asset owners are working closely with their respective governments … and different countries of course, have different levels of regulation or partial regulation. We’re in a kind of partially regulated area with likely more regulation coming in these sectors. But I would say it’s the asset owners, not the vendors that I’d be looking to.”
But connected industrial control system environments are complicated, with many different vendors in the supply chain, including commercial information technologies like cloud services, which adversaries are increasingly targeting for their potential to create an exponential effect.
“Security matters on all of these sides,” Trey Herr, director of the Atlantic Council initiative, told Nextgov. “The vendors are the point of greatest regulatory leverage so addressing cybersecurity at the design stage can have the widest impact but with least understanding of the specific environments in which they’ll be used. Asset owners have the best picture about how they use this technology and security matters here in how they deploy and manage the security of these devices. Vendors might be OT focused or IT focused, like cloud vendors, so regulators need to keep focused on both communities.”
That is something lawmakers are currently deliberating, with the goal of introducing legislation this summer. But Herr said more of the community’s attention is currently on the asset owners than on the IT supply-chain elements that are also involved.
“We have a lot more effort and energy on the asset owner level with the Sector Risk Management Agencies at the moment than other parties, especially the IT vendors,” he said.
AI is the most impactful technological advance of our time, transforming every aspect of the global economy.
Five waves of growth have carried AI from inception to ubiquity: the big bang of AI, cloud services, enterprise AI, edge AI and autonomy.
Like other technical breakthroughs — such as industrial machinery, transistors, the internet and mobile computing — AI was conceived in academia and commercialized in successive phases. It first took hold in large, well-resourced organizations before spreading over years to smaller organizations, professionals and consumers.
Since the term “AI” was coined at Dartmouth College in 1956, people in this field have explored many approaches to solving the world’s toughest problems. One of the most popular, deep learning, exploits data structures called neural networks that mirror how human brain cells operate.
Data scientists using deep learning configure a neural network with the parameters that work best for a particular problem, and then feed the AI up to millions of sample questions and answers. With each sample answer, the AI adjusts its neural weights until it can answer the questions on its own — even new ones it hasn’t seen before.
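The loop described above can be sketched in miniature. The example below is a hypothetical illustration, not production deep learning: a single artificial neuron learns the logical AND function from sample question/answer pairs by repeatedly nudging its weights, the same adjust-until-correct idea that full neural networks apply at vastly larger scale.

```python
import math

# Toy sketch of the training loop described above: one artificial
# neuron learns logical AND from example inputs and answers.
# (Real deep learning stacks many neurons into layers and uses
# frameworks and GPUs; the weight-adjustment idea is the same.)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Sample "questions" (inputs) paired with "answers" (targets)
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0  # neural weights, initially untrained
lr = 0.5                      # learning rate: the size of each nudge

for _ in range(5000):         # many passes over the samples
    for (x1, x2), target in samples:
        pred = sigmoid(w1 * x1 + w2 * x2 + bias)
        err = pred - target   # how wrong the current answer is
        # Adjust each weight slightly in the direction that
        # reduces the error on this sample
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        bias -= lr * err

def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + bias))

print([predict(a, b) for (a, b), _ in samples])  # learned AND
```

After training, the neuron answers correctly on all four inputs; with more neurons and more data, the same adjustment process generalizes to questions the model has never seen.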
Learn more about the five waves of modern AI, determine which wave your organization is in and gear up for what comes next.
The Big Bang of AI
The first wave of AI computing was its “big bang,” which started with the discovery of deep neural networks.
Three fundamental factors fueled this explosion: academic breakthroughs in deep learning, the widespread availability of big data, and the novel application of GPUs to accelerate deep learning development and training.
Where computer scientists used to specify each AI instruction, algorithms can now write other algorithms, software can write software, and computers can learn on their own. This marked the beginning of the machine learning era.
And over the last decade, deep learning has migrated from academia to commerce, carried by the next four waves of growth.
The first businesses to use AI were large tech companies with the scientific know-how and computing resources to adapt neural networks to benefit their customers. They did so using the cloud — the second wave of AI computing.
Google, for example, applied deep learning to natural language processing to offer Google Translate. Facebook applied AI to identify consumer goods from images to make them shoppable. Through these types of cloud applications, Google, Amazon and Microsoft introduced many of AI’s first real-world applications.
Soon, these large tech companies created infrastructure-as-a-service platforms, unleashing the power of public clouds for enterprises and startups alike, and driving AI adoption further.
Now, companies of all sizes rely on the cloud to get started with AI quickly and affordably. It offers an easy onramp for companies to deploy AI, allowing them to focus on developing and training models, instead of building underlying infrastructure.
As tools are developed to make AI more accessible, large enterprises are embracing the technology to improve the quality, safety and efficiency of their workflows — and leading the third wave of AI computing. Data scientists in finance, healthcare, environmental services, retail, entertainment and other industries started training neural networks in their own data centers or the cloud.
For example, conversational AI chatbots enhance call centers, and fraud-detection AI monitors unusual activity in online marketplaces. Computer vision acts as a virtual assistant for mechanics, doctors and pilots, providing them with information to make more accurate decisions.
While this wave of AI computing has widespread applications and garners headlines each week, it’s just getting started. Companies are investing heavily in data scientists who can prepare data to train models and machine learning engineers who can create and automate AI training and deployment pipelines.
The fourth wave pushes AI from the cloud or data center to the edge, to places like factories, hospitals, airports, stores, restaurants and power grids. The advent of 5G is expanding the ability to deploy and manage edge computing devices anywhere. It has created an explosive opportunity for AI to transform workplaces and for enterprises to realize the value of data from their end users.
With the adoption of IoT devices and advances in compute infrastructure, the proliferation of big data allows enterprises to create and train AI models to be deployed at the edge, where end users are located.
This wave requires machine learning engineers and data scientists to consider the design constraints of AI inference at the edge. Such limits include connectivity, storage, battery power, compute power and physical access for maintenance. Designs must also align with the needs of business owners, IT teams and security operations to better ensure the success of deployments.
Edge AI is also in its early days, but already used across many industries. Computer vision monitors factory floors for safety infractions, scans medical images for anomalous growths and drives cars safely down the freeway. The potential for new applications is limitless.
The fifth wave of AI will be the rise of autonomy — the evolution of AI to the point where AI navigates mobile machinery without human intervention. Cars, trucks, ships, planes, drones and other robots will operate without human piloting. For this to unfold, the network connectivity of 5G, the power of accelerated computing, and continued innovation in the capabilities of neural networks are necessary.
Autonomous AI is making headway, driven by the pandemic, global supply chain constraints and the related need for automation for efficiency in business processes.
Incorporating domains of engineering beyond deep learning, autonomous AI requires machine learning engineers to collaborate with robotics engineers. Together, they work to fulfill the four pillars of a robotics system workflow: collecting and generating ground-truth data, creating the AI model, simulating with a digital twin and operating the robot in the real world.
For robotics, simulation capabilities are especially important in modeling and testing all possible corner cases to mitigate the safety risks of deploying robots in the real world.
Autonomous machines also face novel challenges around deployment, management and security that require coordination across teams in engineering, operations, manufacturing, networking, security and compliance.
Getting Started With AI
Starting with the big bang of AI, the industry has grown quickly and spawned further waves of computing, including cloud services, enterprise AI, edge AI and autonomous machines. These advancements are carrying AI from laboratories to living rooms, improving businesses and the daily lives of consumers.
NVIDIA has spent decades building the computational products and software necessary to enable the AI ecosystem to drive these waves of growth. In addition to developing and implementing AI internally, NVIDIA has helped countless enterprises, startups, factories, healthcare firms and more to adopt, implement and scale their own AI initiatives.
The 2021 hack of Colonial Pipeline, the biggest fuel pipeline in the United States, ended with thousands of panicked Americans hoarding gas and a fuel shortage across the eastern seaboard. Basic cybersecurity failures let the hackers in, and then the company made the unilateral decision to pay a $5 million ransom and shut down much of the east coast’s fuel supply without consulting the US government until it was time to clean up the mess.
From across the Atlantic, Ciaran Martin looked on in baffled amazement.
“The brutal assessment of the Colonial hack is that the company made decisions off of narrow commercial self-interest, everything else is for the federal government to pick up,” says Martin, previously the United Kingdom’s top cybersecurity official.
Now some of the US’s top cybersecurity officials—including the White House’s current national cyber director—say the time has come for a stronger government role and regulation in cybersecurity so that fiascos like Colonial don’t happen again.
The change in tack comes just as the war in Ukraine, and the heightened threat of new cyberattacks from Russia, is forcing the White House to rethink how it keeps the nation safe.
“We’re at an inflection point,” Chris Inglis, the White House’s national cyber director and Biden’s top advisor on cybersecurity, tells MIT Technology Review in his first interview since Russia’s invasion of Ukraine. “When critical functions that serve the needs of society are at issue, some things are just not discretionary.”
The White House’s new cybersecurity strategy consists of stronger government oversight, rules mandating that organizations meet minimum cybersecurity standards, closer partnerships with the private sector, a move away from the current market-first approach, and enforcement to make sure any new rules are followed. It will take its cue from some of the nation’s most famous regulatory landmarks, such as the Clean Air Act or the formation of the Food and Drug Administration.
With looming threats from Russian hackers, the FCC is planning for the prospect of Russia hijacking internet traffic, a tactic Moscow has employed in the past. A new FCC initiative, announced March 11, aims to investigate whether US telecom companies are doing enough to secure themselves against the threat. It’s a real test for the agency, because it doesn’t have the power to force companies to comply; it is relying on the possibility of a national security crisis to get them to toe the line.
For many officials, this almost total reliance on the goodwill of the market to keep citizens safe cannot continue.
“The purely voluntary approach [to cybersecurity] simply has not gotten us to where we need to be, despite decades of effort,” says Suzanne Spaulding, previously a senior Obama administration cybersecurity official. “Externalities have long justified regulation and mandates such as with pollution and highway safety.”
Crucially, the White House’s top officials concur. “I’m a strong fan of what Suzanne says and I agree with her,” says Inglis.
Without a dramatic change, advocates argue, history will repeat itself.
“It’s no secret that companies don’t want strong cybersecurity rules,” says Senator Ron Wyden, one of Congress’s loudest voices on cybersecurity and privacy issues. “That’s how our country got where it is on cybersecurity. So I’m not going to pretend that changing the status quo is going to be easy. But the alternative is to let hackers from Russia and China and even North Korea run wild in critical systems all across America. I sincerely hope the next hack doesn’t cause more damage than the Colonial Pipeline breach, but unless Congress gets serious it’s almost inevitable.”
A shift won’t be easy. Many experts, both inside and outside government, worry that poorly written regulation could do more harm than good and some officials have misgivings about regulators’ lack of cybersecurity expertise. For example, the Transportation Security Administration’s recent cyber regulations on pipelines were criticized loudly by some as “screwed up” due to what several critics say are inflexible, inaccurate rules that cause more problems than they solve. The detractors point to it as the result of a regulator with a huge remit but not nearly enough time, resources, and expert staff to do the job right.
“TSA maintains regular and frequent contact with owners and operators, and many of these pipeline companies appreciate the significance and pace of this public-private endeavor for improvements in protection and resilience against future cyberattacks,” says R. Carter Langston, a TSA spokesperson, who disputes critics of the pipeline regulation.
Glenn Gerstell, who was general counsel at the National Security Agency until 2020, argues that the current scattershot approach—a host of different regulators working on their own specific sectors—doesn’t work and that the US needs one central cybersecurity authority with the expertise and resources that can scale across different critical industries.
Pushback against the pipeline regulations signals how difficult the process might be. But despite that, there is a growing consensus that the status quo—a litany of security failures and perverse incentives—is unsustainable.
The Colonial Pipeline incident proved what many cyber experts already know: most attacks are the result of opportunistic hackers exploiting years-old problems that companies fail to invest in and solve.
Soldiers and tanks may care about national borders. Cyber doesn’t.
“The good news is that we actually know how to solve these problems,” says Gerstell. “We can fix cybersecurity. It may be expensive and difficult but we know how to do it. This is not a technology problem.”
Another major recent cyberattack proves the point again: the SolarWinds campaign, a Russian hacking operation against the US government and major companies, could have been neutralized if the victims had followed well-known cybersecurity standards.
“There’s a tendency to hype the capabilities of the hackers responsible for major cybersecurity incidents, practically to the level of a natural disaster or other so-called acts of God,” Wyden says. “That conveniently absolves the hacked organizations, their leaders, and government agencies of any responsibility. But once the facts come out, the public has seen repeatedly that the hackers often get their initial foothold because the organization failed to keep up with patches or correctly configure their firewalls.”
It’s clear to the White House that many businesses do not and will not invest enough in cybersecurity on their own. In the past six months, the administration has enacted new cybersecurity rules for banks, pipelines, rail systems, airlines, and airports. Biden signed a cybersecurity executive order last year to bolster federal cybersecurity and impose security standards on any company making sales to the government. Changing the private sector has always been the more challenging task and, arguably, the more important one. The vast majority of critical infrastructure and technology systems belong to the private sector.
Most of the new rules have amounted to very basic requirements and a light government touch—yet they’ve still received pushback from the companies. Even so, it’s clear that more is coming.
“There are three major things that are needed to fix the ongoing sorry state of US cybersecurity,” says Wyden. “Mandatory minimum cybersecurity standards enforced by regulators; mandatory cybersecurity audits, performed by independent auditors who are not picked by the companies they are auditing, with the results delivered to regulators; and steep fines, including jail time for senior execs, when a failure to practice basic cyber hygiene results in a breach.”
The new mandatory incident-reporting regulation, which became law on Tuesday, is seen as a first step. The law requires private companies to quickly share information about threats—information they used to keep secret even though it can often help build a stronger collective defense.
Previous attempts at regulation have failed but the latest push for a new reporting law gained steam due to key support from corporate giants like Mandiant CEO Kevin Mandia and Microsoft president Brad Smith. It’s a sign that private sector leaders now see regulation as both inevitable and, in key areas, beneficial.
Inglis emphasizes that crafting and enforcing new rules will require close collaboration at every step between government and the private companies. And even from inside the private sector, there is agreement that change is needed.
“We’ve tried purely voluntary for a long time now,” says Michael Daniel, who leads the Cyber Threat Alliance, a collection of tech companies sharing cyber threat information to form a better collective defense. “It’s not going as fast or as well as we need.”
The view from across the Atlantic
From the White House, Inglis argues that the United States has fallen behind its allies. He points to the UK’s National CyberSecurity Centre (NCSC) as a pioneering government cybersecurity agency that the US needs to learn from. Ciaran Martin, the founding CEO of the NCSC, views the American approach to cyber with confused amazement.
“If a British energy company had done to the British government what Colonial did to the US government, we’d have torn strips off them verbally at the highest level,” he says. “I’d have had the prime minister calling the chairman to say, ‘What the fuck do you think you’re doing paying a ransom and switching off this pipeline without telling us?’”
The UK’s cyber regulations work so that banks must be resilient against both a global financial shock and cyber stresses. The UK has also focused stronger regulation on telecoms as a result of a major British telecom being “completely owned” by Russian hackers, says Martin, who says the new security rules make the telecom’s previous security failures illegal.
On the other side of the Atlantic, the situation is different. The Federal Communications Commission, which oversees telecommunications and broadband in the US, had its regulatory power significantly rolled back during the Trump presidency and relies mostly on voluntary cooperation from internet giants.
The UK’s approach of tackling specific industries one at a time by building on the regulatory powers they already have, as opposed to a single new centralized law that covers everything, is similar to how the Biden White House strategy on cyber will work.
“We have to exhaust the [regulation] authorities we already have,” Inglis says.
For Wyden, the White House strategy signals a much needed change.
“Federal regulators, across the board, have been afraid to use the authority they have or to ask Congress for new authorities to regulate industry cybersecurity practices,” he says. “It’s no wonder that so many industries have atrocious cybersecurity. Their regulators have essentially let the companies regulate themselves.”
Why the cybersecurity market fails
There are three fundamental reasons why the cybersecurity market, worth hundreds of billions of dollars and growing globally, falls short.
The first is money: companies have not figured out how cybersecurity makes them money, Daniel says. The market fails at measuring cybersecurity and, more importantly, often cannot connect it to a company’s bottom line—so companies often can’t justify spending the necessary money.
The second reason is secrecy. Companies have not had to report hacks, so crucial data about big hacks has been kept locked away to protect companies from bad press, lawsuits, and lawmakers.
Third is the problem of scale. The price that the government and society paid for the Colonial hack went well beyond what the company itself paid. Just like with pollution, “the costs don’t show up on your bottom line as a business,” Spaulding says, so the market incentives to fix the problems are weak.
Advocates for reform say that a stronger government hand can change the equation on all of that, exactly the way reform has in dozens of industries over the last century.
Gerstell sees pressure building slowly to do something different than the status quo.
“I have never seen such near unanimity and awareness ever before,” says Gerstell. “This looks and feels different. Whether it’s enough to really push change is not yet clear. But the temperature is increasing.”
Inglis points to the nearly $2 billion in cybersecurity money from Biden’s 2021 $1 trillion infrastructure bill as a “once in a generation opportunity” for the government to step up on cybersecurity and privacy.
“We have to make sure we don’t overlook the stunning opportunities we have to invest in the resilience and robustness of digital infrastructure,” Inglis argues. “We have to ask, what are the systemically critical functions that our society depends on? Will market forces alone attend to that? And when that falls short, how do we determine what we should do? That’s the course ahead for us. It doesn’t need to be a process that lasts years. We can do this with a sense of urgency.”
WASHINGTON: Flashy programs like hypersonic systems or altering human skin to be less attractive to mosquitos may get more public attention, but figures released by the military’s fringe R&D department today reveal that its money is really pouring into microelectronics.
The Defense Advanced Research Projects Agency (DARPA) plans to spend some $896 million on microelectronics, a total that is more than the combined figures for its second and third big money investment areas — biotech and artificial intelligence, respectively, at about $410 million each — in fiscal 2023, according to slides presented today by DARPA Director Stefanie Tompkins. Cyber projects come in fourth at $184 million, followed by hypersonics at $143 million and quantum research at $90 million.
The investment in microelectronics is part of DARPA’s now five-year-old Electronics Resurgence Initiative, which Tompkins said was about “essentially bringing back US leadership in microelectronics.” Tompkins presented the budget figures today to industry officials during a webinar hosted by the National Defense Industrial Association.
That initiative, which Tompkins said was on the cusp of being updated to ERI 2.0, began in 2017 after DARPA said the military was suffering “limited […] access to leading-edge electronics, challenging U.S. economic and security advantages.”
The concern over microelectronics — and its vulnerability to supply chain disruption — was only made more dire in the wake of the COVID-19 pandemic. In response, the Biden administration aggressively pushed US investment in domestic chip production as a priority area, with the military, already concerned about the issue, happy to go along.
In February, Pentagon Undersecretary for Research and Engineering Heidi Shyu announced a different effort by the Defense Department to pursue “lab-to-fab” testing and prototyping hubs for microelectronics technology.
Shyu said microelectronics “support nearly all DoD activities, enabling capabilities such as GPS, radar, and command, control and communications systems.”
Just today the DoD publicized a Pentagon-led “microelectronics commons” that “aims to close the gaps that exist now which prevent the best ideas in technology from reaching the market.”
“The context was an understanding from really top-tier academics that investments that we were making in early-stage microelectronics research could not be proven in the facilities that we have here at home. We had to go instead off to overseas places, in particular [Asia], to do the work that is necessary to prove out the innovation,” Air Force chief scientist and DARPA alum Victoria Coleman is quoted as saying in the announcement. “That kind of blew my mind.”
In all, Tompkins said DARPA has about 250 “active” programs running across its areas of interest — closing and starting programs at about one per week. Sadly, she did not mention the mosquito bite research program in the presentation.
The COVID-19 pandemic has accelerated trends in the healthcare industry. A survey of Latin American physicians reveals how they believe the industry will evolve as the pandemic evolves and the future unfolds.
Healthcare systems in Latin America have played a critical role in protecting human lives during the COVID-19 pandemic. They are also among those industries most affected by the crisis due to unanticipated costs, the suspension of nonurgent healthcare services, emotional exhaustion of the healthcare workforce, and the untimely deaths of many healthcare workers who lost their lives in the line of duty.
About the authors
This article is a collaborative effort by Felipe Child, Roberto García, Laura Medford-Davis, Robin Roark, and Jorge Torres, representing views from McKinsey’s Healthcare Systems & Services Practice.
Moreover, the healthcare sector will experience a tremendous amount of long-term, disruptive change—sparked or accelerated by the pandemic—in the months and years ahead. Frontline healthcare providers can offer a unique perspective on the changes occurring in the industry and how the future of healthcare is likely to unfold.
McKinsey surveyed physicians across Latin America to understand their perceptions of how the COVID-19 crisis is reshaping healthcare systems in the region (see sidebar, “Our methodology”).
Physicians expect healthcare volumes to rebound in 2022, and supporting telehealth is their preferred strategy for improving patient access
To help our clients understand responses to COVID-19, McKinsey surveyed 517 general and specialty physicians across five countries in Latin America from October 14 to November 17, 2021, to gather insights on how the pandemic is shaping healthcare delivery in the region. The survey tested perspectives on five main topics: financial health, return of elective care, site of care delivery, telemedicine, and value-based care.
Participants included 517 physicians from Brazil (122 physicians), Chile (90 physicians), Colombia (94 physicians), Mexico (121 physicians), and Peru (90 physicians). Physician specialties included anesthesiology, cardiology, emergency medicine, general practice, general surgery, geriatrics, immunology, nephrology, obstetrics and gynecology, oncology, ophthalmology, orthopedics, otorhinolaryngology, and pediatrics.
COVID-19 caused the volume of medical services delivered to drop dramatically during 2020 and 2021: 80 percent of surveyed physicians reported a reduced number of patients since the onset of the pandemic in all countries surveyed. On the other hand, physicians surveyed predict an approximately 25 percent increase in outpatient medical consultations, hospitalizations, and surgery volumes in 2022. Across the five countries surveyed, the most popular strategy for helping patients return to care is offering virtual options such as telehealth (Exhibit 1).
Telehealth is now a core offering that physicians expect to continue even after in-person visits return
The survey found that 60 to 84 percent of physicians were actively offering telehealth services. Among these physicians, 20 percent on average introduced telehealth during the pandemic, and 80 percent reported they plan to continue doing so in the future as part of a hybrid care model for at least a few hours a day or for one day a week (Exhibit 2).
In addition, more than half of all physicians surveyed consider telehealth to be less costly for their practice than in-person visits, and two-thirds view telehealth visits as effective, especially for follow-up visits and nonurgent primary-care consultations.
Physicians expect more care to shift out of hospitals, primarily into patients’ homes
The shift from hospitals to other locations, including patient homes, for in-person care has also accelerated since the start of the COVID-19 pandemic. Survey respondents expect that by 2025, care will take place in patients’ homes an average of 1.5 times to 2.5 times more often than it does today (Exhibit 3). They expect an average of 35 percent of all palliative-care, mental-health, primary-care, and physical-therapy services to be delivered in the home by 2025. For three services—intensive care, emergency care, and dialysis—they expect home delivery to more than double from today’s levels, though it will remain a lower absolute percentage of services than for the other categories.
Physicians surveyed for the most part do not consider pharmacies to be a preferred location for most care services—although, in Mexico, 14 percent view them as appropriate for vaccine administration, and 7 percent consider them appropriate for physical therapy.
The pandemic has reaffirmed physicians’ commitment to their clinical careers, although their preferred practice model is shifting in some countries
Despite personal health risks associated with providing care to patients during the pandemic, the physicians surveyed in all countries are still committed to their careers; in fact, 90 percent say they are now less likely to leave medicine than they were before the pandemic. However, their interest in switching from independent practice to working for a healthcare system has increased (Exhibit 4).
Physicians anticipate a shift to value-based care in the future
Nearly half of the physicians surveyed believe value-based care improves the quality of patient care. Meanwhile, an average of 55 percent think value-based care will increase their revenues, and an average of 20 percent believe it will have no effect on their revenues (Exhibit 5). However, more than one-third of physicians surveyed overall and more than half in Mexico and Peru also believe fee-for-service models improve quality of care. Physicians reported they are now more willing to use outcome-based payer contract agreements and believe that in two to three years, 15 percent more patients will be treated under these models than are today.
COVID-19 trends will affect healthcare stakeholders across the value chain, including payers, providers, patients, pharmacies, and investors
COVID-19 has accelerated change in the healthcare industry and catalyzed new trends across Latin America. Telehealth and the shift to at-home care have met or exceeded patient expectations, and they may unlock lower healthcare costs in the future, although potentially at the expense of traditional hospital revenue streams. Stakeholders across the healthcare value chain likely need to accelerate their digital transformations to reflect new care models while boosting efforts to enhance the patient experience and incorporating new care sites into their operations.
More physicians now see the benefit of shifting from fee-for-service to value-based care to improve quality of care and their future financial stability. Healthcare ecosystem participants will need to develop the necessary capabilities to implement true value-based care models that benefit all stakeholders of the system. Furthermore, those most likely to succeed in this shifting environment will adopt hybrid ecosystems that embrace telehealth and at-home care when clinically appropriate, adding value to the healthcare system while providing patients with affordable care.
Felipe Child is a partner in McKinsey’s Bogotá office; Roberto García is an associate partner in the Mexico City office, where Jorge Torres is a consultant; Laura Medford-Davis, MD, is an associate partner in the Houston office; and Robin Roark, MD, is a partner in the Miami office.
The authors wish to thank Andrés Arboleda, Sebastian Gonzalez, Pollyana Lima, Romina Mendoza, Claudia Mones, Tiago Sanfelice, and Javier Valenzuela for their contributions to this article.