Federal Acquisition Service Commissioner Sonny Hashmi said the new buyer experience tool “was built using human-centered design to address pain points in the acquisition process.”
The General Services Administration is launching a new tool to help simplify the federal buying process, providing the acquisition community with streamlined market research, searchable templates and interactive resources.
Sonny Hashmi, commissioner of the Federal Acquisition Service, announced the launch of buy.gsa.gov on Tuesday afternoon and said in a blog post that the buyer experience tool “was built using human-centered design to address pain points in the acquisition process.”
The platform provides a four-step process for anyone seeking information about the federal buying experience: planning; developing documents; researching products, services and pricing; and requesting a quote or purchase. Users can browse samples, templates and tips for performance work statements, statements of work and other documents required for services purchases.
Hashmi acknowledged the years-long criticisms surrounding the federal buying process in his post, writing: “For years, the federal acquisition community has been asking for a simpler way to get the information it needs to make smarter purchases while saving taxpayer dollars.”
He added that buy.gsa.gov was the result of a government-wide user research and usability testing effort that involved GSA acquisition experts, federal agencies and vendors.
GSA first previewed the new buyer experience tool earlier this month while highlighting key elements of its Federal Marketplace Spring 2022 release.
“With the launch of our new buyer experience, we highlight GSA’s commitment to our customers, suppliers, and workforce while improving the buying process,” Hashmi said in a statement at the time. “I am excited for our users to see what they helped develop and look forward to watching it grow and expand in the years to come.”
The agency said its focus for the latest Federal Marketplace Strategy was to reduce the burden on suppliers of goods and services to the federal government.
GSA’s Digital Innovation Division – a team within the Federal Acquisition Service – is tasked with managing the development of the site. The team said it focused on the eight requirements for modernized websites in the 21st Century IDEA, including accessibility, mobile-friendly capabilities, user-centered experiences and secure, searchable functionality.
The site also includes resources for vendors, including a support center, a contractor start-up kit and a forecast of contracting opportunities.
Cybersecurity in healthcare involves protecting electronic information and assets from unauthorized access, use and disclosure. Cybersecurity has three goals: protecting the confidentiality, integrity and availability of information, also known as the “CIA triad.”
In today’s electronic world, cybersecurity in healthcare and the protection of information are vital to the normal functioning of organizations. Many healthcare organizations have various types of specialized hospital information systems such as EHR systems, e-prescribing systems, practice management support systems, clinical decision support systems, radiology information systems and computerized physician order entry systems. Additionally, thousands of devices that comprise the Internet of Things must be protected as well. These include smart elevators, smart heating, ventilation and air conditioning (HVAC) systems, infusion pumps, remote patient monitoring devices and others. These are examples of assets that healthcare organizations typically have, in addition to those discussed below.
Email
Email is a primary means of communication within healthcare organizations. Information of all kinds is transacted, created, received, sent and maintained within email systems, and mailbox storage tends to grow as individuals store valuable information such as intellectual property, financial information and patient information. As a result, email security is a very important part of cybersecurity in healthcare.
Phishing is a top threat, and most significant security incidents begin with it. An unwitting user may click a malicious link or open a malicious attachment in a phishing email and infect their computer with malware; in certain instances, that malware then spreads via the network to other computers. A phishing email may also elicit sensitive or proprietary information directly from the recipient. Phishing is effective precisely because it fools the recipient into taking the attacker’s desired action. Accordingly, regular security awareness training is key to thwarting phishing attempts.
Physical Security
Unauthorized physical access to a computer or device may lead to its compromise. For example, there are physical techniques that may be used to hack a device. Physical exploitation of a device may defeat technical controls that are otherwise in place. Physically securing a device, then, is important to safeguard its operation, proper configuration and data.
One example is leaving a laptop unattended while traveling or working in another location; careless actions like these may lead to the laptop’s theft or loss. Another example is an evil maid attack, in which a device is altered in an undetectable way, for instance by installing a keylogger that records sensitive information such as credentials, so that the cybercriminal can access it later.
Legacy Systems
Legacy systems are systems, whether applications, operating systems or otherwise, that are no longer supported by the manufacturer, which generally means no security patches or other updates are available for them. One challenge for cybersecurity in healthcare is that many organizations have a significant legacy system footprint.
Legacy systems may exist within organizations because they are too expensive to upgrade or because an upgrade may not be available. Operating system manufacturers may sunset systems and healthcare organizations may not have enough of a cybersecurity budget to be able to upgrade systems to presently supported versions. Medical devices typically have legacy operating systems. Legacy operating systems may also exist to help support legacy applications for which there is no replacement.
Healthcare Stakeholders
Patients
Patients need to understand how to securely communicate with their healthcare providers. Additionally, if patients engage virtually with their healthcare providers, whether through a telehealth platform, e-visits, secure messaging or otherwise, they need to understand the privacy and security policies in place and how to keep their information private and secure.
Workforce Members
Workforce members need to understand the privacy and security policies of the healthcare organization. Regular security awareness training is essential to cybersecurity in healthcare so that workforce members are aware of threats and what to do in case of actual security incidents. Workforce members also need to know who to contact in the event of a question or problem. In essence, workforce members can be the eyes and ears for the cybersecurity team. This will help the cybersecurity team understand what is working and what is not working in an effort to secure the information technology infrastructure and information.
C-Suite
More healthcare organizations now have a chief information security officer (CISO) in place to make executive decisions about the cybersecurity program. CISOs typically work on strategy, while the cybersecurity team members who report to the CISO execute it. The CISO is an executive who ideally sits at the same level as other C-suite executives, such as the chief financial officer and chief information officer. The greater the executive-level buy-in, the greater the top-down support for the organization’s cybersecurity program.
Vendors/Market Suppliers
A major retailer was breached as a result of a cyberattack on its heating, cooling, and air conditioning (HVAC) vendor: stolen credentials from the HVAC vendor were used to break into the retailer’s systems. In essence, this was a supply chain attack, since the cyberattackers compromised the vendor to ultimately target the retailer. Similar cyber supply chain attacks have since compromised healthcare information systems through vendors’ stolen credentials.
Some large healthcare organizations have fairly robust cybersecurity programs. However, many of these organizations also rely upon tens of thousands of vendors, and to the extent that those vendors have lax or inferior security policies, this creates a problem for the healthcare organization. Stolen vendor credentials or compromised vendor accounts may result in a compromise of the healthcare organization itself, through phishing or other means. Because a vendor may hold elevated privileges in a healthcare organization’s information technology environment, the compromise of a vendor’s account or credentials may give an unauthorized third party, a cyberattacker, elevated access to the organization’s information technology resources.
The Internet has been revolutionary. It provides unprecedented opportunities for people around the world to connect and to express themselves, and continues to transform the global economy, enabling economic opportunities for billions of people. Yet it has also created serious policy challenges. Globally, we are witnessing a trend of rising digital authoritarianism where some states act to repress freedom of expression, censor independent news sites, interfere with elections, promote disinformation, and deny their citizens other human rights. At the same time, millions of people still face barriers to access, and cybersecurity risks and threats undermine the trust and reliability of networks.
Democratic governments and other partners are rising to the challenge. Today, the United States, with more than 60 partners from around the globe, launched the Declaration for the Future of the Internet.
This Declaration represents a political commitment among Declaration partners to advance a positive vision for the Internet and digital technologies. It reclaims the promise of the Internet in the face of the global opportunities and challenges presented by the 21st century. It also reaffirms and recommits its partners to a single global Internet – one that is truly open and fosters competition, privacy, and respect for human rights. The Declaration’s principles include commitments to:
Protect human rights and fundamental freedoms of all people;
Promote a global Internet that advances the free flow of information;
Advance inclusive and affordable connectivity so that all people can benefit from the digital economy;
Promote trust in the global digital ecosystem, including through protection of privacy; and
Protect and strengthen the multi-stakeholder approach to governance that keeps the Internet running for the benefit of all.
In signing this Declaration, the United States and partners will work together to promote this vision and its principles globally, while respecting each other’s regulatory autonomy within our own jurisdictions and in accordance with our respective domestic laws and international legal obligations.
Over the last year, the United States has worked with partners from all over the world – including civil society, industry, academia, and other stakeholders – to reaffirm the vision of an open, free, global, interoperable, reliable, and secure Internet and to reverse negative trends in this regard. Under this vision, people everywhere will benefit from an Internet that is unified and unfragmented; facilitates global communications and commerce; and supports freedom, innovation, education and trust.
THE DECLARATION FOR THE FUTURE OF THE INTERNET PARTNERS
Albania | Andorra | Argentina | Australia | Austria | Belgium | Bulgaria | Cabo Verde | Canada | Colombia | Costa Rica | Croatia | Cyprus | Czech Republic | Denmark | Dominican Republic | Estonia | The European Commission | Finland | France | Georgia | Germany | Greece | Hungary | Iceland | Ireland | Israel | Italy | Jamaica | Japan | Kenya | Kosovo | Latvia | Lithuania | Luxembourg | Maldives | Malta | Marshall Islands | Micronesia | Moldova | Montenegro | Netherlands | New Zealand | Niger | North Macedonia | Palau | Peru | Poland | Portugal | Romania | Serbia | Slovakia | Slovenia | Spain | Sweden | Taiwan | Trinidad and Tobago | the United Kingdom | Ukraine | Uruguay
OPEN CALL FOR PARTICIPATION
The Declaration remains open to all governments or relevant authorities willing to commit and implement its vision and principles. Contact the nearest U.S. embassy, mission, or representative to learn more.
Researchers build a portable desalination unit that generates clear, clean drinking water without the need for filters or high-pressure pumps.
Adam Zewe | MIT News Office
April 28, 2022
MIT researchers have developed a portable desalination unit, weighing less than 10 kilograms, that can remove particles and salts to generate drinking water.
The suitcase-sized device, which requires less power to operate than a cell phone charger, can also be driven by a small, portable solar panel, which can be purchased online for around $50. It automatically generates drinking water that exceeds World Health Organization quality standards. The technology is packaged into a user-friendly device that runs with the push of one button.
Unlike other portable desalination units that require water to pass through filters, this device utilizes electrical power to remove particles from drinking water. Eliminating the need for replacement filters greatly reduces the long-term maintenance requirements.
This could enable the unit to be deployed in remote and severely resource-limited areas, such as communities on small islands or aboard seafaring cargo ships. It could also be used to aid refugees fleeing natural disasters or by soldiers carrying out long-term military operations.
“This is really the culmination of a 10-year journey that I and my group have been on. We worked for years on the physics behind individual desalination processes, but pushing all those advances into a box, building a system, and demonstrating it in the ocean, that was a really meaningful and rewarding experience for me,” says senior author Jongyoon Han, a professor of electrical engineering and computer science and of biological engineering, and a member of the Research Laboratory of Electronics (RLE).
Joining Han on the paper are first author Junghyo Yoon, a research scientist in RLE; Hyukjin J. Kwon, a former postdoc; SungKu Kang, a postdoc at Northeastern University; and Eric Brack of the U.S. Army Combat Capabilities Development Command (DEVCOM). The research has been published online in Environmental Science & Technology.
Commercially available portable desalination units typically require high-pressure pumps to push water through filters, which are very difficult to miniaturize without compromising the energy-efficiency of the device, explains Yoon.
Instead, their unit relies on a technique called ion concentration polarization (ICP), which was pioneered by Han’s group more than 10 years ago. Rather than filtering water, the ICP process applies an electrical field to membranes placed above and below a channel of water. The membranes repel positively or negatively charged particles — including salt molecules, bacteria, and viruses — as they flow past. The charged particles are funneled into a second stream of water that is eventually discharged.
The process removes both dissolved and suspended solids, allowing clean water to pass through the channel. Since it only requires a low-pressure pump, ICP uses less energy than other techniques.
But ICP does not always remove all the salts floating in the middle of the channel. So the researchers incorporated a second process, known as electrodialysis, to remove remaining salt ions.
Yoon and Kang used machine learning to find the ideal combination of ICP and electrodialysis modules. The optimal setup includes a two-stage ICP process, with water flowing through six modules in the first stage, then through three in the second stage, followed by a single electrodialysis process. This setup minimizes energy usage while ensuring the process remains self-cleaning.
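The article does not describe the researchers’ actual machine-learning pipeline in enough detail to reproduce, but a brute-force sketch conveys the shape of the optimization problem: search over configurations of ICP and electrodialysis modules for the cheapest one that still meets a potability constraint. Every model, constant, and threshold below is an invented placeholder, not a figure from the research.

```python
# Illustrative sketch only: exhaustive search over module configurations,
# standing in for the machine-learning optimization described in the article.
# The energy and salt-removal models below are invented placeholders.
from itertools import product

def energy_per_liter(stage1, stage2, ed_stages):
    # Hypothetical cost model: each ICP module adds pumping and field energy;
    # each electrodialysis (ED) stage costs more per unit of salt removed.
    return 1.5 * (stage1 + stage2) + 4.0 * ed_stages

def salt_removal(stage1, stage2, ed_stages):
    # Hypothetical diminishing-returns model (fraction of salt removed).
    icp = 1 - 0.5 ** (stage1 + stage2)
    ed = 1 - 0.3 ** ed_stages
    return icp + (1 - icp) * ed

best = None
for s1, s2, ed in product(range(1, 9), range(0, 5), range(0, 3)):
    if salt_removal(s1, s2, ed) >= 0.99:          # potable-water constraint
        cost = energy_per_liter(s1, s2, ed)
        if best is None or cost < best[0]:
            best = (cost, s1, s2, ed)

print(best)  # (energy cost, stage-1 modules, stage-2 modules, ED stages)
```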
“While it is true that some charged particles could be captured on the ion exchange membrane, if they get trapped, we just reverse the polarity of the electric field and the charged particles can be easily removed,” Yoon explains.
They shrank and stacked the ICP and electrodialysis modules to improve their energy efficiency and enable them to fit inside a portable device. The researchers designed the device for nonexperts, with just one button to launch the automatic desalination and purification process. Once the salinity level and the number of particles decrease to specific thresholds, the device notifies the user that the water is drinkable.
The researchers also created a smartphone app that can control the unit wirelessly and report real-time data on power consumption and water salinity.
Beach tests
After running lab experiments using water with different salinity and turbidity (cloudiness) levels, they field-tested the device at Boston’s Carson Beach.
Yoon and Kwon set the box near the shore and tossed the feed tube into the water. In about half an hour, the device had filled a plastic drinking cup with clear, drinkable water.
“It was successful even in its first run, which was quite exciting and surprising. But I think the main reason we were successful is the accumulation of all these little advances that we made along the way,” Han says.
The resulting water exceeded World Health Organization quality guidelines, and the unit reduced the amount of suspended solids by at least a factor of 10. Their prototype generates drinking water at a rate of 0.3 liters per hour and requires only 20 watt-hours of energy per liter.
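Taken together, those two figures are consistent with the earlier claim that the unit draws less power than a cell phone charger: 0.3 L/h × 20 Wh/L = 6 W of continuous power.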
“Right now, we are pushing our research to scale up that production rate,” Yoon says.
One of the biggest challenges of designing the portable system was engineering an intuitive device that could be used by anyone, Han says.
Yoon hopes to make the device more user-friendly and improve its energy efficiency and production rate through a startup he plans to launch to commercialize the technology.
In the lab, Han wants to apply the lessons he’s learned over the past decade to water-quality issues that go beyond desalination, such as rapidly detecting contaminants in drinking water.
“This is definitely an exciting project, and I am proud of the progress we have made so far, but there is still a lot of work to do,” he says.
For example, while “development of portable systems using electro-membrane processes is an original and exciting direction in off-grid, small-scale desalination,” the effects of fouling, especially if the water has high turbidity, could significantly increase maintenance requirements and energy costs, notes Nidal Hilal, professor of engineering and director of the New York University Abu Dhabi Water Research Center, who was not involved with this research.
“Another limitation is the use of expensive materials,” he adds. “It would be interesting to see similar systems with low-cost materials in place.”
The research was funded, in part, by the DEVCOM Soldier Center, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), the Experimental AI Postdoc Fellowship Program of Northeastern University, and the Roux AI Institute.
Story updated 4/22/2022 at 10:25 am ET to include a clarification that Bostjanick was referring to May 2023 for when a new CMMC interim rule might go into effect.
WASHINGTON: The Pentagon is assessing whether to develop cloud service offerings to help contractors meet requirements for its cyber certification program, according to the Defense Department’s deputy chief information officer.
The Cybersecurity Maturity Model Certification (CMMC) program aims to strengthen the cybersecurity of the defense industrial base by holding contractors accountable for following best practices to protect their network, but can be an onerous undertaking both for the companies and their assessors. The Pentagon last November rolled out CMMC version 2.0, streamlining the security tiers of the program from five to three and resulting in some requirements changes for its first two levels.
David McKeown, DoD deputy chief information officer and senior information security officer, whose office leads the CMMC effort, said Tuesday at the AFCEA Cyber Mission Summit he’s looking for “innovative solutions” to help contractors meet at least 85 out of 110 controls in NIST Special Publication 800-171 in order to achieve certification required for Level 2 of CMMC.
“For instance, in the CMMC realm, rather than go out and assess each and every network of our industry partners, I’m kind of keen on establishing some sort of cloud services that either achieve many of the 110 controls in [NIST SP] 800-171 or all of them that industry partners can consume to store our data and safeguard our data without us having to go out onto your network,” McKeown said.
Pentagon CIO John Sherman in February said he hoped the upgraded CMMC program would raise the cybersecurity “waterline” across DoD to keep potential adversaries away from critical data.
“This is basic hygiene to raise the water level to make sure we can protect our sensitive data so that when our service members have to go into action, they’re not going to have an unfair position because our adversary’s already stolen key data and technologies that’ll put them at an advantage,” Sherman said at the AFCEA Space Force IT conference.
Meanwhile, CMMC’s policy director said Wednesday another interim rule for the program could come in May next year. The Pentagon released its first interim rule, which defined some mandatory compliance requirements, in September 2020 for the first version of CMMC, prompting hundreds of comments and criticism from industry regarding the timeframe and complexity of the program.
“Our anticipation is that we will be allowed to have another interim rule like we did last time,” Stacy Bostjanick, CMMC policy director for the Office of the Undersecretary of Defense for Acquisition and Sustainment, said. “We’re hoping that the interim rule will go into effect by May. In fact, my team is very frustrated with me today because I’m sitting here with you guys and they’re stuck in a room going through a rule that’s like hundreds of pages long.”
Once the rulemaking process is over, she said she hopes “there will be only one more aspect that we’ll have to address and that will be the international partners.”
“That will probably take some rulemaking effort,” Bostjanick said. “We’re working through how that’s going to work in getting that laying flat today.”
Markus Bernhardt is the Chief Evangelist at Obrizum, pioneering deeptech AI digital learning solutions for corporate learning.
Google Director of Research Peter Norvig famously stated in his keynote speech for the Association for Learning Technology Conference in 2007 that if you only had to read one research paper to learn about learning, it would be Benjamin Bloom’s “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.”
That brilliant piece of advice had been given to Peter, as he prepared for the keynote, by his friend Hal Abelson, an educator and professor at MIT. In his well-known and often-cited paper, Bloom reports the outcome of an experiment comparing the efficacy of three types of teaching: the conventional lecture, the conventional lecture with regular testing and feedback, and one-to-one tuition.
Using the “straight lecture” group as the baseline, Bloom found that students taught with a “formative feedback” approach performed one standard deviation above the lecture group’s mean (better than 84% of that group), and students receiving one-to-one tuition performed an astonishing two standard deviations above it (better than 98%). While one might of course have expected one-to-one tuition to stand out in such an experiment, the measured impact compared to the other two methods must surely be considered staggering, and we can immediately see why Hal chose to recommend this paper to Peter.
Highly personalized tuition for individuals or small groups, along with many forms of training and practice, had of course been the preferred modality long before this research, albeit extraordinarily expensive and thus not widely available. Bloom’s research, however, provided the data that cemented one-to-one tuition, and personalization in all its forms, as the gold standard in learning.
Lecturing is a form of one-size-fits-all learning. It arose around 1350, a time when books were extremely expensive and unavailable to the masses because they were arduously written by hand, traditionally by monks; the lecture allowed one person to read the material aloud so that a large group of learners could access the content. Like all one-size-fits-all learning, lecturing aims to cater to the “average” learner in a group. That idea turns out to be inadequate once one realizes that, given human complexity, the average will almost never adequately describe a given individual, a point explored in great detail by Todd Rose in his book The End of Average: How We Succeed in a World That Values Sameness.
In recent years, the term one-size-fits-all has mainly been used to denounce click-through e-learning, where such a one-size-fits-all approach and delivery has been the norm—and the pain for employees across the globe and across sectors. Most notably, this has been the case for annual compliance training, delivered digitally and as a tick-box exercise to allow organizations to present completion certificates to regulators.
This is already changing, and fast gathering momentum, as artificial intelligence (AI) changes not only the way we learn but also the way trainers train, coaches coach and teachers teach. Through AI technology, the individual learner can now access fully personalized programs of learning, in their own time, covering almost any topic via self-guided learning. Personalization at this granular level is achievable because AI can measure and take into account prior knowledge, and can continuously measure learner progress, in both competence and confidence, for each topic or learning objective throughout the learning journey. AI is thus able to adapt continuously to the learner, their individual pace of progress and their individual learning needs.
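As a rough sketch of what such an adaptive loop can look like (this is illustrative only, not any vendor’s algorithm; the topic names, weighting and update rules are all invented):

```python
# Illustrative sketch: pick the next question from the topic where the
# learner's blended competence/confidence estimate is weakest, then update
# that estimate from the answer. All names and rules here are invented.
from dataclasses import dataclass

@dataclass
class TopicState:
    competence: float  # estimated probability of answering correctly, 0..1
    confidence: float  # learner's self-reported confidence, 0..1

def next_topic(states: dict[str, TopicState]) -> str:
    # Target the weakest blend of competence and confidence.
    return min(states, key=lambda t: 0.7 * states[t].competence
                                     + 0.3 * states[t].confidence)

def update(state: TopicState, correct: bool, self_rating: float,
           lr: float = 0.2) -> None:
    # Simple exponential moving averages stand in for a real learner model.
    state.competence += lr * ((1.0 if correct else 0.0) - state.competence)
    state.confidence += lr * (self_rating - state.confidence)

states = {"hygiene": TopicState(0.4, 0.8), "phishing": TopicState(0.7, 0.5)}
topic = next_topic(states)            # -> "hygiene": low competence dominates
update(states[topic], correct=True, self_rating=0.6)
```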
Furthermore, AI can deliver learning that is not only fully personalized but also in line with evidence-based practice, such as variation (including multiple-media and multiple-model content), worked examples, retrieval practice, spaced practice, low-stakes quizzes and interleaving of topics.
Being able to deliver fully personalized learning journeys through AI is in itself a huge game changer for digital learning. It will drastically change the way we learn, and it will play a major role in the way employees will reskill and upskill in the future of work.
As we enter an era of more effective and efficient learning, asynchronously delivered, individually personalized and at scale, we are looking at the AI learning revolution.
However, there is even more to this. Often overlooked, and not covered in nearly as much depth, is the exciting and positive impact this will have on in-person tutoring, training, coaching, mentoring and workshops. We are looking at a whole new realm of learning analytics that the AI piece will deliver as an automatic output: strengths, weaknesses, competence, confidence and learner self-awareness.
For human-led sessions, this available data from the AI piece will allow organizations to better plan and deliver far more personalized in-person sessions than previously possible. Organizations will be able to identify learning needs and gaps and, where necessary, group participants accordingly, greatly increasing the potential learning and performance impact of these in-person sessions.
Where previously these in-person sessions, trainings and workshops would have been delivered with the same approach, and often the same slide deck and supporting materials and content, we will see a higher degree of personalization to the individual or learning group and their respective learning needs.
Utilizing the power of AI for learning is already revolutionizing the level of personalization learners receive, across industries and sectors. Learning impact and learning efficiency are being lifted to new, previously inaccessible levels, both digitally and for in-person learning. Learners, designers, trainers, coaches and educators: everyone benefits as we reach into our toolbox and deploy the new AI learning and automation tools available.
A few years from now, the Department of Defense’s Joint Warfighter Cloud Capability will be in place and operational as the foundation for the department’s JADC2 initiative, according to John Sherman, chief information officer of DOD and a 2022 Wash100 Award winner.
The department’s cloud adoption efforts have been difficult and remain difficult, Sherman said during his keynote address at the Potomac Officers Club’s 3rd Annual CIO Summit. Over the last year, DOD has pivoted from its first single-source cloud program, the Joint Enterprise Defense Infrastructure contract, to its new multi-cloud, multi-award JWCC.
Although JWCC has seen delays recently, the potential $9 billion effort aims to award contracts by the end of 2022.
“Within a few years, not only will the contract be in place, we’re going to be running workloads top secret, secret and unclassified across multiple clouds all the way from CONUS out to the very tactical edge,” Sherman forecasted.
Sherman also shared that the DOD’s Cybersecurity Maturity Model Certification program, which was recently rolled under his office, is expected to undergo further changes in the coming years to reduce the barriers for businesses to achieve compliance.
Of the CMMC updates, Sherman said, “I’m going to expect you to see something that is understandable, that makes maybe a little more sense of where we started – looking at maybe three versus five levels – but very importantly, for small and medium sized businesses, that we have taken steps to make this 800-171 implementation a bit more palatable and doable.”
Sherman’s priority in amending the CMMC program extends beyond the nation’s capital and is driven by his own family’s history of small business ownership. “We all live in the Beltway,” he said, addressing the in-person audience. “I’m thinking about the companies that are out in the Midwest, the West Coast, the Northeast, the Southeastern United States, you name it – folks who aren’t near the 495 here and how CMMC and DIB security looks to them.”
Cybersecurity, and CMMC in particular, are areas in which we’re “going to have to move the needle” as adversarial threats continue to escalate, he said. “This is where the Chinese and the Russians and non-state actors are trying to lift our information.”
As for other focus areas, buying down the department’s technical debt, getting zero trust on a good footing, electromagnetic spectrum operations, 5G and PNT – position, navigation and timing – all remain top priorities for the department in the near future, Sherman noted.
For many companies moving to the cloud, a focus on short-term gain leads to long-term pain and prevents them from capturing a much bigger portion of cloud’s estimated $1 trillion of value potential. It happens when IT departments, usually with the assistance of systems integrators, migrate a set of applications to the cloud as quickly as possible to capture initial gains. That short-term focus frequently has significant consequences.
Sidebar
What is a cloud foundation?
A cloud foundation is a set of design decisions, implemented in code files, that defines how the cloud is used, secured, and operated. We have found that the ideal cloud foundation is split into three layers to reduce risk, accelerate change, and provide appropriate levels of isolation:
Application patterns: Code artifacts that automate the secure, compliant, and standardized configuration and deployment of applications with similar functional and nonfunctional requirements through the use of infrastructure as code (IaC), pipeline as code (PiaC), policy as code (PaC), security as code (SaC), and compliance as code (CaC):
Policy as code: The translation of an organization’s standards and policies into actual executable code that secures the infrastructure and environment of the organization automatically in accordance with the policy
Compliance as code: A composed set of rules interpreted by a software-based policy engine that enforces compliance policy for a specific cloud environment (a minimal sketch of both ideas follows this sidebar)
Isolation zones: A set of separate CSP-specific zones (sometimes called landing zones) that isolate application environments to prevent concentration risk. Each zone contains CSP services, identity and access management (IAM), network isolation, capacity management, shared services scoped to the isolation zone, and change control where one or more related applications run
Base: A set of CSP-agnostic capabilities that are provided to a set of isolation zones, including network connectivity and routing; centralized firewall and proxy capabilities; identity standardization; enterprise logging, monitoring, and analytics (ELMA); shared enterprise services; golden-image (or primary-image) pipelines; and compliance enforcement
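To make the “as code” idea concrete, here is a minimal sketch of a policy-as-code check feeding a compliance-as-code gate. The rules, field names, and resource format are invented for illustration; real implementations typically use a dedicated policy engine rather than raw Python.

```python
# Minimal policy-as-code sketch: organizational standards expressed as
# executable checks that run against a proposed cloud resource before it
# is created. Rules and field names are illustrative only.
POLICIES = [
    ("storage must not be public",
     lambda r: not (r["type"] == "storage" and r.get("public_access", False))),
    ("all resources must be encrypted at rest",
     lambda r: r.get("encrypted", False)),
    ("resources must carry a cost-center tag",
     lambda r: "cost_center" in r.get("tags", {})),
]

def evaluate(resource: dict) -> list[str]:
    """Return the names of every policy the resource violates."""
    return [name for name, rule in POLICIES if not rule(resource)]

bucket = {"type": "storage", "public_access": True,
          "encrypted": True, "tags": {"cost_center": "finance"}}
violations = evaluate(bucket)
if violations:          # a compliance-as-code gate would block deployment here
    raise SystemExit(f"Deployment blocked: {violations}")
```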
The culprit? Lack of attention to the cloud foundation, that unsexy but critical structural underpinning that determines the success of a company’s entire cloud strategy (see sidebar, “What is a cloud foundation?”). Several large banks are paying that price: they now need to hire hundreds of cloud engineers because they did not put the right foundational architecture in place at the beginning.
Building a solid cloud foundation as part of a transformation engine does not mean delaying financial returns or investing significant resources. It just requires knowing what critical steps to take and executing them well. Our experience shows that companies that put in place a solid cloud foundation reap benefits in the form of a potential eightfold acceleration in the pace of cloud migration and adoption and a 50 percent reduction in migration costs over the long term—without delaying their cloud program.
A top consumer-packaged-goods (CPG) company was running into significant delays with its cloud-migration program—each application was taking up to two months to migrate. With portions of the business being divested, finance and legal were pressuring the company to isolate those units quickly. Realizing that deficiencies in its cloud foundation were causing the delays, the company made the counterintuitive decision to pause the migration and focus on strengthening the foundation.
For example, it automated critical infrastructure capabilities, deployed security software to automate compliance, deployed reusable application patterns, and created isolation zones to insulate workloads from one another and prevent potential problems in one zone from spreading. Once these improvements were in place, the company was able to migrate applications quickly, safely, and securely, with single applications taking days rather than weeks.
Ten actions to get your cloud foundation right
Building a strong cloud foundation is not the “cost of doing business.” It’s a critical investment that will reap significant rewards in terms of speed and value. The following ten actions are the most important in building this foundation.
1. Optimize technology to enable the fastest ‘idea-to-live’ process
Whether their workloads are in the cloud or in traditional data centers, many companies have outdated and bureaucratic work methods that introduce delays and frustrations. Your cloud foundation should be constructed to enable an idea’s rapid progression from inception to up and running in a production environment, without sacrificing safety and security.
In practice, that means automating as many steps of the production journey as possible, including sandbox requests, firewall changes, on-demand creation of large numbers of isolated networks, identity and access management (IAM), application registration, certificate generation, compliance, and so on. Automating these steps is as valuable in traditional data centers as it is in the cloud. But since the cloud offers unique tools that make automation easier, and because the move to cloud leads organizations to rethink their entire strategy, the beginning of the migration process is often the right time to change how IT operates.
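As a sketch of what that automation can look like under the hood, the chain below wires those steps into a single pipeline run. Every function name is a hypothetical stand-in for an internal API or infrastructure-as-code module, not any particular product:

```python
# Sketch: each manual step on the path to production becomes a callable
# automation, so a new application environment comes from one pipeline run
# instead of a chain of tickets. All functions are trivial stand-ins.
def create_sandbox(team):            return f"sandbox-{team}"
def create_isolated_network(env):    print(f"{env}: network created")
def open_firewall(env, ports):       print(f"{env}: opened ports {ports}")
def grant_iam_roles(env, team):      print(f"{env}: IAM roles for {team}")
def register_app(env, app):          print(f"{env}: registered {app}")
def issue_certificate(app):          print(f"certificate issued for {app}")
def run_compliance_checks(env):      return True   # compliance-as-code gate

def idea_to_live(team, app):
    """Drive an application from request to a compliant, running environment."""
    env = create_sandbox(team)
    create_isolated_network(env)
    open_firewall(env, ports=[443])
    grant_iam_roles(env, team)
    register_app(env, app)
    issue_certificate(app)
    if not run_compliance_checks(env):
        raise RuntimeError(f"{env} failed compliance gates")

idea_to_live("payments", "fraud-scoring-service")
```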
2. Design the cloud architecture so it can scale
If companies do it right, they can build a cloud architecture with a team of five people that scales to support 500 or more without significant changes. As the cloud footprint grows, a well-designed architecture should be able to accommodate more components, including more application patterns, isolation zones, and capabilities. Support for this scaling requires simple, well-designed interfaces between components. Because this is difficult to get right the first time, cloud-architecture engineers who have done it before at scale are a big advantage.
3. Build an organization that mirrors the architecture
According to Conway’s law, the way teams are organized will determine the shape of the technology they develop. IT organizations have a set structure for teams, and that can lead them to build things that don’t fit the shape of the cloud architecture.
For example, some companies have a separate cloud team for each of their business units. This can lead to each team building different cloud capabilities for its respective business unit and not architecting them for reuse by other business units. That can create slowdowns and even delays when changes made by one team affect the usage of another.
IT needs to design its cloud architecture first and then build an organization based on that structure. That means building out an organization that has a base team, isolation-zone teams, and application-pattern teams in order to reduce dependencies and redundancies between groups and ultimately deliver well-architected components at a lower cost.
4. Use the cloud that already exists
Many companies operate in fear of being locked into a specific cloud service provider (CSP), so they look for ways to mitigate that risk. A common pattern is an overreliance on containers, which can be expensive and time consuming and keep businesses from realizing the genuine benefits available from CSPs. One example of this was a company that created a container platform in the cloud as opposed to using the cloud’s own resiliency tools. When there was an outage, the impact was so large that it took multiple days to get its systems back online because the fault was embedded in the core of its non-cloud tooling.
There are other ways to mitigate CSP lock-in, such as defining a limited lock-in time frame and putting practices and systems in place that enable a rapid shift, if necessary. By attempting to build non-native resiliency capabilities, companies are essentially competing with CSPs without having their experience, expertise, or resources. The root of this issue is that companies still tend to treat CSPs as if they were hardware vendors rather than software partners.
5. Offer cloud products, not cloud services
It is common for companies to create internal cloud-service teams to help IT and the business use the cloud. Usually these service teams operate like fulfillment centers, responding to requests for access to approved cloud services. The business ends up using dozens of cloud services independently and without a coherent architecture, resulting in complexity, defects, and poor transparency into usage.
Instead, companies need dedicated product teams staffed with experienced cloud architects and engineers to create and manage simple, scalable, and reusable cloud products for application teams. The constraints imposed by aligning around cloud products can help to ensure that the business uses the correct capabilities in the correct way.
Once the product team has an inventory of cloud products, it can encourage application teams to use them to fast-track their cloud migration. The aptitude and interest of each application team, however, will influence how quickly and easily it adopts the new cloud products. Teams with little cloud experience, skill, or interest will need step-by-step assistance, while others will be able to move quickly with little guidance. The product team, therefore, needs to have an operating model that can support varying levels of application-team involvement in the cloud-migration journey.
One effective route offers three levels of engagement:
Concierge level: The engagement team builds everything needed by an application team.
Embedded level: Architects from the central cloud team are embedded into application teams to help them build the right application patterns.
Partner level: A partner team builds and runs its own isolation zone using the core capabilities from the base foundation, such as networking, logging, and identity.
By establishing the cloud products, the teams to support them, and the model by which application teams can engage with product teams, the business has the mechanisms in place to thoughtfully scale its cloud strategy.
6. Application teams should not reinvent how to design and deploy applications in the cloud
When organizations give free rein to application teams to migrate applications to the cloud provider, the result is a menagerie of disparate cloud capabilities and configurations that makes ongoing maintenance of the entire inventory difficult.
Instead, organizations should treat the deployment capabilities of an application as a stand-alone product, solving common problems once using application patterns. Application patterns can be responsible for configuring shared resources, standardizing deployment pipelines, and ensuring quality and security compliance. The number of patterns needed to support the inventory of applications can be small, thereby maximizing ROI. For example, one large bank successfully used just ten application patterns to satisfy 95 percent of its necessary use cases.
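A minimal sketch of what treating a pattern as a stand-alone product could look like follows; the pattern names, fields, and pipeline steps are illustrative, not drawn from the bank example above:

```python
# Illustrative sketch: an application pattern captured as a reusable artifact
# that standardizes deployment for every app with similar requirements.
from dataclasses import dataclass

@dataclass(frozen=True)
class ApplicationPattern:
    name: str
    runtime: str                                   # standardized runtime target
    pipeline: tuple = ("build", "scan", "test", "deploy")       # pipeline as code
    security_controls: tuple = ("encrypt-at-rest", "tls-only")  # security as code

PATTERNS = {
    "internal-web-app": ApplicationPattern("internal-web-app", "container"),
    "batch-etl": ApplicationPattern("batch-etl", "serverless",
                                    pipeline=("build", "scan", "deploy")),
}

def deploy(app: str, pattern_name: str) -> None:
    """Deploy an application by running its pattern's standardized pipeline."""
    p = PATTERNS[pattern_name]
    for step in p.pipeline:
        print(f"{app}: {step} ({p.runtime}, controls={p.security_controls})")

deploy("claims-portal", "internal-web-app")
```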
7. Provide targeted change management by using isolation zones
Isolation zones are cloud environments where applications live. In an effort to accelerate cloud migration, CSPs and systems integrators usually start with a single isolation zone to host all applications. That’s a high-risk approach, because configuration changes to support one application can unintentionally affect others. Going to the other extreme—one isolation zone for each application—prevents the efficient deployment of configuration changes, requiring the same work to be carried out across many isolation zones.
As a rule of thumb, a company should have from five to 100 isolation zones, depending on the size of the business and how it answers the following questions (a toy routing sketch follows the list):
Does the application face the internet?
What level of resiliency is required?
What is the risk-assurance level or security posture required for applications running in the zone?
Which business unit has decision rights on how the zone is changed for legal purposes?
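As a toy illustration, routing an application to a zone could be as simple as encoding the answers to those four questions. The zone names and rules below are invented; a real scheme would be derived from the organization’s own risk model:

```python
# Toy sketch: choose an isolation zone from the four questions above.
def choose_isolation_zone(internet_facing: bool, resiliency: str,
                          risk_level: str, business_unit: str) -> str:
    if risk_level == "high":
        # High-assurance workloads go to zones with stricter change control.
        return f"{business_unit}-restricted"
    if internet_facing:
        return f"{business_unit}-dmz"
    if resiliency == "critical":
        return f"{business_unit}-ha"   # multi-region, high-availability zone
    return f"{business_unit}-standard"

print(choose_isolation_zone(True, "standard", "moderate", "retail"))
# -> "retail-dmz"
```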
8. Build base capabilities once to use across every CSP
Most companies will be on multiple clouds. The mix often breaks down to about 60 percent of workloads in one, 30 percent in another, and the rest in a third. Rather than building the same base capabilities (for example, network connectivity and routing, identity services, logging, and monitoring) across all the CSPs, companies should build them once and reuse the capabilities across all isolation zones, even those that reside in a different CSP from the base.
9. Speed integration of acquisitions by putting in place another instance of the base foundation
During an acquisition, merging IT assets is difficult and time consuming. The cloud can speed the merger process and ease its complexity if the acquiring company creates an “integration-base foundation” that can run the assets of the company being acquired. This enables the IAM, security, network, and compliance policies already in place at the acquired company to continue, allowing its existing workloads to continue to function as designed. Over time, those workloads can be migrated from the integration base to the main base at a measured and predictable pace.
Using this approach, companies can efficiently operate their core cloud estate as well as the acquisition’s using the same software with a different configuration. This typically can reduce integration time from two to three years to closer to three to nine months.
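A minimal sketch of the “same software, different configuration” idea, with all names and values invented for illustration: the base foundation is one parameterized module, instantiated once for the core estate and once for the acquisition.

```python
# Sketch: one parameterized base-foundation module, two instantiations.
def provision_base(name, cidr, identity_provider, log_sink):
    """Stand-in for the IaC module that builds one base foundation."""
    return {"name": name, "network": cidr,
            "idp": identity_provider, "logging": log_sink}

core_base = provision_base("core", "10.0.0.0/8",
                           identity_provider="corp-idp",
                           log_sink="elma-core")

# The acquisition keeps its own IAM, network, and compliance settings while
# its workloads migrate to the core base at a measured pace.
integration_base = provision_base("integration-acme", "172.16.0.0/12",
                                  identity_provider="acme-idp",
                                  log_sink="elma-acme")
```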
10. Make preventative and automated cloud security and compliance the cornerstone
All software components and systems must go through a security layer. Traditional cybersecurity mechanisms are dependent on human oversight and review, which cannot match the tempo required to capture the cloud’s full benefits of agility and speed. For this reason, companies must adopt new security architectures and processes to protect their cloud workloads.
Security as code (SaC) has been the most effective approach to securing cloud workloads with speed and agility. The SaC approach defines cybersecurity policies and standards programmatically so they can be referenced automatically in the configuration scripts used to provision cloud systems. Systems running in the cloud can be evaluated against security policies to prevent changes that move the system out of compliance.
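As a minimal illustration of the SaC idea (the policy fields and change format below are invented, and production systems use dedicated policy engines rather than ad hoc scripts), the standard lives in code and every proposed change is checked against it before being applied:

```python
# Minimal security-as-code sketch: the standard is code, and proposed
# changes to a running system are evaluated against it automatically,
# so drift out of compliance is rejected before it happens.
SECURITY_STANDARD = {
    "tls_min_version": "1.2",
    "public_ingress_allowed": False,
    "logging_enabled": True,
}

def violates_standard(proposed_config: dict) -> list[str]:
    """Return every standard key the proposed configuration fails to meet."""
    return [key for key, required in SECURITY_STANDARD.items()
            if proposed_config.get(key) != required]

running = {"tls_min_version": "1.2", "public_ingress_allowed": False,
           "logging_enabled": True}
change = dict(running, public_ingress_allowed=True)   # a risky change request

problems = violates_standard(change)
if problems:
    print(f"change rejected, violates: {problems}")   # blocked automatically
```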
“Start small and grow” is a viable cloud strategy only if the fundamental building blocks are created from the start. Companies need to design and build their cloud foundation to provide a reusable, scalable platform that supports all the IT workloads destined for the cloud. This approach unlocks the benefits that the cloud offers and ultimately captures its full value.
Aaron Bawcom is a distinguished cloud architect in McKinsey’s Atlanta office, where Sebastian Becerra, Beau Bennett, and Bill Gregg are principal cloud architects.
The Kessel Run group is currently developing a playbook that would make it easier for organizations across the federal government to adopt engineering and security best practices.
The Air Force’s Kessel Run software factory wants to share its recipes for success with the whole federal government when it comes to engineering and security best practices.
“We’re talking to other software factories and part of our initiative is to release all these templates and playbooks that not just [Defense Department] entities can use, right, from a software factory perspective or just a program office perspective, but any agency can just grab them off our site and say, hey, this is how Kessel Run does chaos engineering, this is how we do performance engineering,” said Omar Marrero, Kessel Run’s deputy test chief and the chaos and performance tech lead.
Marrero told FCW that Kessel Run, which is part of the Air Force Life Cycle Management Center and focuses on software development and acquisitions, is routinely looking for partnerships, consulting with organizations that are looking to “start their own chaos engineering journey” by sharing Kessel Run’s templates, playbooks, or tech stacks.
The goal, he said, is to bring industry best practices across engineering, security, and performance to an organization “like a vaccine that you’re injecting into a system” to bolster preparedness.
That means working to prove or test assumptions and developing a process to address worst-case scenarios: “just think of it as pre-emptive practicing where you can practice fire drills: like we know what happens if all sudden we have a surge, does it happen, did we get an alert to put resources on it, that kind of stuff.”
It might sound routine, but in practice, the testing concept has already proven beneficial through a partnership with the General Services Administration, which recently teamed up with Kessel Run to make sure the Cloud.gov service could handle a surge in users.
Lindsay Young, the acting director of Cloud.gov, told FCW that the partnership was a “fantastic opportunity.”
“It was so much fun,” Young said, “really spending the time to understand each other’s setups and things like that, and then figure out bigger and bigger ways to make trouble and see if we could stand up to it.”
Young said the aim was to ensure Cloud.gov users could get a seamless experience and that testing out extreme scenarios like scaling ability was “invaluable” because “you don’t know you can do something until you can prove you can do something.”
The next challenge, Young said, is finding and then partnering with other federal civilian agencies that have similar problems. Kessel Run will be doing the same as it is currently partnering with other software factories across the Defense Department, including the Navy’s Black Pearl, but is also planning to release its playbooks to broaden its impact.
The Air Force’s software factory darling has long been heralded as a Defense Department success story and a model for addressing emerging technology needs, with plans to expand its influence in the Air Force and beyond.
But the release of the guidebooks, which are being drafted, could speed that along. Marrero said there isn’t a hard release date and many of the materials being drafted are also being worked on with other software factories, including the Army. The newer concept of security chaos will also be included.
“That’s what we do. We just share what we learn. And if another opportunity comes like this one where we’re going to collaborate again, we’ll jump on that,” he said. “And hopefully, we can spread this chaos engineering thing to the rest of the government and that initiative helps us deliver more resilient stuff.”
A studious adversary may be hellbent on destruction, and a comprehensive approach is needed to successfully govern the protection of critical infrastructure, specialists say.
The discovery of a malware tool targeting the operational technology in critical infrastructure like power plants and water treatment facilities is highlighting issues policymakers are grappling with in efforts to establish a regulatory regime for cybersecurity.
The tool enables the adversary to move laterally across industrial control system environments by effectively targeting their crucial programmable logic controllers.
“There are only a few places that can build something like this,” said Bryson Bort, CEO and founder of cybersecurity firm Scythe. “This is not the kind of thing that the script kiddie—the amateur—can all of a sudden gen up and be like, ‘look, I’m doing things against PLCs.’ These are very complicated machines.”
“These are not protocols you can just go up, and, like, do against, like [web application penetration testing,]” Bort said. “So the complexity of this cannot be [overstated], the comprehensive nature of this particular malware cannot be [overstated]. This thing, I think calling it a shopping mall doesn’t quite capture it right. This was Mall of America. This thing had almost everything in it and the ability to add even more.”
Bort said the design of the tool suggests a switch in the mindset of the adversary—likely the Kremlin in the estimation of cyber intelligence analysts, although U.S. officials have not attributed the tool’s origin.
He connected the tool’s emergence to “what we’re seeing here in phase three on the ground in the Ukraine, which is the Russians seem to be going almost with a scorched earth approach. They are killing civilians, they are destroying the infrastructure. And that’s a complete, almost, 180 from what we saw within the first few days of the war where it looked like … they thought they were gonna kind of stroll into the country, take everything. And you don’t want to destroy what you’re about to take. And now it seems to be just to cause destruction.”
In response to a question about the role of global vendors to the industrial control systems community, and potentially limiting their production to trustworthy partner nations, Bort argued, if there is a need for regulations, the focus should be on the owners and operators of the critical infrastructure.
“This isn’t a vendor problem,” he said. “This is about ICS asset owners, and asset owners are working closely with their respective governments … and different countries of course, have different levels of regulation or partial regulation. We’re in a kind of partially regulated area with likely more regulation coming in these sectors. But I would say it’s the asset owners, not the vendors that I’d be looking to.”
But connected industrial control system environments are complicated, with many different vendors in the supply chain, including commercial information technologies like cloud services, which adversaries are increasingly targeting for their potential to create an exponential effect.
“Security matters on all of these sides,” Trey Herr, director of the Atlantic Council’s Cyber Statecraft Initiative, told Nextgov. “The vendors are the point of greatest regulatory leverage so addressing cybersecurity at the design stage can have the widest impact but with least understanding of the specific environments in which they’ll be used. Asset owners have the best picture about how they use this technology and security matters here in how they deploy and manage the security of these devices. Vendors might be OT focused or IT focused, like cloud vendors, so regulators need to keep focused on both communities.”
That is something lawmakers are currently deliberating, with the goal of introducing legislation this summer. But Herr said more of the community’s attention is currently on the asset-owner level than on the IT supply-chain elements that are also involved.
“We have a lot more effort and energy on the asset owner level with the Sector Risk Management Agencies at the moment than other parties, especially the IT vendors,” he said.