healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

Declaration for the Future of the Internet – U.S. State Department

Posted by timmreardon on 05/02/2022
Posted in: Uncategorized.

The Internet has been revolutionary. It provides unprecedented opportunities for people around the world to connect and to express themselves, and continues to transform the global economy, enabling economic opportunities for billions of people. Yet it has also created serious policy challenges. Globally, we are witnessing a trend of rising digital authoritarianism in which some states act to repress freedom of expression, censor independent news sites, interfere with elections, promote disinformation, and deny their citizens other human rights. At the same time, millions of people still face barriers to access, and cybersecurity risks and threats undermine the trust and reliability of networks.

Democratic governments and other partners are rising to the challenge. Today, the United States with more than 60 partners from around the globe launched the Declaration for the Future of the Internet.

This Declaration represents a political commitment among Declaration partners to advance a positive vision for the Internet and digital technologies. It reclaims the promise of the Internet in the face of the global opportunities and challenges presented by the 21st century. It also reaffirms and recommits its partners to a single global Internet – one that is truly open and fosters competition, privacy, and respect for human rights. The Declaration’s principles include commitments to:

  • Protect human rights and fundamental freedoms of all people;
  • Promote a global Internet that advances the free flow of information;
  • Advance inclusive and affordable connectivity so that all people can benefit from the digital economy;
  • Promote trust in the global digital ecosystem, including through protection of privacy; and
  • Protect and strengthen the multi-stakeholder approach to governance that keeps the Internet running for the benefit of all.

In signing this Declaration, the United States and partners will work together to promote this vision and its principles globally, while respecting each other’s regulatory autonomy within our own jurisdictions and in accordance with our respective domestic laws and international legal obligations.

Over the last year, the United States has worked with partners from all over the world – including civil society, industry, academia, and other stakeholders – to reaffirm the vision of an open, free, global, interoperable, reliable, and secure Internet and to reverse negative trends in this regard. Under this vision, people everywhere will benefit from an Internet that is unified and unfragmented; facilitates global communications and commerce; and supports freedom, innovation, education and trust.

DOWNLOAD THE DECLARATION (PDF) [62 KB]

Article link: https://www.state.gov/declaration-for-the-future-of-the-internet/

THE DECLARATION FOR THE FUTURE OF THE INTERNET PARTNERS

Albania | Andorra | Argentina | Australia | Austria | Belgium | Bulgaria | Cabo Verde | Canada | Colombia | Costa Rica | Croatia | Cyprus | Czech Republic | Denmark | Dominican Republic | Estonia | The European Commission | Finland | France | Georgia | Germany | Greece | Hungary | Iceland | Ireland | Israel | Italy | Jamaica | Japan | Kenya | Kosovo | Latvia | Lithuania | Luxembourg | Maldives | Malta | Marshall Islands | Micronesia | Moldova | Montenegro | Netherlands | New Zealand | Niger | North Macedonia | Palau | Peru | Poland | Portugal | Romania | Serbia | Slovakia | Slovenia | Spain | Sweden | Taiwan | Trinidad and Tobago | the United Kingdom | Ukraine | Uruguay


OPEN CALL FOR PARTICIPATION

The Declaration remains open to all governments or relevant authorities willing to commit and implement its vision and principles. Contact the nearest U.S. embassy, mission, or representative to learn more.

From seawater to drinking water, with the push of a button MIT News

Posted by timmreardon on 05/01/2022
Posted in: Uncategorized.

Researchers build a portable desalination unit that generates clear, clean drinking water without the need for filters or high-pressure pumps.

Adam Zewe | MIT News Office

April 28, 2022

MIT researchers have developed a portable desalination unit, weighing less than 10 kilograms, that can remove particles and salts to generate drinking water.

The suitcase-sized device, which requires less power to operate than a cell phone charger, can also be driven by a small, portable solar panel, which can be purchased online for around $50. It automatically generates drinking water that exceeds World Health Organization quality standards. The technology is packaged into a user-friendly device that runs with the push of one button.

Unlike other portable desalination units that require water to pass through filters, this device utilizes electrical power to remove particles from drinking water. Eliminating the need for replacement filters greatly reduces the long-term maintenance requirements.

This could enable the unit to be deployed in remote and severely resource-limited areas, such as communities on small islands or aboard seafaring cargo ships. It could also be used to aid refugees fleeing natural disasters or by soldiers carrying out long-term military operations.

“This is really the culmination of a 10-year journey that I and my group have been on. We worked for years on the physics behind individual desalination processes, but pushing all those advances into a box, building a system, and demonstrating it in the ocean, that was a really meaningful and rewarding experience for me,” says senior author Jongyoon Han, a professor of electrical engineering and computer science and of biological engineering, and a member of the Research Laboratory of Electronics (RLE).

Joining Han on the paper are first author Junghyo Yoon, a research scientist in RLE; Hyukjin J. Kwon, a former postdoc; SungKu Kang, a postdoc at Northeastern University; and Eric Brack of the U.S. Army Combat Capabilities Development Command (DEVCOM). The research has been published online in Environmental Science and Technology.

Filter-free technology

Commercially available portable desalination units typically require high-pressure pumps to push water through filters, which are very difficult to miniaturize without compromising the energy-efficiency of the device, explains Yoon.

Instead, their unit relies on a technique called ion concentration polarization (ICP), which was pioneered by Han’s group more than 10 years ago. Rather than filtering water, the ICP process applies an electrical field to membranes placed above and below a channel of water. The membranes repel positively or negatively charged particles — including salt molecules, bacteria, and viruses — as they flow past. The charged particles are funneled into a second stream of water that is eventually discharged.

The process removes both dissolved and suspended solids, allowing clean water to pass through the channel. Since it only requires a low-pressure pump, ICP uses less energy than other techniques.

But ICP does not always remove all the salts floating in the middle of the channel. So the researchers incorporated a second process, known as electrodialysis, to remove remaining salt ions.

Yoon and Kang used machine learning to find the ideal combination of ICP and electrodialysis modules. The optimal setup includes a two-stage ICP process, with water flowing through six modules in the first stage then through three in the second stage, followed by a single electrodialysis process. This minimized energy usage while ensuring the process remains self-cleaning.
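The paper's actual optimization relied on machine learning over experimental data. As a loose, purely illustrative sketch of the underlying idea, a brute-force search over candidate module counts against an invented energy model (nothing below comes from the paper) might look like:

```python
from itertools import product

# Purely illustrative toy energy model, invented for this sketch: per-module
# energy cost grows with module count, while a "residual salt" penalty shrinks
# as more ICP and electrodialysis (ED) capacity is added.
def energy_per_liter(stage1_icp, stage2_icp, ed_units):
    module_cost = 0.5 * stage1_icp + 1.2 * stage2_icp + 3.0 * ed_units
    residual_salt_penalty = 40.0 / (stage1_icp * stage2_icp + 4.0 * ed_units)
    return module_cost + residual_salt_penalty

# Score every candidate configuration and keep the cheapest one.
candidates = product(range(1, 9), range(1, 7), range(1, 4))
best = min(candidates, key=lambda cfg: energy_per_liter(*cfg))
```

The point is only the shape of the search: score every feasible configuration and keep the one with the lowest energy per liter, subject to whatever operating constraints (such as self-cleaning) apply.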

“While it is true that some charged particles could be captured on the ion exchange membrane, if they get trapped, we just reverse the polarity of the electric field and the charged particles can be easily removed,” Yoon explains.

They shrunk and stacked the ICP and electrodialysis modules to improve their energy efficiency and enable them to fit inside a portable device. The researchers designed the device for nonexperts, with just one button to launch the automatic desalination and purification process. Once the salinity level and the number of particles decrease to specific thresholds, the device notifies the user that the water is drinkable.

The researchers also created a smartphone app that can control the unit wirelessly and report real-time data on power consumption and water salinity.

Beach tests

After running lab experiments using water with different salinity and turbidity (cloudiness) levels, they field-tested the device at Boston’s Carson Beach.

Yoon and Kwon set the box near the shore and tossed the feed tube into the water. In about half an hour, the device had filled a plastic drinking cup with clear, drinkable water.

“It was successful even in its first run, which was quite exciting and surprising. But I think the main reason we were successful is the accumulation of all these little advances that we made along the way,” Han says.

The resulting water exceeded World Health Organization quality guidelines, and the unit reduced the amount of suspended solids by at least a factor of 10. Their prototype generates drinking water at a rate of 0.3 liters per hour and requires only 20 watt-hours of energy per liter.
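Taking the reported figures at face value (a production rate of 0.3 liters per hour and roughly 20 watt-hours of energy per liter), a quick back-of-envelope check shows why a small solar panel suffices:

```python
# Back-of-envelope check (illustrative, not from the paper): at the reported
# production rate and specific energy, the average power draw is tiny.
production_rate_l_per_h = 0.3    # liters of drinking water per hour
specific_energy_wh_per_l = 20    # watt-hours consumed per liter produced

average_power_w = production_rate_l_per_h * specific_energy_wh_per_l
print(average_power_w)  # 6.0 W -- well under a typical ~10 W phone charger
```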

“Right now, we are pushing our research to scale up that production rate,” Yoon says.

One of the biggest challenges of designing the portable system was engineering an intuitive device that could be used by anyone, Han says.

Yoon hopes to make the device more user-friendly and improve its energy efficiency and production rate through a startup he plans to launch to commercialize the technology.

In the lab, Han wants to apply the lessons he’s learned over the past decade to water-quality issues that go beyond desalination, such as rapidly detecting contaminants in drinking water.

“This is definitely an exciting project, and I am proud of the progress we have made so far, but there is still a lot of work to do,” he says.

For example, while “development of portable systems using electro-membrane processes is an original and exciting direction in off-grid, small-scale desalination,” the effects of fouling, especially if the water has high turbidity, could significantly increase maintenance requirements and energy costs, notes Nidal Hilal, professor of engineering and director of the New York University Abu Dhabi Water research center, who was not involved with this research.

“Another limitation is the use of expensive materials,” he adds. “It would be interesting to see similar systems with low-cost materials in place.”

The research was funded, in part, by the DEVCOM Soldier Center, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), the Experimental AI Postdoc Fellowship Program of Northeastern University, and the Roux AI Institute.

Article link: https://news.mit.edu/2022/portable-desalination-drinking-water-0428

https://pubs.acs.org/doi/10.1021/acs.est.1c08466?ref=pdf

Pentagon eyeing the cloud to help firms meet CMMC cybersecurity requirements – Breaking Defense

Posted by timmreardon on 04/28/2022
Posted in: Uncategorized.

By Jaspreet Gill on April 21, 2022 at 2:26 PM

Story updated 4/22/2022 at 10:25 am ET to include a clarification that Bostjanick was referring to May 2023 for when a new CMMC interim rule might go into effect.

WASHINGTON: The Pentagon is assessing whether to develop cloud service offerings to help contractors meet requirements for its cyber certification program, according to the Defense Department’s deputy chief information officer. 

The Cybersecurity Maturity Model Certification (CMMC) program aims to strengthen the cybersecurity of the defense industrial base by holding contractors accountable for following best practices to protect their network, but can be an onerous undertaking both for the companies and their assessors. The Pentagon last November rolled out CMMC version 2.0, streamlining the security tiers of the program from five to three and resulting in some requirements changes for its first two levels. 

David McKeown, DoD deputy chief information officer and senior information security officer, whose office leads the CMMC effort, said Tuesday at the AFCEA Cyber Mission Summit he’s looking for “innovative solutions” to help contractors meet at least 85 out of 110 controls in NIST Special Publication 800-171 in order to achieve certification required for Level 2 of CMMC. 

“For instance, in the CMMC realm, rather than go out and assess each and every network of our industry partners, I’m kind of keen on establishing some sort of cloud services that either achieve many of the 110 controls in [NIST SP] 800-171 or all of them that industry partners can consume to store our data and safeguard our data without us having to go out onto your network,” McKeown said. 

Pentagon CIO John Sherman in February said he hoped the upgraded CMMC program would raise the cybersecurity “waterline” across DoD to keep potential adversaries away from critical data. 

“This is basic hygiene to raise the water level to make sure we can protect our sensitive data so that when our service members have to go into action, they’re not going to have an unfair position because our adversary’s already stolen key data and technologies that’ll put them at an advantage,” Sherman said at the AFCEA Space Force IT conference. 

RELATED: Pentagon CIO Hopes CMMC 2.0 Will ‘Raise’ Cybersecurity ‘Waterline’

Meanwhile, CMMC’s policy director said Wednesday another interim rule for the program could come in May of next year. The Pentagon released its first interim rule, which defined some mandatory compliance requirements, in September 2020 for the first version of CMMC, prompting hundreds of comments and criticism from industry regarding the timeframe and complexity of the program. 

“Our anticipation is that we will be allowed to have another interim rule like we did last time,” Stacy Bostjanick, CMMC policy director for the Office of the Undersecretary of Defense for Acquisition and Sustainment, said. “We’re hoping that the interim rule will go into effect by May. In fact, my team is very frustrated with me today because I’m sitting here with you guys and they’re stuck in a room going through a rule that’s like hundreds of pages long.” 

Once the rulemaking process is over, she said she hopes “there will be only one more aspect that we’ll have to address and that will be the international partners.” 

“That will probably take some rulemaking effort,” Bostjanick said. “We’re working through how that’s going to work in getting that laying flat today.”

Article link: https://breakingdefense-com.cdn.ampproject.org/c/s/breakingdefense.com/2022/04/pentagon-eyeing-the-cloud-to-help-firms-meet-cmmc-cybersecurity-requirements/amp/

The AI Learning Revolution And The End Of One-Size-Fits-All Learning – Forbes

Posted by timmreardon on 04/28/2022
Posted in: Uncategorized.

Markus Bernhardt, Forbes Councils Member, Forbes Technology Council. Council Post | Membership (fee-based)

Apr 28, 2022, 06:30am EDT

Markus Bernhardt is the Chief Evangelist at Obrizum, pioneering deeptech AI digital learning solutions for corporate learning.

Google Director of Research Peter Norvig famously stated in his keynote speech for the Association for Learning Technology Conference in 2007 that if you only had to read one research paper to learn about learning, it would be Benjamin Bloom’s The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.

That piece of advice had been given to Norvig, as he prepared the keynote, by his friend Hal Abelson, an educator and professor at MIT. Bloom’s well-known and often-cited paper reports the outcome of an experiment comparing the efficacy of three types of teaching: the conventional lecture, the conventional lecture with regular testing and feedback, and one-to-one tuition.

Using the “straight lecture” as the mean, Bloom found an 84% increase in mastery above the mean for a “formative feedback” approach to teaching and an astonishing 98% increase in mastery for one-to-one tuition. While intuitively one might of course have expected one-to-one tuition to stand out in such an experiment, the measured impact compared to the other two methods must surely be considered staggering, and we can immediately see why Hal would have chosen to recommend this paper to Peter.

Highly personalized tuition, for individuals or small groups, along with many forms of training and practice, had of course been the preferred modality long before this piece of research, albeit an extraordinarily expensive one and thus not widely available. Bloom’s research, however, provided the data that cemented one-to-one tuition and all forms of personalization as the gold standard in learning.

Lecturing is a form of one-size-fits-all learning, and it was invented in 1350, a time when books were extremely expensive and not available to the masses since they were arduously written by hand, traditionally by monks. The idea of a lecture arose because this approach would allow one person to read the material out loud to a large group of learners, allowing everyone in the group to access the content and learn. Like all one-size-fits-all learning, lecturing aims to cater to the “average” learner in a group. This idea of an “average” person turns out to be woefully inadequate once one realizes that, given human complexity, the average will almost never adequately describe any given individual, an idea explored in great detail by Todd Rose in his book The End of Average: How We Succeed in a World That Values Sameness.

In recent years, the term one-size-fits-all has mainly been used to denounce click-through e-learning, where such a one-size-fits-all approach and delivery has been the norm—and the pain for employees across the globe and across sectors. Most notably, this has been the case for annual compliance training, delivered digitally and as a tick-box exercise to allow organizations to present completion certificates to regulators.

This is already changing, and fast gathering momentum, as artificial intelligence (AI) transforms not only the way we learn but also the way trainers train, coaches coach and teachers teach. Through AI technology, individual learners can now access fully personalized programs of learning, in their own time, covering almost any topic via self-guided learning. Personalization at this granular level is possible because AI can measure and take into account prior knowledge, and continuously track learner progress, in both competence and confidence, throughout the learning journey, for each topic or learning objective. AI can thus continuously adapt to the learner, their individual pace of progress and their individual learning needs.

Furthermore, AI can not only deliver the learning fully personalized but also in line with evidence-based theory, such as variation (including multiple-media and multiple model content), utilizing worked examples, retrieval practice, spaced practice, low-stakes quizzes and interleaving of topics.

Being able to deliver fully personalized learning journeys through AI is in itself a huge game changer for digital learning. It will drastically change the way we learn, and it will play a major role in the way employees will reskill and upskill in the future of work.

As we enter an era of more effective and efficient learning, asynchronously delivered, individually personalized and at scale, we are looking at the AI learning revolution.

However, there is even more to this. Often overlooked, and not covered in nearly as much depth, is the exciting and positive impact this will have on in-person tutoring, training, coaching, mentoring and workshops. We are looking at a whole new realm of learning analytics that the AI piece will deliver as an automatic output: strengths, weaknesses, competence, confidence and learner self-awareness.

For human-led sessions, this available data from the AI piece will allow organizations to better plan and deliver far more personalized in-person sessions than previously possible. Organizations will be able to identify learning needs and gaps and, where necessary, group participants accordingly, greatly increasing the potential learning and performance impact of these in-person sessions.

Where previously these in-person sessions, trainings and workshops would have been delivered with the same approach, and often the same slide deck and supporting materials and content, we will see a higher degree of personalization to the individual or learning group and their respective learning needs.

Utilizing the power of AI for learning is already revolutionizing the level of personalization the learner is receiving, across industries and sectors. Learning impact and learning efficiency are being lifted to excitingly new, previously inaccessible levels — both digitally as well as for in-person learning. Learners, designers, trainers, coaches, and educators — everyone is benefiting as we reach into our toolbox and deploy the new AI learning and automation tools available.

Article link: https://www.forbes.com/sites/forbestechcouncil/2022/04/28/the-ai-learning-revolution-and-the-end-of-one-size-fits-all-learning/amp/


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

DOD CIO John Sherman on Department’s Future Priorities, JWCC & CMMC – GOVCON Wire

Posted by timmreardon on 04/27/2022
Posted in: Uncategorized.

A few years from now, the Department of Defense’s Joint Warfighter Cloud Capability will be in place and operational as the foundation for the department’s JADC2 initiative, according to John Sherman, chief information officer of DOD and a 2022 Wash100 Award winner.

The department’s cloud adoption efforts have been difficult and remain difficult, Sherman said during his keynote address at the Potomac Officers Club’s 3rd Annual CIO Summit. Over the last year, DOD has pivoted from its first single-source cloud program, the Joint Enterprise Defense Infrastructure contract, to its new multi-cloud, multi-award JWCC.

Although JWCC has seen delays recently, the potential $9 billion effort aims to award contracts by the end of 2022.

“Within a few years, not only will the contract be in place, we’re going to be running workloads top secret, secret and unclassified across multiple clouds all the way from CONUS out to the very tactical edge,” Sherman forecasted.

Sherman also shared that the DOD’s Cybersecurity Maturity Model Certification program, which was recently rolled under his office, is expected to undergo further changes in the coming years to reduce the barriers for businesses to achieve compliance.

Of the CMMC updates, Sherman said, “I’m going to expect you to see something that is understandable, that makes maybe a little more sense of where we started – looking at maybe three versus five levels – but very importantly, for small and medium sized businesses, that we have taken steps to make this 800-171 implementation a bit more palatable and doable.”

Sherman’s priority in amending the CMMC program expands beyond the nation’s capital and is driven by his own family’s history of small business ownership. “We all live in the Beltway,” he said, addressing the in-person audience. “I’m thinking about the companies that are out in the Midwest, the West coast, the Northeast, the Southeastern United States, you name it – folks who aren’t near the 495 here and how CMMC and DIB security looks to them.”

Cybersecurity, and CMMC in particular, are areas in which we’re “going to have to move the needle” as adversarial threats continue to escalate, he said. “This is where the Chinese and the Russians and non-state actors are trying to lift our information.” 

In regards to other focus areas, buying down the department’s technical debt, getting zero trust on good footing, electromagnetic spectrum operations, 5G and PNT – or position, navigation and timing – all still remain top priorities for the department in the near future, Sherman noted.

Article link: https://www.govconwire.com/2022/04/dod-cio-john-sherman-on-departments-future-priorities-jwcc-and-cmmc/

Cloud foundations: Ten commandments for faster—and more profitable—cloud migrations – McKinsey

Posted by timmreardon on 04/26/2022
Posted in: Uncategorized.

April 21, 2022 | Article

By Aaron Bawcom, Sebastian Becerra, Beau Bennett, and Bill Gregg

Cloud migrations flounder quickly unless organizations invest in building the right cloud foundations.

For many companies moving to the cloud, a focus on short-term gain leads to long-term pain and prevents them from capturing a much bigger portion of cloud’s estimated $1 trillion of value potential. It happens when IT departments, usually with the assistance of systems integrators, migrate a set of applications to the cloud as quickly as possible to capture initial gains. That short-term focus frequently has significant consequences.

Sidebar

What is a cloud foundation?

A cloud foundation is a set of design decisions that is implemented in code files and defines how the cloud is used, secured, and operated. We have found that the ideal cloud foundation is split into three layers to reduce risk, accelerate change, and provide appropriate levels of isolation (exhibit):

  1. Application patterns: Code artifacts that automate the secure, compliant, and standardized configuration and deployment of applications with similar functional and nonfunctional requirements through the use of infrastructure as code (IaC), pipeline as code (PiaC), policy as code (PaC), security as code (SaC), and compliance as code (CaC):
    • Policy as code: The translation of an organization’s standards and policies into actual executable code that secures the infrastructure and environment of the organization automatically in accordance with the policy
    • Security as code: Software that verifies the configuration of an infrastructure definition before and after deployment to meet a particular defined standard (for more, see “Security as code: The best (and maybe only) path to securing cloud applications and systems”)
    • Compliance as code: A composed set of rules interpreted by a software-based policy engine that enforces compliance policy for a specific cloud environment
  2. Isolation zones: A set of separate CSP-specific zones (sometimes called landing zones) that isolate application environments to prevent concentration risk. Each zone contains CSP services, identity and access management (IAM), network isolation, capacity management, shared services scoped to the isolation zone, and change control where one or more related applications run
  3. Base: A set of CSP-agnostic capabilities that are provided to a set of isolation zones, including network connectivity and routing; centralized firewall and proxy capabilities; identity standardization; enterprise logging, monitoring, and analytics (ELMA); shared enterprise services; golden-image (or primary-image) pipelines; and compliance enforcement

Exhibit: There are three layers in the ideal cloud foundation.
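Of these “as code” ideas, policy as code is the most concrete. Below is a minimal, purely illustrative sketch in Python (the resource schema and the policy are invented for the example; production systems typically use a dedicated policy engine such as Open Policy Agent):

```python
# A minimal policy-as-code sketch. Policies are plain functions that inspect an
# infrastructure definition before deployment and report violations.
def require_encryption(resource):
    """Policy: every storage resource must have encryption enabled."""
    if resource.get("type") == "storage" and not resource.get("encrypted", False):
        return f"{resource['name']}: storage must be encrypted"
    return None

def evaluate(resources, policies):
    """Run every policy against every resource; collect the violations."""
    return [v for r in resources for p in policies if (v := p(r))]

infra = [
    {"name": "logs-bucket", "type": "storage", "encrypted": True},
    {"name": "scratch-bucket", "type": "storage"},  # violates the policy
]
violations = evaluate(infra, [require_encryption])
print(violations)  # -> ['scratch-bucket: storage must be encrypted']
```

Because the policy is executable code, it can run automatically in a deployment pipeline and block non-compliant changes before they reach the cloud.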

The culprit? Lack of attention to the cloud foundation, that unsexy but critical structural underpinning that determines the success of a company’s entire cloud strategy (see sidebar, “What is a cloud foundation?”). Several large banks are paying that price, resulting in the need to hire hundreds of cloud engineers because they did not put the right foundational architecture in place at the beginning.

Building a solid cloud foundation as part of a transformation engine does not mean delaying financial returns or investing significant resources. It just requires knowing what critical steps to take and executing them well. Our experience shows that companies that put in place a solid cloud foundation reap benefits in the form of a potential eightfold acceleration in the pace of cloud migration and adoption and a 50 percent reduction in migration costs over the long term—without delaying their cloud program.

A top consumer-packaged-goods (CPG) company was running into significant delays with its cloud-migration program: each application was taking up to two months to migrate. With portions of the business being divested, finance and legal were pressuring the company to separate those units quickly. Realizing that deficiencies in its cloud foundation were causing the delays, the company made the counterintuitive decision to pause the migration and focus on strengthening the cloud foundation.

For example, it automated critical infrastructure capabilities, deployed security software to automate compliance, deployed reusable application patterns, and created isolation zones to insulate workloads from one another and prevent potential problems in one zone from spreading. Once these improvements were in place, the company was able to migrate applications quickly, safely, and securely, with single applications taking days rather than weeks.

Ten actions to get your cloud foundation right

Building a strong cloud foundation is not the “cost of doing business.” It’s a critical investment that will reap significant rewards in terms of speed and value. The following ten actions are the most important in building this foundation.

1. Optimize technology to enable the fastest ‘idea-to-live’ process

Whether their workloads are in the cloud or in traditional data centers, many companies have outdated and bureaucratic work methods that introduce delays and frustrations. Your cloud foundation should be constructed to enable an idea’s rapid progression from inception to up and running in a production environment, without sacrificing safety and security.

In practice, that means automating as many steps of the production journey as possible, including sandbox requests, firewall changes, on-demand creation of large numbers of isolated networks, identity and access management (IAM), application registration, certificate generation, compliance, and so on. Automating these steps is as valuable in traditional data centers as it is in the cloud. But since the cloud offers unique tools that make automation easier, and because the move to cloud leads organizations to rethink their entire strategy, the beginning of the migration process is often the right time to change how IT operates.

2. Design the cloud architecture so it can scale

If companies do it right, they can build a cloud architecture based on five people that can scale up to support 500 or more without significant changes. As the cloud footprint grows, a well-designed architecture should be able to accommodate more components, including more application patterns, isolation zones, and capabilities. Support for this scaling requires simple, well-designed interfaces between components. Because this is difficult to get right the first time, cloud-architecture engineers who have done it before at scale are a big advantage.

3. Build an organization that mirrors the architecture

According to Conway’s law, the way teams are organized will determine the shape of the technology they develop. IT organizations have a set structure for teams, and that can lead them to build things that don’t fit the shape of the cloud architecture.

For example, some companies have a separate cloud team for each of their business units. This can lead to each team building different cloud capabilities for its respective business unit without architecting them for reuse by other business units. That can create duplicated effort and delays when changes made by one team affect the usage of another.

IT needs to design its cloud architecture first and then build an organization based on that structure. That means building out an organization that has a base team, isolation-zone teams, and application-pattern teams in order to reduce dependencies and redundancies between groups and ultimately deliver well-architected components at a lower cost.
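One way to picture Conway's law in practice is a simple check that every architectural component has exactly one owning team. The component and team names below are illustrative, mirroring the base, isolation-zone, and application-pattern split described above.

```python
# Illustrative mapping: one team per architectural component, so the org
# chart mirrors the cloud architecture rather than the business units.
ARCHITECTURE = ["base", "isolation-zones", "app-patterns"]

TEAMS = {
    "base": "base team",
    "isolation-zones": "isolation-zone teams",
    "app-patterns": "application-pattern teams",
}

def uncovered_components():
    """Components with no owning team signal an org/architecture mismatch."""
    return [c for c in ARCHITECTURE if c not in TEAMS]
```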

4. Use the cloud that already exists

Many companies operate in fear of being locked into a specific cloud service provider (CSP), so they look for ways to mitigate that risk. A common pattern is an overreliance on containers, which can be expensive and time consuming and keep businesses from realizing the genuine benefits available from CSPs. One example of this was a company that created a container platform in the cloud as opposed to using the cloud’s own resiliency tools. When there was an outage, the impact was so large that it took multiple days to get its systems back online because the fault was embedded in the core of its non-cloud tooling.

There are better ways to mitigate CSP lock-in, such as defining a limited lock-in time frame and putting practices and systems in place that enable a rapid shift if necessary. By attempting to build non-native resiliency capabilities, companies are essentially competing with CSPs without having their experience, expertise, or resources. The root of this issue is that companies still tend to treat CSPs as if they were hardware vendors rather than software partners.

5. Offer cloud products, not cloud services

It is common for companies to create internal cloud-service teams to help IT and the business use the cloud. Usually these service teams operate like fulfillment centers, responding to requests for access to approved cloud services. The business ends up using dozens of cloud services independently and without a coherent architecture, resulting in complexity, defects, and poor transparency into usage.

Instead, companies need dedicated product teams staffed with experienced cloud architects and engineers to create and manage simple, scalable, and reusable cloud products for application teams. The constraints imposed by aligning around cloud products can help to ensure that the business uses the correct capabilities in the correct way.

Once the product team has an inventory of cloud products, it can encourage application teams to use them to fast-track their cloud migration. The aptitude and interest of each application team, however, will influence how quickly and easily it adopts the new cloud products. Teams with little cloud experience, skill, or interest will need step-by-step assistance, while others will be able to move quickly with little guidance. The product team, therefore, needs to have an operating model that can support varying levels of application-team involvement in the cloud-migration journey.

One effective route offers three levels of engagement (exhibit):

  • Concierge level: The engagement team builds everything needed by an application team.
  • Embedded level: Architects from the central cloud team are embedded into application teams to help them build the right application patterns.
  • Partner level: A partner team builds and runs its own isolation zone using the core capabilities from the base foundation, such as networking, logging, and identity.
An effective cloud engagement model has three levels of engagement.

By establishing the cloud products, the teams to support them, and the model by which application teams can engage with product teams, the business has the mechanisms in place to thoughtfully scale its cloud strategy.
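The three engagement levels can be modeled as a simple rule of thumb for matching an application team to a level. The thresholds below are hypothetical, purely to illustrate the idea that team maturity drives the engagement model.

```python
from enum import Enum

class Engagement(Enum):
    CONCIERGE = "concierge"  # product team builds everything for the app team
    EMBEDDED = "embedded"    # central architects embedded in the app team
    PARTNER = "partner"      # app team runs its own isolation zone

def pick_engagement(cloud_experience_years, wants_autonomy):
    """Hypothetical rule of thumb for matching a team to an engagement level."""
    if cloud_experience_years < 1:
        return Engagement.CONCIERGE
    if wants_autonomy and cloud_experience_years >= 3:
        return Engagement.PARTNER
    return Engagement.EMBEDDED
```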

6. Application teams should not reinvent how to design and deploy applications in cloud

When organizations give free rein to application teams to migrate applications to the cloud provider, the result is a menagerie of disparate cloud capabilities and configurations that makes ongoing maintenance of the entire inventory difficult.

Instead, organizations should treat the deployment capabilities of an application as a stand-alone product, solving common problems once using application patterns. Application patterns can be responsible for configuring shared resources, standardizing deployment pipelines, and ensuring quality and security compliance. The number of patterns needed to support the inventory of applications can be small, thereby maximizing ROI. For example, one large bank successfully used just ten application patterns to satisfy 95 percent of its use cases.
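A pattern registry of this kind might look like the sketch below. The pattern names and configuration fields are invented for illustration; the point is that every application deploys from one of a few vetted patterns instead of bespoke configuration.

```python
# Sketch: a small registry of reusable application patterns. Each app is
# matched to an approved pattern instead of inventing its own deployment.
PATTERNS = {
    "stateless-web": {"pipeline": "standard-ci", "network": "public"},
    "batch-job": {"pipeline": "scheduled-ci", "network": "private"},
    "internal-api": {"pipeline": "standard-ci", "network": "private"},
}

def deploy_config(pattern_name):
    """Return the vetted configuration; refuse anything off-pattern."""
    try:
        return PATTERNS[pattern_name]
    except KeyError:
        raise ValueError(f"no approved pattern: {pattern_name}")
```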


7. Provide targeted change management by using isolation zones

Isolation zones are cloud environments where applications live. In an effort to accelerate cloud migration, CSPs and systems integrators usually start with a single isolation zone to host all applications. That’s a high-risk approach, because configuration changes to support one application can unintentionally affect others. Going to the other extreme—one isolation zone for each application—prevents the efficient deployment of configuration changes, requiring the same work to be carried out across many isolation zones.

As a rule of thumb, a company should have from five to 100 isolation zones, depending on the size of the business and how it answers the following questions:

  • Does the application face the internet?
  • What level of resiliency is required?
  • What is the risk-assurance level or security posture required for applications running in the zone?
  • Which business unit has decision rights on how the zone is changed for legal purposes?
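The four questions above amount to a zone key: applications with identical answers share a zone, which keeps the total count in the tens rather than the hundreds. A minimal sketch, with invented answer values:

```python
def isolation_zone(internet_facing, resiliency, risk_level, business_unit):
    """Hypothetical zone key: apps with identical answers to the four
    questions share an isolation zone, bounding the number of zones."""
    return (internet_facing, resiliency, risk_level, business_unit)

# Two apps with the same answers land in the same zone...
zone_a = isolation_zone(True, "high", "pci", "payments")
zone_b = isolation_zone(True, "high", "pci", "payments")
# ...while different answers produce a different zone.
zone_c = isolation_zone(False, "low", "standard", "hr")
```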

8. Build base capabilities once to use across every CSP

Most companies will be on multiple clouds. The mix often breaks down to about 60 percent of workloads in one, 30 percent in another, and the rest in a third. Rather than building the same base capabilities (for example, network connectivity and routing, identity services, logging, and monitoring) across all the CSPs, companies should build them once and reuse the capabilities across all isolation zones, even those that reside in a different CSP from the base.
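Building base capabilities once and binding them per provider might be sketched as follows. The CSP names are placeholders; the design point is one definition of the base, reused in every provider's isolation zones.

```python
# Sketch: base capabilities defined once and parameterized per CSP,
# rather than re-implemented separately for each cloud. Names illustrative.
BASE_CAPABILITIES = ["networking", "identity", "logging", "monitoring"]

def bind_base(csp):
    """Bind the single base definition into one provider's isolation zones."""
    return {cap: f"{cap}@{csp}" for cap in BASE_CAPABILITIES}

primary = bind_base("csp-a")
secondary = bind_base("csp-b")
```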

9. Speed integration of acquisitions by putting in place another instance of the base foundation

During an acquisition, merging IT assets is difficult and time consuming. The cloud can speed the merger process and ease its complexity if the acquiring company creates an “integration-base foundation” that can run the assets of the company being acquired. This enables the IAM, security, network, and compliance policies already in place at the acquired company to continue, allowing its existing workloads to continue to function as designed. Over time, those workloads can be migrated from the integration base to the main base at a measured and predictable pace.

Using this approach, companies can efficiently operate their core cloud estate as well as the acquisition’s using the same software with a different configuration. This typically can reduce integration time from two to three years to closer to three to nine months.

10. Make preventative and automated cloud security and compliance the cornerstone

All software components and systems must go through a security layer. Traditional cybersecurity mechanisms are dependent on human oversight and review, which cannot match the tempo required to capture the cloud’s full benefits of agility and speed. For this reason, companies must adopt new security architectures and processes to protect their cloud workloads.

Security as code (SaC) has been the most effective approach to securing cloud workloads with speed and agility. The SaC approach defines cybersecurity policies and standards programmatically so they can be referenced automatically in the configuration scripts used to provision cloud systems. Systems running in the cloud can be evaluated against security policies to prevent changes that move the system out of compliance.
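The SaC idea can be reduced to a toy sketch: policies expressed as data, evaluated automatically against a proposed configuration before it is applied. The policy names and configuration keys are invented for illustration.

```python
# A minimal security-as-code sketch: policies are data, checked
# automatically against a proposed configuration before provisioning.
POLICIES = [
    ("no-public-buckets", lambda cfg: not cfg.get("bucket_public", False)),
    ("encryption-at-rest", lambda cfg: cfg.get("encrypted", False)),
]

def evaluate(cfg):
    """Return the names of violated policies; empty means the change may proceed."""
    return [name for name, check in POLICIES if not check(cfg)]
```

A change that exposes a public bucket is caught by the pipeline itself, with no human review in the loop.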


“Start small and grow” is a viable cloud strategy only if the fundamental building blocks are created from the start. Companies need to design and build their cloud foundation to provide a reusable, scalable platform that supports all the IT workloads destined for the cloud. This approach unlocks the benefits that the cloud offers and ultimately captures its full value.

Article link: https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/cloud-foundations-ten-commandments-for-faster-and-more-profitable-cloud-migrations?

ABOUT THE AUTHOR(S)

Aaron Bawcom is a distinguished cloud architect in McKinsey’s Atlanta office, where Sebastian Becerra, Beau Bennett, and Bill Gregg are principal cloud architects.

Air Force software factory looks to unleash ‘chaos’ on civilian IT shops – Nextgov

Posted by timmreardon on 04/24/2022
Posted in: Uncategorized. Leave a comment

By LAUREN C. WILLIAMS | APRIL 1, 2022

The Kessel Run group is currently developing a playbook that would make it easier for organizations across the federal government to adopt engineering and security best practices.

The Air Force’s Kessel Run software factory wants to share its recipes for success with the whole federal government when it comes to engineering and security best practices. 

“We’re talking to other software factories and part of our initiative is to release all these templates and playbooks that not just [Defense Department] entities can use, right, from a software factory perspective or just a program office perspective, but any agency can just grab them off our site and say, hey, this is how Kessel Run does chaos engineering, this is how we do performance engineering,” said Omar Marrero, Kessel Run’s deputy test chief and the chaos and performance tech lead. 

Marrero told FCW that Kessel Run, which is part of the Air Force Life Cycle Management Center and focuses on software development and acquisitions, is routinely looking for partnerships, consulting with organizations that are looking to “start their own chaos engineering journey” by sharing Kessel Run’s templates, playbooks, or tech stacks.

But what is chaos engineering and why does it matter?

The goal, he said, is to bring industry best practices across engineering, security, and performance to an organization “like a vaccine that you’re injecting into a system” to bolster preparedness.

That means working to prove or test assumptions and develop a process to address the worst case scenarios: “just think of it as pre-emptive practicing where you can practice fire drills: like we know what happens if all sudden we have a surge, does it happen, did we get an alert to put resources on it, that kind of stuff.”
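A chaos "fire drill" of this kind can be sketched in a few lines: deliberately fail part of a system, then verify that it keeps serving and that monitoring raised an alert. This is a toy model, not Kessel Run's actual tooling; the class and method names are invented.

```python
import random

# Toy chaos-engineering drill: kill one replica and verify the system
# degrades gracefully and raises an alert. Names are illustrative.
class Service:
    def __init__(self, replicas=3):
        self.up = [True] * replicas
        self.alerts = []

    def kill_random_replica(self):
        victim = random.randrange(len(self.up))
        self.up[victim] = False
        self.alerts.append(f"replica {victim} down")  # monitoring hook fires

    def healthy(self):
        return any(self.up)  # degraded, but still serving traffic

svc = Service()
svc.kill_random_replica()  # the pre-emptive "fire drill"
```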

It might sound routine, but in practice, the testing concept has already proven beneficial through a partnership with the General Services Administration, which recently teamed up with Kessel Run to make sure the Cloud.gov service could handle a surge in users. 

Lindsay Young, the acting director of Cloud.gov, told FCW that the partnership was a “fantastic opportunity.” 

“It was so much fun,” Young said, “really spending the time to understand each other’s setups and things like that, and then figure out bigger and bigger ways to make trouble and see if we could stand up to it.”

Young said the aim was to ensure Cloud.gov users could get a seamless experience and that testing out extreme scenarios like scaling ability was “invaluable” because “you don’t know you can do something until you can prove you can do something.”

The next challenge, Young said, is finding and then partnering with other federal civilian agencies that have similar problems. Kessel Run will be doing the same as it is currently partnering with other software factories across the Defense Department, including the Navy’s Black Pearl, but is also planning to release its playbooks to broaden its impact.

The Air Force’s software factory darling has long been heralded as a Defense Department success story and a model for addressing emerging technology needs, with plans to expand its influence in the Air Force and beyond.

But the release of the guidebooks, which are being drafted, could speed that along. Marrero said there isn’t a hard release date and many of the materials being drafted are also being worked on with other software factories, including the Army. The newer concept of security chaos will also be included.

“That’s what we do. We just share what we learn. And if another opportunity comes like this one where we’re going to collaborate again, we’ll jump on that,” he said. “And hopefully, we can spread this chaos engineering thing to the rest of the government and that initiative helps us deliver more resilient stuff.”

Article link: https://www.nextgov.com/it-modernization/2022/04/air-force-software-factory-looks-unleash-chaos-civilian-it-shops/363927/

Cybersecurity Pros Signal Regulatory Challenge for Securing Industrial Control Systems – Nextgov

Posted by timmreardon on 04/24/2022
Posted in: Uncategorized. Leave a comment

By MARIAM BAKSH | APRIL 22, 2022

A studious adversary may be hellbent on destruction, and a comprehensive approach is needed to successfully govern the protection of critical infrastructure, specialists say.

The discovery of a malware tool targeting the operational technology in critical infrastructure like power plants and water treatment facilities is highlighting issues policymakers are grappling with in efforts to establish a regulatory regime for cybersecurity.

The tool enables the adversary to move laterally across industrial control system environments by effectively targeting their crucial programmable logic controllers.

“There are only a few places that can build something like this,” said Bryson Bort, CEO and founder of cybersecurity firm Scythe. “This is not the kind of thing that the script kiddie—the amateur—can all of a sudden gin up and be like, ‘look, I’m doing things against PLCs.’ These are very complicated machines.”

Bort and other fellows of the Atlantic Council’s Cyber Statecraft Initiative hosted a webinar Friday on the new tool, which is built to commoditize cyberattacks on industrial control systems with a modular design that would make it more accessible to less skilled adversaries as well.  

“These are not protocols you can just go up, and, like, do against, like [web application penetration testing,]” Bort said. “So the complexity of this cannot be [overstated], the comprehensive nature of this particular malware cannot be [overstated]. This thing, I think calling it a shopping mall doesn’t quite capture it right. This was Mall of America. This thing had almost everything in it and the ability to add even more.”

Bort said the design of the tool suggests a switch in the mindset of the adversary—likely the Kremlin in the estimation of cyber intelligence analysts, although U.S. officials have not attributed the tool’s origin.

He connected the tool’s emergence to “what we’re seeing here in phase three on the ground in the Ukraine, which is the Russians seem to be going almost with a scorched earth approach. They are killing civilians, they are destroying the infrastructure. And that’s a complete, almost, 180 from what we saw within the first few days of the war where it looked like … they thought they were gonna kind of stroll into the country, take everything. And you don’t want to destroy what you’re about to take. And now it seems to be just to cause destruction.”

In response to a question about the role of global vendors to the industrial control systems community, and potentially limiting their production to trustworthy partner nations, Bort argued, if there is a need for regulations, the focus should be on the owners and operators of the critical infrastructure.

“This isn’t a vendor problem,” he said. “This is about ICS asset owners, and asset owners are working closely with their respective governments … and different countries of course, have different levels of regulation or partial regulation. We’re in a kind of partially regulated area with likely more regulation coming in these sectors. But I would say it’s the asset owners, not the vendors that I’d be looking to.”

But connected industrial control system environments are complicated, with many different vendors in the supply chain, including commercial information technologies like cloud services, which adversaries are increasingly targeting for their potential to create an exponential effect.

“Security matters on all of these sides,” Trey Herr, director of the Atlantic Council initiative, told Nextgov. “The vendors are the point of greatest regulatory leverage so addressing cybersecurity at the design stage can have the widest impact but with least understanding of the specific environments in which they’ll be used. Asset owners have the best picture about how they use this technology and security matters here in how they deploy and manage the security of these devices. Vendors might be OT focused or IT focused, like cloud vendors, so regulators need to keep focused on both communities.” 

That is something lawmakers are currently deliberating on with the goal of introducing legislation this summer.  But Herr said more of the community’s attention is currently on the asset-owner incorporation level than on the IT supply-chain elements that are also involved.

“We have a lot more effort and energy on the asset owner level with the Sector Risk Management Agencies at the moment than other parties, especially the IT vendors,” he said.

Article link: https://www.nextgov.com/cybersecurity/2022/04/cybersecurity-pros-signal-regulatory-challenge-securing-industrial-control-systems/366023/

Through the Times: 5 Waves of AI Computing – NVIDIA

Posted by timmreardon on 04/24/2022
Posted in: Uncategorized. Leave a comment

April 18, 2022 by TIFFANY YEUNG

AI is the most impactful technological advance of our time, transforming every aspect of the global economy.

Five waves of growth have carried AI from inception to ubiquity: the big bang of AI, cloud services, enterprise AI, edge AI and autonomy.

Like other technical breakthroughs — such as industrial machinery, transistors, the internet and mobile computing — AI was conceived in academia and commercialized in successive phases. It first took hold in large, well-resourced organizations before spreading over years to smaller organizations, professionals and consumers.

Since the term “AI” was coined at Dartmouth College in 1956, people in this field have explored many approaches to solving the world’s toughest problems. One of the most popular, deep learning, exploits data structures called neural networks that loosely mirror how human brain cells operate.

Data scientists using deep learning configure a neural network with the parameters that work best for a particular problem, and then feed the AI up to millions of sample questions and answers. With each sample answer, the AI adjusts its neural weights until it can answer the questions on its own — even new ones it hasn’t seen before.
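The loop described above can be illustrated with a toy single-neuron model learning logical OR: after each sample answer, the weights are nudged by the error until the model answers every question correctly. This is the simplest possible sketch of the idea, not a deep network.

```python
# Toy version of the training loop: weights are adjusted after each
# (question, answer) sample until the model answers on its own.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

w = [0.0, 0.0]  # neural weights
b = 0.0         # bias

for _ in range(10):  # epochs over the sample set
    for (x1, x2), target in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred        # how wrong was the answer?
        w[0] += 0.5 * err * x1     # nudge each weight by the error
        w[1] += 0.5 * err * x2
        b += 0.5 * err

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Deep learning scales this same adjust-by-error idea to millions of weights and samples, using gradients rather than a simple threshold rule.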

Learn more about the five waves of modern AI, determine which wave your organization is in and gear up for what comes next.

The Big Bang of AI 

The first wave of AI computing was its “big bang,” which started with the discovery of deep neural networks.

Three fundamental factors fueled this explosion: academic breakthroughs in deep learning, the widespread availability of big data, and the novel application of GPUs to accelerate deep learning development and training.

Where computer scientists used to specify each AI instruction, algorithms can now write other algorithms, software can write software, and computers can learn on their own. This marked the beginning of the machine learning era.

And over the last decade, deep learning has migrated from academia to commerce, carried by the next four waves of growth.

The Cloud

The first businesses to use AI were large tech companies with the scientific know-how and computing resources to adapt neural networks to benefit their customers. They did so using the cloud — the second wave of AI computing.

Google, for example, applied deep learning to natural language processing to offer Google Translate. Facebook applied AI to identify consumer goods from images to make them shoppable. Through these types of cloud applications, Google, Amazon and Microsoft introduced many of AI’s first real-world applications.

Soon, these large tech companies created infrastructure-as-a-service platforms, unleashing the power of public clouds for enterprises and startups alike, and driving AI adoption further.

Now, companies of all sizes rely on the cloud to get started with AI quickly and affordably. It offers an easy onramp for companies to deploy AI, allowing them to focus on developing and training models, instead of building underlying infrastructure.

Enterprise AI

As tools are developed to make AI more accessible, large enterprises are embracing the technology to improve the quality, safety and efficiency of their workflows — and leading the third wave of AI computing. Data scientists in finance, healthcare, environmental services, retail, entertainment and other industries started training neural networks in their own data centers or the cloud.

For example, conversational AI chatbots enhance call centers, and fraud-detection AI monitors unusual activity in online marketplaces. Computer vision acts as a virtual assistant for mechanics, doctors and pilots, providing them with information to make more accurate decisions.

While this wave of AI computing has widespread applications and garners headlines each week, it’s just getting started. Companies are investing heavily in data scientists who can prepare data to train models and machine learning engineers who can create and automate AI training and deployment pipelines.

The Edge

The fourth wave pushes AI from the cloud or data center to the edge, to places like factories, hospitals, airports, stores, restaurants and power grids. The advent of 5G is furthering the ability for edge computing devices to be deployed and managed anywhere. It’s created an explosive opportunity for AI to transform workplaces and for enterprises to realize the value of data from their end users.

With the adoption of IoT devices and advances in compute infrastructure, the proliferation of big data allows enterprises to create and train AI models to be deployed at the edge, where end users are located.

This wave requires machine learning engineers and data scientists to consider the design constraints of AI inference at the edge. Such limits include connectivity, storage, battery power, compute power and physical access for maintenance. Designs must also align with the needs of business owners, IT teams and security operations to better ensure the success of deployments.

Edge AI is also in its early days, but already used across many industries. Computer vision monitors factory floors for safety infractions, scans medical images for anomalous growths and drives cars safely down the freeway. The potential for new applications is limitless.

Autonomy 

The fifth wave of AI will be the rise of autonomy — the evolution of AI to the point where AI navigates mobile machinery without human intervention. Cars, trucks, ships, planes, drones and other robots will operate without human piloting. For this to unfold, the network connectivity of 5G, the power of accelerated computing, and continued innovation in the capabilities of neural networks are necessary.

Autonomous AI is making headway, driven by the pandemic, global supply chain constraints and the related need for automation for efficiency in business processes. 

Incorporating domains of engineering beyond deep learning, autonomous AI requires machine learning engineers to collaborate with robotics engineers. Together, they work to fulfill the four pillars of a robotics system workflow: collecting and generating ground-truth data, creating the AI model, simulating with a digital twin and operating the robot in the real world.

For robotics, simulation capabilities are especially important in modeling and testing all possible corner cases to mitigate the safety risks of deploying robots in the real world.

Autonomous machines also face novel challenges around deployment, management and security that require coordination across teams in engineering, operations, manufacturing, networking, security and compliance.

Getting Started With AI 

Starting with the big bang of AI, the industry has grown quickly and spawned further waves of computing, including cloud services, enterprise AI, edge AI and autonomous machines. These advancements are carrying AI from laboratories to living rooms, improving businesses and the daily lives of consumers.

NVIDIA has spent decades building the computational products and software necessary to enable the AI ecosystem to drive these waves of growth. In addition to developing and implementing AI into the company, NVIDIA has helped countless enterprises, startups, factories, healthcare firms and more to adopt, implement and scale their own AI initiatives.

Whether starting an initial AI project, transitioning a team into AI workloads or looking at infrastructure blueprints and expansions, set your AI projects up for success.

Article link: https://blogs.nvidia.com/blog/2022/04/18/five-waves-ai-computing/?

Inside the plan to fix America’s never-ending cybersecurity failures – MIT Tech Review

Posted by timmreardon on 04/23/2022
Posted in: Uncategorized. Leave a comment

The specter of Russian hackers and an overreliance on voluntary cooperation from the private sector means officials are finally prepared to get tough.

By Patrick Howell O’Neill

March 18, 2022

The 2021 hack of Colonial Pipeline, the biggest fuel pipeline in the United States, ended with thousands of panicked Americans hoarding gas and a fuel shortage across the eastern seaboard. Basic cybersecurity failures let the hackers in, and then the company made the unilateral decision to pay a $5 million ransom and shut down much of the east coast’s fuel supply without consulting the US government until it was time to clean up the mess.

From across the Atlantic, Ciaran Martin looked on in baffled amazement.

“The brutal assessment of the Colonial hack is that the company made decisions off of narrow commercial self-interest, everything else is for the federal government to pick up,” says Martin, previously the United Kingdom’s top cybersecurity official.

Now some of the US’s top cybersecurity officials—including the White House’s current Cyber director—say the time has come for a stronger government role and regulation in cybersecurity so that fiascos like Colonial don’t happen again. 

The change in tack comes just as the war in Ukraine, and the heightened threat of new cyberattacks from Russia, is forcing the White House to rethink how it keeps the nation safe.

“We’re at an inflection point,” Chris Inglis, the White House’s national cyber director and Biden’s top advisor on cybersecurity, tells MIT Technology Review in his first interview since Russia’s invasion of Ukraine. “When critical functions that serve the needs of society are at issue, some things are just not discretionary.”

The White House’s new cybersecurity strategy consists of stronger government oversight, rules mandating that organizations meet minimum cybersecurity standards, closer partnerships with the private sector, a move away from the current market-first approach, and enforcement to make sure any new rules are followed. It will take its cue from some of the nation’s most famous regulatory landmarks, such as the Clean Air Act or the formation of the Food and Drug Administration.

With looming threats from Russian hackers, the FCC is planning for the prospect of Russians hijacking internet traffic, a tactic officials have seen Moscow employ in the past. A new FCC initiative, announced March 11, aims to investigate whether US telecom companies are doing enough to secure their networks against the threat. It is a real test for the agency, however, because it lacks the power to force companies to comply; the FCC is relying on the prospect of a national security crisis to get them to toe the line.

For many officials, this almost total reliance on the goodwill of the market to keep citizens safe cannot continue. 

“The purely voluntary approach [to cybersecurity] simply has not gotten us to where we need to be, despite decades of effort,” says Suzanne Spaulding, previously a senior Obama administration cybersecurity official. “Externalities have long justified regulation and mandates such as with pollution and highway safety.”

Crucially, the White House’s top officials concur. “I’m a strong fan of what Suzanne says and I agree with her,” says Inglis.

Without a dramatic change, advocates argue, history will repeat itself.

“It’s no secret that companies don’t want strong cybersecurity rules,” says Senator Ron Wyden, one of congress’s loudest voices on cybersecurity and privacy issues. “That’s how our country got where it is on cybersecurity. So I’m not going to pretend that changing the status quo is going to be easy. But the alternative is to let hackers from Russia and China and even North Korea run wild in critical systems all across America. I sincerely hope the next hack doesn’t cause more damage than the Colonial Pipeline breach, but unless Congress gets serious it’s almost inevitable.”

A shift won’t be easy. Many experts, both inside and outside government, worry that poorly written regulation could do more harm than good and some officials have misgivings about regulators’ lack of cybersecurity expertise. For example, the Transportation Security Administration’s recent cyber regulations on pipelines were criticized loudly by some as “screwed up” due to what several critics say are inflexible, inaccurate rules that cause more problems than they solve. The detractors point to it as the result of a regulator with a huge remit but not nearly enough time, resources, and expert staff to do the job right.  

“TSA maintains regular and frequent contact with owners and operators, and many of these pipeline companies appreciate the significance and pace of this public-private endeavor for improvements in protection and resilience against future cyberattacks,” says R. Carter Langston, a TSA spokesperson, who disputes critics of the pipeline regulation.

Glenn Gerstell, who was general counsel at the National Security Agency until 2020, argues that the current scattershot approach, a host of different regulators each working on their own specific sectors, doesn’t work, and that the US needs one central cybersecurity authority with the expertise and resources to scale across different critical industries.

Pushback against the pipeline regulations signals how difficult the process might be. But despite that, there is a growing consensus that the status quo—a litany of security failures and perverse incentives—is unsustainable.

Landmark law

The Colonial Pipeline incident proved what many cyber experts already know: most attacks are the result of opportunistic hackers exploiting years-old problems that companies fail to invest in and solve. 


“The good news is that we actually know how to solve these problems,” says Glenn Gerstell. “We can fix cybersecurity. It may be expensive and difficult but we know how to do it. This is not a technology problem.”

Another major recent cyberattack proves the point again: SolarWinds, a Russian hacking campaign against the US government and major companies, could have been neutralized if the victims had followed well-known cybersecurity standards.

“There’s a tendency to hype the capabilities of the hackers responsible for major cybersecurity incidents, practically to the level of a natural disaster or other so-called acts of God,” Wyden says. “That conveniently absolves the hacked organizations, their leaders, and government agencies of any responsibility. But once the facts come out, the public has seen repeatedly that the hackers often get their initial foothold because the organization failed to keep up with patches or correctly configure their firewalls.”

It’s clear to the White House that many businesses do not and will not invest enough in cybersecurity on their own. In the past six months, the administration has enacted new cybersecurity rules for banks, pipelines, rail systems, airlines, and airports. Biden signed a cybersecurity executive order last year to bolster federal cybersecurity and impose security standards on any company making sales to the government. Changing the private sector has always been the more challenging task and, arguably, the more important one. The vast majority of critical infrastructure and technology systems belong to the private sector. 

Most of the new rules have amounted to very basic requirements and a light government touch—yet they’ve still received pushback from the companies. Even so, it’s clear that more is coming. 

“There are three major things that are needed to fix the ongoing sorry state of US cybersecurity,” says Wyden. “Mandatory minimum cybersecurity standards enforced by regulators; mandatory cybersecurity audits, performed by independent auditors who are not picked by the companies they are auditing, with the results delivered to regulators; and steep fines, including jail time for senior execs, when a failure to practice basic cyber hygiene results in a breach.”

The new mandatory incident reporting regulation, which became law on Tuesday, is seen as a first step. The law requires private companies to quickly share information about threats and breaches that they previously kept secret, exactly the kind of information that can help build a stronger collective defense.

Previous attempts at regulation have failed, but the latest push for a new reporting law gained steam thanks to key support from leaders of corporate giants, such as Mandiant CEO Kevin Mandia and Microsoft president Brad Smith. It’s a sign that private sector leaders now see regulation as both inevitable and, in key areas, beneficial.

Inglis emphasizes that crafting and enforcing new rules will require close collaboration at every step between government and the private companies. And even from inside the private sector, there is agreement that change is needed.

“We’ve tried purely voluntary for a long time now,” says Michael Daniel, who leads the Cyber Threat Alliance, a collection of tech companies sharing cyber threat information to form a better collective defense. “It’s not going as fast or as well as we need.”

The view from across the Atlantic

From the White House, Inglis argues that the United States has fallen behind its allies. He points to the UK’s National Cyber Security Centre (NCSC) as a pioneering government cybersecurity agency that the US needs to learn from. Ciaran Martin, the founding CEO of the NCSC, views the American approach to cyber with confused amazement.

“If a British energy company had done to the British government what Colonial did to the US government, we’d have torn strips off them verbally at the highest level,” he says. “I’d have had the prime minister calling the chairman to say, ‘What the fuck do you think you’re doing paying a ransom and switching off this pipeline without telling us?’”


The UK’s cyber regulations require banks to be resilient against both a global financial shock and cyber stresses. The UK has also imposed stronger regulation on telecoms after a major British telecom was “completely owned” by Russian hackers, says Martin, who notes that the new security rules would make that telecom’s previous security failures illegal.

On the other side of the Atlantic, the situation is different. The Federal Communications Commission, which oversees telecommunications and broadband in the US, had its regulatory power significantly rolled back during the Trump presidency and relies mostly on voluntary cooperation from internet giants. 

The UK’s approach, tackling specific industries one at a time by building on regulators’ existing powers rather than passing a single centralized law that covers everything, is similar to how the Biden White House strategy on cyber will work.

“We have to exhaust the [regulation] authorities we already have,” Inglis says. 

For Wyden, the White House strategy signals a much needed change.

“Federal regulators, across the board, have been afraid to use the authority they have or to ask Congress for new authorities to regulate industry cybersecurity practices,” he says. “It’s no wonder that so many industries have atrocious cybersecurity. Their regulators have essentially let the companies regulate themselves.”

Why the cybersecurity market fails 

There are three fundamental reasons why the cybersecurity market, worth hundreds of billions of dollars and growing globally, falls short.

First, companies have not figured out how cybersecurity makes them money, Daniel says. The market fails at measuring cybersecurity and, more importantly, often cannot connect it to a company’s bottom line, so companies often can’t justify spending the necessary money.

The second reason is secrecy. Companies have not had to report hacks, so crucial data about big hacks has been kept locked away to protect companies from bad press, lawsuits, and lawmakers. 

Third is the problem of scale. The price that the government and society paid for the Colonial hack went well beyond the costs borne by the company itself. Just as with pollution, “the costs don’t show up on your bottom line as a business,” Spaulding says, so the market incentives to fix the problems are weak.

Advocates for reform say that a stronger government hand can change the equation on all of that, exactly the way reform has in dozens of industries over the last century.

Gerstell sees pressure building slowly to do something different than the status quo.

“I have never seen such near unanimity and awareness ever before,” says Gerstell. “This looks and feels different. Whether it’s enough to really push change is not yet clear. But the temperature is increasing.”

Inglis points to the nearly $2 billion in cybersecurity money from Biden’s 2021 $1 trillion infrastructure bill as a “once in a generation opportunity” for the government to step up on cybersecurity and privacy.

“We have to make sure we don’t overlook the stunning opportunities we have to invest in the resilience and robustness of digital infrastructure,” Inglis argues. “We have to ask, what are the systemically critical functions that our society depends on? Will market forces alone attend to that? And when that falls short, how do we determine what we should do? That’s the course ahead for us. It doesn’t need to be a process that lasts years. We can do this with a sense of urgency.”

Article link: https://www.technologyreview.com/2022/03/18/1047395/inside-the-plan-to-fix-americas-never-ending-cybersecurity-failures/

The article has been updated to clarify that Ciaran Martin was an official, not a minister; and to add TSA’s comment on the pipeline regulation.
