healthcarereimagined

Envisioning healthcare for the 21st century


Watchdog: Data Officer Council Has Made Progress on Policy Efforts, But Work Remains – Nextgov

Posted by timmreardon on 01/12/2023
Posted in: Uncategorized.

By Kirsten Errick, December 16, 2022

The Government Accountability Office found that the Council must follow its new detailed plan to fully meet statutory requirements.

The Government Accountability Office found that the Chief Data Officer Council has made good strides toward fulfilling its statutory requirements to further evidence-based policymaking, but it needs to take additional measures—such as following its newly developed plan, carrying out performance management activities, and assessing its progress—to fully satisfy those requirements.

Agencies need evidence—such as performance information, program evaluations, and data—to see whether their programs are having the intended results. The Chief Data Officer Council is supposed to help with this process.

The Council—composed of agency CDOs and other officials and operating under the Office of Management and Budget—was created to improve how the government collects and uses data, work that agencies can build upon. The Council was established under the Foundations for Evidence-Based Policymaking Act of 2018—or the Evidence Act—which requires agencies to develop concrete data and information to support their policymaking. The Council is set to terminate by January 2025.

While early Council activities focused on its setup and organization, it is now working toward meeting the requirements outlined in the law. According to Thursday’s GAO report, the Council has taken at least one action related to its five requirements that detail how it should contribute to government efforts to create and use evidence. These requirements include:

  • “Establish governmentwide best practices for the use, protection, dissemination and generation of data.” 
  • “Promote and encourage data sharing agreements between agencies.”  
  • “Identify ways in which agencies can improve upon the production of evidence for use in policymaking.”
  • “Consult with the public and engage private users of government data and other stakeholders on how to improve access to federal data assets.”
  • “Identify and evaluate new technology solutions for improving the collection and use of data.”

In alignment with its first requirement, in April the Council’s Data Inventory working group issued a report recommending practices to help agencies develop inventories of their data assets in a way that gives staff and the public a clear, comprehensive understanding of those assets and how to request access to them. That working group has also identified data-sharing needs across the government and related challenges. For example, because data-sharing agreements take a long time to complete and are often drafted one at a time, the working group recommended that agencies build templates to streamline drafting these agreements.

GAO added that, in June, it told the Council that it did not have a fully formed plan and performance management activities in place to meet the statutory requirements. That deficiency could hinder the Council’s future progress because, thus far, the Council had primarily been working on what it deemed most important. However, the Council provided GAO with updated planning documents for 2022 and 2023 describing how it would work toward its requirements. GAO stated that this will help the Council as it works on its goals and requirements and will help determine whether additional actions are required.

GAO summarized sample work of agencies that use evidence to support their decision-making. 

The Council, OMB, and six selected agencies had no substantive comments, according to GAO. The Energy and Treasury Departments gave technical comments, which GAO incorporated where appropriate.

Article link: https://www.nextgov.com/analytics-data/2022/12/watchdog-data-officer-council-has-made-progress-policy-efforts-work-remains/381005/

Today’s Most Critical Workplace Challenges Are About Systems – HBR

Posted by timmreardon on 01/10/2023
Posted in: Uncategorized.

by Ludmila N. Praslova

January 10, 2023

Summary

Critical workplace issues — e.g., the problematic quality of leadership within organizations, the threats to employee mental health and well-being, and the lack of belonging and inclusion — are primarily attributable to systemic factors embedded in organizational cultures and processes. And yet, many of these and other issues are still mainly addressed on the individual level. Why do organizations keep investing in remedies that don’t work and have little chance of working? An automatic bias in how we perceive and explain the world is a likely culprit. The author explains how that “superbias” manifests — and what leaders can do to combat it in their organizations.

W. Edwards Deming, a forward-thinking American who helped engineer the Japanese economic miracle and was the father of the continuous quality improvement philosophy, wrote that 94% of issues in the workplace are systemic. Only 6% are attributable to individual-level, idiosyncratic factors. Improvements, therefore, should also focus on systems — not individuals.

Recent research supports Deming’s thinking. Systemic factors embedded in organizational cultures and processes are the primary cause of critical workplace issues — for example, leaders failing to execute strategy within organizations, threats to employee mental health and well-being, and a lack of belonging and inclusion.

And yet, many of these and other issues are still mainly addressed on the individual level. Here are just a few examples of those issues and their individualized interventions:

  • Mental health apps, resilience training, or lunch-break yoga are often seen as solutions to employee stress, burnout, and moral injury.
  • Systemic issues that interfere with performance (e.g., operational bottlenecks and systematic understaffing) are ignored, while individual employees are invasively monitored and “squeezed.”
  • Chronic lack of diversity and inclusion is “addressed” by advancing a token person or two.
  • Bullying problems in toxic environments are tackled with assertiveness training for targets and self-awareness coaching for bullies.
  • Lackluster leadership performance is expected to improve after attending an off-the-shelf training program.
  • Training is thrown at problems that can’t be solved with training — for example, ineffective decision-making processes.

Individual-level interventions do have value; they just don’t work long enough or well enough without improving organizational-level and management practices. For example, how effective is a five-minute stress-relief meditation for an employee who works in a chronically understaffed department with an erratically behaving boss who is in turn harassed by their boss? About as effective as rinsing the salt off a pickle and then putting it back into the brine.

Why do organizations keep investing in remedies that don’t work and have little chance of working? An automatic bias in how we perceive and explain the world is a likely culprit. Here’s how that “superbias” manifests — and what leaders can do to combat it in their organizations.

The Superbias: How Nurture, Nature, and Stressful Situations Prompt Mental Shortcuts

A common glitch in human cognition might be partially responsible for a tendency to pay more attention to individual rather than systemic factors. When we perceive people, it shows up as fundamental attribution error (FAE), or a dispositional bias: a cognitive bias that leads us to explain others’ behavior largely by their disposition (i.e., their personality, ability, or character) while ignoring situational and contextual factors (e.g., a worker is irresponsible rather than overstretched).

Similarly, the group attribution error (GAE) may bias our perceptions of social groups that are not our own. As a result, we attribute these group members’ behavior to internal characteristics rather than circumstances. For example, many assume that women don’t negotiate because they lack negotiation skills, while in fact many women may hold back because they’re worried about negotiation backlash — a social penalty for violating gendered norms of “niceness.”

FAE and GAE show how many of us are inclined to underestimate the influence of situations and systems — and together represent the dispositional superbias that causes us to favor individual interventions over systemic ones. (Note that “superbias” is a term I created to consolidate the varied terminology used in research to describe an array of concepts that overlap with FAE and GAE: dispositional bias or the correspondence bias in perceiving individuals, and the ultimate attribution error and the intergroup attribution bias in perceiving groups.)

So, where do these patterns of thinking come from? Extensive research points to cultural learning as a major influence on our minds. However, some cognitive differences might also be genetically determined. Both culture and neural wiring — nurture and nature — impact our cognition and perception, creating a tendency to think in a more individual-focused or context-focused way.

Importantly, a tendency does not mean we can only think in one way — rather, for most people, one way of thinking feels easier and more “natural,” like using a dominant hand. Some researchers describe perceiving others as a two-step process. The first step is easy and automatic, and the second is hard and deliberate. Stress and time pressure — or other aspects of our situation — make us more likely to take the first, easy step of attributing someone’s behavior to their disposition, but not the second, harder step of analyzing the context.

Here’s a deeper look at the three powerful influences that shape how we think:

Nurture

People from non-Western and collectivistic societies are less likely to attribute others’ behavior to individual characteristics (e.g., “she is kind”). Instead, they often focus on group-level influences (e.g., “she was with her friends”). Thinkers from collectivist cultures are generally more attuned to context when perceiving people or objects. For example, East Asian research participants saw background elements of visuals more readily and remembered them better than Westerners, who tended to focus on the central figure while ignoring — and forgetting — the context. Additionally, within the same society, people growing up in a lower socioeconomic class develop a more interdependent way of thinking, while wealthier people are more likely to focus on the individual sense of control.

Nature

Neural wiring also impacts thinking patterns. Individual-level neurological differences in brain activation are related to the likelihood of committing FAE. For example, some research suggests that autistic people are predisposed to think systemically, are less susceptible to bias, and make more accurate predictions of human behavior in groups.

Situations

Stressful environments make people more likely to take mental shortcuts. For example, when placed under stress, Western research participants evaluating a legal case were more likely to ignore mitigating circumstances, committing an FAE. In addition, for multicultural people, environmental cues can prime more- or less-contextual mindsets. This was shown in experiments with bilingual, bicultural people from Hong Kong, who switched between traditionally Chinese and traditionally Western thinking and perception patterns in response to cultural symbols (e.g., national flags). When primed with Western symbols, they were more likely to use dispositional attributions.

The evidence is clear that global diversity, socioeconomic and experiential diversity, and neurodiversity shape our minds, including the propensity toward FAE and GAE. Our thinking habits, in turn, shape workplaces — and can create environments that perpetuate one dominant mindset. But having one dominant mindset may lead to biased decisions, and it’s important to guard against this tendency.

Building the Systemic Thinking Advantage in Organizations

Overlooking contextual and systemic influences on performance and organizational effectiveness results in costly errors. On the individual level, for instance, ignoring the role of supply issues and understaffing may lead to unfairly blaming work delays on dedicated employees. These employees may in turn leave, worsening the understaffing problem.

On the group level, decision makers may assume that women need to “lean in” or develop confidence amid systemic sexism. However, “fixing” women is not the solution — fixing environments to create systemic inclusion and allow women to succeed authentically is. In another example, disabled people might be perceived as not “fixable enough” and be summarily excluded, perpetuating ableist systems. A systemic perspective, such as the social model of disability, points toward fixing the lack of accessibility that “disables” individuals who could deliver outstanding results if using well-matched technology or simply working from home.

How can organizations prevent wasting resources and even committing an injustice by trying to fix and excluding individuals when a systemic intervention is called for?

On the individual level, FAE and GAE are difficult to overcome, mainly because of our tendency to revert to automatic patterns and biases when under stress. However, developing empathy and perspective-taking, as well as expanding one’s cultural experience, may help. Perspective-taking training reduces dispositional error, at least in the short term, because it allows us to think about others the way we think about ourselves: contextually. Training individuals in systems thinking and practicing drawing and explaining systems diagrams may also increase cognitive flexibility.

However, while individual-level solutions for “changing minds” are limited, group-level solutions can help develop a more balanced and flexible collective cognition and enrich the collective contextual intelligence. Here are five ways organizations can tame the superbias at the group level:

Diversify the collective cognition in leadership.

When decision-maker groups are homogenous, groupthink is likely. When neurotypical individuals from affluent or Western backgrounds dominate groups, decisions will likely converge toward the shared assumption that everything others do is an individual problem. Including those whose cultural learning or brain wiring makes them more inclined to automatically focus on context and systems — even under stress — can expand the group’s perspective. Welcoming the viewpoints of people who grew up without socioeconomic privilege or in non-Western cultural environments, being inclusive of neurodivergent people, and identifying and involving individuals with high levels of contextual intelligence can support more balanced collective decision making.

This diversity should go beyond tokenistic representation or a temporary “intervention” — organizations should ensure that mechanisms for true inclusion and developing a critical mass of different voices are embedded within systems and processes.

Integrate contextual thinking into forms and procedures.

Questions about contextual considerations can be integrated into forms and templates used for decision making. Training-needs analysis and professional development planning, for example, may include questions about contextual issues that could prevent employees from implementing the new learning, such as bureaucratic structures or a lack of support. The planning of well-being programs may include not only considerations for which apps or events could be added, but also which organizational stressors could be eliminated. And diversity and inclusion assessments and plans must focus on removing systemic barriers, such as biased selection instruments (like unstructured interviews) and inequities in access to high-visibility projects.

Address the stress.

Stress increases the likelihood of ignoring contextual factors. Removing unnecessary time pressure, the need to multitask or juggle multiple projects, physical discomfort (e.g., an indoor temperature that’s too cold or hot), and other stressors that can be controlled allows for more deliberate thinking and reduces the likelihood of taking “mental shortcuts” and reverting to biases. People need time, energy, and mental capacity to consider issues in all their complexity.

Invite broad input.

Employees “in the trenches,” from customer service and frontline supervisors, to research and development, to entry-level HR specialists, see organizational life and industry dynamics from different perspectives. Pooling this collective knowledge via regular surveys, focus groups, and involvement in decision making can help leaders develop a contextually rich picture of organizational pain points and bottlenecks and result in nuanced and systemic solutions.

Regularly asking frontline employees for their perspectives can be a major source of competitive advantage and process improvement, too. Customer service representatives understand the diversity of customer needs and the barriers preventing organizations from meeting them. HR specialists, line supervisors, and employees in the trenches will know whether more employee training, more scheduling options, or even better workplace lighting will be particularly beneficial. And cleaning crews will have a unique insight into organizational work patterns.

Appoint a systems champion.

Human attention is limited, and thinking about our own thinking while trying to solve pressing problems is nearly impossible. However, groups can appoint an individual whose role is to remind all its members about the importance of taking a systemic perspective. This is a more specific variation of the “devil’s advocate” role in preventing groupthink. The champion could remind the group of the pickle principle: Before investing in a pickle intervention, consider the brine.

. . .

Developing a more balanced way of thinking that carefully considers both individual and systemic factors can help leaders be more objective and compassionate, earning employee trust while making more accurate decisions. It can help leadership teams address inclusion systemically and go beyond fragmented efforts. It can help organizations create effective productivity systems that don’t burn out and alienate employees. And on national and global levels, appreciating systemic interdependencies between businesses and communities can help create a healthier and systemically sustainable future of work — and the world.

  • Ludmila N. Praslova, PhD, SHRM-SCP, uses her extensive experience with global, cultural, and neurodiversity to help create inclusive and talent-rich workplaces. She is a professor of Graduate Programs in Industrial-Organizational Psychology and accreditation liaison officer at Vanguard University of Southern California.

Article link: https://hbr.org/2023/01/todays-most-critical-workplace-challenges-are-about-systems?

10 Breakthrough Technologies 2023 – MIT Technology Review

Posted by timmreardon on 01/09/2023
Posted in: Uncategorized.

Every year, we pick the 10 technologies that matter the most right now. You’ll recognize some; others might surprise you.

We look for advances that will have a big impact on our lives and then break down why they matter.

CRISPR for high cholesterol

Over the past decade, the gene-editing tool CRISPR has rapidly evolved from the lab to the clinic. It started with experimental treatments for rare genetic disorders and has recently expanded into clinical trials for common conditions, including high cholesterol. New forms of CRISPR could take things further still.

AI that makes images

This is the year of the AI artists. Software models developed by Google, OpenAI, and others can now generate stunning artworks based on just a few text prompts. Type in a short description of pretty much anything, and you get a picture of what you asked for in seconds. Nothing will be the same again.


A chip design that changes everything

The chip industry is undergoing a profound shift. Manufacturers have long licensed chip designs from a few big firms. Now, a popular open standard called RISC-V is upending those power dynamics by making it easier for anyone to create a chip. Many startups are exploring the possibilities.

Mass-market military drones

Military drones were once out of reach for smaller nations due to their expense and strict export controls. But advances in consumer componentry and communications technology have helped drone manufacturers build complex war machines at much lower prices. The Turkish Bayraktar TB2 and other cheap drones have changed the nature of drone warfare.


Abortion pills via telemedicine

Abortion ceased to be a constitutional right in the US in 2022, and state bans now prevent many people from accessing the procedure. So healthcare providers and startups have turned to telehealth to prescribe and deliver pills that allow people to safely induce abortions at home.

Organs on demand

Every day, an average of 17 people in the US alone die awaiting an organ transplant. These people could be saved—and many others helped—by a potentially limitless supply of healthy organs. Scientists are genetically engineering pigs whose organs could be transplanted into humans and 3D-printing lungs using a patient’s own cells.

The inevitable EV

Electric vehicles are finally becoming a realistic option. Batteries are getting cheaper and governments have passed stricter emissions rules or banned gas-powered vehicles altogether. Major automakers have pledged to go all-electric, and consumers everywhere will soon find there are more good reasons to buy an EV than not.

James Webb Space Telescope

The first breathtaking images of the distant cosmos captured by the world’s most powerful space telescope inspired a collective sense of awe and wonder. And this thing’s just getting started. Discoveries will come almost as rapidly as scientists can analyze the data now flooding in. A new era of astronomy has begun.

Ancient DNA analysis

Genomic sequencing tools now let us read very old strands of human DNA. Studying traces from humans who lived long ago reveals much about who we are and why the modern world looks the way it does. It also helps scientists understand the lives of regular people living back then—not just those who could afford elaborate burials.

Battery recycling

Recycling is vital to prevent today’s growing mountains of discarded batteries from ending up in landfills, and it could also provide a badly needed source of metals for powering tomorrow’s EVs. Companies are building facilities that will reclaim lithium, nickel, and cobalt and feed these metals back to lithium-ion battery manufacturers, helping reduce the cost.

Article link: https://www.technologyreview.com/2023/01/09/1066394/10-breakthrough-technologies-2023/

What’s next for quantum computing – MIT Technology Review

Posted by timmreardon on 01/06/2023
Posted in: Uncategorized.


Companies are moving away from setting qubit records in favor of practical hardware and long-term goals.

By Michael Brooks. January 6, 2023

This story is a part of MIT Technology Review’s What’s Next series, where we look across industries, trends, and technologies to give you a first look at the future.

In 2023, progress in quantum computing will be defined less by big hardware announcements than by researchers consolidating years of hard work, getting chips to talk to one another, and shifting away from trying to make do with noise as the field gets ever more international in scope.

For years, quantum computing’s news cycle was dominated by headlines about record-setting systems. Researchers at Google and IBM have had spats over who achieved what—and whether it was worth the effort. But the time for arguing over who’s got the biggest processor seems to have passed: firms are heads-down and preparing for life in the real world. Suddenly, everyone is behaving like grown-ups.

As if to emphasize how much researchers want to get off the hype train, IBM is expected to announce a processor in 2023 that bucks the trend of putting ever more quantum bits, or “qubits,” into play. Qubits, the processing units of quantum computers, can be built from a variety of technologies, including superconducting circuitry, trapped ions, and photons, the quantum particles of light. 

IBM has long pursued superconducting qubits, and over the years the company has been making steady progress in increasing the number it can pack on a chip. In 2021, for example, IBM unveiled one with a record-breaking 127 of them. In November, it debuted  its 433-qubit Osprey processor, and the company aims to release a 1,121-qubit processor called Condor in 2023. 

But this year IBM is also expected to debut its Heron processor, which will have just 133 qubits. It might look like a backwards step, but as the company is keen to point out, Heron’s qubits will be of the highest quality. And, crucially, each chip will be able to connect directly to other Heron processors, heralding a shift from single quantum computing chips toward “modular” quantum computers built from multiple processors connected together—a move that is expected to help quantum computers scale up significantly. 

Heron is a signal of larger shifts in the quantum computing industry. Thanks to some recent breakthroughs, aggressive roadmapping, and high levels of funding, we may see general-purpose quantum computers earlier than many would have anticipated just a few years ago, some experts suggest. “Overall, things are certainly progressing at a rapid pace,” says Michele Mosca, deputy director of the Institute for Quantum Computing at the University of Waterloo. 

Here are a few areas where experts expect to see progress.

Stringing quantum computers together

IBM’s Heron project is just a first step into the world of modular quantum computing. The chips will be connected with conventional electronics, so they will not be able to maintain the “quantumness” of information as it moves from processor to processor. But the hope is that such chips, ultimately linked together with quantum-friendly fiber-optic or microwave connections, will open the path toward distributed, large-scale quantum computers with as many as a million connected qubits. That may be how many are needed to run useful, error-corrected quantum algorithms. “We need technologies that scale both in size and in cost, so modularity is key,” says Jerry Chow, director at IBM Quantum Hardware System Development.

Other companies are beginning similar experiments. “Connecting stuff together is suddenly a big theme,” says Peter Shadbolt, chief scientific officer of PsiQuantum, which uses photons as its qubits. PsiQuantum is putting the finishing touches on a silicon-based modular chip. Shadbolt says the last piece it requires—an extremely fast, low-loss optical switch—will be fully demonstrated by the end of 2023. “That gives us a feature-complete chip,” he says. Then warehouse-scale construction can begin: “We’ll take all of the silicon chips that we’re making and assemble them together in what is going to be a building-scale, high-performance computer-like system.”

The desire to shuttle qubits among processors means that a somewhat neglected quantum technology will come to the fore now, according to Jack Hidary, CEO of SandboxAQ, a quantum technology company that was spun out of Alphabet last year. Quantum communications, where coherent qubits are transferred over distances as large as hundreds of kilometers, will be an essential part of the quantum computing story in 2023, he says.

“The only pathway to scale quantum computing is to create modules of a few thousand qubits and start linking them to get coherent linkage,” Hidary told MIT Technology Review. “That could be in the same room, but it could also be across campus, or across cities. We know the power of distributed computing from the classical world, but for quantum, we have to have coherent links: either a fiber-optic network with quantum repeaters, or some fiber that goes to a ground station and a satellite network.”

Many of these communication components have been demonstrated in recent years. In 2017, for example, China’s Micius satellite showed that coherent quantum communications could be accomplished between nodes separated by 1,200 kilometers. And in March 2022, an international group of academic and industrial researchers demonstrated a quantum repeater that effectively relayed quantum information over 600 kilometers of fiber optics. 

Taking on the noise

At the same time that the industry is linking up qubits, it is also moving away from an idea that came into vogue in the last five years—that chips with just a few hundred qubits might be able to do useful computing, even though noise easily disrupts their operations. 

This notion, called “noisy intermediate-scale quantum” (NISQ), would have been a way to see some short-term benefits from quantum computing, potentially years before reaching the ideal of large-scale quantum computers with many hundreds of thousands of qubits devoted to correcting errors. But optimism about NISQ seems to be fading. “The hope was that these computers could be used well before you did any error correction, but the emphasis is shifting away from that,” says Joe Fitzsimons, CEO of Singapore-based Horizon Quantum Computing.

Some companies are taking aim at the classic form of error correction, using some qubits to correct errors in others. Last year, both Google Quantum AI and Quantinuum, a new company formed by Honeywell and Cambridge Quantum Computing, issued papers demonstrating that qubits can be assembled into error-correcting ensembles that outperform the underlying physical qubits.

Other teams are trying to see if they can find a way to make quantum computers “fault tolerant” without as much overhead. IBM, for example, has been exploring characterizing the error-inducing noise in its machines and then programming in a way to subtract it (similar to what noise-canceling headphones do). It’s far from a perfect system—the algorithm works from a prediction of the noise that is likely to occur, not what actually shows up. But it does a decent job, Chow says: “We can build an error-correcting code, with a much lower resource cost, that makes error correction approachable in the near term.”

Maryland-based IonQ, which is building trapped-ion quantum computers, is doing something similar. “The majority of our errors are imposed by us as we poke at the ions and run programs,” says Chris Monroe, chief scientist at IonQ. “That noise is knowable, and different types of mitigation have allowed us to really push our numbers.”

Getting serious about software

For all the hardware progress, many researchers feel that more attention needs to be given to programming. “Our toolbox is definitely limited, compared to what we need to have 10 years down the road,” says Michal Stechly of Zapata Computing, a quantum software company based in Boston. 

The way code runs on a cloud-accessible quantum computer is generally “circuit-based,” which means the data is put through a specific, predefined series of quantum operations before a final quantum measurement is made, giving the output. That’s problematic for algorithm designers, Fitzsimons says. Conventional programming routines tend to involve looping some steps until a desired output is reached, and then moving into another subroutine. In circuit-based quantum computing, getting an output generally ends the computation: there is no option for going round again.
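As a rough illustration of that circuit-based model, here is a minimal sketch in Python using Qiskit (one widely used circuit-level SDK; its use here is an assumption for illustration, not something the article specifies). The whole computation is a fixed sequence of gates that ends in a measurement, with no way to loop on intermediate results and then continue:

```python
# Minimal circuit-based sketch (assumes Qiskit is installed).
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)        # two qubits, two classical bits
qc.h(0)                          # put qubit 0 into superposition
qc.cx(0, 1)                      # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])       # the final measurement ends the computation

print(qc.draw())                 # the circuit is fully predefined before it runs
```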

Horizon Quantum Computing is one of the companies that have been building programming tools to allow these flexible computation routines. “That gets you to a different regime in terms of the kinds of things you’re able to run, and we’ll start rolling out early access in the coming year,” Fitzsimons says.

Helsinki-based Algorithmiq is also innovating in the programming space. “We need nonstandard frameworks to program current quantum devices,” says CEO Sabrina Maniscalco. Algorithmiq’s newly launched drug discovery platform, Aurora, combines the results of a quantum computation with classical algorithms. Such “hybrid” quantum computing is a growing area, and it’s widely acknowledged as the way the field is likely to function in the long term. The company says it expects to achieve a useful quantum advantage—a demonstration that a quantum system can outperform a classical computer on real-world, relevant calculations—in 2023. 

Competition around the world

Change is likely coming on the policy front as well. Government representatives including Alan Estevez, US undersecretary of commerce for industry and security, have hinted that trade restrictions surrounding quantum technologies are coming. 

Tony Uttley, COO of Quantinuum, says that he is in active dialogue with the US government about making sure this doesn’t adversely affect what is still a young industry. “About 80% of our system is components or subsystems that we buy from outside the US,” he says. “Putting a control on them doesn’t help, and we don’t want to put ourselves at a disadvantage when competing with other companies in other countries around the world.”

And there are plenty of competitors. Last year, the Chinese search company Baidu opened access to a 10-superconducting-qubit processor that it hopes will help researchers make forays into applying quantum computing to fields such as materials design and pharmaceutical development. The company says it has recently completed the design of a 36-qubit superconducting quantum chip. “Baidu will continue to make breakthroughs in integrating quantum software and hardware and facilitate the industrialization of quantum computing,” a spokesman for the company told MIT Technology Review. The tech giant Alibaba also has researchers working on quantum computing with superconducting qubits.

In Japan, Fujitsu is working with the Riken research institute to offer companies access to the country’s first home-grown quantum computer in the fiscal year starting April 2023. It will have 64 superconducting qubits. “The initial focus will be on applications for materials development, drug discovery, and finance,” says Shintaro Sato, head of the quantum laboratory at Fujitsu Research.

Not everyone is following the well-trodden superconducting path, however. In 2020, the Indian government pledged to spend 80 billion rupees ($1.12 billion when the announcement was made) on quantum technologies. A good chunk will go to photonics technologies—for satellite-based quantum communications, and for innovative “qudit” photonics computing.

Qudits expand the data encoding scope of qubits—they offer three, four, or more dimensions, as opposed to just the traditional binary 0 and 1, without necessarily increasing the scope for errors to arise. “This is the kind of work that will allow us to create a niche, rather than competing with what has already been going on for several decades elsewhere,” says Urbasi Sinha, who heads the quantum information and computing laboratory at the Raman Research Institute in Bangalore, India.

Though things are getting serious and internationally competitive, quantum technology remains largely collaborative—for now. “The nice thing about this field is that competition is fierce, but we all recognize that it’s necessary,” Monroe says. “We don’t have a zero-sum-game mentality: there are different technologies out there, at different levels of maturity, and we all play together right now. At some point there’s going to be some kind of consolidation, but not yet.”

Michael Brooks is a freelance science journalist based in the UK.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/01/06/1066317/whats-next-for-quantum-computing/amp/

Turkey to use blockchain-based digital identity for online public services – Cointelegraph

Posted by timmreardon on 01/05/2023
Posted in: Uncategorized.

By Hazal Orta, January 2, 2023

Shortly after its central bank completed its first CBDC tests, Turkey announced a blockchain-based digital identity application.

Turkey plans to use blockchain technology during the login process for online public services. E-Devlet, Turkey’s digital government portal used to access a wide range of public services, will use a blockchain-based digital identity to verify Turkish citizens during login.

Fuat Oktay, the vice president of Turkey, announced during the Digital Turkey 2023 event that citizens will be able to use blockchain-based digital identity to access e-wallet applications, Cointelegraph Turkey reported. 

Oktay called the blockchain-based application a revolution for e-government efforts, adding that online services will be more secure and accessible with blockchain. Users will be able to keep their digital information on their mobile phones.

“With the login system that will work within the scope of the e-wallet application, our citizens will be able to enter the e-Devlet with a digital identity created in the blockchain network,” the vice president said.

Related: Turkey’s central bank completes first CBDC test with more to come in 2023

Turkey has announced several blockchain-powered projects over the years, but few have been realized. The country’s plans for a national blockchain infrastructure date back to 2019. However, aside from some proof-of-concept projects and its central bank digital currency test — executed after several delays — its blockchain ambitions have yet to bear fruit.

As of January 2020, Turkey’s cultural hub of Konya was developing a “City Coin” project to be used by citizens to pay for public services, but no further updates have been shared with the public in the last two years.

Article link: https://cointelegraph.com/news/tuerkiye-to-use-blockchain-based-digital-identity-for-online-public-services

Application Security Maturity Models

Posted by timmreardon on 01/03/2023
Posted in: Uncategorized.

A look at OWASP’s Software Assurance Maturity Model (SAMM)

By Chris Hughes

We’ve spoken a fair bit about secure software development and efforts to improve the overall security maturity of an organization’s applications and software. In this article, we will look in particular at OWASP’s SAMM.

We’ve discussed emerging requirements in the U.S. Federal sector such as the Office of Management and Budget (OMB) calling for third-party software vendors to self-attest to aligning with NIST’s Secure Software Development Framework (SSDF). 

As covered in that article, SSDF utilizes a myriad of existing industry sources, such as OWASP’s ASVS, BSIMM and OWASP’s SAMM when it comes to its specific practices, tasks and examples for secure software development. 

For an example of the synergy between NIST’s SSDF and OWASP SAMM, let’s take a look at a specific SSDF practice, such as “Identify and Confirm Vulnerabilities on an Ongoing Basis” (RV.1) in the Respond to Vulnerabilities area. That practice cites OWASP SAMM IM1-A, IM2-B, and EH1-B among its references.
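To make that cross-reference concrete, here is a tiny, purely illustrative Python lookup seeded with the RV.1 example above. The dictionary and function names are hypothetical; a real crosswalk would cover the full SSDF and be maintained against the published documents.

```python
# Illustrative only: which OWASP SAMM activities an SSDF practice cites,
# seeded with the RV.1 example discussed in the text.
SSDF_TO_SAMM = {
    "RV.1": ["SAMM IM1-A", "SAMM IM2-B", "SAMM EH1-B"],  # Identify and Confirm Vulnerabilities on an Ongoing Basis
}

def samm_references(ssdf_practice: str) -> list[str]:
    """Return the SAMM activities cited by a given SSDF practice, if known."""
    return SSDF_TO_SAMM.get(ssdf_practice, [])

print(samm_references("RV.1"))   # ['SAMM IM1-A', 'SAMM IM2-B', 'SAMM EH1-B']
```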

Looking at models such as Building Security in Maturity Model (BSIMM) or Software Assurance Maturity Model (SAMM) can be very effective ways to document the maturity of a supplier’s software program or even internal software development efforts across organizational development teams. 

When it comes to third-party software vendors, it should be noted that a self-attestation of adherence should be treated cautiously, as it isn’t concrete evidence of true alignment with requirements. Just take a quick look at the DoD Defense Industrial Base (DIB), which has seen several years of notable security incidents among vendors that had historically self-attested to NIST 800-171 security controls. That track record has led to the push for third-party attestation through the emerging Cybersecurity Maturity Model Certification (CMMC) framework, much like FedRAMP’s use of a 3PAO.

However, as we have discussed in our article on FedRAMP, 3PAO compliance schemes come with their own challenges, most notably scalability and a severely limited pool of vendors that an organization or industry can access, which isn’t always desirable.

This is also where other artifacts, such as SBOMs, which show the software component inventory of applications and software and their associated vulnerabilities, can be valuable, along with traditional technical measures such as vulnerability scanning and penetration testing.

All of that aside, the focus of this article is OWASP’s SAMM, so let’s dive in.

OWASP SAMM

OWASP SAMM, known as OpenSAMM in prior versions (https://owasp.org/www-project-samm/), aims to help organizations formulate and implement strategies for software security. The project cites four key ways it can help organizations:

  • Evaluate an organization’s existing software security practices
  • Build a balanced software security assurance program in well-defined iterations
  • Demonstrate concrete improvements to a security assurance program
  • Define and measure security-related activities throughout an organization

BSIMM is also widely used, but as it is a proprietary model, we will opt to examine open-source approaches, such as OWASP’s SAMM.

SAMM defines the following domains: 

Source: OWASP SAMM v2 Model (https://owasp.org/www-project-samm/)  

SAMM defines five business functions, as depicted in the image above: Governance, Design, Implementation, Verification, and Operations. Each business function contains three security practices, and each security practice involves two streams of activities that complement and build upon one another. Since SAMM is an open model, it can be used internally to assess an organization, or by a third party.

Like all OWASP projects, SAMM is a community-driven effort and aims to be measurable, actionable, and versatile. Unlike BSIMM, SAMM is prescriptive, meaning it prescribes specific actions and practices organizations can take to improve their software assurance. SAMM is, as the name states, a maturity model. 

Maturity ranges from level 1 to 3 across the security practices SAMM specifies, with an implicit starting point of 0. While SAMM is a maturity model, it does not state that all organizations must achieve the highest level of maturity across all practices. Maturity requirements and goals will depend on the organization’s resources, compliance requirements, and mission sets. Each stream has a corresponding maturity level ranging from 1 to 3, with each level building on the activities of the levels below it.
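As a rough sketch of that hierarchy (not an OWASP artifact; the example names below are paraphrased from the descriptions in this article), the business function, security practice, stream, and maturity-level structure might be modeled like this in Python:

```python
# Minimal sketch of the SAMM hierarchy: business functions contain security
# practices, practices contain two streams, and each stream is assessed at a
# maturity level from 0 (implicit starting point) to 3.
from dataclasses import dataclass, field

@dataclass
class Stream:
    name: str
    maturity: int = 0            # 0-3

@dataclass
class SecurityPractice:
    name: str
    streams: list[Stream] = field(default_factory=list)

@dataclass
class BusinessFunction:
    name: str
    practices: list[SecurityPractice] = field(default_factory=list)

governance = BusinessFunction(
    "Governance",
    [SecurityPractice("Strategy & Metrics",
                      [Stream("Create & Promote"), Stream("Measure & Improve")])],
)
print(governance)
```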

Let us dive into some of the business functions and associated security practices within SAMM a bit.

Governance

The first business function is Governance, which is focused on the processes and activities related to how an organization manages their software development activities. The practices involved include Strategy & Metrics, Policy & Compliance and Education & Guidance. 

This involves creating and promoting strategies and metrics and then measuring and improving them over time. On the policy and compliance front, it involves creating policies and standards and then managing their implementation and adherence across the organization. Underneath Education & Guidance you have streams such as Training and Awareness, and Organization and Culture. 

Training and Awareness focuses on improving software security knowledge among an organization’s various stakeholders, while Organization and Culture is oriented around promoting a culture of security within the organization.

Design

The second business function is Design, which focuses on processes and activities for how organizations create and design software. The security practices include Threat Assessment, Security Requirements and Security Architecture. Threat Assessment focuses on streams such as application risk profiling and threat modeling. 

As part of profiling, organizations determine which applications would pose serious threats to the organization if compromised. Threat modeling, as we have discussed elsewhere, helps teams understand what is being built, what can go wrong, and how to mitigate those risks.

Security Requirements involves requirements for how software is built and protected as well as requirements for relevant supplier organizations that may be involved in the development context of an organization’s applications, such as outsourced developers. Security Architecture deals with the various components and technologies involved in the architecture design of a firm’s software. 

This includes the architecture design to ensure secure design as well as technology management which involves understanding risks associated with the various technologies, frameworks, tools, and integrations that applications use.

Implementation

The third business function is Implementation, which covers how an organization builds and deploys software components and manages their associated defects. The security practices involved are Secure Build, which establishes consistently repeatable build processes and an accounting of dependencies; Secure Deployment, which increases the security of software deployments to production; and Defect Management, which manages security defects in deployed software.

The streams within the Secure Build practice are Build Process and Software Dependencies. Build Process ensures you are using predictable, repeatable, secure build processes. Software Dependencies ensures that external libraries and their security posture match the organization’s requirements and risk tolerance. The Secure Deployment security practice focuses on the final stages of delivering software to production environments and ensuring its integrity and security during that process.

The streams associated with this practice are the Deployment Process and Secrets Management. The deployment process ensures organizations have a repeatable and consistent deployment process to push software artifacts to production as well as the requisite test environments. 

Secrets Management focuses on the proper handling of sensitive data such as credentials, API keys, and other secrets that malicious actors can abuse to compromise the environments and systems involved in software development. Defect Management is the last security practice in this business function; it focuses on collecting, recording, and analyzing software security defects to make data-driven decisions.

The streams involved are Defect Tracking and Metrics and Feedback. Both involve managing the collection and follow-up of defects as well as driving security improvements through these activities.
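Returning briefly to the Secrets Management stream mentioned above, one small generic illustration (not taken from SAMM itself) is reading credentials from the environment, where a deployment pipeline or secrets manager injects them, rather than hardcoding them in source or build artifacts. The variable name below is hypothetical:

```python
# Sketch: fail fast if a required secret was not injected at deploy time.
import os

def get_api_key() -> str:
    key = os.environ.get("PAYMENTS_API_KEY")   # hypothetical secret name
    if not key:
        raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start")
    return key
```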

Verification

The Verification business function is the processes and activities for how organizations check and test artifacts throughout software development. The security practices associated with verification are architecture assessment, requirements-driven testing, and security testing. 

Architecture Assessment validates the security and compliance of the software and its supporting architecture, while requirements-driven and security testing use artifacts such as user stories to detect and resolve security issues through automation. Architecture Assessment involves two streams: validation and mitigation.

This means validating that security objectives and requirements are provided for in the supporting architecture and mitigating the threats identified in the existing architecture. The testing streams under these practices ensure organizations perform activities such as misuse/abuse testing, using methods such as fuzzing to identify functionality that can be abused to attack an application.

Security Testing involves both a broad baseline of automated testing and deeper manual testing for high-risk components and complex attack vectors that automated testing cannot cover.

Operations

The Operations business function ensures that the Confidentiality, Integrity, and Availability of applications and their associated data are maintained throughout their lifecycle, including in runtime environments.

Security practices include incident, environment, and operational management. Going further, streams encompass various areas such as incident detection and response as well as configuration hardening and patching. 

Lastly, Operational Management ensures that data is protected throughout its lifecycle of creation, handling, storage, and processing, and that legacy management keeps end-of-life services and software from remaining actively deployed or supported. This reduces an organization’s attack surface and removes potentially vulnerable components from systems and applications.

By utilizing SAMM and covering the various Business Functions, Security Practices, and Streams, organizations can gain more assurance around their application security maturity. The same goes for their software consumers, who benefit from suppliers maturing their software development practices.

OWASP provides a set of useful resources for organizations looking to use SAMM, such as a How-To Guide, Quick-Start Guide and a SAMM Tool Box. If you’re interested in digging deeper into the practices and associated details, be sure to check out the web or PDF versions of the full SAMM model. This will help you understand each Business Function, Security Practice, their associated Streams and Maturity Levels. 

For example, the OWASP SAMM Toolkit provides a structured worksheet where organizations can capture and document their existing OWASP SAMM maturity levels across the various Business Functions and Security Practices and produce correlating scores to see how they measure up and where they have gaps they want to address.
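As a simplified sketch of the kind of roll-up that worksheet automates (the assessment data below is hypothetical), stream-level maturity scores can be averaged into per-practice and per-function scores:

```python
# Average hypothetical stream maturities (0-3) up to practice and
# business-function scores, roughly as the SAMM Toolkit worksheet does.
from statistics import mean

assessment = {
    "Governance": {
        "Strategy & Metrics": [1, 2],      # two streams per practice
        "Policy & Compliance": [2, 2],
        "Education & Guidance": [1, 0],
    },
}

for function, practices in assessment.items():
    practice_scores = {p: mean(levels) for p, levels in practices.items()}
    function_score = round(mean(practice_scores.values()), 2)
    print(function, function_score, practice_scores)
```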

Article link: https://resilientcyber.substack.com/p/application-security-maturity-models

Inside a radical new project to democratize AI – MIT Technology Review

Posted by timmreardon on 01/02/2023
Posted in: Uncategorized.


A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.

By Melissa Heikkilä

July 12, 2022

PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels. 

They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed.

Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained. 

BLOOM was created over the last year by over 1,000 volunteer researchers in a project called BigScience, which was coordinated by AI startup Hugging Face using funding from the French government. It officially launched on July 12. The researchers hope developing an open-access LLM that performs as well as other leading models will lead to long-lasting changes in the culture of AI development and help democratize access to cutting-edge AI technology for researchers around the world. 

The model’s ease of access is its biggest selling point. Now that it’s live, anyone can download it and tinker with it free of charge on Hugging Face’s website. Users can pick from a selection of languages and then type in requests for BLOOM to do tasks like writing recipes or poems, translating or summarizing texts, or writing programming code. AI developers can use the model as a foundation to build their own applications. 
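For example, here is a minimal sketch of loading a BLOOM checkpoint with the Hugging Face transformers library and generating text from a prompt. The smaller bigscience/bloom-560m variant is assumed here purely so the example can run on modest hardware; the full 176-billion-parameter model is published as bigscience/bloom and needs far more resources.

```python
# Sketch: download a BLOOM checkpoint from the Hugging Face Hub and generate
# a continuation for a prompt (requires the transformers and torch packages).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"            # smaller variant, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Écris un court poème sur la mer :"  # BLOOM handles many languages
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```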

At 176 billion parameters (variables that determine how input data is transformed into the desired output), it is bigger than OpenAI’s 175-billion-parameter GPT-3, and BigScience claims that it offers similar levels of accuracy and toxicity as other models of the same size. For languages such as Spanish and Arabic, BLOOM is the first large language model of this size.

But even the model’s creators warn it won’t fix the deeply entrenched problems around large language models, including the lack of adequate policies on data governance and privacy and the algorithms’ tendency to spew toxic content, such as racist or sexist language.

Out in the open

Large language models are deep-learning algorithms that are trained on massive amounts of data. They are one of the hottest areas of AI research. Powerful models such as GPT-3 and LaMDA, which produce text that reads as if a human wrote it, have huge potential to change the way we process information online. They can be used as chatbots or to search for information, moderate online content, summarize books, or generate entirely new passages of text based on prompts. But they are also riddled with problems. It takes only a little prodding before these models start producing harmful content.

The models are also extremely exclusive. They need to be trained on massive amounts of data using lots of expensive computing power, which is something only large (and mostly American) technology companies such as Google can afford. 

Most big tech companies developing cutting-edge LLMs restrict their use by outsiders and have not released information about the inner workings of their models. This makes it hard to hold them accountable. The secrecy and exclusivity are what the researchers working on BLOOM hope to change.

Meta has already taken steps away from the status quo: in May 2022 the company released its own large language model, Open Pretrained Transformer (OPT-175B), along with its code and a logbook detailing how the model was trained. 

But Meta’s model is available only upon request, and it has a license that limits its use to research purposes. Hugging Face goes a step further. The meetings detailing its work over the past year are recorded and uploaded online, and anyone can download the model free of charge and use it for research or to build commercial applications.  

A big focus for BigScience was to embed ethical considerations into the model from its inception, instead of treating them as an afterthought. LLMs are trained on tons of data collected by scraping the internet. This can be problematic, because these data sets include lots of personal information and often reflect dangerous biases. The group developed data governance structures specifically for LLMs that should make it clearer what data is being used and who it belongs to, and it sourced different data sets from around the world that weren’t readily available online.  

The group is also launching a new Responsible AI License, which is something like a terms-of-service agreement. It is designed to deter the use of BLOOM in high-risk sectors such as law enforcement or health care, or for harming, deceiving, exploiting, or impersonating people. The license is an experiment in self-regulating LLMs before laws catch up, says Danish Contractor, an AI researcher who volunteered on the project and co-created the license. But ultimately, there’s nothing stopping anyone from abusing BLOOM.

The project had its own ethical guidelines in place from the very beginning, which worked as guiding principles for the model’s development, says Giada Pistilli, Hugging Face’s ethicist, who drafted BLOOM’s ethical charter. For example, it made a point of recruiting volunteers from diverse backgrounds and locations, ensuring that outsiders can easily reproduce the project’s findings, and releasing its results in the open. 

All aboard

This philosophy translates into one major difference between BLOOM and other LLMs available today: the vast number of human languages the model can understand. It can handle 46 of them, including French, Vietnamese, Mandarin, Indonesian, Catalan, 13 Indic languages (such as Hindi), and 20 African languages. Just over 30% of its training data was in English. The model also understands 13 programming languages.

This is highly unusual in the world of large language models, where English dominates. That’s another consequence of the fact that LLMs are built by scraping data off the internet: English is the most commonly used language online.

The reason BLOOM was able to improve on this situation is that the team rallied volunteers from around the world to build suitable data sets in other languages even if those languages weren’t as well represented online. For example, Hugging Face organized workshops with African AI researchers to try to find data sets such as records from local authorities or universities that could be used to train the model on African languages, says Chris Emezue, a Hugging Face intern and a researcher at Masakhane, an organization working on natural-language processing for African languages.

Including so many different languages could be a huge help to AI researchers in poorer countries, who often struggle to get access to natural-language processing because it uses a lot of expensive computing power. BLOOM allows them to skip the expensive part of developing and training the models in order to focus on building applications and fine-tuning the models for tasks in their native languages. 
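
As a rough sketch of what that fine-tuning step can look like in practice, the snippet below adapts a small BLOOM variant to a handful of in-language sentences using the Hugging Face transformers and datasets libraries. The model size, the two-sentence toy corpus, and the hyperparameters are assumptions for illustration only.

    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "bigscience/bloom-560m"   # small variant; illustrative only
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Two toy sentences standing in for a locally collected, in-language corpus.
    texts = ["Habari ya asubuhi, karibu kwenye kliniki yetu.",
             "Tafadhali jaza fomu hii kabla ya kuonana na daktari."]

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=128)

    ds = Dataset.from_dict({"text": texts}).map(
        tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="bloom-finetuned",
                               num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=ds,
        # mlm=False tells the collator to build labels for causal (next-token) LM.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("bloom-finetuned")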

“If you want to include African languages in the future of [natural-language processing] … it’s a very good and important step to include them while training language models,” says Emezue.

Handle with caution

BigScience has done a “phenomenal” job of building a community around BLOOM, and its approach of involving ethics and governance from the beginning is a thoughtful one, says Percy Liang, director of Stanford’s Center for Research on Foundation Models. 

However, Liang doesn’t think it will lead to significant changes to LLM development. “OpenAI and Google and Microsoft are still blazing ahead,” he says.

Ultimately, BLOOM is still a large language model, and it still comes with all the associated flaws and risks. Companies such as OpenAI have not released their models or code to the public because, they argue, the sexist and racist language that has gone into them makes them too dangerous to use that way. 

BLOOM is also likely to incorporate inaccuracies and biased language, but since everything about the model is out in the open, people will be able to interrogate the model’s strengths and weaknesses, says Margaret Mitchell, an AI researcher and ethicist at Hugging Face.

BigScience’s biggest contribution to AI might end up being not BLOOM itself, but the numerous spinoff research projects its volunteers are getting involved in. For example, such projects could bolster the model’s privacy credentials and come up with ways to use the technology in different fields, such as biomedical research.  

“One new large language model is not going to change the course of history,” says Teven Le Scao, a researcher at Hugging Face who co-led BLOOM’s training. “But having one good open language model that people can actually do research on has a strong long-term impact.”

When it comes to the potential harms of LLMs, “Pandora’s box is already wide open,” says Le Scao. “The best you can do is to create the best conditions possible for researchers to study them.”

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2022/07/12/1055817/inside-a-radical-new-project-to-democratize-ai/amp/

Time for Resilient Critical Material Supply Chain Policies – RAND

Posted by timmreardon on 12/30/2022
Posted in: Uncategorized. Leave a comment

by Fabian Villalobos, Jonathan L. Brosmer, Richard Silberglitt, Justin M. Lee, Aimee E. Curtright


Research Questions

  1. What is the nature of the critical materials problem?
  2. How did the current rare earth element (REE) supply chain form, and what lessons learned are applicable to other critical materials, such as those found in lithium-ion batteries (LIBs)?
  3. What are the potential risks of a disruption to the critical material supply chain?
  4. What should be the aim of policies used to prevent or mitigate the effects of shocks to critical material supply chains?

The ongoing coronavirus disease 2019 pandemic and Russian invasion of Ukraine highlight the vulnerabilities of supply chains that lack diversity and are dependent on foreign inputs. This report presents a short, exploratory analysis summarizing the state of critical materials — materials essential to economic and national security — using two case studies and policies available to the U.S. Department of Defense (DoD) to increase the resilience of its supply chains in the face of disruption.

China is the largest producer and processor of rare earth oxides (REOs) worldwide and a key producer of lithium-ion battery (LIB) materials and components. China’s market share of REO extraction has decreased, but it still has a large influence over the downstream supply chain: processing and magnet manufacturing. Chinese market share of the LIB supply chain mirrors REO supply bottlenecks. If it desired, China could effectively cut off 40 to 50 percent of global REO supply, affecting U.S. manufacturers and suppliers of DoD systems and platforms.

Although a deliberate disruption is unlikely, resilience against supply disruption and building domestic competitiveness are important. The authors discuss plausible REO disruption scenarios and their hazards and synthesize insights from a “Day After . . .” exercise and structured interviews with stakeholders to identify available policy options for DoD and the U.S. government to prevent or mitigate the effects of supply disruptions on the defense industrial base (DIB) and broader U.S. economy. They explore these policies’ applicability to another critical material supply chain — LIB materials — and make recommendations for policy goals.

Key Findings

  • China has used a variety of economic practices to capture a large portion of the REE supply chain. It has used this disproportionate market share to manipulate the availability and pricing of these materials outside China.
  • Economic coercion by China has usually been executed by denying access to its domestic markets. REOs have been the lone exception: China threatened to restrict Chinese exports to pressure a U.S. partner nation (Japan) for geopolitical ends.
  • Projects planned to increase the extraction and processing capacity of REOs fall short of meeting estimated future demand outside China; there is not enough supply outside China to mitigate a disruption event.
  • China could effectively cut off 40–50 percent of global REO supply, which would affect manufacturers and suppliers of advanced components used in DoD systems and platforms. The DIB has a limited time frame in which to respond to a disruption before industrial readiness suffers.
  • The potential risks associated with a disruption could affect the broader U.S. economy and the DIB’s ability to procure critical materials, as well as interrupt military operations in some cases.
  • DoD has a variety of policies available to mitigate the effects of disruption. These can be categorized as proactive, reactive, or both. Each policy has an effective time to impact, or the time needed for implementation and for benefits to materialize.
  • Policy options used thus far have yielded mixed results.

Recommendations

  • Proactive policies should aim to diversify critical material supply chains away from Chinese industry by expanding extraction capacity or increasing material recycling efforts.
  • Proactive efforts should aim to co-locate the upstream and downstream sectors to better leverage industrial efficiencies.
  • Reactive policies should aim to increase the DIB’s resiliency in the face of supply disruption by reducing its time to recover and increasing its time to survive.
  • Both proactive and reactive policy options with the longest time to impact should be implemented sooner rather than later to realize benefits.
  • Policies, planning, and coordination should also aim to reduce the time to impact for both proactive and reactive policies.
  • Both proactive and reactive policies should leverage U.S. ally and partner capabilities, or build relationships with nontraditional countries, to establish free access to critical materials at a fair market price wherever possible.
  • Working with nontraditional partners will be necessary because these countries have geographic access to critical materials and extraction capacity. Depending solely on traditional allies and partners may divert only part of the supply chain away from Chinese industry.
  • Chinese disinformation campaigns should be expected in other critical material supply chains. This sector should work with cybersecurity experts and the U.S. intelligence community to educate executives and local governments about risks. Businesses should communicate their plans — and any influence operations underway — to local communities. The intelligence community should educate policymakers, U.S. allies and partners, and the public about the extent of Chinese interference in critical material supply chains.

Article link: https://www.rand.org/pubs/research_reports/RRA2102-1.html

Software Defines Tactics – Hudson Institute

Posted by timmreardon on 12/22/2022
Posted in: Uncategorized. Leave a comment

Structuring Military Software Acquisitions for Adaptability and Advantage in a Competitive Era

Jason Weiss & Dan Patt


Executive Summary

You would not be reading this if you did not realize that it is important for the Department of Defense (DoD) to get software right. There are two sides to the coin of ubiquitous software for military systems. On one side lies untold headaches—new cyber vulnerabilities in our weapons and supporting systems, long development delays and cost overruns, endless upgrade requirements for software libraries and underlying infrastructure, challenges in modernizing legacy systems, and unexpected and undesirable bugs that emerge even after successful operational testing and evaluation. On the other side lies a vast potential for future capability, with surprising new military capabilities deployed to aircraft during and between sorties, seamless collaboration between military systems from different services and domains, and rich data exchange between allies and partners in pursuit of military goals. This report offers advice to help maximize the benefits and minimize the liabilities of the software-based aspects of acquisition, largely through structuring acquisition to enable rapid changes across diverse software forms.

This report features a narrower focus and more technical depth than typical policy analysis. We believe this detail is necessary to achieve our objectives and reach our target audience. We intend this to be a useful handbook for the DoD acquisition community and, in particular, the program executive officers (PEOs) and program managers as they navigate a complex landscape under great pressure to deliver capability in an environment of strategic competition. All of the 83 major defense acquisition programs and the many smaller acquisition category II and III efforts that make up the other 65 percent of defense investment scattered across the 3,112 program, project, and activity (PPA) line items found in the president’s budget request now include some software activity by our accounting. We would be thrilled if a larger community—contracting officers, industry executives, academics, engineers and programmers, policy analysts, legislators, staff, and operational military service members—also gleaned insight from this document. But we know that some terms may come across as jargon and that not everyone is familiar with the names of common software development tools or methods. We encourage them to read this nonetheless and are confident that the core principles and insights we present are still accessible to a broader audience.

While other recent analyses have focused on the imperative for software and offered high-level visions for a better future, we believe most of the acquisition community already recognizes the potential of a digital future and is engaged in a more tactical set of battles and decisions: how to structure their organizations, how to manage expectations, how to structure their deliverables, and how to write solicitations and let contracts meet requirements and strive for useful outcomes. We attempt to present background and principles that can assist them in navigating this complex landscape.

The real motivation for getting military software right is not to make the DoD more like commercial industry through digital modernization. Instead, it is to create a set of competitive advantages for the United States that stem from the embrace of distributed decision-making and mission command. The strategic context of this work stems from the observation that advantage in future military contests is less likely to come from the mere presence of robotics or artificial intelligence in military systems and is more likely to come from using these (and other) components effectively as part of a force design, and specifically from maintaining sufficient adaptability in the creation and implementation of tactics. Software, software systems, and information processing systems (including artificial intelligence and machine learning, or AI/ML) that leverage software-generated data are the critical ingredients in future force design, force employment, tactics generation, and execution monitoring. Software will bind together human decision-makers and automation support in future conflict.

This report aims to accomplish two goals:

  • Elucidate a set of principles that can help the acquisition community navigate the software world—a turbulent sea of buzzwords, technological change, bureaucratic processes, and external pressures.
  • Recruit this community to apply energy to a handful of areas that will enable better decisions and faster actions largely built around an alternative set of processes for evolutionary development.

The report is structured in chapters, each built around a core insight. Chapter 1 notes that the speed and ubiquity of digital information systems are driving military operations ever closer to capability development, and that this trend brings promise for capability and peril for bureaucratic practices.

Chapter 2 offers the military framing for software development, pointing out that it can enable greater adaptability in force employment and that the DoD cannot derive future concepts of more distributed forces and greater disaggregation of capability from requirements and specifications alone. Instead, software should enable employment tactics to evolve, especially as the elements of future combat are outside the control of any one program manager or office.

Chapter 3 introduces a framing analogy between software delivery and logistics. As logistics practitioners recognize, there is no one-size-fits-all solution to the problem of moving physical goods, but a set of principles that helps navigate many gradients of logistics systems. Similarly, the world of software is full of diversity—different use cases, computing environments, and security levels—and this heterogeneity is an intrinsic feature that the DoD needs to embrace.

Chapter 4 introduces the essential tool of the modern program office—the software factory—the tooling and processes via which operational software is created and delivered. All existing operational software was made and delivered somehow and, whether we like it or not, we will have to update it in the future. Thus, the production process matters. In many ways, the most important thing a PEO can do is identify, establish, or source an effective software factory suited to the particular needs of their unique deliverables.
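
To make the software-factory idea concrete, here is a toy, non-authoritative sketch of such a pipeline: code moves through analysis, test, and packaging stages in order, and nothing ships if any stage fails. The stage names and the tools invoked (pyflakes, pytest, build) are placeholders chosen for this example, not anything recommended by the report.

    import subprocess
    import sys

    # Ordered pipeline stages: each is a name plus the command that implements it.
    # A real program office would substitute its own analysis, test, and
    # packaging steps and run them on a CI/CD platform.
    STAGES = [
        ("static analysis", ["python", "-m", "pyflakes", "src"]),
        ("unit tests",      ["python", "-m", "pytest", "-q", "tests"]),
        ("package",         ["python", "-m", "build", "--wheel"]),
    ]

    def run_pipeline() -> int:
        for name, cmd in STAGES:
            print(f"[factory] running stage: {name}")
            result = subprocess.run(cmd)
            if result.returncode != 0:
                # Fail fast: an artifact that did not pass every gate never ships.
                print(f"[factory] stage failed: {name}", file=sys.stderr)
                return result.returncode
        print("[factory] all stages passed; artifact is ready for delivery")
        return 0

    if __name__ == "__main__":
        sys.exit(run_pipeline())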

Chapter 5 seeks to break down the ideas introduced earlier into actionable atomic principles that the DoD can use to navigate the tough decisions that the acquisition community faces; we refer to these ideas as ACTS. To aid the reader in understanding these, we also provide a fictitious vignette.

We recognize the inherent tension between making a report compact and accessible and covering the myriad facets of a broad and fast-changing scene. To balance this, we believe that we have focused this report on the most impactful aspects of software acquisition.


Article link: https://www.hudson.org/national-security-defense/software-defines-tactics

Pentagon Supply Chain Fails Minimal Standards for US National Security – Technewsworld

Posted by timmreardon on 12/16/2022
Posted in: Uncategorized. Leave a comment
By Jack M. Germain
December 9, 2022 10:37 AM PT

Most contractors the Department of Defense hired in the last five years failed to meet the required minimum cybersecurity standards, posing a significant risk to U.S. national security.

Managed service vendor CyberSheath on Nov. 30 released a report showing that 87% of the Pentagon supply chain fails to meet basic cybersecurity minimums. Those security gaps expose sizeable prime defense contractors and their subcontractors to cyberattacks from a range of threat actors, putting U.S. national security at risk.

Those risks have been well-known for some time without attempts to fix them. This independent study of the Defense Industrial Base (DIB) is the first to show that federal contractors are not properly securing military secrets, according to CyberSheath.

The DIB is a complex supply chain comprising 300,000 primes and subcontractors. The government allows these approved companies to share sensitive files and communicate securely to get their work done.

Defense contractors will soon be required to meet Cybersecurity Maturity Model Certification (CMMC) compliance to keep those secrets safe. Meanwhile, the report warns that nation-state hackers are actively and specifically targeting these contractors with sophisticated cyberattack campaigns.

“Awarding contracts to federal contractors without first validating their cybersecurity controls has been a complete failure,” Eric Noonan, CEO at CyberSheath, told TechNewsWorld.

Defense contractors have been mandated to meet cybersecurity compliance requirements for more than five years. Those conditions are embedded in more than one million contracts, he added.

Dangerous Details

The Merrill Research Report 2022, commissioned by CyberSheath, revealed that 87% of federal contractors have a sub-70 Supplier Performance Risk System (SPRS) score. The metric shows how well a contractor meets Defense Federal Acquisition Regulation Supplement (DFARS) requirements.

DFARS has been law since 2017 and requires a score of 110 for full compliance. Critics of the system have anecdotally deemed 70 to be “good enough.” Even so, the overwhelming majority of contractors still come up short.
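
For context on how such scores arise: under the NIST SP 800-171 DoD Assessment Methodology, an assessment starts at the maximum of 110 points and subtracts a weighted deduction (1, 3, or 5 points) for each security requirement that is not implemented. The sketch below illustrates that arithmetic; the specific controls and weights shown are illustrative assumptions, not figures from the report.

    # Maximum score under the NIST SP 800-171 DoD Assessment Methodology.
    MAX_SCORE = 110

    # Hypothetical gaps: control -> deduction weight (1, 3, or 5 points,
    # depending on how critical the requirement is deemed).
    unimplemented = {
        "3.5.3  multi-factor authentication":      5,
        "3.14.6 monitor communications traffic":   5,
        "3.3.1  create and retain audit logs":     3,
        "3.4.1  maintain baseline configurations": 1,
    }

    score = MAX_SCORE - sum(unimplemented.values())
    print(f"SPRS score: {score} of {MAX_SCORE}")
    # -> SPRS score: 96 of 110 (110 is full compliance; the report's
    #    red line is a sub-70 score)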

“The report’s findings show a clear and present danger to our national security,” said Eric Noonan. “We often hear about the dangers of supply chains that are susceptible to cyberattacks.”

The DIB is the Pentagon’s supply chain, and we see how woefully unprepared contractors are despite being in threat actors’ crosshairs, he continued.

“Our military secrets are not safe, and there is an urgent need to improve the state of cybersecurity for this group, which often does not meet even the most basic cybersecurity requirements,” warned Noonan.

More Report Findings

The survey data came from 300 U.S.-based DoD contractors, with accuracy tested at the 95% confidence level. The study was completed in July and August 2022, with CMMC 2.0 on the horizon.

Roughly 80% of DIB users failed to monitor their computer systems around the clock and lacked U.S.-based security monitoring services. Other deficiencies were evident in the following categories that will be required to achieve CMMC compliance:

  • 80% lack a vulnerability management solution
  • 79% lack a comprehensive multi-factor authentication (MFA) system
  • 73% lack an endpoint detection and response (EDR) solution
  • 70% have not deployed security information and event management (SIEM)

These security controls are legally required of the DIB, and because they are not being met, there is significant risk to the DoD and its ability to conduct armed defense. In addition to being largely non-compliant, 82% of contractors find it “moderately to extremely difficult to understand the governmental regulations on cybersecurity.”

Confusion Rampant Among Contractors

Some defense contractors across the DIB have focused on cybersecurity only to be stalled by obstacles, according to the report.

When asked to rate DFARS reporting challenges on a scale from one to 10 (with 10 being extremely challenging), about 60% of all respondents rated “understanding requirements” a seven out of 10 or higher. Routine documentation and reporting also ranked high on the list of challenges.

The primary obstacles contractors listed are challenges in understanding the necessary steps to achieve compliance, the difficulty with implementing sustainable CMMC policies and procedures, and the overall cost involved.

Unfortunately, those results closely paralleled what CyberSheath expected, admitted Noonan. He noted that the research confirmed that even fundamental cybersecurity measures like multi-factor authentication had been largely ignored.

“This research, combined with the False Claims Act case against defense giant Aerojet Rocketdyne, shows that both large and small defense contractors are not meeting contractual obligations for cybersecurity and that the DoD has systemic risk throughout their supply chain,” Noonan said.

No Big Surprise

Noonan believes the DoD has long known that the defense industry is not addressing cybersecurity. News reporting of seemingly never-ending nation-state breaches of defense contractors, including large-scale incidents like the SolarWinds and False Claims Act cases, proves that point.

“I also believe the DoD has run out of patience after giving contractors years to address the problem. Only now is the DoD going to make cybersecurity a pillar of contract acquisition,” said Noonan.

He noted the planned new DoD principle would be “No cybersecurity, no contract.”

Noonan admitted that some of the struggles that contractors voiced about difficulties in understanding and meeting cyber requirements have merit.

“It is a fair point because some of the messaging from the government has been inconsistent. In reality, though, the requirements have not changed since about 2017,” he offered.

What’s Next

Perhaps the DoD will pursue a get-tougher policy with contractors. If contractors had complied with what the law required in 2017, the entire supply chain would be in a much better place today. Despite some communication challenges, the DoD has been incredibly consistent on what is required for defense contractor cybersecurity, Noonan added.

The current research now sits atop a mountain of evidence that proves federal contractors have a lot of work to do to improve cybersecurity. It is clear that work will not be done without enforcement from the federal government.

“Trust without verification failed, and now the DoD appears to be moving to enforce verification,” he said.

DoD Response

TechNewsWorld submitted written questions to the DoD about the supply chain criticism in the CyberSheath report. A spokesperson for CYBER/IT/DOD CIO for the Department of Defense replied, stating that it would take a few days to dig into the issues. We will update this story with any response we receive.


Update: Dec. 9, 2022 – 3:20 PM PT
DoD Spokesperson and U.S. Navy Commander Jessica McNulty provided this response to TechNewsWorld:

CyberSheath is a company that has been evaluated by the Cyber Accreditation Body (Cyber AB) and met the requirements to become a Registered Practitioner Organization, qualified to advise and assist Defense Industrial Base (DIB) companies with implementing CMMC. The Cyber AB is a 501(c)(3) that authorizes and accredits third-party companies conducting assessments of companies within the DIB, according to U.S. Navy Commander Jessica McNulty, a Department of Defense spokesperson.

McNulty confirmed that the DoD is aware of this report and its findings. The DoD has not taken any action to validate the findings, nor does the agency endorse this report, she said.

However, the report and its findings are generally not inconsistent with other prior reports (such as the DoD Inspector General’s Audit of Protection of DoD Controlled Unclassified Information on Contractor-Owned Networks and Systems, ref. DODIG-2019-105) or with the results of compliance assessments performed by the DoD, as allowed/required by DFARS clause 252.204-7020 (when applicable), she noted.

“Upholding adequate cybersecurity standards, such as those defined by the National Institute of Standards and Technology (NIST) and levied as contractual requirements through application of DFARS 252.204-7012, is of the utmost importance for protecting DoD’s controlled unclassified information. DoD has long recognized that a mechanism is needed to assess the degree to which contract performers comply with these standards, rather than taking it on faith that the standards are met,” McNulty told TechNewsWorld.

For this reason, the DoD’s Cybersecurity Maturity Model Certification (CMMC) program was initiated, and the DoD is working to codify its requirements in part 32 of the Code of Federal Regulations, she added.

“Once implemented, CMMC assessment requirements will be levied as pre-award requirements, where appropriate, to ensure that DoD contracts are awarded to companies that do, in fact, comply with underlying cybersecurity requirements,” McNulty concluded.

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open-source technologies. He is an esteemed reviewer of Linux distros and other open-source software. In addition, Jack extensively covers business technology and privacy issues, as well as developments in e-commerce and consumer electronics.

Article link: https://www.technewsworld.com/story/pentagon-supply-chain-fails-minimal-standards-for-us-national-security-177497.html
