healthcarereimagined

Envisioning healthcare for the 21st century


CISA, NSA, and ODNI Release Guidance for Customers on Securing the Software Supply Chain – HST

Posted by timmreardon on 12/01/2022
Posted in: Uncategorized.

The Securing Software Supply Chain Series is an output of the Enduring Security Framework (ESF), a public-private cross-sector working group led by NSA and CISA.

By Homeland Security Today

November 18, 2022

Today, CISA, the National Security Agency (NSA), and the Office of the Director of National Intelligence (ODNI) published the third of a three-part series on securing the software supply chain: Securing Software Supply Chain Series – Recommended Practices Guide for Customers. This publication follows the August 2022 release of guidance for developers and the October 2022 release of guidance for suppliers.

The guidance released today, along with the accompanying fact sheet, provides recommended practices for software customers to ensure the integrity and security of software during the procuring and deployment phases.

The Securing Software Supply Chain Series is an output of the Enduring Security Framework (ESF), a public-private cross-sector working group led by NSA and CISA. This series complements other U.S. government efforts underway to help the software ecosystem secure the supply chain, such as the software bill of materials (SBOM) community.

CISA encourages all organizations that participate in the software supply chain to review the guidance. See CISA’s Information and Communications Technology (ICT) Supply Chain Risk Management Task Force, ICT Supply Chain Resource Library, and National Risk Management Center (NRMC) webpages for additional guidance.

Article link: https://www.hstoday.us/federal-pages/dhs/cisa-nsa-and-odni-release-guidance-for-customers-on-securing-the-software-supply-chain/

Read more at CISA

Securing tomorrow today: Why Google now protects its internal communications from quantum threats – Google Cloud Blog

Posted by timmreardon on 11/25/2022
Posted in: Uncategorized.

November 18, 2022

Editor’s note: The ISE Crypto PQC working group comprises Google Senior Cryptography Engineers Stefan Kölbl, Rafael Misoczki, and Sophie Schmieg.


When you visit a website and the URL starts with HTTPS, you’re relying on a secure public key cryptographic protocol to shield the information you share with the site from casual eavesdroppers. Public key cryptography underpins most secure communication protocols, including those we use internally at Google as part of our mission to protect our assets and our users’ data against threats. Our own internal encryption-in-transit protocol, Application Layer Transport Security (ALTS), uses public key cryptography algorithms to ensure that Google’s internal infrastructure components talk to each other with the assurance that the communication is authenticated and encrypted. 

Widely-deployed and vetted public key cryptography algorithms (such as RSA and Elliptic Curve Cryptography) are efficient and secure against today’s adversaries. However, as Google Cloud CISO Phil Venables wrote in July, we expect large-scale quantum computers to completely break these algorithms in the future. The cryptographic community already has developed several alternatives to these algorithms, commonly referred to as post-quantum cryptography (PQC), that we expect will be able to resist quantum computer-driven attacks. We’re excited to announce that Google Cloud has already enabled one of the algorithms on our internal ALTS protocol today.

The PQC threat model

While current quantum computers do not have the capability to break widely-used cryptography schemes like RSA in practice, we still need to start planning our defense for two reasons:

  • An attacker might store encrypted data today, and decrypt it when they gain access to a quantum computer (also known as the store-now-decrypt-later attack).
  • Product lifetime might overlap with the arrival of quantum computers, and it will be difficult to update systems.

The first threat applies to encryption in transit, which uses quantum vulnerable asymmetric key agreements. The second threat applies to hardware devices with a long lifespan — for example, certain secure boot applications which rely on digital signatures. We focus on encryption in transit in this blog post, as ALTS traffic is often exposed to the public internet, and will discuss the implications for secure boot in our next blog post.

Why we chose NTRU-HRSS internally

With the PQC standards by the National Institute of Standards and Technology (NIST) still pending, rolling out quantum-resistant cryptography can currently only happen on an ephemeral basis, where the exchanged key material is used once and not needed afterward. Google’s internal encryption-in-transit protocol, ALTS, is an ideal candidate for such a rollout since we control all endpoints using this protocol and can switch to a different algorithm with relative ease if NIST adopts different standards. Controlling all the endpoints gives us confidence that we can defeat store-now-decrypt-later attacks without worrying about having to maintain a non-standard solution.

Deploying new cryptography is risky because it has not been field-tested. In fact, several of the candidates in the NIST process suffered devastating attacks that did not even require a quantum computer. We avoided a scenario where our attempt to secure our infrastructure against a theoretical computing architecture renders it defenseless against a laptop recovering private keys over a weekend by adding the post-quantum algorithm as an additional layer. This tactic helps ensure that the security properties of our currently-deployed vetted and tested cryptography are still in place.

Note that we do not need to address signature algorithms yet. An adversary who can forge a signature in the future will not affect past sessions of the protocol. For now, we only need to address “store now, decrypt later” attacks, as these can affect our data today. Since signature algorithm threats are not immediate, we were able to simplify the vetting process in two ways: 

  • We only had to add PQC for the key agreement parts of the protocol.
  • It allowed us to only change parts which rely on ephemeral keys. For authenticity we still rely on classic cryptography, which likely will only be affected when a large-scale quantum computer exists.

Among the more promising quantum-resistant choices, NIST has favored lattice-based algorithms, with NIST recently announcing the selection of Kyber to become the first NIST-approved post-quantum cryptography key encapsulation mechanism (KEM). Kyber has high performance (it has a more balanced latency cost when considering operations than alternative lattice-based counterparts), but still lacks some clarification from NIST about its Intellectual Property status (see the third round status report by NIST). 

From the same realm of lattice-based KEMs, there’s the NTRU-HRSS KEM algorithm. This is a direct descendant of the well-known, time-vetted NTRU scheme proposed back in 1996, and it is considered by many experts as one of the more conservative choices among the structured, efficient lattice-based schemes. Given its high performance and maturity, we have selected this scheme to protect our internal communication channels using the ALTS protocol.

The post-quantum cryptography migration brings unique challenges in scale, scope, and technical complexity which have not been attempted before in the industry, and therefore requires additional care. That’s why we are deploying NTRU-HRSS in ALTS using a hybrid approach. By hybrid we mean combining two schemes into a single mechanism in such a way that an adversary interested in breaking the mechanism needs to break both underlying schemes. Our choice for this setup was NTRU-HRSS and X25519, matching the insightful choice made in Google Chrome’s 2018 CECPQ2 experiment and allowing us to reuse BoringSSL’s CECPQ2 implementation.
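The hybrid idea above can be sketched in a few lines of Python. This is purely an illustrative sketch, not Google's implementation: the mock KEM functions below are hypothetical stand-ins for the real X25519 and NTRU-HRSS primitives, and concatenate-then-hash is one simple way to realize the "attacker must break both schemes" property.

```python
import hashlib
import os

# Illustrative stand-ins for the two key-agreement primitives. In a real
# deployment these would be X25519 (classical) and NTRU-HRSS (post-quantum).
def mock_kem_keygen():
    """Return a (public, secret) key pair. The 'public key' is derived
    deterministically from random secret bytes, which is enough to
    illustrate the hybrid combiner."""
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()
    return pk, sk

def mock_kem_encapsulate(pk):
    """Return (ciphertext, shared_secret) against a public key."""
    ct = os.urandom(32)
    ss = hashlib.sha256(pk + ct).digest()
    return ct, ss

def mock_kem_decapsulate(sk, ct):
    """Recover the shared secret from the ciphertext and secret key."""
    pk = hashlib.sha256(sk).digest()
    return hashlib.sha256(pk + ct).digest()

def hybrid_session_key(classical_ss, pq_ss):
    """Combine both shared secrets so that recovering the session key
    requires breaking BOTH underlying schemes."""
    return hashlib.sha256(classical_ss + pq_ss).digest()

# Handshake sketch: one side publishes two public keys, the peer
# encapsulates against both, and each side derives the same session key.
c_pk, c_sk = mock_kem_keygen()   # "classical" key pair
p_pk, p_sk = mock_kem_keygen()   # "post-quantum" key pair

c_ct, c_ss_client = mock_kem_encapsulate(c_pk)
p_ct, p_ss_client = mock_kem_encapsulate(p_pk)
client_key = hybrid_session_key(c_ss_client, p_ss_client)

server_key = hybrid_session_key(
    mock_kem_decapsulate(c_sk, c_ct),
    mock_kem_decapsulate(p_sk, p_ct),
)
assert client_key == server_key
```

Note that if either mock scheme were broken in isolation, the attacker would still be missing one of the two inputs to the final hash, which is the security argument for the hybrid layering described above.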

Protecting ALTS against quantum-capable adversaries is a huge step forward in Google’s mission to protect our assets and users’ data against current and future threats. We continue to actively participate in the Post-Quantum Cryptography standardization efforts: Googlers co-authored one of the signature schemes selected for standardization (SPHINCS+), and two proposals currently considered by NIST in the fourth round of their PQC KEM competition (BIKE and Classic McEliece). We may re-evaluate our algorithmic choices when Kyber’s IP status is clarified, and when these fourth round selected standards are published.


The ISE Crypto PQC working group would like to acknowledge the contributions of Vlad Vysotsky and Dexiang Wang, software engineers on the ALTS team, and Adam Langley, principal cryptography engineer on BoringSSL.

Article link: https://cloud.google.com/blog/products/identity-security/why-google-now-uses-post-quantum-cryptography-for-internal-comms/?

China just announced a new social credit law. Here’s what it means – MIT Tech Review

Posted by timmreardon on 11/22/2022
Posted in: Uncategorized.


The West has largely gotten China’s social credit system wrong. But draft legislation introduced in November offers a more accurate picture of the reality.

By Zeyi Yang

November 22, 2022

It’s easier to talk about what China’s social credit system isn’t than what it is. Ever since 2014, when China announced a six-year plan to build a system to reward actions that build trust in society and penalize the opposite, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there’s an opportunity to correct the record.

For most people outside China, the words “social credit system” conjure up an instant image: a Black Mirror–esque web of technologies that automatically score all Chinese citizens according to what they did right and wrong. But the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either.

Instead, the system that the central government has been slowly working on is a mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values—however vague that last goal in particular sounds. There’s no evidence yet that this system has been abused for widespread social control (though it remains possible that it could be wielded to restrict individual rights). 

While local governments have been much more ambitious with their innovative regulations, causing more controversies and public pushback, the countrywide social credit system will still take a long time to materialize. And China is now closer than ever to defining what that system will look like. On November 14, several top government agencies collectively released a draft law on the Establishment of the Social Credit System, the first attempt to systematically codify past experiments on social credit and, theoretically, guide future implementation. 

Yet the draft law still left observers with more questions than answers. 

“This draft doesn’t reflect a major sea change at all,” says Jeremy Daum, a senior fellow of the Yale Law School Paul Tsai China Center who has been tracking China’s social credit experiment for years. It’s not a meaningful shift in strategy or objective, he says. 

Rather, the law stays close to local rules that Chinese cities like Shanghai have released and enforced in recent years on things like data collection and punishment methods—just giving them a stamp of central approval. It also doesn’t answer lingering questions that scholars have about the limitations of local rules. “This is largely incorporating what has been out there, to the point where it doesn’t really add a whole lot of value,” Daum adds. 

So what is China’s current system actually like? Do people really have social credit scores? Is there any truth to the image of artificial-intelligence-powered social control that dominates Western imagination? 

First of all, what is “social credit”?

When the Chinese government talks about social credit, the term covers two different things: traditional financial creditworthiness and “social creditworthiness,” which draws data from a larger variety of sectors.

The former is a familiar concept in the West: it documents individuals’ or businesses’ financial history and predicts their ability to pay back future loans. Because the market economy in modern China is much younger, the country lacks a reliable system for looking up other people’s and companies’ financial records. Building such a system, aimed at helping banks and other market players make business decisions, is an essential and not very controversial mission. Most Chinese policy documents refer to this type of credit with a specific word: “征信” (zhengxin, which some scholars have translated as “credit reporting”).

The latter—“social creditworthiness”—is what raises more eyebrows. Basically, the Chinese government is saying there needs to be a higher level of trust in society, and to nurture that trust, the government is fighting corruption, telecom scams, tax evasion, false advertising, academic plagiarism, product counterfeiting, pollution … almost everything. And not only will individuals and companies be held accountable, but legal institutions and government agencies will as well.

This is where things start to get confusing. The government seems to believe that all these problems are loosely tied to a lack of trust, and that building trust requires a one-size-fits-all solution. So just as financial credit scoring helps assess a person’s creditworthiness, it thinks, some form of “social credit” can help people assess others’ trustworthiness in other respects. 

As a result, so-called “social” credit scoring is often lumped together with financial credit scoring in policy discussions, even though it’s a much younger field with little precedent in other societies. 

What makes it extra confusing is that in practice, local governments have sometimes mixed up these two. So you may see a regulation talking about how non-financial activities will hurt your financial credit, or vice versa. (In just one example, the province of Liaoning said in August that it’s exploring how to reward blood donation in the financial credit system.) 

But on a national level, the country seems to want to keep the two mostly separate, and in fact, the new draft law addresses them with two different sets of rules.

Has the government built a system that is actively regulating these two types of credit?

The short answer is no. Initially, back in 2014, the plan was to have a national system tracking all “social credit” ready by 2020. Now it’s almost 2023, and the long-anticipated legal framework for the system was just released in the November 2022 draft law. 

That said, the government has mostly figured out the financial part. The zhengxin system—first released to the public in 2006 and significantly updated in 2020—is essentially the Chinese equivalent of American credit bureaus’ scoring and is maintained by the country’s central bank. It records the financial history of 1.14 billion Chinese individuals (and gives them credit scores), as well as almost 100 million companies (though it doesn’t give them scores). 

On the social side, however, regulations have been patchy and vague. To date, the national government has built only a system focused on companies, not individuals, which aggregates data on corporate regulation compliance from different government agencies. Kendra Schaefer, head of tech policy research at the Beijing-based consultancy Trivium China, has described it in a report for the US government’s US-China Economic and Security Review Commission as “roughly equivalent to the IRS, FBI, EPA, USDA, FDA, HHS, HUD, Department of Energy, Department of Education, and every courthouse, police station, and major utility company in the US sharing regulatory records across a single platform.” The result is openly searchable by any Chinese citizen on a recently built website called Credit China.

But there is some data on people and other types of organizations there, too. The same website also serves as a central portal for over three dozen (sometimes very specific) databases, including lists of individuals who have defaulted on a court judgment, Chinese universities that are legitimate, companies that are approved to build robots, and hospitals found to have conducted insurance fraud. Nevertheless, the curation seems so random that it’s hard to see how people could use the portal as a consistent or comprehensive source of data.

How will a social credit system affect Chinese people’s everyday lives?

The idea is to be both a carrot and a stick. So an individual or company with a good credit record in all regulatory areas should receive preferential treatment when dealing with the government—like being put on a priority list for subsidies. At the same time, individuals or companies with bad credit records will be punished by having their information publicly displayed, and they will be banned from participating in government procurement bids, consuming luxury goods, and leaving the country. 

The government published a comprehensive list detailing the permissible punishment measures last year. Some measures are more controversial; for example, individuals who have failed to pay compensation decided by the court are restricted from traveling by plane or sending their children to costly private schools, on the grounds that these constitute luxury consumption. The new draft law upholds a commitment that this list will be updated regularly. 

So is there a centralized social credit score computed for every Chinese citizen?

No. Contrary to popular belief, there’s no central social credit score for individuals. And frankly, the Chinese central government has never talked about wanting one. 

So why do people, particularly in the West, think there is? 

Well, since the central government has given little guidance on how to build a social credit system that works in non-financial areas, even in the latest draft law, it has opened the door for cities and even small towns to experiment with their own solutions. 

As a result, many local governments are introducing pilot programs that seek to define what social credit regulation looks like, and some have become very contentious.

The best example is Rongcheng, a small city of only half a million people that has implemented probably the most famous social credit scoring system in the world. In 2013, the city started giving every resident a base personal credit score of 1,000 that can be influenced by their good and bad deeds. For example, under a 2016 rule that has since been overhauled, “spreading harmful information on WeChat, forums, and blogs” subtracted 50 points, while “winning a national-level sports or cultural competition” added 40 points. In one extreme case, a resident lost 950 points in the span of three weeks for repeatedly distributing letters online about a medical dispute.
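The scoring mechanics described above amount to a base score plus fixed per-deed adjustments, which can be sketched in a few lines of Python. This is purely illustrative: only the two point values come from the reported (since-overhauled) 2016 rule, and the deed labels and function names are hypothetical.

```python
# Illustrative model of Rongcheng-style deed scoring: every resident
# starts at 1,000 points, and listed deeds add or subtract fixed amounts.
BASE_SCORE = 1000

# Point values taken from the two examples in the 2016 rule; any deed
# not in the table leaves the score unchanged in this sketch.
DEED_POINTS = {
    "spreading harmful information online": -50,
    "winning a national-level competition": +40,
}

def credit_score(deeds):
    """Return a resident's score given a list of recorded deeds."""
    return BASE_SCORE + sum(DEED_POINTS.get(d, 0) for d in deeds)

print(credit_score([]))  # a resident with no recorded deeds: 1000
print(credit_score(["winning a national-level competition",
                    "spreading harmful information online"]))  # 990
```

The extreme case above (a 950-point loss in three weeks) would simply be the repeated application of one negative entry in such a table, which underlines how mechanical, rather than algorithmic, these pilot systems are.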

Such scoring systems have had very limited impact in China, since they have never been elevated to provincial or national levels. But when news of pilot programs like Rongcheng’s spread to the West, it understandably rang an alarm for activist groups and media outlets—some of which mistook it as applicable to the whole population. Prominent figures like George Soros and Mike Pence further amplified that false idea.

How do we know those pilot programs won’t become official rules for the whole country?

No one can be 100% sure of that, but it’s worth remembering that the Chinese central government has actually been pushing back on local governments’ rogue actions when it comes to social credit regulations. 

In December 2020, China’s state council published a policy guidance responding to reports that local governments were using the social credit system as justification for punishing even trivial actions like jaywalking, recycling incorrectly, and not wearing masks. The guidance asks local governments to punish only behaviors that are already illegal under China’s current legislative system and not expand beyond that. 

“When [many local governments] encountered issues that are hard to regulate through business regulations, they hoped to draw support from solutions involving credits,” said Lian Weiliang, an official at China’s top economic planning authority, at a press conference on December 25, 2020. “These measures are not only incompatible with the rule of law, but also incompatible with the need of building creditworthiness in the long run.” 

And the central government’s pushback seems to have worked. In Rongcheng’s case, the city updated its local regulation on social credit scores and allowed residents to opt out of the scoring program; it also removed some controversial criteria for score changes. 

Is there any advanced technology, like artificial intelligence, involved in the system?

For the most part, no. This is another common myth about China’s social credit system: people imagine that to keep track of over a billion people’s social behaviors, there must be a mighty central algorithm that can collect and process the data.

But that’s not true. Since there is no central system scoring everyone, there’s not even a need for that kind of powerful algorithm. Experts on China’s social credit system say that the entire infrastructure is surprisingly low-tech. While Chinese officials sometimes name-drop technologies like blockchain and artificial intelligence when talking about the system, they never talk in detail about how these technologies might be utilized. If you check out the Credit China website, it’s no more than a digitized library of separate databases. 

“There is no known instance in which automated data collection leads to the automated application of sanctions without the intervention of human regulators,” wrote Schaefer in the report. Sometimes the human intervention can be particularly primitive, like the “information gatherers” in Rongcheng, who walk around the village and write down fellow villagers’ good deeds by pen.

However, as the national system is being built, it does appear there’s the need for some technological element, mostly to pool data among government agencies. If Beijing wants to enable every government agency to make enforcement decisions based on records collected by other government agencies, that requires building a massive infrastructure for storing, exchanging, and processing the data.

To this end, the latest draft law talks about the need to use “diverse methods such as statistical methods, modeling, and field certification” to conduct credit assessments and combine data from different government agencies. “It gives only the vaguest hint that it’s a little more tech-y,” says Daum.

How are Chinese tech companies involved in this system?

Because the system is so low-tech, the involvement of Chinese tech companies has been peripheral. “Big tech companies and small tech companies … play very different roles, and they take very different strategies,” says Shazeda Ahmed, a postdoctoral researcher at Princeton University, who spent several years in China studying how tech companies are involved in the social credit system.

Smaller companies, contracted by city or provincial governments, largely built the system’s tech infrastructure, like databases and data centers. On the other hand, large tech companies, particularly social platforms, have helped the system spread its message. Alibaba, for instance, helps the courts deliver judgment decisions through the delivery addresses it collects via its massive e-commerce platform. And Douyin, the Chinese version of TikTok, partnered with a local court in China to publicly shame individuals who defaulted on court judgments. But these tech behemoths aren’t really involved in core functions, like contributing data or compiling credit appraisals.

“They saw it as almost like a civic responsibility or corporate social responsibility: if you broke the law in this way, we will take this data from the Supreme People’s Court, and we will punish you on our platform,” says Ahmed.

There are also Chinese companies, like Alibaba’s fintech arm Ant Group, that have built private financial credit scoring products. But the result, like Alibaba’s Sesame Credit, is more like a loyalty rewards program, according to several scholars. Since the Sesame Credit score is mostly calculated on the basis of users’ purchase history and lending activities on Alibaba’s own platforms, the score is not reliable enough to be used by external financial institutions and has very limited effect on individuals.

Given all this, should we still be concerned about the implications of building a social credit system in China?

Yes. Even if there isn’t a scary algorithm that scores every citizen, the social credit system can still be problematic.

The Chinese government did emphasize that all social-credit-related punishment has to adhere to existing laws, but laws themselves can be unjust in the first place. “Saying that the system is an extension of the law only means that it is no better or worse than the laws it enforces. As China turns its focus increasingly to people’s social and cultural lives, further regulating the content of entertainment, education, and speech, those rules will also become subject to credit enforcement,” Daum wrote in a 2021 article.

Moreover, “this was always about making people honest to the government, and not necessarily to each other,” says Ahmed. When moral issues like honesty are turned into legal issues, the state ends up having the sole authority in deciding who’s trustworthy. One tactic Chinese courts have used in holding “discredited individuals” accountable is encouraging their friends and family to report their assets in exchange for rewards. “Are you making society more trustworthy by ratting out your neighbor? Or are you building distrust in your very local community?” she asks.

But at the end of the day, the social credit system does not (yet) exemplify abuse of advanced technologies like artificial intelligence, and it’s important to evaluate it on the facts. The government is currently seeking public feedback on the November draft document for one month, though there’s no expected date on when it will pass and become law. It could still take years to see the final product of a nationwide social credit system.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2022/11/22/1063605/china-announced-a-new-social-credit-law-what-does-it-mean/amp/

DoD’s Lack of Agility is a National Security Risk – MITRE

Posted by timmreardon on 11/15/2022
Posted in: Uncategorized.

by Pete Modigliani | Nov 15, 2022 | Agile, Bureaucracy, Innovation

​Flexibility is the key to airpower. Speed and agility are the keys to national security.

Yet the DoD lacks the agility to be responsive to change in this dynamic national security environment.

Gallup interviewed nearly 10,000 Americans and Europeans regarding agility in their organization. Do they have the right tools, processes, and mindset to respond quickly to business needs?

Gallup found eight factors that drive organizational agility.

Meanwhile, DoD is the world’s biggest bureaucracy. As McKinsey noted in 2017:

“Volatility, uncertainty, complexity, and ambiguity are so pervasive they have spawned a military acronym: VUCA. Yet the inherently bureaucratic national-security institutions have failed to keep pace. The DoD is still largely governed by the Goldwater-Nichols Act of 1986, which focused more on procurement efficiency and unity of command than responsiveness to volatility, uncertainty, and the rest.”

However, in this dynamic national security environment, agility is critical to our success as a nation. Whether it’s responding to the war in Ukraine, a global pandemic, or China threatening Taiwan, the one constant is change. With a growing array of threats, operations, and technologies shaping the Digital Age, DoD must value agility.

Let’s assess DoD against the eight agility factors Gallup found:

1. Cooperation: MIXED.

We fight as a Joint Force, with a coalition of allies, yet much more work is needed. There is a lack of trust between DoD and Congress, the military Services focus more on their own priorities than on Joint needs, more can be done for international cooperation, and DoD’s relations with the defense industrial base and commercial tech companies can be improved.

2. Speed of Decision Making: FAIL.

As it can take months to staff a one-page memo through the Pentagon, DoD and Congress continue to value oversight over speed. There are far too many Hall Monitor organizations to ensure compliance with outmoded policies and statutes. It can take two years to formalize requirements via JCIDS, allocate a budget via PPBE, and approve program strategies.

3. Trial Tolerance: WEAK.

The DoD made great progress in implementing the Middle Tier of Acquisition to enable rapid prototyping and rapid fielding. Some organizations continue to prototype and experiment with potential solutions. Yet the bureaucracy continues to impose the legacy mindset that every effort must have a cost, schedule, and performance baseline and a high expected success rate, instead of experimenting and pivoting.

4. Empowerment: WEAK.

While OSD delegated decision authority for most major programs to the Services, which delegated some to PEOs, there’s a trend of key officials seeking to rescind delegations and elevate many decisions to the most senior DoD boards. Can someone other than an Under Secretary of Defense decide on a $10M effort? Can someone other than a Service chief approve requirements for a rapid prototype?

5. Technology Adoption: MIXED.

Our major systems have some breathtaking and game-changing technologies, yet DoD continues to lag China on adopting the leading technologies. As DoD is no longer the primary driver of global R&D, it must harness commercial solutions, which account for most of DoD’s critical technology priorities. Far too many technology research projects fall into the infamous Valley of Death, never to be integrated into an acquisition program and fielded.

6. Simplicity: EPIC FAIL.

DoD at every turn values complexity over simplicity, whether in the system designs we develop or the processes we use to acquire them. Even when Congress directs new streamlined, simple processes in statute, the bureaucracy finds a way to impose complexity on the system.

7. Knowledge Sharing: WEAK.

DoD invented the ARPANET, the precursor to the internet, over 50 years ago. Yet in this Digital Age, the DoD is in the stone age with public pleas to “fix our computers.” DoD needs to do so much more to share knowledge across the enterprise on operations, technologies, and processes. AiDA was developed to demonstrate integrated, digital knowledge solutions, which became the foundation of the AAF and Contracting Cone.

8. Innovation Focus: MIXED.

DoD regularly talks about innovation. DoD has an innovation board, an innovation unit, and an innovation ecosystem. Yet DoD struggles to rapidly harness defense and commercial technology innovations for military advantage. There have been many attempts to reform the bureaucracy, yet few innovative ways of doing business. As AI, autonomy, and quantum computing technologies emerge, DoD has mixed results on adapting its way of fighting to rethink operations and force structure.

It should come as no surprise that DoD rated poorly on most of the eight agility factors.

The new National Defense Strategy highlighted the Complex Escalation Dynamics with rapidly evolving domains and technologies. Furthermore, it laid out:

“Our current system is too slow and too focused on acquiring systems not designed to address the most critical challenges we now face. This orientation leaves little incentive to design open systems that can rapidly incorporate cutting-edge technologies, creating longer-term challenges with obsolescence, interoperability, and cost effectiveness. The Department will instead reward rapid experimentation, acquisition, and fielding. We will better align requirements, resourcing, and acquisition, and undertake a campaign of learning to identify the most promising concepts, incorporating emerging technologies in the commercial and military sectors for solving our key operational challenges.”

The new DoD Strategic Management Plan covers a wide array of goals, objectives, and measures. Future updates should consider integrating more of these agility factors to enable the department to be more responsive to national security changes.

Disclaimer: The opinions expressed here are those of the authors only and do not represent the positions of the MITRE Corporation or its sponsors.

Article link: https://aida.mitre.org/blog/2022/11/15/dods-lack-of-agility-is-a-national-security-risk/

Expert Analysis of Dangerous Artificial Intelligences in Government – Nextgov

Posted by timmreardon on 11/14/2022
Posted in: Uncategorized. Leave a comment

By JOHN BREEDEN II | NOVEMBER 14, 2022 01:11 PM ET

The “real risks” of AI come from a lack of governance and risk understanding, according to Navrina Singh, CEO of Credo AI and a member of the Department of Commerce’s National AI Advisory Committee.

Of all the emerging sciences and technologies that government and private industry are working on, artificial intelligence—mostly just called AI—is one of the most promising. And a big reason for that is because AI can be designed to work in almost any role or capacity. At the most basic level, AIs using robotic process automation can accurately route mail, packages and payroll checks to thousands of destinations. Meanwhile, advanced AIs in the military are learning how to control swarms of intelligent combat drones across sprawling modern battlefields. And AIs are also being tasked with just about everything in between those extremes.

The United States is the world leader in the field of AI development, with new applications and capabilities being developed almost every day. But that development does not come without some level of risk. All new technologies have some risks associated with them, and experts are already pointing out specific vulnerabilities that could affect a large number of AIs. But beyond short-term vulnerabilities, there are also longer-term concerns regarding AIs because they are becoming so ubiquitous.

While few people seriously think that an AI could take over the world, asking a flawed AI to manage a critical function could negatively affect everything from hiring decisions to healthcare treatments, law enforcement operations, financial planning or even battlefield strategies. As such, it’s important to study AIs and their development to ensure that they are free from bias and other flaws before giving them too much responsibility.

That is an area that the founder and CEO of Credo AI, Navrina Singh, knows well. She has been studying the development of AI—and both the potential advantages and dangers associated with it—for many years. She was recently asked to speak about those issues before the House Committee on Science, Space and Technology subcommittee on research and technology. And she was also recently appointed to the Department of Commerce’s National AI Advisory Committee to counsel President Joe Biden on AI topics.

Nextgov talked with Singh about the development of AI, and especially some of the dangers of deploying this emerging technology before it’s been fully vetted and tested. We also asked her about some smart strategies to make sure that we are always able to get the most out of AI technology, while mitigating some of the biggest risks.

Nextgov: Thank you for talking with us. Before we get started, can you tell us a little bit about your impressive background in technology and the field of AI?

Singh: In 2001, I immigrated to the United States from India to pursue an engineering degree at the University of Wisconsin-Madison. I spent nearly two decades as an engineer working on various technologies, including machine learning, augmented reality and mobile, for tech heavyweights like Qualcomm and Microsoft.

In 2011, I started exploring machine learning, which led me to build emerging businesses at Qualcomm. Following that, I led the commercialization of AI products at Microsoft, including conversational AI chatbots.

It was through building these enterprise-scale AI products that I realized there was a lack of AI oversight and accountability within the industry. This technology that is so pervasive across our daily lives is also the perfect breeding ground for bias if the right guardrails aren’t put in place. This is why I founded Credo AI, to help enterprises have the tools to build responsible AI at scale and ensure that this technology would work for humanity instead of against it.

Nextgov: Let’s get this question out of the way. There are a few very smart technology leaders who are warning that a rogue AI could eventually take over the world or ultimately destroy humanity. Assuming you don’t think that is a real danger right now, can you explain what some of the real risks of AI might be for humanity?

Singh: In my view, the real risks associated with AI stem from a lack of technology governance and a lack of risk understanding—whether risks arise unintentionally or are introduced deliberately by bad actors—and the threat these can pose to our society and economy. We see AI being used everywhere: in job hiring, in finance, in education, in healthcare, in our criminal justice system, and the list goes on.

AI is so prevalent in our society, and, even though its intentions may be good, ungoverned AI can present damaging consequences, especially for those in marginalized and minority communities. For example, AI can filter qualified candidates out of a hiring process because of their gender, deny a family a mortgage loan, make college admissions less inclusive, reject life-saving health procedures and even identify the wrong individual for a crime.

With the recent advancements in generative AI, where AI is generating something new rather than analyzing something that already exists, we are facing a new generation of AI risks that need to be deeply thought about and managed. From creating fake news stories to influencing the political decisions of citizens and leaders, malicious actors can use AI for deceitful purposes like identity theft and digital fraud, exponentially increasing already prevalent fairness issues. The increasing scale and impact of AI is raising the stakes for major ethical questions. Now more than ever, the governance of these AI systems is critical for us to ensure that AI works in service of humanity.

Nextgov: That is interesting. And I suppose the reason that anyone is even thinking about the dangers of AI is the fact that the science behind it is advancing so rapidly. Have there been any recent advances that are really helping to improve the capabilities of AI?

Singh: Absolutely! And at a pace that requires careful monitoring and assurance that governance and oversight of these systems can keep up with those advancements. There has been a lot of progress made in the past year in large language models, natural language processing, multi-model learning, bias mitigation in machine learning and more. 

One area that is gaining a lot of momentum and marketing attention is generative AI, an umbrella term for AI that creates text, videos, images and code, rather than just analyzing data. Generative AI is gaining traction in gaming, programming, advertising, retail and many other sectors—pushing AI further into our economy and society.

Nextgov: And you recently testified before the House of Representatives Committee on Science, Space and Technology subcommittee on research and technology about how to manage the risks of AI. What were the key factors that you asked Congress to consider in this area?

Singh: It was an honor to testify before Congress, and I commend the House committee for taking a very evidence-focused approach to the hearing. In addition, I also want to commend the National Institute of Standards and Technology for their work on the AI Risk Management Framework and Practice Guide, both of which have been developed with a significant amount of stakeholder engagement. These demonstrate a good first step in our journey to responsible AI governance, especially on informed policy and standards development.

There are three key factors that Congress should consider as they explore ways to manage AI risks.

First, responsible AI requires a full lifecycle approach. AI systems cannot be considered “responsible” based on one point-in-time snapshot, but instead must be continuously evaluated for responsibility, and transparently reported on, throughout the entire AI lifecycle, from design to development, to testing and validation, to production and use.

Secondly, context is critical for AI governance. We believe that achieving trustworthy AI depends on a shared understanding that AI is industry specific, application specific, data specific and context driven. There is no one-size-fits-all approach to “what good looks like” for most AI use cases. This requires a collaborative approach to assessments, and we advocate for context-based tests for AI systems with reporting requirements that are: specific, regular and transparent.

And finally, transparency reporting and system assessments are a good first step toward accomplishing trustworthy and responsible AI. The importance of transparency reporting and system assessments cannot be overstated as a critical foundation for AI governance in all organizations.

Reporting allows policymakers to start to evaluate different approaches, and potentially opens the door for benchmarking—reporting is the step that gets us to standards that can be enforced. We have seen firsthand how comprehensive and accurate assessments of the AI applications and the associated models/datasets, coupled with transparency and disclosure reporting, encourage responsible practices to be cultivated, engineered and managed throughout the AI development life cycle. Fundamental to this is access to compliant and comprehensive data for assessments.
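To make the idea of continuous, lifecycle-wide assessment and disclosure reporting concrete, here is a minimal Python sketch. It is purely illustrative: the class names, metric, threshold, and data are invented for this example and are not drawn from Credo AI's products, the NIST framework, or any standard. It records point-in-time assessments at different lifecycle stages and emits a simple transparency report.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AssessmentRecord:
    """One point-in-time evaluation of an AI system (hypothetical schema)."""
    stage: str          # e.g. design, development, validation, production
    metric: str         # what was measured
    value: float        # observed value
    threshold: float    # maximum acceptable value for this context
    assessed_on: str    # ISO date of the assessment

    @property
    def passed(self) -> bool:
        # A record passes when the observed value stays within the threshold.
        return self.value <= self.threshold

@dataclass
class TransparencyReport:
    system_name: str
    use_context: str                      # context-specific description of use
    records: list = field(default_factory=list)

    def add(self, record: AssessmentRecord) -> None:
        self.records.append(record)

    def to_json(self) -> str:
        # Serialize all assessments plus an overall pass/fail summary.
        payload = asdict(self)
        payload["all_passed"] = all(r.passed for r in self.records)
        return json.dumps(payload, indent=2)

# Two assessments of the same (invented) system at different lifecycle stages:
report = TransparencyReport("loan-screener", "consumer mortgage pre-screening")
report.add(AssessmentRecord("validation", "demographic_parity_gap", 0.04, 0.05, "2022-11-01"))
report.add(AssessmentRecord("production", "demographic_parity_gap", 0.09, 0.05, "2022-12-01"))
print(report.to_json())
```

The point of the sketch is the shape of the artifact: because the production-stage assessment exceeds its threshold while the validation-stage one did not, the report captures exactly the kind of drift that a one-time snapshot would miss.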

Nextgov: And do you have any advice for government policymakers to help the U.S. achieve those goals?

Singh: I encourage policymakers to approach AI governance with the mindset that responsible AI—RAI—is a core competitive differentiator, not just for companies, but for countries. Any government helping to set up RAI requirements on testing and metrics now will have a competitive advantage in creating trustworthy AI in the future. Building trustworthy AI is not just about “doing the right thing” and setting “values” that make people feel good. It is about building systems that work better—systems that do not have unintended harmful consequences.

Nextgov: And you were also recently appointed to the Department of Commerce’s National AI Advisory Committee to counsel President Biden on AI topics. Recently, the White House has advocated for the creation of an AI Bill of Rights. Can you explain how the bill of rights might work, and whether or not you think it’s an idea that could move us closer to responsible AI? 

Singh: I can only speak in my personal capacity and not as a representative of NAIAC. In my personal view, the AI Bill of Rights is an important step toward educating citizens about the protections they should have from organizations’ use or misuse of AI, and toward informing responsible AI governance for organizations. The principles in the AI Bill of Rights align with emerging AI regulations in the U.S., the E.U. and Canada, among other key jurisdictions. The evaluation and transparency efforts emphasized in the AI Bill of Rights blueprint provide the foundation for effective AI accountability.

We see consistent themes in our work at Credo AI to formally codify these principles into standard practices to empower organizations to build responsible AI that is safe, fair, compliant and auditable.

Nextgov: You mention your company, Credo AI, and how you are trying to build a truly responsible AI platform. What does that entail, and how can other companies or government agencies ensure that their AIs are also responsible?

Singh: Building trust in AI is a dynamic process. At Credo AI we believe that organizations can build that trust by weaving accountability and oversight into their AI lifecycle. This means that organizations take on the responsibility to align and commit to enterprise and business values, follow through, observe impact and repeat the process, with diverse voices providing input at every stage.

Nextgov: Thank you for your time. We know how busy you are these days. Do you have any parting advice for government agencies as they continue to take the lead in many areas of AI development?

Singh: As with anything, it is important for governments to lead by example. It will be crucial for government agencies that deploy AI to ensure these projects have a transparent process to explain what decisions were made by the AI developer and why.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys

Article link: https://www.nextgov.com/emerging-tech/2022/11/expert-analysis-dangerous-artificial-intelligences-government/379690/

The Role of Information in U.S. Concepts for Strategic Competition – RAND

Posted by timmreardon on 11/08/2022
Posted in: Uncategorized. Leave a comment

by Christopher Paul, Michael Schwille, Michael Vasseur, Elizabeth M. Bartels, Ryan Bauer

  • Related Topics: 
  • Cyber Warfare, 
  • Information Operations, 
  • Intelligence Analysis, 
  • Joint Operations, 
  • Military Strategy, 
  • Russia, 
  • Security Cooperation

Research Questions

  1. Where are there points of disagreement and consensus among experts on great-power competition and those responsible for protecting U.S. interests?
  2. What role does information play in competitors’ activities in the gray zone below the threshold of conflict, and how can the information environment support strategic responses to these activities?
  3. How can the United States better organize for competition and more effectively harness all elements of national power—diplomatic, information, military, and economic?

Strategic competition is a long game between those with a vested interest in preserving the international order of rules and norms dating back to the post–World War II era and revisionist powers seeking to disrupt or reshape this order. The gray zone occupies a position with blurred boundaries on the spectrum from cooperation to competition and then conflict. Gray zone activities provide a strategic advantage for one competitor while complicating the response calculus of another. This is because competition in the gray zone is characterized by incrementalism, deception, and ambiguity, all of which make it difficult to decipher what is occurring, who is responsible, and how an action supports broader or longer-term interests. Competitors gain an advantage when they can harness all elements of national power—diplomatic, information, military, and economic—but success hinges on the effective use of the information environment, in particular.

There is emerging consensus that the United States needs to reject the traditional notion that peace and war are dichotomous states. Competition today occurs in the space between. To mount an effective response to adversary activities in the gray zone, it is important to understand how adversaries leverage information, the ends that gray zone activities serve, and the capabilities and authorities needed to respond to them. This report offers a detailed enumeration of gray zone activities that support competition, a synthesis of expert consensus on challenges to gray zone competition, and a dynamic menu of solutions to enhance the U.S. competitive position in the gray zone and beyond.

Key Findings

There is no broadly shared understanding of competition among scholars and defense practitioners

  • The lexicon for discussing competition is vast and varied.
  • Operations in the information environment are central to gaining competitive advantages, but they are not always a priority when U.S. leaders consider how to compete militarily.
  • Competition requires a whole-of-government approach and solutions, but there are coordination gaps across the U.S. government and between the U.S. government and U.S. military.

There is consensus on some essential concepts and challenges related to competition

  • Strategic competition is fundamentally a long game between revisionist powers and those that want to preserve the status quo of the current international order. Rather than engaging in isolated contests, competitors undertake activities to gain an advantage in pursuit of one of these long-term goals.
  • Great-power competition blurs the line between peace and war and occurs on a spectrum that runs from cooperation through competition and to conflict of varying intensities. The blurring of these thresholds can complicate decisions about how to respond appropriately to a competitor’s actions.
  • Ambiguity and uncertainty are enablers of gray zone competition. It is difficult to mount a response when it is unclear what action has occurred and who is responsible.
  • Great-power competition uses all elements of national power: diplomatic, information, military, and economic. When competing in the gray zone, states often respond to an activity related to one element of national power by harnessing an entirely different element of national power, as when military incursions are met with economic sanctions.

Recommendations

  • Ensure that the appropriate authorities and permissions are in place for the United States to maintain advantages in great-power competition and to compete effectively with adversaries in the gray zone. A whole-of-government approach to competition will improve coordination and progress toward U.S. goals.
  • Adopt a campaigning mindset by viewing adversary activities and U.S. response options as part of a competitive long game rather than discrete events. To better support this long-term vision and protect mutual interests, strengthen relationships with partners and allies and enlist their capabilities.
  • Fight ambiguity with transparency. Adversaries thrive in the gray zone when it is difficult to decipher their activities or assign attribution. “Naming and shaming” is one way to disrupt this kind of incremental aggression.
  • Be proactive rather than reactive, maintain a robust forward presence, and increase the risk tolerance of U.S. political and military leaders. Great-power competition has historically benefited revisionist states by putting the United States in a reactive position.
  • Take a multipronged approach to managing competitors by harnessing all elements of national power in mounting a response (diplomatic, information, military, and economic), increase adversaries’ costs to compete by overextending their capabilities and limiting their response options, and empower civil society institutions in partner countries to reject adversaries’ information campaigns before they can have their intended effect.

Table of Contents

  • Chapter One: Introduction and Background
  • Chapter Two: Differing Views and Consensus on Competition
  • Chapter Three: Enumeration and Categorization of Gray Zone Activities
  • Chapter Four: An Overview of Challenges
  • Chapter Five: Possible Solutions
  • Chapter Six: Implications for DoD and Future Research Directions

Article link: https://www.rand.org/pubs/research_reports/RRA1256-1.html?

Planning Doesn’t Have to Be the Enemy of Agile – HBR

Posted by timmreardon on 11/08/2022
Posted in: Uncategorized. Leave a comment
  • Alessandro Di Fiore September 13, 2018

Summary.

Planning was one of the cornerstones of management, but it has now fallen out of fashion. It seems rigid, bureaucratic, and ill-suited to a volatile, unpredictable world. However, organizations still need some form of planning. And so, universally valuable but desperately unfashionable, planning waits like a spinster in a Jane Austen novel for someone to recognize her worth. The answer is agile planning, a process that can coordinate and align with today’s agile-based teams. Agile planning also helps to resolve the tension between traditional planning’s focus on hard numbers and the need for “soft data,” or human judgment.

Planning has long been one of the cornerstones of management. Early in the twentieth century Henri Fayol identified the job of managers as to plan, organize, command, coordinate, and control. The capacity and willingness of managers to plan developed throughout the century. Management by Objectives (MBO) became the height of corporate fashion in the late 1950s. The world appeared predictable. The future could be planned. It seemed sensible, therefore, for executives to identify their objectives. They could then focus on managing in such a way that these objectives were achieved.

This was the capitalist equivalent of the Communist system’s five-year plans. In fact, one management theorist of the 1960s suggested that the best managed organizations in the world were the Standard Oil Company of New Jersey, the Roman Catholic Church and the Communist Party. The belief was that if the future was mapped out, it would happen.

Later, MBO evolved into strategic planning. Corporations developed large corporate units dedicated to it. They were deliberately detached from the day-to-day realities of the business and emphasized formal procedures around numbers. Henry Mintzberg defined strategic planning as “a formalized system for codifying, elaborating and operationalizing the strategies which companies already have.” The fundamental belief was still that the future could largely be predicted.

Now, strategic planning has fallen out of favor. In the face of relentless technological change, disruptive forces in industry after industry, global competition, and so on, planning seems like pointless wishful thinking.

And yet, planning is clearly essential for any company of any size. Look around your own organization. The fact that you have a place to work which is equipped for the job, and you and your colleagues are working on a particular project at a particular time and place, requires some sort of planning. The reality is that plans have to be made about the use of a company’s resources all of the time. Some are short-term, others stretch into an imagined future.

Universally valuable, but desperately unfashionable, planning waits like a spinster in a Jane Austen novel for someone to recognize her worth.

But executives are wary of planning because it feels rigid, slow, and bureaucratic. The Fayol legacy lingers. A 2016 HBR Analytics survey of 385 managers revealed that most executives were frustrated with planning because they believed that speed was important and that plans frequently changed anyway. Why engage in a slow, painful planning exercise when you’re not even going to follow the plan?

The frustrations with current planning practices intersect with another fundamental managerial trend: organizational agility. Reorganizing around small self-managing teams — enhanced by agility methods like Scrum and LeSS — is emerging as the route to the organizational agility required to compete in the fast-changing business reality. One of the key principles underpinning team-based agility is that teams autonomously decide their priorities and where to allocate their own resources.

The logic of centralized long-term strategic planning (done once a year at a fixed time) is the antithesis of an organization redesigned around teams that define their own priorities and resource allocation on a weekly basis.

But if planning and agility are both necessary, organizations have to make them work together. They have to create a Venn diagram with planning on one side, agility on the other, and a practical, workable sweet spot in the middle. This is why the quest to rethink strategic planning has never been more urgent and critical. Planning, twenty-first-century style, should be reconceived as agile planning.

Agile planning has a number of characteristics:

  • frameworks and tools able to deal with a future that will be different;
  • the ability to cope with more frequent and dynamic changes;
  • the need for quality time to be invested for a true strategic conversation rather than simply being a numbers game;
  • resources and funds available in a flexible way for emerging opportunities.

The intersection of planning with organizational agility generates two other paramount requirements:

A process able to coordinate and align with agile teams

Agile organizations face the challenge of managing the local autonomy of squads (bottom-up input) consistently with a bigger picture represented by the tribe’s goals and by cross-tribe interdependencies and the strategic priorities of the organization (top-down view). Governing this tension requires new processes and routines for planning and coordination.

Consider the Dutch financial services firm ING Bank. It restructured its operations in the Netherlands by reorganizing 3,500 employees into agile squads. These are autonomous multidisciplinary teams (up to nine people per team) able to define their work and make business decisions quickly and flexibly. Squads are organized into a Tribe (of no more than 150 people), a collection of squads working on related areas.

ING Bank revisited its processes and introduced routine meetings and formats to create alignment between and within tribes. Each tribe develops a QBR (Quarterly Business Review), a six-page document outlining tribe-level priorities, objectives and key results. This is then discussed in a large alignment meeting (labelled the QBR Marketplace) attended by tribe leads and other relevant leaders. At this meeting one fundamental question is addressed: when we add up everything, does this contribute to our company’s strategic goals?

The alignment within a tribe happens at what is called a Portfolio Marketplace event: representatives of each of the squads which make up the tribe come together to agree on how the set goals are going to be achieved and to address opportunities for synergies.

The ING Bank example shows how the planning process is still necessary and essential to an agile company although in a different fashion with different processes, mechanisms and routines.

As more and more companies transform into agile organizations, agile planning will likely become the new normal replacing the traditional centralized planning approach.

A process that makes use of both limitless hard data and human judgment

Planners have traditionally been obsessed with gathering hard data on their industry, markets, competitors. Soft data — networks of contacts, talking with customers, suppliers and employees, using intuition and using the grapevine — have all but been ignored.

From the 1960s onwards, planning was built around analysis.  Now, thanks to Big Data, the ability to generate data is pretty well limitless.  This does not necessarily allow us to create better plans for the future.

Soft data is also vital. “While hard data may inform the intellect, it is largely soft data that generate wisdom. They may be difficult to ‘analyze’, but they are indispensable for synthesis — the key to strategy making,” says Henry Mintzberg.

Companies need first to imagine possibilities and second, pick the one for which the most compelling argument can be made.  In deciding which is backed by the most compelling argument, they should indeed take into account all data that can be crunched. But in addition, they should use qualitative judgment.

In an agile organization, teams use design thinking and other exploratory techniques (plus data) to make rapid decisions and change course on a weekly basis. Decision making is done by a team of people, offsetting the potential biases of a single person deciding based on her individual judgment. To some extent, an agile team-based organization makes it possible to leverage qualitative data and judgment — combined today with nearly limitless hard data — for better decisions.

Relying solely on hard data has unquestionably killed many potential great businesses. Take Nespresso, the coffee pod pioneer developed by Nestle.  Nespresso took off when it stopped targeting offices and started marketing itself to households. There was little data on how households would respond to the concept and whatever information was available suggested a perceived consumer value of just 25 Swiss centimes versus a company-wide threshold requirement of 40 centimes. The Nespresso team had to interpret the data skillfully to present a better case to top management. Because it believed strongly in the idea, it forced the company to take a bigger-than-usual risk. If Nestle had been guided solely by quantitative market research the concept would never have gotten off the ground.

The traditional planning approach needs to be revisited to better serve the purposes of the agile enterprise of the twenty-first century. Agile planning is the future of planning. This new approach will require two fundamental elements. First, replacing the traditional obsessions on hard data and playing the numbers-game with a more balanced co-existence of hard and soft data where judgment also plays an important role. Second, introducing new mechanisms and routines to ensure alignment between the hundreds of self-organizing autonomous local teams and the overarching goals and directions of the company.

  • Alessandro Di Fiore is the founder and CEO of the European Centre for Strategic Innovation (ECSI) and ECSI Consulting. He is based in Boston and Milan. He can be reached at adifiore@ecsi-consulting.com. Follow him on twitter @alexdifiore.

Article link: https://hbr.org/2018/09/planning-doesnt-have-to-be-the-enemy-of-agile?

The White House’s ‘AI Bill of Rights’ Outlines Five Principles to Make Artificial Intelligence Safer, More Transparent and Less Discriminatory – Nextgov

Posted by timmreardon on 11/02/2022
Posted in: Uncategorized. Leave a comment

By CHRISTOPHER DANCY, The Conversation | OCTOBER 31, 2022

Will it have any significant impact?

Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the U.S. Tech companies have largely been left to regulate themselves in this arena, potentially leading to decisions and situations that have garnered criticism.

Google fired an employee who publicly raised concerns over how a certain type of AI can contribute to environmental and social problems. Other AI companies have developed products that are used by organizations like the Los Angeles Police Department where they have been shown to bolster existing racially biased policies.

There are some government recommendations and guidance regarding AI use. But in early October 2022, the White House Office of Science and Technology Policy added to federal guidance in a big way by releasing the Blueprint for an AI Bill of Rights.

The Office of Science and Technology Policy says that the protections outlined in the document should be applied to all automated systems. The blueprint spells out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can act as a guide to help prevent AI systems from limiting the rights of U.S. residents.

As a computer scientist who studies the ways people interact with AI systems – and in particular how anti-Blackness mediates those interactions – I find this guide a step in the right direction, even though it has some holes and is not enforceable.

IMPROVING SYSTEMS FOR ALL

The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk of AI furthering discrimination.

To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with direct input from the people and communities who will use and be affected by the systems. Exploited and marginalized communities are often left to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.

The second principle focuses on the known problem of algorithmic discrimination within AI systems. A well-known example of this problem is how mortgage approval algorithms discriminate against minorities. The document asks for companies to develop AI systems that do not treat people differently based on their race, sex or other protected class status. It suggests companies employ tools such as equity assessments that can help assess how an AI system may impact members of exploited and marginalized communities.
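
The blueprint does not prescribe a specific test, but one simple form an equity assessment can take is a disparate-impact check: compare outcome rates across groups. Below is a minimal sketch; the data is invented, and the four-fifths flagging threshold is borrowed from U.S. employment guidance rather than from the blueprint itself.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, protected, reference):
    """Ratio of protected-group approval rate to reference-group rate.
    Ratios below ~0.8 are commonly flagged (the 'four-fifths rule')."""
    return rates[protected] / rates[reference]

# Hypothetical mortgage decisions: (applicant group, approved?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                              # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates, "B", "A"))  # 0.333... -> well below 0.8, flagged
```

Real equity assessments go much further (calibration, error-rate balance, intersectional groups), but even a check this simple can surface the kind of disparity the principle targets.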

These first two principles address big issues of bias and fairness found in AI development and use.

PRIVACY, TRANSPARENCY AND CONTROL

The final three principles outline ways to give people more control when interacting with AI systems.

The third principle is on data privacy. It seeks to ensure that people have more say about how their data is used and are protected from abusive data practices. This section aims to address situations where, for example, companies use deceptive design to manipulate users into giving away their data. The blueprint calls for practices like not taking a person’s data unless they consent to it and asking in a way that is understandable to that person.

The next principle focuses on “notice and explanation.” It highlights the importance of transparency – people should know how an AI system is being used as well as the ways in which an AI contributes to outcomes that might affect them. Take, for example, the New York City Administration for Children’s Services. Research has shown that the agency uses outsourced AI systems to predict child maltreatment, systems that most people don’t realize are being used, even when they are being investigated.

The AI Bill of Rights provides a guideline that people in New York in this example who are affected by the AI systems in use should be notified that an AI was involved and have access to an explanation of what the AI did. Research has shown that building transparency into AI systems can reduce the risk of errors or misuse.

The last principle of the AI Bill of Rights outlines a framework for human alternatives, consideration and feedback. The section specifies that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.

As an example of how these last two principles might work together, take the case of someone applying for a mortgage. They would be informed if an AI algorithm was used to consider their application and would have the option of opting out of that AI use in favor of an actual person.

SMART GUIDELINES, NO ENFORCEABILITY

The five principles laid out in the AI Bill of Rights address many of the issues scholars have raised over the design and use of AI. Nonetheless, this is a nonbinding document and not currently enforceable.

It may be too much to hope that industry and government agencies will put these ideas to use in the exact ways the White House urges. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will continue to push for self-regulation.

One other issue that I see within the AI Bill of Rights is that it fails to directly call out systems of oppression – like racism or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to worse care for Black patients. I have argued that anti-Black racism should be directly addressed when developing AI systems. While the AI Bill of Rights addresses ideas of bias and fairness, the lack of focus on systems of oppression is a notable hole and a known issue within AI development.

Despite these shortcomings, this blueprint could be a positive step toward better AI systems, and maybe the first step toward regulation. A document such as this one, even if not policy, can be a powerful reference for people advocating for changes in the way an organization develops and uses AI systems.

Article link: https://www.nextgov.com/ideas/2022/10/white-houses-ai-bill-rights-outlines-five-principles-make-artificial-intelligence-safer-more-transparent-and-less-discriminatory/379101/

Christopher Dancy, Associate Professor of Industrial & Manufacturing Engineering and Computer Science & Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The US military wants to understand the most important software on Earth – MIT Tech Review

Posted by timmreardon on 10/26/2022
Posted in: Uncategorized. Leave a comment

Open-source code runs on every computer on the planet—and keeps America’s critical infrastructure going. DARPA is worried about how well it can be trusted

  • Patrick Howell O’Neill

July 14, 2022

It’s not much of an exaggeration to say that the whole world is built on top of the Linux kernel—although most people have never heard of it.

It is one of the very first programs that load when most computers power up. It enables the hardware running the machine to interact with the software, governs its use of resources, and acts as the foundation of the operating system.

It is the core building block of nearly all cloud computing, virtually every supercomputer, the entire internet of things, billions of smartphones, and more.

But the kernel is also open source, meaning anyone can write, read, and use its code. And that’s got cybersecurity experts inside the US military seriously worried. Its open-source nature means the Linux kernel—along with a host of other pieces of critical open-source software—is exposed to hostile manipulation in ways that we still barely understand.

“People are realizing now: wait a minute, literally everything we do is underpinned by Linux,”  says Dave Aitel, a cybersecurity researcher and former NSA computer security scientist. “This is a core technology to our society. Not understanding kernel security means we can’t secure critical infrastructure.”

Now DARPA, the US military’s research arm, wants to understand the collision of code and community that makes these open-source projects work, in order to better understand the risks they face. The goal is to be able to effectively recognize malicious actors and prevent them from disrupting or corrupting crucially important open-source code before it’s too late.

DARPA’s  “SocialCyber” program is an 18-month-long, multimillion-dollar project that will combine sociology with recent technological advances in artificial intelligence to map, understand, and protect these massive open-source communities and the code they create. It’s different from most previous research because it combines automated analysis of both the code and the social dimensions of open-source software.

“The open-source ecosystem is one of the grandest enterprises in human history,” says Sergey Bratus, the DARPA program manager behind the project.

“It’s now grown from enthusiasts to a global endeavor forming the basis of global infrastructure, of the internet itself, of critical industries and mission-critical systems pretty much everywhere,” he says. “The systems that run our industry, power grids, shipping, transportation.”

Threats to open source

Much of modern civilization now depends on an ever-expanding corpus of open-source code because it saves money, attracts talent, and makes a lot of work easier.

But while the open-source movement has spawned a colossal ecosystem that we all depend on, we do not fully understand it, experts like Aitel argue. There are countless software projects, millions of lines of code, numerous mailing lists and forums, and an ocean of contributors whose identities and motivations are often obscure, making it hard to hold them accountable.

That can be dangerous. For example, hackers have quietly inserted malicious code into open-source projects numerous times in recent years. Back doors can long escape detection, and, in the worst case, entire projects have been handed over to bad actors who take advantage of the trust people place in open-source communities and code. Sometimes there are disruptions or even takeovers of the very social networks that these projects depend on. Tracking it all has been mostly—though not entirely—a manual effort, which means it does not match the astronomical size of the problem.

Bratus argues that we need machine learning to digest and comprehend the expanding universe of code—meaning useful tricks like automated vulnerability discovery—as well as tools to understand the community of people who write, fix, implement, and influence that code. 

The ultimate goal is to detect and counteract any malicious campaigns to submit flawed code, launch influence operations, sabotage development, or even take control of open-source projects.

To do this, the researchers will use tools such as sentiment analysis to analyze the social interactions within open-source communities such as the Linux kernel mailing list, which should help identify who is being positive or constructive and who is being negative and destructive. 
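
As a rough illustration of that kind of analysis (not DARPA's actual tooling), a toy lexicon-based scorer can aggregate per-author sentiment across a mailing-list thread; production systems would use trained models, and the word lists and thread below are purely illustrative.

```python
import re

# Toy sentiment lexicons -- illustrative only, not from the SocialCyber program.
POSITIVE = {"thanks", "great", "appreciate", "good", "helpful", "nice"}
NEGATIVE = {"broken", "stupid", "wrong", "terrible", "garbage", "nack"}

def score(message: str) -> int:
    """Naive lexicon score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", message.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_contributors(messages):
    """Aggregate per-author sentiment over a thread; strongly negative
    totals suggest interactions worth a closer (human) look."""
    totals = {}
    for author, text in messages:
        totals[author] = totals.get(author, 0) + score(text)
    return totals

thread = [
    ("alice", "Thanks, great patch, very helpful"),
    ("bob", "This is broken and the approach is wrong"),
    ("bob", "NACK, terrible idea"),
]
print(flag_contributors(thread))  # {'alice': 3, 'bob': -4}
```

The point is the aggregation pattern, not the scorer: swap in any model that maps a message to a score and the per-author rollup stays the same.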

The researchers want insight into what kinds of events and behavior can disrupt or hurt open-source communities, which members are trustworthy, and whether there are particular groups that justify extra vigilance. These answers are necessarily subjective. But right now there are few ways to find them at all.

Experts are worried that blind spots about the people who run open-source software make the whole edifice ripe for potential manipulation and attacks. For Bratus, the primary threat is the prospect of “untrustworthy code” running America’s critical infrastructure—a situation that could invite unwelcome surprises. 

Unanswered questions

Here’s how the SocialCyber program works. DARPA has contracted with multiple teams of what it calls “performers,” including small, boutique cybersecurity research shops with deep technical chops.

One such performer is New York–based Margin Research, which has put together a team of well-respected researchers for the task.

“There is a desperate need to treat open-source communities and projects with a higher level of care and respect,” said Sophia d’Antoine, the firm’s  founder. “A lot of existing infrastructure is very fragile because it depends on open source, which we assume will always be there because it’s always been there. This is walking back from the implicit trust we have in open-source code bases and software.”

Margin Research is focused on the Linux kernel in part because it’s so big and critical that succeeding here, at this scale, means you can make it anywhere else. The plan is to analyze both the code and the community in order to visualize and finally understand the whole ecosystem.

Margin’s work maps out who is working on what specific parts of open-source projects. For example, Huawei is currently the biggest contributor to the Linux kernel. Another contributor works for Positive Technologies, a Russian cybersecurity firm that—like Huawei—has been sanctioned by the US government, says Aitel. Margin has also mapped code written by NSA employees, many of whom participate in different open-source projects.

“This subject kills me,” says d’Antoine of the quest to better understand the open-source movement, “because, honestly, even the most simple things seem so novel to so many important people. The government is only just realizing that our critical infrastructure is running code that could be literally being written by sanctioned entities. Right now.”

This kind of research also aims to find underinvestment—that is, critical software maintained entirely by one or two volunteers. It’s more common than you might think—so common that one way software projects measure risk is the “bus factor”: Does the whole project fall apart if just one person gets hit by a bus?
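
Definitions of the bus factor vary; one common heuristic counts the smallest set of authors who together account for a given share of commits. A minimal sketch over a hypothetical commit log:

```python
from collections import Counter

def bus_factor(commit_authors, threshold=0.5):
    """Smallest number of authors who together account for at least
    `threshold` of all commits -- a rough proxy for how few people the
    project depends on. (One common heuristic; definitions vary.)"""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered = 0
    for n, (_, c) in enumerate(counts.most_common(), start=1):
        covered += c
        if covered / total >= threshold:
            return n
    return len(counts)

# Hypothetical log: one maintainer dominates.
log = ["alice"] * 60 + ["bob"] * 25 + ["carol"] * 15
print(bus_factor(log))                 # 1  (alice alone covers 60%)
print(bus_factor(log, threshold=0.8))  # 2  (alice + bob cover 85%)
```

A bus factor of 1 at the default threshold is exactly the single-volunteer situation the article describes.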

While the Linux kernel’s importance to the world’s computer systems may be the most pressing issue for SocialCyber, it will tackle other open-source projects too. Certain performers will focus on projects like Python, an open-source programming language used in a huge number of artificial-intelligence and machine-learning projects. 

The hope is that greater understanding will make it easier to prevent a future disaster, whether it’s caused by malicious activity or not. 

“Pretty much everywhere you look, you find open-source software,” says Bratus. “Even when you look at proprietary software, a recent study showed it’s actually 70% or more open source.”

“This is a critical infrastructure problem,” Aitel says. “We don’t have a grip on it. We need to get a grip on it. The potential impact is that malicious hackers will always have access to Linux machines. That includes your phone. It’s that simple.”

Article link: https://trib.al/U5Tu11y

What Defines a Successful Organization? – HBR

Posted by timmreardon on 10/19/2022
Posted in: Uncategorized. Leave a comment
  • Ram Charan

September 19, 2022

Summary. The characteristics that help an organization succeed have changed over the past century. While a highly structured, top-down management style used to be companies’ preferred approach to organization, the internet has made this structure (and the layers of hierarchy that developed over decades) irrelevant. In this article, the author discusses how a successful organization today moves from mass markets to markets of one, routinely replaces core competencies, shifts to team-based structures, and manages from the outside in, among other features.

Organizations succeed over time only when they adapt to the speed and character of external change. Every aspect of an organization — from how it operates and is structured to how it is led — must match the current yet ever-shifting context in which it exists.

As the world changes at a faster pace than ever before, companies must change more rapidly as well. Yet the practices, structures, and behaviors at many large companies are not designed for such responsiveness. A century ago companies implemented such approaches because control, consistency, and predictability were top concerns; core competencies were cherished foundations to build on; and leaders viewed the world from inside the organization looking out.

To thrive today, however, companies must be able to detect external changes from the outside in and have a built-in fluidity so they can continually adapt. They need to focus on individual customers, make their core competencies dynamic, and rely on teams instead of a hierarchy — all by using the power of data and algorithms.

As we look back on a century of Harvard Business Review, we should also take time to look ahead — and recognize what the next iteration of a successful organization looks like.

Where the Organization Has Been

Since its founding, HBR has sought answers to fundamental questions such as these: What’s the best way to organize a business? What is the right structure, and how should day-to-day decisions be made?

About a century ago Henry Ford had a world-changing answer to these questions. He built a business around top-down management and assembly line mass production. By standardizing his automobiles and the steps taken to manufacture them, he lowered the cost per unit across the entire end-to-end value chain. That innovation, combined with his decision to raise workers’ pay to $5 per day, was adopted by other companies. This change drove industrial activity — making it possible for millions more people to own and enjoy everything from cars to Coca-Cola — and raised living standards around the world. Even as Ford’s company grew, though, its structure remained the same: a hierarchy in which business decisions were made in the C-suite and functions reported up to the president and chief executive.

Alfred P. Sloan, Jr., CEO of Ford’s major competitor, General Motors (GM), decided to pursue a new strategy by offering different makes and models of cars for every budget in a range of styles and colors. GM used consumer research to segment the market, but it always kept the focus on markets large enough to ensure the manufacturing would be cost-effective. To manage multiple products and brands, Sloan realized that GM needed a different organizational structure too — one with divisions that each had a top leader responsible for profit and loss (P&L).

After World War II, large companies became even larger as they extended their sales and production and built networks of suppliers (what we now call “ecosystems”) in countries and even on continents other than their own. Many multinational corporations adopted a P&L organization structure to maintain control of their sprawling businesses.

In the 1950s, as industrial manufacturer General Electric (GE) prepared to adopt a P&L structure, it consulted with leading management thinker Peter Drucker, who pointed out that executives would need training to make the new system work. That led GE to create a 16-week course on how to be a general manager, birthing its now-famous training center in Crotonville, New York. Soon after, Harvard Business School created an Advanced Management Program course (which I taught for 30 years), and consulting firms created product lines around leadership training.

Companies continued to expand in size, breadth, and hierarchical levels, yet they also needed to coordinate across their existing structures. A host of companies — including TRW, Bechtel, Citibank, and Texas Instruments — began to use a matrix arrangement in which reporting relationships and accountability were shared across product, functional, and geographic structures.

In the 1990s, of course, the internet changed everything. Marc Andreessen co-created a browser that made the web useful for commercial purposes, coders began developing software and then algorithms to make decisions more quickly than humans can, and computer processing capacity became increasingly cheaper and more powerful.

Jeff Bezos saw the internet growing at 2,300% per annum, left his job at investment firm DE Shaw, and founded an online bookstore called Amazon, which has since morphed into not just the “everything store” but also the leading provider of web services to a host of other companies.

Where the Organization Is Now 

Bezos discovered early what every twenty-first-century leader should now know: We have entered an age of discontinuity in which breaks in the external world are deeper and more frequent, rendering prevailing organizational structures and practices ineffective, if not harmful. Successful organizations exploit those changes, as Amazon has since its inception, and take advantage of what’s new.

Current realities make it imperative that companies shift in several ways:

1. From mass market to markets of one.

Bezos recognized that an online retailer could offer far more choices than a local shop could, along with the convenience of delivery right to the customer’s home — all at a lower cost. This kind of affordable personalization and service was a new value proposition. What’s more, each transaction provided Amazon with data that its algorithmic engine could analyze to ensure that future offerings were better tailored to each consumer’s preferences, needs, and desires, or what I call “markets of one.” 

Businesses that use technology to predict and personalize what a vast number of individuals want at low incremental cost can scale up and increase cash gross margins. Companies that don’t make this shift will find it hard to compete.

2. From building on core competencies to routinely replacing them.

The conventional wisdom has been that companies should use their core competencies to sustain a competitive advantage. But in an age of discontinuity, this approach doesn’t work for long.

Leaders of every company today must ask these questions: Given changing external realities, is this core competence becoming less important or irrelevant? Do we still need to expend resources to support it, or are there new competencies we need to build and shift resources toward?

Right now, for example, if you don’t have a core competence in using algorithms, you will need to acquire or develop this capacity. Amazon eclipsed many retailers that were too slow to adapt, but now we see brick-and-mortar competitors such as Walmart building their data analytics and e-commerce capabilities.

Recruiting the right people and deploying them in ways that allow them to apply their expertise and energy are core competencies themselves and critical in the age of discontinuity. In an escalating war for talent, businesses that are the most skilled in acquiring and developing these competencies will have a distinct competitive advantage.

3. From hierarchical layers to a team-based structure.

Nearly every company needs to reduce the hierarchical layers that have accumulated in their organizations over time and channel more work to teams.

The benefits of doing so are manifold. When teams include people who are on the front lines, the information flow is both faster and more accurate; with this increased speed comes greater flexibility to respond to customer and market changes. The improved flow of information also creates transparency that removes a lot of organizational politics and encourages collaboration.

Fidelity Investments, the financial services firm, recently restructured its personal investing group in a team format. Each team has a clear mission and much autonomy in how to accomplish it. The roughly 5,000-person organization now has just three layers below the president and operates at lower cost and with shorter cycle times for innovation. The control function that managerial layers used to perform is now done through software that generates detailed metrics. Reports are produced 24/7 and highlight any red flags in the data.

4. From inside-out to outside-in management.

To keep a company competitive over the long term, leaders must know what is happening far beyond their own industry, geography, and existing customers.

Societal issues around sustainability, racial justice, and geopolitics affect many aspects of business, from strategy to the ability to hire the best talent. Today, for example, it is impossible to ignore the tricky issues raised by China’s intermixing of business and the Chinese Communist Party, which has a presence in every major Chinese company.

Business leaders need a wide lens and a routine for detecting early-warning signals of external changes. Some leaders set aside 10 minutes in every team meeting to discuss any new dynamics people are observing, sometimes prompted by a newspaper article or an outside event. When leaders repeatedly ask, “What’s new?” — as Jack Welch used to do when he was CEO of GE — they can help the organization develop an awareness of subtle shifts that might indicate where things are heading. The goal of such vigilance is to get the organization ready for change so it’s poised to drop what won’t work in the near future and jump on new opportunities.

The Path Ahead

Amazon is not alone in its adaptation to the age of discontinuity, but it continues to lead the charge, always reinvesting in its future.

Others must choose: Adapt or die. Some may be getting a late start in developing technologically driven personalization, dynamic core competencies, flatter and team-based hierarchies, and an outside-in focus. In many industries and geographies, however, there is still ample opportunity to build the structures and processes that take advantage of the new external realities. As companies start on that task, they should recognize that the best way to organize and manage a business is always changing — and that the answers to fundamental questions may be very different 100 years from now than they are today.

Article link: https://hbr.org/2022/09/what-defines-a-successful-organization?

  • Ram Charan advises the CEOs and boards of some of the world’s biggest corporations and serves on seven boards. He is the author or coauthor of 33 books, four of which are best sellers, including Talent, Strategy, Risk: How Investors and Boards Are Redefining TSR and Talent Wins: The New Playbook for Putting People First (both from Harvard Business Review Press).
