healthcarereimagined

Envisioning healthcare for the 21st century

Supporting Efforts to Reform Planning, Programming, Budgeting, and Execution (PPBE) – RAND

Posted by timmreardon on 06/05/2024
Posted in: Uncategorized.

The U.S. national security community faces a rise in global threats and a rapidly changing technological environment that offers both challenges and opportunities for the future fight. Adversaries and competitors are contesting the United States’ traditional edge in innovation, agility, global power projection, and ability to shape the strategic environment. To stay competitive, the United States must be able to engage with industry, harness technological advances, and field new capabilities with unaccustomed speed and flexibility—and to do so within ever-tightening budget constraints.

The U.S. Department of Defense’s (DoD’s) Planning, Programming, Budgeting, and Execution (PPBE) System was originally developed in the 1960s as a structured approach for planning long-term resource development, assessing program cost-effectiveness, and aligning resources to strategies. Yet changes to the strategic environment, the industrial base, and the nature of military capabilities have raised the question of whether existing U.S. defense budgeting processes remain well aligned with national security needs.

In response, Congress called for the establishment of the Commission on PPBE Reform. As part of its data collection efforts, the commission asked RAND researchers to conduct case studies of budgeting processes across nine comparative organizations: five international defense organizations and four U.S. federal government agencies. Congress also specifically requested two case studies of near-peer competitors, and the research team selected the other seven cases in close partnership with the commission. This site collects the findings from the RAND project in three volumes, plus an executive summary.

RAND’s Findings on PPBE Reform

  • Planning, Programming, Budgeting, and Execution in Comparative Organizations: Volume 1, Case Studies of China and Russia
  • Planning, Programming, Budgeting, and Execution in Comparative Organizations: Volume 2, Case Studies of Selected Allied and Partner Nations
  • Planning, Programming, Budgeting, and Execution in Comparative Organizations: Volume 3, Case Studies of Selected Non-DoD Federal Agencies
  • Planning, Programming, Budgeting, and Execution in Comparative Organizations: Volume 4, Executive Summary
  • Planning, Programming, Budgeting, and Execution in Comparative Organizations: Volume 5, Additional Case Studies of Selected Allied and Partner Nations
  • Planning, Programming, Budgeting, and Execution in Comparative Organizations: Volume 6, Additional Case Studies of Selected Non-DoD Federal Agencies
  • Planning, Programming, Budgeting, and Execution in Comparative Organizations: Volume 7, Executive Summary for Additional Case Studies

In Their Own Words

  • Reforming DoD’s Planning, Programming, Budgeting, and Execution Process for a Competitive Future (Feb 8, 2024): Congress, the Department of Defense, and other key stakeholders are working on once-in-a-generation changes to the planning, programming, budgeting, and execution (PPBE) process to foster greater speed, agility, and innovation. Watch guests, including former Secretary of Defense Chuck Hagel and PPBE Reform Commission chair Bob Hale, discuss PPBE reform in this video from a recent RAND event.

Article link: https://www.rand.org/nsrd/projects/PPBE-reform.html?

Feds beware: New studies demonstrate key AI shortcomings – Nextgov

Posted by timmreardon on 06/03/2024
Posted in: Uncategorized.

By JOHN BREEDEN II | MAY 14, 2024

Recent studies have started to show that there are serious downsides when it comes to such programs’ ability to produce secure code.

It’s no secret that artificial intelligence is almost everywhere these days. And while some groups are worried about potentially devastating consequences if the technology continues to advance too quickly, most government agencies are pretty comfortable adopting AI for more practical purposes, employing it in ways that can help advance agency missions.

And the federal government has plenty of guidelines in place for using AI. For example, the AI Accountability Framework for Federal Agencies provides guidance for agencies that are building, selecting, or implementing AI systems. According to GAO and the educational institutions that helped draft the framework, the most responsible uses of AI in government center on four complementary principles: governance, data, performance, and monitoring.

Writing computer code, or monitoring code written by humans to look for vulnerabilities, fits within that framework. And it’s also a core capability that most of the new generative AIs easily demonstrate. For example, when the most popular generative AI program, ChatGPT, upgraded to version 4.0, one of the first things that developer OpenAI did at the unveiling was to have the AI write the code to quickly generate a live webpage.

Given how quickly most generative AIs can code, it’s little wonder that, according to a recent survey by GitHub, more than 90% of developers are already using AI coding tools to help speed up their work. That means the underlying code for most applications and programs being created today is at least partially written by AI, including code that is written or used by government agencies. However, while the pace at which AI can generate code is impressive, recent studies have started to show that serious downsides come along with that speed, especially when it comes to security.

Trouble in AI coding paradise

The new generative AIs have only been successfully coding for, at most, a couple of years, depending on the model. So it’s little wonder that evaluations of their coding prowess are slow to catch up. But studies are being conducted, and the results don’t bode well for the future of AI coding, especially for mission-critical areas within government, at least without some serious improvements.

While AIs are generally able to quickly create apps and programs that work, many of those AI-created applications are also riddled with cybersecurity vulnerabilities that could cause huge problems if dropped into a live environment. For example, in a recent study conducted by the University of Quebec, researchers asked ChatGPT to generate 21 different programs and applications in a variety of programming languages. While every application the AI coded worked as intended, only five were secure from a cybersecurity standpoint. The rest had dangerous vulnerabilities that attackers could easily use to compromise anyone who deployed them.

And these were not minor security flaws either. They included almost every single vulnerability listed by the Open Web Application Security Project, and many others.
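As a hypothetical illustration (invented here, not taken from the study), the classic injection flaw on the OWASP list shows how generated code can work perfectly for normal input while remaining trivially exploitable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

def find_user_unsafe(username):
    # A pattern often seen in generated code: it works for normal input,
    # but string interpolation allows SQL injection (OWASP: Injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'").fetchall()

def find_user_safe(username):
    # Parameterized query: identical behavior for normal input, injection-proof.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()

payload = "' OR '1'='1"
# The unsafe version returns every row for this payload; the safe one returns none.
```

Both functions pass a quick “does it work?” check with a name like "alice", which is exactly why such flaws survive into deployed code.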

In an effort to find out why AI coding was so dangerous from a cybersecurity standpoint, researchers at the University of Maryland, UC Berkeley and Google decided to switch things up a bit and task generative AI not with writing code, but with examining already assembled programs and applications to look for vulnerabilities. That study used 11 AI models, which were each fed hundreds of examples of programs in multiple languages. Applications rife with known vulnerabilities were mixed in with other code examples which were certified as secure by human security experts.

The results of that study were really bad for the AIs. Not only did they fail to detect hidden vulnerabilities, with some AIs missing over 50% of them, but most also flagged secure code as vulnerable when it was not, leading to a high rate of false positives. Those dismal results even surprised the researchers, who decided to try to correct the problem by training the AIs in better vulnerability detection. They fed the generative AIs thousands of examples of both secure and insecure code, along with explanations wherever a vulnerability was introduced.

Surprisingly, that intense training did little to improve AI performance. Even when the researchers expanded the large language models the AIs used to look for vulnerable code, the final results were still unacceptably bad, both in terms of false positives and vulnerabilities slipping through undetected. That led the researchers to conclude that no matter how much they tweaked the models, the current generation of AI and “deep learning is still not ready for vulnerability detection.”

Why is AI so bad at secure coding?

All of the studies referenced here are relatively new, so there is not yet much explanation for why generative AI, which performs well at so many tasks, would be so bad at spotting vulnerabilities or writing secure code. The experts I talked with said the most likely reason is that generative AIs are trained on thousands or even millions of examples of human-written code drawn from open sources, code libraries, and other repositories, and much of that code is heavily flawed. Generative AI may simply be too poisoned by all the bad examples used in its training to be redeemed. Even when the researchers behind the University of Maryland and UC Berkeley study tried to correct the models with fresh data, their new examples were just a drop in the bucket, not nearly enough to improve performance.

One study, conducted by Secure Code Warrior, did try to address this question directly with an experiment that selectively fed generative AIs specific examples of both vulnerable and secure code, tasking them with identifying any security threats. In that study, the differences between the secure and vulnerable code examples presented to the AIs were very subtle, which helped researchers determine which factors were specifically tripping up the AIs when it came to vulnerability detection.

According to SCW, one of the biggest reasons generative AIs struggle with secure coding is a lack of contextual understanding about how the code in question fits into larger projects or the overall infrastructure, and all of the security issues that can stem directly from that. The study gives several examples to prove this point, in which a snippet of code is secure when used to trigger a standalone function but becomes vulnerable, through business logic flaws, improper permissions, or security misconfigurations, when integrated into a larger system or project. Since generative AIs don’t generally understand the context in which the code they are examining will be used, they often flag secure code as vulnerable, or vulnerable code as safe.
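The context problem can be illustrated with a hypothetical snippet (an invented example, not one from the SCW study): a path-building helper is harmless when called with fixed internal names, but becomes a path-traversal flaw once wired to user input, unless containment is enforced:

```python
import os

def report_path(base_dir, name):
    # Secure as a standalone helper called with fixed internal constants...
    return os.path.join(base_dir, name)

def report_path_checked(base_dir, name):
    # ...but if `name` ever comes from a user, containment must be enforced.
    base = os.path.realpath(base_dir)
    path = os.path.realpath(os.path.join(base, name))
    if not path.startswith(base + os.sep):
        raise ValueError("path escapes base directory")
    return path
```

Judged in isolation, the first function has nothing wrong with it; the vulnerability only exists at the integration point, which is exactly the context a model reviewing one snippet at a time cannot see.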

In a sense, because an AI does not know the context of how code will be used, it sometimes ends up guessing about its vulnerability status, since AIs almost never admit that they don’t know something. The other area that AIs struggled with in the SCW study was when a vulnerability came down to something small, like the order of various input parameters. Generative AIs may simply not be experienced enough to know how something small like the order of input parameters in the middle of a large snippet of code can lead to security problems.
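A concrete (again hypothetical) instance of the parameter-order problem: Python’s `hmac.new` takes the key first and the message second. Swapping them still runs and produces a plausible-looking digest, but keys the MAC on attacker-controlled input:

```python
import hmac
import hashlib

SECRET = b"server-side-secret"

def sign(message: bytes) -> str:
    # Correct: key first, message second.
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def sign_swapped(message: bytes) -> str:
    # Subtle bug: arguments reversed. The code runs, returns a
    # 64-character hex digest, and passes casual review, but the MAC
    # is now keyed on the attacker-controlled message.
    return hmac.new(message, SECRET, hashlib.sha256).hexdigest()
```

Nothing about the swapped version fails at runtime, which is precisely why a detector with no grasp of parameter semantics struggles to flag it.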

The study does not offer a solution for fixing an AI’s inability to spot insecure code, but it does say that generative AI could still have a role in coding, if paired tightly with experienced human developers who can keep a watchful eye on their AI companions. For now, without a good technical solution, that may be the best path forward for agencies that need to tap into the speed generative AI offers when coding but can’t accept the risks of unsupervised AI independently creating government applications and programs.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys

Article link: https://www.nextgov.com/artificial-intelligence/2024/05/feds-beware-new-studies-demonstrate-key-ai-shortcomings/396526/?

Army lifts curtains on planned $1B software development contract

Posted by timmreardon on 05/29/2024
Posted in: Uncategorized.


By ROSS WILKERS | MAY 28

The Army calls out specific modern practices it wants to incorporate and asks industry about others that could work here too.

The Army has started to develop a new software development support services contract with a touted ceiling value north of $1 billion over 10 years.

A new sources sought notice describes the New Modern Software Development IDIQ as a means to hire a group of contractors that can perform the work on rapidly-awarded task orders as they come.

At this juncture, the Army plans to choose up to 10 companies in total, including awards reserved for small businesses.

The Army also envisions holding an on-ramp process to bring more contractors into the fold, along with the potential of off-ramps to move away from those with high rates of unsuccessful task order proposals or no bids.

Customization is a key element of the requirements, whether that be in the development of a new product or the modification of a current offering. Awardees will also be responsible for enabling software-as-a-service hosting and security.

The Army is emphasizing modern software development practices that include DevSecOps, Agile, lean and continuous integration/continuous delivery.

At the same time, the Army’s list of questions in the request for information also asks about other areas of software development it should include in the contract.

A second key question worth highlighting is whether interested parties can provide examples or recommendations of how to use experience in coding challenges to establish employee qualifications. The Army sees that approach as a potential alternative to certifications and proposals, depending on the answers, of course.

A third significant question zeroes in on the two-phase evaluation approach the Army is mulling for this requirement.

For phase two, the Army is weighing technology challenges against demonstrations and wants to know what respondents think of each option.

Responses to the RFI are due June 10.

Article link: https://washingtontechnology.com/contracts/2024/05/army-lifts-curtains-planned-1b-software-development-contract/396919/?

Defense Innovation Inflection Point? – Forbes

Posted by timmreardon on 05/28/2024
Posted in: Uncategorized.

Mike BrownContributor

Michael Brown is a partner at the VC firm Shield Capital.

Almost a decade ago, in 2015, Secretary of Defense Ash Carter came to Stanford University to announce a bridge between Silicon Valley and the Pentagon: a new organization he termed the Defense Innovation Unit Experimental, or DIUx. Secretary Carter realized earlier than many that the technologies the military needed would come from commercial companies at the forefront of AI, autonomy, cyber, and commercial space, not from government labs or defense primes.

Why? The Defense Department now provides a much smaller proportion of global R&D than in the past (as other countries have invested), and commercial businesses now invest more R&D dollars than the government. Sixty years ago, the Defense Department represented 36% of global R&D, versus 3% today. The five largest tech companies spend more than 10x what the five largest defense primes spend on R&D.

Inauspicious Beginnings

Changing how the Department views and incorporates commercial technology has not been easy. DIU was initially conceived as a defense embassy to Silicon Valley. What became apparent was that Silicon Valley was not interested in an embassy but instead wanted opportunities to compete for defense contracts. In the fail-fast mode of Silicon Valley, Secretary Carter restarted the operation with a Silicon Valley entrepreneur and F-16 pilot, Raj Shah. Raj had the insight that DIU should be a front door to Pentagon projects which offered revenue contracts for companies that could solve military problems. His team pioneered the use of a fast-track acquisition authority and a competitive process that mirrored how most companies do sourcing.

However, there was a struggle getting Congress to approve and sustain a budget for this startup organization of about $30 million in 2017, and another hiccup when DIU’s reporting relationship changed from the Secretary to an Undersecretary whose primary focus was defense-developed technology—not commercial technology. I succeeded Raj as DIU director in 2018. Despite inconsistent support from defense leadership for budget and manpower, DIU gained approval for its own contracting capability and subsequently scaled to add 100 new vendors who, in turn, attracted 10-20x in venture capital for every $1 awarded in a DIU prototype contract. DIU also increased its transition rate to production contracts to 50% of prototype efforts begun, which in turn attracted 43 potential suppliers for every prototyping effort. Some of the companies now most associated with defense tech like Anduril, C3.ai, Rhombus Power, Shield AI and Vannevar Labs all got a start with DIU prototype contracts but have gone on to develop much more significant businesses serving the Defense Department.

Commercial Technology In The DoD Today

The Defense Department has again realized that accelerating the incorporation of disruptive technology is most effective when reporting to the Secretary of Defense. The Department is now applying commercial technology to its most important problems, specifically, what the military commands of the Indo-Pacific and Europe face in deterring China and Russia. Now with nearly $1 billion from Congress, DIU can field new capabilities in one to two years to support those military commands without waiting for multi-year budget alignment or traditional acquisition processes which on average field new capabilities in 17 years.

In addition, venture capital has begun to support defense tech in an unprecedented way. New firms like ours, Shield Capital, have formed to support national security applications while more established venture firms like Andreessen Horowitz, Lightspeed, and Bessemer Ventures have developed practices focused on defense within their firms. In aggregate, investment in defense tech is up 5x in the past six years and now represents $40 billion annually. With a larger amount of DoD’s budget focused on commercial tech, this is a win-win for the nation—new paths for businesses to ramp revenues faster while modernizing capabilities for the warfighter.

A New Age Bridge From The Pentagon To Silicon Valley

The early days of DIU began to attract the best and brightest of the Valley through contracts, but now with increased resources, there can be a much stronger flywheel effect when more companies receive larger production contracts and, in turn, the companies invest more in their defense product lines and production capacity. Expanding the defense industrial base is critical to any future war effort since the base has consolidated to such a degree that it would be severely constrained in wartime as production constraints in supplying Ukraine have highlighted. As more dollars flow to commercial companies, investors will back more entrepreneurs creating solutions for warfighters. A virtuous circle forms when the defense industrial base grows, is more competitive, offers more choice for the Department in terms of cost/performance and warfighters gain access to leading technology. For too long, our soldiers, sailors and airmen have had access to more modern technology in their civilian consumer lives than while in uniform.

Change is hard and it has taken dedicated civilians, investors, men and women in uniform as well as Congressional leaders years to realize the opportunity and momentum in leveraging commercial technology for defense applications. There is an opportunity at this inflection point to provide the nation and our allies a hedge that can complement the exquisite defense platforms today such as the F-35 and Columbia-class subs.

Today, DIU has the reporting relationship and widespread support to provide the best of what Silicon Valley can offer to equip our warfighters. In August 2023, Deputy Secretary of Defense Kathleen Hicks announced that DIU would be a focal point of the Replicator initiative with the aim of delivering thousands of autonomous, low-cost drones to the Indo-Pacific Command within 18-24 months. There appears to be progress 8 months in with an announcement this week that “the delivery of Replicator systems to the warfighter began earlier this month.”

Article link: https://www.forbes.com/sites/mikebrown/2024/05/25/defense-innovation-inflection-point/?

The world needs a global AI observatory – MIT Sloan

Posted by timmreardon on 05/24/2024
Posted in: Uncategorized.

Here’s why it matters

So little is known about what’s happening in artificial intelligence and what might lie ahead. An international body could help fix that, experts argue. 

Across the globe, there is growing awareness of the risks of unchecked artificial intelligence research and development. Governments are moving fast in an attempt to address this, using existing legal frameworks or introducing new standards and assurance mechanisms. Recently, the White House proposed an AI Bill of Rights.

But the great paradox of a field founded on data is that so little is known about what’s happening in AI, and what might lie ahead.

This is why we believe the time is right for the creation of a global AI observatory — a GAIO — to better identify risks, opportunities, and developments and to predict AI’s possible global effects.

The world already has a model in the Intergovernmental Panel on Climate Change. Established in 1988 by the United Nations with member countries from around the world, the IPCC provides governments with scientific information they can use to develop climate policies. A comparable body for AI would provide a reliable basis of data, models, and interpretation to guide policy and broader decision-making about AI.

At present, numerous bodies collect valuable AI-related metrics. Nation-states track developments within their borders; private enterprises gather relevant industry data; and organizations like the OECD’s AI Policy Observatory focus on national AI policies and trends. While these initiatives are a crucial beginning, much about AI remains opaque, often deliberately. It is impossible to regulate what governments don’t understand. A GAIO could fill this gap through four main areas of activity.

  1. Create a global, standardized incident-reporting database concentrating on critical interactions between AI systems and the real world. For example, in the domain of biorisk, where AI could aid in creating dangerous pathogens, a structured framework for documenting incidents related to such risks could help mitigate threats. A centralized database would record essential details about specific incidents involving AI applications and their consequences in diverse settings — examining factors such as the system’s purpose, use cases, and metadata about training and evaluation processes. Standardized incident reports could enable cross-border coordination.

2. Assemble a registry of crucial AI systems focused on AI applications with the largest social and economic impacts, as measured by the number of people affected, the person-hours of interaction, and the stakes of their effects, to track their potential consequences.

3. Bring together global knowledge about the impacts of AI on critical areas such as labor markets, education, media, and health care. Subgroups could orchestrate the gathering, interpretation, and forecasting of data. A GAIO would also include metrics for both positive and negative impacts of AI, such as the economic value created by AI products and the impact of AI-enabled social media on mental health and political polarization.

  4. Orchestrate global debate through an annual report on the state of AI that analyzes key issues, patterns that arise, and choices governments and international organizations need to consider. This would involve rolling out a program of predictions and scenarios focused primarily on technologies likely to go live in the succeeding two to three years. The program could build on existing efforts, such as the AI Index produced by Stanford University.

A focus on facts rather than prescriptions

A GAIO would also need to innovate. Crucially, it would use collective intelligence methods to bring together inputs from thousands of scientists and citizens, which is essential in tracking emergent capabilities in a fast-moving and complex field. In addition, a GAIO would introduce whistleblowing mechanisms similar to the U.S. government’s incentives for employees to report on harmful or illegal actions.

To succeed, a GAIO would need a comparable legitimacy to the IPCC. This can be achieved through its members including governments, scientific bodies, and universities, among others, and by ensuring a sharp focus on facts and analysis more than prescription, which would be left in the hands of governments.

Contributors to the work of a GAIO would be selected, as with the IPCC, on the basis of nominations by member organizations, to ensure depth of expertise, disciplinary diversity, and global representativeness. Their selection would also require maximum transparency, to minimize both real and perceived conflicts of interest.

The AI community and businesses using AI tend to be suspicious of government involvement, often viewing it as a purveyor of restrictions. But the age of self-governance is now over. We propose an organization that exists in part for governments, but with the primary work undertaken by scientists. All international initiatives related to AI would be welcomed.

In order to grow, a GAIO will need to convince key players from the U.S., China, the U.K., the European Union, and India, among others, that it will fill a vital gap. The fundamental case for its creation is that no country will benefit from out-of-control AI, just as no country benefits from out-of-control pathogens.

The greatest risk now is multiple unconnected efforts. Unmanaged artificial intelligence threatens important infrastructure and the information space we all need to think, act, and thrive. Before nation-states squeeze radically new technologies into old legal and policy boxes, the creation of a GAIO is the most feasible step.

This article is written by Sir Geoff Mulgan, a professor of collective intelligence, public policy, and social innovation at University College London; Thomas Malone, a professor of information management at MIT Sloan and the director of the MIT Center for Collective Intelligence; Divya Siddharth and Saffron Huang of the Collective Intelligence Project; Joshua Tan of the Metagovernance Project; and Lewis Hammond of the Cooperative AI Foundation.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/world-needs-a-global-ai-observatory-heres-why?

MITRE to Establish New AI Experimentation and Prototyping Capability for U.S. Government Agencies

Posted by timmreardon on 05/22/2024
Posted in: Uncategorized.

MAY 7, 2024

MITRE Federal AI Sandbox to be Powered by NVIDIA DGX SuperPOD 

McLean, Va., May 7, 2024 – MITRE is building a new capability intended to give its artificial intelligence (AI) researchers and developers access to a massive increase in computing power. The new capability, the MITRE Federal AI Sandbox, will support experimentation with next-generation AI-enabled applications for the federal government. The Federal AI Sandbox is expected to be operational by year’s end and will be powered by an NVIDIA DGX SuperPOD™, which enables accelerated infrastructure scale and performance for enterprise AI and machine learning work.

As U.S. government agencies seek to apply AI across their operations, few have adequate access to supercomputers and the deep expertise required to operate the technology and test potential applications on secure infrastructure. 

“The recent executive order on AI encourages federal agencies to reduce barriers for AI adoptions, but agencies often lack the computing environment necessary for experimentation and prototyping,” says Charles Clancy, MITRE, senior vice president and chief technology officer. “Our new Federal AI Sandbox will help level the playing field, making the high-quality compute power needed to train and test custom AI solutions available to any agency.” 

MITRE will apply the Federal AI Sandbox to its work for federal agencies in areas including national security, healthcare, transportation, and climate. Agencies can gain access to the benefits of the Federal AI Sandbox through existing contracts with any of the six federally funded research and development centers MITRE operates.

The sandbox offers computing power to train cutting-edge AI applications for government use, including large language models (LLMs) and other generative AI tools. It can also be used to train multimodal perception systems, which understand and process information from multiple types of data at once (such as images, audio, text, radar, and environmental or medical sensors), and reinforcement-learning decision aids, which learn by trial and error to help humans make better decisions.

“MITRE’s purchase of a DGX SuperPOD to assist the federal government in its development of AI initiatives will turbocharge the U.S. federal government’s efforts to leverage the power of AI,” says Anthony Robbins, vice president of public sector, NVIDIA. “AI has enormous potential to improve government services for citizens and solve big challenges, like transportation and cyber security.” 

The NVIDIA DGX SuperPOD powering the sandbox is capable of an exaFLOP of performance to train and deploy custom LLMs and other AI solutions at scale.

About MITRE

MITRE’s mission-driven teams are dedicated to solving problems for a safer world. Through our public-private partnerships and federally funded R&D centers, we work across government and in partnership with industry to tackle challenges to the safety, stability, and well-being of our nation.

Article link: https://www.mitre.org/news-insights/news-release/mitre-establish-new-ai-experimentation-and-prototyping-capability-us

U.S. Secretary of Commerce Gina Raimondo Releases Strategic Vision on AI Safety, Announces Plan for Global Cooperation Among AI Safety Institutes – NIST

Posted by timmreardon on 05/22/2024
Posted in: Uncategorized.

Raimondo announces plans for global network of AI Safety Institutes and future convening in the San Francisco area, where the U.S. AI Safety Institute recently established a presence.

May 21, 2024

Today, as the AI Seoul Summit begins, U.S. Secretary of Commerce Gina Raimondo released a strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI), describing the department’s approach to AI safety under President Biden’s leadership. At President Biden’s direction, the National Institute of Standards and Technology (NIST) within the Department of Commerce launched the AISI, building on NIST’s long-standing work on AI. In addition to releasing a strategic vision, Raimondo also shared the department’s plans to work with a global scientific network for AI safety through meaningful engagement with AI Safety Institutes and other government-backed scientific offices, and to convene the institutes later this year in the San Francisco area, where the AISI recently established a presence.

COMMERCE DEPARTMENT AI SAFETY INSTITUTE STRATEGIC VISION

The strategic vision released today, available here, outlines the steps that the AISI plans to take to advance the science of AI safety and facilitate safe and responsible AI innovation. At the direction of President Biden, NIST established the AISI and has since built an executive leadership team that brings together some of the brightest minds in academia, industry and government.

The strategic vision describes the AISI’s philosophy, mission and strategic goals.  Rooted in two core principles — first, that beneficial AI depends on AI safety; and second, that AI safety depends on science — the AISI aims to address key challenges, including a lack of standardized metrics for frontier AI, underdeveloped testing and validation methods, limited national and global coordination on AI safety issues, and more. 

The AISI will focus on three key goals: 

  1. Advance the science of AI safety;
  2. Articulate, demonstrate, and disseminate the practices of AI safety; and
  3. Support institutions, communities, and coordination around AI safety.  

Read the full Department of Commerce release.

Article link: https://www.linkedin.com/posts/nist_ai-artificialintelligence-responsibleai-activity-7198709049062760448-6SMp?

DoD’s acquisition workforce is stretched thin – Federal News Network

Posted by timmreardon on 05/20/2024
Posted in: Uncategorized.

“If I worry about one workforce, it’s the contracting workforce,” said Doug Bush.


Anastasia Obis

May 16, 2024 7:03 am

While the Defense Department’s acquisition budgets have grown significantly in recent years, there are not enough contracting officers to manage their ever-increasing workload.

The Army, for example, has about 9,000 acquisition workers, but they have been stretched thin as their workload has doubled over the last couple of years.

Doug Bush, the Army’s acquisition chief, said the service is currently focused on giving its contracting officers better technology and tools to increase efficiency, but more people are needed to provide acquisition support to the Army.

“If I worry about one workforce, it’s the contracting workforce,” Bush said during a Senate Appropriations Subcommittee on Defense hearing on Wednesday.

“They doubled their workload, frankly. They did COVID and then they rolled straight into Ukraine. I think [we need] a little help in both realms. Efficiency investment and perhaps some more people would be warranted.”

Bush said the Army would not be able to beef up its acquisition workforce in the coming year at the current level of funding.

Undersecretary of Defense for Acquisition and Sustainment William LaPlante said that the department is in the midst of rebuilding some parts of its contracting workforce. 

Several factors contribute to the department’s acquisition worker shortage. The Defense Department’s acquisition workers are highly sought after by private companies and other agencies within the federal government, which creates a gap for the department. In addition, many contracting officers were deployed to active conflict zones in Iraq and Afghanistan, which led to burnout.

“I would say in pockets we still have work to do,” said LaPlante.

Nickolas Guertin, the assistant secretary of the Navy for research, development and acquisition, said he has long advocated for a better implementation of modular and open architectures in military equipment and systems, which increases competition and fosters innovation. It also increases the volume of contracts that need to be managed.

“Contract officers are a key component of our future. We need contract officers to grow,” said Guertin. “Honestly, the Navy grows great contract officers because people keep hiring them. That’s something we continually have to refresh.”

Guertin said while there is a plan in place to increase the number of its contracting officers, the service needs help from Congress.


The Air Force’s main area of need is software workforce expertise, said Andrew Hunter, the Air Force’s acquisition chief. But the service has been using its acquisition workforce development pilot for the last two decades to bring in acquisition talent.

“I personally believe it shouldn’t be permanent. But at a minimum, we need to extend it because that is a key way of how we keep our talent,” said Hunter.

A study from RAND found that the department’s acquisition workforce grew by 57,677 people, from 128,187 in 2006 to 185,864 in 2021. Almost all of the increase was in the civilian acquisition workforce.

The age distribution of the workforce also shifted, from 29% under age 40 in fiscal 2011 to 35% in 2021, indicating that the department has improved the generational profile of its acquisition workforce. And the majority of contracting officers meet or exceed the certification requirements for their positions.

Article link: https://federalnewsnetwork.com/defense-main/2024/05/dods-acquisition-workforce-is-stretched-thin/

Army changing the color of money used to modernize software – Federal News Network

Posted by timmreardon on 05/19/2024
Posted in: Uncategorized.

The Army will keep most software development efforts in ongoing development mode and not transition them to sustainment as part of its modernization efforts.

Jason Miller@jmillerWFED

May 14, 2024 11:58 

When it comes to software development, the Army is going to stop worrying about the color of money.

That’s because as part of its new approach to software modernization, the Army is rethinking what sustainment means.

Margaret Boatner, the deputy assistant secretary of the Army for strategy and acquisition reform, said one of the main tenets of the policy signed by Army Secretary Christine Wormuth in March is to reform several legacy processes that are keeping the service from adopting modern software development approaches.

“We are targeting a couple of really key processes like our test and evaluation processes, and importantly, our cybersecurity processes. We really are trying to modernize and streamline those as well as changing the way we think about sustainment because software is really never done. We really have to retrain ourselves to think about and to acknowledge the fact that software really needs to stay in development all the time,” Boatner said in an exclusive interview with Federal News Network. “Right now, our systems and our acquisition programs, once they’re done being developed, they go through a process that we call transition to sustainment, meaning they’ve been fully developed and are now going to live in our inventory for 10, 20, 30 years. We’re going to sustain them for a long period of time. When a system makes that transition, the financial management regulations dictate that they use a certain color of money, operations and maintenance dollars. With that color of money, we can really only do minor patches, fixes and bug updates. So that’s an example of a legacy process that, when you’re talking about a software system, really tied our arms behind our back. It really prevented us from doing true development over the long term with the software solutions.”

Boatner said under the new policy, software will no longer make the transition to sustainment. Instead, the program office will keep operating under research, development, test and evaluation (RDT&E) funding.

“It’s recognizing that a continuous integration/continuous delivery (CI/CD) model software is never done. That way, our program managers can plan to use the appropriate color of money, which in many cases might be RDT&E, which is the color money you need to do true development,” she said. “So, that will give our program managers a lot more flexibility to determine the appropriate color money based on what they want to do, such that our software systems can really continue to be developed over time.”

The Army has been on this software modernization path for several years, culminating in the March memo.

With lessons from the 11 software pathway programs, the testing of a new approach to a continuous authority to operate, and the broad adoption of the Adaptive Acquisition Framework, Boatner and Leo Garciga, the Army’s chief information officer, are clearing obstacles, modernizing policies and attempting to change the culture of how the Army buys, builds and manages software.

Army updating ATO policy

Garciga said by keeping programs under the RDT&E bucket, the Army is recognizing the other changes it needs to complete to make these efforts more successful.

“We need to relook at processes like interoperability. Historically, that was not a parallel process, but definitely a series process. How do we change the way we look at that to bring it into this model where we’re developing at speed and scale all the time?” he said. “I think we’re starting to see the beginnings of the second- and third-order effects of some of these decisions. The software directive really encapsulated some big rocks that need to move. We’re finding things in our processes that we’re going to have to quickly change to get to the end state we’re looking for.”

Since taking over the CIO role in July, Garciga has been on a mission to modernize IT policies that are standing in the way. The latest one is around a continuous ATO (C-ATO).

He said the new policy could be out later this summer.


“We’ve told folks to do DevSecOps and to bring agile into how they deliver software, so how do we accredit that? How do we certify that? What does that model look like? We’re hyper-focused on building out a framework that we can push out to the entire Army,” Garciga said. “Whether you’re at a program of record, or you’re sitting at an Army command, who has an enterprise capability, we will give some guidelines on how we do that, or at least an initial operational framework that says these are the basic steps you need to be certified to do DevSecOps, which really gets to the end state that we’re shooting for.”

He added the current approach to obtaining an ATO is too compliance focused and not risk based.

Pilot demonstrated what is possible

Garciga highlighted a recent example of the barriers to getting a C-ATO.

“We started looking at some initial programs with a smart team and we found some interesting things. There was some things that were holding us back like a program that was ready to do CI/CD and actually could do releases every day, but because of interoperability testing and the nature of how we were implementing that in the Army, it was causing them to only release two times a year, which is insane,” he said. “We very quickly got together and rewickered the entire approach for how we were going to do interoperability testing inside the Army. We’re hoping that leads to the department also taking a look at that as we look at the joint force and joint interoperability and maybe they follow our lead, so we can break down some of those barriers.”

Additionally, the Army undertook a pilot to test out this new C-ATO approach.

Garciga said the test case proved a program could receive at least an initial C-ATO in less than 90 days by bringing in red and purple teams to review the code.

“I’d say about three months ago, we actually slimmed down the administrative portion and focused on what were the things that would allow us to protect our data, protect access to a system and make a system survivable. We really condensed down the entire risk management framework (RMF) process to six critical controls,” he said. “On top of that, we added a red team and a purple team to actually do penetration testing in real time against that system as it was deployed in production. What that did is it took our entire time from no ATO to having at least an ATO with conditions down to about less than 90 days. That was really our first pilot to see if we can we actually do this, and what are our challenges in doing that.”

Garciga said one of the big challenges that emerged was the need to train employees to take a more threat-based approach to ATOs. Another challenge that emerged was the Army applied its on-premise ATO approach to the cloud, which Garciga said didn’t make a lot of sense.

“We put some new policy out to really focus on what it means to accredit cloud services and to make that process a lot easier. One of our pilots, as we looked at how do we speed up the process and get someone to a viable CI/CD pipeline, we found things that were really in the way like interoperability testing and how do we get that out of the way and streamline that process,” he said. “In our pilots, the one part that we did find very interesting was this transition of our security control assessors from folks that have historically looked at some very specific paperwork to actually now getting on a system and looking at code, looking at triggers that have happened inside some of our CI/CD tools and making very difficult threshold decisions based on risk and risk that an authorizing official would take to make those decisions. We’re still very much working on what our training plan would be around that piece. That’ll be a big portion of how we’re going to certify CI/CD work and DevSecOps pipelines in the Army moving forward.”


Jason Miller

Jason Miller is executive editor of Federal News Network and directs news coverage on the people, policy and programs of the federal government.  


Article link: https://federalnewsnetwork.com/army/2024/05/army-changing-the-color-of-money-used-to-modernize-software/

The Future of Medicine Is in Your Poop – NIST

Posted by timmreardon on 05/15/2024
Posted in: Uncategorized.

In the fall of 2023, NIST’s scientists in Charleston, South Carolina, received a special shipment of containers packed with baggies full of frozen human feces.

Teams of scientists there and at an outside lab worked together to grind the material into fine dust and blend it with water until it had the consistency of a smoothie. It was then poured into 10,000 tubes and distributed among NIST’s staff in Charleston and Gaithersburg, Maryland.

Scientists in both cities have been rigorously analyzing and studying the waste-matter mixture ever since.

All this excretory experimentation is helping to lay the groundwork for a new generation of treatments and medicines derived from human feces.

The power of poop comes from the microbes it contains. They are a rich sampling of the trillions of microbes living inside our gut, all part of the gut microbiome. In the last decade, scientists have linked the gut microbiome to a raft of human diseases, including inflammatory bowel disease, bacterial infections, autoimmune disorders, obesity and even cancer and mental illness.

Isolating fecal microbes and then turning them into therapies may be a way to treat many of these diseases. In fact, the FDA has recently approved two drugs for treating recurring bacterial infections, both of which are derived from highly processed human stool samples.

“This isn’t just wishful thinking. It’s already happening,” said NIST molecular geneticist Scott Jackson. “We are at the beginning of a new era of medicine.”

Why Poop?

Scott Jackson has spearheaded NIST’s efforts to develop a fecal reference material. Credit: R. Wilson/NIST

While human feces also contain water, undigested food and assorted inorganic matter, anywhere from 30% to 50% is made up of bacteria, viruses, fungi and other organisms that once lived in our guts.

We could not survive without these fellow travelers. They play a critical role in metabolism, vitamin production and digestion. By regulating the immune system, they help ward off harmful bacteria and toxins.

Their activity also impacts the nervous system via the gut-brain connection, affecting mood and mental health and influencing many neurological conditions, including Alzheimer’s and autism.

We are only beginning to understand the relationship between microbes and diseases. We have significant gaps in our knowledge about how microbes affect other systems and processes in the body. And certainly, just because changes in the gut microbiome correlate with a particular disease doesn’t mean they cause it.

Still, it’s clear that the signals gut microbes send to each other and cells elsewhere in the body significantly impact our health.

Doctors could get a sample of your microbiome directly from your gut, but that means undergoing an invasive procedure like a colonoscopy or biopsy.

Getting a specimen of stool is (ironically) less messy.

“Fecal material is convenient,” Jackson said. “Everybody poops.”

But Really, Poop Medicine?

Everyone’s stool is different. The amount and types of microorganisms vary based on your genes, environment, health and diet. But scientists have discerned similarities in the poop of individuals with certain diseases. People with Parkinson’s disease, for example, show both higher and lower concentrations of certain bacterial species. For people with asthma, poop has reduced levels of microbial diversity.

These correlations, some quite clear and others still unclear, may make it possible to use stool samples to diagnose a wide range of illnesses and conditions. You’d send a fecal sample to a lab, which would then identify the microorganisms in it by decoding, or “sequencing,” their DNA.
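As a concrete illustration of what sequencing yields downstream, labs often reduce a sample’s species counts to summary statistics such as the Shannon diversity index, the kind of metric behind statements like “reduced levels of microbial diversity.” This is a minimal sketch; the species and counts below are hypothetical, not real patient data.

```python
import math

# Hypothetical microbial abundance counts from a sequenced stool sample.
# A common single-number summary is the Shannon diversity index:
# H = -sum(p_i * ln(p_i)) over each species' relative abundance p_i.
sample = {
    "Bacteroides": 420,
    "Faecalibacterium": 310,
    "Bifidobacterium": 150,
    "Escherichia": 90,
    "Akkermansia": 30,
}

def shannon_diversity(counts):
    """Shannon index H; higher values indicate a more diverse community."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values() if c > 0)

h = shannon_diversity(sample)
print(f"Shannon diversity H = {h:.3f}")
```

A community dominated by a single species scores near zero, while an evenly spread community scores higher; comparing such indices across patient groups is one way the correlations described above are quantified.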

Jackson said the results could be used to not only diagnose certain illnesses but also evaluate the risk of getting the illnesses in the future.

“We would analyze microbial DNA in stool for the same reason we test human DNA — to tell us about your risk of disease,” he said.

Beyond their use in diagnosing diseases, could feces-derived microbes be used to treat them?

This is actually already happening.

Fecal microbiota transplants (FMTs) are now used to treat recurrent Clostridioides difficile infection (CDI), a sometimes-deadly bacterial infection commonly picked up in hospitals. FMT is like any other kind of transplant, though in this case, it’s someone else’s fecal matter that’s transferred into the sick patient.

The microorganisms from the transplant help regenerate the healthy ecosystem in the gut microbiome, assisting the immune system in fighting off the infection. For recurrent CDI, the procedure has a success rate of 95%, a remarkable result for just about any therapy.

Research into other uses for FMT is exploding. According to clinicaltrials.gov, there are dozens of studies in the United States right now involving FMT, with clinicians and researchers testing it out on everything from cancer to colitis to alcoholic hepatitis.

Researchers are also exploring alternative approaches that involve genetically modifying fecal bacteria, creating disease-fighting microbes that would take root in your gut and help restore the microbiome to full health. There’s even the possibility of altering your own fecal microbes, a personalized medicine approach that customizes the therapy to the individual patient.

In addition to the gut microbiome, there are multitudes of microorganisms in the nose, skin, throat and vagina, all part of what’s known as the human microbiome.

Jackson said that the next generation of microbial medicines will be derived from all over the human microbiome. They will be much more scientifically proven and effective for treating diseases than today’s probiotics, which are bacteria derived from fermented foods and categorized as dietary supplements.

“If things keep going the way they are now, I think in 30 years, medical doctors will have an arsenal of new microbial therapies to treat a broad spectrum of diseases,” Jackson said.

What’s NIST’s Role in All This?

NIST researchers handling frozen stool samples in Charleston. Credit: T. Schock/NIST

NIST produces reference materials (RMs) that help laboratories and manufacturers calibrate their instruments.

For example, the agency sells peanut butter that comes with a detailed analysis and measurements of its compounds and chemicals. Food companies need to know how much fat is in their products. To ensure they are measuring the amount correctly, they can perform tests on NIST’s peanut butter. If they get the same result as NIST, they know their equipment is accurate.
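The calibration check described here amounts to comparing a lab’s own measurement against the reference material’s certified value and its stated uncertainty. A minimal sketch, with hypothetical numbers rather than NIST’s actual certified values:

```python
# Sketch of a reference-material check: does a lab's measurement of a
# certified component fall within certified value +/- stated uncertainty?
# The figures below are hypothetical, not NIST's actual certified values.

def within_certified_range(measured, certified, uncertainty):
    """True if the lab's result falls inside certified +/- uncertainty."""
    return abs(measured - certified) <= uncertainty

# Hypothetical: total fat in a peanut butter reference material, g per 100 g.
certified_fat = 51.2
uncertainty = 1.4

print(within_certified_range(51.9, certified_fat, uncertainty))  # within range
print(within_certified_range(48.0, certified_fat, uncertainty))  # out of range
```

A result outside the certified range signals that the lab’s instrument or protocol needs adjustment before its measurements of real products can be trusted.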

Having widely trusted and accepted reference materials, especially for complex materials, ensures quality control and accuracy across entire industries and research fields.

NIST is now developing an RM for human feces, officially called the Human Gut Microbiome (Whole Stool) Reference Material. NIST’s peanut butter lists its amount of calcium, copper, tetradecanoic acid and many other components. Similarly, the new RM, expected to be released this year, will itemize and describe the ingredients in feces.  

It will identify hundreds of species of microorganisms and detail the concentration of thousands of different metabolites in the gut, many of which are produced by microorganisms and help to convert nutrients into energy or synthesize molecules for cellular functions.

Other ingredients listed include many compounds you might not even have known were in feces: cholesterol (a type of metabolite), for example, and serotonin, most of which is found in the cells lining the gastrointestinal tract.

The RM aims to become the gold standard in human gut microbiome research and drug development. A single unit of the RM will consist of a milliliter tube filled with slurry fecal matter accompanied by a lengthy report that labs can use to check their measurements and fine-tune their instruments.

Many scientists believe a reference material for feces is desperately needed. Right now, “If you give two different laboratories the same stool sample for analysis, you’ll likely get strikingly different results,” Jackson said. Many discrepancies arise from the different protocols and tools the labs use. Others are the result of differing standards and definitions.

“NIST’s RM will help researchers develop, benchmark and harmonize their measurements,” Jackson said. “It’s the most detailed and comprehensive microbiological and biochemical breakdowns ever produced for human feces.” 

Article link: https://www.nist.gov/health/future-medicine-your-poop
