healthcarereimagined

Envisioning healthcare for the 21st century

Closing the Digital Divide in Government: 5 Strategies for Digital Transformation – Nextgov

Posted by timmreardon on 01/31/2023
Posted in: Uncategorized.

By VIRAL CHAWDA AND ANDY GOTTSCHALK JANUARY 31, 2023 09:00 AM ET

Citizens are demanding a more connected government.

Change is seldom easy. Yet for government and public sector executives, the need to modernize has never been greater, as there is a growing digital divide between constituent expectations and what many governments can offer. The COVID-19 pandemic shifted the dynamic when it came to the digital imperative for government. More than ever, citizens are demanding a more connected government with integrated services, relevant experiences and self-driven service connection. Whether it’s paying taxes online or applying for benefits anytime and anywhere with the ease experienced in other aspects of their lives, citizens are seeking a government that is as modern as the technology they use in their daily lives. A failure to move forward with modernization efforts will only lead to a decline in government agencies’ ability to deliver services and potentially reinforce the prevailing view of government as inefficient, out of touch and indifferent to the needs of its constituents.

Today, some agencies are relying on legacy systems running programs that are more than a quarter century old and can no longer be supported or updated. Converting systems and applications of that age, scale and complexity to modern cloud-based solutions is difficult. And finding the right talent to orchestrate the change is not easy either. By contrast, leading agencies will move into the future by aggressively pursuing new technologies such as cloud, artificial intelligence, blockchain, low code platforms and analytical engineering. By some estimates, with digital transformation, savings for governments could exceed $1 trillion over the course of the next decade.  

As government and public sector agencies continue on their digital transformation journey this year, here are five strategies to adopt moving forward:

  1. Have a customer-centered design mindset when introducing new technology so that users will find new platforms and applications easy to use and will embrace them.
  2. Begin treating data like a product to be shared enterprisewide rather than continually duplicated in siloed departments. 
  3. Accelerate the use of a development, security and operations—DevSecOps—approach to creating new software applications, embedding security into them from the outset. 
  4. Continue to modernize technology stacks to include SaaS platforms, advanced cloud, AI and edge computing capabilities—as well as strong data management capabilities—and make it easy for end users to access the data insights they need for decision-making. 
  5. Embrace modularity and containerization to help with the challenge of modernizing large and complex legacy systems and applications. Modularity refers to dividing large software applications into smaller modules, while containerization refers to running those applications in an isolated environment.
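
To make the fifth strategy concrete, here is a hypothetical sketch (the system, function names, and business rules are all invented for illustration) of what modularity can look like in practice: a monolithic benefits workflow split into small modules with narrow interfaces, so each piece can later be packaged and run in its own container.

```python
# Hypothetical sketch: a monolithic benefits workflow split into two
# small modules behind narrow interfaces, so each can be containerized
# and deployed independently. Names and rules are illustrative only,
# not from any real government system.

def check_eligibility(applicant: dict) -> bool:
    """Eligibility module: a pure function with no shared state."""
    return applicant.get("income", 0) < 30_000 and applicant.get("resident", False)

def compute_benefit(applicant: dict) -> float:
    """Payments module: depends only on the applicant record."""
    base = 500.0
    dependents = applicant.get("dependents", 0)
    return base + 100.0 * dependents

def process(applicant: dict) -> float:
    """Thin orchestration layer: the only place modules are composed."""
    if not check_eligibility(applicant):
        return 0.0
    return compute_benefit(applicant)

print(process({"income": 25_000, "resident": True, "dependents": 2}))  # 700.0
print(process({"income": 50_000, "resident": True}))                   # 0.0
```

Because each module touches only its own inputs, either one could be moved into an isolated container and replaced without rewriting the other.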

Modernization is now more important than ever to bridge the digital divide and meet constituents’ rising expectations. These are just some of the many steps governments should consider as they continue on their digital journeys.

Viral Chawda is a principal and head of government technology at KPMG U.S. and Andy Gottschalk is a partner for health and government solutions at KPMG U.S. The views expressed are the authors alone and do not necessarily represent those of KPMG LLP.

Article link: https://www.nextgov.com/ideas/2023/01/closing-digital-divide-government-5-strategies-digital-transformation/382326/

Twelve Problems Negatively Impacting Defense Innovation – AEI

Posted by timmreardon on 01/27/2023
Posted in: Uncategorized.

By William C. Greenwalt AEIdeas January 26, 2023

As the US wrestles with a rapidly changing security environment, the creation of new military capabilities to counter these growing threats is essential. Defense leaders are being forced to relearn, as they have had to do in every conflict since the Revolutionary War, that one cannot just turn on a spigot and obtain weapons on demand. Industrial base constraints reliably manifest themselves in multi-year lead times such as we are seeing today to replace munitions used in Ukraine. Our quandary, however, is much greater than just reconstituting peacetime stocks of existing systems. Any impending conflict, or perhaps more optimistically the ability to deter a future conflict, will require not only production at scale, but innovation at scale that hasn’t been seen since WWII and the early Cold War.

The US is nowhere near being ready to embark on such an effort. Before doing so the Department of Defense (DOD) and Congress need to understand the depths of the issues that are holding back America’s ability to regain the level of technological dominance necessary to maintain deterrence or prevail in a war if deterrence fails. The following twelve problem areas are offered to begin to frame that understanding. We need to focus our attention on the right problems, as well-intended solutions to the wrong ones will end up just exacerbating our decline. This is not an all-inclusive list. One could easily nail 95 innovation theses to the doors of the Pentagon if security would allow it. Still, this represents an initial attempt to identify some of the more significant barriers that will prevent US military success unless we act soon. 

1) There is no sense of urgency yet. Defense management systems and the industrial base are optimized for a peacetime cadence after 30 years without a Great Power conflict. It took years to get to this point and without focused leadership we will never adjust to a different set of circumstances.

2) Process compliance is our most valued objective rather than time. Time to operational capability as described in the report “Competing in Time” has been the primary historical forcing function for disruptive innovation, and yet it is not valued in DOD or Congress.  

3) We are all communists now. Just as was the case in the Soviet Union, centrally planned, linear, predictive processes and mindsets destroy innovation and creativity. These processes took root at DOD in the 1960s under McNamara and have had 60 years to engrain themselves in culture.

4) Budget inflexibility in year of execution and long lead times to allocate resources are the root cause of our declining competitiveness and innovation failures (especially in the many versions of the Valley of Death).

5) The predictive and lumbering requirements process forecloses innovation opportunities from the start as it is the gateway to the acquisition and budgeting system.

6) Operational interests are not aligned or supported within the acquisition and budgeting systems – both at the combatant command and service component command levels. 

7) The barriers to civil-military integration of the industrial base have continued to widen as DOD prefers to dictate solutions to defense-unique monopoly providers that have taken on many of the characteristics of pre-WWII government-run arsenals.

8) Defense contracting has become more of an enforcer of socio-economic programs and goals than an enabler of capability. Unique non-market rules keep out non-traditional and commercial companies and solutions and drive up costs.

9) The authority and ability of program officials to do their jobs have been limited by adversarial oversight. Incentives and rules drive contracting officer enforcement of process rather than capability outputs or program objectives. Testing, technology, and auditing bureaucracies double down on “gotcha,” check-the-box oversight rather than provide cooperative insights and proactive value.

10) Production capability is a key component of innovation and has been allowed to deteriorate in both the traditional and commercial industrial bases. DOD ignored the implications of the last two decades of commercial globalization and production outsourcing to China that have hollowed out the US industrial base. Just-in-time efficiency requirements and barely sustainable production rates have destroyed defense-specific industrial capabilities and undermined military readiness.

11) Incentives for industry are not aligned to DOD innovation interests. The preponderance of cost contracts, counterproductive reimbursement rates and policies, and lack of program opportunities have left the traditional defense industrial base and government organic depots to be built around long-term maintenance revenues and decades long weapon systems franchises making it politically difficult to modernize. 

12) Security and technology control policies (ITAR) are built around an era of US defense technological dominance that has long passed and now serve as barriers to innovation. Both Silicon Valley and allied cooperation will be needed to compete against China but outdated thinking and processes hinder such cooperation.

Article link: https://www.aei.org/foreign-and-defense-policy/defense/twelve-problems-negatively-impacting-defense-innovation/

NIST Debuts Long-Anticipated AI Risk Management Framework – Nextgov

Posted by timmreardon on 01/26/2023
Posted in: Uncategorized.

By ALEXANDRA KELLEY JANUARY 26, 2023 02:00 PM ET

With the launch of the AI RMF 1.0, federal researchers focused on four core functions to structure how all organizations evaluate and introduce more trustworthy AI systems.

The National Institute of Standards and Technology unveiled its long-awaited Artificial Intelligence Risk Management Framework on Thursday morning, representing the culmination of an 18-month-long project that aims to be universally applicable to any AI technology across all sectors. 

Increasing trustworthiness and mitigating risk are the two major themes of the framework, which NIST Director Laurie Locascio introduced as guidance to help organizations develop low-risk AI systems. The document outlines types of risk commonly found in AI and machine learning technology and how entities can build ethical, trustworthy systems. 

“AI technologies have significant potential to transform individual lives and even our society. They can bring positive changes to our commerce and our health, our transportation and our cybersecurity,” Locascio said at the framework’s launch event. “The AI RMF will help numerous organizations that have developed and committed to AI principles to convert those principles into practice.”

The framework offers four interrelated functions as a risk mitigation method: govern, map, measure, and manage.

“Govern” sits at the core of the RMF’s mitigation strategy and is intended to instill a foundational culture of risk prevention and management in any organization using the RMF.

Building atop the “Govern” foundation, “Map” comes next in the RMF game-plan. This step works to contextualize potential risks in an AI technology, and broadly identify the positive mission and uses of any given AI system, while simultaneously taking into account its limitations. 

This context should then allow framework users to “Measure” how an AI system actually functions. Crucial to the “Measure” component is employing sufficient metrics that represent universal scientific and ethical norms. Strong measuring is then applied through “rigorous” software testing, further analyzed by external experts and user feedback. 

“Potential pitfalls when seeking to measure negative risk or harms include the reality that development of metrics is often an institutional endeavor and may inadvertently reflect factors unrelated to the underlying impact,” the report cautions. “Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact and human-AI configurations.” 

The final step in the AI RMF mitigation strategy is “Manage,” whose main function is to allocate risk mitigation resources and ensure that previously established mechanisms are continuously implemented. 

“Framework users will enhance their capacity to comprehensively evaluate system trustworthiness, identify and track existing and emergent risks and verify efficacy of the metrics,” the report states.
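
As a toy illustration (not NIST’s normative language; the one-line activity summaries below are paraphrased), the four interrelated functions can be treated as a simple review checklist that flags which functions a project has not yet addressed:

```python
# Toy sketch of the AI RMF's four functions as a review checklist.
# The one-line summaries are paraphrased illustrations, not NIST's
# normative subcategories.

RMF_FUNCTIONS = {
    "Govern":  "Establish a risk-management culture and accountability.",
    "Map":     "Put the system in context; identify intended uses and limits.",
    "Measure": "Track metrics for trustworthiness and social impact.",
    "Manage":  "Allocate resources to treat the risks identified.",
}

def review(completed: set[str]) -> list[str]:
    """Return the RMF functions a project has not yet addressed."""
    return [f for f in RMF_FUNCTIONS if f not in completed]

print(review({"Govern", "Map"}))  # ['Measure', 'Manage']
```

The ordering matters: a project that has only done “Govern” and “Map” still owes the measurement and management work before the loop closes.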

Business owners participating in the AI RMF also expressed optimism at the framework’s guidance. Navrina Singh, the CEO of AI startup Credo.AI and member of the U.S. Department of Commerce National Artificial Intelligence Advisory Committee, said that customers seeking AI solutions want more holistic plans to mitigate bias.

“Most of our customers…are really looking for a mechanism to build capacity around operationalizing responsible AI, which has done really well in the ‘Govern’ function of the NIST AI RMF,” she said during a panel following the RMF release. “The ‘Map, Measure, Manage’ components and how they can be actualized in a contextual way, in all these specific use cases within these organizations, is the next step that most of our customers are looking to take.”

The new guidance was met with broad bipartisan support, with Rep. Zoe Lofgren, D-Calif., and Rep. Frank Lucas, R-Okla., both sending congratulatory messages for the launch event.

“By taking a rights affirming approach, the framework can maximize the benefits and reduce the likelihood of any degree of harm that these technologies may bring,” Lofgren said at the press briefing. 

Community participation from a diverse group of sectors was critical to the development of the framework. Alondra Nelson, the Deputy Director for Science and Society at the White House Office of Science and Technology Policy, said that her office was one of the entities that gave NIST extensive input into the AI RMF 1.0. She added that the framework, like the White House AI Bill of Rights, puts the human experience and impact from AI algorithms first. 

“The AI RMF acknowledges that when it comes to AI and machine learning algorithms, we can never consider a technology outside of the context of its impact on human beings,” she said. “The United States is taking a principled, sophisticated approach to AI that advances American values and meets the complex challenges of this technology and we should be proud of that.” 

Much like the AI Bill of Rights, NIST’s AI RMF is a voluntary framework, with no penalties or rewards associated with its adoption. Regardless, Locascio hopes that the framework will be widely utilized and asked for continued community feedback as the agency plans to issue an update this spring. 

“We’re counting on the broad community to help us to refine these roadmap priorities and do a lot of heavy lifting that will be called for,” Locascio said. “We’re counting on you to put this AI RMF 1.0 into practice.”

Comments on the AI RMF 1.0 will be accepted until February 27, 2023, with an updated version set to launch in Spring 2023.

Article link: https://www.nextgov.com/emerging-tech/2023/01/nist-debuts-long-anticipated-ai-risk-management-framework/382251/

Artificial Intelligence Risk Management Framework – NIST

Posted by timmreardon on 01/26/2023
Posted in: Uncategorized.

Artificial intelligence (AI) can learn and adapt. Organizations developing or using AI products can do the same, but the innovations will bring risks.

Now, NIST’s Artificial Intelligence Risk Management Framework can help.

The framework, released today, is voluntary guidance that equips organizations to think about AI and risk differently. It promotes a change in institutional culture, encouraging organizations to approach AI with a new perspective — including how to think about, communicate, measure and monitor AI risks.

Get your organization started: https://lnkd.in/ekJZJsTM

#AI #ArtificialIntelligence #RiskManagement #NIST

https://www.linkedin.com/posts/nist_ai-artificialintelligence-riskmanagement-activity-7024400931756625920-bdpJ?

Plan for Federal AI Research and Development Resource Emphasizes Diversity in Innovation – Nextgov

Posted by timmreardon on 01/24/2023
Posted in: Uncategorized.

By ALEXANDRA KELLEY JANUARY 24, 2023 03:00 PM ET

The National Artificial Intelligence Research Resource Task Force released its operating framework, making the case for implementation.

The future of a federal artificial intelligence resource will focus on incorporating diversity and accessibility into the creation of new technologies, in a bid to both democratize access to the emerging technology and ensure a lack of bias in machine learning systems. 

On Tuesday, the task force charged with planning development of a National Artificial Intelligence Research Resource unveiled its Congressionally-mandated report that acts as guidance in establishing a formal AI research and development landscape in the U.S. 

Key pillars in the roadmap focus on how to open access to the emerging AI/ML fields for socioeconomically marginalized Americans, especially when these technologies have historically harmed vulnerable groups.

“So bottom line up front is that AI is driving scientific discovery and economic growth across a range of sectors, and at the same time, it’s raising new challenges related to its ethical and responsible development views,” a NAIRR Task Force member said during a press briefing. “Access to the computational and data resources that drive the cutting edge of AI remains primarily limited to those who are working at large tech companies and well-resourced universities.”

To better bridge this AI resource gap, the task force’s final report recommends establishing the NAIRR with four measurable foundations: spurring innovation, increasing diversity in the AI/ML field, improving career capacity and broadly advancing trustworthy AI systems.

“The key takeaway from the final report is that a NAIRR would connect America’s research community to the computational data and testbed resources that fuel AI research through a user friendly interface and associated training and user support,” the task force member said. 

The creation of the NAIRR was one of the provisions set out in the National Artificial Intelligence Initiative Act of 2020, designed to serve as a collection of federal resources for AI development, like testing data and software interfaces. 

In addition to the bevy of computational resources the NAIRR would offer, policies governing the usage of these platforms would hinge on civil rights and civil liberties research review criteria, in addition to ethics training for NAIRR users.

Task force members anticipate needing a budget of $2.6 billion over an initial six-year time frame to support NAIRR operations. Funding and oversight for the NAIRR would be directed by the National Science Foundation. A separate steering committee would be tasked with managing the resource’s activity and funding distribution.

AI regulation has been a chief priority among tech-oriented federal agencies. Both the National Institute of Standards and Technology and the White House Office of Science and Technology Policy have developed separate AI governing frameworks in recent years. These similar roadmaps, called the AI Risk Management Framework and the Blueprint for an AI Bill of Rights, respectively, will influence how the NAIRR implements safeguards for its resource offerings.

NIST will be releasing its Artificial Intelligence Risk Management Framework on Thursday.

“The rollout of these two reports on the same week is really showcasing the dual priority of advancing AI innovation and doing so in a manner that mitigates risk and advances best practices and responsible and trustworthy AI,” the NAIRR task force member said.

Article link: https://www.nextgov.com/emerging-tech/2023/01/plan-federal-ai-research-and-development-resource-emphasizes-diversity-innovation/382144/

SOFTWARE DEFINES TACTICS – War on the Rocks

Posted by timmreardon on 01/23/2023
Posted in: Uncategorized.

JASON WEISS AND DAN PATT

JANUARY 23, 2023

The book “The Bomber Mafia” revolves around the efforts of military officers to develop effective tactics for a bomber force, emphasizing the feedback cycles between operational results and new bombsight and aircraft development. This storyline is timeless: Major military advances like the Blitzkrieg tactics credited with Germany’s rapid gains at the opening of World War II are largely about pioneering new tactics enabled by emerging technology — at that time, aircraft, radios, and armor. While the ability of the U.S. military and its industrial base to bend metal and push the bounds of physics with aircraft design has slowed dramatically as technology and bureaucratic processes have matured, there still lies ahead a bold frontier of new innovation.

Currently, military-technical competitive advantage is driven more by software, data, and artificial intelligence and the new tactics they enable than by bent metal and physics. This is apparent across different military contingencies, from scrappy improvised battle networks in Ukraine to plans for future fighter aircraft that will team with a mix of humans and unmanned systems. New analytics tools and processing algorithms can make nearly real-time sense of data from thousands of sensors scattered across domains. Automated control systems manage vehicle guidance, sensor tasking, and occasionally introduce unexpected failure modes. Planning tools allow operators to innovate on new force packages and tactics and to replay and rehearse scenarios hundreds of times. Software is the glue that binds our military professionals and systems together, and increasingly the loop between development and operations is defined by digital data.

Acquisition leaders can, therefore, deliver results and create a vibrant environment of learning and improvement by following six principles that offer guidance on how to set up infrastructure permitting code to change quickly and securely, how to divvy up complex software efforts into manageable chunks that can still interoperate, and how to embed digital talent, user feedback, and security practices into a project.

The Department of Defense’s acquisition community has been thrown into a turbulent sea of software buzzwords and technological change that does not conform to current bureaucratic processes, industrial base momentum, and external pressures. Indeed, there are important efforts to recognize the need for acquisition authorities to reprioritize how money is spent before the annual budget is passed, new software acquisition frameworks, and even an experimental color of money designed to accommodate the fact that code can shuffle between development and operations several times in a day.

While new tools will be helpful, the acquisition community also recognizes the promise of software for military advantage and can scrape together enough tools and authorities to deliver on this promise today. What would be most useful would be a simple set of principles that can cut through the complexity of choices that program executive officers and program managers face. We’ve recently written a report designed to pick up where the Defense Innovation Board’s seminal Software Is Never Done study left off.

In particular, our report found that there is no one-size-fits-all best practice for software acquisition. Every effort is different: Much like logistics professionals orchestrate handoffs between ships, planes, and trucks to deliver goods on time, acquisition professionals must learn to orchestrate between web applications, embedded systems, and middleware systems. They must mix and match between software-as-a-service, source code delivery, and exposed interfaces in order to deliver digital capability for the mission.

There is a proven set of principles that can help program managers skillfully guide their acquisitions as though they are a software company seeking to deliver a weapon system. This is similar to how executives at Amazon, Netflix, Airbnb, and Uber harnessed software, data, and machine learning to deliver market advantage in what were legacy industries of retail, entertainment, hospitality, and transportation. We call these principles “Acquisition Competency Targets for Software.”

Software defines tactics

Source:  Software Defines Tactics: Structuring Military Software Acquisitions for Adaptability and Advantage in a Competitive Era by Jason Weiss and Dan Patt.

The Path to Artificial Intelligence Starts With Allowing Software to Evolve 

Data and artificial intelligence and machine learning are widely considered important to the military’s future: Just look at the establishment last year of a new organization that reports directly to the deputy secretary — the Chief Digital and Artificial Intelligence Office. But officials must recognize that the origin of all digital data is software. The fresher and higher quality the data, the better data scientists can define artificial intelligence/machine learning models. Feedback loops are required to validate and mature each specific model. Direct and immediate access to software applications that are designed and built to accept and deploy artificial intelligence/machine learning models creates the closed loop. Simply put, software begets data, data begets artificial intelligence/machine learning models, and these models need to be integrated into software to be actualized. The buzz-worthy commercial AI developments of 2022 like ChatGPT or Midjourney are built by processing data at nearly the internet’s scale, but are founded, enabled, and delivered on a robust software foundation. To ignore the role of software in this closed loop in favor of more intentional pursuit of data or artificial intelligence/machine learning strategies offers limited returns: Each part of this digital triad must be advanced simultaneously to realize battlefield advantages.
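
The closed loop described above can be sketched in miniature. In this entirely illustrative example, the “software” logs every observation, the “model” is nothing more than a threshold learned from that logged data, and deployment means feeding the learned threshold back into the software:

```python
# Toy closed loop: software emits data, data yields a "model" (here,
# just a learned threshold), and the model is deployed back into the
# software. Entirely illustrative.

events = []

def software_step(value: float, threshold: float) -> bool:
    """The application: flags values above threshold and logs every observation."""
    events.append(value)
    return value > threshold

def fit_model(data: list[float]) -> float:
    """'Training': derive a new threshold from observed data."""
    return sum(data) / len(data)

# Run the software with an initial hand-set threshold, accumulating data.
threshold = 10.0
for v in [2.0, 4.0, 6.0]:
    software_step(v, threshold)

# Close the loop: the model is built from the software's own data,
# then redeployed into the software.
threshold = fit_model(events)
print(threshold)                      # 4.0
print(software_step(5.0, threshold))  # True
```

Remove the software from the picture and the loop never starts: there is no data to fit and nowhere to deploy the model.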

Software, in particular, must be acquired with an eye to future change — not only to enable artificial intelligence but also to enable security in light of ever-evolving threats. The best means to enable this constant change is a software factory or pipeline, which is the set of tools and processes that takes the raw ingredients of working software — including source code — and delivers operational capability. The foundational Acquisition Competency Target for Software charges program executive officers to use a software factory suited to the needs of their particular project, but recognizes that the costs and complexities of building out a functioning software factory are significant. Instead, programs should first evaluate and explore an expanding set of existing Defense Department software factories to determine if one is close to meeting program needs. Investing in closing a small gap is likely to be cost-effective and to provide access to a functioning software ecosystem faster than starting from scratch.

We also recognize the foundational importance of security in defense software. In fact, it is so important that security checks should be built into the delivery pipeline from the beginning.  The Acquisition Competency Target for Software #2 recognizes that this can only happen by working closely with an authorizing official to jointly partner and invest in the ingredients required to achieve a continuous authorization to operate. Early partnership here acknowledges that too many authorizing officials are still unsure what “shifting cybersecurity left into the delivery pipeline” means, what it accomplishes, or why they should support these activities. Education and dialogue along with adoption of a “Yes, if…” mentality will speed deployments into operationally relevant environments.
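
A minimal sketch of the idea, with hypothetical stage names and checks, models the software factory as an ordered sequence of gates in which the security scan is a first-class stage that can fail the pipeline, rather than a review bolted on at the end:

```python
# Minimal sketch of a delivery pipeline ("software factory") in which
# a security scan is a first-class gate rather than an afterthought.
# Stage names, the artifact format, and the checks are hypothetical.

from typing import Callable

def build(artifact: dict) -> dict:
    artifact["built"] = True
    return artifact

def unit_test(artifact: dict) -> dict:
    assert artifact["built"], "cannot test an unbuilt artifact"
    artifact["tested"] = True
    return artifact

def security_scan(artifact: dict) -> dict:
    # "Shift left": fail the pipeline on any known vulnerability,
    # long before an authorization decision is on the table.
    if artifact.get("known_vulnerabilities", 0) > 0:
        raise RuntimeError("security gate failed")
    artifact["scanned"] = True
    return artifact

def deploy(artifact: dict) -> dict:
    artifact["deployed"] = True
    return artifact

PIPELINE: list[Callable[[dict], dict]] = [build, unit_test, security_scan, deploy]

def run(artifact: dict) -> dict:
    """Push one artifact through every gate in order."""
    for stage in PIPELINE:
        artifact = stage(artifact)
    return artifact

print(run({"name": "app", "known_vulnerabilities": 0})["deployed"])  # True
```

Because every deployment passes the same automated gates, an authorizing official can reason about the pipeline once instead of re-adjudicating each release, which is the premise behind a continuous authorization to operate.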

The DNA of a Better Joint Force is Born From Allowing Systems to Recombine

From the Defense Advanced Research Projects Agency’s Mosaic Warfare concept to the new Joint Warfighting Concept, the U.S. military’s operational concepts are increasingly built on the idea of being able to flexibly recombine mission systems into an ever-larger array of kill chain options. But the mechanics of this are accomplished by splicing software systems together. The reality is that this process is closer to an organ transplant than to playing with building blocks. There are some key principles that enable this, though — and they start with defining and exposing interfaces, often referred to as application programming interfaces.

Application programming interfaces are the quintessential mechanism for interacting with software. Well-defined interfaces can mitigate data access risks for programs. Acquisition Competency Target for Software #3 emphasizes this point by establishing the need to own your application programming interfaces, even more so than the source code or executable. After all, it is application programming interfaces that are the pre-requisite for establishing tip and cue capabilities between disparate platforms — not data or artificial intelligence/machine learning models. Furthermore, the enterprise architecture will ebb and flow with time, and a robust set of application programming interfaces marginalizes the impact of decommissioning old architectures in favor of new architectures in programs that measure their lifetime in years. Choosing how to define modularity does require good judgement, however.
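
As a hypothetical sketch of what “own your application programming interfaces” means in code (the provider names and the tracking domain are invented), mission software is written against a program-owned interface, so the underlying platform can be decommissioned and replaced without touching the consumers:

```python
# Sketch: "owning the API" means consumers depend on a stable,
# program-owned interface, not on any one platform's implementation.
# Provider names and the tracking domain are illustrative.

from abc import ABC, abstractmethod

class TrackProvider(ABC):
    """The program-owned interface: the contract that survives when
    the underlying platform is swapped out."""
    @abstractmethod
    def tracks(self, region: str) -> list[str]: ...

class LegacyRadar(TrackProvider):
    def tracks(self, region: str) -> list[str]:
        return [f"legacy-{region}-1"]

class NewSensorMesh(TrackProvider):
    def tracks(self, region: str) -> list[str]:
        return [f"mesh-{region}-1", f"mesh-{region}-2"]

def cue(provider: TrackProvider, region: str) -> int:
    """Mission code is written against the interface only."""
    return len(provider.tracks(region))

print(cue(LegacyRadar(), "north"))    # 1
print(cue(NewSensorMesh(), "north"))  # 2
```

Swapping `LegacyRadar` for `NewSensorMesh` changes nothing in `cue`, which is the point: the interface, not the implementation, is the durable asset.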

A recurring theme heard in both government and private sector corporations is that people, not technology, are the most important assets. The need to attract, hire, and retain top-tier technologists means that programs must learn to behave like a software recruiter. Leadership at Kessel Run recognized the futility of recruiting staff using government websites and intentionally turned to social media to create demand. Kessel Run was depicted as an exclusive club, something people should seek to be a part of even if it meant abandoning an existing software start-up to take a government position with a reduced compensation package. The Acquisition Competency Target for Software #4 reminds program managers that they should behave like a software recruiter because digital modernization requires digital talent.

Use Modern Tools, Concepts, and Contracting Approaches to Deliver Resilience

In Silicon Valley venture capital circles, delivering software-as-a-service, usually by charging a monthly fee for a cloud-delivered web application, has been popular for more than a decade. The critical concept that underpins this success is the idea of a contract — a service-level agreement — built on an indicator that defines and quantitatively measures abstract notions like system availability. Development teams use these service level indicators to design and operate a system at a specific level, like 99.9 percent uptime. When one organization partners with another to deliver a service, they forge a service-level agreement that legally stipulates expectations and financial penalties for not meeting a specific service level, like that uptime guarantee. Acquisition Competency Target for Software #5 aims to recognize that anything that can be easily measured in service levels should be a strong candidate for outsourced operation instead of investing time and money towards in-house custom development. This principle is critical and is worth expanding upon.
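
The arithmetic behind an uptime target is simple enough to sketch: a "three nines" (99.9 percent) objective translates into a concrete monthly budget of permitted downtime that a service-level agreement can enforce.

```python
# Back-of-envelope arithmetic for a service-level objective: convert
# an uptime target into minutes of permitted downtime per window.

def allowed_downtime_minutes(slo: float, days: int = 30) -> float:
    """Minutes of permitted downtime per `days`-day window at `slo` uptime."""
    total_minutes = days * 24 * 60
    return total_minutes * (1.0 - slo)

print(round(allowed_downtime_minutes(0.999), 1))   # 43.2
print(round(allowed_downtime_minutes(0.9999), 2))  # 4.32
```

So 99.9 percent uptime allows roughly 43 minutes of outage in a 30-day month, while adding one more nine shrinks the budget to under five minutes; the indicator makes the difference between the two commitments unambiguous and contractible.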

Accountants think in terms of spreadsheets, graphic artists in terms of page layout, and project managers in terms of the venerable Gantt chart, which depicts deliverables as diamonds plotted against their anticipated time of arrival to assist production planning. The Gantt chart is excellent for assembling a list of tasks, dependencies, and milestones and visualizing progress to date. This project management tool has proven indispensable for coordinating industrial activities across a production floor at a given time. In the industrial setting, the hard part is not connecting the pieces, it is getting all the pieces to arrive on the factory floor at just the right moment.

The very essence of software, and in particular DevOps, is the opposite. To generalize, advanced commercial software for operating a modern enterprise is available on-demand as software-as-a-service. Enterprises can access everything from state-of-the-art resource planning to product life cycle management software, from learning management systems to human resource management systems. With software, the hard part is not getting all the pieces to arrive by a milestone on the calendar; the difficulty is connecting them so that they interoperate reliably and at speed.

The integration between different architectures and application programming interface styles, combined with dozens of competing programming standards, creates delivery delays and affects the reliability and operational speed of a system. Software integration is not analogous to 20th-century industrial integration, and the Gantt chart has proven to be an unreliable management technique for software integration activities.

When a program can cleanly express a need, including its reliability, in a service-level indicator, and established vendors can meet or exceed those metrics, then the program’s first instinct should be to outsource that capability to a commercial or dual-use provider. Doing this allows the program to focus on building novel solutions for the hard-to-define requirements that existing providers cannot demonstrably meet. When the act of production is difficult, Program Executive Offices should want diamonds (on a Gantt chart), and when the speed and reliability of data in motion are difficult, they should want service levels.

The final principle for navigating an era where “software is eating the world” is Acquisition Competency Target for Software #6: defining zero-trust outcomes within the program’s software applications. This means breaking software systems into defensible chunks, shifting the cybersecurity conversation from one where the discovery of a single vulnerability means total compromise to one where intruder actions trigger an active response at the speed of relevance. Designers try to “air gap” the most important networks, ensuring that there are no connections to the internet or outside systems via which vulnerabilities can be injected, but even these formidable moats can be stormed by intruders. While this is no excuse to forgo building formidable defenses, software is so extensive and dynamic that vulnerabilities and attacks are inevitable.

In recognition of this, the Defense Department has published a Zero Trust Strategy, but program managers also have a role in establishing incentives for realizing software applications that natively exhibit resilience. When that resilience is built into the structure of the code and the delivery pipeline, attackers are unable to discern whether they breached an application’s defenses under a “permit” condition or a “deny with countermeasures” condition that has alerted its security team and returned invalid and illogical data.
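One way to picture the “deny with countermeasures” idea is a hypothetical request handler (the function, trust score, and threshold below are invented for illustration): a low-trust caller receives a response whose shape matches the normal “permit” path but whose data is invalid, while defenders are alerted silently.

```python
import secrets

def handle_request(caller_trust: float, real_value: float, alerts: list) -> dict:
    # "Permit" path: a sufficiently trusted caller sees the real data.
    if caller_trust >= 0.8:
        return {"status": "ok", "value": real_value}
    # "Deny with countermeasures": silently alert the security team and
    # return a well-formed but invalid value, so the intruder cannot tell
    # whether the application's defenses were actually breached.
    alerts.append("low-trust access attempt")
    decoy = secrets.randbelow(1_000) / 10.0
    return {"status": "ok", "value": decoy}
```

The design point is that both branches return the same response shape; nothing in the reply distinguishes a genuine permit from a countermeasure.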

A Strategic Advantage From Better Software Delivery

If the Department of Defense acquires and develops software according to these six Acquisition Competency Targets for Software, with high-speed delivery mechanisms capable of quick changes, then software will enable superior force adaptability for the U.S. military compared to its strategic competitors. Adaptability is not an inherent property of software-defined systems: once code has been compiled, deployed, and disconnected from the build chain that produced it, changing its design and behavior becomes extremely expensive, and the software grows rigid and resistant to change. Together, these six Acquisition Competency Targets for Software offer a counterbalance.

Sustainable operation of a software factory (Acquisition Competency Target for Software #1) running on a system that has embraced the principles of a continuous authorization to operate (Acquisition Competency Target for Software #2) provides a secure pathway for continuous improvement of software, data, and the deployment and feedback required for successful artificial intelligence/machine learning activities. When a program owns its application programming interfaces (Acquisition Competency Target for Software #3), it can experiment with new vendors, from small upstarts to large primes, constantly adapting to new threats without incurring the overhead of large-scale custom integration. Talent (Acquisition Competency Target for Software #4) is required to create these application programming interfaces and to properly design and build zero trust-enabled software applications (Acquisition Competency Target for Software #6). Finally, outsourcing those aspects of the software architecture that can easily be measured and delivered to well-defined service levels by existing vendors (Acquisition Competency Target for Software #5) enables programs to maximize investments in novel research and development, not mundane IT operations.

Software delivery is about getting the right bits to the right places at the right time. Making these acquisition competency targets clear for software is another step forward in helping the acquisition community more efficiently navigate the software world. We must recognize and explore the value of diversity of form — the heterogeneity of software and the fact that a one-size-fits-all approach is incongruent with the software-defined world the warfighter must fight in. The Department of Defense must act in a way that recognizes that software, not legacy warfighting platforms, controls the speed and efficacy of the modern kill chain and military dilemma. And finally, the Department of Defense needs to formally recognize the digital triad of software, data, and artificial intelligence/machine learning as equal peers. Doing so will help the Department of Defense find its footing in an era where software defines tactics.

Jason Weiss is a visiting scholar at the Hudson Institute, former chief software officer for the Department of Defense, and holds a leadership position at Conquest Cyber.

Dan Patt is a senior fellow at the Hudson Institute, a former Defense Advanced Research Projects Agency official, and the former CEO of Vecna Robotics.

Image: 78th Air Base Wing Public Affairs photo by Tommie Horton

Article link: https://warontherocks.com/2023/01/software-defines-tactics/

Is China About To Destroy Encryption As We Know It? Maybe – Defense One

Posted by timmreardon on 01/22/2023
Posted in: Uncategorized. Leave a comment

PATRICK TUCKER JANUARY 20, 2023

A new research paper claims to offer a quantum-powered code-breaker of spectacular power. “If it’s true, it’s pretty disastrous,” says one expert.

Late last month, a group of Chinese scientists quietly posted a paper purporting to show how a combination of classical and quantum computing techniques, plus a powerful enough quantum computer, could shred modern-day encryption. The breakthrough, if real, would jeopardize not only much U.S. military and intelligence-community communication but also financial transactions and even your text messages.

One quantum technology expert said simply “If it’s true, it’s pretty disastrous.” 

But the breakthrough may not be all it’s cracked up to be.

The paper, “Factoring integers with sublinear resources on a superconducting quantum processor,” is currently under peer review. It claims to have found a way to use a 372-qubit quantum computer to factor the 2,048-bit numbers used in the RSA encryption system relied on by institutions from militaries to banks to communication app makers.

That’s a big deal because quantum experts believed that it would require a far larger quantum computer to break RSA encryption. And IBM already has a 433-qubit quantum processor.
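For context on why factoring is the whole ballgame, here is a textbook illustration (not from the paper) using toy primes: RSA’s private key is derived from the prime factors of the public modulus, so any machine that can factor 2,048-bit numbers recovers private keys outright.

```python
# Toy RSA with tiny primes (real RSA uses a 2,048-bit modulus).
p, q = 61, 53                  # the secret prime factors
n = p * q                      # public modulus; factoring n reveals p and q
e = 17                         # public exponent
phi = (p - 1) * (q - 1)        # computable only if you know the factors of n
d = pow(e, -1, phi)            # private exponent, derived from p and q

msg = 42
cipher = pow(msg, e, n)        # anyone can encrypt with the public key (n, e)
plain = pow(cipher, d, n)      # whoever factors n can decrypt
```

Shor’s algorithm performs exactly this factoring step in polynomial time on a sufficiently large quantum computer; the dispute over the Chinese paper is whether a much smaller machine could do the same.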

The Chinese researchers claim to have achieved this feat by using a quantum computer to scale up a classical factoring algorithm developed by German mathematician Claus Peter Schnorr. 

“We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm. Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance,” they wrote.

Lawrence Gasman, founder and president of Inside Quantum Technology, says he’s a bit skeptical, but “It’s enormously important that some people in the West come to some real conclusions on this because if it’s true, it’s pretty disastrous.” 

Gasman said the paper’s most alarming aspect is the idea that it might be possible to break key encryption protocols not with a hypothetical future quantum computer but a relatively simple one that could already exist, or exist soon. 

“If you look at the roadmaps that the major quantum computer companies are putting out there, talking about getting to a machine of the power that the Chinese are talking about, frankly, I don’t know. But you know, this year, next year, very soon. And having said that, I tend to be a believer that it’s going to happen soon.”

Yet Gasman said he was concerned about the numbers cited in the paper: “There’s a lot of hand-waving in there.” 

Andersen Cheng, CEO of the company Post-Quantum, said via email: “The general consensus in the community is that whilst these claims cannot be proven to work, there is no definitive evidence that the Chinese algorithm cannot be successfully scaled up either. I share this skepticism, but we should still be worried as the probability of the algorithm working is non-zero and the impact is potentially catastrophic. Even if this algorithm doesn’t work, a sufficiently powerful quantum computer to run Shor’s algorithm”—a method of factoring the very large numbers used by RSA—”will one day be designed – it is purely an issue of engineering and scaling the current generation of quantum computers.”

Defense One reached out to several U.S. government experts, who declined to comment on the paper. But University of Texas at Austin computer science professor Scott Aaronson was a bit harsher on the paper in his blog earlier this month. To wit: “No. Just No.”

Wrote Aaronson: “It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so. All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen…many.” 

So is the paper a fraud, a “catastrophe,” or something in between? Gasman says that while the political race for quantum supremacy is tightening, it would be uncharacteristic of the Chinese research community to make a bold, easily punctured false claim. He described the majority of published quantum research out of China as fairly “conventional” and said it’s unlikely that China would risk its stature as a leader in quantum science by pushing bunk papers. 

“Nobody’s going to say, ‘Oh, it’s the Chinese and they, you know, they’re dissembling and it’s all about the rivalry with the West or the rivalry with the [United States]’,” he said.

Gasman added that while China leads in some aspects of quantum science (such as quantum networking) and quantum computer science, having built the world’s “fastest” quantum computer, the United States leads in many other aspects. 

Even if this paper turns out to be wrong, it is a warning of what’s to come. The U.S. government has become increasingly concerned about how quickly key encryption standards could become obsolete in the face of a real quantum breakthrough. Last May, the White House told federal agencies to move quickly toward quantum-safe encryption in their operations. 

But even that might be too little, too late. Said Cheng: “We need to be prepared for the first [Cryptographically Relevant Quantum Computer] to be a secret – it is very likely that when a sufficiently powerful computer is created we won’t immediately know as there won’t be anything like mile-high mushroom clouds on the front covers, instead, it will be like the cracking of Enigma – a silent but seismic shift.”

Article link: https://www.defenseone.com/technology/2023/01/china-about-destroy-encryption-we-know-it-maybe/382041/

Scientists Weigh in on the Ethics of Next-Generation AI – Nextgov

Posted by timmreardon on 01/18/2023
Posted in: Uncategorized. Leave a comment

By JOHN BREEDEN II JANUARY 12, 2023

The release of a powerful and publicly available AI has raised questions about the technology’s potential and points of concern.

Artificial intelligence is a science that seems to have reached critical mass, with lots of new announcements and advancements hitting the news on a regular basis. Perhaps the biggest AI story over the past year was the release of the ChatGPT AI, which promises to revolutionize not only how AI is trained and operates, but also how this incredibly powerful science can be made available to anyone by simply asking it questions in plain language. I reviewed ChatGPT when it debuted, and found that it not only lived up to the hype, but exceeded my highest expectations, doing everything I asked of it, from programming in C++ to creating a cute bedtime story.

I also had some fun over the holidays with the image generation component of the AI—a program called DALL-E—and directed it to generate both cute and powerful images using nothing but my words and imagination. Both ChatGPT and DALL-E are free to experiment with, so give them a try if you have not yet done so. 

And while ChatGPT is the first widely used and publicly available AI with a natural language processing interface—the company got millions of users the first week it was released—there are sure to be many others in the near future. But as amazing as this new flavor of AI technology is, it also brings up questions of ethics—not just for ChatGPT but all future projects in this area. In a previous Nextgov column, I talked with the Founder and CEO of Credo AI, Navrina Singh, about the importance of ethical AI and some of the dangers that the new technology could foster. This was several weeks before ChatGPT was released, so her warnings did not specifically take that into account.

Now that ChatGPT has shown what is possible, even for regular people who are not data scientists or AI specialists, other experts in the AI field are weighing in as well. While everyone I talked with was genuinely excited and impressed with the new technology, there were also some concerns. 

This week I talked with two AI experts on the topic. The first was Sagar Shah, a client partner with Fractal.ai, a global AI and advanced analytics company. And the second was Moses Guttmann, the CEO and co-founder of ClearML, a machine learning operations—MLOps—platform being used by some of the largest companies in the world.

Nextgov: ChatGPT has really shown us what is possible with AI technology in a way that anyone can experience. But beyond just that platform, is the science of AI also rapidly advancing in terms of complexity and capabilities?

Guttmann: I think that in the past five years we have seen an immense growth of AI, from academic proof of concepts to purposely built products reaching general audiences. With the increase in efforts of democratizing AI, we are seeing more companies adopt machine learning—ML—as part of their internal research and development efforts. I believe that will continue.

Shah: The field is rapidly changing and I think we’ve seen the fruits of that labor in the last few years, with continued progress in the development of more advanced and powerful machine learning algorithms. Natural language processing—NLP—and the data used to inform machine learning models have helped push the technology forward. I also believe that the growth of MLOps has played a major role in developing more sophisticated AI by creating a more efficient loop for experimentation, iteration and deployment of machine learning models, ultimately resulting in greater scalability and maintenance of more complex AI systems. Recently, Fractal’s research arms in deep reinforcement learning, quantum computing, generative AI and neuroscience have been generating good insights into what’s coming next in the world of AI.

Nextgov: The release of ChatGPT has really set the world on fire. What makes ChatGPT so special compared with everything that came before it?

Shah: The caliber of its NLP technology is one thing, and the team behind that, as it was designed to create text that’s difficult to distinguish from human-written text. I think a major factor that makes ChatGPT stand out, however, is the human element associated with its training methodology. Reinforcement learning through conversations and responses sourced from people, with a reward model algorithm, plays a critical role in its ability to generate the natural-sounding responses we see now, as well as learn from its mistakes and become better at engaging in conversations over time.

Guttmann: The main leap ChatGPT presents is, in essence, the ability to curate knowledge in an extractable way. Specifically, you can think of generative NLP models as entities with the ability to generate content that has roots in a knowledge base. This connection is what makes them create coherent content. Well, sometimes coherent content. 

With the additional conversational capabilities, this knowledge can now be extracted via a simple question or a set of questions. Specifically, when you ask ChatGPT a question, you interact with the model’s understanding of the data it was trained on. This is exactly the interface we have as human beings to access knowledge from one another. That makes ChatGPT a true leap in creating models that really learn from data, because the definition of learning is creating patterns and rules, but also to have the ability to communicate them. This is truly amazing.

Nextgov: What are the dangers of having such powerful technology given to the public?

Guttmann: Well, as they say, never trust anything you find on the internet. I guess now more than ever. The main difference with ChatGPT is scale. If someone wants to create enough content with a specific flavor to it, they can now automate that process and shift the weight. And that is truly alarming.

For the sake of completeness, someone could have trained an AI model for that specific task and then used that model for the rephrasing, but this process is definitely not accessible for the general public, and not even for software engineers.

Shah: The big danger is having misinformation presented as fact. Although ChatGPT’s AI can be used to create college essays or write applications, the sometimes technical nature of these tasks requires human fact-checking to ensure the output is even usable. After a long enough prompt, think 500 words or more, ChatGPT’s cadence begins to become repetitive in sentence structure. And the knowledge base that it’s modeled on isn’t current either, as it only goes up to 2021, so the AI is, in effect, operating on a lag in terms of available data.

Nextgov: One of the things that some AI experts say is required to eliminate many of the dangers associated with AI is to have the technology be trained and deployed ethically. What actually is ethical AI?

Shah: The formal definition of ethical AI is one that upholds key standards, such as fairness, transparency, privacy, accountability, human centricity, adaptability and contestability. Every AI should be ethical, but we are currently far away from that.

Guttmann: Ethical AI can have a lot of definitions as “ethicality” is a fluid and ever changing concept. But, generally speaking, it refers to building ML models that are thoughtfully and transparently trained. If models are thoughtfully and transparently trained, they’re less likely to be biased or do harm to a company, a group of people or society at large.

Candidly, most ML models don’t need to be filtered through an ethical lens. The overwhelming majority are often automating more rote tasks and activities. In these instances, ethicality is less of a factor. 

But models that have more of a direct impact on a human’s decision making or livelihood should be built morally, ethically and transparently. Models that make decisions about people’s access to healthcare or employment, for example, need to be ethical.

Nextgov: Okay, that makes sense. So, how do we ensure that AIs are built ethically in the future?

Shah: We need to first establish clear ethical guidelines for AI development, such as ensuring AI systems respect user privacy and autonomy, promote fairness and non-discrimination and are transparent about their decision-making processes. It’s also important to incorporate ethical considerations into the design of AI systems, such as using AI algorithms that are able to explain their decisions, or using AI systems to promote the public good. 

As AI becomes more prominent, there might be a need to establish independent oversight bodies to help ensure AIs are created in a responsible manner, while educating developers and decision-makers on the potential risks of developing AI without ethical parameters. And finally, there should be investment in research on AI development techniques that mitigate bias in AI systems and promote its use for positive social good.

Guttmann: Ethicality has to be a guiding light from the beginning of a model’s development all the way through to the end. That means allowing for model auditing by humans, educating and training models with diverse datasets, bringing together a wide range of ML experts with different backgrounds to support the ML’s creation, and having explicit rules and guidelines in place for how models can be used or commercialized.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys

Article link: https://www.nextgov.com/emerging-tech/2023/01/scientists-weigh-ethics-next-generation-ai/381617/

Health care is a universal right, not a luxury, pope says – NCR

Posted by timmreardon on 01/18/2023
Posted in: Uncategorized. Leave a comment

Health care is not a luxury, it is a right that belongs to everyone, Pope Francis told health care workers.

“A world that rejects the sick, that does not assist those who cannot afford care, is a cynical world with no future. Let us always remember this: health care is not a luxury, it is for everyone,” the pope said.

The pope was speaking Jan. 16 with members of an Italian federation of professional associations of technicians and specialists working in the fields of radiology, rehabilitation and preventative medicine.

He expressed his deep gratitude for their work, especially during the pandemic.

“Without your commitment and effort many people who were ill would not have been looked after,” he said. “Your sense of duty inspired by the power of love enabled you to serve others, even putting your own health at risk.”

In a world marked by a throwaway culture, the health professionals promote a culture of care, embodied in the good Samaritan, who does not look the other way, but approaches and helps a person in need with compassion, the pope said.

People who are ill are “asking to be cared for and to feel cared for, and that is why it is important to engage with them with humanity and empathy” along with meeting the highest professional standards, he said.

However, he added, people working in the field of health care also need people to care for them, too.

That kind of care must come “through recognition of your service, protection of proper working conditions and involvement of an appropriate number of caregivers, so that the right to health care is recognized for everyone,” he said.

Every country must actively seek “strategies and resources in order to guarantee each person’s fundamental right to basic and decent health care,” he said, quoting this year’s message for the World Day of the Sick, to be celebrated Feb. 11.

Article link: https://www.ncronline.org/vatican/vatican-news/health-care-universal-right-not-luxury-pope-says

Healthcare Justice

Posted by timmreardon on 01/16/2023
Posted in: Uncategorized. Leave a comment
