healthcarereimagined

Envisioning healthcare for the 21st century


Future NIST AI Guidance Could Take Industry-Specific Approach – Nextgov

Posted by timmreardon on 02/07/2023
Posted in: Uncategorized.

By ALEXANDRA KELLEY, FEBRUARY 3, 2023

NIST leadership commented on the need for a tailored approach to artificial intelligence risk management, and how understanding an industry is key to anticipating AI system risks.

On the heels of the launch of its debut Artificial Intelligence Risk Management Framework, the National Institute of Standards and Technology continues to posture its standards-setting approach to prioritize human engagement with emerging technologies. 

One detail that NIST researchers may target in forthcoming editions of their AI RMF guidance is a set of more tailored recommendations on AI and machine learning algorithms for different industries, according to an agency official. 

Elham Tabassi, the chief of staff in the Information Technology Laboratory at NIST, spoke in a Friday discussion hosted by the George Washington University and expanded upon the new AI RMF’s purpose and applications.

While the RMF was designed to be broadly applicable to a variety of use cases as a general, high-level framework, Tabassi said she ultimately thinks that AI risk management techniques should vary between different sectors and implementation is best conducted with the help of groups with domain expertise. 

“AI is all about context and use case, and how either of these trustworthy characteristics…will manifest themselves in different use cases in [the] financial sector, versus hiring, versus face recognition…and they may have different priorities,” she said. “There is need for a tailored guidance to that specific technology or use case.”

She added that conversations have already begun on developing a tailored AI RMF for specific applications. An updated version of the accompanying AI playbook authored by NIST is slated to be released in Spring 2023.

“What we were trying to do is giving some general, technology-agnostic, sector-agnostic framework, but strong enough foundation that all these verticals could be built up,” she said. 

Tabassi also noted that, in the open-comment period to collect feedback and input on the current AI RMF, NIST received over 400 sets of comments from people in government, academia and the private sector.

Article link: https://www.nextgov.com/emerging-tech/2023/02/future-nist-ai-guidance-could-take-industry-specific-approach/382575/

Have we learnt nothing from SolarWinds supply chain attacks? Not yet it appears – The Register

Posted by timmreardon on 02/06/2023
Posted in: Uncategorized.

From frameworks to new federal offices it’s time to get busy

Jeff Burt, Sun 5 Feb 2023 // 12:00 UTC

The hack of SolarWinds’ software more than two years ago pushed the threat of software supply chain attacks to the front of security conversations, but is anything being done?

In a matter of days this week, at least four disparate efforts to shore up supply chain security were announced, an example of how front-of-mind such risks have become and of the push from vendors and developers to reduce them.

The threat is growing. Gartner expects that by 2025, 45 percent of organizations globally will have experienced a software supply chain attack, a three-fold jump from 2021. That’s no surprise, according to Neatsun Ziv, CEO of Ox Security, a startup building an open MITRE ATT&CK-like framework that enterprises can use to check their software supply chains.

“These kinds of attacks become super, super lucrative just because the [hits] that you could get from a single weapon is not proportional to anything else you see in the industry,” Ziv told The Register.

As with the SolarWinds attack, a miscreant can inject malicious code into a piece of software before it is shipped to customers, compromising their systems downstream. Organizations have been slow to catch up to this.

More recently, attackers have targeted code repositories like GitHub and PyPI, as well as companies like CI/CD platform provider CircleCI, in an incident that expanded the definition of a supply chain attack, according to Matt Rose, field CISO for cybersecurity vendor ReversingLabs.

“What the CircleCI incident illustrates is that organizations have to not only be concerned about malware being injected into a compiled object or deliverable, but also of the tooling used to build them,” Rose wrote in a blog post. “That’s why the CircleCI hack is an eye opener to a lot of organizations out there.”

One framework for them all

The OSC&R (Open Software Supply Chain Attack Reference) framework was launched this week, founded by Ziv, a former vice president of cybersecurity at Check Point, and other security pros with backgrounds at companies such as Google, Microsoft, GitLab, and Fortinet.

The idea is to give enterprises a common framework for evaluating and measuring the risk to their supply chains, something that has traditionally been done with intuition and experience. OSC&R will give organizations a common language and tools for understanding attack tactics and defenses, prioritizing threats, and tracking threat group behavior.

It will be updated as new tactics crop up, will help with red-team penetration exercises, and will take contributions from other vendors. The group took the concepts MITRE ATT&CK uses for ransomware and endpoints and applied them to the supply chain.

“The challenge was that there was no framework to get us from a basic understanding to our ability to check our environment if we are susceptible to the supply chain attacks,” Ziv said.

The framework touches on nine key areas – such as container and open-source security, secrets hygiene, and CI/CD posture – and outlines the techniques used by attackers in such areas as initial access, persistence, privilege escalation, and defense evasion. It will grow in both features and contributors, he said.

The OpenVEX spec

In the same spirit, supply chain security vendor Chainguard is heading up a group that includes HPE, VMware, and The Linux Foundation to jumpstart adoption of the Vulnerability Exploitability eXchange (VEX), a tool for addressing vulnerabilities in enterprise software. It’s supported by agencies like the US National Telecommunications and Information Administration (NTIA) and the Cybersecurity and Infrastructure Security Agency (CISA).

Enter the OpenVEX specification and reference toolchain

“Up until today, VEX has been a concept the industry has invested time debating and building minimum requirements around,” Chainguard founder and CEO Dan Lorenc wrote. “With the release of OpenVEX, organizations can now put VEX into practice.”

OpenVEX will work as a companion to software bills of materials (SBOMs), which help with transparency but can create “noise” in the industry, Lorenc wrote. With OpenVEX, suppliers can state precisely whether the vulnerabilities in their products are actually exploitable, helping end users filter out false positives.
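To make that filtering concrete, here is a minimal sketch (in Python, for readability) of an OpenVEX-style document. The field names follow the published OpenVEX spec, but the document ID, product identifier, and CVE number are hypothetical, invented purely for illustration:

```python
import json

# Illustrative OpenVEX-style document: a supplier asserts that a given
# CVE does not affect a given product, so downstream scanners can
# suppress the false positive. All identifiers here are hypothetical.
vex_doc = {
    "@context": "https://openvex.dev/ns",
    "@id": "https://example.com/vex/2023-0001",      # hypothetical doc ID
    "author": "Example Corp Product Security",
    "timestamp": "2023-02-06T12:00:00Z",
    "version": 1,
    "statements": [
        {
            "vulnerability": {"name": "CVE-2023-12345"},   # hypothetical CVE
            "products": [{"@id": "pkg:apk/wolfi/example-app@1.2.3"}],
            "status": "not_affected",
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}

print(json.dumps(vex_doc, indent=2))
```

A scanner that consumes such a document alongside an SBOM could suppress alerts for that CVE on the listed product, since the supplier has declared it not affected and stated why.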

Chainguard has put OpenVEX into some of its products, including Wolfi, its container-specific Linux distribution, and its secure-by-default Images container base images.

For its part, cybersecurity vendor Checkmarx is building onto the supply chain security offering it released in March 2022 with a threat intelligence tool that focuses on the supply chain. It includes information such as the identification of malicious packages by attack type (like typosquatting or dependency confusion), analysis of the operators behind the attack, how the packages operate, and the historical data behind them.

“This intel is all about tracking purpose-built, malicious packages that often contain ransomware, cryptomining code, remote code execution, and other common types of malware,” wrote Stephen Gates, principal content marketing manager for Checkmarx.
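One signal behind typosquatting detection can be sketched in a few lines: flag package names that closely resemble, but do not exactly match, popular package names. This is an illustrative approximation, not Checkmarx’s actual method, and the package lists here are invented:

```python
import difflib

# Invented list of "popular" packages a typosquatter might imitate.
POPULAR = ["requests", "numpy", "pandas", "urllib3", "cryptography"]

def typosquat_candidates(name, known=POPULAR, cutoff=0.85):
    """Return popular packages this name closely resembles (but isn't)."""
    matches = difflib.get_close_matches(name, known, n=3, cutoff=cutoff)
    return [m for m in matches if m != name]

print(typosquat_candidates("reqeusts"))   # a near-miss of "requests"
print(typosquat_candidates("numpy"))      # exact match of a real name: empty
```

Real intel feeds combine signals like this with behavioral analysis of the package itself; name similarity alone is only a first filter.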

CISA on the move

CISA reportedly is creating an office to address supply chain security and work with the public and private sectors to put federal policies in place. According to a report in the Federal News Network, Shon Lyublanovits is leading the initiative. She heads the project management office for cyber supply chain risk management (C-SCRM), which is part of CISA’s cybersecurity division.

The issues the office will address range from counterfeit components to open-source software vulnerabilities.

It’s the latest step for CISA, which has had a focus on supply chain security since creating a task force for IT and communications technology in 2018.

Varun Badhwar, co-founder and CEO at supply chain security vendor Endor Labs, applauded CISA’s decision to create the office, telling The Register that establishing “a new capability at such a high level stands out as a milestone.”

However, it’s important to understand the complexities of the problem, Badhwar said. Open-source components run throughout the software lifecycle, and organizations first need to secure the open-source software they use. Enterprises and agencies use an average of more than 40,000 open-source packages downloaded by developers, and each of those can pull in another 77 dependencies.

“This causes a massive, ungoverned sprawl that increases the supply chain attack surface across multiple dimensions,” he said, adding that Endor Labs has found that 95 percent of open source vulnerabilities are found in the transitive dependencies.
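The sprawl Badhwar describes comes from transitive dependencies: a short breadth-first walk over a toy, invented dependency graph shows how a couple of direct dependencies fan out into many more packages:

```python
from collections import deque

# Toy dependency graph (invented): "app" declares two direct dependencies,
# but those pull in further packages of their own.
DEPS = {
    "app":    ["web", "orm"],
    "web":    ["http", "json"],
    "orm":    ["sql", "json"],
    "http":   ["tls"],
    "json":   [],
    "sql":    [],
    "tls":    ["crypto"],
    "crypto": [],
}

def all_dependencies(root, graph):
    """Return every package reachable from root (excluding root itself)."""
    seen, queue = set(), deque(graph.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(graph.get(pkg, []))
    return seen

direct = set(DEPS["app"])
total = all_dependencies("app", DEPS)
print(f"direct: {len(direct)}, total: {len(total)}, "
      f"transitive: {len(total - direct)}")
```

Even in this tiny graph, most of what the app ships is transitive, which is why vulnerabilities concentrate there: no one on the team chose those packages directly.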

Article link: https://www.theregister.com/AMP/2023/02/05/supply_chain_security_efforts/

Why 2 Lawmakers Turned to AI to Advocate for Its Own Regulation – Nextgov

Posted by timmreardon on 02/02/2023
Posted in: Uncategorized.

By ALEXANDRA KELLEY, FEBRUARY 1, 2023

The congressmen have been using artificial intelligence systems to write Congressional texts, as they push for both regulation and innovation in the field.

Two members of the House made waves last week by using artificial intelligence to advocate for greater regulation of the technology itself. Reps. Ted Lieu, D-Calif., and Jake Auchincloss, D-Mass., spearheaded conversation about the need for legislative action on artificial intelligence by using such systems to write a bill and a speech for the House floor, respectively.

Both congressmen, who each have backgrounds in tech sectors, have been busy introducing legislation to regulate and responsibly foster development in the AI sector. Lieu introduced the first bill in Congress that was written with AI, using the popular software ChatGPT. 

Speaking with the Washington Post in a live interview on Wednesday, Lieu described the process of using AI to write a proposed piece of legislation as “enthralling,” and also concerning.

“I was blown away by how well it wrote that first paragraph and the fact they could do it [in] under a minute,” he said. “I was also deeply alarmed that it was able to write an amazing paragraph in under a minute that largely aligned with my own views.”

Lieu detailed his experience in writing a bill with AI technology to make a case for sophisticated federal legislation.

“We need to make sure that we use AI for good but also keep it from doing harm to society,” he said. 

Auchincloss made a similar play, reading out an AI-generated speech on the House floor when reintroducing the United States–Israel Artificial Intelligence Center Act, which would foster collaboration in AI research and development between the two nations. 

Speaking to Nextgov, Auchincloss said that his motivation to advocate for more AI research efforts and regulation within the U.S. stemmed from the looming ubiquity of machine learning systems.

“This technology [AI]—I know—is going to be part of my career for decades to come. And it could be a general purpose technology for my children, meaning that, in any sector that they chose to work, it would be a key tool that they would need to use,” he said. “I wanted to spotlight this for Congress so that we have a debate now about purposeful policy for AI and not be 10 years behind the ball, like I think a lot [of] policy was for social media.” 

Lieu spoke in favor of regulations for AI, and noted that government should be prioritizing the specific technology systems that demand oversight due to their threat potential, such as national security systems or those that could impact road safety through augmented vehicles.

“I don’t think it’s possible to regulate artificial intelligence in every discrete instance in which it is used,” he said. “I think a better approach is to have a general agency do regulations. And also, when an agency gets it wrong, they can also reverse the decision without having to get another act of Congress to change whatever it is that they initially did.”

As the newest member appointed to the House Committee on Science, Space and Technology, Lieu confirmed he will continue to focus on marrying AI innovation with responsibility—through an announcement written by an AI chatbot. Lieu assured communication staffers that their jobs were not in danger of AI replacement in that same press release.

Lieu conceded during the Washington Post event that AI will cause some level of disruption to the overall job market, both in the elimination and creation of jobs.

To determine the broad impact of AI and how Congress can mitigate the negative and emphasize the positive, Lieu called for a “blue ribbon committee” staffed with experts from industry and academia to help the government navigate AI’s societal impacts. 

“It is very important, I think, to have stakeholders and experts—from industry, from government, from academia and other places—look at these issues and really make some good recommendations for Congress on how we proceed forward and what is going to be [an] enormous disruption in the next few years,” he said.

Article link: https://www.nextgov.com/emerging-tech/2023/02/why-2-lawmakers-turned-ai-advocate-its-own-regulation/382458/

Closing the Digital Divide in Government: 5 Strategies for Digital Transformation – Nextgov

Posted by timmreardon on 01/31/2023
Posted in: Uncategorized.

By VIRAL CHAWDA AND ANDY GOTTSCHALK JANUARY 31, 2023 09:00 AM ET

Citizens are demanding a more connected government.

Change is seldom easy. Yet for government and public sector executives, the need to modernize has never been greater, as there is a growing digital divide between constituent expectations and what many governments can offer. The COVID-19 pandemic shifted the dynamic when it came to the digital imperative for government. More than ever, citizens are demanding a more connected government with integrated services, relevant experiences and self-driven service connection. Whether it’s paying taxes online or applying for benefits anytime and anywhere with the ease experienced in other aspects of their lives, citizens are seeking a government that is as modern as the technology they use in their daily lives. A failure to move forward with modernization efforts will only lead to a decline in government agencies’ ability to deliver services and potentially reinforce the prevailing view of government as inefficient, out of touch and indifferent to the needs of its constituents. 

Today, some agencies are relying on legacy systems running programs that are more than a quarter century old and can no longer be supported or updated. Converting systems and applications of that age, scale and complexity to modern cloud-based solutions is difficult. And finding the right talent to orchestrate the change is not easy either. By contrast, leading agencies will move into the future by aggressively pursuing new technologies such as cloud, artificial intelligence, blockchain, low code platforms and analytical engineering. By some estimates, with digital transformation, savings for governments could exceed $1 trillion over the course of the next decade.  

As government and public sector agencies continue on their digital transformation journey this year, here are five strategies to adopt moving forward:

  1. Have a customer-centered design mindset when introducing new technology so that users will find new platforms and applications easy to use and will embrace them.
  2. Begin treating data like a product to be shared enterprisewide rather than continually duplicated in siloed departments. 
  3. Accelerate the use of a development, security and operations—DevSecOps—approach to creating new software applications, embedding security into them from the outset. 
  4. Continue to modernize technology stacks to include SaaS platforms, advanced cloud, AI and edge computing capabilities—as well as strong data management capabilities—and make it easy for end users to access the data insights they need for decision-making. 
  5. Embrace modularity and containerization to help with the challenge of modernizing large and complex legacy systems and applications. Modularity refers to dividing large software applications into smaller modules, while containerization refers to running those applications in an isolated environment.

Modernization is now more important than ever to bridge the digital divide and meet constituents’ rising expectations. These are just some of the many steps governments should consider as they continue on their digital journeys.

Viral Chawda is a principal and head of government technology at KPMG U.S. and Andy Gottschalk is a partner for health and government solutions at KPMG U.S. The views expressed are the authors alone and do not necessarily represent those of KPMG LLP.

Article link: https://www.nextgov.com/ideas/2023/01/closing-digital-divide-government-5-strategies-digital-transformation/382326/

Twelve Problems Negatively Impacting Defense Innovation – AEI

Posted by timmreardon on 01/27/2023
Posted in: Uncategorized.

By William C. Greenwalt AEIdeas January 26, 2023

As the US wrestles with a rapidly changing security environment, the creation of new military capabilities to counter these growing threats is essential. Defense leaders are being forced to relearn, as they have had to do in every conflict since the Revolutionary War, that one cannot just turn on a spigot and obtain weapons on demand. Industrial base constraints reliably manifest themselves in multi-year lead times such as we are seeing today to replace munitions used in Ukraine. Our quandary, however, is much greater than just reconstituting peacetime stocks of existing systems. Any impending conflict, or perhaps more optimistically the ability to deter a future conflict, will require not only production at scale, but innovation at scale that hasn’t been seen since WWII and the early Cold War.

The US is nowhere near being ready to embark on such an effort. Before doing so the Department of Defense (DOD) and Congress need to understand the depths of the issues that are holding back America’s ability to regain the level of technological dominance necessary to maintain deterrence or prevail in a war if deterrence fails. The following twelve problem areas are offered to begin to frame that understanding. We need to focus our attention on the right problems, as well-intended solutions to the wrong ones will end up just exacerbating our decline. This is not an all-inclusive list. One could easily nail 95 innovation theses to the doors of the Pentagon if security would allow it. Still, this represents an initial attempt to identify some of the more significant barriers that will prevent US military success unless we act soon. 

1) There is no sense of urgency yet. Defense management systems and the industrial base are optimized for a peacetime cadence after 30 years without a Great Power conflict. It took years to get to this point and without focused leadership we will never adjust to a different set of circumstances.

2) Process compliance is our most valued objective rather than time. Time to operational capability as described in the report “Competing in Time” has been the primary historical forcing function for disruptive innovation, and yet it is not valued in DOD or Congress.  

3) We are all communists now. Just as was the case in the Soviet Union, centrally planned, linear, predictive processes and mindsets destroy innovation and creativity. These processes took root at DOD in the 1960s under McNamara and have had 60 years to engrain themselves in culture.

4) Budget inflexibility in the year of execution and long lead times to allocate resources are the root cause of our declining competitiveness and innovation failures (especially in the many versions of the Valley of Death).

5) The predictive and lumbering requirements process forecloses innovation opportunities from the start as it is the gateway to the acquisition and budgeting system.

6) Operational interests are not aligned or supported within the acquisition and budgeting systems – both at the combatant command and service component command levels. 

7) The barriers to civil-military integration of the industrial base have continued to widen as DOD prefers to dictate solutions to defense-unique monopoly providers that have taken on many of the characteristics of pre-WWII government-run arsenals.

8) Defense contracting has become more of an enforcer of socio-economic programs and goals than an enabler of capability. Unique non-market rules keep out non-traditional and commercial companies and solutions and drive up costs.

9) The authority and ability of program officials to do their jobs have been limited by adversarial oversight. Incentives and rules drive contracting officers to enforce process rather than capability outputs or program objectives. Testing, technology, and auditing bureaucracies double down on “gotcha,” check-the-box oversight rather than providing cooperative insights and proactive value-add.

10) Production capability is a key component of innovation and has been allowed to deteriorate in both the traditional and commercial industrial bases. DOD ignored the implications of the last two decades of commercial globalization and production outsourcing to China that have hollowed out the US industrial base. Just-in-time efficiency requirements and barely sustainable minimum production rates have destroyed defense-specific industrial capabilities and undermined military readiness.

11) Incentives for industry are not aligned to DOD innovation interests. The preponderance of cost contracts, counterproductive reimbursement rates and policies, and a lack of program opportunities have left the traditional defense industrial base and government organic depots built around long-term maintenance revenues and decades-long weapon systems franchises, making it politically difficult to modernize.

12) Security and technology control policies (ITAR) are built around an era of US defense technological dominance that has long passed and now serve as barriers to innovation. Both Silicon Valley and allied cooperation will be needed to compete against China but outdated thinking and processes hinder such cooperation.

Article link: https://www.aei.org/foreign-and-defense-policy/defense/twelve-problems-negatively-impacting-defense-innovation/

NIST Debuts Long-Anticipated AI Risk Management Framework – Nextgov

Posted by timmreardon on 01/26/2023
Posted in: Uncategorized.

By ALEXANDRA KELLEY, JANUARY 26, 2023 02:00 PM ET

With the launch of the AI RMF 1.0, federal researchers focused on four core functions to structure how all organizations evaluate and introduce more trustworthy AI systems.

The National Institute of Standards and Technology unveiled its long-awaited Artificial Intelligence Risk Management Framework on Thursday morning, representing the culmination of an 18-month-long project that aims to be universally applicable to any AI technology across all sectors. 

Increasing trustworthiness and mitigating risk are the two major themes of the framework, which NIST Director Laurie Locascio introduced as guidance to help organizations develop low-risk AI systems. The document outlines types of risk commonly found in AI and machine learning technology and how entities can build ethical, trustworthy systems. 

“AI technologies have significant potential to transform individual lives and even our society. They can bring positive changes to our commerce and our health, our transportation and our cybersecurity,” Locascio said at the framework’s launch event. “The AI RMF will help numerous organizations that have developed and committed to AI principles to convert those principles into practice.”

The framework offers four interrelated functions as a risk mitigation method: govern, map, measure, and manage.

“Govern” sits at the core of the RMF’s mitigation strategy and is intended to establish a foundational culture of risk prevention and management at any organization using the RMF.

Building atop the “Govern” foundation, “Map” comes next in the RMF game plan. This step contextualizes potential risks in an AI technology and broadly identifies the positive missions and uses of any given AI system, while simultaneously taking its limitations into account.

This context should then allow framework users to “Measure” how an AI system actually functions. Crucial to the “Measure” component is employing metrics that reflect universal scientific and ethical norms. Measurement is then applied through “rigorous” software testing and further informed by external expert analysis and user feedback.

“Potential pitfalls when seeking to measure negative risk or harms include the reality that development of metrics is often an institutional endeavor and may inadvertently reflect factors unrelated to the underlying impact,” the report cautions. “Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact and human-AI configurations.” 

The final step in the AI RMF mitigation strategy is “Manage,” whose main function is to allocate risk mitigation resources and ensure that previously established mechanisms are continuously implemented. 

“Framework users will enhance their capacity to comprehensively evaluate system trustworthiness, identify and track existing and emergent risks and verify efficacy of the metrics,” the report states.

Business owners participating in the AI RMF also expressed optimism about the framework’s guidance. Navrina Singh, the CEO of AI startup Credo.AI and member of the U.S. Department of Commerce National Artificial Intelligence Advisory Committee, said that customers seeking AI solutions want more holistic plans to mitigate bias.

“Most of our customers…are really looking for a mechanism to build capacity around operationalizing responsible AI, which has done really well in the ‘Govern’ function of the NIST AI RMF,” she said during a panel following the RMF release. “The ‘Map, Measure, Manage’ components and how they can be actualized in a contextual way, in all these specific use cases within these organizations, is the next step that most of our customers are looking to take.”

The new guidance was met with broad bipartisan support, with Rep. Zoe Lofgren, D-Calif., and Rep. Frank Lucas, R-Okla., both sending congratulatory messages for the launch event.

“By taking a rights affirming approach, the framework can maximize the benefits and reduce the likelihood of any degree of harm that these technologies may bring,” Lofgren said at the press briefing. 

Community participation from a diverse group of sectors was critical to the development of the framework. Alondra Nelson, the Deputy Director for Science and Society at the White House Office of Science and Technology Policy, said that her office was one of the entities that gave NIST extensive input into the AI RMF 1.0. She added that the framework, like the White House AI Bill of Rights, puts the human experience and impact from AI algorithms first. 

“The AI RMF acknowledges that when it comes to AI and machine learning algorithms, we can never consider a technology outside of the context of its impact on human beings,” she said. “The United States is taking a principled, sophisticated approach to AI that advances American values and meets the complex challenges of this technology and we should be proud of that.” 

Much like the AI Bill of Rights, NIST’s AI RMF is a voluntary framework, with no penalties or rewards associated with its adoption. Regardless, Locascio hopes that the framework will be widely utilized and asked for continued community feedback as the agency plans to issue an update this spring. 

“We’re counting on the broad community to help us to refine these roadmap priorities and do a lot of heavy lifting that will be called for,” Locascio said. “We’re counting on you to put this AI RMF 1.0 into practice.”

Comments on the AI RMF 1.0 will be accepted until February 27, 2023, with an updated version set to launch in Spring 2023.

Article link: https://www.nextgov.com/emerging-tech/2023/01/nist-debuts-long-anticipated-ai-risk-management-framework/382251/

Artificial Intelligence Risk Management Framework – NIST

Posted by timmreardon on 01/26/2023
Posted in: Uncategorized.

Artificial intelligence (AI) can learn and adapt. Organizations developing or using AI products can do the same, but the innovations will bring risks.

Now, NIST’s Artificial Intelligence Risk Management Framework can help.

The framework, released today, is voluntary guidance that equips organizations to think about AI and risk differently. It promotes a change in institutional culture, encouraging organizations to approach AI with a new perspective — including how to think about, communicate, measure and monitor AI risks.

Get your organization started: https://lnkd.in/ekJZJsTM

#AI #ArtificialIntelligence #RiskManagement #NIST

https://www.linkedin.com/posts/nist_ai-artificialintelligence-riskmanagement-activity-7024400931756625920-bdpJ?

Plan for Federal AI Research and Development Resource Emphasizes Diversity in Innovation – Nextgov

Posted by timmreardon on 01/24/2023
Posted in: Uncategorized.

By ALEXANDRA KELLEY, JANUARY 24, 2023 03:00 PM ET

The National Artificial Intelligence Research Resource Task Force released its operating framework, making the case for implementation.

The future of a federal artificial intelligence resource will focus on incorporating diversity and accessibility into the creation of new technologies, in a bid to both democratize access to the emerging technology and ensure a lack of bias in machine learning systems. 

On Tuesday, the task force charged with planning development of a National Artificial Intelligence Research Resource unveiled its Congressionally-mandated report that acts as guidance in establishing a formal AI research and development landscape in the U.S. 

Key pillars in the roadmap focus on how to open access to the emerging AI/ML fields for socioeconomically marginalized Americans, especially when these technologies have historically harmed vulnerable groups.

“So bottom line up front is that AI is driving scientific discovery and economic growth across a range of sectors, and at the same time, it’s raising new challenges related to its ethical and responsible development views,” a NAIRR Task Force member said during a press briefing. “Access to the computational and data resources that drive the cutting edge of AI remains primarily limited to those who are working at large tech companies and well-resourced universities.”

To better bridge this AI resource gap, the task force’s final report recommends establishing the NAIRR with four measurable foundations: spurring innovation, increasing diversity in the AI/ML field, improving career capacity and broadly advancing trustworthy AI systems.

“The key takeaway from the final report is that a NAIRR would connect America’s research community to the computational data and testbed resources that fuel AI research through a user friendly interface and associated training and user support,” the task force member said. 

The creation of the NAIRR was one of the provisions set out in the National Artificial Intelligence Initiative Act of 2020, designed to serve as a collection of federal resources for AI development, like testing data and software interfaces. 

In addition to the bevy of computational resources the NAIRR would offer, policies governing the usage of these platforms would hinge on civil rights and civil liberties research review criteria, in addition to ethics training for NAIRR users.

Task force members anticipate needing a budget of $2.6 billion over an initial six-year time frame to support NAIRR operations. Funding and oversight for the NAIRR would be directed by the National Science Foundation. A separate steering committee would be tasked with managing the resource’s activity and funding distribution.

AI regulation has been a chief priority among tech-oriented federal agencies. Both the National Institute of Standards and Technology and the White House Office of Science and Technology Policy have developed separate AI governing frameworks in recent years. These similar roadmaps, called the AI Risk Management Framework and the Blueprint for an AI Bill of Rights, respectively, will influence how the NAIRR implements safeguards for its resource offerings.

NIST will be releasing its Artificial Intelligence Risk Management Framework on Thursday.

“The rollout of these two reports on the same week is really showcasing the dual priority of advancing AI innovation and doing so in a manner that mitigates risk and advances best practices and responsible and trustworthy AI,” the NAIRR task force member said.

Article link: https://www.nextgov.com/emerging-tech/2023/01/plan-federal-ai-research-and-development-resource-emphasizes-diversity-innovation/382144/

SOFTWARE DEFINES TACTICS – War on the Rocks

Posted by timmreardon on 01/23/2023
Posted in: Uncategorized. Leave a comment

JASON WEISS AND DAN PATT

JANUARY 23, 2023

The book “The Bomber Mafia” revolves around the efforts of military officers to develop effective tactics for a bomber force, emphasizing the feedback cycles between operational results and new bombsight and aircraft development. This storyline is timeless: Major military advances like the Blitzkrieg tactics credited with Germany’s rapid gains at the opening of World War II are largely about pioneering new tactics enabled by emerging technology — at that time, aircraft, radios, and armor. While the ability of the U.S. military and its industrial base to bend metal and push the bounds of physics with aircraft design has slowed dramatically as technology and bureaucratic processes have matured, there still lies ahead a bold frontier of new innovation.

Currently, military-technical competitive advantage is driven more by software, data, and artificial intelligence and the new tactics they enable than by bent metal and physics. This is apparent across different military contingencies, from scrappy improvised battle networks in Ukraine to plans for future fighter aircraft that will team with a mix of humans and unmanned systems. New analytics tools and processing algorithms can make nearly real-time sense of data from thousands of sensors scattered across domains. Automated control systems manage vehicle guidance, sensor tasking, and occasionally introduce unexpected failure modes. Planning tools allow operators to innovate on new force packages and tactics and to replay and rehearse scenarios hundreds of times. Software is the glue that binds our military professionals and systems together, and increasingly the loop between development and operations is defined by digital data.

Acquisition leaders can, therefore, deliver results and create a vibrant environment of learning and improvement by following six principles that offer guidance on how to set up infrastructure permitting code to change quickly and securely, how to divvy up complex software efforts into manageable chunks that can still interoperate, and how to embed digital talent, user feedback, and security practices into a project.

The Department of Defense’s acquisition community has been thrown into a turbulent sea of software buzzwords and technological change that does not conform to current bureaucratic processes, industrial base momentum, and external pressures. Indeed, there are important efforts to recognize the need for acquisition authorities to reprioritize how money is spent before the annual budget is passed, new software acquisition frameworks, and even an experimental color of money designed to accommodate the fact that code can shuffle between development and operations several times in a day.

While new tools will be helpful, the acquisition community also recognizes the promise of software for military advantage and can scrape together enough tools and authorities to deliver on this promise today. What would be most useful would be a simple set of principles that can cut through the complexity of choices that program executive officers and program managers face. We’ve recently written a report designed to pick up where the Defense Innovation Board’s seminal Software is Never Done study left off.

In particular, our report found that there is no one-size-fits-all best practice for software acquisition. Every effort is different: Much like logistics professionals orchestrate handoffs between ships, planes, and trucks to deliver goods on time, acquisition professionals must learn to orchestrate between web applications, embedded systems, and middleware systems. They must mix and match between software-as-a-service, source code delivery, and exposed interfaces in order to deliver digital capability for the mission.

There is a proven set of principles that can help program managers skillfully guide their acquisitions as though they are a software company seeking to deliver a weapon system. This is similar to how executives at Amazon, Netflix, Airbnb, and Uber harnessed software, data, and machine learning to deliver market advantage in what were legacy industries of retail, entertainment, hospitality, and transportation. We call these principles “Acquisition Competency Targets for Software.”

Software defines tactics

Source:  Software Defines Tactics: Structuring Military Software Acquisitions for Adaptability and Advantage in a Competitive Era by Jason Weiss and Dan Patt.

The Path to Artificial Intelligence Starts With Allowing Software to Evolve 

Data and artificial intelligence and machine learning are widely considered important to the military’s future: Just look at the establishment last year of a new organization that reports directly to the deputy secretary — the Chief Digital and Artificial Intelligence Office. But officials must recognize that the origin of all digital data is software. The fresher and higher quality data is, the better data scientists can define artificial intelligence/machine learning models. Feedback loops are required to validate and mature each specific model. Direct and immediate access to software applications that are designed and built to accept and deploy artificial intelligence/machine learning models create the closed loop. Simply put, software begets data, data begets artificial intelligence/machine learning models, and these models need to be integrated into software to be actualized. The buzz-worthy commercial AI developments of 2022 like ChatGPT or Midjourney are built by processing data at nearly the internet’s scale, but are founded, enabled, and delivered on a robust software foundation. To ignore the role of software in this closed loop in favor of more intentional pursuit of data or artificial intelligence/machine learning strategies offers limited returns: Each part of this digital triad must be advanced simultaneously to realize battlefield advantages.

Software, in particular, must be acquired with an eye to future change — not only to enable artificial intelligence but also to enable security in light of ever-evolving threats. The best means to enable this constant change is a software factory or pipeline, which is the set of tools and processes that take the raw ingredients of working software — including source code — and deliver operational capability. The foundational Acquisition Competency Target for Software charges program executive officers to use a software factory suited to the needs of their particular project, but recognizes that the costs and complexities of building out a functioning software factory are significant. Instead, programs should first evaluate and explore an expanding set of existing Defense Department software factories to determine if one is close to meeting program needs. Investing in closing a small gap is likely cost effective and going to result in access to a functioning software ecosystem faster than starting from scratch.

We also recognize the foundational importance of security in defense software. In fact, it is so important that security checks should be built into the delivery pipeline from the beginning.  The Acquisition Competency Target for Software #2 recognizes that this can only happen by working closely with an authorizing official to jointly partner and invest in the ingredients required to achieve a continuous authorization to operate. Early partnership here acknowledges that too many authorizing officials are still unsure what “shifting cybersecurity left into the delivery pipeline” means, what it accomplishes, or why they should support these activities. Education and dialogue along with adoption of a “Yes, if…” mentality will speed deployments into operationally relevant environments.

The DNA of a Better Joint Force is Born From Allowing Systems to Recombine

From the Defense Advanced Research Projects Agency’s Mosaic Warfare concept to the new Joint Warfighting Concept, the U.S. military’s operational concepts are increasingly built on the idea of being able to flexibly recombine mission systems into an ever-larger array of kill chain options. But the mechanics of this are accomplished by splicing software systems together. The reality is that this process is closer to organ transplant than playing with building blocks. There are some key principles that enable this, though — and they start with defining and exposing interfaces, often referred to as application programming interfaces.

Application programming interfaces are the quintessential mechanism for interacting with software. Well-defined interfaces can mitigate data access risks for programs. Acquisition Competency Target for Software #3 emphasizes this point by establishing the need to own your application programming interfaces, even more so than the source code or executable. After all, it is application programming interfaces that are the pre-requisite for establishing tip and cue capabilities between disparate platforms — not data or artificial intelligence/machine learning models. Furthermore, the enterprise architecture will ebb and flow with time, and a robust set of application programming interfaces marginalizes the impact of decommissioning old architectures in favor of new architectures in programs that measure their lifetime in years. Choosing how to define modularity does require good judgement, however.
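The interface-ownership idea can be sketched in a few lines of code. The names below (a `TrackReporter` interface and two vendor implementations) are invented for illustration and do not correspond to any actual defense system: the point is that mission code is written against the program-owned interface, so vendors can be swapped without touching it.

```python
from typing import Protocol

class TrackReporter(Protocol):
    """The interface the program owns -- a hypothetical example, not a real DoD API."""
    def report_track(self, track_id: str, lat: float, lon: float) -> None: ...

class VendorASensor:
    def report_track(self, track_id: str, lat: float, lon: float) -> None:
        print(f"[vendor A] track {track_id} at ({lat}, {lon})")

class VendorBSensor:
    def report_track(self, track_id: str, lat: float, lon: float) -> None:
        print(f"[vendor B] {track_id}: {lat},{lon}")

def fuse(reporter: TrackReporter) -> None:
    # Mission code depends only on the owned interface, never on a vendor class.
    reporter.report_track("T-001", 33.7, -84.4)

fuse(VendorASensor())  # swapping vendors requires no change to mission code
fuse(VendorBSensor())
```

Decommissioning one vendor's architecture in favor of another's then touches only the implementation classes, which is the "marginalizes the impact" property the principle describes.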

A recurring theme heard in both government and private sector corporations is that people, not technology, are the most important assets. The need to attract, hire, and retain top-tier technologists means that programs must learn to behave like a software recruiter. Leadership at Kessel Run recognized the futility of recruiting staff using government websites and intentionally turned to social media to create demand. Kessel Run was depicted as an exclusive club, something people should seek to be a part of even if it meant abandoning an existing software start-up to take a government position with a reduced compensation package. The Acquisition Competency Target for Software #4 reminds program managers that they should behave like a software recruiter because digital modernization requires digital talent.

Use Modern Tools, Concepts, and Contracting Approaches to Deliver Resilience

In Silicon Valley venture capital circles, delivering software-as-a-service, usually by charging a monthly fee for a cloud-delivered web application, has been popular for more than a decade. The critical concept that underpins this success is the idea of a contract — a service-level agreement — built on an indicator that defines and quantitatively measures abstract notions like system availability. Development teams use these service level indicators to design and operate a system at a specific level, like 99.9 percent uptime. When one organization partners with another to deliver a service, they forge a service-level agreement that legally stipulates expectations and financial penalties for not meeting a specific service level, like that uptime guarantee. Acquisition Competency Target for Software #5 aims to recognize that anything that can be easily measured in service levels should be a strong candidate for outsourced operation instead of investing time and money towards in-house custom development. This principle is critical and is worth expanding upon.
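The arithmetic behind an uptime service level is easy to make concrete. A minimal sketch, assuming a 30-day month, shows how a "three nines" target becomes a measurable downtime budget a contract can enforce:

```python
# A 99.9 percent uptime target translates into a concrete monthly "error budget."
SECONDS_PER_30_DAY_MONTH = 30 * 24 * 60 * 60  # 2,592,000 seconds

def allowed_downtime_seconds(slo: float,
                             period_seconds: int = SECONDS_PER_30_DAY_MONTH) -> float:
    """Seconds of downtime a service may accrue while still meeting its SLO."""
    return period_seconds * (1.0 - slo)

print(round(allowed_downtime_seconds(0.999)))   # 2592 -- about 43 minutes a month
print(round(allowed_downtime_seconds(0.9999)))  # 259 -- about 4 minutes a month
```

Because the quantity is objectively measurable, either side of a service-level agreement can verify it, which is what makes such capabilities good candidates for outsourcing.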

Accountants think in terms of spreadsheets, graphic artists in terms of page layout, and project managers in terms of the venerable Gantt chart, which depicts deliverables as diamonds plotted against their anticipated time of arrival to assist production planning. The Gantt chart is excellent for assembling a list of tasks, dependencies, and milestones and visualizing progress to date. This project management tool has proven indispensable for coordinating industrial activities across a production floor at a given time. In the industrial setting, the hard part is not connecting the pieces, it is getting all the pieces to arrive on the factory floor at just the right moment.

The very essence of software, and in particular DevOps, is the opposite. To generalize, advanced commercial software for operating a modern enterprise is available on-demand as software-as-a-service. Enterprises can access everything from state-of-the-art resource planning to product life cycle management software, from learning management systems to human resource management systems. The hard part is not getting all the pieces to arrive by a milestone on a calendar using software — the difficulty is connecting them to interoperate reliably and at speed.

The integration between different architectures and application programming interface styles, combined with dozens of competing programming standards, creates delivery delays and affects the reliability and operational speed of a system. Software integration is not analogous to 20th-century industrial integration, and the Gantt chart has proven to be an unreliable management technique for software integration activities.

When a program can cleanly express a need, including its reliability, in a service-level indicator, and if established vendors have the capability to meet or exceed those metrics, then the program’s first instinct should be to outsource that capability to a commercial or dual-use provider. Doing this allows the program to focus on building novel solutions for the hard-to-define requirements that existing providers cannot demonstrably meet. When the act of production is difficult, Program Executive Offices should want diamonds (on a Gantt chart), and when the speed and reliability of data in motion are difficult, they should want service levels.

The final principle for navigating in an era where “software is eating the world” is Acquisition Competency Target for Software #6: defining zero-trust outcomes within the program’s software applications. This means breaking up software systems into defensible chunks to shift the cybersecurity conversation from the discovery of a vulnerability meaning total compromise to one where intruder actions trigger an active response at the speed of relevance. Designers try to “air gap” the most important networks, making sure that there are no connections to the internet or outside systems via which vulnerabilities can be injected, but even these formidable moats can be stormed by intruders. While this is not an excuse to fail to build in formidable defenses, software is so extensive and dynamic that vulnerabilities and attacks are inevitable.

In recognition of this, the Defense Department has published a Zero Trust Strategy, but program managers also have a role in establishing incentives for realizing software applications that natively exhibit resilience. When built into the structure of the code and the delivery pipeline, attackers are unable to discern if they breached an application’s defenses under a “permit” or a “deny with countermeasures” condition that has alerted its security team and returned invalid and illogical data.
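The "deny with countermeasures" behavior can be illustrated with a short sketch. Everything here is invented for illustration (the endpoint, the token store, the decoy values); the technique it shows is the one described above: a failed check alerts defenders and returns plausible-but-invalid data instead of an error that would tip off the intruder.

```python
import logging
import secrets

log = logging.getLogger("zero-trust")

AUTHORIZED_TOKENS = {"token-ops-1"}  # illustrative placeholder, not a real credential store

def fetch_track_data(token: str) -> dict:
    """Hypothetical zero-trust endpoint: every request is re-verified, and a
    failed check triggers an alert plus decoy data, so the caller cannot tell
    a 'permit' from a 'deny with countermeasures' response."""
    if token in AUTHORIZED_TOKENS:
        return {"track": "T-001", "lat": 33.7, "lon": -84.4}
    log.warning("unauthorized access attempt; returning decoy data")
    # Plausible-but-wrong coordinates: structurally valid, operationally useless.
    return {"track": "T-001", "lat": 90 * secrets.randbelow(100) / 100, "lon": 0.0}
```

The design choice is that both branches return the same shape of data; only the defenders' logs distinguish them.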

A Strategic Advantage From Better Software Delivery

If the Department of Defense acquires and develops software according to these six Acquisition Competency Targets for Software, with high-speed delivery mechanisms capable of quick changes, then software will enable superior force adaptability for the U.S. military compared to its strategic competitors. Adaptability is not an inherent property of software-defined systems. Once software has been deployed, changing its design and behaviors becomes extremely expensive. Once code has been compiled and deployed and disconnected from the build chain that produced it, it becomes rigid and resistant to change. Together, these six Acquisition Competency Targets for Software offer a counterbalance.

Sustainable operation of a software factory (Acquisition Competency Target for Software #1) running on a system that has embraced the principles of a continuous authorization to operate (Acquisition Competency Target for Software #2) provides a secure pathway for continuous improvement of software, data, and the deployment and feedback required for successful artificial intelligence/machine learning activities. When a program owns its application programming interfaces (Acquisition Competency Target for Software #3), it has the ability to experiment with new vendors, from the small upstart to large prime, constantly adapting to new threats without incurring the overhead of large-scale custom integration. Talent (Acquisition Competency Target for Software #4) is required to create these application programming interfaces and to properly design and build zero trust-enabled software applications (Acquisition Competency Target for Software #6). Finally, those aspects of the software architecture that can easily be measured and delivered to well-defined service levels by existing vendors (Acquisition Competency Target for Software #5) enables programs to maximize investments in novel research and development, not mundane IT operations.

Software delivery is about getting the right bits to the right places at the right time. Making these acquisition competency targets clear for software is another step forward in helping the acquisition community more efficiently navigate the software world. We must recognize and explore the value of diversity of form — the heterogeneity of software and the fact that a one-size-fits-all approach is incongruent with the software-defined world the warfighter must fight in. The Department of Defense must act in a way that recognizes that software, not legacy warfighting platforms, controls the speed and efficacy of the modern kill chain and military dilemma. And finally, the Department of Defense needs to formally recognize the digital triad of software, data, and artificial intelligence/machine learning as equal peers. Doing so will help the Department of Defense find its footing in an era where software defines tactics.

Jason Weiss is a visiting scholar at the Hudson Institute, former chief software officer for the Department of Defense, and holds a leadership position at Conquest Cyber.

Dan Patt is a senior fellow at the Hudson Institute, a former Defense Advanced Research Projects Agency official, and the former CEO of Vecna Robotics.

Image: 78th Air Base Wing Public Affairs photo by Tommie Horton

Article link: https://warontherocks.com/2023/01/software-defines-tactics/

Is China About To Destroy Encryption As We Know It? Maybe – Defense One

Posted by timmreardon on 01/22/2023
Posted in: Uncategorized. Leave a comment

PATRICK TUCKER JANUARY 20, 2023

A new research paper claims to offer a quantum-powered code-breaker of spectacular power. “If it’s true, it’s pretty disastrous,” says one expert.

Late last month, a group of Chinese scientists quietly posted a paper purporting to show how a combination of classical and quantum computing techniques, plus a powerful enough quantum computer, could shred modern-day encryption. The breakthrough, if real, would jeopardize not only much U.S. military and intelligence-community communication but financial transactions and even your text messages. 

One quantum technology expert said simply “If it’s true, it’s pretty disastrous.” 

But the breakthrough may not be all it’s cracked up to be.

The paper, “Factoring integers with sublinear resources on a superconducting quantum processor,” is currently under peer review. It claims to have found a way to use a 372-qubit quantum computer to factor the 2,048-bit numbers in the RSA encryption system used by institutions from militaries to banks to communication app makers. 

That’s a big deal because quantum experts believed that it would require a far larger quantum computer to break RSA encryption. And IBM already has a 433-qubit quantum processor.
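To see why factoring power translates directly into broken RSA, here is a toy example with deliberately tiny primes (real RSA-2048 uses a 2,048-bit modulus built from 1,024-bit primes): whoever factors the public modulus n can rederive the private key.

```python
# Toy RSA: anyone who can factor n recovers the private key, which is why a
# factoring breakthrough threatens RSA regardless of how the factoring is done.
p, q = 61, 53            # tiny primes for illustration only
n = p * q                # public modulus (3233)
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent, derivable only via p and q

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt with the private key

# An attacker who factors n simply repeats the derivation of d:
for f in range(2, n):
    if n % f == 0:
        p_found, q_found = f, n // f
        break
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(cipher, d_cracked, n) == msg  # private key fully recovered
```

Trial division works here only because n is tiny; the security of RSA-2048 rests entirely on the assumption that no machine, classical or quantum, can perform that factoring step at scale.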

The Chinese researchers claim to have achieved this feat by using a quantum computer to scale up a classical factoring algorithm developed by German mathematician Claus Peter Schnorr. 

“We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm. Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance,” they wrote.

Lawrence Gasman, founder and president of Inside Quantum Technology, says he’s a bit skeptical, but  “It’s enormously important that some people in the West come to some real conclusions on this because if it’s true, it’s pretty disastrous.” 

Gasman said the paper’s most alarming aspect is the idea that it might be possible to break key encryption protocols not with a hypothetical future quantum computer but a relatively simple one that could already exist, or exist soon. 

“If you look at the roadmaps that the major quantum computer companies are putting out there, talking about getting to a machine of the power that the Chinese are talking about, frankly, I don’t know. But you know, this year, next year, very soon. And having said that, I tend to be a believer that it’s going to happen soon.”

Yet Gasman said he was concerned about the numbers cited in the paper: “There’s a lot of hand-waving in there.” 

Andersen Cheng, CEO of the company Post-Quantum, said via email: “The general consensus in the community is that whilst these claims cannot be proven to work there is no definitive evidence that the Chinese algorithm cannot be successfully scaled up either. I share this skepticism, but we should still be worried as the probability of the algorithm working is non-zero and the impact is potentially catastrophic. Even if this algorithm doesn’t work, a sufficiently powerful quantum computer to run Shor’s algorithm”—a method of factoring the very large numbers used by RSA—”will one day be designed – it is purely an issue of engineering and scaling the current generation of quantum computers.”

Defense One reached out to several U.S. government experts, who declined to comment on the paper. But University of Texas at Austin computer science professor Scott Aaronson was a bit harsher on the paper in his blog earlier this month. To wit: “No. Just No.”

Wrote Aaronson: “It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so. All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen…many.” 

So is the paper a fraud, a “catastrophe,” or something in between? Gasman says that while the political race for quantum supremacy is tightening, it would be uncharacteristic of the Chinese research community to make a bold, easily punctured false claim. He described the majority of published quantum research out of China as fairly “conventional” and said it’s unlikely that China would risk its stature as a leader in quantum science by pushing bunk papers. 

“Nobody’s going to say, ‘Oh, it’s the Chinese and they, you know, they’re dissembling and it’s all about the rivalry with the West or the rivalry with the [United States]’,” he said.

Gasman added that while China leads in some aspects of quantum science (such as quantum networking) and quantum computer science, having built the world’s “fastest” quantum computer, the United States leads in many other aspects. 

Even if this paper turns out to be wrong, it is a warning of what’s to come. The U.S. government has become increasingly concerned about how quickly key encryption standards could become obsolete in the face of a real quantum breakthrough. Last May, the White House told federal agencies to move quickly toward quantum-safe encryption in their operations. 

But even that might be too little, too late. Said Cheng: “We need to be prepared for the first [Cryptographically Relevant Quantum Computer] to be a secret – it is very likely that when a sufficiently powerful computer is created we won’t immediately know as there won’t be anything like mile-high mushroom clouds on the front covers, instead, it will be like the cracking of Enigma – a silent but seismic shift.”

Article link: https://www.defenseone.com/technology/2023/01/china-about-destroy-encryption-we-know-it-maybe/382041/
