healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

NIST Debuts Trustworthy and Responsible AI Resource Center – Nextgov

Posted by timmreardon on 03/31/2023
Posted in: Uncategorized.

By Alexandra Kelley | March 30, 2023, 12:46 PM ET

The new resource builds upon NIST’s previously-issued framework and playbook, while offering a visual roadmap for the safe creation and risk management of AI systems.

The new Trustworthy & Responsible Artificial Intelligence Resource Center built by the National Institute of Standards and Technology will now serve as a repository for much of the current federal guidance on AI, featuring easy access to previously issued materials to help public and private entities alike create responsible AI systems.

Formally launched on Thursday, the new AI Resource Center builds upon NIST’s AI Risk Management Framework and AI Playbook to support industry best practices in researching and developing socially responsible AI and machine learning systems absent overarching federal law.

“The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies,” the homepage reads.

Illustrated on the Resource Center homepage is a visualization of the AI Roadmap, containing several key pillars of the RMF, including U.S. alignment with international tech standards, expanding the Department of Defense’s Test and Evaluation, Verification and Validation efforts, using data to measure the RMF’s efficacy, enhancing RMF guidance and interpretability, expanded use cases, and more context on the human-AI component in risk management.

“The Roadmap identifies key activities for advancing the AI RMF that could be carried out by NIST in collaboration with private and public sector organizations—or by those organizations independently,” the homepage says. “Work described in the Roadmap is intended to help fill gaps in knowledge, practice or guidance and be useful to broader audience [sic] in pursuit of trustworthy and responsible AI.”

The homepage added that the Roadmap’s activities and plans stand to change as AI technologies evolve. NIST noted on the website that it expects “in the coming months” to add enhancements to the Resource Center to include new document links, access to an international standards hub, metrics resources for AI systems testing and software tools. 

In addition to quick access to the RMF and Playbook, NIST’s AI resource website offers a formal glossary of definitions to further establish a fundamental vernacular integral to the AI industry. A section spotlighting new events, such as training workshops and visiting fellows research, related to NIST’s AI agenda, is also featured on the site.

Article link: https://www.nextgov.com/emerging-tech/2023/03/nist-debuts-trustworthy-and-responsible-ai-resource-center/384618/

Streamlining the Joint Capabilities Integration and Development System (JCIDS) Processes – Acquisition Innovation Research Center

Posted by timmreardon on 03/31/2023
Posted in: Uncategorized.

In support of the Department of Defense (DoD) response to Congressional concerns regarding delays in JCIDS processes, the Acquisition Innovation Research Center (AIRC) assessed how it can improve the efficiency of developing and approving capability requirements and develop a model to show the effects of proposed alternatives.

In a presentation titled “Assessments of the Process for Developing Capability Requirements for DoD Acquisition Programs” on research into a validation process used by the DoD to identify, develop, and meet the needs and requirements of warfighters, AIRC modeled JCIDS processes and proposed improvements such as incorporating lean approaches to the requirements validation process, rapid prototyping, changes to policy, and training for a cadre of requirements professionals.  

Currently, it takes more than two years to identify capability gaps and choose a solution approach through JCIDS, leaving interoperability between military forces at risk and warfighters vulnerable to threats. 

“That is a problem,” said Dr. Donald Schlomer, Acquisition Program Manager SOF AT&L – Policy Manager, US Special Operations Command, who spent three years researching JCIDS processes for his doctoral dissertation, “How do you use current technology if it takes you two and a half years just to get a requirement validated to address and acquire a technology?” 

Schlomer worked alongside Dr. Mo Mansouri, Stevens Institute of Technology, and Dr. Michael McGrath, AIRC Fellow, to develop a problem-focused approach to remedy JCIDS delays and identify improvements. The three researchers presented their findings at the 2022 Annual Research Review, during which they described how they modeled the requirements process based on a sample of Navy programs, assessed the interface between services and joint staff, and identified automotive industry and foreign acquisition best practices to produce a trio of recommended improvements that can potentially streamline the requirements validation process.  

“The idea was to figure out what’s causing the delays in the process and see if we can patch that in the best way using all of the optimization methodologies that we had in mind and could apply,” said Mansouri. 

Their approach relied heavily on the simulation model based on Joseph Wirthlin’s research on acquisition at MIT. “It didn’t include software acquisition programs,” Mansouri said. “We updated it, but we worked on the same level because that was a good baseline for us.” Then, they applied Value Stream Mapping based on Hugh McManus’ handbook for product development, a process first used by Toyota. “We used the same principles and methodology to make the process linear,” Mansouri continued. “The qualitative methodology was mostly focused on interviews with other companies and entities that have a similar approach to requirement development.”  

The first recommended improvement focuses on lean approaches and doesn’t require any policy changes but has the potential to reduce end-to-end time by 25 percent. The second recommendation is more intensive and requires an overhaul of the entire JCIDS process to combine the Initial Capabilities Document and Capabilities Development Document into one integrated document that, according to Mansouri, would “reduce the time by 50 percent, from 852 days to 444 days.” The third approach aligns JCIDS with the Defense Acquisition System using the Adaptive Acquisition Framework and will require significant changes in the law, policies, and manuals. 
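The quoted reductions can be sanity-checked with simple arithmetic. The short Python sketch below reproduces the figures cited above; the numbers are the article's, not new data, and the ~48 percent result rounds to the "50 percent" quoted.

```python
# Illustrative arithmetic only, using the figures quoted above (not new data).
baseline_days = 852                      # current end-to-end requirement validation time
lean_days = baseline_days * (1 - 0.25)   # recommendation 1: lean approaches, ~25% faster
combined_doc_days = 444                  # recommendation 2: combined ICD/CDD, quoted directly

reduction = 1 - combined_doc_days / baseline_days   # ~48%, cited as "50 percent"
print(f"Lean approaches: ~{lean_days:.0f} days")
print(f"Combined document: {combined_doc_days} days ({reduction:.0%} reduction)")
```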

“What we discovered here is a capability gap in the JCIDS process itself,” said McGrath. “There’s no authoring system, but authoring a requirement is a challenging thing. You’ve got this handbook that’s 341 pages. You’ve got a policy directive that’s another 114 pages. And the only professionals who really understand that in-depth are the gatekeepers in joint staff and services. The last recommendation was one that we thought was needed. Just as there are acquisition professionals and budget process professionals, there needs to be a cadre of requirements professionals.”  

McGrath continued: “When you look at the agile acquisition framework, we heard that there’s a strong incentive for middle-tier acquisition work around the delays of the JCIDS process. Rapid prototyping and putting a prototype in the hands of a warfighter and then using that as the basis for defining the requirement for what you want to go to production offers a much faster path and one that can keep pace with technology.” 

As is the case with many exploratory research projects, there were constraints. Because of how the project was defined, the team had to look at this problem from the perspective of gatekeepers. “That was one caveat of this research: the level of abstraction was set for us,” said Mansouri. And the model relied on a limited data set. “That’s another restriction of this research. Obviously, they couldn’t give us a lot of these cases, and it has to be sanitized and categorized based on the type of program as well.” 

As further developments occur, success will hinge on embracing better end-to-end governance processes, streamlined documents and staffing, agile requirements, and missions and systems engineering solutions — indicating that digital transformation is integral to improving JCIDS processes. 

“Automating whatever we can is a principle of lean systems — an AI-assisted workflow management that enforces a schedule and is also useful in collecting data,” said Mansouri. “Digitalization is upon us, and that’s probably one of the most important things we should invest in.” 

View the recording from the third quarterly AIRC-DAU Research Forum to learn more about this ongoing research. Follow AIRC on LinkedIn for updates on acquisition innovation.

EXECUTIVE SUMMARY AND REPORT

Article link: https://acqirc.org/news/treamlining-the-joint-capabilities-integration-and-development-system-jcids-processes/

FDA requires medical devices be secured against cyberattacks – CNN

Posted by timmreardon on 03/30/2023
Posted in: Uncategorized.

By Jennifer Korn

Published 3:56 PM EDT, Wed March 29, 2023

New York (CNN) — 

The Food and Drug Administration will now require that medical devices meet specific cybersecurity guidelines, after years of concerns that a growing number of internet-connected products used by hospitals and healthcare providers could be hit by hacks and ransomware attacks.

Under FDA guidance issued this week, all new medical device applicants must now submit a plan on how to “monitor, identify, and address” cybersecurity issues, as well as create a process that provides “reasonable assurance” that the device in question is protected. Applicants will also need to make security updates and patches available on a regular schedule and in critical situations, and provide the FDA with “a software bill of materials,” including any open-source or other software their devices use.
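For readers unfamiliar with the term, a software bill of materials (SBOM) is essentially a machine-readable inventory of the software components inside a device. The sketch below is a minimal, hypothetical example loosely modeled on the CycloneDX format; the component names and versions are illustrative, not drawn from any actual FDA submission.

```python
import json

# Hypothetical, minimal SBOM for an internet-connected medical device,
# loosely modeled on the CycloneDX JSON format.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "operating-system", "name": "embedded-linux", "version": "5.10"},
        {"type": "library", "name": "openssl", "version": "3.0.8"},
        {"type": "library", "name": "sqlite", "version": "3.41.0"},
    ],
}
print(json.dumps(sbom, indent=2))
```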

The new security requirements came into effect as part of the sweeping $1.7 trillion federal omnibus spending bill signed by President Joe Biden in December. As part of the new law, the FDA must also update its medical device cybersecurity guidance at least every two years. 

A 2022 report released by the FBI cited research finding 53% of digital medical devices and other internet-connected products in hospitals had known critical vulnerabilities. The report listed a number of medical devices that are susceptible to cyber attacks, including insulin pumps, intracardiac defibrillators, mobile cardiac telemetry and pacemakers. 

“Malign actors who compromise these devices can direct them to give inaccurate readings, administer drug overdoses, or otherwise endanger patient health,” according to the FBI report.

In 2021, a group of researchers investigating software used in medical devices and machinery used in other industries found over a dozen vulnerabilities that, if exploited by a hacker, could cause critical equipment such as patient monitors to crash.

The FDA has faced criticism over the years for not doing enough. 

A 2018 report from the US Department of Health and Human Services’ Office of the Inspector General said the FDA was not adequately protecting devices from getting hacked.

“FDA had plans and processes for addressing certain medical device problems in the postmarket phase, but its plans and processes were deficient for addressing medical device cybersecurity compromises,” the report said. 

CNN’s John D. Sutter contributed to this report.

Article link: https://www.cnn.com/2023/03/29/tech/fda-medical-devices-secured-cyberattacks/index.html

How fear of future quantum hacks could expose sensitive data now – Bulletin of the Atomic Scientists

Posted by timmreardon on 03/30/2023
Posted in: Uncategorized.

By Nicolas Ayala Arboleda | March 30, 2023

Quantum computers that can crack standard encryption algorithms may arrive in a few years, a few decades, or maybe never. However, they are already having a significant impact. A new cryptographic arms race is developing around quantum computers, in a dynamic that threatens much of the modern world’s digital infrastructure.

Governments and big tech companies are frantically searching for ways to apply and counter the power of this technology. Most notably, they are developing encryption schemes that would be resistant to cyberattacks from quantum computers, also known as post-quantum encryption schemes. The challenge: Quantum-resistant algorithms can be vulnerable to conventional hacking.

Last summer, a classical computer that is not even capable of running Windows 11 broke a quantum-resistant encryption algorithm in less than an hour. For comparison, classical computers require hundreds of trillions of years to break the public key encryption schemes that are standard in everything from banking to security systems. The cracked algorithm, Microsoft’s SIKE (Supersingular Isogeny Key Encapsulation), was one of the US government’s late-stage candidates for modernizing current cryptography standards and preventing potential adversaries with quantum computers from unveiling highly sensitive data.

There is a real danger that, by attempting to address the risk of quantum hacking, security architectures might be opening another more devastating and immediate vulnerability. Had SIKE been prematurely deployed on critical systems, and its weakness been discovered by an adversary, the economic and security consequences would have been terrible.

However, continuing to rely on potentially outdated conventional encryption may be equally dangerous. If quantum computers were to break public key encryption, there would be significant consequences for the economy, privacy, and security. For example, hackers could use this capability to compromise US national security systems. This would potentially expose classified information, including intelligence and military data. Moreover, financial transactions, emails, digital signatures, and other confidential information could be decrypted.

A lack of understanding of quantum technologies in policy circles, international tensions, and infrastructural challenges further complicate the dilemma and increase the risk of miscalculations.

The race to post-quantum. Most current public key encryption relies on mathematical operations that are easy to solve but difficult to reverse. For example, it is easy to multiply two large prime numbers. However, finding these exact numbers by factorizing their product is extremely time-consuming for a classical computer.

Quantum computers could approach factorization by either turning it into an optimization problem or by applying a method called Shor’s algorithm. Shor’s is one of the few currently known algorithms that could allow quantum computers to perform dramatically better than conventional computers. This improved capability could crack the most popular types of public key encryption exponentially faster than classical computers.
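The asymmetry described above is easy to see on a small scale. The toy Python sketch below multiplies two primes instantly and then recovers them by naive trial division; real public keys use primes hundreds of digits long, which is what makes the reverse step infeasible for classical machines.

```python
# Toy illustration of the asymmetry described above. Multiplying two primes is
# instant; recovering them from the product by naive trial division takes far
# longer as the numbers grow.
p, q = 104729, 104723      # two small primes (illustrative only)
n = p * q                  # the "easy" direction

def factor(n):
    """Naive trial division: runtime grows exponentially with the number of digits."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return n, 1

print(n, factor(n))        # recovers (104723, 104729) after ~100,000 divisions
```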

The systems theoretically capable of these feats are known as “cryptographically significant” quantum computers, and they are likely decades away from existing. Initially, only powerful governments and big tech companies are expected to have access to these systems, because of their enormous cost and complexity.

These quantum computers would threaten encrypted messages sent before and after their invention. Countries are currently intercepting and storing data with the hope of decrypting it later, if these countries manage to develop a capable quantum computer. This method is known as “harvest now, decrypt later.” Some experts estimate that every message sent today is being collected by at least two countries or private organizations. However, the real extent of this practice is unknown.

The US National Institute of Standards and Technology is attempting to improve public key encryption by establishing a new post-quantum cryptographic standard. The new algorithms that the institute is compiling do not require quantum computers for their development. However, the schemes have taken considerable time to engineer. After a six-year competition, the organizers selected four initial winners and short-listed four additional algorithms as finalists for possible future implementation. SIKE was among the latter group.


After announcing the results, the institute encouraged the cryptography community to try to break the new algorithms. This vetting process brought in outside perspectives, in an attempt to identify issues that insiders might have overlooked. Only a month after the announcement, cryptographers from the research university KU Leuven were able to break SIKE’s encryption. Their research showed that a single-core computer, which applies mathematics developed in the 1990s and 2000s, can decrypt the algorithm in about an hour.

This type of hack is partial proof that the vetting process is working. If members of the public find vulnerabilities and communicate them to standard-setters, the standards institute can prevent malicious actors from exploiting these flaws at a later stage. In SIKE’s case, Microsoft encouraged hackers in the general public to share their findings by offering a $50,000 bounty. The reward system was successful in this instance. However, it is unclear whether money will always be enough to prevent hackers from trying to sell their findings to higher bidders.

There might be other reasons to worry. Jonathan Katz, a professor of computer science at the University of Maryland and core faculty member in the Maryland Cybersecurity Center, told Ars Technica: “It is perhaps a bit concerning that this is the second example in the past six months of a scheme that made it to the 3rd round of the [National Institute of Standards and Technology] review process before being completely broken using a classical algorithm.”

The other algorithm Katz refers to is Rainbow, which researchers cracked earlier in the year. Katz goes on to advise caution, noting that three of the four winners of the institute’s process “rely on relatively new assumptions whose exact difficulty is not well understood.”

Additionally, vetters do not currently have a cryptographically significant quantum computer to deploy against these new encryption schemes. Therefore, standard-setters are limited to running purely theory-based testing. Until post-quantum schemes undergo practical tests, there will be questions surrounding the algorithms’ reliability.

The dangers of fear. Quantum technologies have gained an aura of extreme complexity and occasional urgency. Periodically, a flurry of headlines on the imminent impact of quantum technologies rouses policy makers and industry members who do not understand how the technology works. This combination of alleged inexplicability and urgency is counterproductive, even dangerous.

Q-Day is a clear example. Q-Day is a narrative claiming that a large quantum computer will one day be able to suddenly crack existing public key encryption systems. In this scenario, the quantum computer would almost immediately decrypt crucial elements of international security and finance. Catastrophic consequences for defense would follow, along with monetary losses and a collapse of trust in the international financial and security architectures. It is a grim picture.

While Q-Day does identify a few genuine potential risks, some of its assumptions are contestable. Chief among them is viewing the advent of cryptographically significant quantum computers as a question of “when” and not “if.” Experts and institutions have argued that there is no guarantee cryptographically significant quantum computers will ever be a reality.

Additionally, Q-Day’s apocalyptic scenario assumes that quantum computers will develop explosively, almost overnight. Considering the numerous significant technical challenges that remain, it is unlikely that quantum computers will improve at this pace, although it is difficult to predict when or if breakthroughs will happen.

Finally, the Q-Day narrative treats certain important hurdles for conducting quantum hacking as trivial. For example, it ignores that hackers bent on decrypting data would require access to encrypted files and time on a quantum computer, which would be a precious finite resource.

Unchecked, these assumptions could lead policy makers to rush standard-setting processes, producing vulnerabilities in cryptography schemes. Policy makers can prevent this by refusing to see quantum technology as an impenetrable domain, and instead making an effort to better understand it. Free resources published by nations, companies, and science communicators can help the “quantum curious” learn about this new technology.

International competition is also putting pressure on the United States, European Union, China, and other nations to develop quantum-resistant cryptography. Every so often, one of these competitors claims to be on the cusp of breaking public key encryption. The latest episode featured a group of Chinese researchers claiming that they had engineered a new quantum decryption algorithm. This method would allegedly work on significantly smaller quantum computers than initially thought necessary for efficient cryptanalysis. However, as is often the case in this field, the results were not as significant as initially thought.

Given the highly sensitive nature of technologies used for quantum cryptography, it is very difficult to assess the true progress being made. International tensions and imperfect information allow fears of strategic surprise to fester. A strategic surprise is an unexpected change that challenges current strategic assumptions. In this case, the development would be societies losing an essential, secure channel of communications. A government could develop and use a cryptographically significant quantum computer without the knowledge of others, leaving competitors guessing whether they are being hacked or not.

Mitigating the risk of strategic surprise will require careful development and implementation of robust post-quantum encryption. The issue has recently gained more political attention in the United States, with the Senate and the Biden administration taking action. The latter instructed federal agencies to begin preparing for the transition to post-quantum encryption. However, a wider migration will require software and hardware changes to an extensive set of devices, likely taking at least a decade and costing billions of dollars. This level of effort and time horizon will require consistent action.

There is a long road ahead before societies can achieve a reliable post-quantum public key encryption system. However, companies and governments can currently implement security measures to better protect data and support the transition to quantum-resistant encryption. These measures range from less technical, such as risk assessments, to highly complex, such as implementing quantum key distribution. Data managers can also set up honeypots (encrypted but useless data) to mislead attackers, and compartmentalize their data and encrypt each part separately.

Beyond these technical solutions, governments and industry can educate themselves to avoid falling prey to hype. Governments could partly ease international tensions by holding—or at least not preventing—dialogues between scientists, engineers, and policy makers to better communicate and understand threat perceptions. These discussions could help avoid unwanted confrontations.

Moreover, discussing quantum hacking in policy and military doctrines could help clarify its use. However, it would be difficult for outsiders to verify whether countries are following doctrines or international agreements that may limit the use of quantum computers.

Ideally, governments and industry should mitigate the technical and political risks of quantum hacking, a capability that might never come to fruition—while also being careful not to provide everyone who has a moderately modern computer with the tools to hack government secrets.

Article link: https://thebulletin.org/2023/03/how-fear-of-future-quantum-hacks-could-expose-sensitive-data-now/

Technologists, Experts Call for Halt on Advanced AI Development Over ‘Risks to Society’ – Nextgov

Posted by timmreardon on 03/30/2023
Posted in: Uncategorized.

High-profile signees include Tesla and SpaceX chief Elon Musk and Apple co-founder Steve Wozniak.

More than 1,000 artificial intelligence experts, industry chiefs and technologists have signed an open letter calling for a six-month pause in developing more advanced artificial intelligence systems, citing “profound risks to society and humanity.”

The open letter, authored by the non-profit Future of Life Institute, comes as OpenAI launches GPT-4, the next iteration of the AI model behind its ChatGPT platform and the successor to GPT-3.5, which generated headlines for its ability to perform human-like activities, like writing reports and engaging in realistic conversation.

The technology has also caused controversy, from its potential to eliminate certain jobs to its ability to produce powerful disinformation in the hands of bad actors. The letter calls on AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter states.

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter continued. “Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The letter has accumulated the signatures of numerous high-profile technologists, scientists and AI and policy experts. Among them: Elon Musk, chief executive officer at SpaceX, Tesla and Twitter; Steve Wozniak, co-founder of Apple; Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions and professor of physics; and Lawrence M. Krauss, president of The Origins Project Foundation.

The government is not yet a major consumer of GPT-4-like AI systems, though officials at the Defense Department have promoted the potential benefits of using similar technologies in recent months. At the state and local levels of government, officials have suggested these technologies could work in areas that don’t require human subjectivity, like transcribing and summarizing constituent calls. At the federal level, the National Institute of Standards and Technology recently released its Artificial Intelligence Risk Management Framework, which seeks to guide agencies and organizations toward developing “low-risk AI systems.”

However, NIST officials early this month stressed that creating responsible AI presents “major technical challenges” for technologists.

“When it comes [to] measuring technology, from the viewpoint of ‘Is it working for everybody?’ ‘Is AI systems benefiting all people in [an] equitable, responsible, fair way?’, there [are] major technical challenges,” Elham Tabassi, chief of staff at NIST’s Information Technology Laboratory, said March 6 in a panel discussion.

“It’s important when this type of testing is being done, that the impact of community are identified so that the magnitude of the impact can also be measured,” she said. “It cannot be over emphasized, the importance of doing the right verification and validation before putting these types of products out. When they are out, they are out with all of their risks there.”

Article link: https://www.nextgov.com/emerging-tech/2023/03/technologists-experts-call-halt-advanced-ai-development-over-risks-society/384573/

Defense Officials Testify About Cybersecurity – DVIDS

Posted by timmreardon on 03/29/2023
Posted in: Uncategorized.

Air Force Lt. Gen. Robert J. Skinner, director of the Defense Information Systems Agency, and John B. Sherman, the Defense Department’s chief information officer, testify on enterprise cybersecurity to protect DOD information networks at a hearing of the Senate Armed Services Committee’s cybersecurity subcommittee.

VIDEO INFO

https://www.dvidshub.net/video/877793/defense-officials-testify-about-cybersecurity

PUBLIC DOMAIN   

This work, Defense Officials Testify About Cybersecurity, must comply with the restrictions shown on https://www.dvidshub.net/about/copyright.

Helping patients protect their electronic health information – Oracle Health

Posted by timmreardon on 03/29/2023
Posted in: Uncategorized.

February 27, 2023 | 4 minute read

As our industry continues to move toward more people-focused care, we’re driven to improve patient engagement and enhance outcomes to create a better healthcare experience for everyone. By making it more convenient for people to access their health data wherever and whenever they need it, we can help empower individuals and clinicians to make better health and well-being decisions.

“We imagine a world where individuals are empowered and have their healthcare data at their fingertips,” says Rob Helton, vice president of platform development, Oracle Health. “They have ownership of their data and are able to review and add to it. They can choose to share it, and who it gets shared to. We believe by putting patients in control of their data, they’ll become active participants in managing their health.”

The US federal government is continuing its push to place patients at the center of their own care, including ensuring patients have access to all of their electronic health information (EHI). The 21st Century Cures Act Information Blocking Rule, among other things, ensures patients have electronic access to their EHI in the manner the patient chooses, a right originating in HIPAA’s individual right of access.

Additionally, the federal government is setting minimum standards for health IT (HIT) that chooses to participate in the HIT certification program under the Office of the National Coordinator for HIT. While most healthcare organizations are striving to comply, some struggle to stay on top of the extra work needed to reach the full potential of interoperability. Luckily, technology can help streamline the process, reducing the regulatory burden on providers and making it easier for people to securely access their data.

Automating patient health data access

Historically, for a patient to use a health app of their choosing, a health system would have to review and approve the health app beforehand. However, with Oracle Health’s newly released functionality, called consumer automatic provision, this process is streamlined, giving patients easier access to their health data.

Consumer automatic provision allows consumer-facing health apps to be immediately provisioned for FHIR API endpoints upon registration. From there, health data stored within the electronic health record (EHR) system can be retrieved by the app upon explicit authorization from the patient or their authorized representative. Patients maintain full control over authorizing an app to access any of their EHI or only certain data types.
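As a rough illustration of what that patient-authorized retrieval can look like in practice, the sketch below requests a FHIR Patient resource using an OAuth bearer token. The endpoint, patient ID, and token are placeholders, not actual Oracle Health values or the specifics of the consumer automatic provision workflow.

```python
import requests

# Minimal sketch: a consumer app fetching a patient's record from a FHIR R4
# endpoint after the patient has authorized access. All values are placeholders.
FHIR_BASE = "https://fhir.example.org/r4"          # hypothetical FHIR R4 endpoint
ACCESS_TOKEN = "patient-authorized-oauth-token"    # obtained via an OAuth 2.0 authorization flow
PATIENT_ID = "12345"                               # hypothetical patient identifier

response = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/fhir+json",
    },
    timeout=30,
)
response.raise_for_status()
patient = response.json()          # FHIR Patient resource as JSON
print(patient.get("name"))
```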

Advancing meaningful health insights and maintaining regulatory compliance

When people can use apps on their smartphones to measure and record critical information like physical activity, weight, sleep, heartbeat, and blood sugar level or track their medicine to ensure they don’t miss a dose, they can make better decisions about their health.

In addition to the clinical benefits, the consumer automatic provision aligns with several regulatory requirements that impact healthcare providers, including rules regarding the Medicare Promoting Interoperability program, health IT certification, HIPAA privacy, and information blocking and sharing.

The bottom line of these regulations is that patients have the right to access their information electronically for any purpose they may choose, and that right should be facilitated by health IT developers and healthcare providers. Automatic provisioning of successfully registered consumer apps for your FHIR API endpoint helps patients access health information through any app of their choice in a timely fashion without any additional action by your organization.

Protecting patient health data

Protections are built into the FHIR APIs to help keep health data private and secure, but there are potential risks involved with authorizing the disclosure of EHI to a third-party app. While these risks are largely mitigated by protections built into the API connections and access, we all have a responsibility to help educate patients on how to keep their data safe. Although the constraints and protections that Oracle Health can put in place under the 21st Century Cures Act regulations are limited, here are the actions we’re taking to further protect consumer privacy and make the health app ecosystem more reliable and secure for consumers.

  • Employing industry-standard privacy and security protections as a built-in feature of the APIs
  • Developing enhancements to provide consumers with details of an app’s posture in relation to certain security-related best practices as part of the authorization user interface
  • Instructing developers connected with Oracle Health APIs to align with industry best practices for the safe and ethical handling of EHI communicated by the CARIN Alliance
  • Implementing patient-directed scope selection and redaction in the consumer app authorization workflow

The CARIN Alliance provides useful tips for consumers including:

  • Only download from trusted app stores
  • Always review app privacy terms and conditions
  • Know the app developer
  • Find out if the app is sharing or selling your data
  • Use strong and unique credentials

View the full list of recommendations.

Streamlining patients’ ability to access personal health data through an app of their choice is an important part of prioritizing the human experience in healthcare. Consumer automatic provision is one step of several that we’re taking to help patients control their own health and care, enhance clinical decision-making, and improve regulatory compliance for greater interoperability.

Article link: https://blogs.oracle.com/healthcare/post/helping-patients-protect-their-electronic-health-information?

For more information, check out our resources on information blocking.

Related content:

  • How an open EHR advances usability for healthcare providers
  • The prior authorization process: What makes it painful and how to improve it
  • HIPAA violations and consequences: How the recent updates impact the healthcare industry

Elon Musk and more than 1,000 people sign an open letter calling for a pause on training AI systems more powerful than GPT-4 – Business Insider

Posted by timmreardon on 03/29/2023
Posted in: Uncategorized.


  • Elon Musk and multiple experts have signed a letter raising concerns about the development of AI.
  • The letter was issued by the non-profit Future of Life Institute. 
  • Its signatories called for a 6-month pause on the training of AI systems more powerful than GPT-4.

Elon Musk and 1,125 people, including AI experts, have signed an open letter calling for a six-month pause on advanced AI development.

The letter, issued by the non-profit Future of Life Institute, called for AI labs to pause training any tech more powerful than OpenAI’s GPT-4, which launched earlier this month. The letter also called on governments to step in and institute a moratorium if the pause “cannot be enacted quickly.”

Reuters and multiple other publications reported the news.

The non-profit said powerful AI systems should only be developed “once we are confident that their effects will be positive and their risks will be manageable.” It cited potential risks to humanity and society, including the spread of misinformation and widespread automation of jobs.

The letter urged AI companies to create and implement a set of shared safety protocols for AI development, which would be overseen by independent experts. 

Apple cofounder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers at Alphabet’s AI lab DeepMind, and notable AI professors have also signed the letter. At the time of publication, OpenAI CEO Sam Altman had not added his signature.

The letter accused AI labs of being “locked in an out-of-control race to develop and deploy” powerful tech.

The launch of OpenAI’s viral chatbot ChatGPT appears to have prompted several other companies to launch a range of competing AI products. OpenAI’s latest offering, GPT-4, has been hailed as more creative and smarter than its popular predecessor. Altman said the new tech was capable of passing the bar exam and scoring a 5 on several AP exams.

The Future of Life Institute’s open letter said the six-month pause is not for general AI development, but rather a step back from the “dangerous race.”

OpenAI did not immediately respond to Insider’s request for comment, made outside normal working hours.

Read the original article on Business Insider

What’s next for quantum computing – MIT Technology Review

Posted by timmreardon on 03/28/2023
Posted in: Uncategorized.


Companies are moving away from setting qubit records in favor of practical hardware and long-term goals.

By Michael Brooks. January 6, 2023

This story is a part of MIT Technology Review’s What’s Next series, where we look across industries, trends, and technologies to give you a first look at the future.

In 2023, progress in quantum computing will be defined less by big hardware announcements than by researchers consolidating years of hard work, getting chips to talk to one another, and shifting away from trying to make do with noise as the field gets ever more international in scope.

For years, quantum computing’s news cycle was dominated by headlines about record-setting systems. Researchers at Google and IBM have had spats over who achieved what—and whether it was worth the effort. But the time for arguing over who’s got the biggest processor seems to have passed: firms are heads-down and preparing for life in the real world. Suddenly, everyone is behaving like grown-ups.

As if to emphasize how much researchers want to get off the hype train, IBM is expected to announce a processor in 2023 that bucks the trend of putting ever more quantum bits, or “qubits,” into play. Qubits, the processing units of quantum computers, can be built from a variety of technologies, including superconducting circuitry, trapped ions, and photons, the quantum particles of light. 

IBM has long pursued superconducting qubits, and over the years the company has been making steady progress in increasing the number it can pack on a chip. In 2021, for example, IBM unveiled one with a record-breaking 127 of them. In November, it debuted its 433-qubit Osprey processor, and the company aims to release a 1,121-qubit processor called Condor in 2023. 

But this year IBM is also expected to debut its Heron processor, which will have just 133 qubits. It might look like a backwards step, but as the company is keen to point out, Heron’s qubits will be of the highest quality. And, crucially, each chip will be able to connect directly to other Heron processors, heralding a shift from single quantum computing chips toward “modular” quantum computers built from multiple processors connected together—a move that is expected to help quantum computers scale up significantly. 

Heron is a signal of larger shifts in the quantum computing industry. Thanks to some recent breakthroughs, aggressive roadmapping, and high levels of funding, we may see general-purpose quantum computers earlier than many would have anticipated just a few years ago, some experts suggest. “Overall, things are certainly progressing at a rapid pace,” says Michele Mosca, deputy director of the Institute for Quantum Computing at the University of Waterloo. 

Here are a few areas where experts expect to see progress.

Stringing quantum computers together

IBM’s Heron project is just a first step into the world of modular quantum computing. The chips will be connected with conventional electronics, so they will not be able to maintain the “quantumness” of information as it moves from processor to processor. But the hope is that such chips, ultimately linked together with quantum-friendly fiber-optic or microwave connections, will open the path toward distributed, large-scale quantum computers with as many as a million connected qubits. That may be how many are needed to run useful, error-corrected quantum algorithms. “We need technologies that scale both in size and in cost, so modularity is key,” says Jerry Chow, director at IBM Quantum Hardware System Development.

Other companies are beginning similar experiments. “Connecting stuff together is suddenly a big theme,” says Peter Shadbolt, chief scientific officer of PsiQuantum, which uses photons as its qubits. PsiQuantum is putting the finishing touches on a silicon-based modular chip. Shadbolt says the last piece it requires—an extremely fast, low-loss optical switch—will be fully demonstrated by the end of 2023. “That gives us a feature-complete chip,” he says. Then warehouse-scale construction can begin: “We’ll take all of the silicon chips that we’re making and assemble them together in what is going to be a building-scale, high-performance computer-like system.”

The desire to shuttle qubits among processors means that a somewhat neglected quantum technology will come to the fore now, according to Jack Hidary, CEO of SandboxAQ, a quantum technology company that was spun out of Alphabet last year. Quantum communications, where coherent qubits are transferred over distances as large as hundreds of kilometers, will be an essential part of the quantum computing story in 2023, he says.

“The only pathway to scale quantum computing is to create modules of a few thousand qubits and start linking them to get coherent linkage,” Hidary told MIT Technology Review. “That could be in the same room, but it could also be across campus, or across cities. We know the power of distributed computing from the classical world, but for quantum, we have to have coherent links: either a fiber-optic network with quantum repeaters, or some fiber that goes to a ground station and a satellite network.”

Many of these communication components have been demonstrated in recent years. In 2017, for example, China’s Micius satellite showed that coherent quantum communications could be accomplished between nodes separated by 1,200 kilometers. And in March 2022, an international group of academic and industrial researchers demonstrated a quantum repeater that effectively relayed quantum information over 600 kilometers of fiber optics. 

Taking on the noise

At the same time that the industry is linking up qubits, it is also moving away from an idea that came into vogue in the last five years—that chips with just a few hundred qubits might be able to do useful computing, even though noise easily disrupts their operations. 

This notion, called “noisy intermediate-scale quantum” (NISQ), would have been a way to see some short-term benefits from quantum computing, potentially years before reaching the ideal of large-scale quantum computers with many hundreds of thousands of qubits devoted to correcting errors. But optimism about NISQ seems to be fading. “The hope was that these computers could be used well before you did any error correction, but the emphasis is shifting away from that,” says Joe Fitzsimons, CEO of Singapore-based Horizon Quantum Computing.

Some companies are taking aim at the classic form of error correction, using some qubits to correct errors in others. Last year, both Google Quantum AI and Quantinuum, a new company formed by Honeywell and Cambridge Quantum Computing, issued papers demonstrating that qubits can be assembled into error-correcting ensembles that outperform the underlying physical qubits.
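A classical analogy helps show why an ensemble can beat its underlying parts: encode one bit in three copies and take a majority vote, and the logical error rate falls below the physical one whenever the physical rate is under 50 percent. The short calculation below works through that arithmetic; it illustrates the principle, not the specific codes used by Google Quantum AI or Quantinuum.

```python
# Classical analogy: copy one bit three times and take a majority vote. The
# encoded ("logical") bit fails only if two or more copies flip.
p = 0.01                                   # assumed physical error probability
p_logical = 3 * p**2 * (1 - p) + p**3      # probability that 2 or 3 copies flip
print(p_logical)                           # ~0.000298, roughly 30x better than p
```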

Other teams are trying to see if they can find a way to make quantum computers “fault tolerant” without as much overhead. IBM, for example, has been exploring characterizing the error-inducing noise in its machines and then programming in a way to subtract it (similar to what noise-canceling headphones do). It’s far from a perfect system—the algorithm works from a prediction of the noise that is likely to occur, not what actually shows up. But it does a decent job, Chow says: “We can build an error-correcting code, with a much lower resource cost, that makes error correction approachable in the near term.”
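One widely used error-mitigation idea along these lines is zero-noise extrapolation: run the same circuit at deliberately amplified noise levels and extrapolate the measured result back to the zero-noise limit. The sketch below illustrates that technique with made-up numbers; it is not a description of IBM's or IonQ's specific methods.

```python
import numpy as np

# Sketch of zero-noise extrapolation with hypothetical numbers: measure the same
# expectation value at amplified noise levels, then extrapolate to zero noise.
noise_scale = np.array([1.0, 1.5, 2.0, 3.0])    # noise amplification factors
measured = np.array([0.82, 0.74, 0.67, 0.55])   # hypothetical noisy expectation values
slope, intercept = np.polyfit(noise_scale, measured, 1)   # simple linear fit
zero_noise_estimate = intercept                  # value extrapolated to zero noise
print(round(zero_noise_estimate, 3))
```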

Maryland-based IonQ, which is building trapped-ion quantum computers, is doing something similar. “The majority of our errors are imposed by us as we poke at the ions and run programs,” says Chris Monroe, chief scientist at IonQ. “That noise is knowable, and different types of mitigation have allowed us to really push our numbers.”

Getting serious about software

For all the hardware progress, many researchers feel that more attention needs to be given to programming. “Our toolbox is definitely limited, compared to what we need to have 10 years down the road,” says Michal Stechly of Zapata Computing, a quantum software company based in Boston. 

The way code runs on a cloud-accessible quantum computer is generally “circuit-based,” which means the data is put through a specific, predefined series of quantum operations before a final quantum measurement is made, giving the output. That’s problematic for algorithm designers, Fitzsimons says. Conventional programming routines tend to involve looping some steps until a desired output is reached, and then moving into another subroutine. In circuit-based quantum computing, getting an output generally ends the computation: there is no option for going round again.
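The sketch below shows what that circuit-based model looks like in code, assuming the Qiskit library: a fixed, predefined sequence of gates capped by a measurement, with no construct for looping back once the result is read out.

```python
from qiskit import QuantumCircuit

# Minimal sketch of the circuit-based model described above (assuming Qiskit):
# a fixed sequence of gates ending in a final measurement.
qc = QuantumCircuit(2, 2)
qc.h(0)                       # put qubit 0 into superposition
qc.cx(0, 1)                   # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])    # the final measurement ends the computation
print(qc.draw())
```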

Horizon Quantum Computing is one of the companies that have been building programming tools to allow these flexible computation routines. “That gets you to a different regime in terms of the kinds of things you’re able to run, and we’ll start rolling out early access in the coming year,” Fitzsimons says.

Helsinki-based Algorithmiq is also innovating in the programming space. “We need nonstandard frameworks to program current quantum devices,” says CEO Sabrina Maniscalco. Algorithmiq’s newly launched drug discovery platform, Aurora, combines the results of a quantum computation with classical algorithms. Such “hybrid” quantum computing is a growing area, and it’s widely acknowledged as the way the field is likely to function in the long term. The company says it expects to achieve a useful quantum advantage—a demonstration that a quantum system can outperform a classical computer on real-world, relevant calculations—in 2023. 

Competition around the world

Change is likely coming on the policy front as well. Government representatives including Alan Estevez, US undersecretary of commerce for industry and security, have hinted that trade restrictions surrounding quantum technologies are coming. 

Tony Uttley, COO of Quantinuum, says that he is in active dialogue with the US government about making sure this doesn’t adversely affect what is still a young industry. “About 80% of our system is components or subsystems that we buy from outside the US,” he says. “Putting a control on them doesn’t help, and we don’t want to put ourselves at a disadvantage when competing with other companies in other countries around the world.”

And there are plenty of competitors. Last year, the Chinese search company Baidu opened access to a 10-superconducting-qubit processor that it hopes will help researchers make forays into applying quantum computing to fields such as materials design and pharmaceutical development. The company says it has recently completed the design of a 36-qubit superconducting quantum chip. “Baidu will continue to make breakthroughs in integrating quantum software and hardware and facilitate the industrialization of quantum computing,” a spokesman for the company told MIT Technology Review. The tech giant Alibaba also has researchers working on quantum computing with superconducting qubits.

In Japan, Fujitsu is working with the Riken research institute to offer companies access to the country’s first home-grown quantum computer in the fiscal year starting April 2023. It will have 64 superconducting qubits. “The initial focus will be on applications for materials development, drug discovery, and finance,” says Shintaro Sato, head of the quantum laboratory at Fujitsu Research.

Not everyone is following the well-trodden superconducting path, however. In 2020, the Indian government pledged to spend 80 billion rupees ($1.12 billion when the announcement was made) on quantum technologies. A good chunk will go to photonics technologies—for satellite-based quantum communications, and for innovative “qudit” photonics computing.

Qudits expand the data encoding scope of qubits—they offer three, four, or more dimensions, as opposed to just the traditional binary 0 and 1, without necessarily increasing the scope for errors to arise. “This is the kind of work that will allow us to create a niche, rather than competing with what has already been going on for several decades elsewhere,” says Urbasi Sinha, who heads the quantum information and computing laboratory at the Raman Research Institute in Bangalore, India.

Though things are getting serious and internationally competitive, quantum technology remains largely collaborative—for now. “The nice thing about this field is that competition is fierce, but we all recognize that it’s necessary,” Monroe says. “We don’t have a zero-sum-game mentality: there are different technologies out there, at different levels of maturity, and we all play together right now. At some point there’s going to be some kind of consolidation, but not yet.”

Michael Brooks is a freelance science journalist based in the UK.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/01/06/1066317/whats-next-for-quantum-computing/amp/

Reimagining public health programs to deliver equitable impact – McKinsey

Posted by timmreardon on 03/24/2023
Posted in: Uncategorized.

March 20, 2023 | Article

By placing equity at the center of public health programs, government agencies could more effectively address disparities and improve health outcomes for all Americans.

https://soundcloud.com/mckinsey/listen-to-the-article-reimagining-public-health-programs


The COVID-19 pandemic sparked a global health crisis of unprecedented scale. While its effects were widespread, a disproportionate amount of the harm has been borne by marginalized communities—overwhelmingly low-income households, people of color, and immigrant populations with limited access to resources and health and social services. In the United States, Black, Hispanic, and American Indian or Alaskan Native people were up to 2.5 times more likely than their White counterparts to be hospitalized with COVID-19 and more than 1.7 times more likely to die from the disease.1

These patterns are not new. Unequal outcomes persist across many aspects of public health, from nutrition to maternal and child health and disease management.2 The pandemic laid bare these long-standing disparities, underscoring the urgent need to fundamentally change the way critical public health programs and services are delivered.

The current moment can serve as a powerful catalyst for change. As the United States enters the endemic stage of COVID-19,3 decision makers have the opportunity to initiate bold and swift action. Increased investment in public health and a growing national focus on equity have generated momentum to reimagine public health delivery with a deliberate focus on equitable access and outcomes.4 As leaders embark on this journey, they can draw on lessons learned from national, state, and local pandemic response efforts.

Based on our work in the COVID-19 vaccine response, we outline in this article an approach for designing and implementing public health programs anchored in detailed analytics, on-the-ground relationships, rigorous program management, and an organizational mindset centered on equity. This approach could serve as a useful example of how public health institutions can deliver more equitable impact during crises and in the long term.

Embedding equity in program delivery

No single solution can fully address the complex and entrenched causes of inequity in public health delivery. However, lessons from the COVID-19 response suggest that an integrated approach to embedding equity into end-to-end program design and implementation could help create more equitable health outcomes (exhibit).

1. Make equity nonnegotiable

The approach centers on equity as an explicit and nonnegotiable objective of every aspect of the program, from setting strategic goals and metrics to designing a new operating model. The term “health equity” can mean different things to different people. Aligning on a clear definition of health equity goals—including how they are measured—is a critical first step in this approach.

Set ambitious health equity goals. Under the integrated approach, the highest office or entity sets and commits to aspirational equity targets at the onset of the program or, for existing programs, at important milestones. Each goal is structured as “SMART”—that is, specific, measurable, attainable, relevant, and time-bound. The specific directive to achieve 85 percent coverage of first-dose vaccinations for the adult population in the 25 highest-need locations (based on a social-vulnerability index) within three months is an example of a SMART goal.

Leverage existing data to delineate metrics. After equity goals have been set, leaders can identify clear metrics to track their progress. A common concern is the lack of precise data; for example, state vaccination rates can fluctuate because of interstate travel and overlap with vaccinations given by federal entities. In cases like these, it may be necessary to take a minimum viable product (MVP) approach, in which metrics are tracked using the best data available while the organization continuously works to improve data quality.

Adapt goals based on progress and data availability. As progress is made toward equity goals, and as better data becomes available, program leaders may need to adjust their initial goals to meet evolving circumstances. Examples include updating goals to include additional demographics as vaccines become available, increasing the granularity of data and metrics from the county to the zip code level, and expanding efforts to related program delivery areas such as maternal health or nutrition. Whenever goals change, it is important to gain stakeholder buy-in, acknowledge the shift, celebrate wins, and invest in any additional capabilities required to reach the new objectives. These actions can go a long way toward mitigating concerns about shifting goalposts that can derail goodwill and lead to burnout.

2. Embrace new ways of working

The next step is to implement new ways of working to ensure that ambitious equity goals are clearly translated and reinforced in day-to-day work.

Empower change from the top. In addition to setting equity-based goals, the approach calls for the highest office or leadership to publicly empower their teams to deliver on those objectives. During the COVID-19 response, this often meant creating new modes of engagement—for example, by breaking down agency-level silos; creating partnerships among public, private, and academic institutions; working across partisan lines; collaborating with the community in new ways; and harnessing the power of nontraditional media.

Adapt governance and decision making. Under this approach, governance structures and processes adapt to support these new ways of working and deliver equitable impact. An effective governance model could include periodic touchpoints to track outcomes against equity targets, clear accountability across teams, and escalation pathways to address challenges or falling short of targets.

Create a collaborative and interdisciplinary working model. A collaborative and cross-functional operating model builds alignment among internal stakeholders (such as agency heads, data analysts, logistics teams, communications teams, and local and regional officials) and external partners (such as community and faith-based organizations, private-sector organizations, and schools). Weekly “thematic” touchpoints can also facilitate close coordination and communication among stakeholders.

Implement agile ways of working. Agile practices, such as sprint-based and iterative work cycles, can help organizations continuously track progress and adjust the strategy within a constantly shifting landscape. While this may be particularly important in times of crisis, such as the COVID-19 pandemic, long-term programs can also benefit from the greater flexibility and speed of agile principles. Feedback is critical; successful initiatives and learnings should be shared and scaled across the organization, while failures and setbacks should be quickly identified and addressed.

3. Walk the talk

The third element of the approach is relentless execution with a strong emphasis on data-driven decision making, capability building, and local engagement.

Fact check with data. Data can be a powerful tool for establishing a robust fact base, creating transparency around equity outcomes, and “myth busting” perceived roadblocks or misinformation. Organizations must usually meet two conditions: data that is granular enough to drive actionable insights, and decision makers at all levels with access to the latest data and insights. Then organizations can harness this data to make better decisions and quickly verify (or debunk) their assumptions. For example, after a public health agency tested various COVID-19 vaccine strategies (including adding multiple pop-up sites and extending hours for permanent sites), it found that low vaccine uptake in “hesitant” areas was due to a lack of access rather than vaccine hesitancy, as had previously been assumed.
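As a purely illustrative example of this kind of myth-busting analysis, the sketch below compares average weekly uptake before and after an access intervention (pop-up sites, extended hours) in the same areas; a large lift after the change points to access, not hesitancy, as the barrier. The data file, column names, and labels are assumptions.

```python
import pandas as pd

# Illustrative sketch: did uptake rise after improving access in "hesitant" areas?
# Assumed columns: area, week, doses, period (values "before" or "after").
uptake = pd.read_csv("weekly_uptake.csv")

summary = (uptake.groupby(["area", "period"])["doses"].mean()
                 .unstack("period"))
summary["lift"] = (summary["after"] - summary["before"]) / summary["before"]

# A substantial lift after adding pop-up sites and extended hours suggests the
# original barrier was access rather than vaccine hesitancy.
print(summary.sort_values("lift", ascending=False))
```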

Build capabilities to execute. Translating data to insights, and insights to action, often requires a wide range of skills and capabilities that may not exist within the organization. For many institutions, the pandemic highlighted a need to train employees on unconstrained problem solving, stakeholder management, data analysis, event coordination, and verbal communication. Agencies found success through various capability-building innovations, including running on-the-ground trainings and workshops in response to immediate needs (for example, crisis management), collaborating with private and academic partners (such as in supply chain and logistics management), establishing rewards for high performance, flattening the traditional organizational hierarchy, and creating new career paths for those on the front lines.

Understand the local context. Even the most rigorous, data-driven initiative could fail if it is not grounded in the local context of the people it is meant to serve. Several COVID-19 vaccination efforts were hamstrung by a lack of cultural sensitivity and awareness of local needs. For example, the vaccine delivery program had limited success when it offered only pediatric vaccines in certain locations where multigenerational families preferred to get vaccinated together. This lesson—that all vaccine events need to provide all doses to all age groups—was then applied more broadly to other locations. Organizations can hire diverse teams and engage experts and community partners to ensure that local perspectives are consulted in the design and execution of any initiative.

While this approach found some success in vaccine delivery, we recognize that it has constraints and that certain conditions must be met for it to succeed. First, the approach does not address the various and interconnected aspects of systemic and structural inequity that shape individuals’ abilities to make certain choices about their health. In other words, it focuses on improving access to healthcare rather than changing consumer behaviors and decisions. Second, the transformational nature of the approach requires a significant level of commitment from public health agencies—including sustained engagement from the highest levels of leadership, a focus on talent management and capability building, a broader cultural shift in institutional ways of working, and effective execution at all levels of the organization.


The COVID-19 pandemic revealed the profound need to embed equity into public health programs at the federal, state, and local levels. As we emerge from the crisis, public and social institutions have an opportunity to reimagine their approach to delivering public health services and contribute to a more equitable and resilient future for all Americans.

ABOUT THE AUTHOR(S)

Angie Cui is an alumna of McKinsey’s New Jersey office; Ellen Feehan, MD, is a partner in the New Jersey office; JP Julien is a partner in the Philadelphia office; and Neeraja Nagarajan, MD, is an associate partner in the Washington, DC, office.

The authors wish to thank Jason Forrest, Dipti Pai, and Ashley Pitt for their contributions to this article.

Article link: https://www.mckinsey.com/industries/public-and-social-sector/our-insights/reimagining-public-health-programs-to-deliver-equitable-impact?
