healthcarereimagined

Envisioning healthcare for the 21st century


The terrible costs of a phone-based childhood – The Atlantic

Posted by timmreardon on 03/20/2024
Posted in: Uncategorized.

The decline of Generation Z’s mental health raises a red flag about the role phones play in childhood development. Although generations “are not monolithic,” Jonathan Haidt writes, “if a generation is doing poorly … the sociological and economic consequences will be profound for the entire society.”

Haidt looked to the 2010s for answers. What he found was the watershed transition from flip phones to smartphones, which likely contributed to rising levels of depression and anxiety in adolescents. “Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board,” he writes.

Today’s newsletter brings you stories about how smartphones have changed children’s lives.

• “End the Phone-Based Childhood Now,” by Jonathan Haidt. The environment in which kids grow up today is hostile to human development.


• “The Overprotected Kid,” by Hanna Rosin. A preoccupation with safety has stripped childhood of independence, risk-taking, and discovery—without making it safer. A new kind of playground points to a better solution. (From 2014)


• “I Won’t Buy My Teenagers Smartphones,” by Sarah P. Weeldreyer. Denying a teen a smartphone is a tough decision, and one that requires an organized and impenetrable defense. (From 2019)


• “Have Smartphones Destroyed a Generation?” by Jean M. Twenge. More comfortable online than out partying, post-Millennials are safer, physically, than adolescents have ever been. But they’re on the brink of a mental-health crisis. (From 2017)


• “The Dangerous Experiment on Teen Girls,” by Jonathan Haidt. The preponderance of the evidence suggests that social media is causing real damage to adolescents. (From 2021)

— Stephanie Bai, associate editor

Article link: https://www.linkedin.com/pulse/terrible-costs-phone-based-childhood-the-atlantic-rg1ce

Nobody knows how AI works – MIT Technology Review

Posted by timmreardon on 03/18/2024
Posted in: Uncategorized.


It’s still early days for our understanding of AI, so expect more glitches and fails as it becomes a part of real-world products.

By Melissa Heikkilä, March 5, 2024

I’ve been experimenting with using AI assistants in my day-to-day work. The biggest obstacle to their being useful is that they often get things blatantly wrong. In one case, I used an AI transcription platform while interviewing someone about a physical disability, only for the AI summary to insist the conversation was about autism. It’s an example of AI’s “hallucination” problem, where large language models simply make things up.

Recently we’ve seen some AI failures on a far bigger scale. In the latest (hilarious) gaffe, Google’s Gemini refused to generate images of white people, especially white men. Instead, users were able to generate images of Black popes and female Nazi soldiers. Google had been trying to get the outputs of its model to be less biased, but this backfired, and the tech company soon found itself in the middle of the US culture wars, with conservative critics and Elon Musk accusing it of having a “woke” bias and not representing history accurately. Google apologized and paused the feature. 

In another now-famous incident, Microsoft’s Bing chat told a New York Times reporter to leave his wife. And customer service chatbots keep getting their companies in all sorts of trouble. For example, Air Canada was recently forced to give a customer a refund in compliance with a policy its customer service chatbot had made up. The list goes on. 

Tech companies are rushing AI-powered products to launch, despite extensive evidence that they are hard to control and often behave in unpredictable ways. This weird behavior happens because nobody knows exactly how—or why—deep learning, the fundamental technology behind today’s AI boom, works. It’s one of the biggest puzzles in AI. My colleague Will Douglas Heaven just published a piece where he dives into it. 

The biggest mystery is how large language models such as Gemini and OpenAI’s GPT-4 can learn to do something they were not taught to do. You can train a language model on math problems in English and then show it French literature, and from that, it can learn to solve math problems in French. These abilities fly in the face of classical statistics, which provide our best set of explanations for how predictive models should behave, Will writes. Read more here. 

It’s easy to mistake perceptions stemming from our ignorance for magic. Even the name of the technology, artificial intelligence, is tragically misleading. Language models appear smart because they generate humanlike prose by predicting the next word in a sentence. The technology is not truly intelligent, and calling it that subtly shifts our expectations so we treat the technology as more capable than it really is. 
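To make “predicting the next word” concrete, here is a toy sketch in Python (my own illustration, not anything from the article): a bigram model that counts which word follows which in a tiny corpus and proposes the likeliest continuation. Production language models use transformer networks trained on vastly more text, but the underlying objective is the same.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

print(predict_next("the"))  # 'cat' (ties broken by first occurrence)
print(predict_next("sat"))  # 'on'
```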

Don’t fall into the tech sector’s marketing trap by believing that these models are omniscient or factual, or even near ready for the jobs we are expecting them to do. Because of their unpredictability, out-of-control biases, security vulnerabilities, and propensity to make things up, their usefulness is extremely limited. They can help humans brainstorm, and they can entertain us. But, knowing how glitchy and prone to failure these models are, it’s probably not a good idea to trust them with your credit card details, your sensitive information, or any critical use cases.

As the scientists in Will’s piece say, it’s still early days in the field of AI research. According to Boaz Barak, a computer scientist at Harvard University who is currently on secondment to OpenAI’s superalignment team, many people in the field compare it to physics at the beginning of the 20th century, when Einstein came up with the theory of relativity. 

The focus of the field today is how the models produce the things they do, but more research is needed into why they do so. Until we gain a better understanding of AI’s insides, expect more weird mistakes and a whole lot of hype that the technology will inevitably fail to live up to. 

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/03/05/1089449/nobody-knows-how-ai-works/amp/

Artificial Intelligence and Medical Products – FDA

Posted by timmreardon on 03/16/2024
Posted in: Uncategorized.

Today, the FDA published its new paper, “Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” which outlines specific focus areas regarding the development and use of AI across the medical product lifecycle: https://lnkd.in/gPi8CcuX

The paper will help further align and streamline the agency’s work in AI. Read more about the agency’s AI initiatives on our website: https://lnkd.in/gUHD8-gZ

What is Artificial Intelligence? 

Artificial Intelligence (AI) is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. AI includes machine learning, which is a set of techniques that can be used to train AI algorithms to improve performance of a task based on data. 
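As a minimal sketch of the machine learning part of that definition (my illustration, not from the FDA paper; the dataset and library are arbitrary choices), here is a model whose performance on a task improves as it is trained on more data:

```python
# A minimal machine-learning sketch: task performance improves with data.
# Requires scikit-learn; the dataset choice is purely illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (20, 100, len(X_train)):        # train on progressively more data
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>3} examples -> accuracy {model.score(X_test, y_test):.2f}")
```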

How does AI intersect with Medical Products? 

AI has emerged as a transformative force. The Food and Drug Administration is responsible for protecting the public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biological products, and medical devices; and by ensuring the safety of our nation’s food supply, cosmetics, and products that emit radiation. This landing page serves as your gateway to a wealth of information, resources, and insights into the intersection of AI and medical products. Learn more about how the FDA is shaping the future of health care through the responsible and innovative integration of AI.

How are the Center for Biologics Evaluation and Research (CBER), the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and the Office of Combination Products (OCP) working together?

Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together
Commissioner’s Blog: Harnessing the Potential of Artificial Intelligence

For specific information related to devices, please visit:

  • Digital Health Center of Excellence 
  • Artificial Intelligence Program: Research on AI/ML-Based Medical Devices 

For specific information related to drugs and biological products, please visit:

  • Artificial Intelligence and Machine Learning (AI/ML) for Drug Development 

For more information on the laws, regulations, executive orders, and memoranda that drive HHS’s AI efforts, please visit:

  • U.S. Department of Health and Human Services Artificial Intelligence webpage

Article link: https://www.linkedin.com/posts/fda_artificial-intelligence-medical-products-activity-7174410284189540354-6dlL?

Former Google CEO Eric Schmidt on the challenges of regulating AI – MIT Sloan

Posted by timmreardon on 03/15/2024
Posted in: Uncategorized.


by Dylan Walsh Nov 7, 2022

Why It Matters

The former Alphabet chair talked about the challenge of defining what society wants from artificial intelligence, and called for a balance between regulation and investing in innovation.

Artificial intelligence was a thing, but not the thing, when Eric Schmidt became CEO of Google in 2001. Sixteen years later, when he stepped down from his post as executive chairman of Google’s parent company, Alphabet, the world had changed. Speaking at Princeton University that year, Schmidt declared that we are in “the AI century.”

Schmidt, who recently chaired the National Security Commission on Artificial Intelligence, and MIT computer science professor Aleksander Madry discussed how this transition should be managed and its broader implications at the 2022 MIT AI Policy Forum Summit.

Their conversation came at a moment when AI is ascendant in both public and private imaginations.

Headlines are touting its accomplishments both small — winning an art contest at the Colorado State Fair — and large, such as predicting the shape of nearly every protein known to science. And the White House just released a blueprint for an AI Bill of Rights to “protect the American public in the age of artificial intelligence.”

Companies are investing billions of dollars in the technology and the talent necessary for its development (this includes Schmidt, who attracted attention last month for not publicly disclosing his investments in several AI startups while chairing the commission).

Schmidt talked about the core challenge of defining what our society wants to gain from AI, and called for a balance between regulating AI and investing in innovation.

A pragmatic approach to development

Schmidt said a naive utopianism often accompanies technological innovation.  “This … goes back to the way tech works: A bunch of people have similar backgrounds, build tools that make sense to them without understanding that these tools will be used for other people in other ways,” he said.

We should learn from these mistakes, Schmidt said. The tremendous potential of AI must not blind developers and regulators to the ways in which it can be abused. He cited the potential challenges of information manipulation, bioterrorism, and cyber threats, among many others. To the extent possible, guardrails must be in place from the beginning to prevent criminal or destructive applications, he said.

Schmidt also criticized the degree to which people working in AI have focused on the problem of bias. “We’ve all become obsessed with bias,” he said. It is an important challenge rooted in the data used to train AI systems, he acknowledged, but he said he was confident this would be fixed by using smaller data sets and zero-shot learning. “We’ll figure out a way to address bias,” he said. “Academics wrote all sorts of stuff about bias because that’s the thing that they could frame. But that’s not the real issue. The real issue is that when you start to manipulate the information space, you manipulate human behavior.”

Starting a productive discussion on regulation

One of the core challenges right now, according to Schmidt, is that we don’t have a clear definition of what we, as a society, want from AI. What role should it fill? What applications are appropriate? “If you can’t define what you want, it’s very hard to say how you’d regulate it,” he said.

To begin this process, one of Schmidt’s suggestions was a relatively small working group of 10 to 20 people who build a list of proposed regulations. These might include making certain content, like hate speech, illegal; requiring rules that distinguish humans from bots; and mandating that all algorithms be openly published.

This list, of course, is only a starting point. “Let’s assume that we got such a list — which we don’t have right now … How are you going to get the CEOs of the companies who are, independent of what they say, driven by revenue … to agree on anything?” Schmidt asked. 

Government should do more than regulate

The role of government is not simply to regulate AI, Schmidt said. It must simultaneously promote the technology. Alongside a regulatory plan, Schmidt suggested every country should have a “how-do-we-win-AI” plan.

Looking to Europe, he described the admirable model of deep and long-term investment in big physics challenges. The CERN particle accelerator is one of many examples. But Schmidt does not see commensurate levels of investment in AI. “That’s a huge mistake, and it’s going to hurt them,” he said.

Investing productively in novel technologies while also devising regulation for those new technologies is difficult, Schmidt admitted, but he believes the tendency is to over-regulate and under-promote. As an example, he pointed to the European Union’s stringent online data privacy aims, embodied in the General Data Protection Regulation. While these efforts appear to do a good job protecting consumer data, high compliance costs have the unintended consequence of stifling innovation, Schmidt contended.

“You have to have an attitude of innovation and regulation at the same time,” he said. “If you don’t have both, you’re not going to lead.”

The particular case of social media

Social media presents specific challenges, Schmidt said. He pointed to problems with present-day platforms, which often started as basic information feeds and developed into recommendation engines. And the rules by which these engines operate may not be the rules we care about as citizens.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/former-google-ceo-eric-schmidt-challenges-regulating-ai?

Is AI an Existential Risk? Q&A with RAND Experts

Posted by timmreardon on 03/15/2024
Posted in: Uncategorized.

March 11, 2024

What are the potential risks associated with artificial intelligence? Might any of these be catastrophic or even existential? And as momentum builds toward boundless applications of this technology, how might humanity reduce AI risk and navigate an uncertain future?

At a recent RAND event, a panel of five experts explored these emerging questions. While the gathering highlighted RAND’s diversity in academic disciplines and perspectives, the panelists were unsurprisingly unanimous that independent, high-quality research will play a pivotal role in exploring AI’s short- and long-term risks—as well as the implications for public policy.

What do you view as the biggest risks posed by AI?

BENJAMIN BOUDREAUX AI could pose a significant risk to our quality of life and the institutions we need to flourish. The risk I’m concerned about isn’t a sudden, immediate event. It’s a set of incremental harms that worsen over time. Like climate change, AI might be a slow-moving catastrophe, one that diminishes the institutions and agency we need to live meaningful lives.

This risk doesn’t require superintelligence or artificial general intelligence or AI sentience. Rather, it’s a continuation and worsening of effects that are already happening. For instance, there’s significant evidence that social media, a form of AI, has serious effects on institutions and mental well-being.

AI seems to promote mistrust that fractures shared identities and a shared sense of reality. There’s already evidence that AI has undermined the credibility and legitimacy of our election system. And there’s significant evidence that AI has exacerbated inequity and bias. There’s evidence that AI has impacted industries like journalism, producing cascading effects across society. And as AI becomes a driver of international competition, it could become harder to respond to other catastrophes, like another pandemic or climate change. As AI gets more capable, as AI companies become more powerful, and as we become dependent on AI, I worry that all those existing risks and harms will get worse.

JONATHAN WELBURN It’s important to note that we’ve had previous periods in history with revolutionary technological shocks: electricity, the printing press, the internet. This moment is similar to those. AI will lead to a series of technological innovations, some of which we might be able to imagine now, but many that we won’t be able to imagine.

AI might exacerbate existing risks and create new ones. I think about inequity and inequality. AI bias might undermine social and economic mobility. Racial and gender biases might be baked into models. Things like deepfakes might undermine our trust in institutions.

But I’m looking at more of a total system collapse as a worst-case scenario. The world in 2023 already had high levels of inequality. And so, building from that foundation, where there’s already a high level of concentration of wealth and power—that’s where the potential worst-case scenario is for me. Capital owners own all the newest technology that’s about to be created and owned, all the wealth, and all the decisionmaking. And this is a system that undermines many democratic norms.

But I don’t see this as a permanent state, necessarily. I see it as a temporary state that could have generations of harm. We would transition through this AI shock. But it’s still up to humanity to create the policy solutions that prevent the worst-case scenario.

JEFF ALSTOTT There’s a real diversity of which risks we’re all studying at RAND. And that doesn’t mean that any of these risks are invalid or necessarily more or less important than the others. So, I agree with everything that Ben and Jon have said.

One of the risks that keeps me up at night is the resurrection of smallpox. The story here is formerly exquisite technical knowledge being used by bad actors. Bioweapons happen to be one example where, historically, the barriers have been information and knowledge. You no longer need much in the way of specialized matériel or expensive equipment to achieve devastating effects, such as launching a pandemic. AI could close the knowledge gap. And bio is just one example. The same story repeats with AI and chemical weapons, nuclear weapons, cyber weapons.

Then, eventually, there’s the issue of not just bad people doing bad things but AIs themselves running off and doing bad things, plus anything else they want to do—AI run amok.

NIDHI KALRA To me, AI is gas on the fire. I’m less concerned with the risk of AI than with the fires themselves—the literal fires of climate change and potential nuclear war, and the figurative fires of rising income inequality and racial animus. Those are the realities of the world. We’ve been living those for generations. And so, I’m not any more awake at night because of AI than I was already because of the wildfires in Canada, for instance.

But I do have a concern: What does the world look like when we, even more than is already the case today, can’t distinguish fact from fiction? What if we can’t distinguish a human being from something else? I don’t know what that does to the kind of humanity that we’ve lived with for the entire existence of our species. I worry about a future in which we don’t know who we are. We can’t recognize each other. That vague foreboding of a loss of humanity is what, if anything, keeps me up. Otherwise, I think we’ll be just as fine as we were yesterday.

EDWARD GEIST AI threatens to be an amplifier for human stupidity. That characterization captures the types of harms that are already occurring, like what Ben was discussing, but also more speculative types of harms. So, for instance, the idea of machines that do what you ask for—rather than what you wanted or should have asked for—or machines that make the same kind of mistakes that humans make, only faster and in larger quantities.

Some of you have addressed this implicitly, but let’s tackle the question head on, as briefly as you can: With the caveat that there’s still much uncertainty surrounding AI, do you think it poses an existential risk?

WELBURN No. I don’t think AI poses an irreversible harm to humanity. I think it can worsen our lives. I think it can have long-lasting harm. But I think it’s ultimately something that we can recover from.

KALRA I second Jon: No. We are an incredibly resilient species, looking back over millions of years. I think that’s not to be taken lightly.

BOUDREAUX Yes. An existential risk is an unrecoverable harm to humanity’s potential. One way that could happen is that humans die. But the other way that can happen is if we no longer engage in meaningful human activity, if we no longer have embodied experience, if we’re no longer connected to our fellow humans. That, I think, is the existential risk of AI.

ALSTOTT Yes.

GEIST I’m not sure, and here’s why: I’m a nuclear strategist, and I’ve learned through my studies just how hard it can be to tell the difference.

For example, is the hydrogen bomb an existential risk? I think most laypeople would probably say, “Yes, of course.” But even using a straightforward definition of existential risk (humans go extinct), the answer isn’t obvious.

The current scientific understanding is that the nuclear weapons that exist today could probably not be used in a way that would result in human extinction. That scenario would require more and bigger bombs. That said, a nuclear arsenal that could plausibly cause human extinction via fallout and other effects does appear to be something a country could build if it really wanted to.

Are there AI policies that reasonable people could consider and potentially agree on, despite thinking differently about the question of whether AI poses an existential risk?

KALRA Perhaps ensuring that we’re not moving too quickly in integrating AI into our critical infrastructure systems.

BOUDREAUX Transparency or oversight, so we can actually audit tech companies for the claims they’re making. They extoll all these benefits of AI, so they should show us the data. Let us look behind the scenes, as much as we can, to see whether those claims are true. And this isn’t just a role for researchers. For instance, the Federal Trade Commission might need greater funding to ensure that it can prohibit unfair and deceptive trade practices. I think just holding companies to the standards that they themselves have set could be a good step.

What else are you thinking about the role that research can play as we move into this new era?

KALRA I want to see questions about AI policy problems asked in a very RAND-like way: What would we have to believe about our world and about the costs of various actions to prefer Action A over Action B? What’s the quantity of evidence you need for this to be a good decision? Do we have anything resembling that level of evidence? What are the trade-offs if we’re right? What are the trade-offs if we’re wrong? That’s the kind of framing I’d like to see in discussions of AI policy.

WELBURN The lack of diversity in the AI world is a huge concern. That leads to racial and gender biases in AI models themselves.

I think RAND can make diversity a strong part of our AI research agenda. That can come in part from bringing together a lot of different stakeholders and not just people with computer science degrees. How can we bring people like sociologists into this conversation, too?

GEIST I’d like to see RAND play the kind of vital role in these discussions about AI policy that we played in shaping policy that mitigated the threat of thermonuclear war back in the 1950s and 1960s. In fact, our predecessors invented methodologies back then that could either serve as the inspiration for or perhaps even be directly adapted to AI-related policy problems today.

Zooming out, I think humanity needs to lay out a research agenda that will get us to answer the right questions. Because until very recently, AI as an academic exercise has been pursued in a very ad hoc way. There hasn’t been a systematic research agenda designed to answer some very concrete questions. It may be that there’s more low-hanging fruit than is obvious if we frame the questions in very practical terms, especially now that there are so many more eyeballs on AI. The number of people trying to work on these problems has just exploded in the last few years.

ALSTOTT As Ed alludes to, it’s long-standing practice here at RAND to be looking forward multiple decades to contemplate different tech that could exist in the future, so we can understand the potential implications and identify evidence-based actions that might need to be taken now to mitigate future threats. We have started doing this with the AI of today and tomorrow and need to do much more.

We also need a lot more of the science of AI threat assessment. RAND is starting to be known as the place that’s doing that kind of analysis today. We just need to keep going. But we probably have at least a decade’s worth of work and analysis that needs to happen. So if we don’t have it all sorted out ahead of time, it might be that, whatever the threat is, it lands, and then it’s too late.

BOUDREAUX Researchers have a special responsibility to look at the harms and the risks. This isn’t just looking at the technology itself but also at the context—looking into how AI is being integrated into criminal justice and education and employment systems, so we can see the interaction between AI and human well-being. More could also be done to engage affected stakeholders before AI systems are deployed in schools, health care, and so on. We also need to take a systemic approach, where we’re looking at the relationship between AI and all the other societal challenges we face.

It’s also worth thinking about building communities that are resilient to this broad range of crises. AI might play a role by fostering more human connection or providing a framework for deliberation on really challenging issues. But I don’t think there’s a technical fix alone. We need to have a much broader view of how we build resilient communities that can deal with societal challenges.


Benjamin Boudreaux is a policy researcher who studies the intersection of ethics, emerging technology, and security.

Jonathan Welburn is a senior researcher who studies emerging systemic risks, cyber deterrence, and market failures.

Jeff Alstott is a senior information scientist and directs the RAND Center for Technology and Security Policy.

Nidhi Kalra is a senior information scientist whose research examines climate change mitigation, adaptation, and decarbonization planning, as well as decisionmaking amid deep uncertainty.

Edward Geist is a policy researcher whose interests include Russia, civil defense, AI, and the potential effect of emerging technologies on nuclear strategy.

Special thanks to Anu Narayanan, associate director of the RAND National Security Research Division, who moderated the discussion, and to Gary Briggs, who organized this event. Excerpts presented here were edited for length and clarity.

Article link: https://www.rand.org/pubs/commentary/2024/03/is-ai-an-existential-risk-qa-with-rand-experts.html?

US Federal LCNC Subcommunity of Practice Session on March 27th

Posted by timmreardon on 03/14/2024
Posted in: Uncategorized.

Join in our Next US Federal LCNC Subcommunity of Practice Session on March 27th and Embark on the Next Chapter of Innovation with Us!

US Federal IT Leaders, Low-Code/No-Code Program Managers, and Industry Partners:

We have gathered momentum since our inaugural US Federal LCNC Subcommunity of Practice meeting in December! Following our kickoff, we spent the last couple of months crystallizing our collective vision, solidifying our committee goals, and agreeing on a robust set of roadmap initiatives that we are excited to move forward on.  In addition, we’re proud to share that we’re also celebrating multiple quick-wins! Some highlights include:

• Creating an Agency LCNC “Battle-Buddy” Mentorship Program where Agency “Mentees” can benefit from Mentor experiences to accelerate success.

• Establishing a shared artifact repository where Agencies can now contribute and leverage artifacts, including best practices, training materials, product knowledge, and more.

• Developing Cross-Agency relationships. The US Federal LCNC SCoP is successfully facilitating cross-Agency connections to empower members in their LCNC adoption and maturity efforts.

• Expanding Membership & Corporate Knowledge: Our LCNC SCoP is growing, and the expanding membership is increasing our community’s corporate knowledge around LCNC and acting as a force-multiplier toward positive outcomes.

The seeds of big ideas are taking root, and the enthusiasm and efforts of this group is nothing short of phenomenal!

With that said, we’re just two weeks away from our next U.S. Federal Chief Information Officers (CIO) Council-sponsored U.S. Federal LCNC Subcommunity of Practice full session, taking place on March 27, 2024. This session will feature:

• Through partnership with ATARC (Advanced Technology Academic Research Center), we’ll showcase the Appian Platform (we’re also slating others for future sessions)

• A compelling Agency LCNC implementation vignette presented by the FDA

• An insightful presentation from Gartner on Modernizing Your Application Stack with Low Code and No Code

Event Details are as follows:
 
📅 Date: March 27, 2024

🕒 Time: 0900-1200 EST

📍 Location: Virtual (provided upon registering)

To learn more and register, please visit: https://lnkd.in/gP_vyxfR

Help us in our effort to further expand Federal Agency participation by spreading the word! Agency LCNC representatives, Federal IT modernization champions, and Industry partners can register for Subcommittee membership by joining our listserv. To do this, send an email (from your government domain) to LCNC-subscribe-request@listserv.gsa.gov requesting US Federal Low Code No Code Subcommunity of Practice membership.

Let’s continue shaping the future of US Government IT modernization together through LCNC!

#oneteam #digitaltransformation #itmodernization #lowcode #lcnc #reinventinggovernment

VA, DOD, and FEHRM roll out Federal Electronic Health Record in North Chicago

Posted by timmreardon on 03/14/2024
Posted in: Uncategorized.

March 9, 2024 9:00 am

WASHINGTON — Today, the U.S. Department of Veterans Affairs, Department of Defense, and Federal Electronic Health Record Modernization office launched the Federal Electronic Health Record (EHR) at the Captain James A. Lovell Federal Health Care Center (Lovell FHCC) in North Chicago, Illinois. This is the first joint deployment of the federal EHR, which DOD calls MHS GENESIS, at a joint VA and Department of Defense (DOD) facility.

As the only fully integrated, jointly run VA and DOD health care system in the country, Lovell FHCC provides health care to approximately 75,000 patients each year, including Veterans, service members and their families, and Navy recruits. The joint deployment ensures that all patients who visit the facility will receive care that is coordinated through a single fully integrated EHR system. The Federal EHR also improves the ability for VA and DOD to coordinate care and share data with each other and the rest of the U.S. health care system.

“The Federal EHR will enhance care for all beneficiaries who walk through our doors, whether they are Veterans, Navy recruits, students, active-duty service members, their dependents, or retirees,” said Dr. Robert Buckley, Lovell FHCC Director. “It enables a continuum of care that will enhance our operations as we work to optimize health outcomes for those we serve.”

“This joint deployment of the Federal EHR at Lovell FHCC will provide a more coordinated experience for patients and the clinicians who care for them,” said Dr. Neil Evans, acting program executive director of the Electronic Health Record Modernization Integration Office. “Additionally, while VA continues with the broader reset of our electronic health record modernization program, we are learning lessons from this deployment to inform our future decisions.”

“The launch of the Federal EHR at Lovell FHCC will help DOD and VA deliver on the promise made to those who serve our country to provide seamless care from their first day of active service to the transition to veteran status,” said HON Lester Martinez-Lopez, Assistant Secretary of Defense for Health Affairs. “A joint electronic health record system demonstrates the power of technology to improve health care delivery, and we look forward to continued collaboration with our VA partners.”

“Our deployment of the Federal EHR at Lovell FHCC will significantly advance interoperability,” said Mr. Bill Tinston, FEHRM Director. “This will not only benefit the patients and staff in North Chicago, but all joint sites that need joint solutions to effectively deliver care.”

The new, modernized federal EHR will meaningfully improve patient health outcomes and benefits decisions. It is particularly critical at Lovell FHCC, so all patients – and clinicians – at the facility can utilize one EHR system. The DOD has now completed installation of the federal EHR at all its garrison hospitals and clinics throughout the world.

VA is moving forward with deployment at Lovell FHCC despite pausing all other deployments under a reset of its Electronic Health Record Modernization (EHRM) program. VA continues to closely examine the issues that clinicians and other end users are experiencing at its sites using the Federal EHR and is developing the success criteria to determine when to exit the program reset and restart deployments at other facilities. As the first deployment at a larger, more complex VA health care facility, the experience at Lovell FHCC will help inform these decisions. Additional deployments will not be scheduled until VA is confident that the new EHR is highly functioning at all current sites and ready to deliver for Veterans and VA clinicians at future sites.

For more information about VA’s overall EHR modernization effort, visit https://www.ehrm.va.gov/. For more information about DOD’s effort, visit https://www.health.mil/MHSGENESIS. For more information about the Federal EHR, visit https://www.FEHRM.gov.

Article link: https://news.va.gov/press-room/va-dod-fehrm-launch-ehr-lovell-health-care/

NSA releases zero-trust guidance to limit adversaries on the network

Posted by timmreardon on 03/14/2024
Posted in: Uncategorized.

The National Security Agency (NSA) has released a Cybersecurity Information Sheet (CSI) that details curtailing adversarial lateral movement within an organization’s network to access sensitive data and critical systems. The CSI, entitled “Advancing Zero Trust Maturity Throughout the Network and Environment Pillar,” provides guidance on how to strengthen internal network control and contain network intrusions to a segmented portion of the network using Zero Trust principles.

“Organizations need to operate with a mindset that threats exist within the boundaries of their systems,” said NSA Cybersecurity Director Rob Joyce. “This guidance is intended to arm network owners and operators with the processes they need to vigilantly resist, detect, and respond to threats that exploit weaknesses or gaps in their enterprise architecture.”

The network and environment pillar – one of seven pillars that make up the Zero Trust framework – isolates critical resources from unauthorized access by defining network access, controlling network and data flows, segmenting applications and workloads, and using end-to-end encryption, according to the CSI.

The CSI outlines the key capabilities of the network and environment pillar, including data flow mapping, macro and micro segmentation, and software defined networking.
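A toy sketch of the micro-segmentation idea (my own illustration; the segments, ports, and rules are hypothetical, not from the CSI): traffic between workloads is denied by default, and only explicitly allow-listed flows pass, which is what contains an intruder to one segment of the network.

```python
# Default-deny flow policy: a toy illustration of micro-segmentation.
# Segment names, ports, and rules here are hypothetical.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),   # web servers may call the app API
    ("app-tier", "db-tier", 5432),    # app servers may reach the database
}

def is_permitted(src_segment: str, dst_segment: str, port: int) -> bool:
    """Zero Trust default: deny unless the flow is explicitly allowed."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

print(is_permitted("web-tier", "app-tier", 8443))  # True
print(is_permitted("web-tier", "db-tier", 5432))   # False: lateral movement blocked
```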

NSA is assisting DoD customers in piloting Zero Trust systems and is developing additional Zero Trust guidance for incorporating Zero Trust principles and designs into enterprise networks. This guidance expands on NSA’s previously released CSIs, “Embracing a Zero Trust Security Model,” “Advancing Zero Trust Maturity Throughout the User Pillar,” and “Advancing Zero Trust Maturity Throughout the Device Pillar.”

Read the full report here.

Article link: https://www.linkedin.com/pulse/nsa-releases-zero-trust-guidance-limit-adversaries-3l9fe 

VAST Data’s NVIDIA DPU-Based AI Cloud Architecture – Forbes

Posted by timmreardon on 03/12/2024
Posted in: Uncategorized.

Steve McDowell, Contributor

Chief Analyst & CEO, NAND Research

Mar 12, 2024, 05:15pm EDT

VAST Data introduced a new AI cloud architecture based on Nvidia’s BlueField-3 DPU technology. The architecture is designed to improve performance, security, and efficiency for AI data services. The approach seeks to enhance data center operations and introduce a secure, zero-trust environment by integrating storage and database processing into AI servers.

Nvidia DPUs Remove the Bottleneck

VAST Data is leveraging Nvidia’s BlueField-3 DPU to innovate within its AI cloud solution. A DPU is a specialized processor designed to offload, accelerate, and isolate data center workloads, enabling higher performance, increased security, and more efficient data processing.

VAST disaggregates its resources into Nvidia BlueField-3 DPUs. This means that the DPU takes over certain data processing tasks traditionally handled by the server, such as networking, security, and storage operations. By offloading these functions to the DPU, VAST can reduce the load on the main CPU, allowing it to focus on AI and machine learning computations.

Here’s how it works: using the Nvidia BlueField-3 DPU, VAST creates a parallel system architecture where storage and database processing services are embedded directly into AI servers.

This setup provides a dedicated, stateless container for each GPU server running the VAST parallel services operating system. It promotes true linear scalability of data services across a vast number of GPUs without the bottlenecks typically introduced by traditional x86 hardware and networking layers.
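A rough sketch of why that scales linearly (a toy model of my own, not VAST’s code): because each GPU server carries its own stateless data-service container, aggregate data-service capacity grows with the server count, with no shared controller to contend on.

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of the disaggregated design: one stateless data-service
# container per GPU server. Class names and throughput numbers are
# illustrative only, not VAST's implementation.
@dataclass
class DataServiceContainer:
    host: str
    throughput_gbps: int = 10   # hypothetical per-container capacity

@dataclass
class GpuServer:
    name: str
    data_service: Optional[DataServiceContainer] = None

cluster = []
for i in range(4):
    server = GpuServer(name=f"gpu-{i}")
    # The data-service container runs on the server's DPU, not its CPUs.
    server.data_service = DataServiceContainer(host=server.name)
    cluster.append(server)

# No shared controller to contend on: aggregate capacity is simply the
# sum of the per-server containers, so it grows linearly with server count.
total = sum(s.data_service.throughput_gbps for s in cluster)
print(f"{len(cluster)} servers -> {total} Gb/s of data services")
```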

By removing the dependency on multiple layers of traditional hardware and leveraging the processing power of the DPU, VAST’s network-attached Data Platform infrastructure becomes significantly more efficient. This efficiency translates into what VAST tells us is a 70% reduction in the power usage and data center footprint for VAST infrastructure, contributing to overall energy consumption savings.

The approach also yields a nice benefit for GPU cloud providers with multi-tenant environments. With VAST’s zero-trust security model, the DPU enables data isolation and data management from the host operating system. By hosting data services on the DPU and utilizing standard client protocols, VAST minimizes potential attack vectors and ensures that data remains secure.

Analyst’s Take

When Nvidia launched its first BlueField DPU, based on technology from its acquisition of Mellanox, the industry saw it as just another intelligent network adapter. It could offload expensive storage and networking tasks such as deep packet inspection or compression. But Nvidia proved that the accelerator is capable of much more.

Not long after Nvidia launched BlueField, VMware (now called “VMware by Broadcom”) took things a step further. It demonstrated that properly designed infrastructure software could leverage an Nvidia BlueField DPU to significantly boost overall system performance. In its vSphere 8.0 release, VMware moved critical elements of its vSphere Distributed Switch and NSX networking and observability stack to Nvidia’s DPU. VAST Data is now taking a similar approach.

The move towards a disaggregated computing model facilitated by the DPU technology is a significant departure from traditional, monolithic designs. By embedding the entirety of VAST’s operating system natively into an AI cluster, VAST capitalizes on the inherent strengths of Nvidia’s BlueField-3 DPUs and effectively transforms supercomputers into highly specialized AI data engines. This is a significant step towards removing storage bottlenecks in AI and similarly performance-sensitive environments.

Beyond offload, VAST’s zero-trust security model is a critical element. Today, AI training is often a “cloud first” environment, with organizations using GPU cloud providers to train models. VAST Data excels in this market, partnering with top-tier providers like Lambda, CoreWeave, and Core42. Multi-tenant environments like these require a robust and hardware-enforced security model, such as the one VAST Data delivers with its DPU-based architecture.

Large AI clusters are already moving away from traditional storage solutions that struggle to keep up with the increasing scale and performance required for AI workloads. In this market, VAST Data competes with companies like WEKA, which is also finding solid success in the GPU cloud market, and parallel file systems like IBM’s GPFS and the open-source Lustre.

The approach taken by VAST Data and Nvidia is a significant leap forward in optimizing data services for the unique demands of AI. Leveraging DPUs to remove performance bottlenecks in the data path is a significant differentiator for VAST Data as it competes in this hyper-competitive environment. With this announcement, VAST delivers a compelling and possibly game-changing solution for high-performance data.

Disclosure: Steve McDowell is an industry analyst, and NAND Research is an industry analyst firm that engages in, or has engaged in, research, analysis and advisory services with many technology companies, including those mentioned in this article. Mr. McDowell does not hold any equity positions with any company mentioned in this article.

Article link: https://www-forbes-com.cdn.ampproject.org/c/s/www.forbes.com/sites/stevemcdowell/2024/03/12/vast-datas-nvidia-dpu-based-ai-cloud-architecture/amp/

Building the Quantum Workforce

Posted by timmreardon on 03/12/2024
Posted in: Uncategorized.

A DISCUSSION OF

Inviting Millions Into the Era of Quantum Technologies

BY SEAN DUDLEY, MARISA BRAZIL

READ RESPONSES FROM
  • Bradley Holt 
  • Kiera Peltz 
  • Lincoln D. Carr 
  • Daniel J. Rozell 

In “Inviting Millions Into the Era of Quantum Technologies” (Issues, Fall 2023), Sean Dudley and Marisa Brazil convincingly argue that the lack of a qualified workforce is holding back this field from reaching its promising potential. We at IBM Quantum agree. Without intervention, the nation risks developing useful quantum computing alongside a scarcity of practitioners capable of using quantum computers. An IBM Institute for Business Value study found that inadequate skills are the top barrier to enterprises adopting quantum computing. The study identified a small subset of quantum-ready organizations that are talent nurturers with a greater understanding of the quantum skills gap, and that are nearly three times more effective than their cohorts at workforce development.

Quantum-ready organizations are nearly five times more effective at developing internal quantum skills, nearly twice as effective at attracting talented workers in science, technology, engineering, and mathematics, and nearly three times more effective at running internship programs. At IBM Quantum, we have directly trained more than 400 interns at all levels of higher education and have seen over 8 million learner interactions with Qiskit, including a series of online seminars on using the open-source Qiskit tool kit for useful quantum computing. However, quantum-ready organizations represent only a small fraction of the organizations and industries that need to prepare for the growth of their quantum workforce.

As we enter the era of quantum utility, meaning the ability for quantum computers to solve problems at a scale beyond brute-force classical simulation, we need a focused workforce capable of discovering the problems quantum computing is best-suited to solve. As we move even further toward the age of quantum-centric supercomputing, we will need a larger workforce capable of orchestrating quantum and classical computational resources in order to address domain-specific problems.

Looking to academia, we need more quantum-ready institutions that are effective not only at teaching advanced mathematics, quantum physics, and quantum algorithms, but also at teaching domain-specific skills such as machine learning, chemistry, materials, or optimization, along with how to utilize quantum computing as a tool for scientific discovery.


Critically, it is imperative to invest in talent early on. The data on physics PhDs granted by race and ethnicity in the United States paint a stark picture. Industry cannot wait until students have graduated and are knocking on company doors to begin developing a talent pipeline. IBM Quantum has made a significant investment in the IBM-HBCU Quantum Center, through which we collaborate with more than two dozen historically Black colleges and universities to prepare talent for the quantum future.

Academia needs to become more effective in supporting quantum research (including cultivating student contributions) and partnering with industry, in connecting students into internships and career opportunities, and in attracting students into the field of quantum. Quoting Charles Tahan, director of the National Quantum Coordination Office within the White House Office of Science and Technology Policy: “We need to get quantum computing test beds that students can learn in at a thousand schools, not 20 schools.”

Rensselaer Polytechnic Institute and IBM broke ground on the first IBM Quantum System One on a university campus in October 2023. This presents the RPI community with an unprecedented opportunity to learn and conduct research on a system powered by a utility-scale 127-qubit processor capable of tackling problems beyond the capabilities of classical computers. And as the lead organizer of the Quantum Collaborative, Arizona State University—using IBM and other industry quantum computing resources—is working with other academic institutions to provide training and educational pathways from high schools and community colleges through to undergraduate and graduate studies in the field of quantum.

Our hope is that these actions will prove to be only part of a broader effort to build the quantum workforce that science, industry, and the nation will need in years to come.

BRADLEY HOLT

IBM Quantum

Program Director, Global Skills Development

Sean Dudley and Marisa Brazil advocate for mounting a national workforce development effort to address the growing talent gap in the field. This effort, they argue, should include educating and training a range of learners, including K–12 students, community college students, and workers outside of science and technology fields, such as marketers and designers. As the field will require developers, advocates, and regulators—as well as users—with varying levels of quantum knowledge, the authors’ comprehensive and inclusive approach to building a competitive quantum workforce is refreshing and justified.

At Qubit by Qubit, founded by the Coding School and one of the largest quantum education initiatives, we have spent the past four years training over 25,000 K–12 and college students, educators, and members of the workforce in quantum information science and technology (QIST). In collaboration with school districts, community colleges and universities, and companies, we have found great excitement among all these stakeholders for QIST education. However, as Dudley and Brazil note, there is an urgent need for policymakers and funders to act now to turn this collective excitement into action.


The authors posit that the development of a robust quantum workforce will help position the United States as a leader of Quantum 2.0, the next iteration of the quantum revolution. Our work suggests that investing in quantum education will not only benefit the field of QIST, but will result in a much stronger workforce at large. Given the interdisciplinary nature of QIST, learners gain exposure and skills in mathematics, computer science, physics, and engineering, among other fields. Thus, even learners who choose not to pursue a career in quantum will have a broad set of highly sought skills that they can apply to another field offering a rewarding future.

With the complexity of quantum technologies, there are a number of challenges in building a diverse quantum workforce. Dudley and Brazil highlight several of these, including the concentration of training programs in highly resourced institutions, and the need to move beyond the current focus on physics and adopt a more interdisciplinary approach. There are several additional challenges that need to be considered and addressed if millions of Americans are to become quantum-literate, including:

  • Funding efforts have been focused on supporting pilot educational programs instead of scaling already successful programs, meaning that educational opportunities are not accessible widely.
  • Many educational programs are one-offs that leave students without clear next steps. Because of the complexity of the subject area, learning pathways need to be established for learners to continue developing critical skills.
  • Diversity, inclusion, and equity efforts have been minimal and will require concerted work between industry, academia, and government.

Historically, the United States has begun conversations around workforce development for emerging and deep technologies too late, and thus has failed to ensure the workforce at large is equipped with the necessary technical knowledge and skills to move these fields forward quickly. We have the opportunity to get it right this time and ensure that the United States is leading the development of responsible quantum technologies.

KIERA PELTZ

Executive Director, Qubit by Qubit

Founder and CEO, The Coding School

To create an exceptional quantum workforce and give all Americans a chance to discover the beauty of quantum information science and technology, to contribute meaningfully to the nation’s economic and national security, and to create much-needed bridges with other like-minded nations across the world as a counterbalance to the balkanization of science, we have to change how we are teaching quantum. Even today, five years after the National Quantum Initiative Act became law, the word “entanglement”—the key to the way quantum particles interact that makes quantum computing possible—does not appear in physics courses at many US universities. And there are perhaps only 10 to 20 schools offering quantum engineering education at any level, from undergraduate to graduate. Imagine the howls if this were the case with computer science.

The imminence of quantum technologies has motivated physicists—at least in some places—to reinvent their teaching, listening to and working with their engineering, computer science, materials science, chemistry, and mathematics colleagues to create a new kind of course. In 2020, these early experiments in retooling led to a convening of 500 quantum scientists and engineers to debate undergraduate quantum education. Building on success stories such as the quantum concepts course at Virginia Tech, we laid out a plan, published in IEEE Transactions on Education in 2022, to bridge the gap between the excitement around quantum computing generated in high school and the kind of advanced graduate research in quantum information that is really so astounding. The good news is that as Virginia Tech showed, quantum information can be taught with pictures and a little algebra to first-year college students. It’s also true at the community college level, which means the massive cohort of diverse engineers who start their careers there have a shot at inventing tomorrow’s quantum technologies.
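As a concrete taste of how simply the basics can be demonstrated, here is a minimal entanglement example using Qiskit, the open-source toolkit mentioned earlier (a sketch assuming Qiskit 1.x with the Aer simulator installed): a two-qubit Bell pair whose measurement outcomes come out perfectly correlated.

```python
# Minimal Bell-pair demo; assumes `pip install qiskit qiskit-aer` (Qiskit 1.x).
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into an equal superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0
qc.measure_all()

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)  # roughly half '00' and half '11'; '01'/'10' never appear
```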


However, there are significant missing pieces. For one, there are almost no community college opportunities to learn quantum anything because such efforts are not funded at any significant level. For another, although we know how to teach the most speculative area of quantum information, namely quantum computing, to engineers, and even to new students, we really don’t know how to do that for quantum sensing, which allows us to do position, navigation, and timing without resorting to our fragile GPS system, and to measure new space-time scales in the brain without MRI, to name two of many applications. It is the most advanced area of quantum information, with successful field tests and products on the market now, yet we are currently implementing quantum engineering courses focused on a quantum computing outcome that may be a decade or more away.

How can we solve the dearth of quantum engineers? First, universities and industry can play a major role by working together—and several such collective efforts are showing the way. Arizona State University’s Quantum Collaborative is one such example. The Quantum consortium in Colorado, New Mexico, and Wyoming recently received a preliminary grant from the US Economic Development Administration to help advance both quantum development and education programs, including at community colleges, in their regions. Such efforts should be funded and expanded and the lessons they provide should be promulgated nationwide. Second, we need to teach engineers what actually works. This means incorporating quantum sensing from the outset in all budding quantum engineering education systems, building on already deployed technologies. And third, we need to recognize that much of the nation’s quantum physics education is badly out of date and start modernizing it, just as we are now modernizing engineering and computer science education with quantum content.

LINCOLN D. CARR

Quantum Engineering Program and Department of Physics

Colorado School of Mines

Preparing a skilled workforce for emerging technologies can be challenging. Training moves at the scale of years while technology development can proceed much faster or slower, creating timing issues. Thus, Sean Dudley and Marisa Brazil deserve credit for addressing the difficult topic of preparing a future quantum workforce.

At the heart of these discussions are the current efforts to move beyond Quantum 1.0 technologies that make use of quantum mechanical properties (e.g., lasers, semiconductors, and magnetic resonance imaging) to Quantum 2.0 technologies that more actively manipulate quantum states and effects (e.g., quantum computers and quantum sensors). With this focus on ramping up a skilled workforce, it is useful to pause and look at the underlying assumption that the quantum workforce requires active management.

In their analysis, Dudley and Brazil cite a report by McKinsey & Company, a global management consulting firm, which found that three quantum technology jobs exist for every qualified candidate. While this sounds like a major talent shortage, the statistic is less alarming in absolute numbers: because the field is still small, the gap amounts to fewer than 600 workers. And the shortage exists only if graduates with explicit Quantum 2.0 degrees are counted as the sole qualified candidates.

McKinsey recommended closing this gap by upskilling graduates in related disciplines. Considering that 600 workers represent about 33% of the physics PhDs, 2% of the electrical engineers, or 1% of the mechanical engineers graduating annually in the United States, this seems a reasonable solution. However, employers tend to be conservative in their hiring and often pass over otherwise capable applicants who haven’t already demonstrated proficiency in the desired skills; hiring “close-enough” candidates tends to occur only when employers feel substantial pressure to fill positions. Judging from anecdotal quantum computing discussions, that isn’t happening yet, which suggests employers can still afford to be selective. As Ron Hira notes in “Is There Really a STEM Workforce Shortage?” (Issues, Summer 2022), shortages are best measured by wage growth, and if such price signals exist, one should expect students and workers to respond accordingly.
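
As a rough sanity check on these figures, here is a back-of-envelope sketch in Python (my arithmetic, not the letter’s). It takes the cited 3-to-1 jobs-to-candidates ratio and the sub-600-worker gap at face value, and the graduate pools are inferred from the letter’s own percentages rather than from independent data.

# Back-of-envelope check of the quantum workforce figures cited above.
# Assumptions: about 3 open jobs per qualified candidate, and a gap
# (jobs minus candidates) of roughly 600 workers.

gap = 600                  # approximate shortfall of qualified workers
jobs_per_candidate = 3     # McKinsey ratio as cited in the letter

# If jobs = 3 * candidates and jobs - candidates = gap,
# then 2 * candidates = gap:
candidates = gap / (jobs_per_candidate - 1)   # ~300 qualified candidates
jobs = jobs_per_candidate * candidates        # ~900 open positions

# Annual US graduate pools implied by the letter's percentages:
physics_phds = gap / 0.33   # ~1,800 physics PhDs per year
ee_grads = gap / 0.02       # ~30,000 electrical engineering graduates
me_grads = gap / 0.01       # ~60,000 mechanical engineering graduates

print(f"implied candidates ~{candidates:.0f}, open jobs ~{jobs:.0f}")
print(f"implied annual pools: ~{physics_phds:,.0f} physics PhDs, "
      f"~{ee_grads:,.0f} EEs, ~{me_grads:,.0f} MEs")

Even if the true gap were several times larger, it would remain small next to the annual flow of graduates in adjacent disciplines, which is the letter’s point.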

If the current quantum workforce shortage is uncertain, the future is more uncertain still. The size of the needed future quantum workforce depends on how Quantum 2.0 technologies develop. For example, semiconductors and MRI machines are both mature Quantum 1.0 technologies, yet the global semiconductor industry is a more than $500 billion business (in US dollars) while the global MRI business is about 100 times smaller, roughly $5 billion. If Quantum 2.0 technologies follow the specialized, lab-oriented MRI model, then workforce requirements could be more modest than many projections assume. More likely is a mixed market in which technologies such as quantum sensors, which have many applications and are closer to commercialization, capture a larger near-term market while quantum computers remain a complex niche technology for many years. The details are difficult to predict, but they will dictate workforce needs.

When we assume that rapid expansion of the quantum workforce is essential for preventing an innovation bottleneck, we are left with the common call to actively expand diversity and training opportunities outside of elite institutions—a great idea, but maybe the right answer to the wrong question. And misreading technological trends is not without consequences. Overproducing STEM workers benefits industry and academia, but not necessarily the workers themselves. If we prematurely attempt to put quantum computer labs in every high school and college, we may be setting up less-privileged students to pursue jobs that may not develop, equipped with skills that may not be easily transferred to other fields.

DANIEL J. ROZELL

Research Professor

Department of Technology and Society

Stony Brook University

CITE THIS ARTICLE

“Building the Quantum Workforce.” Issues in Science and Technology 40, no. 2 (Winter 2024).

Article link: https://issues.org/building-quantum-workforce-education-dudley-brazil-forum/
