healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

One Record Levels the Playing Field for Patients Needing Specialized Care – FEHRM

Posted by timmreardon on 01/11/2024
Posted in: Uncategorized.

Dr. Valerie Seabaugh, FEHRM Deputy Chief Health Informatics Officer

The Federal Electronic Health Record Modernization (FEHRM) office drives the delivery and optimization of the shared electronic health record (EHR) of the Department of Defense, Department of Veterans Affairs (VA), Department of Homeland Security’s U.S. Coast Guard and Department of Commerce’s National Oceanic and Atmospheric Administration. One of the obvious benefits of a unified health record is the portability of records. We will no longer ask Veterans with disabilities and mobility challenges to haul boxes of paper records with them to appointments.

However, a less discussed but highly beneficial outcome of having a single, common federal EHR is lowering barriers to specialized telemedicine care for federal health care beneficiaries. In the old systems, patient medical information was isolated in paper files or in a single instance of an outdated EHR tied to a local clinic or medical facility. Although the Joint Longitudinal Viewer provided cross-agency visibility into records, specialists' inability to access the siloed record to place orders or document care was a major barrier to coordinating subspecialty telemedicine care. If a beneficiary resided in an area without access to a needed subspecialist, the only way to reliably receive care was usually to travel to a referral medical facility.

With a single record, specialists have all the pertinent patient data at their fingertips regardless of patient location. Patients can receive telemedicine consultation from a subspecialist, many times without leaving their homes. Telehealth appointments using video technology lower barriers for patients to receive specialized care and drive federal health care toward an equitable standard of care regardless of a patient’s geographic location or mobility. For example, watch this Voice of a Veteran video to see how a Veteran in rural Arizona benefits from specialized endocrinology care with VA’s Tele-Diabetes Program.

Maintaining the federal EHR as an accurate and complete warehouse of medical information simplifies providing efficient, coordinated and convenient high-quality care. The FEHRM is at the forefront of lowering barriers to patient care by making it possible for today’s recruits to maintain a coherent lifetime record as they transition through their military career, from basic training all the way to becoming a Veteran beneficiary. The federal EHR drives health care toward an equitable future where patients receive the right care, unbounded from location and driven by choice.

Article link: https://www.linkedin.com/pulse/one-record-levels-playing-field-patients-needing-specialized-care-ybkne

Video: Health Care and High Reliability: A Cautionary Tale | The Joint Commission

Posted by timmreardon on 01/11/2024
Posted in: Uncategorized.

Joint Commission President Mark R. Chassin, M.D., FACP, M.P.P., M.P.H., discusses The Joint Commission’s efforts to accelerate high reliability.

https://www.jointcommission.org/resources/news-and-multimedia/video-resources/health-care-and-high-reliability-a-cautionary-tale/

DOD and VA Continue Making Progress Sharing Lovell FHCC Deployment Responsibilities – FEHRM

Posted by timmreardon on 01/10/2024
Posted in: Uncategorized.

Jan 8, 2024

The Enterprise Requirements Adjudication process identified additional capabilities to deploy the federal EHR at Lovell FHCC. The DoD Healthcare Management System Modernization Program Management Office's deployment methodology, processes and content provided a baseline of tested processes and materials for deployment at Lovell FHCC. VA methodologies were added to accomplish deployment activities related to VA-specific training, infrastructure and device installation, and support for system adoption by VA users. This kind of resource and expertise sharing is one of the major advantages of the shared federal EHR, given that training for all roles happens in the same system and all devices and network infrastructure support the same federal EHR.

Currently, VA is solely focused on Lovell FHCC’s EHR deployment with DOD. So far, DOD has deployed to more than 135 parent military treatment facilities, encompassing 3,000+ physical locations and 185,000+ users. USCG, U.S. Military Entrance Processing Command and NOAA have also deployed the federal EHR.

Article link: https://www.linkedin.com/pulse/dod-va-continue-making-progress-sharing-lovell-fhcc-deployment-responsibilities-vgqlf

We need a moonshot for computing – MIT Technology Review

Posted by timmreardon on 01/06/2024
Posted in: Uncategorized.


The US government aims to push microelectronics research forward. But maintaining competitiveness in the long term will require embracing uncertainty.

By Brady Helwig & PJ Maykish

December 28, 2023

In its final weeks, the Obama administration released a report that rippled through the federal science and technology community. Titled Ensuring Long-Term US Leadership in Semiconductors, it warned that as conventional ways of building chips brushed up against the laws of physics, the United States was at risk of losing its edge in the chip industry. Five and a half years later, in 2022, Congress and the White House collaborated to address that possibility by passing the CHIPS and Science Act—a bold venture patterned after the Manhattan Project, the Apollo program, and the Human Genome Project. Over the course of three administrations, the US government has begun to organize itself for the next era of computing.

Secretary of Commerce Gina Raimondo has gone so far as to directly compare the passage of CHIPS to President John F. Kennedy’s 1961 call to land a man on the moon. In doing so, she was evoking a US tradition of organizing the national innovation ecosystem to meet an audacious technological objective—one that the private sector alone could not reach. Before JFK’s announcement, there were organizational challenges and disagreement over the best path forward to ensure national competitiveness in space. Such is the pattern of technological ambitions left to their own timelines.

Setting national policy for technological development involves making trade-offs and grappling with unknown future issues. How does a government account for technological uncertainty? What will the nature of its interaction with the private sector be? And does it make more sense to focus on boosting competitiveness in the near term or to place big bets on potential breakthroughs? 

The CHIPS and Science Act designated $39 billion for bringing chip factories, or “fabs,” and their key suppliers back to the United States, with an additional $11 billion committed to microelectronics R&D. At the center of the R&D program would be the National Semiconductor Technology Center, or NSTC—envisioned as a national “center of excellence” that would bring the best of the innovation ecosystem together to invent the next generation of microelectronics.  

In the year and a half since, CHIPS programs and offices have been stood up, and chip fabrication facilities in Arizona, Texas, and Ohio have broken ground. But it is the CHIPS R&D program that has an opportunity to shape the future of the field. Ultimately, there is a choice to make in terms of national R&D goals: the US can adopt a conservative strategy that aims to preserve its lead for the next five years, or it can orient itself toward genuine computing moonshots. The way the NSTC is organized, and the technology programs it chooses to pursue, will determine whether the United States plays it safe or goes “all in.” 

Welcome to the day of reckoning

In 1965, the late Intel founder Gordon Moore famously predicted that the path forward for computing involved cramming more transistors, or tiny switches, onto flat silicon wafers. Extrapolating from the birth of the integrated circuit seven years earlier, Moore forecast that transistor count would double regularly while the cost per transistor fell. But Moore was not merely making a prediction. He was also prescribing a technological strategy (sometimes called “transistor scaling”): shrink transistors and pack them closer and closer together, and chips become faster and cheaper. This approach not only led to the rise of a $600 billion semiconductor industry but ushered the world into the digital age. 
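The doubling dynamic Moore described can be sketched numerically. This is an illustrative model only: the two-year doubling period and the starting transistor count below are assumptions for the sketch, not figures from the article (the historical cadence has varied between roughly 18 and 24 months).

```python
# Illustrative sketch of Moore's-law-style transistor scaling.
# The doubling period and starting count are assumptions, not article figures.

def transistors(start_count: int, years: float,
                doubling_period_years: float = 2.0) -> int:
    """Project transistor count after `years`, doubling every `doubling_period_years`."""
    return int(start_count * 2 ** (years / doubling_period_years))

# Starting from a hypothetical 2,300-transistor chip (roughly the Intel 4004 era):
print(transistors(2_300, 20))  # ten doublings over two decades: 2,355,200
```

The exponential form is the whole point of Moore's strategy: each fixed interval multiplies capacity, which is why the eventual physical limit arrives abruptly rather than gradually.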

Ever insightful, Moore did not expect that transistor scaling would last forever. He referred to the point when this miniaturization process would reach its physical limits as the “day of reckoning.” The chip industry is now very close to reaching that day, if it is not there already. Costs are skyrocketing and technical challenges are mounting. Industry road maps suggest that we may have only about 10 to 15 years before transistor scaling reaches its physical limits—and it may stop being profitable even before that. 

To keep chips advancing in the near term, the semiconductor industry has adopted a two-part strategy. On the one hand, it is building “accelerator” chips tailored for specific applications (such as AI inference and training) to speed computation. On the other, firms are building hardware from smaller functional components—called “chiplets”—to reduce costs and improve customizability. These chiplets can be arranged side by side or stacked on top of one another. The 3D approach could be an especially powerful means of improving speeds. 

This two-part strategy will help over the next 10 years or so, but it has long-term limits. For one thing, it continues to rely on the same transistor-building method that is currently reaching the end of the line. And even with 3D integration, we will continue to grapple with energy-hungry communication bottlenecks. It is unclear how long this approach will enable chipmakers to produce cheaper and more capable computers.  

Building an institutional home for moonshots

The clear path forward is to develop alternatives to conventional computing. There is no shortage of candidates, including quantum computing; neuromorphic computing, which mimics the operation of the brain in hardware; and reversible computing, which has the potential to push the energy efficiency of computing to its physical limits. And there are plenty of novel materials and devices that could be used to build future computers, such as silicon photonics, magnetic materials, and superconductor electronics. These possibilities could even be combined to form hybrid computing systems.

None of these potential technologies are new: researchers have been working on them for many years, and quantum computing is certainly making progress in the private sector. But only Washington brings the convening power and R&D dollars to help these novel systems achieve scale. Traditionally, breakthroughs in microelectronics have emerged piecemeal, but realizing new approaches to computation requires building an entirely new computing “stack”—from the hardware level up to the algorithms and software. This requires an approach that can rally the entire innovation ecosystem around clear objectives to tackle multiple technical problems in tandem and provide the kind of support needed to “de-risk” otherwise risky ventures.

Does it make more sense to focus on boosting competitiveness in the near term or to place big bets on potential breakthroughs?

The NSTC can drive these efforts. To be successful, it would do well to follow DARPA’s lead by focusing on moonshot programs. Its research program will need to be insulated from outside pressures. It also needs to foster visionaries, including program managers from industry and academia, and back them with a large in-house technical staff. 

The center’s investment fund also needs to be thoughtfully managed, drawing on best practices from existing blue-chip deep-tech investment funds, such as ensuring transparency through due-diligence practices and offering entrepreneurs access to tools, facilities, and training. 

It is still early days for the NSTC: the road to success may be long and winding. But this is a crucial moment for US leadership in computing and microelectronics. As we chart the path forward for the NSTC and other R&D priorities, we’ll need to think critically about what kinds of institutions we’ll need to get us there. We may not get another chance to get it right.

Brady Helwig is an associate director for economy and PJ Maykish is a senior advisor at the Special Competitive Studies Project, a private foundation focused on making recommendations to strengthen long-term US competitiveness.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/12/28/1084686/computing-microelectronics-chips-act/amp/

Responsible AI has a burnout problem – MIT Technology Review

Posted by timmreardon on 01/06/2024
Posted in: Uncategorized.

Companies say they want ethical AI. But those working in the field say that ambition comes at their expense.

By Melissa Heikkilä

October 28, 2022

Margaret Mitchell had been working at Google for two years before she realized she needed a break.

“I started having regular breakdowns,” says Mitchell, who founded and co-led the company’s Ethical AI team. “That was not something that I had ever experienced before.”

Only after she spoke with a therapist did she understand the problem: she was burnt out. She ended up taking medical leave because of stress. 

Mitchell, who now works as an AI researcher and chief ethics scientist at the AI startup Hugging Face, is far from alone in her experience. Burnout is becoming increasingly common in responsible-AI teams, says Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI consultant at Boston Consulting Group. 

Companies are under increasing pressure from regulators and activists to ensure that their AI products are developed in a way that mitigates any potential harms before they are released. In response, they have invested in teams that evaluate how our lives, societies, and political systems are affected by the way these systems are designed, developed, and deployed. 

Tech companies such as Meta have been forced by courts to offer compensation and extra mental-health support for employees such as content moderators, who often have to sift through graphic and violent content that can be traumatizing. 

But teams who work on responsible AI are often left to fend for themselves, employees told MIT Technology Review, even though the work can be just as psychologically draining as content moderation. Ultimately, this can leave people in these teams feeling undervalued, which can affect their mental health and lead to burnout.

Rumman Chowdhury, who leads Twitter’s Machine Learning Ethics, Transparency, and Accountability team and is another pioneer in applied AI ethics, faced that problem in a previous role. 

“I burned out really hard at one point. And [the situation] just kind of felt hopeless,” she says. 

All the practitioners MIT Technology Review interviewed spoke enthusiastically about their work: it is fueled by passion, a sense of urgency, and the satisfaction of building solutions for real problems. But that sense of mission can be overwhelming without the right support.

“It almost feels like you can’t take a break,” Chowdhury says. “There is a swath of people who work in tech companies whose job it is to protect people on the platform. And there is this feeling like if I take a vacation, or if I am not paying attention 24/7, something really bad is going to happen.”  

Mitchell continues to work in AI ethics, she says, “because there’s such a need for it, and it’s so clear, and so few people see it who are actually in machine learning.” 

But there are plenty of challenges. Organizations place huge pressure on individuals to fix big, systemic problems without proper support, while they often face a near-constant barrage of aggressive criticism online. 

Cognitive dissonance

The role of an AI ethicist or someone in a responsible-AI team varies widely, ranging from analyzing the societal effects of AI systems to developing responsible strategies and policies to fixing technical issues. Typically, these workers are also tasked with coming up with ways to mitigate AI harms, from algorithms that spread hate speech to systems that allocate things like housing and benefits in a discriminatory way to the spread of graphic and violent images and language. 

Trying to fix deeply ingrained issues such as racism, sexism, and discrimination in AI systems might, for example, involve analyzing large data sets that include extremely toxic content, such as rape scenes and racial slurs.

AI systems often reflect and exacerbate the worst problems in our societies, such as racism and sexism. The problematic technologies range from facial recognition systems that classify Black people as gorillas to deepfake software used to make porn videos appearing to feature women who have not consented. Dealing with these issues can be especially taxing to women, people of color, and other marginalized groups, who tend to gravitate toward AI ethics jobs. 

And while burnout is not unique to people working in responsible AI, all the experts MIT Technology Review spoke to said they face particularly tricky challenges in that area. 

“You are working on a thing that you’re very personally harmed by day to day,” Mitchell says. “It makes the reality of discrimination even worse because you can’t ignore it.”

But despite growing mainstream awareness about the risks AI poses, ethicists still find themselves fighting to be recognized by colleagues in the AI field. 

Some even disparage the work of AI ethicists. Stability AI’s CEO, Emad Mostaque, whose startup built the open-source text-to-image AI Stable Diffusion, said in a tweet that ethics debates around his technology are “paternalistic.” Neither Mostaque nor Stability AI replied to MIT Technology Review’s request for comment by the time of publishing.

“People working in the AI field are mostly engineers. They’re not really open to humanities,” says Emmanuel Goffi, an AI ethicist and founder of the Global AI Ethics Institute, a think tank. 

Companies want a quick technical fix, Goffi says; they want someone to “explain to them how to be ethical through a PowerPoint with three slides and four bullet points.” Ethical thinking needs to go deeper, and it should be applied to how the whole organization functions, Goffi adds.  

“Psychologically, the most difficult part is that you have to make compromises every day—every minute—between what you believe in and what you have to do,” he says. 

The attitude of tech companies generally, and machine-learning teams in particular, compounds the problem, Mitchell says. “Not only do you have to work on these hard problems; you have to prove that they’re worth working on. So it’s completely the opposite of support. It’s pushback.” 

Chowdhury adds, “There are people who think ethics is a worthless field and that we’re negative about the progress [of AI].” 

Social media also makes it easy for critics to pile on researchers. Chowdhury says there’s no point in engaging with people who don’t value what they do, “but it’s hard not to if you’re getting tagged or specifically attacked, or your work is being brought up.”

Breakneck speed

The rapid pace of artificial-intelligence research doesn’t help either. New breakthroughs come thick and fast. In the past year alone, tech companies have unveiled AI systems that generate images from text, only to announce—just weeks later—even more impressive AI software that can create videos from text alone too. That’s impressive progress, but the harms potentially associated with each new breakthrough can pose a relentless challenge. Text-to-image AI could violate copyrights, and it might be trained on data sets full of toxic material, leading to unsafe outcomes. 

“Chasing whatever’s really trendy, the hot-button issue on Twitter, is exhausting,” Chowdhury says. Ethicists can’t be experts on the myriad different problems that every single new breakthrough poses, she says, yet she still feels she has to keep up with every twist and turn of the AI information cycle for fear of missing something important. 

Chowdhury says that working as part of a well-resourced team at Twitter has helped, reassuring her that she does not have to bear the burden alone. “I know that I can go away for a week and things won’t fall apart, because I’m not the only person doing it,” she says. 

But Chowdhury works at a big tech company with the funds and desire to hire an entire team to work on responsible AI. Not everyone is as lucky. 

People at smaller AI startups face a lot of pressure from venture capital investors to grow the business, and the checks written under contracts with investors often don't reflect the extra work that is required to build responsible tech, says Vivek Katial, a data scientist at Multitudes, an Australian startup working on ethical data analytics.

The tech sector should demand more from venture capitalists to “recognize the fact that they need to pay more for technology that’s going to be more responsible,” Katial says. 

The trouble is, many companies can’t even see that they have a problem to begin with, according to a report released by MIT Sloan Management Review and Boston Consulting Group this year. AI was a top strategic priority for 42% of the report’s respondents, but only 19% said their organization had implemented a responsible-AI program. 

Some may believe they’re giving thought to mitigating AI’s risks, but they simply aren’t hiring the right people into the right roles and then giving them the resources they need to put responsible AI into practice, says Gupta.

“That’s where people start to experience frustration and experience burnout,” he adds. 

Growing demand

Before long, companies may not have much choice about whether they back up their words on ethical AI with action, because regulators are starting to introduce AI-specific laws. 

The EU’s upcoming AI Act and AI liability law will require companies to document how they are mitigating harms. In the US, lawmakers in New York, California, and elsewhere are working on regulation for the use of AI in high-risk sectors such as employment. In early October, the White House unveiled the AI Bill of Rights, which lays out five rights Americans should have when it comes to automated systems. The bill is likely to spur federal agencies to increase their scrutiny of AI systems and companies. 

And while the volatile global economy has led many tech companies to freeze hiring and threaten major layoffs, responsible-AI teams have arguably never been more important, because rolling out unsafe or illegal AI systems could expose the company to huge fines or requirements to delete their algorithms. For example, last spring the US Federal Trade Commission forced Weight Watchers to delete its algorithms after the company was found to have illegally collected data on children. Developing AI models and collecting databases are significant investments for companies, and being forced by a regulator to completely delete them is a big blow. 

Burnout and a persistent sense of being undervalued could lead people to leave the field entirely, which could harm the field of AI governance and ethics research as a whole. It’s especially risky given that those with the most experience in solving and addressing harms caused by an organization’s AI may be the most exhausted. 

“The loss of just one person has massive ramifications across entire organizations,” Mitchell says, because the expertise someone has accumulated is extremely hard to replace. In late 2020, Google sacked its ethical AI co-lead Timnit Gebru, and it fired Mitchell a few months later. Several other members of its responsible-AI team left in the space of just a few months.

Gupta says this kind of brain drain poses a “severe risk” to progress in AI ethics and makes it harder for companies to adhere to their programs. 

Last year, Google announced it was doubling its research staff devoted to AI ethics, but it has not commented on its progress since. The company told MIT Technology Review it offers training on mental-health resilience, has a peer-to-peer mental-health support initiative, and gives employees access to digital tools to help with mindfulness. It can also connect them with mental-health providers virtually. It did not respond to questions about Mitchell’s time at the company.

Meta said it has invested in benefits like a program that gives employees and their families access to 25 free therapy sessions each year. And Twitter said it offers employee counseling and coaching sessions and burnout prevention training. The company also has a peer-support program focused on mental health. None of the companies said they offered support tailored specifically for AI ethics.

As the demand for AI compliance and risk management grows, tech executives need to ensure that they’re investing enough in responsible-AI programs, says Gupta. 

Change starts from the very top. “Executives need to speak with their dollars, their time, their resources, that they’re allocating to this,” he says. Otherwise, people working on ethical AI “are set up for failure.” 

Successful responsible-AI teams need enough tools, resources, and people to work on problems, but they also need agency, connections across the organization, and the power to enact the changes they’re being asked to make, Gupta adds.

A lot of mental-health resources at tech companies center on time management and work-life balance, but more support is needed for people who work on emotionally and psychologically jarring topics, Chowdhury says. Mental-health resources specifically for people working on responsible tech would also help, she adds. 

“There hasn’t been a recognition of the effects of working on this kind of thing, and definitely no support or encouragement for detaching yourself from it,” Mitchell says.

“The only mechanism that big tech companies have to handle the reality of this is to ignore the reality of it.”

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2022/10/28/1062332/responsible-ai-has-a-burnout-problem/amp/

Video: Geoffrey Hinton talks about the “existential threat” of AI – MIT Technology Review

Posted by timmreardon on 12/29/2023
Posted in: Uncategorized.

Watch Hinton speak with Will Douglas Heaven, MIT Technology Review’s senior editor for AI, at EmTech Digital.

By The Editors

May 3, 2023

Deep learning pioneer Geoffrey Hinton announced on Monday that he was stepping down from his role as a Google AI researcher after a decade with the company. He says he wants to speak freely as he grows increasingly worried about the potential harms of artificial intelligence. Prior to the announcement, Will Douglas Heaven, MIT Technology Review’s senior editor for AI, interviewed Hinton about his concerns—read the full story here. 

Soon after, the two spoke at EmTech Digital, MIT Technology Review’s signature AI event. “I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence,” Hinton said. You can watch their full conversation below.

https://www.technologyreview.com/2023/05/03/1072589/video-geoffrey-hinton-google-ai-risk-ethics/

NAICS Codes: “Good Enough For Government Work” Doesn’t Cut It – GovCon Roundup

Posted by timmreardon on 12/29/2023
Posted in: Uncategorized.

In small business and socioeconomic set-aside solicitations, the North American Industry Classification System code is like a burly, tattooed bouncer standing beside a velvet rope, deciding who’s cool enough to come inside. (For some reason, my attempts to demonstrate my coolness by discussing the FAR always seem to fall flat). The agency’s NAICS code selection is critical because the NAICS code determines the applicable size standard–and by extension, the competitive playing field.

Steven Koprince | Dec 28, 2023

A mostly overlooked decision by the U.S. Court of Federal Claims earlier this year emphasizes that when a Contracting Officer selects a NAICS code, “good enough for government work” doesn’t cut it. Instead, a Contracting Officer must pick the single best NAICS code for the work, using a process outlined in the SBA’s regulations.

The court’s decision in Consolidated Safety Services, Inc. v. United States, No. 23-521C (2023) involved a NOAA solicitation issued as a small business set-aside. The NOAA Contracting Officer assigned NAICS code 541620 (Environmental Consulting Services) with a corresponding $19 million size standard.

Consolidated Safety Services, Inc., which did not qualify as a small business under the $19 million standard, challenged the NAICS code. CSS argued that the appropriate code was NAICS code 541715 (Research and Development in the Physical, Engineering and Life Sciences) with a corresponding 1,000-employee size standard, under which CSS qualified as a small business.

(Clearly, the competitive playing field is completely different under a $19 million size standard than under a 1,000 employee size standard. The six little digits comprising a NAICS code are powerful–which is why I’m surprised not to see contractors filing many more NAICS code appeals).
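How much the assigned code matters can be made concrete with a toy eligibility check. The two thresholds mirror the size standards at issue in the case; the example firm's receipts and headcount are hypothetical:

```python
# Toy sketch: the assigned NAICS code's size standard determines who counts
# as "small." Thresholds echo the case; the example firm is hypothetical.

def small_under_receipts(avg_annual_receipts: float,
                         cap: float = 19_000_000) -> bool:
    """Receipts-based standard, e.g. NAICS 541620's $19M cap."""
    return avg_annual_receipts <= cap

def small_under_employees(employee_count: int, cap: int = 1_000) -> bool:
    """Employee-based standard, e.g. NAICS 541715's 1,000-employee cap."""
    return employee_count <= cap

# A hypothetical firm with $45M in receipts but only 600 employees:
print(small_under_receipts(45_000_000))  # False: too large under the $19M standard
print(small_under_employees(600))        # True: small under the 1,000-employee standard
```

The same firm can be "other than small" under one code and "small" under another, which is exactly why the code selection reshapes the competitive field.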

CSS filed its NAICS code appeal with the SBA’s Office of Hearings and Appeals. CSS argued that the services being solicited by NOAA “predominantly entail performing research, not advising the agency about research.” Accordingly, CSS argued, the correct NAICS code was the R&D code, 541715, not the consulting code, 541620.

SBA OHA denied CSS’s NAICS code appeal, holding that CSS had not met its burden of demonstrating that the Contracting Officer’s chosen NAICS code was erroneous. Undeterred, CSS filed an appeal with the Court of Federal Claims, challenging SBA OHA’s decision, and by extension, the Contracting Officer’s underlying NAICS code selection.

The Court wrote that, under the SBA’s regulations, a Contracting Officer must designate “the single NAICS code which best describes the principal purpose of the product or service being acquired.” (FAR 19.102 imposes a similar requirement). That is, with apologies to the late, great Tina Turner, not just any ol’ NAICS code will do–the assigned NAICS code must be simply the best.

In this regard, the court wrote, an agency “cannot justify its selection of a particular NAICS code on the basis that it fits the procurement in some general sense or that the selection is ‘good enough for government work.'” Quoting from the SBA’s regulations, the Court explained that, when selecting a NAICS code:

Primary consideration is given to the industry descriptions in the U.S. NAICS Manual, the product or service description in the solicitation and any attachments to it, the relative value and importance of the components of the procurement making up the end item being procured, and the function of the goods or services being purchased.

Further, the SBA’s regulations specify that “a procurement is generally classified according to the component which accounts for the greatest percentage of contract value.” These regulatory requirements, the court wrote, “are the substantive yardsticks the Court must use to assess the agency’s NAICS code selection and the subsequent OHA Decision.”
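The "greatest percentage of contract value" rule reduces to a simple selection over component values. The breakdown below is hypothetical, chosen to echo the court's roughly 80% R&D finding; it is a sketch of the rule, not of the actual solicitation's figures:

```python
# Sketch of the SBA classification rule: a procurement is generally classified
# by the component with the greatest share of contract value.
# The component breakdown is hypothetical.

def classify_by_value(components: dict[str, float]) -> str:
    """Return the code whose component accounts for the largest contract value."""
    return max(components, key=components.get)

# Hypothetical split echoing the court's ~80% R&D finding:
breakdown = {
    "541715 (R&D)": 0.80,
    "541620 (Environmental Consulting)": 0.20,
}
print(classify_by_value(breakdown))  # 541715 (R&D)
```

In practice the selection also weighs the NAICS Manual descriptions and the function of what is being bought, so the value split is the primary yardstick rather than the only one.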

Applying these provisions, the court found that SBA OHA’s analysis was flawed in multiple respects. For example, examining the Performance Work Statement, the court found that the solicitation “primarily seeks R&D services,” including “literally dozens of tasks in the Solicitation that naturally fit the definitions of ‘research’ and ‘experimental development’ in the NAICS Manual.” In contrast, the court found, the solicitation included “minimal consulting tasks.” Additionally, an estimated 80% of the ordering value–that is, the ‘greatest percentage of contract value’–consisted of R&D tasks.

In sum, the Court held, the Contracting Officer’s decision to assign the Environmental Consulting Services NAICS code, and SBA OHA’s decision upholding that selection, were “objectively unreasonable.” The court issued an injunction preventing NOAA from proceeding with the solicitation until another NAICS code was assigned.

The Consolidated Safety Services case offers valuable lessons for government and industry alike.

For agencies, it is important that Contracting Officers receive effective training on the rules and processes associated with selecting NAICS codes. The Consolidated Safety Services case itself is a good starting point from a training perspective, as the decision not only demonstrates the potential consequences of a flawed selection, but contains numerous references to the applicable regulations and interpretive case law.

For industry, the Consolidated Safety Services case is a valuable reminder that a Contracting Officer must pick the “best” NAICS code–that is, the one that’s “better than all the rest”–for the work and that a potential offeror is entitled to file a NAICS code appeal if the offeror believes that the NAICS code selection was flawed.

NAICS codes are just six little digits, but the NAICS code selection can mean the difference between participating in a set-aside competition–or finding yourself like me, stuck outside the velvet rope because the bouncer inexplicably doesn’t see the inherent coolness in my insights on FAR 19.102.

Boring but important disclaimer: The information in this article is not, nor is it intended to be, legal advice. You should consult an attorney for individual advice regarding your own situation.

Article link: https://www.linkedin.com/pulse/naics-codes-good-enough-government-work-doesnt-cut-steven-koprince-gaauc

How AI works is often a mystery — that’s a problem – Nature

Posted by timmreardon on 12/28/2023
Posted in: Uncategorized.

The inner workings of many AIs are mysterious, but with the increasing use of such technologies in high-stakes scenarios, how should their inscrutable nature be dealt with?

By Nick Petrić Howe

Download this episode of the Nature Podcast

Many AIs are ‘black box’ in nature, meaning that part or all of the underlying structure is obfuscated, whether intentionally to protect proprietary information, because of the sheer complexity of the model, or both. This can be problematic in situations where people are harmed by decisions made by an AI but are left without recourse to challenge them.

Many researchers in search of solutions have coalesced around a concept called Explainable AI, but this too has its issues, notably that there is no real consensus on what it is or how it should be achieved. So how do we deal with these black boxes? In this podcast, we try to find out.

Article link: https://www.nature.com/articles/d41586-023-04154-4?

The State of the Federal EHR – FEHRM

Posted by timmreardon on 12/28/2023
Posted in: Uncategorized.

On November 14, 2023, the Federal Electronic Health Record Modernization (FEHRM) office hosted The State of the Federal EHR (the 15th meeting, formerly known as the FEHRM Industry Roundtable). This event is held twice a year to discuss the current and future state of the federal electronic health record (EHR), health information technology and health information exchange. It also highlights the progress of the FEHRM, Department of Defense (DOD), Department of Veterans Affairs (VA), Department of Homeland Security’s U.S. Coast Guard (USCG) and Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA) to implement a single, common federal EHR and related capabilities.

The theme for the November event was “Achieving Data-Driven Outcomes in the Federal EHR.” The meeting featured updates from FEHRM, DOD, VA, USCG and NOAA leaders on their federal EHR efforts as well as an interactive discussion panel focused on data-driven insights to enhance the delivery of health care for Service members, Veterans and other beneficiaries.

The event was held virtually via Microsoft Teams and was open to the public. We invited active participation from individuals who possess relevant broad-based knowledge and experience.

Watch The State of the Federal EHR.

Article link: https://www.fehrm.gov/fehrm-industry-interoperability-roundtable/

Quantum Computing’s Hard, Cold Reality Check – IEEE Spectrum

Posted by timmreardon on 12/25/2023
Posted in: Uncategorized.

Hype is everywhere, skeptics say, and practical applications are still far away

By EDD GENT

22 DEC 2023 6 MIN READ

The quantum computer revolution may be further off and more limited than many have been led to believe. That’s the message coming from a small but vocal set of prominent skeptics in and around the emerging quantum computing industry.

Quantum computers have been touted as a solution to a wide range of problems, including financial modeling, optimizing logistics, and accelerating machine learning. Some of the more ambitious timelines proposed by quantum computing companies have suggested these machines could be impacting real-world problems in just a handful of years. But there’s growing pushback against what many see as unrealistic expectations for the technology.

Meta’s LeCun—Not so fast, qubit

Meta’s head of AI research Yann LeCun recently made headlines after pouring cold water on the prospect of quantum computers making a meaningful contribution in the near future. Speaking at a media event celebrating the 10-year anniversary of Meta’s Fundamental AI Research team, he said the technology is “a fascinating scientific topic,” but that he was less convinced of “the possibility of actually fabricating quantum computers that are actually useful.”

While LeCun is not an expert in quantum computing, leading figures in the field are also sounding a note of caution. Oskar Painter, head of quantum hardware for Amazon Web Services, says there is a “tremendous amount of hype” in the industry at the minute and “it can be difficult to filter the optimistic from the completely unrealistic.”

A fundamental challenge for today’s quantum computers is that they are very prone to errors. Some have suggested that these so-called “noisy intermediate-scale quantum” (NISQ) processors could still be put to useful work. But Painter says there’s growing recognition that this is unlikely and quantum error-correction schemes will be key to achieving practical quantum computers.

“We found out over the last 10 years that many things that people have proposed don’t work. And then we found some very simple reasons for that.”
—Matthias Troyer, Microsoft

The leading proposal involves spreading information over many physical qubits to create “logical qubits” that are more robust, but this could require as many as 1,000 physical qubits for each logical one. Some have suggested that quantum error correction could even be fundamentally impossible, though that is not a mainstream view. Either way, realizing these schemes at the scale and speeds required remains a distant goal, Painter says. 
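The arithmetic behind that overhead is stark. A minimal sketch, assuming the roughly 1,000:1 physical-to-logical ratio mentioned above and, purely for illustration, a machine with 10,000 logical qubits (both figures are assumptions, not hardware specifications):

```python
# Back-of-the-envelope cost of quantum error correction, using the
# ~1,000 physical qubits per logical qubit figure cited in the article.
# Both numbers are rough assumptions, not measured specifications.

PHYSICAL_PER_LOGICAL = 1_000  # assumed error-correction overhead

def physical_qubits_needed(logical_qubits: int,
                           overhead: int = PHYSICAL_PER_LOGICAL) -> int:
    """Physical qubits required to realize the given logical qubit count."""
    return logical_qubits * overhead

# A fault-tolerant machine with 10,000 logical qubits would need on the
# order of ten million physical qubits under this assumption.
print(physical_qubits_needed(10_000))  # 10000000
```

Today’s largest processors hold on the order of hundreds to a few thousand physical qubits, which gives a sense of the distance Painter describes.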

“Given the remaining technical challenges in realizing a fault-tolerant quantum computer capable of running billions of gates over thousands of qubits, it is difficult to put a timeline on it, but I would estimate at least a decade out,” he says.

Microsoft—Clarity, please

The problem isn’t just one of timescales. In May, Matthias Troyer, a technical fellow at Microsoft who leads the company’s quantum computing efforts, co-authored a paper in Communications of the ACM suggesting that the number of applications where quantum computers could provide a meaningful advantage was more limited than some might have you believe.

“We found out over the last 10 years that many things that people have proposed don’t work,” he says. “And then we found some very simple reasons for that.”

The main promise of quantum computing is the ability to solve problems far faster than classical computers, but exactly how much faster varies. There are two applications where quantum algorithms appear to provide an exponential speed up, says Troyer. One is factoring large numbers, which could make it possible to break the public key encryption the internet is built on. The other is simulating quantum systems, which could have applications in chemistry and materials science.

Quantum algorithms have been proposed for a range of other problems including optimization, drug design, and fluid dynamics. But touted speedups don’t always pan out—sometimes amounting to a quadratic gain, meaning the time it takes the quantum algorithm to solve a problem is the square root of the time taken by its classical counterpart.

Troyer says these gains can quickly be wiped out by the massive computational overhead incurred by quantum computers. Operating a qubit is far more complicated than switching a transistor and is therefore orders of magnitude slower. This means that for smaller problems, a classical computer will always be faster, and the point at which the quantum computer gains a lead depends on how quickly the complexity of the classical algorithm scales.

Operating a qubit is far more complicated than switching a transistor and is therefore orders of magnitude slower.

Troyer and his colleagues compared a single Nvidia A100 GPU against a fictional future fault-tolerant quantum computer with 10,000 “logical qubits” and gate times much faster than today’s devices. Troyer says they found that a quantum algorithm with a quadratic speed up would have to run for centuries, or even millennia, before it could outperform a classical one on problems big enough to be useful.
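A toy model makes the crossover logic concrete. If a classical machine takes n steps on a size-n problem while the quantum algorithm takes only sqrt(n) steps, but each quantum step carries a large constant-factor overhead, the quantum machine only pulls ahead once n exceeds the square of that overhead. The step times below are illustrative assumptions, not figures from Troyer’s paper:

```python
# Toy crossover model for a quadratic quantum speedup:
#   classical runtime: t_c * n
#   quantum runtime:   t_q * sqrt(n), with t_q >> t_c
# Setting them equal gives the crossover size n* = (t_q / t_c) ** 2.

def crossover_size(t_classical: float, t_quantum: float) -> float:
    """Smallest problem size at which quantum matches classical runtime."""
    return (t_quantum / t_classical) ** 2

# Assumed step times: a nanosecond-scale classical operation vs. a
# ~10-microsecond logical quantum operation -- a 10,000x overhead.
n_star = crossover_size(t_classical=1e-9, t_quantum=1e-5)
print(f"{n_star:.0e}")           # 1e+08
print(f"{1e-9 * n_star:.1f} s")  # 0.1 s of classical runtime at crossover
```

The overhead enters squared, so every extra order of magnitude in per-step cost pushes the break-even point out by two orders of magnitude in problem size, which is the mechanism behind the “centuries or millennia” estimate.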

Another significant barrier is data bandwidth. Qubits’ slow operating speeds fundamentally limit the rate at which you can get classical data in and out of a quantum computer. Even in optimistic future scenarios this is likely to be thousands or millions of times slower than classical computers, says Troyer. That means data-intensive applications like machine learning or searching databases are almost certainly out of reach for the foreseeable future.
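The bandwidth ceiling can be sketched the same way: if classical data can only enter the machine at roughly the logical operation rate, even a modest dataset takes impractically long to load. The 10 kHz input rate below is an illustrative assumption, not a measured specification:

```python
# Toy estimate of the time to stream classical data into a quantum
# computer, assuming input is bottlenecked at roughly the logical
# operation rate. The 10 kHz figure is an assumption for illustration.

LOGICAL_OPS_PER_SEC = 10_000  # assumed effective input rate, bits/second

def load_time_seconds(n_bits: int, rate: float = LOGICAL_OPS_PER_SEC) -> float:
    """Seconds needed to stream n_bits into the machine at the given rate."""
    return n_bits / rate

one_gigabyte = 8 * 10**9  # bits
days = load_time_seconds(one_gigabyte) / 86_400
print(f"{days:.0f} days")  # 9 days just to stream in 1 GB
```

Against a classical memory bus that moves gigabytes per second, this is the gap Troyer has in mind when he rules out data-intensive workloads like database search and machine learning.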

The conclusion, says Troyer, was that quantum computers will only really shine on small-data problems with exponential speed ups. “All the rest is beautiful theory, but will not be practical,” he adds.

The paper didn’t make much of an impact in the quantum community, says Troyer, but many of Microsoft’s customers were grateful to get some clarity on realistic applications for quantum computing. He says they’ve seen a number of companies downsize or even shut down their quantum computing teams, including in the finance and life sciences sectors.

Aaronson—Welcome, skeptics

These limitations shouldn’t really be a surprise to anyone who has been paying close attention to quantum computing research, says Scott Aaronson, a professor of computer science at the University of Texas at Austin. “There are these claims about how quantum computing will revolutionize machine learning and optimization and finance and all these industries, where I think skepticism was always warranted,” he says. “If people are just now coming around to that, well then, welcome.”

While he also thinks practical applications are still a long way off, recent progress in the field has actually given him cause for optimism. Earlier this month researchers from quantum computing startup QuEra and Harvard demonstrated that they could use a 280-qubit processor to generate 48 logical qubits–far more than previous experiments have managed. “This was definitely the biggest experimental advance maybe for several years,” says Aaronson.

“When you say quantum is going to solve all the world’s problems, and then it doesn’t, or it doesn’t right now, that creates a little bit of a letdown.”
—Yuval Boger, QuEra

Yuval Boger, chief marketing officer at QuEra, is keen to stress that the experiment was a lab demonstration, but he thinks the results have caused some to reassess their timescales for fault-tolerant quantum computing. At the same time though, he says they have also noticed a trend of companies quietly shifting resources away from quantum computing.

This has been driven, in part, by growing interest in AI since the advent of large language models, he says. But he agrees that some in the industry have exaggerated the near-term potential of the technology, and says the hype has been a double-edged sword. “It helps get investments and get talented people excited to get into the field,” he says. “But on the other hand, when you say quantum is going to solve all the world’s problems, and then it doesn’t, or it doesn’t right now, that creates a little bit of a letdown.”

Even in the areas where quantum computers look most promising, the applications could be narrower than initially hoped. In recent years, papers from researchers at scientific software company Schrödinger and a multi-institutional team have suggested that only a limited number of problems in quantum chemistry are likely to benefit from quantum speedups.

Merck KGaA—Lovely accelerator, sometimes

It’s also important to remember that many companies already have mature and productive quantum chemistry workflows that operate on classical hardware, says Philipp Harbach, global head of group digital innovation at German pharma giant Merck KGaA, in Darmstadt, Germany (not to be confused with the American company Merck). 

“In the public, the quantum computer was portrayed as if it would enable something not currently achievable, which is inaccurate,” he says. “Primarily, it will accelerate existing processes rather than introducing a completely disruptive new application area. So we are evaluating a difference here.”

Harbach’s group has been investigating the relevance of quantum computing to Merck’s work for about six years. While NISQ devices may have uses for certain highly specialized problems, they’ve concluded that quantum computing will not have a significant impact on industry until fault-tolerance is achieved. Even then, how transformative that impact could be really depends on the specific use case and products a company is working on, says Harbach.

Quantum computers shine at providing accurate solutions to problems that become intractable at larger scales for classical computers. That could be very useful for some applications, such as designing new catalysts, says Harbach. But most of the chemistry problems Merck is interested in involve screening large numbers of candidate molecules very quickly.

“Most problems in quantum chemistry do not scale exponentially, and approximations are sufficient,” he says. “They are well behaved problems, you just need to make them faster with increased system size.”

Nonetheless, there can still be cause for optimism, says Microsoft’s Troyer. Even if quantum computers can only tackle a limited palette of problems in areas like chemistry and materials science, the impact could still be game-changing. “We talk about the Stone Age and the Bronze Age, and the Iron Age, and the Silicon Age, so materials have a huge impact on mankind,” he says.

The goal of airing some skepticism, Troyer says, is not to diminish interest in the field, but to ensure that researchers are focused on the most promising applications of quantum computing with the greatest chance of impact.

Article link: https://spectrum.ieee.org/quantum-computing-skeptics
