healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

Official: VA’s Legacy EHR Must Be Maintained, But Not a Long-Term Solution – Nextgov

Posted by timmreardon on 03/15/2023
Posted in: Uncategorized. Leave a comment

By EDWARD GRAHAMMARCH 8, 2023

The official cautioned that “modernization would require VistA be rewritten almost from scratch.”

The Department of Veterans Affairs must continue to maintain its legacy health information system as it moves forward with its multi-billion dollar electronic health record modernization project, even though the decades-old software is too outdated to serve as a viable long-term solution, a VA official said during a House Veterans’ Affairs subcommittee hearing on Tuesday. 

Daniel McCune, VA’s executive director of software product management, told members of the committee’s Technology Modernization Subcommittee that the Veterans Health Information Systems and Technology Architecture—or VistA—“has served VA and veterans for over 40 years, and we are aware of its limitations.” But he added that, as the agency continues to slowly deploy the new Oracle Cerner EHR system, “VistA remains our authoritative source of veteran data.”

Modernizing the EHR system has been a priority for VA for some time, albeit one with few tangible results until now. Prior to awarding Cerner the contract for the new EHR system—known as Millennium—in 2018, VA spent years attempting to modernize VistA. During a House Veterans’ Affairs Committee hearing on Feb. 28, U.S. Comptroller General Gene Dodaro told lawmakers that VA has spent “over $1.7 billion dollars for failed predecessor electronic healthcare record systems that either failed, or did not come to fruition.”

Rep. Matt Rosendale, R-Mont.—the chairman of the subcommittee—said VistA “must be maintained,” and called for the VA to identify “key areas of VistA that need to be modernized and are feasible to undertake.” Rosendale previously introduced legislation in January that would terminate the VA’s transition to the Oracle Cerner system if the agency “cannot demonstrate significant improvement” in its deployment. 

“The reality is, regardless of whether the Oracle Cerner implementation can be accomplished, and regardless of how we feel about that, the VA will probably continue to rely on VistA for at least another decade, and some of the elements of VistA will probably never go away because no replacement even exists,” Rosendale added.

Rep. Sheila Cherfilus-McCormick, D-Fla.—the panel’s ranking member—also agreed that maintaining VistA “is imperative” for the VA, noting that “there are functions of VistA that are not related to the EHR and would likely exist long after the EHR is replaced.”

But Cherfilus-McCormick expressed concerns about the current state of VistA, including cybersecurity risks, data management issues and a code-base “considered to be obsolete by many.”

“I’m not here to say that the Oracle Cerner approach in EHRM is going well, but I’m not sure returning to VistA is correct either,” Cherfilus-McCormick added. 

McCune agreed that maintaining VistA during the deployment of the new Oracle Cerner EHR system was essential, noting that VA has already been working to enhance the legacy system by “standardizing VistA code,” implementing an API gateway and transitioning portions of the system to a cloud environment. He noted that “20 instances of Vista have been moved to cloud, with an additional 54 planned this year.”

But McCune told lawmakers that VistA is “an old technology ill-suited for the modern digital age,” and added that “modernization would require VistA be rewritten almost from scratch, at a great cost and great risk.”

McCune highlighted the fact that VistA is written in an old programming language called Mumps, which he noted “is not taught in computer science classes.” While he said VA had been able to retain Mumps programmers “much longer than a typical workforce,” he added that “approximately 70% of our Mump programmers today are retirement eligible, and we have few options to hire or contract additional ones.”

The rollout of the new Oracle Cerner EHR system, however, has been fraught with technical issues, patient safety concerns, delays and cost overruns since it first went live at the Mann-Grandstaff VA Medical Center in Spokane, Washington in 2020. A report released in July 2022 by the VA’s Office of Inspector General found that the new software deployed at the Spokane facility inadvertently routed more than 11,000 orders for clinical services to an “unknown queue” without alerting clinicians, resulting in “multiple events of patient harm.” 

VA announced in October that it was pausing further rollouts of the Oracle Cerner software until June 2023 to give the agency time to “fully assess performance and address every concern” with the system’s deployment. The agency announced last month that it was also pushing back the planned deployment of the Oracle Cerner EHR system at the VA Ann Arbor Healthcare System until later this year or in 2024. The go-live for Ann Arbor was originally scheduled to happen in June of this year.

Ongoing issues with the new software’s deployment have only increased lawmakers’ worries about the viability and usability of the new system. Rosendale, in particular, expressed concern during the hearing about reported disparities regarding patient safety when it came to medical facilities continuing to use VistA, and those that have transitioned to the Oracle Cerner EHR system. 

Rosendale cited VA’s patient safety reports related to its EHR systems and noted that—for the 166 medical facilities still using VistA—the agency received 12,644 reports in 2020; 14,637 in 2021; and 9,211 in 2022. For the two years after Cerner went live in Spokane, Rosendale said VA received a combined 1,033 patient safety reports at that facility alone.

“That’s over 500 reports per year from one hospital using Cerner,” he said. “One hospital, compared to an average of 55 reports annually from the VistA hospitals.”

Rosendale said the data showed that, as a result of the Oracle Cerner EHR system’s beleaguered rollout, Mann-Grandstaff has “become the most dangerous VA hospital in the country.”

Article link: https://www.nextgov.com/it-modernization/2023/03/official-vas-legacy-ehr-must-be-maintained-not-long-term-solution/383753/

Pray For Peace

Posted by timmreardon on 03/11/2023
Posted in: Uncategorized. Leave a comment

Top 3 Priority Recommendations for the First Federal Chief Data Officer – Nextgov

Posted by timmreardon on 03/11/2023
Posted in: Uncategorized. Leave a comment

By CHRIS BROWN MARCH 10, 2023 03:25 PM ET

COMMENTARY | Many agency chief data officers would like to see the creation and appointment of a federal CDO.

Here’s the essential irony behind data in the federal government: Agencies have captured and cataloged plenty of it, but they still aren’t doing nearly enough with it to maximize its value.

The current administration is attempting to fix this: In its Federal Data Strategy 2021 Action Plan, the White House calls for agencies to pursue a “robust [and] integrated approach to managing and using data” while still protecting security, privacy and confidentiality. Its ten-year vision includes goals such as the initiation of data governance, planning and infrastructure; the optimization of self-service analytics capabilities; and the execution of proactive, evidence-based decisions.

Then, late last year, the Data Foundation released its annual survey of federal chief data officers to get a sense of agencies’ responses to the plan. Findings revealed that less than one-quarter of CDOs are making notable progress in implementing it.

Another intriguing finding which stood out was that three of five survey participants would like to see the creation and appointment of a federal CDO. Such leadership could lend needed vision and support to help them achieve key components of success, which include the development of data strategy and governance (as cited by two-thirds of the CDOs); the facilitation of data-driven decision-making at all levels (nearly one-half); and the enabling of data-sharing (also nearly one-half).

This leads to a central question: If the White House established a federal CDO position, what should be the top priorities on his or her “to do” list? I’d recommend the following:

Encourage agency CDOs to “think different”

To borrow from the iconic Apple ad slogan, a federal CDO could start by encouraging agency-level CDOs to shift their focus. Too many spend most of their time attempting to catalog their data, and then set policies for it. True, those CDOs need to play a leadership role in governance. But they should also drive teams to leverage data for innovation, by helping them discover how to securely share it and maximize its value. Unfortunately, this isn’t happening nearly enough.

Much of the problem is rooted in fixed mindsets and organizational culture. Data owners build a “hoarding mentality” about data—they want to gather lots of it to create impressive inventories and then lord over this digital fiefdom. But that’s as far as it goes. Agency CDOs do not take the critical next step of making data accessible and shareable. A federal CDO must champion the idea that cataloging data is really just a starting point, and hardly the endgame.

Direct agencies to liberate users with free-flowing—but secure—access to data

Agency CDOs often resist opening up data because they fear the consequences. They worry about all of the “what ifs,” e.g., “what if ‘top secret’ information trickles down to ‘secret?’” This freezes any forward progress. They don’t want to overshare and invite potential risks.

Much of the problem is rooted in the continued dependence on siloed legacy systems that require endless paperwork and labor-intensive oversight. Then, there are the highly antiquated security approaches that are still in play. To successfully protect massive amounts of data while still making it readily accessible and shareable, CDOs need to adopt an attribute-based access control model.

Simply defined, ABAC provides dynamic, real-time data access decisions and—coupled with tools such as data-use agreements and data-activity logging—ensures authorized users can proceed without security-based bottlenecks or friction while avoiding new risks for their agencies. ABAC is based on user and data attributes for data security, determining who should have access to sensitive data and whether they are seeking it for the right reasons. With this, it dynamically empowers authorization to sensitive data, applications and databases in real-time.

Issue report cards that grade real progress

Whenever the government establishes a new “fed czar,” we see lots of lofty proclamations and visionary statements. But we don’t see enough follow-through that results in actual, meaningful progress. A federal CDO needs to be empowered enough to inspire change.

Fully funded programs aimed at the improved securing, sharing and leveraging of data would help, obviously. And the federal CDO could make agency counterparts more accountable by developing metrics which measure how effectively they are deploying data sets, enabling end users and so on, and then by issuing regular report cards that publicly grade agencies on these key capabilities.

Certainly, the appointment of a federal CDO may result in much value-generating impact. But it might end up as yet another lost opportunity. A federal CDO who is genuinely empowered should lead a charge that leaps far beyond simply governance and proclamations, and instead goes toward optimal innovation. Doing so requires the enforcement of security controls, so people can fully utilize and share data sets. Without this, data is worthless. But with it, the federal CDO would truly drive agencies to not only “think different,” but do different.

Chris Brown is the chief technology officer for Immuta.

Article link: https://www.nextgov.com/ideas/2023/03/top-3-priority-recommendations-first-federal-chief-data-officer/383858/

Top 3 Priority Recommendations for the First Federal Chief Data Officer – Nextgov

Posted by timmreardon on 03/11/2023
Posted in: Uncategorized. Leave a comment

By CHRIS BROWN MARCH 10, 2023 03:25 PM ET

COMMENTARY | Many agency chief data officers would like to see the creation and appointment of a federal CDO.

Here’s the essential irony behind data in the federal government: Agencies have captured and cataloged plenty of it, but they still aren’t doing nearly enough with it to maximize its value.

The current administration is attempting to fix this: In its Federal Data Strategy 2021 Action Plan, the White House calls for agencies to pursue a “robust [and] integrated approach to managing and using data” while still protecting security, privacy and confidentiality. Its ten-year vision includes goals such as the initiation of data governance, planning and infrastructure; the optimization of self-service analytics capabilities; and the execution of proactive, evidence-based decisions.

Then, late last year, the Data Foundation released its annual survey of federal chief data officers to get a sense of agencies’ responses to the plan. Findings revealed that less than one-quarter of CDOs are making notable progress in implementing it.

Another intriguing finding which stood out was that three of five survey participants would like to see the creation and appointment of a federal CDO. Such leadership could lend needed vision and support to help them achieve key components of success, which include the development of data strategy and governance (as cited by two-thirds of the CDOs); the facilitation of data-driven decision-making at all levels (nearly one-half); and the enabling of data-sharing (also nearly one-half).

This leads to a central question: If the White House established a federal CDO position, what should be the top priorities on his or her “to do” list? I’d recommend the following:

Encourage agency CDOs to “think different”

To borrow from the iconic Apple ad slogan, a federal CDO could start by encouraging agency-level CDOs to shift their focus. Too many spend most of their time attempting to catalog their data, and then set policies for it. True, those CDOs need to play a leadership role in governance. But they should also drive teams to leverage data for innovation, by helping them discover how to securely share it and maximize its value. Unfortunately, this isn’t happening nearly enough.

Much of the problem is rooted in fixed mindsets and organizational culture. Data owners build a “hoarding mentality” about data—they want to gather lots of it to create impressive inventories and then lord over this digital fiefdom. But that’s as far as it goes. Agency CDOs do not take the critical next step of making data accessible and shareable. A federal CDO must champion the idea that cataloging data is really just a starting point, and hardly the endgame.

Direct agencies to liberate users with free-flowing—but secure—access to data

Agency CDOs often resist opening up data because they fear the consequences. They worry about all of the “what ifs,” e.g., “what if ‘top secret’ information trickles down to ‘secret?’” This freezes any forward progress. They don’t want to overshare and invite potential risks.

Much of the problem is rooted in the continued dependence on siloed legacy systems that require endless paperwork and labor-intensive oversight. Then, there are the highly antiquated security approaches that are still in play. To successfully protect massive amounts of data while still making it readily accessible and shareable, CDOs need to adopt an attribute-based access control model.

Simply defined, ABAC provides dynamic, real-time data access decisions and—coupled with tools such as data-use agreements and data-activity logging—ensures authorized users can proceed without security-based bottlenecks or friction while avoiding new risks for their agencies. ABAC is based on user and data attributes for data security, determining who should have access to sensitive data and whether they are seeking it for the right reasons. With this, it dynamically empowers authorization to sensitive data, applications and databases in real-time.

Issue report cards that grade real progress

Whenever the government establishes a new “fed czar,” we see lots of lofty proclamations and visionary statements. But we don’t see enough follow-through that results in actual, meaningful progress. A federal CDO needs to be empowered enough to inspire change.

Fully funded programs aimed at the improved securing, sharing and leveraging of data would help, obviously. And the federal CDO could make agency counterparts more accountable by developing metrics which measure how effectively they are deploying data sets, enabling end users and so on, and then by issuing regular report cards that publicly grade agencies on these key capabilities.

Certainly, the appointment of a federal CDO may result in much value-generating impact. But it might end up as yet another lost opportunity. A federal CDO who is genuinely empowered should lead a charge that leaps far beyond simply governance and proclamations, and instead goes toward optimal innovation. Doing so requires the enforcement of security controls, so people can fully utilize and share data sets. Without this, data is worthless. But with it, the federal CDO would truly drive agencies to not only “think different,” but do different.

Chris Brown is the chief technology officer for Immuta.

Article link: https://www.nextgov.com/ideas/2023/03/top-3-priority-recommendations-first-federal-chief-data-officer/383858/

The opposite of Silicon Valley: How Feds expect to use AI/ML – C4ISRNET

Posted by timmreardon on 03/07/2023
Posted in: Uncategorized. Leave a comment

By Molly Weisner Thursday, Mar 2

As private sector companies and academia shoot for the moon when it comes to exploring the potential of artificial intelligence, in the federal government, discussions are more down to earth.

The heights to which AI technology has ventured is both awesome and scary, panelists discussed at a conference last month hosted by the Advanced Technology Academic Research Center, which focuses on technology and government. Take recent headlines about ChatGPT,Microsoft’s uncanny chatbot, and Stanford’s “Woebot” robo-therapists, for example. In the government, AI is a nascent novelty, though agencies and the military say they already see use cases within reach.

RELATED
Lauren Knausenberger, chief information officer at the Department of the Air Force, listens to a question Feb. 28, 2023, at an event hosted by Billington Cybersecurity.
ChatGPT can make short work of Pentagon tasks, Air Force CIO says
OpenAI’s ChatGPT surpassed 1 million registered users within a week of its launch in late 2022.

By Colin Demarest

“What I’m most excited about is not necessarily the sexiest AI technologies … things that are showing up in science magazines,” said Greg Singleton, chief artificial intelligence officer at the U.S. Department of Health and Human Services. “What I’m really excited about for the department is seeing where we can apply these technologies to the general business that we do.”

That might mean holding off on the self-driving cars and robot secretaries for now and using AI instead to pick the low-hanging fruit of automating burdensome tasks that have grown for agencies while their workforces have not.

HHS employs 80,000 people and saw a nearly 30% increase in Medicaid/CHIP enrollment right before the pandemic.

“Our workforce is dedicated, they want to do a good job, but you have challenges when people are overloaded by processes, overloaded by volume, overloaded by batch transactional communications,” Singleton said at the ATARC conference.

The department has identified dozens of scenarios that could benefit from AI, including an automatic tool for finding chemical names in biomedical literature, a virtual assistant for finding grants, and a bot to ensure disability accommodations follow an employee who is being reassigned or promoted.

There are other agencies facing similar problems of growing workloads and deficits in manpower. The Social Security Administration’s workforce is at a 25-year low while the number of Americans receiving benefits has grown 20% since 2010. At the Federal Deposit Insurance Corporation, 38% of its workforce will be able to retire by 2027. 

These examples present opportunities for AI to render simple, clerical tasks more efficient through existing technology that is relatively quick and easy to scale, panelists said.

Failure is not an option

In October, President Joe Biden signed a bill that would require a government-led AI training program for its acquisition workforce that would be regularly updated. The government is also buying AI technology experimentally with the focus on research-based deals as opposed to hardware or software contracts, which is more mindful of risk, according to analysis by the Brookings Institute.

“The federal government is not a Silicon Valley start-up; it does not look kindly on failure, no matter the potential future payoff,” wrote Corin Stone, a former Office of the Director of National Intelligence senior official, in an article for Just Security. “The government is understandably hesitant to take risk, especially when it comes to national security activities and taxpayer dollars.

Panelists noted that appetites for AI can differ across agencies, as do budgets and resources. Congress has said that regardless, the government overall needs to play catch up on AI, with Sen. Joe Manchin saying it would be “disastrous” if the U.S. failed to be ready for China and Russia’s scaling up of cyber warfare in a hearing last year.

“It’s not just about rudimentary tasks,” said Bonnie Evangelista, who works in the Office of the Chief Digital and Artificial Intelligence Officer at the U.S. Department of Defense. “Even from a government perspective, I think there are some of us who are trying to go big and go bold, but it can be hard from a workforce perspective. It’s hard to scale because of culture.”

AI may also draw concerns from those who have misgivings about its ethical use and from union representatives who may want to fully understand repercussions for the workforce, including any possible reduction of work hours.

RELATED
This picture, taken on Jan. 23, 2023, in Toulouse, France, shows screens displaying the logos of OpenAI and ChatGPT.
ChatGPT hints at potential for artificial intelligence in government
Agency leaders must clear up the common misconception that AI/ML infrastructure, data governance and efficiency must be perfectly aligned to get started.

By Jay Meil

Research suggests that AI can “outperform workers in an increasing set of complex tasks mainly done by educated workers,” though technology has also been recognized as a source of new kinds of jobs, a report for the White House’s Council of Economic Advisers found. Panelists said AI has the potential to free up employee’s workloads so they can devote time to creative or innovative projects and “upskill,” or get up to speed on new tech.

Evangelista also said agencies are building out new teams and positions devoted to exploring AI. Last April, the Department of Defense hired Craig Martell to lead the Chief Digital and Artificial Intelligence Office, which was created just two months earlier.

Meikle Paschal, a program manager for robotic process automation at the U.S. Department of Homeland Security, said the easiest way to standardize a process is to automate it, and solving problems that way is an alternative to hiring more people to transcribe or paper push.

“This is like a generational gift that we’re going to give to the people who come after us in the workforce,” he said. “And if we hold back at this point because we’re scared to make a mistake, then the next three or four generations are going to be behind the ball.”

About Molly Weisner

Molly Weisner is a staff reporter for Federal Times where she covers labor, policy and contracting pertaining to the government workforce. She made previous stops at USA Today and McClatchy as a digital producer, and worked at The New York Times as a copy editor. Molly majored in journalism at the University of North Carolina at Chapel Hill.

Article link: https://www.c4isrnet.com/govcon/contracting/2023/03/02/the-opposite-of-silicon-valley-how-feds-expect-to-use-aiml/

DARPA Seeks Input to Advance Hybrid Quantum/Classical Computers

Posted by timmreardon on 03/07/2023
Posted in: Uncategorized. Leave a comment
Webinar to facilitate expert technical exchange of ideas on practical uses for noisy quantum systems

OUTREACH@DARPA.MIL 3/7/2023

Although fault-tolerant quantum computers are projected to be years to decades away, processors made from tens to hundreds of quantum bits have made significant progress in recent years, especially when working in tandem with a classical computer. These hybrid quantum/classical systems could enable technical disruption soon by superseding the best classical-only supercomputers in solving difficult optimization challenges and related problems of interest to defense, security, and industry. 

DARPA is sponsoring a live webinar on Tuesday, April 11, 2023, to highlight an Advanced Research Concept (ARC) topic called Imagining Practical Applications for a Quantum Tomorrow (IMPAQT). Registrants will have the opportunity to hear from government experts, university professors, and industry-leading quantum hardware providers as well as participate in live question-and-answer sessions. 

“We’re billing the webinar as a help day for quantum algorithmists,” said DARPA Innovation Fellow Alex Place, who is leading the event. “Building on successes of DARPA’s  ONISQ (Optimization with Noisy Intermediate-Scale Quantum devices) program, the webinar’s goal is to spark innovative ideas and discuss new concepts for making near-term intermediate scale quantum computers, as well as sought-after fault tolerant processors, practical and useful for solving real problems. We’re encouraging teams from academia and industry who have expertise in quantum algorithms or a practical problem that could be mapped to a quantum processor to engage with IMPAQT.”

For more technical details, view the IMPAQT Webinar special notice: https://sam.gov/opp/6532c39cfc3848bcb8182c2dc1155ece/view. To register, visit: https://events.sa-meetings.com/IMPAQT. Registration closes Friday, April 7, 2023 at 12:00 PM EDT. 

IMPAQT is the first of many anticipated DARPA ARC topics. The ARC initiative is designed to speed the pace of innovation by rapidly exploring and analyzing a high volume of promising new ideas. For more information about ARC, to view the open IMPAQT solicitation, and to see new topics as they become available, visit www.darpa.mil/arc.The ARC topics are managed by DARPA’s innovation fellows, who include recent Ph.D. graduates (within five years of receiving a doctorate) and active-duty military with STEM degrees. To learn more about the DARPA Innovation Fellowship, current fellows, and how you can apply to become a fellow visit: www.darpa.mil/innovationfellowship.

Image caption: A superconducting quantum bit (qubit) ready to be cooled to -273.14°C.

Media with inquiries should contact DARPA Public Affairs at outreach@darpa.mil

Associated images posted on www.darpa.miland video posted at www.youtube.com/darpatv may be reused according to the terms of the DARPA User Agreement.

Tweet @darpa

Article link: https://www.darpa.mil/news-events/2023-03-07

The inside story of how ChatGPT was built from the people who made it – MIT Technology Review

Posted by timmreardon on 03/03/2023
Posted in: Uncategorized. Leave a comment

Exclusive conversations that take us behind the scenes of a cultural phenomenon.

By Will Douglas Heaven March 3, 2023

When OpenAI launched ChatGPT, with zero fanfare, in late November 2022, the San Francisco–based artificial-intelligence company had few expectations. Certainly, nobody inside OpenAI was prepared for a viral mega-hit. The firm has been scrambling to catch up—and capitalize on its success—ever since.

It was viewed in-house as a “research preview,” says Sandhini Agarwal, who works on policy at OpenAI: a tease of a more polished version of a two-year-old technologyand, more important, an attempt to iron out some of its flaws by collecting feedback from the public. “We didn’t want to oversell it as a big fundamental advance,” says Liam Fedus, a scientist at OpenAI who worked on ChatGPT.

To get the inside story behind the chatbot—how it was made, how OpenAI has been updating it since release, and how its makers feel about its success—I talked to four people who helped build what has become one of the most popular internet apps ever. In addition to Agarwal and Fedus, I spoke to John Schulman, a cofounder of OpenAI, and Jan Leike, the leader of OpenAI’s alignment team, which works on the problem of making AI do what its users want it to do (and nothing more).

What I came away with was the sense that OpenAI is still bemused by the success of its research preview, but has grabbed the opportunity to push this technology forward, watching how millions of people are using it and trying to fix the worst problems as they come up.

Since November, OpenAI has already updated ChatGPT several times. The researchers are using a technique called adversarial trainingto stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against each other: one chatbot plays the adversary and attacks another chatbot by generating text to force it to buck its usual constraints and produce unwanted responses. Successful attacks are added to ChatGPT’s training data in the hope that it learns to ignore them.       

OpenAI has also signed a multibillion-dollar deal with Microsoft and announced an alliance with Bain, a global management consulting firm, which plans to use OpenAI’s generative AI models in marketing campaigns for its clients, including Coca-Cola. Outside OpenAI, the buzz about ChatGPT has set off yet another gold rush around large language models, with companies and investors worldwide getting into the action.

That’s a lot of hype in three short months. Where did ChatGPT come from? What steps did OpenAI take to ensure it was ready to release? And where are they going next?  

The following has been edited for length and clarity.

Jan Leike: It’s been overwhelming, honestly. We’ve been surprised, and we’ve been trying to catch up.

John Schulman: I was checking Twitter a lot in the days after release, and there was this crazy period where the feed was filling up with ChatGPT screenshots. I expected it to be intuitive for people, and I expected it to gain a following, but I didn’t expect it to reach this level of mainstream popularity.

Sandhini Agarwal: I think it was definitely a surprise for all of us how much people began using it. We work on these models so much, we forget how surprising they can be for the outside world sometimes.

Liam Fedus: We were definitely surprised how well it was received. There have been so many prior attempts at a general-purpose chatbot that I knew the odds were stacked against us. However, our private beta had given us confidence that we had something that people might really enjoy.

Jan Leike: I would love to understand better what’s driving all of this—what’s driving the virality. Like, honestly, we don’t understand. We don’t know.

Part of the team’s puzzlement comes from the fact that most of the technology inside ChatGPT isn’t new. ChatGPT is a fine-tuned version of GPT-3.5, a family of large language models that OpenAI released months before the chatbot. GPT-3.5 is itself an updated version of GPT-3, which appeared in 2020. The company makes these models available on its website as application programming interfaces, or APIs, which make it easy for other software developers to plug models into their own code. OpenAI also released a previous fine-tuned version of GPT-3.5, called InstructGPT, in January 2022. But none of these previous versions of the tech were pitched to the public. 

Liam Fedus: The ChatGPT model is fine-tuned from the same language model as InstructGPT, and we used a similar methodology for fine-tuning it. We had added some conversational data and tuned the training process a bit. So we didn’t want to oversell it as a big fundamental advance. As it turned out, the conversational data had a big positive impact on ChatGPT.

John Schulman: The raw technical capabilities, as assessed by standard benchmarks, don’t actually differ substantially between the models, but ChatGPT is more accessible and usable.

Jan Leike: In one sense you can understand ChatGPT as a version of an AI system that we’ve had for a while. It’s not a fundamentally more capable model than what we had previously. The same basic models had been available on the API for almost a year before ChatGPT came out. In another sense, we made it more aligned with what humans want to do with it. It talks to you in dialogue, it’s easily accessible in a chat interface, it tries to be helpful. That’s amazing progress, and I think that’s what people are realizing.

John Schulman: It more readily infers intent. And users can get to what they want by going back and forth.

ChatGPT was trained in a very similar way to InstructGPT, using a technique called reinforcement learning from human feedback (RLHF). This is ChatGPT’s secret sauce. The basic idea is to take a large language model with a tendency to spit out anything it wants—in this case, GPT-3.5—and tune it by teaching it what kinds of responses human users actually prefer.

Jan Leike: We had a large group of people read ChatGPT prompts and responses, and then say if one response was preferable to another response. All of this data then got merged into one training run. Much of it is the same kind of thing as what we did with InstructGPT. You want it to be helpful, you want it to be truthful, you want it to be—you know—nontoxic. And then there are things that are specific to producing dialogue and being an assistant: things like, if the user’s query isn’t clear, it should ask follow-up questions. It should also clarify that it’s an AI system. It should not assume an identity that it doesn’t have, it shouldn’t claim to have abilities that it doesn’t possess, and when a user asks it to do tasks that it’s not supposed to do, it has to write a refusal message. One of the lines that emerged in this training was “As a language model trained by OpenAI …” It wasn’t explicitly put in there, but it’s one of the things the human raters ranked highly.

Sandhini Agarwal: Yeah, I think that’s what happened. There was a list of various criteria that the human raters had to rank the model on, like truthfulness. But they also began preferring things that they considered good practice, like not pretending to be something that you’re not. 

Because ChatGPT had been built using the same techniques OpenAI had used before, the team did not do anything different when preparing to release this model to the public. They felt the bar they’d set for previous models was sufficient.       

Sandhini Agarwal: When we were preparing for release, we didn’t think of this model as a completely new risk. GPT-3.5 had been out there in the world, and we know that it’s already safe enough. And through ChatGPT’s training on human preferences, the model just automatically learned refusal behavior, where it refuses a lot of requests.

Jan Leike: We did do some additional “red-teaming” for ChatGPT, where everybody at OpenAI sat down and tried to break the model. And we had external groups doing the same kind of thing. We also had an early-access program with trusted users, who gave feedback.

Sandhini Agarwal: We did find that it generated certain unwanted outputs, but they were all things that GPT-3.5 also generates. So in terms of risk, as a research preview—because that’s what it was initially intended to be—it felt fine.

John Schulman: You can’t wait until your system is perfect to release it. We had been beta-testing the earlier versions for a few months, and the beta testers had positive impressions of the product. Our biggest concern was around factuality, because the model likes to fabricate things. But InstructGPT and other large language models are already out there, so we thought that as long as ChatGPT is better than those in terms of factuality and other issues of safety, it should be good to go. Before launch we confirmed that the models did seem a bit more factual and safe than other models, according to our limited evaluations, so we decided to go ahead with the release.

OpenAI has been watching how people use ChatGPT since its launch, seeing for the first time how a large language model fares when put into the hands of tens of millions of users who may be looking to test its limits and find its flaws. The team has tried to jump on the most problematic examples of what ChatGPT can produce—from songs about God’s love for rapist priests to malware code that steals credit card numbers—and use them to rein in future versions of the model.  

Sandhini Agarwal: We have a lot of next steps. I definitely think how viral ChatGPT has gotten has made a lot of issues that we knew existed really bubble up and become critical—things we want to solve as soon as possible. Like, we know the model is still very biased. And yes, ChatGPT is very good at refusing bad requests, but it’s also quite easy to write prompts that make it not refuse what we wanted it to refuse.

Liam Fedus: It’s been thrilling to watch the diverse and creative applications from users, but we’re always focused on areas to improve upon. We think that through an iterative process where we deploy, get feedback, and refine, we can produce the most aligned and capable technology. As our technology evolves, new issues inevitably emerge.

Sandhini Agarwal: In the weeks after launch, we looked at some of the most terrible examples that people had found, the worst things people were seeing in the wild. We kind of assessed each of them and talked about how we should fix it.

Jan Leike: Sometimes it’s something that’s gone viral on Twitter, but we have some people who actually reach out quietly.

Sandhini Agarwal: A lot of things that we found were jailbreaks, which is definitely a problem we need to fix. But because users have to try these convoluted methods to get the model to say something bad, it isn’t like this was something that we completely missed, or something that was very surprising for us. Still, that’s something we’re actively working on right now. When we find jailbreaks, we add them to our training and testing data. All of the data that we’re seeing feeds into a future model.

Jan Leike:  Every time we have a better model, we want to put it out and test it. We’re very optimistic that some targeted adversarial training can improve the situation with jailbreaking a lot. It’s not clear whether these problems will go away entirely, but we think we can make a lot of the jailbreaking a lot more difficult. Again, it’s not like we didn’t know that jailbreaking was possible before the release. I think it’s very difficult to really anticipate what the real safety problems are going to be with these systems once you’ve deployed them. So we are putting a lot of emphasis on monitoring what people are using the system for, seeing what happens, and then reacting to that. This is not to say that we shouldn’t proactively mitigate safety problems when we do anticipate them. But yeah, it is very hard to foresee everything that will actually happen when a system hits the real world.

In January, Microsoft revealed Bing Chat, a search chatbot that many assume to be a version of OpenAI’s officially unannounced GPT-4. (OpenAI says: “Bing is powered by one of our next-generation models that Microsoft customized specifically for search. It incorporates advancements from ChatGPT and GPT-3.5.”) The use of chatbots by tech giants with multibillion-dollar reputations to protect creates new challenges for those tasked with building the underlying models.

Sandhini Agarwal: The stakes right now are definitely a lot higher than they were, say, six months ago, but they’re still lower than where they might be a year from now. One thing that obviously really matters with these models is the context they’re being used in. Like with Google and Microsoft, even one thing not being factual became such a big issue because they’re meant to be search engines. The required behavior of a large language model for something like search is very different than for something that’s just meant to be a playful chatbot. We need to figure out how we walk the line between all these different uses, creating something that’s useful for people across a range of contexts, where the desired behavior might really vary. That adds more pressure. Because we now know that we are building these models so that they can be turned into products. ChatGPT is a product now that we have the API. We’re building this general-purpose technology and we need to make sure that it works well across everything. That is one of the key challenges that we face right now.

John Schulman: I underestimated the extent to which people would probe and care about the politics of ChatGPT. We could have potentially made some better decisions when collecting training data, which would have lessened this issue. We’re working on it now.

Jan Leike: From my perspective, ChatGPT fails a lot—there’s so much stuff to do. It doesn’t feel like we’ve solved these problems. We all have to be very clear to ourselves—and to others—about the limitations of the technology. I mean, language models have been around for a while now, but it’s still early days. We know about all the problems they have. I think we just have to be very up-front, and manage expectations, and make it clear this is not a finished product.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/amp/

Patient matching: an identity crisis in healthcare – Oracle Health

Posted by timmreardon on 03/02/2023
Posted in: Uncategorized. Leave a comment

March 1, 2023 | 5 minute read

Ted McCain

Senior Principal Product Manager, Oracle Health

Nearly a decade ago, a prolific US advertisement was released in December just in time for the holidays. This ad featured a young person wearing traditional plaid flannel pajamas and glasses, sipping hot chocolate. It prominently featured the hashtag “#GetTalking,” requesting (demanding?) that we discuss getting health insurance with our families over the holidays. The intent was to drive a deep and meaningful conversation about the lack of easily accessible health insurance and set the stage for compliance with the Affordable Care Act (ACA), commonly known as “Obamacare.”

ad featuring a young person wearing traditional plaid flannel pajamas and glasses, sipping hot chocolate

Unfortunately, the ad was widely panned by both sides of the political spectrum because, after all, who doesn’t want to discuss a highly politicized topic with their families during the holidays? Few of us honored the imperative of the ad, and “pajama boy” is remembered more for being meme fodder than for advancing the discussion of insurance availability and healthcare cost. Few would argue that healthcare in the US is affordable, and examples abound where serious injury or long-term illness bankrupt a family that was previously financially solvent. Would universal health insurance solve the problem of unaffordability, or did we miss a real opportunity to identify and discuss one of the true culprits of ever-rising healthcare costs? 

Had we followed the imperative and invested the time to delve into the deepest, darkest nooks and crannies of healthcare, we very well may have discovered that while, yes, healthcare is very expensive, one of the primary reasons for its high cost is misidentification of patients during the treatment process.

The true cost of patient misidentification

Patient identity (or lack thereof) is a contributor to skyrocketing healthcare costs. In the US alone, the American Health Information Management Association (AHIMA) estimates that as many as 10% of patient records are duplicates. These misidentifications—which occur when systems or people fail to accurately identify patients or match them to existing records of care—can be linked to as much as 33% of payer-rejected medical claims, costing the US healthcare system as much as $6 billion annually. How does misidentification impact patients? Simply put, just like all increases in the cost of doing business, these can be passed on to the patient in the form of responsibility for payment or as an increase in service prices to offset losses.

Even more concerning than the financial ramifications is patient misidentification’s impact on patient outcomes. AHIMA states, “Risk management professionals have confirmed that duplicate records have caused negative outcomes in the discovery phase of the litigation process because there will be discrepancies with diagnoses, medications, and allergies.”1 This scenario is frighteningly real. For example, the failure to correctly match a patient to a known allergy for something as simple as penicillin could lead to anaphylaxis, or worse.

National patient identifiers in the US and abroad

Beyond the scope of helping to reduce healthcare costs, many countries have national patient identity (NPID) systems in place, which help ensure proper patient identity and facilitate the sharing of clinical data between systems. According to Gartner,2 “more than 30 countries have national health programs, and many issue insurance or entitlement cards with unique identifiers to their citizens.”

The US doesn’t have any form of NPID system, and no such system exists on a global scale to facilitate clinical data exchange across international borders. While the enactment of the Health Insurance Portability and Accountability Act (HIPAA) in 1996 required that the US Department of Health and Human Services (HHS) develop a universal patient identifier (UPI) standard, this mandate was never implemented. As a result, each electronic health record (EHR) typically creates its own system-specific patient identifier and then relies on non-discrete information such as name, date of birth, and residential address to determine identity.

Had we followed the imperative and invested the time to delve into the deepest, darkest nooks and crannies of healthcare, we very well may have discovered that while, yes, healthcare is very expensive, one of the primary reasons for its high cost is misidentification of patients during the treatment process.

A case for UPI in the US

A UPI has the potential to significantly improve healthcare by ensuring accurate patient matching across disparate EHR systems, resulting in more immediate and robust exchange of health information. If integrated with patient identity systems from other countries, the UPI could help ensure the accurate exchange of current and historical healthcare information across international borders.

Consider this as an example: What if you were traveling abroad, involved in a motor vehicle accident, and unable to provide details regarding your medical history due to your injuries? If the emergency medical services in that country you were visiting could “break the glass” and acquire your historical medical information in an emergent treatment scenario, your potential for a positive outcome increases.

Why was the HIPAA requirement for UPI never implemented? In 1999, funding for the UPI requirement was suspended due to privacy concerns. In today’s climate of identity theft, how many of us still willingly provide our Social Security number when asked? Gartner states, “Much of the controversy surrounding NPID involves privacy and security issues and the desire to protect the individual’s personal information from disclosure, fraud, and misuse. NPID opponents believe the harm it causes—in terms of physician-patient trust and the risks to individual privacy rights—outweighs its purported clinical and financial benefits.”3

It’s important to acknowledge potential barriers to success—specifically, public acceptance—when much of the population has a general distrust for centralized repositories that store their personal information. At the same time, most of us carry mobile devices in our pockets that store biometric identification information in the form of fingerprints or facial recognition and never give it a second thought. The cognitive dissonance of the inability to reconcile the importance of positive identification in the healthcare space versus the value of unlocking one’s phone without having to input a passcode is staggering.

The way forward

One component of the Oracle Health vision is a unified health record, and positive patient identity is a cornerstone to achieving this goal. It’s nearly impossible to create an aggregated health record without ensuring information is precisely matched to the correct patient.

Fortunately, Oracle is well-positioned to address these challenges. The combination of our inherent ability to handle big data and make that data available globally via Oracle Cloud Infrastructure (OCI) affords us the opportunity to reimagine what a national (and even global) person index for healthcare might look like. The Oracle Healthcare Master Patient Index (OHMPI) has the potential to be elevated to an OCI service and extended to incorporate features such as referential matching and biometric signatures.

Imagine a world in which you could securely and automatically register with a new medical provider, and your clinical history is seamlessly incorporated into that new provider’s EHR—without having to fill out any paper forms. Envision checking in for a visit simply by walking through the door, facial recognition handling the check-in process. Imagine if your medical history were contained in a blockchain “medical wallet” so you always had custody of that information. The list of possibilities goes on. All we must do to realize these possibilities is invest in the application of technologies already available to the healthcare industry.

Related resources: 

  • HIPAA violations and consequences: How the recent updates impact the healthcare industry

1 Shannon Harris, “Double Trouble,” Journal of AHIMA 89, no. 8 (September 2018): 20–23.
2 Gartner, “Prepare Now for the U.S. National Patient Identifier,” G00761070 (February 2022): 2.
3 Ibid.

Article link: https://blogs.oracle.com/healthcare/post/patient-matching-an-identity-crisis-in-healthcare?

AI is dreaming up drugs that no one has ever seen. Now we’ve got to see if they work – MIT Technology Review

Posted by timmreardon on 02/27/2023
Posted in: Uncategorized. Leave a comment


AI automation throughout the drug development pipeline is opening up the possibility of faster, cheaper pharmaceuticals.

By Will Douglas Heaven February 15, 2023

At 82 years old, with an aggressive form of blood cancer that six courses of chemotherapy had failed to eliminate, “Paul” appeared to be out of options. With each long and unpleasant round of treatment, his doctors had been working their way down a list of common cancer drugs, hoping to hit on something that would prove effective—and crossing them off one by one. The usual cancer killers were not doing their job. 

With nothing to lose, Paul’s doctors enrolled him in a trial set up by the Medical University of Vienna in Austria, where he lives. The university was testing a new matchmaking technology developed by a UK-based company called Exscientia that pairs individual patients with the precise drugs they need, taking into account the subtle biological differences between people.

The researchers took a small sample of tissue from Paul (his real name is not known because his identity was obscured in the trial). They divided the sample, which included both normal cells and cancer cells, into more than a hundred pieces and exposed them to various cocktails of drugs. Then, using robotic automation and computer vision (machine-learning models trained to identify small changes in cells), they watched to see what would happen. 

In effect, the researchers were doing what the doctors had done: trying different drugs to see what worked. But instead of putting a patient through multiple months-long courses of chemotherapy, they were testing dozens of treatments all at the same time. 

The approach allowed the team to carry out an exhaustive search for the right drug. Some of the medicines didn’t kill Paul’s cancer cells. Others harmed his healthy cells. Paul was too frail to take the drug that came out on top. So he was given the runner-up in the matchmaking process: a cancer drug marketed by the pharma giant Johnson & Johnson that Paul’s doctors had not tried because previous trials had suggested it was not effective at treating his type of cancer. 

It worked. Two years on, Paul was in complete remission—his cancer was gone. The approach is a big change for the treatment of cancer, says Exscientia’s CEO, Andrew Hopkins: “The technology we have to test drugs in the clinic really does translate to real patients.” 

Selecting the right drug is just half the problem that Exscientia wants to solve. The company is set on overhauling the entire drug development pipeline. In addition to pairing patients up with existing drugs, Exscientia is using machine learning to design new ones. This could in turn yield even more options to sift through when looking for a match.

The first drugs designed with the help of AI are now in clinical trials, the rigorous tests done on human volunteers to see if a treatment is safe—and really works—before regulators clear them for widespread use. Since 2021, two drugs that Exscientia developed (or co-­developed with other pharma companies) have started the process. The company is on the way to submitting two more. 

“If we were using a traditional approach, we couldn’t have scaled this fast,” Hopkins says. 

Exscientia isn’t alone. There are now hundreds of startups exploring the use of machine learning in the pharmaceutical industry, says Nathan Benaich at Air Street Capital, a VC firm that invests in biotech and life sciences companies: “Early signs were exciting enough to attract big money.”

Today, on average, it takes more than 10 years and billions of dollars to develop a new drug. The vision is to use AI to make drug discovery faster and cheaper. By predicting how potential drugs might behave in the body and discarding dead-end compounds before they leave the computer, machine-learning models can cut down on the need for painstaking lab work. 

And there is always a need for new drugs, says Adityo Prakash, CEO of the California-based drug company Verseon: “There are still too many diseases we can’t treat or can only treat with three-mile-long lists of side effects.” 

Now, new labs are being built around the world. Last year Exscientia opened a new research center in Vienna; in February, Insilico Medicine, a drug discovery firm based in Hong Kong, opened a large new lab in Abu Dhabi. All told, around two dozen drugs (and counting) that were developed with the assistance of AI are now in or entering clinical trials. 

“If somebody tells you they can perfectly predict which drug molecule can get through the gut … they probably also have land to sell you on Mars.” Adityo Prakash, CEO of Verseon

We’re seeing this uptick in activity and investment because increasing automation in the pharmaceutical industry has started to produce enough chemical and biological data to train good machine-learning models, explains Sean McClain, founder and CEO of Absci, a firm based in Vancouver, Washington, that uses AI to search through billions of potential drug designs. “Now is the time,” McClain says. “We’re going to see huge transformation in this industry over the next five years.” 

Yet it is still early days for AI drug discovery. There are a lot of AI companies making claims they can’t back up, says Prakash: “If somebody tells you they can perfectly predict which drug molecule can get through the gut or not get broken up by the liver, things like that, they probably also have land to sell you on Mars.” 

And the technology is not a panacea: experiments on cells and tissues in the lab and tests in humans—the slowest and most expensive parts of the development process—cannot be cut out entirely. “It’s saving us a lot of time. It’s already doing a lot of the steps that we used to do by hand,” says Luisa Salter-Cid, chief scientific officer at Pioneering Medicines, part of the startup incubator Flagship Pioneering in Cambridge, Massachusetts. “But the ultimate validation needs to be done in the lab.” Still, AI is already changing how drugs are being made. It could be a few years yet before the first drugs designed with the help of AI hit the market, but the technology is set to shake up the pharma industry, from the earliest stages of drug design to the final approval process.


The basic steps involved in developing a new drug from scratch haven’t changed much. First, pick a target in the body that the drug will interact with, such as a protein; then design a molecule that will do something to that target, such as change how it works or shut it down. Next, make that molecule in a lab and check that it actually does what it was designed to do (and nothing else); and finally, test it in humans to see if it is both safe and effective.

For decades chemists have screened candidate drugs by putting samples of the desired target into lots of little compartments in a lab, adding different molecules, and watching for a reaction. Then they repeat this process many times, tweaking the structure of the candidate drug molecules—swapping out this atom for that one—and so on. Automation has sped things up, but the core process of trial and error is unavoidable. 

But test tubes are not bodies. Many drug molecules that appear to do their job in the lab end up failing when they are eventually tested in people. “The whole process of drug discovery is about failure,” says biologist Richard Law, chief business officer at Exscientia. “The reason that the cost of coming up with a drug is so high is because you have to design and test 20 drugs to get one to work.”

This new generation of AI companies is focusing on three key failure points in the drug development pipeline: picking the right target in the body, designing the right molecule to interact with it, and determining which patients that molecule is most likely to help.   

Computational techniques like molecular modeling have been reshaping the drug development pipeline for decades. But even the most powerful approaches have involved building models by hand, a process that is slow, hard, and liable to yield simulations that diverge from real-world conditions. With machine learning, vast amounts of data, including drug and molecular data, can be harnessed to build complex models automatically. This makes it far easier—and faster—to predict how drugs might behave in the body, allowing many early experiments to be carried out in silico. Machine-learning models can also sift through vast, untapped pools of potential drug molecules in a way that was not previously possible. The upshot is that the hard, but essential, work in laboratories (and later in clinical trials) need only be carried out on those molecules with the best chances of success.  

Before they even get to simulating drug behavior, many companies are applying machine learning to the problem of identifying targets. Exscientia and others use natural-language processing to mine data from vast archives of scientific reports going back decades, including hundreds of thousands of published gene sequences and millions of academic papers. The information extracted from these documents is encoded in knowledge graphs—a way to organize data that captures links including causal relationships such as “A causes B.” Machine-learning models can then predict which targets might be the most promising ones to focus on in trying to treat a particular disease.

Applying natural-language processing to data mining is not new, but pharmaceutical companies, including the bigger players, are now making it a key part of their process, hoping it can help them find connections that humans might have missed. 

Jim Weatherall, vice president of data science and AI at AstraZeneca, says that getting AI to crawl through lots of biomedical data has helped him and his team find a few drug targets they would not otherwise have considered. “It’s made a real difference,” he says. “No human is going to read millions of biology papers.” Weatherall says the technique has revealed connections between things that might seem unrelated, such as a recent finding and a forgotten result from 10 years ago. “Our biologists then go and look at that and see if it makes sense,” says Weatherall. It’s still early days for this target-identification technique, though. He says it will be “some years” before any AstraZeneca drugs that result from it go into clinical trials.


But picking a target is just the start. The bigger challenge is designing a drug molecule that will do something with it—and this is where most innovation is happening.

The interaction between molecules inside a body is vastly complicated. Many drugs have to pass through hostile environments, such as the gut, before they can do their job. And everything is governed by physical and chemical laws that operate at atomic scales. The goal of most AI-powered approaches to drug design is to navigate the vast possibilities and quickly home in on new molecules that tick as many boxes as possible.  

Generate Biomedicines, a startup based in Cambridge, Massachusetts, founded by Flagship Pioneering, is aiming to do that using the same kind of generative AI behind text-to-image software like DALL-E 2. Instead of manipulating pixels, Generate’s software works with random strands of amino acids and finds ways to twist them up into protein structures with specific properties. Since the functions of a protein are dictated by its 3D folding, this, in effect, makes it possible to order up a protein capable of doing a particular job. (Other groups, including David Baker’s lab at the University of Washington, are developing similar tech.)

“Patients can have this terrible experience of going in and out of hospital, sometimes for years, getting drugs that don’t work.” Richard Law, chief business officer of Exscientia

Absci is also trying to create new protein-­based drugs using machine learning, but through a different approach. The company takes existing antibodies—proteins that the immune system uses to remove bacteria, viruses, and other unwanted assailants—and uses models trained on data from lab experiments to come up with lots of new designs for the parts of those antibodies that glom onto foreign matter. The idea is to redesign existing antibodies to make them better at binding to targets. After making adjustments in simulation, the researchers then synthesize and test the designs that work best.

In January, Absci, which has partnerships with larger pharmaceutical companies such as Merck, announced that it had used its approach to redesign several existing antibodies, including one that targets the spike protein of SARS-CoV-2, the virus that causes covid-19, and another that blocks a type of protein that helps cancer cells grow. 

Apriori Bio, another Flagship Pioneering startup based in Cambridge, also has its eye on covid, hoping in particular to develop vaccines capable of protecting people from a wide range of viral variants. The company builds millions of variants in the lab and tests how well covid-fighting antibodies grab onto them. It then uses machine learning to predict how the best antibodies would fare against 100 billion billion (1020) more variants. The goal is to take the most promising antibodies—the ones that seem able to take on a large range of variants or might combat particular variants of concern—and use them to design variant-proof vaccines. 

“It’s just not viable to ever do this experimentally,” says Lovisa Afzelius, a partner at Flagship Pioneering and CEO of Apriori Bio. “There is no way that your human brain can put all those bits and pieces in place and figure out that entire system.”

For Prakash, this is where AI’s real potential lies: opening up a huge untapped pool of biological and chemical structures that could become the ingredients of future drugs. Once you strip out very similar molecules, Prakash says, all of Big Pharma taken together—Merck, Novartis, AstraZeneca, and so on—has an ingredient list of at most 10 million molecules to build drugs from, some proprietary and some commonly known. “That’s what we’re testing across the entire planet—the total product of the last hundred years of toil from a lot of chemists,” he says.

And yet, he says, the number of possible molecules that might make drugs, according to the rules of organic chemistry, is 1033 (other estimates have put the number of drug-like molecules even higher, in the realm of 1060). “Compare that number to 10 million and you see we’re not even fishing in a tide pool next to the ocean,” Prakash says. “We’re fishing in a droplet.” 

Like others, Prakash’s company, Verseon, is using both old and new computational techniques to survey this ocean, generating millions of possible molecules and testing their properties. Verseon treats the interaction between drugs and proteins in the body as a physics problem, simulating the push and pull between atoms that influences how molecules fit together. Such molecular simulations are not new, but Verseon uses AI to more accurately model how molecules interact. So far, the company has produced 16 candidate drugs for a range of diseases, including cardiovascular conditions, infectious diseases, and cancer. One of those drugs is in clinical trials, and trials for several others are set to begin soon.

Crucially, simulation allows researchers to zip past a lot of the messiness that generally characterizes the drug design process. Companies traditionally create batches of molecules they hope have certain properties and then test each in turn. With machine learning, they can instead start with a wish list of basic characteristics—encoded mathematically—and produce designs for molecules that have those properties at the push of a button. This flips the early phase of development on its head, says Salter-Cid: “It’s not something we used to be able to do at the beginning.” A company might ordinarily make 2,500 to 5,000 compounds over five years when developing a new drug. Exscientia made 136 for one of its new cancer drugs, in just one year. 

“It’s about speeding up cycles of exploration,” says Weatherall. “We’re getting to the stage now where we can make more and more decisions without actually having to make a molecule for real.”


However they are made, drugs still have to be tested in humans. These final phases of drug development, which involve recruiting large numbers of volunteers, are hard to run and generally take a long time—around 10 years on average and sometimes up to 20. Many drugs take years to get to this stage and still fail.

AI won’t be able to speed the clinical trial process, but it could help drug companies stack the odds more in their favor, by cutting down the time and cost involved in searching for new drug candidates. Less time spent testing dead-end drug molecules in the lab should mean that promising candidates will make it to clinical trials faster. And with less money on the line, companies might not feel as much pressure to stick with a drug that isn’t performing particularly well.

Better targeting of patients could also help improve the process. Most clinical trials measure the average effect of a medicine, tallying up how many people it worked for and how many it didn’t. If enough people in the trial see an improvement in their condition, then the drug is considered successful. If the drug isn’t effective for a large enough percentage, then it’s a failure. But this can mean that small groups of people for whom a drug worked get overlooked.

“It’s a very crude way of doing it,” says Weatherall. “What we’d actually like to do is find the subset of patients who would get the most benefit from a drug.”

This is where Exscientia’s matchmaking technology comes in. “If we can select the right patients, it does fundamentally change the economic model of the pharma industry,” says Hopkins. 

It will all also dramatically improve the lives of patients, like Paul, who do not respond to the most common drugs. “Patients can have this terrible experience of going in and out of hospital, sometimes for years, getting drugs that don’t work, until either there’s no drugs left anymore or they finally get to the one that does work for them,” says Law.

After Exscientia found a drug that worked for Paul, the company followed up with a scientific study. It took tissue samples from dozens of cancer patients who had undergone at least two failed courses of chemotherapy and evaluated the effects of 139 existing drugs on their cells. Exscientia was able to identify a drug that worked for more than half of them. 

The company now wants to use this technology to shape its approach to drug development, incorporating patient data into the earliest stages of the process to train even better AI. “Instead of starting with a model of a disease, we can start with tissue from a patient,” says Hopkins. “The patient is the best model.” 

For now, the first batch of AI-designed drugs is still making its way through the clinical trial gauntlet. It could be months, or even years, before the first ones pass and hit the market. Some may not make it. 

But even if this initial group fails, there will be another. Drug design has changed forever. “These are just the first drugs that these companies are trying,” says Benaich. “Their best drugs might be the ones that come after.”

Article link: https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/

New AIs Make The Mainstream – Nextgov

Posted by timmreardon on 02/24/2023
Posted in: Uncategorized. Leave a comment

By JOHN BREEDEN IIFEBRUARY 24, 2023 10:00 AM ET

Artificial intelligence technology is now moving ahead at warp speed.

Artificial intelligence is an interesting technology for use in government and elsewhere. For years, AI was talked about theoretically, with a few examples surfacing here and there which were either extremely limited in their capabilities or optimized for very specific tasks, like business process automation or gaming. And then ChatGPT came along, and suddenly AI was in the mainstream, in the news, and on everyone’s mind.

A few other interesting things happened as well. Suddenly, all of those warnings by scientists and tech luminaries about the dangers of AI didn’t seem quite so theoretical when we could actually see a fairly advanced AI in action. I interviewed quite a few AI scientists about their concerns over the past few months since ChatGPT was released, and they seemed to make a good case about why AI is such a powerful technology that it needs to be designed and deployed in an ethical way, especially when working for governments or otherwise placed in positions of authority or power.

The other interesting thing that happened was that so-called visual AIs which generate photographs or art on demand also started popping up all over the place, driven by the same type of casual, chat-like interface as their text-only cousins. There are fewer concerns about visual AIs doing something unethical, although artists have complained that those AIs steal their work, mix in some new elements, and then present it as original art.

It’s clear that AI technology, after years of more or less stagnation, is now moving ahead at warp speed. As such, there are now three new AIs of note that are slowly being deployed to the mainstream, or which are finally making a public debut after a long period of beta testing. Two of them are being paired with search engine technology, while the third is a purely visual engine, but one that users can download and use on their own computers without even needing to connect to the internet.

I jumped in and tested out as much of these three new AIs as I could.

The New Bing

It’s no surprise that the biggest news in terms of new AIs came from Microsoft, which just launched “The New Bing” search engine into beta, and which is slated to eventually replace the old one. The New Bing will add an AI chatbot based on ChatGPT, which is a really smart move because it allows that AI to have access to the internet and the world, unlike the truncated data that the core ChatGPT app uses.

If you remember from my review of ChatGPT, I explained how the AI was trained by human users over time to help it tailor its responses for both accuracy and conversation flow. As such, it almost always sounds good, even if its answers are sometimes not always completely correct. But the biggest limitation with ChatGPT is that the scientists developing it cut off the data that they were feeding it around the end of 2021 into the early part of 2022. If you ask it about current events, like how the Ukraine War is going, it won’t know. And if you ask it about Joe Biden visiting Kiev, it will rightly tell you that he did so on November 22, 2014, when he was Vice President of the United States, but won’t know anything about his more recent visit as president. That means that while ChatGPT currently has one of the biggest user bases for apps in history, its usefulness will decline over time as we get farther away from the point where it stopped getting new data.

Enter The New Bing, which plugs ChatGPT into the internet, and pairs it with a search engine to boot. Theoretically, anyone will be able to try out the new AI and search engine combination one day, but for now, you have to join a waiting list, as Microsoft is only letting a few people in at a time. A bunch of my friends and I joined up as soon as it was announced and only one of us got in so far, so I don’t know how long you may have to wait. While you are waiting, Bing offers some sample questions on the signup page so that you can see how it works.

The first thing that you will notice about The New Bing is that you have 1,000 characters to create your search query. And instead of having to ask for specific search terms to get good results, like “2023 Hyundai Elantra prices,” you can instead explain to Bing, in natural language, that you are looking for an affordable but comfortable car, with better than 40 miles per gallon gas milage, automatic transmission and high safety ratings. And then Bing will analyze all of that using the ChatGPT component and show you the most relevant results.

In that example, results for the car search will come up in two panels. The one on the left is a traditional search engine result with links to car dealers, company websites and maybe car magazines. But on the right, you will see what looks like a ChatGPT interface. In this case it will have lists of different car models that match your criteria with descriptions. However, unlike the baseline ChatGPT interface, almost every time that The New Bing AI makes a statement in the right panel about a vehicle, like “it can accelerate from 0 to 60 mph in 7.2 seconds” it will also provide a hyperlink so you can check out the source and make sure that the AI is telling you the truth.

Beneath those results will be more suggested search questions and an option for “Let’s Chat” which opens up a dialog like you would find with ChatGPT. From what I have seen, the Bing approach seems to be one of the best ways to go because you get the deep AI from ChatGPT without the restrictions of working within a closed system that no longer collects data.

Google Bard

Google uses a slightly different AI technology for Google Bard, its search engine and AI combination that it plans to rollout worldwide soon. According to Google, Bard will use its own next-generation language and conversation capabilities powered by the Language Model for Dialogue Applications, which Google calls LaMDA for short. The company unveiled LaMDA about two years ago, but only recently started to spotlight it, likely because of pressure from Bing, and to a lesser extent ChatGPT, making inroads against its search engine empire.

It also seems like Bard is still very much in development. Recent news stories have shared leaked memos from Google officials asking employees to quickly help test Bard, and the AI made an embarrassing blunderduring its very first introduction to the world.

I was not able to get into a live test of Google Bard or even find where I could sign up to help test it out. I suspect that it may be restricted to employees right now. Eventually, Google says that they plan to integrate Bard with their search engine as well as use it to help businesses that need support teams but don’t have the budget to hire lots of humans. They also are looking to deploy a lightweight version of LaMDA which is supposed to be highly accurate but also quick, something that will be key to a search engine’s success.

Stable Diffusion

We had a lot of fun over the Christmas holiday playing with the image generation AI part of ChatGPT, called DALL-E. Using common language in order to create great works of art is a really cool use for AI.

The DALL-E image generation AI is good, but is limited in a few key ways. First off, due to the popularity of the generator, users are only given a set amount of credits every month to spend on making new images. And secondly, there seems to be quite a bit of censorship implied on what DALL-E is allowed to generate.

Both of those issues are overcome with the Stable Diffusion engine. It works very much like DALL-E, but without many of the same restrictions. In fact, with a little effort, you can even install Stable Diffusion on your own PC or Mac computer and use it as much as you like without even having to connect to the internet.

You can also use Stable Diffusion online, although you may have to wait a bit during peak times. In my testing, I never had to wait more than about 15 seconds before my images started to generate. There also seems to be far less censorship of what images you can create compared with DALL-E, and the quality of the art is fairly comparable—better in some cases and worse in others. 

Once you master how to carefully explain what you want the AI to generate, your art will get better with either Stable Diffusion or DALL-E. So practice definitely makes perfect, or at least more so, when learning how to paint a digital picture with just your words—and a highly advanced AI hanging on everything you say. 

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys

Article link: https://www.nextgov.com/emerging-tech/2023/02/new-ais-make-mainstream/383192/

Posts navigation

← Older Entries
Newer Entries →
  • Search site

  • Follow healthcarereimagined on WordPress.com
  • Recent Posts

    • How AI use in scholarly publishing threatens research integrity, lessens trust, and invites misinformation – Bulletin of the Atomic Scientists 03/25/2026
    • VA Prepares April Relaunch of EHR Program – GovCIO 03/19/2026
    • Strong call for universal healthcare from Pope Leo today – FAN 03/18/2026
    • EHR fragmentation offers an opportunity to enhance care coordination and experience 03/16/2026
    • When AI Governance Fails 03/15/2026
    • Introduction: Disinformation as a multiplier of existential threat – Bulletin of the Atomic Scientists 03/12/2026
    • AI is reinventing hiring — with the same old biases. Here’s how to avoid that trap – MIT Sloan 03/08/2026
    • Fiscal Year 2025 Year In Review – PEO DHMS 02/26/2026
    • “𝗦𝗼𝗰𝗶𝗮𝗹 𝗠𝗲𝗱𝗶𝗮 𝗠𝗮𝗻𝗶𝗽𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝗦𝗮𝗹𝗲” – NATO Strategic Communications COE 02/26/2026
    • Claude Can Now Do 40 Hours of Work in Minutes. Anthropic Says Its Safety Systems Can’t Keep Up – AJ Green 02/19/2026
  • Categories

    • Accountable Care Organizations
    • ACOs
    • AHRQ
    • American Board of Internal Medicine
    • Big Data
    • Blue Button
    • Board Certification
    • Cancer Treatment
    • Data Science
    • Digital Services Playbook
    • DoD
    • EHR Interoperability
    • EHR Usability
    • Emergency Medicine
    • FDA
    • FDASIA
    • GAO Reports
    • Genetic Data
    • Genetic Research
    • Genomic Data
    • Global Standards
    • Health Care Costs
    • Health Care Economics
    • Health IT adoption
    • Health Outcomes
    • Healthcare Delivery
    • Healthcare Informatics
    • Healthcare Outcomes
    • Healthcare Security
    • Helathcare Delivery
    • HHS
    • HIPAA
    • ICD-10
    • Innovation
    • Integrated Electronic Health Records
    • IT Acquisition
    • JASONS
    • Lab Report Access
    • Military Health System Reform
    • Mobile Health
    • Mobile Healthcare
    • National Health IT System
    • NSF
    • ONC Reports to Congress
    • Oncology
    • Open Data
    • Patient Centered Medical Home
    • Patient Portals
    • PCMH
    • Precision Medicine
    • Primary Care
    • Public Health
    • Quadruple Aim
    • Quality Measures
    • Rehab Medicine
    • TechFAR Handbook
    • Triple Aim
    • U.S. Air Force Medicine
    • U.S. Army
    • U.S. Army Medicine
    • U.S. Navy Medicine
    • U.S. Surgeon General
    • Uncategorized
    • Value-based Care
    • Veterans Affairs
    • Warrior Transistion Units
    • XPRIZE
  • Archives

    • March 2026 (7)
    • February 2026 (6)
    • January 2026 (8)
    • December 2025 (11)
    • November 2025 (9)
    • October 2025 (10)
    • September 2025 (4)
    • August 2025 (7)
    • July 2025 (2)
    • June 2025 (9)
    • May 2025 (4)
    • April 2025 (11)
    • March 2025 (11)
    • February 2025 (10)
    • January 2025 (12)
    • December 2024 (12)
    • November 2024 (7)
    • October 2024 (5)
    • September 2024 (9)
    • August 2024 (10)
    • July 2024 (13)
    • June 2024 (18)
    • May 2024 (10)
    • April 2024 (19)
    • March 2024 (35)
    • February 2024 (23)
    • January 2024 (16)
    • December 2023 (22)
    • November 2023 (38)
    • October 2023 (24)
    • September 2023 (24)
    • August 2023 (34)
    • July 2023 (33)
    • June 2023 (30)
    • May 2023 (35)
    • April 2023 (30)
    • March 2023 (30)
    • February 2023 (15)
    • January 2023 (17)
    • December 2022 (10)
    • November 2022 (7)
    • October 2022 (22)
    • September 2022 (16)
    • August 2022 (33)
    • July 2022 (28)
    • June 2022 (42)
    • May 2022 (53)
    • April 2022 (35)
    • March 2022 (37)
    • February 2022 (21)
    • January 2022 (28)
    • December 2021 (23)
    • November 2021 (12)
    • October 2021 (10)
    • September 2021 (4)
    • August 2021 (4)
    • July 2021 (4)
    • May 2021 (3)
    • April 2021 (1)
    • March 2021 (2)
    • February 2021 (1)
    • January 2021 (4)
    • December 2020 (7)
    • November 2020 (2)
    • October 2020 (4)
    • September 2020 (7)
    • August 2020 (11)
    • July 2020 (3)
    • June 2020 (5)
    • April 2020 (3)
    • March 2020 (1)
    • February 2020 (1)
    • January 2020 (2)
    • December 2019 (2)
    • November 2019 (1)
    • September 2019 (4)
    • August 2019 (3)
    • July 2019 (5)
    • June 2019 (10)
    • May 2019 (8)
    • April 2019 (6)
    • March 2019 (7)
    • February 2019 (17)
    • January 2019 (14)
    • December 2018 (10)
    • November 2018 (20)
    • October 2018 (14)
    • September 2018 (27)
    • August 2018 (19)
    • July 2018 (16)
    • June 2018 (18)
    • May 2018 (28)
    • April 2018 (3)
    • March 2018 (11)
    • February 2018 (5)
    • January 2018 (10)
    • December 2017 (20)
    • November 2017 (30)
    • October 2017 (33)
    • September 2017 (11)
    • August 2017 (13)
    • July 2017 (9)
    • June 2017 (8)
    • May 2017 (9)
    • April 2017 (4)
    • March 2017 (12)
    • December 2016 (3)
    • September 2016 (4)
    • August 2016 (1)
    • July 2016 (7)
    • June 2016 (7)
    • April 2016 (4)
    • March 2016 (7)
    • February 2016 (1)
    • January 2016 (3)
    • November 2015 (3)
    • October 2015 (2)
    • September 2015 (9)
    • August 2015 (6)
    • June 2015 (5)
    • May 2015 (6)
    • April 2015 (3)
    • March 2015 (16)
    • February 2015 (10)
    • January 2015 (16)
    • December 2014 (9)
    • November 2014 (7)
    • October 2014 (21)
    • September 2014 (8)
    • August 2014 (9)
    • July 2014 (7)
    • June 2014 (5)
    • May 2014 (8)
    • April 2014 (19)
    • March 2014 (8)
    • February 2014 (9)
    • January 2014 (31)
    • December 2013 (23)
    • November 2013 (48)
    • October 2013 (25)
  • Tags

    Business Defense Department Department of Veterans Affairs EHealth EHR Electronic health record Food and Drug Administration Health Health informatics Health Information Exchange Health information technology Health system HIE Hospital IBM Mayo Clinic Medicare Medicine Military Health System Patient Patient portal Patient Protection and Affordable Care Act United States United States Department of Defense United States Department of Veterans Affairs
  • Upcoming Events

Blog at WordPress.com.
healthcarereimagined
Blog at WordPress.com.
  • Subscribe Subscribed
    • healthcarereimagined
    • Join 153 other subscribers
    • Already have a WordPress.com account? Log in now.
    • healthcarereimagined
    • Subscribe Subscribed
    • Sign up
    • Log in
    • Report this content
    • View site in Reader
    • Manage subscriptions
    • Collapse this bar
 

Loading Comments...