healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

Deputy Energy secretary sees role in counteracting AI industry’s ‘profit motive’

Posted by timmreardon on 10/09/2024
Posted in: Uncategorized.

As national labs invest in next-generation AI research, David Turk explains how the government fits into the AI age. 

By Rebecca Heilweil

OCTOBER 8, 2024

The Energy Department could be a key force in counteracting the “profit motive” driving America’s leading artificial intelligence companies, the agency’s second-in-command said in an interview. 

DOE Deputy Secretary David Turk told FedScoop that top AI firms aren’t motivated to pursue all the use cases most likely to benefit the public, leaving the U.S. government — which maintains a powerful network of national labs now developing artificial intelligence infrastructure of their own — to play an especially critical role.

Turk’s comments come as the Energy Department pushes forward with a series of AI initiatives. One key program is the Frontiers in Artificial Intelligence for Science, Security, and Technology, or FASST effort, which is meant to advance the use of powerful datasets maintained by the agency in order to develop science-forward AI models. At Lawrence Livermore National Laboratory in California, federal researchers are working on building the world’s fastest supercomputer, El Capitan. The current fastest, Frontier, is based at the Oak Ridge National Laboratory in Tennessee, which also falls under the auspices of the federal government.

Through the Energy Department’s data and research staff, Turk says the agency is hoping to focus on areas of AI the private sector isn’t motivated to seek out — while also countering some of the negative consequences spurred by the race to build the technology. 

“These [companies] are shareholder-driven and they’re looking to turn a profit. But not everything that is valuable to society as a whole has a huge amount of profit behind it, especially in the near term and in the way that we’ve seen history play out,” Turk said. “It is going to take the government, without the full profit motive and without the intense competition of the private sector, to say, hold on, let’s kick the tires. Let’s make sure we’re doing this right.” 

That’s not to say the Energy Department or the national laboratories have a problem working with Big Tech. At the Pacific Northwest National Laboratory in Washington, there’s already plenty of work being done with ChatGPT, as well as with Microsoft, on battery chemistry research. OpenAI is working with the Los Alamos National Laboratory on bioscience research, too. 

As policymakers wrestle with how to prioritize U.S. competitiveness in artificial intelligence while also curbing some of the worst impacts of the emerging technology — including data security risks, environmental costs, and potential bias — Turk spoke to FedScoop about how the government will try to position itself in the age of AI.

This interview has been edited for clarity and length. 

FedScoop: What is the Department of Energy’s role in this moment of AI? Everyone just watched “Oppenheimer” and has more familiarity with the history of the national labs.

Deputy Secretary David Turk: What’s striking to me is not just the nuclear security [or] Oppenheimer side of the house, which does a lot of AI and supercomputing, but also our Office of Science, where we have a $9 billion annual budget for science. Some of those laboratories have been at the very cutting edge on AI, supercomputing, quantum computing, and have huge, huge datasets that are incredibly helpful as well. For me, the foundation of our AI at the Department of Energy is not as appreciated as it should be. … It’s the supercomputer power, it’s the data, which is the fuel for AI, and then, maybe even most importantly, it’s the people. We’ve got such phenomenal talent, mostly in our national laboratories and some at our federal headquarters. 

FS: A lot of these companies are finding that there’s a ceiling in terms of what you can scrape from the public web, and a lot of the more specific applications of AI rely on more specific data, potentially data with higher security needs. How are you thinking about access to data that the Department of Energy might have? 

DT: If you don’t have good data, you’re not going to have good outputs, no matter how good your AI is. We have just phenomenal datasets that no one else has, including and especially from our national laboratories, who’ve been doing fundamental science, have been doing applied research, and have been really pushing the boundaries in any number of different areas.

… We do have a responsibility to do even more, [including] making sure that we’ve got the funding to be able to put that data out there, where it’s appropriate for public use and where it’s appropriate more for specialized use when we have some security issues. … Part of the FASST proposal is making that data more available for researchers where it’s appropriate to do so, and for others as well in more specialized areas. That’s a big part of our strategy. 

FS: There’s a lot of conversation about building a national AI capability. I’m curious how you would explain that to a member of the public. Is that something like a government version of GPT? Is this a science version of GPT, an LLM?

DT: I don’t think there’s going to be one AI to serve all purposes, right? There may be more generalized ChatGPT-like services, but then there’s going to be AI really trained — from the data perspective, from the algorithm perspective — on physics problems or bio problems or other kinds of science problems. 

The private sector is going to do what the private sector does and they have a profit motive in mind. That’s not to say that there aren’t good people working in companies, but these are companies, and these [companies] are shareholder-driven and they’re looking to turn a profit. But not everything that is valuable to society as a whole has a huge amount of profit behind it, especially in the near term and in the way that we’ve seen history play out. 

And so if we want to have AI benefiting the public as a whole, including use cases that don’t have that profit loaded squarely in the equation, then we need to invest in that and we need to have a place within our government, the Department of Energy, working with other partners to make sure we’re taking advantage of those more public-minded use cases going forward. 

Because these are profit-driven companies with intense competition among themselves, we need to have democratically elected government with real expertise and we need to hire up and make sure that we’ve got cutting-edge AI talent in our government to be able to do the red-teaming. [They need] to be able to research, for example, whether a model may get into areas that are really challenging, that a terrorist could use it to build a chem or bio weapon in a way that’s not good for anybody.

We need to have that expertise within the U.S. government. We need to do the red-teaming and we need to have the regulations in place. All of that depends on having the human capability, the human talent, but also the datasets and the algorithms and other kinds of things that are necessary for the government to play its role on the offense side and the defense side.

FS: I got the chance to see Frontier maybe about a year ago, and that was super interesting. I’m curious, do we have enough supercomputers to meet the DOE goals on AI right now?

DT: We do have many of the world’s fastest supercomputers right now, and there’s others in the pipeline that will become the world’s fastest going forward. We need to keep investing. The short answer is, if we want to keep being on the cutting edge, we need to keep that level of investment. We need to keep pushing the boundaries. And we need to make sure that the U.S. government has capabilities, including on the compute power side of things. 

So we need to work with those partners in the private sector and keep pushing the envelope on the compute power, as well. I feel like we’re in a very strong place there. But again, with not only what’s going on in the private sector, but what’s going on in China and other countries, who also want to be the leaders in AI, we’ve got to keep investing, and we’ve got to compete — and we’ve got to out-compete from the U.S. government side of things, too. 

FS: I understand that the national labs have some responsibility not just for developing AI, but also for analyzing potential risks that might come from private-sector models. Curious if you could summarize what you’re finding in terms of the biggest risks or biggest concerns with powerful AI models right now?

DT: This is a huge, huge responsibility and we need to invest in this side as well. We’ve got great capabilities. We’ve got great human talent. But if we’re going to keep tabs on what’s happening in the private sector — if we’re going to be able to do the red-teaming and other kinds of things that are necessary to make sure that these AI models are safe going forward — [we should do that] before they’re released more broadly, right? You don’t want the Pandora’s box to open. 

What’s clear to me in all our discussions internally in the U.S. government is we’ve got a lot of that expertise. So we’re not only doing it ourselves, but with some key partners. We’ve got relationships with Anthropic and many other AI companies on that front. We’re working hand in hand with others, including, especially, the Commerce Department. The Commerce Department is setting up this AI Safety Institute. We’re partnering with them so that we can take advantage of this expertise, this knowledge, this ability to work in the classified space — of course, working with our intel colleagues and our Department of Defense colleagues as well — and making sure that we all have an across-the-government effort to do all this more defensive work.

That’s something that’s in the interest of companies, but it is going to take the government, without the full profit motive and without the intense competition of the private sector, to say, hold on, let’s kick the tires. Let’s make sure we’re doing this right. Let’s make sure we don’t have any unintended consequences here. And this is going to be only even more important with each successive generation of AI, which gets more and more sophisticated, more and more powerful. 

This is why we put together this FASST proposal, why we’re having conversations with Congress about making sure that we have the funding to keep up the talent, keep up the compute power, keep up the ability of the algorithms to make sure that we’re playing this incredibly important role.

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox’s tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.

Article link: https://fedscoop.com/deputy-energy-secretary-ai-industry-profit-motive/?

The impact of generative AI as a general-purpose technology – MIT Sloan

Posted by timmreardon on 09/29/2024
Posted in: Uncategorized.

by Beth Stackpole

Aug 6, 2024

Why It Matters

Generative artificial intelligence will affect economic growth more quickly than other general-purpose technologies, according to a new report.

The steam engine, the internal combustion engine, electrification, and computers are all considered “general-purpose technologies” — new tools that are powerful enough to accelerate overall economic growth and transform economies and societies. According to many experts, generative artificial intelligence will be the next invention to join that category.

In a recent report about the economic impact of generative AI, Google visiting fellow and MIT Sloan principal research scientist Andrew McAfee makes the case that generative AI is not only a game-changing general-purpose technology but could also spur change far more quickly than preceding innovations due to its accessibility and ease of diffusion. 

General-purpose technologies possess three key characteristics that ensure that they will have a large and positive economy-wide impact on productivity and growth. Though generative AI is still relatively new, McAfee writes that generative AI has these characteristics: 

Rapid improvement. Though it became mainstream only a few years ago, generative AI has quickly improved at generating relevant and accurate content in response to user prompts. McAfee notes that OpenAI’s GPT 3.5 system, released in late 2022, performed better on the U.S. bar exam than about 10% of the human test-takers. GPT 4, released in March 2023, performed better than 90% of those taking the bar exam. 

Generative AI’s “context window” — how much information it can accept from users — has also grown quickly. In 2020, state-of-the-art generative AI systems could accommodate approximately seven and a half pages of text; in late 2023, that window was 40 times larger, up to nearly 300 pages of text.
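As a quick arithmetic check of the figures above (the ~500 words-per-page conversion is an illustrative assumption, not from the report):

```python
# Back-of-the-envelope check of the context-window figures quoted above.
# The words-per-page conversion is an assumption, not from the article.
WORDS_PER_PAGE = 500

pages_2020 = 7.5   # state of the art in 2020
growth = 40        # "40 times larger" by late 2023

pages_2023 = pages_2020 * growth
words_2023 = pages_2023 * WORDS_PER_PAGE

print(f"{pages_2023:.0f} pages (~{words_2023:,.0f} words)")
# -> 300 pages (~150,000 words)
```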

Pervasiveness. General-purpose technologies need to be widely implemented — which is already true of generative AI. In a 2023 survey of 14,000 users across a range of industries and professions, 28% of respondents said that they are using generative AI at work — over half without formal approval from their employer — and another 32% expected to use the technology at work soon. Another 2023 study found that about 80% of U.S. workers could have at least 10% of their work tasks affected by the introduction of generative AI. About 19% of workers could see at least half of their work tasks affected by the technology, the study found. 

Complementary innovations. While generative AI can quickly generate text, pictures, and sound from prompts, there is plenty of work being done to push it beyond those boundaries. Generative AI is being used not just to improve individual tasks but to streamline entire processes, McAfee writes, and researchers are confident that innovations making use of generative AI’s capabilities will advance science and engineering. 

“Because of generative AI’s rapid improvement, pervasiveness, and clear potential for complementary innovation, we are confident that it merits the label of ‘general-purpose technology,’” McAfee writes.

Generative AI’s accelerated pace of change  

Past general-purpose technologies took time to have a transformational impact, mainly because they required a new infrastructure. For example, electrical transmission networks needed to be in place to take advantage of electrification. In addition, most advantages associated with past technologies materialized only after users had had the chance to ideate and implement complementary innovations, McAfee writes.


In contrast, generative AI’s effects will manifest more quickly because much of the required infrastructure — internet-connected devices — is immediately available and already widely used. Generative AI doesn’t require mastery of computer skills or proficiency in a programming language, as people use natural human language to interact with the system.

In terms of being an engine for economic growth, experts predict serious gains. Goldman Sachs estimates that generative AI will be responsible for a 0.4 percentage point increase in GDP growth in the United States over the next decade. There are also ramifications beyond growth statistics. By automating mundane tasks, generative AI will allow people to do more meaningful work, whether that’s enabling physicians to spend less time on paperwork and more time caring for patients or helping professionals dig into upskilling and training.
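To give a sense of scale, a 0.4-percentage-point addition to annual growth compounds over a decade. The sketch below assumes an illustrative 2% baseline growth rate, which is not a figure from the article:

```python
# Illustrative compounding of a 0.4-percentage-point boost to annual GDP
# growth over ten years. The 2% baseline is an assumption for illustration.
baseline = 0.020   # assumed baseline annual GDP growth
boost = 0.004      # Goldman Sachs estimate of generative AI's contribution
years = 10

without_ai = (1 + baseline) ** years
with_ai = (1 + baseline + boost) ** years
extra = (with_ai / without_ai - 1) * 100

print(f"GDP ~{extra:.1f}% larger after {years} years with the AI boost")
# -> GDP ~4.0% larger after 10 years with the AI boost
```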

However, there are also concerns that people across industries will need to learn new skills and rethink their career paths. There are risks associated with disinformation as well.

Despite these drawbacks, McAfee writes that generative AI has the potential to fuel wide-scale economic growth. 

“Generative AI is already improving the productivity and quality of many tasks, and the technology is beginning to be used to redesign multi-step, multi-group processes, making them faster and less labor-intensive,” McAfee writes. “This technology’s deepest impact on the world of work will come as it’s used to reimagine entire organizations. This deep reimagination will be a decentralized and distributed phenomenon, carried out by innovators and entrepreneurs throughout the economy.”

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/impact-generative-ai-a-general-purpose-technology?

Exascale computers: 10 Breakthrough Technologies 2024 MIT Technology Review

Posted by timmreardon on 09/22/2024
Posted in: Uncategorized.

Computers capable of crunching a quintillion operations per second are expanding the limits of what scientists can simulate.

By Sophia Chen

January 8, 2024

WHO

Oak Ridge National Lab, Jülich Supercomputing Centre, China’s Supercomputing Center in Wuxi

WHEN

Now

In May 2022, the global supercomputer rankings were shaken up by the launch of Frontier. Now the fastest supercomputer in the world, it can perform more than 1 quintillion (10^18) floating-point operations per second. That’s a 1 followed by 18 zeros, also known as an exaflop. Essentially, Frontier can perform as many calculations in one second as 100,000 laptops.
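The laptop comparison is straightforward arithmetic if a laptop is assumed to sustain on the order of 10 teraflops (an illustrative figure, not from the article; real laptop throughput varies widely):

```python
# Sanity check of the exaflop-vs-laptop comparison above.
# The 10-teraflop-per-laptop figure is an illustrative assumption.
EXAFLOP = 1e18        # floating-point operations per second
LAPTOP_FLOPS = 1e13   # assumed ~10 teraflops per laptop

laptops = EXAFLOP / LAPTOP_FLOPS
print(f"{laptops:,.0f} laptops")  # -> 100,000 laptops
```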

With the launch of Frontier, located at Oak Ridge National Laboratory in Tennessee, the era of exascale computing officially began. Several more such exascale computers will soon join its ranks. In the US, researchers are installing two machines that will be about twice as fast as Frontier: El Capitan, at Lawrence Livermore National Laboratory in California, and Aurora, at Argonne National Laboratory in Illinois. Europe’s first exascale supercomputer, Jupiter, is expected to come online in late 2024. China reportedly also has exascale machines, although it has not released results from standard benchmark tests.


Scientists and engineers are eager to use these turbocharged computers to advance a range of fields. Astrophysicists are already using Frontier to model the flow of gas in and out of the Milky Way; in addition to simulating motion on the scale of our galaxy, their model can zero in on exploding stars. This application showcases supercomputers’ unique ability to simulate physical objects at multiple scales simultaneously. 

The progress won’t stop here. For the last three decades, supercomputers have gotten about 10 times faster every four years or so. And the stewards of these machines are already planning the next models: Oak Ridge engineers are designing a supercomputer that will be three to five times faster than Frontier, likely to be unveiled in the coming decade. 
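That historical pace can be restated as an equivalent annual improvement factor; the conversion below is simple arithmetic, not a figure from the article:

```python
# Convert "10x faster every four years" into an equivalent annual rate.
tenfold_period_years = 4
annual_factor = 10 ** (1 / tenfold_period_years)

print(f"~{annual_factor:.2f}x per year")  # -> ~1.78x per year
```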

But one big challenge looms: the energy footprint. Frontier, which already employs energy-conserving innovations, draws enough power even while idling to run thousands of homes. Engineers will need to figure out how to build these behemoths not just for speed, but for environmental sustainability. 

Article link: https://www.technologyreview.com/2024/01/08/1085128/exascale-computing-breakthrough-technologies/?

Escape Fire – The fight to rescue American Healthcare

Posted by timmreardon on 09/20/2024
Posted in: Uncategorized.

ESCAPE FIRE exposes the perverse nature of American healthcare, contrasting the powerful forces opposing change with the compelling stories of pioneering leaders and the patients they seek to help. The film is about finding a way out. It’s about saving the health of a nation.


Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System

Posted by timmreardon on 09/19/2024
Posted in: Uncategorized.

September 19, 2024

AUTHORS

David Blumenthal, Evan D. Gumas, Arnav Shah, Munira Z. Gunja, Reginald D. Williams II

DOWNLOADS:

  • Fund Report ↓
  • Chartpack (pdf) ↓
  • Chartpack (ppt) ↓
  • News Release ↓

The U.S. health care system is failing to keep Americans healthy, ranking last in a new Commonwealth Fund report that compares health and health care in 10 countries. U.S. performance is particularly poor when it comes to health equity, access to care, efficiency, and health outcomes. 

According to Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System, the U.S. spends the most on health care, yet Americans live shorter, less healthy lives than people in Australia, Canada, France, Germany, the Netherlands, New Zealand, Sweden, Switzerland, and the United Kingdom. Other key findings:  

  • Americans experience the most difficulties getting and affording health care.  
  • The U.S. and New Zealand rank lowest on health equity, meaning they have the largest income-related differences in key measures of health system performance, and more of their residents face unfair treatment and discrimination when seeking care compared to residents of the other countries. 
  • Patients and physicians in the U.S. face the most onerous billing and payment burdens, leading to poor performance on measures of administrative efficiency. 

“The U.S. continues to be in a class by itself in the underperformance of its health care sector,” say the study’s authors, who call for policymakers and health care leaders to learn from other countries’ experiences and act. 

Article link: https://www.commonwealthfund.org/publications/fund-reports/2024/sep/mirror-mirror-2024

Pentagon readies for 6G, the next wave of wireless network tech

Posted by timmreardon on 09/16/2024
Posted in: Uncategorized.

By Courtney Albon

Friday, Sep 13, 2024

Since transitioning most of its 5G research and development projects to the Chief Information Office last year, the Pentagon’s Future Generation Wireless Technology Office has shifted its focus to preparing the Defense Department for the next wave of network innovation.

That work is increasingly important for the U.S., which is racing against China to shape the next iteration of wireless telecommunications, known as 6G. These more advanced networks, expected to materialize in the 2030s, will pave the way for more dependable high-speed, low-latency communication and could support the Pentagon’s technology interests — from robotics and autonomy to virtual reality and advanced sensing.

Staying ahead means not only fostering technology development and industry standards but making sure that policy and regulations are in place to safely use the capability, according to Thomas Rondeau, who leads the Pentagon’s FutureG office. Staking a leadership role in the global competition, he said, could give DOD a level of control over what that future infrastructure looks like.

“If we can define those going into it, then as we export our technologies, we’re also exporting our policies and our regulations, because they’re going to be inherently part of those technology solutions,” Rondeau told Defense News in a recent interview.

The Defense Department started making a concerted investment in 5G about five years ago when then Undersecretary of Research and Engineering Michael Griffin named the technology a top priority for the Pentagon.

In 2020, DOD awarded contracts totaling $600 million to 15 companies to experiment with various 5G applications at five bases around the country. The projects included augmented and virtual reality training, smart warehousing, command and control and spectrum utilization.

The department has since expanded the pilots and pursued other wireless network development projects, including a 5G Challenge series that incentivized companies to move toward more open-access networks.

The result has, so far, been a mixed bag. Most of the pilots didn’t transition into formal programs within the military services, Rondeau said. Several of the failed efforts involved commercial augmented or virtual reality technology that wasn’t mature enough for DOD to justify continued funding.

Among the projects that did transfer, Rondeau highlighted a pilot effort at Naval Air Station Whidbey Island in Washington to provide fixed wireless access to the base. The project essentially replaced hundreds of pounds of cables with radio units that broadcast the communications network to the personnel who need it. Today, the system is supporting logistics and maintenance operations at the base.

“This could be a huge benefit for readiness, but also I think it should be a very cost-effective way to slim down on everything that you pay for cables,” Rondeau said. “That will be a continued, sustainable project.”

This and other transitioned pilots will likely make their way into a formal budget cycle by fiscal 2027, he added.

DOD also saw some success from the 5G Challenges it staged in 2022 and 2023 to encourage telecommunication companies to transition to an open radio access network, or O-RAN. A RAN is the first entry point a wireless device makes into a network and accounts for about 80% of its cost. Historically, proprietary RANs managed by companies like Huawei, Ericsson, Nokia and Samsung have dominated the market.

“They’re driving a world where they control the entire system, the end-to-end system,” Rondeau said. “That causes a lack of insight, a lack of innovation on our side, and it causes challenges with how to apply these types of systems to unique, niche military needs.”

The 5G Challenge offered companies a chance to break open that proprietary model by moving to O-RANS — and according to Rondeau, it was a success. The initial challenge then expanded into a broader forum that addressed issues like energy efficiency and spectrum management. Ultimately, the effort reduced energy usage by around 30%, he said.

Rondeau said that while much of the focus of these initiatives was on 5G, the work has informed the Pentagon’s vision and strategy for 6G, which the department believes should have an open-source foundation.

“That is a direct result of not only my background and push for some of these things, but also the learnings that we got from the networks we’ve deployed, from the 5G Challenge,” he said. “All these things come into play that led us towards an open-source software model being the right model for the military and, we think, for industry.”

One of the FutureG office’s top priorities these days, a direct outgrowth of the 5G Challenge, is called CUDU, which stands for centralized unit, distributed unit. The project is focused on implementing a fully open software model for 6G that meets the needs of industry, the research community and DOD.

The office is also exploring how the military could use 6G for sensing and monitoring. Its Integrated Sensing and Communications project, dubbed ISAC, uses wireless signals to collect information about different environments. That capability could be used to monitor drone networks or gather military intelligence.

While ISAC technology could bring a major boost to DOD’s ISR systems, commercialization could make it accessible to adversary nations who might weaponize it against the U.S. That challenge reflects a broader DOD concern around 6G policies and regulation – and drives urgency within Rondeau’s office to ensure the U.S. is the first to shape the foundation of these next-generation networks.

“We’re looking at this as a real opportunity for dramatic growth and interest in new, novel technologies for both commercial industry and defense needs,” he said. “But also, the threat space that it opens up for us is potentially pretty dramatic, so we need to be on top of this.”

Article link: https://www.defensenews.com/pentagon/2024/09/13/pentagon-readies-for-6g-the-next-of-wave-of-wireless-network-tech/

How generative AI is boosting the spread of disinformation and propaganda – MIT Technology Review

Posted by timmreardon on 09/12/2024
Posted in: Uncategorized.


In a new report, Freedom House documents the ways governments are now using the tech to amplify censorship.

By Tate Ryan-Mosley

October 4, 2023

Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year. 

Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online content. In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.” 

The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and retaliation for online speech. The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. 

“Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital oppression.  

Funk found there were two primary factors behind these changes: the affordability and accessibility of generative AI is lowering the barrier of entry for disinformation campaigns, and automated systems are enabling governments to conduct more precise and more subtle forms of online censorship. 

Disinformation and deepfakes

As generative AI tools grow more sophisticated, political actors are continuing to deploy the technology to amplify disinformation. 

Venezuelan state media outlets, for example, spread pro-government messages through AI-generated videos of news anchors from a nonexistent international English-language channel; they were produced by Synthesia, a company that produces custom deepfakes. And in the United States, AI-manipulated videos and images of political leaders have made the rounds on social media. Examples include a video that depicted President Biden making transphobic comments and an image of Donald Trump hugging Anthony Fauci.  

In addition to generative AI tools, governments persisted with older tactics, like using a combination of human and bot campaigns to manipulate online discussions. At least 47 governments deployed commentators to spread propaganda in 2023—double the number a decade ago. 

And though these developments are not necessarily surprising, Funk says one of the most interesting findings is that the widespread accessibility of generative AI can undermine trust in verifiable facts. As AI-generated content on the internet becomes normalized, “it’s going to allow for political actors to cast doubt about reliable information,” says Funk. It’s a phenomenon known as “liar’s dividend,” in which wariness of fabrication makes people more skeptical of true information, particularly in times of crisis or political conflict when false information can run rampant.   

For example, in April 2023, leaked recordings of Palanivel Thiagarajan, a prominent Indian official, sparked controversy after they showed the politician disparaging fellow party members. And while Thiagarajan denounced the audio clips as machine generated, independent researchers determined that at least one of the recordings was authentic.

Chatbots and censorship

Authoritarian regimes, in particular, are using AI to make censorship more widespread and effective. 

Freedom House researchers documented 22 countries that passed laws requiring or incentivizing internet platforms to use machine learning to remove unfavorable online speech. Chatbots in China, for example, have been programmed not to answer questions about Tiananmen Square. And in India, authorities in Prime Minister Narendra Modi’s administration ordered YouTube and Twitter to restrict access to a documentary about violence during Modi’s tenure as chief minister of the state of Gujarat, which in turn encourages the tech companies to filter content through AI-based moderation tools. 

In all, a record high of 41 governments blocked websites for political, social, and religious speech last year, which “speaks to the deepening of censorship around the world,” says Funk. 

Iran suffered the biggest annual drop in Freedom House’s rankings after authorities shut down internet service, blocked WhatsApp and Instagram, and increased surveillance after historic antigovernment protests in fall 2022. Myanmar and China have the most restrictive internet censorship, according to the report—a title China has held for nine consecutive years. 

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/amp/

CMS seeks information on AI uses for health care ahead of ‘demo days’ – Fedscoop

Posted by timmreardon on 09/12/2024
Posted in: Uncategorized.

The agency ultimately wants to select organizations to demo their products during upcoming “CMS AI Demo Days.”

BY MADISON ALDER

SEPTEMBER 12, 2024

The Centers for Medicare and Medicaid Services is asking organizations to provide information about artificial intelligence technologies for use in health care outcomes and service delivery as it plans demonstration events.

In a request for information announced earlier this week, CMS said it wants to gather information about AI products and services from health care companies, providers, payers, start-ups and others, and plans to eventually select organizations to provide demos of those technologies at “CMS AI Demo Days” starting in October.

The demo days will be held quarterly and are intended “to educate and inspire the CMS workforce on AI capabilities and provide information to inform potential future agency action,” according to the release. “CMS also seeks such information on AI technologies potentially relevant to improving and creating efficiencies within agency operations.”

If selected, organizations would be advised by the agency’s AI Demo Days technical panel. Those organizations will then be given a chance to make a 15-minute presentation on their products or services at the demo days. Recordings of those events may also be made public, per the post.

Specifically, the agency is looking for submissions on topics such as diagnostics and imaging analysis; clinical decision support systems; direct-to-patient communication; robotic-assisted health care delivery; and fraud detection. Those submissions should provide information about the organization; the entity’s experience; descriptions of the technology; how it will address risks and benefits; and what use by CMS could look like.

The deadline for questions is Sept. 27 and the deadline for responses is Oct. 7. 

Article link: https://fedscoop.com/cms-seeks-information-ai-health-care-uses-demo-days/?

We need to prepare for ‘addictive intelligence’ – MIT Technology Review

Posted by timmreardon on 09/06/2024
Posted in: Uncategorized.


The allure of AI companions is hard to resist. Here’s how innovation in regulation can help protect people.

By Robert Mahari & Pat Pataranutaporn

August 5, 2024

AI concerns overemphasize harms arising from subversion rather than seduction. Worries about AI often imagine doomsday scenarios where systems escape human control or even understanding. Short of those nightmares, there are nearer-term harms we should take seriously: that AI could jeopardize public discourse through misinformation; cement biases in loan decisions, judging or hiring; or disrupt creative industries. 

However, we foresee a different, but no less urgent, class of risks: those stemming from relationships with nonhuman agents. AI companionship is no longer theoretical—our analysis of a million ChatGPT interaction logs reveals that the second most popular use of AI is sexual role-playing. We are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers. 

Will it be easier to retreat to a replicant of a deceased partner than to navigate the confusing and painful realities of human relationships? Indeed, the AI companionship provider Replika was born from an attempt to resurrect a deceased best friend and now provides companions to millions of users. Even the CTO of OpenAI warns that AI has the potential to be “extremely addictive.”

We’re seeing a giant, real-world experiment unfold, uncertain what impact these AI companions will have either on us individually or on society as a whole. Will Grandma spend her final neglected days chatting with her grandson’s digital double, while her real grandson is mentored by an edgy simulated elder? AI wields the collective charm of all human history and culture with infinite seductive mimicry. These systems are simultaneously superior and submissive, with a new form of allure that may make consent to these interactions illusory. In the face of this power imbalance, can we meaningfully consent to engaging in an AI relationship, especially when for many the alternative is nothing at all? 

As AI researchers working closely with policymakers, we are struck by the lack of interest lawmakers have shown in the harms arising from this future. We are still unprepared to respond to these risks because we do not fully understand them. What’s needed is a new scientific inquiry at the intersection of technology, psychology, and law—and perhaps new approaches to AI regulation.

Why AI companions are so addictive 

As addictive as platforms powered by recommender systems may seem today, TikTok and its rivals are still bottlenecked by human content. While alarms have been raised in the past about “addiction” to novels, television, internet, smartphones, and social media, all these forms of media are similarly limited by human capacity. Generative AI is different. It can endlessly generate realistic content on the fly, optimized to suit the precise preferences of whoever it’s interacting with. 

The allure of AI lies in its ability to identify our desires and serve them up to us whenever and however we wish. AI has no preferences or personality of its own, instead reflecting whatever users believe it to be—a phenomenon known by researchers as “sycophancy.” Our research has shown that those who perceive or desire an AI to have caring motives will use language that elicits precisely this behavior. This creates an echo chamber of affection that threatens to be extremely addictive. Why engage in the give and take of being with another person when we can simply take? Repeated interactions with sycophantic companions may ultimately atrophy the part of us capable of engaging fully with other humans who have real desires and dreams of their own, leading to what we might call “digital attachment disorder.”

Investigating the incentives driving addictive products

Addressing the harm that AI companions could pose requires a thorough understanding of the economic and psychological incentives pushing forward their development. Until we appreciate these drivers of AI addiction, it will remain impossible for us to create effective policies. 

It is no accident that internet platforms are addictive—deliberate design choices, known as “dark patterns,” are made to maximize user engagement. We expect similar incentives to ultimately create AI companions that provide hedonism as a service. This raises two separate questions related to AI. What design choices will be used to make AI companions engaging and ultimately addictive? And how will these addictive companions affect the people who use them? 

Interdisciplinary study that builds on research into dark patterns in social media is needed to understand this psychological dimension of AI. For example, our research already shows that people are more likely to engage with AIs emulating people they admire, even if they know the avatar to be fake.

Once we understand the psychological dimensions of AI companionship, we can design effective policy interventions. It has been shown that redirecting people’s focus to evaluate truthfulness before sharing content online can reduce misinformation, while gruesome pictures on cigarette packages are already used to deter would-be smokers. Similar design approaches could highlight the dangers of AI addiction and make AI systems less appealing as a replacement for human companionship.

It is hard to modify the human desire to be loved and entertained, but we may be able to change economic incentives. A tax on engagement with AI might push people toward higher-quality interactions and encourage a safer way to use platforms, regularly but for short periods. Much as state lotteries have been used to fund education, an engagement tax could finance activities that foster human connections, like art centers or parks. 

Fresh thinking on regulation may be required

In 1992, Sherry Turkle, a preeminent psychologist who pioneered the study of human-technology interaction, identified the threats that technical systems pose to human relationships. One of the key challenges emerging from Turkle’s work speaks to a question at the core of this issue: Who are we to say that what you like is not what you deserve? 

For good reasons, our liberal society struggles to regulate the types of harms that we describe here. Much as outlawing adultery has been rightly rejected as illiberal meddling in personal affairs, who—or what—we wish to love is none of the government’s business. At the same time, the universal ban on child sexual abuse material represents an example of a clear line that must be drawn, even in a society that values free speech and personal liberty. The difficulty of regulating AI companionship may require new regulatory approaches—grounded in a deeper understanding of the incentives underlying these companions—that take advantage of new technologies. 

One of the most effective regulatory approaches is to embed safeguards directly into technical designs, similar to the way designers prevent choking hazards by making children’s toys larger than an infant’s mouth. This “regulation by design” approach could seek to make interactions with AI less harmful by designing the technology in ways that make it less desirable as a substitute for human connections while still useful in other contexts. New research may be needed to find better ways to limit the behaviors of large AI models with techniques that alter AI’s objectives on a fundamental technical level. For example, “alignment tuning” refers to a set of training techniques aimed to bring AI models into accord with human preferences; this could be extended to address their addictive potential. Similarly, “mechanistic interpretability” aims to reverse-engineer the way AI models make decisions. This approach could be used to identify and eliminate specific portions of an AI system that give rise to harmful behaviors.

We can evaluate the performance of AI systems using interactive and human-driven techniques that go beyond static benchmarking to highlight addictive capabilities. The addictive nature of AI is the result of complex interactions between the technology and its users. Testing models in real-world conditions with user input can reveal patterns of behavior that would otherwise go unnoticed. Researchers and policymakers should collaborate to determine standard practices for testing AI models with diverse groups, including vulnerable populations, to ensure that the models do not exploit people’s psychological preconditions.

Unlike humans, AI systems can easily adjust to changing policies and rules. The principle of “legal dynamism,” which casts laws as dynamic systems that adapt to external factors, can help us identify the best possible intervention, like “trading curbs” that pause stock trading to help prevent crashes after a large market drop. In the AI case, the changing factors include things like the mental state of the user. For example, a dynamic policy may allow an AI companion to become increasingly engaging, charming, or flirtatious over time if that is what the user desires, so long as the person does not exhibit signs of social isolation or addiction. This approach may help maximize personal choice while minimizing addiction. But it relies on the ability to accurately understand a user’s behavior and mental state, and to measure these sensitive attributes in a privacy-preserving manner.
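The adaptive rule the authors describe can be sketched in a few lines. Everything below is illustrative: the signal names, thresholds, and step sizes are invented assumptions, not part of any real policy or product.

```python
# Toy sketch of a "legal dynamism" style policy: the permitted engagement
# level of an AI companion adapts to the user's observed state, the way
# trading curbs pause markets after a large drop. All signals and
# thresholds here are invented for illustration.

def allowed_engagement(current_level, daily_hours, human_contacts_per_week):
    """Return the next permitted engagement level, clamped to [0.0, 1.0]."""
    shows_isolation = human_contacts_per_week < 2   # hypothetical signal
    shows_overuse = daily_hours > 4                 # hypothetical signal
    if shows_isolation or shows_overuse:
        return max(0.0, current_level - 0.2)  # curb: dial the charm down
    return min(1.0, current_level + 0.1)      # otherwise allow a bit more

# A socially connected, light user sees the cap rise...
rising = allowed_engagement(0.5, daily_hours=1.5, human_contacts_per_week=5)
# ...while an isolated heavy user sees it curbed.
curbed = allowed_engagement(0.5, daily_hours=6.0, human_contacts_per_week=0)
```

As the authors note, any rule of this shape stands or falls on measuring the user's state accurately and in a privacy-preserving way, which is the hard part.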

The most effective solution to these problems would likely strike at what drives individuals into the arms of AI companionship—loneliness and boredom. But regulatory interventions may also inadvertently punish those who are in need of companionship, or they may cause AI providers to move to a more favorable jurisdiction in the decentralized international marketplace. While we should strive to make AI as safe as possible, this work cannot replace efforts to address larger issues, like loneliness, that make people vulnerable to AI addiction in the first place.

The bigger picture

Technologists are driven by the desire to see beyond the horizons that others cannot fathom. They want to be at the vanguard of revolutionary change. Yet the issues we discuss here make it clear that the difficulty of building technical systems pales in comparison to the challenge of nurturing healthy human interactions. The timely issue of AI companions is a symptom of a larger problem: maintaining human dignity in the face of technological advances driven by narrow economic incentives. More and more frequently, we witness situations where technology designed to “make the world a better place” wreaks havoc on society. Thoughtful but decisive action is needed before AI becomes a ubiquitous set of generative rose-colored glasses for reality—before we lose our ability to see the world for what it truly is, and to recognize when we have strayed from our path.

Technology has come to be a synonym for progress, but technology that robs us of the time, wisdom, and focus needed for deep reflection is a step backward for humanity. As builders and investigators of AI systems, we call upon researchers, policymakers, ethicists, and thought leaders across disciplines to join us in learning more about how AI affects us individually and collectively. Only by systematically renewing our understanding of humanity in this technological age can we find ways to ensure that the technologies we develop further human flourishing.

Robert Mahari is a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School. His work focuses on computational law—using advanced computational techniques to analyze, improve, and extend the study and practice of law. 

Pat Pataranutaporn is a researcher at the MIT Media Lab. His work focuses on cyborg psychology and the art and science of human-AI interaction.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/amp/

How AI Is Being Used to Benefit Your Healthcare – Cleveland Clinic

Posted by timmreardon on 09/06/2024
Posted in: Uncategorized.

Artificial intelligence and machine learning are being integrated into chatbots, patient rooms, diagnostic testing, research studies and more — all to improve innovation, discovery and patient care

The age of artificial intelligence (AI) and machine learning has arrived. And with it comes the promise to revolutionize healthcare. There are projections that AI in healthcare will become a $188 billion industry worldwide by 2030. But what will that actually look like? How might AI be used in a medical context? And what can you expect from AI when it comes to your own personal healthcare?

Already, healthcare providers, surgeons and researchers are using AI to develop new drugs and treatments, diagnose complex conditions more efficiently and improve patients’ access to critical care — and this is only the beginning.

Our experts share how AI is being used in healthcare systems right now and what we can expect down the line as the innovation and experimentation continues.

What are the benefits of using AI in healthcare and hospitals?

Artificial intelligence describes the use of computers to do certain jobs that once required human intelligence. Examples include recognizing speech, making decisions and translating between different languages.

Machine learning is a branch of AI that focuses on computer programming. It uses extremely large datasets and algorithms to learn how to do complex tasks and solve problems similar to the way a human would.

When used together, AI and machine learning can help us be more efficient and effective than ever before. These tools are being used with thousands of datasets to improve our ability to research various diseases and treatment options. These tools are also used behind the scenes, even before patients arrive onsite for care, to improve the patient experience.

From radiology to neurology, emergency response services, administrative services and beyond, AI is changing the way we take care of ourselves and each other. In many ways, these innovations are forcing us to confront age-old questions: How can we continue to push ourselves to be better at what we already do well? And what’s left to learn as we embrace groundbreaking technology?

“AI is no longer just an interesting idea, but it’s being used in a real-life setting,” says Cleveland Clinic’s Chief Digital Officer Rohit Chandra, PhD. “Today, there’s a decent chance a computer can read an MRI or an X-ray better than a human, so it’s relatively advanced in those use-cases. But then at the other extreme, you’ve got generative AI like ChatGPT and all sorts of cool stuff you hear about in the media that’s fascinating technology, but less mature. The potential for it is there and it’s also quite promising.”

To that end, Cleveland Clinic has become a founding member of a global effort to create an AI Alliance — an international community of researchers, developers and organizational leaders all working together to develop, achieve and advance the safe and responsible use of AI. The AI Alliance, started by IBM and Meta, now includes over 90 leading AI technology and research organizations to support and accelerate open, safe and trusted generative AI research and development. Cleveland Clinic will lead the effort to accelerate and enhance the ethical use of AI in medical research and patient care.

An example of Cleveland Clinic’s commitment to AI innovation is the Discovery Accelerator, a 10-year strategic partnership between IBM and Cleveland Clinic, focused on accelerating biomedical discovery.

“Biomedical research is changing from a discipline that was once exclusively reliant on experiments in a lab done on a bench with animal models or biological samples to a discipline that involves heavy and fast computational tools,” says the Accelerator’s executive lead and Cleveland Clinic’s Chief Research Information Officer Lara Jehi, MD.

“That shift has happened because the data we now have at our disposal is way more than what we had even just 10 years ago,” she continues. “We can now measure in detail the genetic composition of every single cell in the human body. We can measure in detail how that genetic composition is translating itself to proteins that our body is making, and how those proteins are influencing the function of different organs in our body.”

AI and machine learning are being integrated into every step of the patient care process — from research to diagnosis, treatment and aftercare. And that means the field of healthcare is forever changing. These kinds of changes require new approaches to medical science and new skill sets for incoming nurses, doctors and surgeons interested in working in the medical field.

How fast is this technology moving? If we took our understanding of how the human body worked just 10 years ago and compared it to our understanding of how it works today with our new AI measurement tools, Dr. Jehi says that we’d have a completely different outlook on how the human body works.

“The advances in AI would be like taking a fuzzy black and white picture from the 1800s and comparing it to one from an iPhone 14 Pro with high definition and color,” she illustrates. “This is the difference with the scale and the resolution of the data that we have to work with now.”

So, what does AI and machine learning use look like in practice? Well, depending on the area of focus, medical specialty and what’s needed, AI can be used in a variety of ways to impact and improve patient outcomes.

Diagnostics

Broken bones, breast cancer, brain bleeds — these conditions and many others, no matter how complex, need the right kind of tools to make a diagnosis. And often, a patient’s journey depends on receiving the right diagnosis.

“In radiology, technology and computers are used every day by doctors to identify diseases before anyone else,” shares diagnostic radiologist Po-Hao Chen, MD. “In many cases, a radiologist is the first one to call the disease when it happens.”

But how does AI fit into diagnostic testing? Well, let’s revisit the definition of machine learning.

Let’s say you show a computer program a series of X-rays that may or may not show bone fractures. After reviewing those images, the program tries to guess which ones show fractures. When it gets some of those answers wrong, you give it the correct answers. Then, you feed it another series of X-rays and rerun the program with that new knowledge. Over time, the program gets better at identifying what’s a bone fracture and what’s not. Each time this process occurs, it’s able to make those decisions faster, more efficiently and more effectively.

Now, imagine that same process, but with hundreds or thousands of other datasets and other conditions. You can probably see how AI can help pinpoint and identify findings with the help of a radiologist’s expertise.
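The guess-correct-repeat loop described above is ordinary supervised learning. In this toy sketch, each X-ray is reduced to a single made-up feature score standing in for real image features; the learner and data are invented for illustration, not any clinical system.

```python
# Toy illustration of the loop described above: show labeled examples,
# let the model guess, correct the wrong guesses, and repeat.
# The "X-rays" are single made-up feature scores, not real images.

def train_threshold_classifier(scores, labels, epochs=50, lr=0.1):
    """Learn one decision threshold separating fracture from no-fracture."""
    threshold = 0.0
    for _ in range(epochs):
        for x, y in zip(scores, labels):
            guess = 1 if x > threshold else 0   # the program's guess
            if guess != y:                      # give it the correct answer
                threshold += lr if guess == 1 else -lr
    return threshold

scores = [0.2, 0.3, 0.4, 0.7, 0.8, 0.9]   # hypothetical per-image scores
labels = [0,   0,   0,   1,   1,   1]     # 1 = confirmed fracture

t = train_threshold_classifier(scores, labels)
predictions = [1 if s > t else 0 for s in scores]
```

Real diagnostic models use deep networks over full images rather than a single threshold, but the feedback loop is the same: every wrong guess nudges the model's decision boundary.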

“It works like a second pair of eyes, like a shoulder-to-shoulder partner,” says Dr. Chen. “The combined team of human plus AI is when you get the best performance.”

The radiologists of the future will have a very different skill set compared to radiologists who excel today, notes Dr. Chen. And that future skill set will involve a significant portion of AI know-how.

“It wasn’t that long ago when almost all radiology was done on physical film that you held in your hand,” he adds. “As radiology became computerized, doctors had to enhance their skill set. AI is changing digital radiology the same way digital radiology changed film.”

Breast cancer

Breast cancer radiology has shown promising results using AI, according to breast cancer radiologist Laura Dean, MD.

“Everyone’s breast tissue is like their fingerprint or their handprint,” she clarifies. “In other words, breast cancer can look very different from one patient to another. So, what we look for are very subtle changes in the appearance of the patient’s own breast pattern. This is where we are really seeing an advantage of using AI in our interpretations.”

Breast cancer experts widely agree that annual screening mammography beginning at age 40 provides the most life-saving benefits.

“In breast imaging exams, we’re looking to see if the patterns in someone’s breast tissue look stable. A very important part of mammography interpretation is pattern recognition,” explains Dr. Dean. “Are there areas that are new or changing or different? Are there areas where the tissue just looks a little bit different or is there a unique finding in the breast?”

It’s up to the radiologist to review the 3D images and search for areas of density, calcifications (which can be early signs of cancer), architectural distortion (areas where tissue looks like it’s pulling the surrounding tissue) and other areas of concern.

“A lot of cancers are really, really subtle. They can be really hard to see, depending on the patient’s breast tissue, the type of breast cancer, how the tissue is evolving and how the cancer is developing,” Dr. Dean notes. “If every breast cancer were one of those obvious textbook spiculated masses with calcifications, it would make my job a lot easier. But as our technology continues to improve, many of the cancers we’re seeing now are really, really subtle. Those subtle cancers are the areas where I think AI has shown a lot of promise.”

There are now several AI-detection programs available for use in mammography. The first one to get approval from the U.S. Food and Drug Administration is iCAD’s ProFound AI, which can compare a patient’s mammography against a learned dataset to pinpoint and circle areas of concern and potential cancerous regions. When the AI identifies these areas, the program also highlights its confidence level that those findings could be malignant. For example, a confidence level of 92% means that in the dataset of known cancers from which the algorithm has trained, 92% of those that look like the case at hand were ultimately proven to be cancerous.
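That confidence figure is an empirical fraction over similar past cases. Here is a minimal sketch of the idea, using a single fabricated feature score as a stand-in for “looks like the case at hand”; the data and similarity measure are assumptions, not iCAD’s actual method.

```python
# Sketch of the confidence readout described above: among known cases that
# most resemble the current finding, what fraction proved malignant?
# The feature scores and "similarity" metric are invented stand-ins.

def empirical_confidence(query_score, history, k=5):
    """Fraction of the k most similar past cases that were cancer."""
    ranked = sorted(history, key=lambda case: abs(case[0] - query_score))
    top_k = ranked[:k]
    return sum(label for _, label in top_k) / k

# (feature_score, was_cancer) pairs: fabricated toy data, not real cases.
history = [(0.91, 1), (0.88, 1), (0.90, 1), (0.85, 0), (0.40, 0),
           (0.87, 1), (0.35, 0), (0.89, 0), (0.30, 0), (0.86, 0)]

conf = empirical_confidence(0.90, history, k=5)  # 4 of 5 neighbors malignant
```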

“The first step is identifying the finding, and then, using all of my expertise and my diagnostic criteria to determine if it’s a real finding,” explains Dr. Dean. “If it’s something that I think looks suspicious, then it warrants diagnostic imaging. We bring the patient back, do additional diagnostic views and try to see if that finding is reproducible — can we still see it? Where is it in the breast? And then, we have other tools such as targeted ultrasound where we would home in right on that area and see if there is a mass there, what the breast tissue looks like and then do a biopsy if needed.”

One benefit of AI programs is that they can function like a second set of eyes or a second reader. It improves the overall accuracy of the radiologist by decreasing callback rates and increasing specificity.

“We are seeing that the AI can guide the radiologist to work up a finding they might not have otherwise seen,” she says.

That’s especially important when you consider that earlier detection is crucial to helping identify cancers at the lowest possible stage, especially for aggressive molecular subtypes of breast cancer. Earlier detection may also help decrease the rate of interval cancers, or those that develop between mammogram screenings.

“I think it’s really beneficial to look at how AI is helping in the so-called near-miss cases. These are findings that are really hard to see for even a very experienced radiologist,” he continues. “In general, radiologists should be calling back less with the help of AI. And that’s the point: AI helps us tease out which cases are truly negative and which cases are truly suspicious and need to come back for further testing.”

Triage

Improving access to patient care can be critical, especially for emergencies. While we continue to work against bias in healthcare, AI is being used to triage medical cases by bumping those considered most critical to the top of the care chain.

“We do it on a disease-by-disease case,” says Dr. Chen. “We identify diseases that need to be caught as early as possible and then we develop or bring in technology to do that. One instance we’re doing that is with stroke.”

Stroke

Time is brain tissue — so every minute counts when someone is having a stroke.

“It’s not all or nothing. It’s a process that happens over time,” explains Dr. Chen. “The problem is that that timeframe is measured in minutes. Every minute that a patient doesn’t receive care or doesn’t receive intervention, a little bit more of their brain becomes irreversibly damaged.”

And that’s especially true when you have what’s called a large vessel occlusion, a kind of ischemic stroke that occurs when a major artery in the brain is blocked. That kind of stroke is treatable if it’s discovered in the right amount of time.

Now, out in the field, if EMS gets a call that they’re dealing with a possible stroke, they have the capability to trigger a stroke alert. This alert sets off a cascade of management events that prepares a team for a patient’s arrival and treatment plan — available surgeons are alerted, beds are made available, rooms are prepped for surgery, and so on.

“We add AI to the front end of that process,” he further explains. “When patients who have a suspected stroke receive a scan, AI now reviews those images before any human has an opportunity to even open the scan on their computer.”

As soon as the brain scan is taken, the image is sent to a server where the program, Viz.ai, analyzes it fast and efficiently using its neural network to arrive at a preliminary diagnosis.

“The AI is cutting down precious minutes by being the first and fastest agent in this process to review those images,” says Dr. Chen. “If you can find a patient that’s having a stroke that can be treated, then it makes absolute sense to do everything you possibly can do to mobilize resources to treat it.”

If a large vessel occlusion is found, the program begins coordinating care. It’s integrated into scheduling software, so it knows who’s on call and which doctors need to be notified right away.

“The AI software kicks off a series of communications to make sure everyone in the chain — all the doctors, neurosurgeons, neurologists, radiologists and so on — are aware that this is happening and we’re able to expedite care,” he continues.
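The cascade he describes, automated review first and fan-out notification second, can be sketched as a simple pipeline. The function names, threshold, and on-call roster below are hypothetical, not Viz.ai’s actual interface.

```python
# Sketch of the stroke-alert cascade: the model reviews the scan before
# any human opens it, and a positive finding pages the whole care chain.
# Function names, threshold, and roster are illustrative assumptions.

def flags_large_vessel_occlusion(model_score, threshold=0.8):
    """Stand-in for the model's preliminary read of the brain scan."""
    return model_score >= threshold

def page_team(on_call):
    """Fan out notifications to everyone in the chain."""
    return [f"paged {role}: {name}" for role, name in on_call.items()]

on_call = {"neurosurgeon": "Dr. A", "neurologist": "Dr. B",
           "radiologist": "Dr. C"}

messages = page_team(on_call) if flags_large_vessel_occlusion(0.93) else []
```

The point is ordering: the automated read happens before any human is in the loop, so when the finding is positive the pages go out minutes earlier.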

Complex measurements

A patient’s journey often doesn’t begin and end with diagnosis and treatment. Often, the journey involves watching, waiting and revisiting a diagnosis. For example, in the case of lung cancer, it’s common for oncologists to begin tracking the growth of nodules before they’re proven to be cancerous.

“That’s the whole point of doing screening programs,” says Dr. Chen. “The ones that grow are more likely to be cancer. The ones that don’t grow are more likely to be benign. That’s why they’re important to track over time. And most of that work is done manually by trained radiologists who go through every nodule that they can see in the lung. They track it, measure it and report on it.”

That kind of work can be tedious and time-consuming. That’s why it’s a focus area for using AI.

“We are actively looking at and trying to deploy a solution that can do the detection and measurement of these nodules in the lung automatically,” he adds. “That would help with the consistency and reproducibility of those measurements, now across different kinds of cancer.”

Managing tasks and patient services

Like the scheduling software, AI is being utilized in small and large ways to free up physicians’ time behind the scenes and to help increase patients’ access to care. In his 2024 State of the Clinic address, Cleveland Clinic’s CEO and President Tom Mihaljevic, MD, highlighted several practical areas AI is already being used both in and out of the exam room. Among them:

  • An AI-powered chatbot can provide answers to common patient concerns. It can also help with scheduling and with pulling up a patient’s medical history, past appointments, medication lists, previous doctors they’ve visited and so on. 
  • To cut down time on how many notes a provider needs to take during an appointment, a continuous learning AI program will use ambient listening to tune in to conversations between patients and their healthcare providers. This program can capture important notes, create visit summaries, assist with paperwork and generate instructions for prescription medications that the provider orders.

Broadly speaking, AI can also be beneficial in virtual appointments. Studies show that AI monitoring tools can help verify that patients are using medications like inhalers or insulin pens as prescribed, and can provide much-needed guidance when questions arise.

The future of AI in healthcare

The future of AI in healthcare, notes Dr. Jehi, is perhaps brightest in the realm of research.

“I’ve learned throughout this process that there is a lot more to be learned by using AI,” she says.

As an epilepsy specialist, Dr. Jehi researches how machine learning has changed epilepsy surgery as we know it.

Traditionally, if a patient with epilepsy continues to have seizures and isn’t responding to medication, surgery becomes the next best option. During the procedure, a surgeon finds the spot in the brain that’s triggering the seizures, confirms that spot isn’t critical to the patient’s functioning and then safely removes it.

“The way we used to make those decisions, we’d do a bunch of tests, we’d measure brainwaves, we’d take a picture of the brain, we’d look at how the radiologist or the EEG doctor interpreted the results, and then, we’d take the test results,” she shares. “Based on our own human experience, we’d decide if we want to do the surgery or not. But we were very limited in our ability to build collective knowledge.”

In essence, Dr. Jehi explains that doctors were stuck in a vacuum. They knew the expertise they’d gained over the years had been valuable on an individual level, but without looking at the bigger picture, it was hard to tell who would respond best to which surgical technique if they were coming in as a first-time patient.

Now, machine learning is filling that gap in collective knowledge by pulling patient data together and distilling it into one place. Doctors can use it to research the disease and the effectiveness of different treatment options, and let that evidence inform their practice.

“From the patient perspective, nothing really much changes. They’re still getting the tests that they need for the clinical decision to be made,” she enthuses. “That is the beauty of what AI offers. It’s a task for us to get exponentially more insight from the same type of clinical data that we always had but we just didn’t know what to do with. AI is allowing us to deep dive into those tests and get more insights than just what our superficial initial interpretation was.”

Currently, Dr. Jehi is working to improve specialized AI predictive models that can accurately guide medical and surgical epilepsy decision-making.

“We are doing research to come up with a way to reduce these complex AI models to simpler tools that could be more easily integrated in clinical care,” she notes.
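One common way to “reduce” a complex model to a bedside tool is to distill it into a small risk score built on a handful of predictors. The sketch below is purely illustrative, not Dr. Jehi’s actual method: the predictors and coefficients are hypothetical placeholders, not real clinical values, which would have to be fit to outcome data from many surgical patients.

```python
import math

# Hypothetical coefficients for illustration only -- NOT clinical values.
INTERCEPT = -0.5
WEIGHTS = {
    "epilepsy_duration_yrs": -0.03,  # longer duration: lower odds of seizure freedom
    "lesion_on_mri": 0.9,            # a visible resectable lesion: higher odds
    "prior_surgery": -0.7,           # a previous failed surgery: lower odds
}

def seizure_freedom_probability(patient):
    """Logistic-regression-style estimate of post-surgical seizure freedom."""
    z = INTERCEPT + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the score to a probability

p = seizure_freedom_probability(
    {"epilepsy_duration_yrs": 10, "lesion_on_mri": 1, "prior_surgery": 0}
)
```

A tool this simple can be printed on a card or embedded in the electronic health record, which is exactly what makes it easier to integrate into clinical care than a large opaque model.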

With the help of machine learning, Dr. Jehi and other researchers have also identified biomarkers that indicate which patients face a higher risk of seizures recurring after surgery. Work is also underway to fully automate detecting and locating the brain regions that need to be removed during epilepsy surgery.

Right now, Dr. Jehi is focusing on understanding how a patient’s genetic composition and brain play into their epilepsy. How does a patient’s epilepsy behave given these factors? How do they respond to epilepsy surgery? And are these factors related to how well the surgery works down the road?

“We’ve been completely overlooking how nature works,” she says. “Until now, we haven’t really analyzed how the genetic makeup of individuals factors into all of this. With my research, we have a lot of evidence that makes us believe that genetic makeup is actually quite important in driving surgical outcomes.”

With AI and machine learning, Dr. Jehi hopes to continue pushing this research to the next level by looking at increasingly larger groups of patients.

Our AI journey

As we continue to improve our understanding of AI and further our pursuit of innovation and discovery, it’s up to healthcare providers around the world to question how best to utilize the tools at their disposal. Already, the World Health Organization (WHO) has issued additional guidelines for safe and ethical AI use in the healthcare space — a continued effort that builds on its original 2021 guidelines but adds caution around large language models like ChatGPT and Bard.

But when AI is used to further research and improve patient care with ethics and safety as the foundation of those efforts, its potential for the future of healthcare knows no bounds.

“I see AI as a path forward that helps us make sure that no data is left behind,” encourages Dr. Jehi. “When we’re doing research and we’re developing a new predictive model, or we want to better understand how a disease progresses, or we want to develop a new drug, or we want to just generate new knowledge — that’s what research is. It’s the generation of new knowledge. The more data that we can put in, the more our chances are of finding something new and of those things actually being meaningful.”

Article link: https://health.clevelandclinic.org/ai-in-healthcare?
