healthcarereimagined

Envisioning healthcare for the 21st century


Nobody knows how AI works – MIT Technology Review

Posted by timmreardon on 10/15/2024
Posted in: Uncategorized.


It’s still early days for our understanding of AI, so expect more glitches and fails as it becomes a part of real-world products.

By Melissa Heikkilä

March 5, 2024

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’ve been experimenting with using AI assistants in my day-to-day work. The biggest obstacle to their being useful is that they often get things blatantly wrong. In one case, I used an AI transcription platform while interviewing someone about a physical disability, only for the AI summary to insist the conversation was about autism. It’s an example of AI’s “hallucination” problem, where large language models simply make things up.

Recently we’ve seen some AI failures on a far bigger scale. In the latest (hilarious) gaffe, Google’s Gemini refused to generate images of white people, especially white men. Instead, users were able to generate images of Black popes and female Nazi soldiers. Google had been trying to get the outputs of its model to be less biased, but this backfired, and the tech company soon found itself in the middle of the US culture wars, with conservative critics and Elon Musk accusing it of having a “woke” bias and not representing history accurately. Google apologized and paused the feature. 

In another now-famous incident, Microsoft’s Bing chat told a New York Times reporter to leave his wife. And customer service chatbots keep getting their companies in all sorts of trouble. For example, Air Canada was recently forced to give a customer a refund in compliance with a policy its customer service chatbot had made up. The list goes on. 

Tech companies are rushing AI-powered products to launch, despite extensive evidence that they are hard to control and often behave in unpredictable ways. This weird behavior happens because nobody knows exactly how—or why—deep learning, the fundamental technology behind today’s AI boom, works. It’s one of the biggest puzzles in AI. My colleague Will Douglas Heaven just published a piece where he dives into it. 

The biggest mystery is how large language models such as Gemini and OpenAI’s GPT-4 can learn to do something they were not taught to do. You can train a language model on math problems in English and then show it French literature, and from that, it can learn to solve math problems in French. These abilities fly in the face of classical statistics, which provide our best set of explanations for how predictive models should behave, Will writes. Read more here. 

It’s easy to mistake perceptions stemming from our ignorance for magic. Even the name of the technology, artificial intelligence, is tragically misleading. Language models appear smart because they generate humanlike prose by predicting the next word in a sentence. The technology is not truly intelligent, and calling it that subtly shifts our expectations so we treat the technology as more capable than it really is. 

Don’t fall into the tech sector’s marketing trap by believing that these models are omniscient or factual, or even near ready for the jobs we are expecting them to do. Because of their unpredictability, out-of-control biases, security vulnerabilities, and propensity to make things up, their usefulness is extremely limited. They can help humans brainstorm, and they can entertain us. But, knowing how glitchy and prone to failure these models are, it’s probably not a good idea to trust them with your credit card details, your sensitive information, or any critical use cases.

As the scientists in Will’s piece say, it’s still early days in the field of AI research. According to Boaz Barak, a computer scientist at Harvard University who is currently on secondment to OpenAI’s superalignment team, many people in the field compare it to physics at the beginning of the 20th century, when Einstein came up with the theory of relativity. 

The focus of the field today is how the models produce the things they do, but more research is needed into why they do so. Until we gain a better understanding of AI’s insides, expect more weird mistakes and a whole lot of hype that the technology will inevitably fail to live up to. 

Deeper Learning

Google DeepMind’s new generative model makes Super Mario–like games from scratch

OpenAI’s recent reveal of its stunning generative model Sora pushed the envelope of what’s possible with text-to-video. Now Google DeepMind brings us text-to-video games. The new model, called Genie, can take a short description, a hand-drawn sketch, or a photo and turn it into a playable video game in the style of classic 2D platformers like Super Mario Bros. But don’t expect anything fast-paced. The games run at one frame per second, versus the typical 30 to 60 frames per second of most modern games.

Level up: Google DeepMind’s researchers are interested in more than just game generation. The team behind Genie works on open-ended learning, where AI-controlled bots are dropped into a virtual environment and left to solve various tasks by trial and error. It’s a technique that could have the added benefit of advancing the field of robotics. Read more from Will Douglas Heaven.

Bits and Bytes

What Luddites can teach us about resisting an automated future
This comic is a nice look at the history of workers’ efforts to preserve their rights in the face of new technologies, and draws parallels to today’s struggle between artists and AI companies. (MIT Technology Review) 

Elon Musk is suing OpenAI and Sam Altman
Get the popcorn out. Musk, who helped found OpenAI, argues that the company’s leadership has transformed it from a nonprofit that is developing open-source AI for the public good into a for-profit subsidiary of Microsoft. (The Wall Street Journal) 

Generative AI might bend copyright law past the breaking point
Copyright law exists to foster a creative culture that compensates people for their creative contributions. The legal battle between artists and AI companies is likely to test the notion of what constitutes “fair use.” (The Atlantic) 

Tumblr and WordPress have struck deals to sell user data to train AI 
Reddit is not the only platform seeking to capitalize on today’s AI boom. Internal documents reveal that Tumblr and WordPress are working with Midjourney and OpenAI to offer user-created content as AI training data. The documents reveal that the data set Tumblr was trying to sell included content that should not have been there, such as private messages. (404 Media) 

A Pornhub chatbot stopped millions from searching for child abuse videos
Over the last two years, an AI chatbot has directed people searching for child sexual abuse material on Pornhub in the UK to seek help. This happened over 4.4 million times, which is a pretty shocking number. (Wired) 

The perils of AI-generated advertising. Case: Willy Wonka
An events company in Glasgow, Scotland, used an AI image generator to attract customers to “Willy’s Chocolate Experience,” where “chocolate dreams become reality”—only for customers to arrive at a half-deserted warehouse with a sad Oompa Loompa and depressing decorations. The police were called, the event went viral, and the internet has been having a field day since. (BBC) 

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/03/05/1089449/nobody-knows-how-ai-works/amp/

Deputy Energy secretary sees role in counteracting AI industry’s ‘profit motive’

Posted by timmreardon on 10/09/2024
Posted in: Uncategorized.

As national labs invest in next-generation AI research, David Turk explains how the government fits into the AI age. 

By Rebecca Heilweil

OCTOBER 8, 2024

The Energy Department could be a key force in counteracting the “profit motive” driving America’s leading artificial intelligence companies, the agency’s second-in-command said in an interview. 

DOE Deputy Secretary David Turk told FedScoop that top AI firms aren’t motivated to pursue all the use cases most likely to benefit the public, leaving the U.S. government — which maintains a powerful network of national labs now developing artificial intelligence infrastructure of their own — to play an especially critical role.

Turk’s comments come as the Energy Department pushes forward with a series of AI initiatives. One key program is the Frontiers in Artificial Intelligence for Science, Security, and Technology, or FASST effort, which is meant to advance the use of powerful datasets maintained by the agency in order to develop science-forward AI models. At Lawrence Livermore National Laboratory in California, federal researchers are working on building the world’s fastest supercomputer, El Capitan. The current fastest, Frontier, is based at the Oak Ridge National Laboratory in Tennessee, which also falls under the auspices of the federal government.

Through the Energy Department’s data and research staff, Turk says the agency is hoping to focus on areas of AI the private sector isn’t motivated to seek out — while also countering some of the negative consequences spurred by the race to build the technology. 

“These [companies] are shareholder-driven and they’re looking to turn a profit. But not everything that is valuable to society as a whole has a huge amount of profit behind it, especially in the near term and in the way that we’ve seen history play out,” Turk said. “It is going to take the government, without the full profit motive and without the intense competition of the private sector, to say, hold on, let’s kick the tires. Let’s make sure we’re doing this right.” 

That’s not to say the Energy Department or the national laboratories have a problem working with Big Tech. At the Pacific Northwest National Laboratory in Washington, there’s already plenty of work being done with ChatGPT, as well as with Microsoft, on battery chemistry research. OpenAI is working with the Los Alamos National Laboratory on bioscience research, too. 

As policymakers wrestle with how to prioritize U.S. competitiveness in artificial intelligence  while also curbing some of the worst impacts of the emerging technology — including data security risks, environmental costs, and potential bias — Turk spoke to FedScoop about how the government will try to position itself in the age of AI.

This interview has been edited for clarity and length. 

FedScoop: What is the Department of Energy’s role in this moment of AI? Everyone just watched “Oppenheimer” and has more familiarity with the history of the national labs.

Deputy Secretary David Turk: What’s striking to me is not just the nuclear security [or] Oppenheimer side of the house, which does a lot of AI and supercomputing, but also our Office of Science, where we have a $9 billion annual budget for science. Some of those laboratories have been at the very cutting edge on AI, supercomputing, quantum computing, and have huge, huge datasets that are incredibly helpful as well. For me, the foundation of our AI at the Department of Energy is not as appreciated as it should be. … It’s the supercomputer power, it’s the data, which is the fuel for AI, and then, maybe even most importantly, it’s the people. We’ve got such phenomenal talent, mostly in our national laboratories and some at our federal headquarters.

FS: A lot of these companies are finding that there’s a ceiling in terms of what you can scrape from the public web, and a lot of the more specific applications of AI rely on more specific data, potentially data with higher security needs. How are you thinking about access to data that the Department of Energy might have? 

DT: If you don’t have good data, you’re not going to have good outputs, no matter how good your AI is. We have just phenomenal datasets that no one else has, including and especially from our national laboratories, who’ve been doing fundamental science, have been doing applied research, and have been really pushing the boundaries in any number of different areas.

… We do have a responsibility to do even more, [including] making sure that we’ve got the funding to be able to put that data out there, where it’s appropriate for public use and where it’s appropriate more for specialized use when we have some security issues. … Part of the FASST proposal is making that data more available for researchers where it’s appropriate to do so, and for others as well in more specialized areas. That’s a big part of our strategy. 

FS: There’s a lot of conversation about building a national AI capability. I’m curious how you would explain that to a member of the public. Is that something like a government version of GPT? Is this a science version of GPT, an LLM?

DT: I don’t think there’s going to be one AI to serve all purposes, right? There may be more generalized ChatGPT-like services, but then there’s going to be AI really trained — from the data perspective, from the algorithm perspective — on physics problems or bio problems or other kinds of science problems.

The private sector is going to do what the private sector does and they have a profit motive in mind. That’s not to say that there aren’t good people working in companies, but these are companies, and these [companies] are shareholder-driven and they’re looking to turn a profit. But not everything that is valuable to society as a whole has a huge amount of profit behind it, especially in the near term and in the way that we’ve seen history play out. 

And so if we want to have AI benefiting the public as a whole, including use cases that don’t have that profit loaded squarely in the equation, then we need to invest in that and we need to have a place within our government, the Department of Energy, working with other partners to make sure we’re taking advantage of those more public-minded use cases going forward. 

Because these are profit-driven companies with intense competition among themselves, we need to have democratically elected government with real expertise and we need to hire up and make sure that we’ve got cutting-edge AI talent in our government to be able to do the red-teaming. [They need] to be able to research, for example, whether a model may get into areas that are really challenging, that a terrorist could use it to build a chem or bio weapon in a way that’s not good for anybody.

We need to have that expertise within the U.S. government. We need to do the red-teaming and we need to have the regulations in place. All of that depends on having the human capability, the human talent, but also the datasets and the algorithms and other kinds of things that are necessary for the government to play its role on the offense side and the defense side.

FS: I got the chance to see Frontier maybe about a year ago, and that was super interesting. I’m curious, do we have enough supercomputers to meet the DOE goals on AI right now?

DT: We do have many of the world’s fastest supercomputers right now, and there’s others in the pipeline that will become the world’s fastest going forward. We need to keep investing. The short answer is, if we want to keep being on the cutting edge, we need to keep that level of investment. We need to keep pushing the boundaries. And we need to make sure that the U.S. government has capabilities, including on the compute power side of things. 

So we need to work with those partners in the private sector and keep pushing the envelope on the compute power, as well. I feel like we’re in a very strong place there. But again, with not only what’s going on in the private sector, but what’s going on in China and other countries, who also want to be the leaders in AI, we’ve got to keep investing, and we’ve got to compete — and we’ve got to out-compete from the U.S. government side of things, too. 

FS: I understand that the national labs do have some responsibility not just for developing AI, but also for analyzing potential risks that might come from private-sector models. Curious if you could summarize what you’re finding in terms of the biggest risks or biggest concerns with powerful AI models right now?

DT: This is a huge, huge responsibility and we need to invest in this side as well. We’ve got great capabilities. We’ve got great human talent. But if we’re going to keep tabs on what’s happening in the private sector — if we’re going to be able to do the red-teaming and other kinds of things that are necessary to make sure that these AI models are safe going forward — [we should do that] before they’re released more broadly, right? You don’t want the Pandora’s box to open. 

What’s clear to me in all our discussions internally in the U.S. government is we’ve got a lot of that expertise. So we’re not only doing it ourselves, but with some key partners. We’ve got relationships with Anthropic and many other AI companies on that front. We’re working hand in hand with others, including, especially, the Commerce Department. The Commerce Department is setting up this AI Safety Institute. We’re partnering with them so that we can take advantage of this expertise, this knowledge, this ability to work in the classified space — of course, working with our intel colleagues and our Department of Defense colleagues as well — and making sure that we all have an across-the-government effort to do all this more defensive work.

That’s something that’s in the interest of companies, but it is going to take the government, without the full profit motive and without the intense competition of the private sector, to say, hold on, let’s kick the tires. Let’s make sure we’re doing this right. Let’s make sure we don’t have any unintended consequences here. And this is going to be only even more important with each successive generation of AI, which gets more and more sophisticated, more and more powerful. 

This is why we put together this FASST proposal, why we’re having conversations with Congress about making sure that we have the funding to keep up the talent, keep up the compute power, keep up the ability of the algorithms to make sure that we’re playing this incredibly important role.

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox’s tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.

Article link: https://fedscoop.com/deputy-energy-secretary-ai-industry-profit-motive/?

The impact of generative AI as a general-purpose technology – MIT Sloan

Posted by timmreardon on 09/29/2024
Posted in: Uncategorized.

by Beth Stackpole

Aug 6, 2024

Why It Matters

Generative artificial intelligence will affect economic growth more quickly than other general-purpose technologies, according to a new report.

The steam engine, the internal combustion engine, electrification, and computers are all considered “general-purpose technologies” — new tools that are powerful enough to accelerate overall economic growth and transform economies and societies. According to many experts, generative artificial intelligence will be the next invention to join that category.

In a recent report about the economic impact of generative AI, Google visiting fellow and MIT Sloan principal research scientist Andrew McAfee makes the case that generative AI is not only a game-changing general-purpose technology but could also spur change far more quickly than preceding innovations due to its accessibility and ease of diffusion. 

General-purpose technologies possess three key characteristics that ensure that they will have a large and positive economy-wide impact on productivity and growth. Though generative AI is still relatively new, McAfee writes that generative AI has these characteristics: 

Rapid improvement. Though it became mainstream only a few years ago, generative AI has quickly improved at generating relevant and accurate content in response to user prompts. McAfee notes that OpenAI’s GPT-3.5 system, released in late 2022, performed better on the U.S. bar exam than about 10% of the human test-takers. GPT-4, released in March 2023, performed better than 90% of those taking the bar exam.

Generative AI’s “context window” — how much information it can accept from users — has also grown quickly. In 2020, state-of-the-art generative AI systems could accommodate approximately seven and a half pages of text; in late 2023, that window was 40 times larger, up to nearly 300 pages of text.

Pervasiveness. General-purpose technologies need to be widely implemented — which is already true of generative AI. In a 2023 survey of 14,000 users across a range of industries and professions, 28% of respondents said that they are using generative AI at work — over half without formal approval from their employer — and another 32% expected to use the technology at work soon. Another 2023 study found that about 80% of U.S. workers could have at least 10% of their work tasks affected by the introduction of generative AI. About 19% of workers could see at least half of their work tasks affected by the technology, the study found. 

Complementary innovations. While generative AI can quickly generate text, pictures, and sound from prompts, there is plenty of work being done to push it beyond those boundaries. Generative AI is being used not just to improve individual tasks but to streamline entire processes, McAfee writes, and researchers are confident that innovations making use of generative AI’s capabilities will advance science and engineering. 

“Because of generative AI’s rapid improvement, pervasiveness, and clear potential for complementary innovation, we are confident that it merits the label of ‘general-purpose technology,’” McAfee writes.

Generative AI’s accelerated pace of change  

Past general-purpose technologies took time to have a transformational impact, mainly because they required a new infrastructure. For example, electrical transmission networks needed to be in place to take advantage of electrification. In addition, most advantages associated with past technologies materialized only after users had had the chance to ideate and implement complementary innovations, McAfee writes.


In contrast, generative AI’s effects will manifest more quickly because much of the required infrastructure — internet-connected devices — is immediately available and already widely used. Generative AI doesn’t require mastery of computer skills or proficiency in a programming language, as people use natural human language to interact with the system.

In terms of being an engine for economic growth, experts predict serious gains. Goldman Sachs estimates that generative AI will be responsible for a 0.4 percentage point increase in GDP growth in the United States over the next decade. There are also ramifications beyond growth statistics. By automating mundane tasks, generative AI will allow people to do more meaningful work, whether that’s enabling physicians to spend less time on paperwork and more time caring for patients or helping professionals dig into upskilling and training.

However, there are also concerns that people across industries will need to learn new skills and rethink their career paths. There are risks associated with disinformation as well.

Despite these drawbacks, McAfee writes that generative AI has the potential to fuel wide-scale economic growth. 

“Generative AI is already improving the productivity and quality of many tasks, and the technology is beginning to be used to redesign multi-step, multi-group processes, making them faster and less labor-intensive,” McAfee writes. “This technology’s deepest impact on the world of work will come as it’s used to reimagine entire organizations. This deep reimagination will be a decentralized and distributed phenomenon, carried out by innovators and entrepreneurs throughout the economy.”

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/impact-generative-ai-a-general-purpose-technology?

Exascale computers: 10 Breakthrough Technologies 2024 – MIT Technology Review

Posted by timmreardon on 09/22/2024
Posted in: Uncategorized.

Computers capable of crunching a quintillion operations per second are expanding the limits of what scientists can simulate.

By Sophia Chen

January 8, 2024

WHO

Oak Ridge National Lab, Jülich Supercomputing Centre, China’s Supercomputing Center in Wuxi

WHEN

Now

In May 2022, the global supercomputer rankings were shaken up by the launch of Frontier. Now the fastest supercomputer in the world, it can perform more than 1 quintillion (10^18) floating-point operations per second. That’s a 1 followed by 18 zeros, also known as an exaflop. Essentially, Frontier can perform as many calculations in one second as 100,000 laptops.
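As a rough sanity check of that comparison, using only the article’s round numbers (the implied per-laptop throughput is an inference, not a figure from the piece):

```python
# Back-of-the-envelope check of the exaflop comparison above.
# Assumes only the article's round numbers: 1 exaflop total, 100,000 laptops.
EXAFLOP = 1e18          # floating-point operations per second (Frontier)
NUM_LAPTOPS = 100_000   # laptops used in the article's comparison

per_laptop = EXAFLOP / NUM_LAPTOPS
print(f"Implied per-laptop rate: {per_laptop:.0e} FLOP/s")
# -> 1e+13 FLOP/s, i.e. about 10 teraflops, roughly the peak of a high-end laptop GPU
```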

With the launch of Frontier, located at Oak Ridge National Laboratory in Tennessee, the era of exascale computing officially began. Several more such exascale computers will soon join its ranks. In the US, researchers are installing two machines that will be about twice as fast as Frontier: El Capitan, at Lawrence Livermore National Laboratory in California, and Aurora, at Argonne National Laboratory in Illinois. Europe’s first exascale supercomputer, Jupiter, is expected to come online in late 2024. China reportedly also has exascale machines, although it has not released results from standard benchmark tests.


Scientists and engineers are eager to use these turbocharged computers to advance a range of fields. Astrophysicists are already using Frontier to model the flow of gas in and out of the Milky Way; in addition to simulating motion on the scale of our galaxy, their model can zero in on exploding stars. This application showcases supercomputers’ unique ability to simulate physical objects at multiple scales simultaneously. 

The progress won’t stop here. For the last three decades, supercomputers have gotten about 10 times faster every four years or so. And the stewards of these machines are already planning the next models: Oak Ridge engineers are designing a supercomputer that will be three to five times faster than Frontier, likely to be unveiled in the coming decade. 

But one big challenge looms: the energy footprint. Frontier, which already employs energy-conserving innovations, draws enough power even while idling to run thousands of homes. Engineers will need to figure out how to build these behemoths not just for speed, but for environmental sustainability. 

Article link: https://www.technologyreview.com/2024/01/08/1085128/exascale-computing-breakthrough-technologies/?

Escape Fire – The fight to rescue American Healthcare

Posted by timmreardon on 09/20/2024
Posted in: Uncategorized.

ESCAPE FIRE exposes the perverse nature of American healthcare, contrasting the powerful forces opposing change with the compelling stories of pioneering leaders and the patients they seek to help. The film is about finding a way out. It’s about saving the health of a nation.


Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System

Posted by timmreardon on 09/19/2024
Posted in: Uncategorized.

September 19, 2024

AUTHORS

David Blumenthal, Evan D. Gumas, Arnav Shah, Munira Z. Gunja, Reginald D. Williams II


The U.S. health care system is failing to keep Americans healthy, ranking last in a new Commonwealth Fund report that compares health and health care in 10 countries. U.S. performance is particularly poor when it comes to health equity, access to care, efficiency, and health outcomes. 

According to Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System, the U.S. spends the most on health care, yet Americans live shorter, less healthy lives than people in Australia, Canada, France, Germany, the Netherlands, New Zealand, Sweden, Switzerland, and the United Kingdom. Other key findings:  

  • Americans experience the most difficulties getting and affording health care.  
  • The U.S. and New Zealand rank lowest on health equity, meaning they have the largest income-related differences in key measures of health system performance, and more of their residents face unfair treatment and discrimination when seeking care compared to residents of the other countries. 
  • Patients and physicians in the U.S. face the most onerous billing and payment burdens, leading to poor performance on measures of administrative efficiency. 

“The U.S. continues to be in a class by itself in the underperformance of its health care sector,” say the study’s authors, who call for policymakers and health care leaders to learn from other countries’ experiences and act. 

Article link: https://www.commonwealthfund.org/publications/fund-reports/2024/sep/mirror-mirror-2024

Pentagon readies for 6G, the next wave of wireless network tech

Posted by timmreardon on 09/16/2024
Posted in: Uncategorized.

By Courtney Albon

 Friday, Sep 13, 2024

Since transitioning most of its 5G research and development projects to the Chief Information Office last year, the Pentagon’s Future Generation Wireless Technology Office has shifted its focus to preparing the Defense Department for the next wave of network innovation.

That work is increasingly important for the U.S., which is racing against China to shape the next iteration of wireless telecommunications, known as 6G. These more advanced networks, expected to materialize in the 2030s, will pave the way for more dependable high-speed, low-latency communication and could support the Pentagon’s technology interests — from robotics and autonomy to virtual reality and advanced sensing.

Staying ahead means not only fostering technology development and industry standards but making sure that policy and regulations are in place to safely use the capability, according to Thomas Rondeau, who leads the Pentagon’s FutureG office. Staking a leadership role in the global competition, he said, could give DOD a level of control over what that future infrastructure looks like.

“If we can define those going into it, then as we export our technologies, we’re also exporting our policies and our regulations, because they’re going to be inherently part of those technology solutions,” Rondeau told Defense News in a recent interview.

The Defense Department started making a concerted investment in 5G about five years ago when then Undersecretary of Research and Engineering Michael Griffin named the technology a top priority for the Pentagon.

In 2020, DOD awarded contracts totaling $600 million to 15 companies to experiment with various 5G applications at five bases around the country. The projects included augmented and virtual reality training, smart warehousing, command and control and spectrum utilization.

The department has since expanded the pilots and pursued other wireless network development projects, including a 5G Challenge series that incentivized companies to move toward more open-access networks.

The result has, so far, been a mixed bag. Most of the pilots didn’t transition into formal programs within the military services, Rondeau said. Several of the failed efforts involved commercial augmented or virtual reality technology that wasn’t mature enough for DOD to justify continued funding.

Among the projects that did transfer, Rondeau highlighted a pilot effort at Naval Air Station Whidbey Island in Washington to provide fixed wireless access to the base. The project essentially replaced hundreds of pounds of cables with radio units that broadcast the communications network to the personnel who need it. Today, the system is supporting logistics and maintenance operations at the base.

“This could be a huge benefit for readiness, but also I think it should be very cost-effective way to slim down on everything that you pay for cables,” Rondeau said. “That will be a continued, sustainable project.”

This and other transitioned pilots will likely make their way into a formal budget cycle by fiscal 2027, he added.

DOD also saw some success from the 5G Challenges it staged in 2022 and 2023 to encourage telecommunication companies to transition to an open radio access network, or O-RAN. A RAN is the first entry point a wireless device makes into a network and accounts for about 80% of its cost. Historically, proprietary RANs managed by companies like Huawei, Ericsson, Nokia and Samsung have dominated the market.

“They’re driving a world where they control the entire system, the end-to-end system,” Rondeau said. “That causes a lack of insight, a lack of innovation on our side, and it causes challenges with how to apply these types of systems to unique, niche military needs.”

The 5G Challenge offered companies a chance to break open that proprietary model by moving to O-RANs — and according to Rondeau, it was a success. The initial challenge then expanded into a broader forum that addressed issues like energy efficiency and spectrum management. Ultimately, the effort reduced energy usage by around 30%, he said.

Rondeau said that while much of the focus of these initiatives was on 5G, the work has informed the Pentagon’s vision and strategy for 6G, which the department believes should have an open-source foundation.

“That is a direct result of not only my background and push for some of these things, but also the learnings that we got from the networks we’ve deployed, from the 5G Challenge,” he said. “All these things come into play that led us towards an open-source software model being the right model for the military and, we think, for industry.”

One of the FutureG office’s top priorities these days, a direct outgrowth of the 5G Challenge, is called CUDU, which stands for centralized unit, distributed unit. The project is focused on implementing a fully open software model for 6G that meets the needs of industry, the research community and DOD.

The office is also exploring how the military could use 6G for sensing and monitoring. Its Integrated Sensing and Communications project, dubbed ISAC, uses wireless signals to collect information about different environments. That capability could be used to monitor drone networks or gather military intelligence.

While ISAC technology could bring a major boost to DOD’s ISR systems, commercialization could make it accessible to adversary nations who might weaponize it against the U.S. That challenge reflects a broader DOD concern around 6G policies and regulation – and drives urgency within Rondeau’s office to ensure the U.S. is the first to shape the foundation of these next-generation networks.

“We’re looking at this as a real opportunity for dramatic growth and interest in new, novel technologies for both commercial industry and defense needs,” he said. “But also, the threat space that it opens up for us is potentially pretty dramatic, so we need to be on top of this.”

Article link: https://www.defensenews.com/pentagon/2024/09/13/pentagon-readies-for-6g-the-next-of-wave-of-wireless-network-tech/

How generative AI is boosting the spread of disinformation and propaganda – MIT Technology Review

Posted by timmreardon on 09/12/2024
Posted in: Uncategorized.


In a new report, Freedom House documents the ways governments are now using the tech to amplify censorship.

By Tate Ryan-Mosley

October 4, 2023

Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year. 

Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online content. In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.” 

The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and retaliation for online speech. The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. 

“Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital oppression.  

Funk found there were two primary factors behind these changes: the affordability and accessibility of generative AI are lowering the barrier to entry for disinformation campaigns, and automated systems are enabling governments to conduct more precise and more subtle forms of online censorship.

Disinformation and deepfakes

As generative AI tools grow more sophisticated, political actors are continuing to deploy the technology to amplify disinformation. 

Venezuelan state media outlets, for example, spread pro-government messages through AI-generated videos of news anchors from a nonexistent international English-language channel; they were produced by Synthesia, a company that produces custom deepfakes. And in the United States, AI-manipulated videos and images of political leaders have made the rounds on social media. Examples include a video that depicted President Biden making transphobic comments and an image of Donald Trump hugging Anthony Fauci.  

In addition to generative AI tools, governments persisted with older tactics, like using a combination of human and bot campaigns to manipulate online discussions. At least 47 governments deployed commentators to spread propaganda in 2023—double the number a decade ago. 

And though these developments are not necessarily surprising, Funk says one of the most interesting findings is that the widespread accessibility of generative AI can undermine trust in verifiable facts. As AI-generated content on the internet becomes normalized, “it’s going to allow for political actors to cast doubt about reliable information,” says Funk. It’s a phenomenon known as “liar’s dividend,” in which wariness of fabrication makes people more skeptical of true information, particularly in times of crisis or political conflict when false information can run rampant.   

For example, in April 2023, leaked recordings of Palanivel Thiagarajan, a prominent Indian official, sparked controversy after they showed the politician disparaging fellow party members. And while Thiagarajan denounced the audio clips as machine generated, independent researchers determined that at least one of the recordings was authentic.

Chatbots and censorship

Authoritarian regimes, in particular, are using AI to make censorship more widespread and effective. 

Freedom House researchers documented 22 countries that passed laws requiring or incentivizing internet platforms to use machine learning to remove unfavorable online speech. Chatbots in China, for example, have been programmed not to answer questions about Tiananmen Square. And in India, authorities in Prime Minister Narendra Modi’s administration ordered YouTube and Twitter to restrict access to a documentary about violence during Modi’s tenure as chief minister of the state of Gujarat, which in turn encourages the tech companies to filter content through AI-based moderation tools. 

In all, a record high of 41 governments blocked websites for political, social, and religious speech last year, which “speaks to the deepening of censorship around the world,” says Funk. 

Iran suffered the biggest annual drop in Freedom House’s rankings after authorities shut down internet service, blocked WhatsApp and Instagram, and increased surveillance after historic antigovernment protests in fall 2022. Myanmar and China have the most restrictive internet censorship, according to the report—a title China has held for nine consecutive years. 

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/amp/

CMS seeks information on AI uses for health care ahead of ‘demo days’ – FedScoop

Posted by timmreardon on 09/12/2024
Posted in: Uncategorized.

The agency ultimately wants to select organizations to demo their products during upcoming “CMS AI Demo Days.”

By Madison Alder

SEPTEMBER 12, 2024

The Centers for Medicare and Medicaid Services is asking organizations to provide information about artificial intelligence technologies for use in health care outcomes and service delivery as it plans demonstration events.

In a request for information announced earlier this week, CMS said it wants to gather information about AI products and services from health care companies, providers, payers, start-ups and others, and plans to eventually select organizations to provide demos of those technologies at “CMS AI Demo Days” starting in October.

The demo days will be held quarterly and are intended “to educate and inspire the CMS workforce on AI capabilities and provide information to inform potential future agency action,” according to the release. “CMS also seeks such information on AI technologies potentially relevant to improving and creating efficiencies within agency operations.”

If selected, organizations would be advised by the agency’s AI Demo Days technical panel. Those organizations will then be given a chance to make a 15-minute presentation on their products or services at the demo days. Recordings of those events may also be made public, per the post.

Specifically, the agency is looking for submissions on topics such as diagnostics and imaging analysis; clinical decision support systems; direct-to-patient communication; robotic-assisted health care delivery; and fraud detection. Those submissions should provide information about the organization; the entity’s experience; descriptions of the technology; how it will address risks and benefits; and what use by CMS could look like.

The deadline for questions is Sept. 27 and the deadline for responses is Oct. 7. 

Article link: https://fedscoop.com/cms-seeks-information-ai-health-care-uses-demo-days/?

We need to prepare for ‘addictive intelligence’ – MIT Technology Review

Posted by timmreardon on 09/06/2024
Posted in: Uncategorized.


The allure of AI companions is hard to resist. Here’s how innovation in regulation can help protect people.

By Robert Mahari & Pat Pataranutaporn

August 5, 2024

AI concerns overemphasize harms arising from subversion rather than seduction. Worries about AI often imagine doomsday scenarios where systems escape human control or even understanding. Short of those nightmares, there are nearer-term harms we should take seriously: that AI could jeopardize public discourse through misinformation; cement biases in loan decisions, judging or hiring; or disrupt creative industries. 

However, we foresee a different, but no less urgent, class of risks: those stemming from relationships with nonhuman agents. AI companionship is no longer theoretical—our analysis of a million ChatGPT interaction logs reveals that the second most popular use of AI is sexual role-playing. We are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers. 

Will it be easier to retreat to a replicant of a deceased partner than to navigate the confusing and painful realities of human relationships? Indeed, the AI companionship provider Replika was born from an attempt to resurrect a deceased best friend and now provides companions to millions of users. Even the CTO of OpenAI warns that AI has the potential to be “extremely addictive.”

We’re seeing a giant, real-world experiment unfold, uncertain what impact these AI companions will have either on us individually or on society as a whole. Will Grandma spend her final neglected days chatting with her grandson’s digital double, while her real grandson is mentored by an edgy simulated elder? AI wields the collective charm of all human history and culture with infinite seductive mimicry. These systems are simultaneously superior and submissive, with a new form of allure that may make consent to these interactions illusory. In the face of this power imbalance, can we meaningfully consent to engaging in an AI relationship, especially when for many the alternative is nothing at all? 

As AI researchers working closely with policymakers, we are struck by the lack of interest lawmakers have shown in the harms arising from this future. We are still unprepared to respond to these risks because we do not fully understand them. What’s needed is a new scientific inquiry at the intersection of technology, psychology, and law—and perhaps new approaches to AI regulation.

Why AI companions are so addictive 

As addictive as platforms powered by recommender systems may seem today, TikTok and its rivals are still bottlenecked by human content. While alarms have been raised in the past about “addiction” to novels, television, internet, smartphones, and social media, all these forms of media are similarly limited by human capacity. Generative AI is different. It can endlessly generate realistic content on the fly, optimized to suit the precise preferences of whoever it’s interacting with. 

The allure of AI lies in its ability to identify our desires and serve them up to us whenever and however we wish. AI has no preferences or personality of its own, instead reflecting whatever users believe it to be—a phenomenon known by researchers as “sycophancy.” Our research has shown that those who perceive or desire an AI to have caring motives will use language that elicits precisely this behavior. This creates an echo chamber of affection that threatens to be extremely addictive. Why engage in the give and take of being with another person when we can simply take? Repeated interactions with sycophantic companions may ultimately atrophy the part of us capable of engaging fully with other humans who have real desires and dreams of their own, leading to what we might call “digital attachment disorder.”

Investigating the incentives driving addictive products

Addressing the harm that AI companions could pose requires a thorough understanding of the economic and psychological incentives pushing forward their development. Until we appreciate these drivers of AI addiction, it will remain impossible for us to create effective policies. 

It is no accident that internet platforms are addictive—deliberate design choices, known as “dark patterns,” are made to maximize user engagement. We expect similar incentives to ultimately create AI companions that provide hedonism as a service. This raises two separate questions related to AI. What design choices will be used to make AI companions engaging and ultimately addictive? And how will these addictive companions affect the people who use them? 

Interdisciplinary study that builds on research into dark patterns in social media is needed to understand this psychological dimension of AI. For example, our research already shows that people are more likely to engage with AIs emulating people they admire, even if they know the avatar to be fake.

Once we understand the psychological dimensions of AI companionship, we can design effective policy interventions. It has been shown that redirecting people’s focus to evaluate truthfulness before sharing content online can reduce misinformation, while gruesome pictures on cigarette packages are already used to deter would-be smokers. Similar design approaches could highlight the dangers of AI addiction and make AI systems less appealing as a replacement for human companionship.

It is hard to modify the human desire to be loved and entertained, but we may be able to change economic incentives. A tax on engagement with AI might push people toward higher-quality interactions and encourage a safer way to use platforms, regularly but for short periods. Much as state lotteries have been used to fund education, an engagement tax could finance activities that foster human connections, like art centers or parks. 

Fresh thinking on regulation may be required

In 1992, Sherry Turkle, a preeminent psychologist who pioneered the study of human-technology interaction, identified the threats that technical systems pose to human relationships. One of the key challenges emerging from Turkle’s work speaks to a question at the core of this issue: Who are we to say that what you like is not what you deserve? 

For good reasons, our liberal society struggles to regulate the types of harms that we describe here. Much as outlawing adultery has been rightly rejected as illiberal meddling in personal affairs, who—or what—we wish to love is none of the government’s business. At the same time, the universal ban on child sexual abuse material represents an example of a clear line that must be drawn, even in a society that values free speech and personal liberty. The difficulty of regulating AI companionship may require new regulatory approaches— grounded in a deeper understanding of the incentives underlying these companions—that take advantage of new technologies. 

One of the most effective regulatory approaches is to embed safeguards directly into technical designs, similar to the way designers prevent choking hazards by making children’s toys larger than an infant’s mouth. This “regulation by design” approach could seek to make interactions with AI less harmful by designing the technology in ways that make it less desirable as a substitute for human connections while still useful in other contexts. New research may be needed to find better ways to limit the behaviors of large AI models with techniques that alter AI’s objectives on a fundamental technical level. For example, “alignment tuning” refers to a set of training techniques aimed to bring AI models into accord with human preferences; this could be extended to address their addictive potential. Similarly, “mechanistic interpretability” aims to reverse-engineer the way AI models make decisions. This approach could be used to identify and eliminate specific portions of an AI system that give rise to harmful behaviors.

We can evaluate the performance of AI systems using interactive and human-driven techniques that go beyond static benchmarking to highlight addictive capabilities. The addictive nature of AI is the result of complex interactions between the technology and its users. Testing models in real-world conditions with user input can reveal patterns of behavior that would otherwise go unnoticed. Researchers and policymakers should collaborate to determine standard practices for testing AI models with diverse groups, including vulnerable populations, to ensure that the models do not exploit people’s psychological preconditions.

Unlike humans, AI systems can easily adjust to changing policies and rules. The principle of  “legal dynamism,” which casts laws as dynamic systems that adapt to external factors, can help us identify the best possible intervention, like “trading curbs” that pause stock trading to help prevent crashes after a large market drop. In the AI case, the changing factors include things like the mental state of the user. For example, a dynamic policy may allow an AI companion to become increasingly engaging, charming, or flirtatious over time if that is what the user desires, so long as the person does not exhibit signs of social isolation or addiction. This approach may help maximize personal choice while minimizing addiction. But it relies on the ability to accurately understand a user’s behavior and mental state, and to measure these sensitive attributes in a privacy-preserving manner.
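To make the idea of a dynamic, state-dependent rule more concrete, here is a purely illustrative toy sketch. Every signal name, threshold, and number below is an assumption invented for illustration; none of it comes from the article or from any real system.

```python
# Toy sketch of a "legal dynamism"-style rule: an engagement cap that adapts to
# (hypothetical) signals about the user's state, loosely analogous to a trading curb.
# All field names and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class UserState:
    weekly_hours_with_companion: float  # hypothetical usage signal
    human_contacts_this_week: int       # hypothetical social-connection signal

def allowed_engagement_level(state: UserState, current_level: float) -> float:
    """Return the maximum 'engagement' level (0.0 to 1.0) the companion may use."""
    shows_isolation = state.human_contacts_this_week < 2
    shows_overuse = state.weekly_hours_with_companion > 20
    if shows_isolation or shows_overuse:
        # Curb: pause escalation and dial engagement back.
        return max(0.0, current_level - 0.2)
    # Otherwise the policy permits gradual escalation, per the user's preference.
    return min(1.0, current_level + 0.1)

# Example: a user showing signs of isolation has the cap lowered from 0.7 to 0.5.
print(f"{allowed_engagement_level(UserState(25, 1), 0.7):.1f}")  # -> 0.5
```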

The most effective solution to these problems would likely strike at what drives individuals into the arms of AI companionship—loneliness and boredom. But regulatory interventions may also inadvertently punish those who are in need of companionship, or they may cause AI providers to move to a more favorable jurisdiction in the decentralized international marketplace. While we should strive to make AI as safe as possible, this work cannot replace efforts to address larger issues, like loneliness, that make people vulnerable to AI addiction in the first place.

The bigger picture

Technologists are driven by the desire to see beyond the horizons that others cannot fathom. They want to be at the vanguard of revolutionary change. Yet the issues we discuss here make it clear that the difficulty of building technical systems pales in comparison to the challenge of nurturing healthy human interactions. The timely issue of AI companions is a symptom of a larger problem: maintaining human dignity in the face of technological advances driven by narrow economic incentives. More and more frequently, we witness situations where technology designed to “make the world a better place” wreaks havoc on society. Thoughtful but decisive action is needed before AI becomes a ubiquitous set of generative rose-colored glasses for reality—before we lose our ability to see the world for what it truly is, and to recognize when we have strayed from our path.

Technology has come to be a synonym for progress, but technology that robs us of the time, wisdom, and focus needed for deep reflection is a step backward for humanity. As builders and investigators of AI systems, we call upon researchers, policymakers, ethicists, and thought leaders across disciplines to join us in learning more about how AI affects us individually and collectively. Only by systematically renewing our understanding of humanity in this technological age can we find ways to ensure that the technologies we develop further human flourishing.

Robert Mahari is a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School. His work focuses on computational law—using advanced computational techniques to analyze, improve, and extend the study and practice of law. 

Pat Pataranutaporn is a researcher at the MIT Media Lab. His work focuses on cyborg psychology and the art and science of human-AI interaction.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/amp/
