healthcarereimagined

Envisioning healthcare for the 21st century


Tech companies must ‘acknowledge the damage’, says UN, and other digital technology stories you need to know – WEF

Posted by timmreardon on 07/03/2024
Posted in: Uncategorized.
Cathy Li

Head, AI, Data and Metaverse; Member of the Executive Committee, World Economic Forum

  • This round-up brings you key digital technology stories from the past fortnight.
  • Top headlines: UN’s call to big tech companies over product harm; Nvidia briefly becomes the world’s most valuable company; McDonald’s to end test run of AI chatbots at drive-thrus.

UN chief warns tech firms to ‘acknowledge the damage’ their products are causing

António Guterres, Secretary-General of the United Nations (UN), has called on technology firms to “acknowledge the damage your products are inflicting on people and communities”.

Speaking at the launch of the UN’s Global Principles for Information Integrity, he said that opaque algorithms on social media platforms had the ability to “push people into information bubbles and reinforce prejudices including racism, misogyny and discrimination of all kinds”.

UN Secretary-General António Guterres launches new UN Global Principles on Information Integrity. Image: UN Photo/Eskinder Debebe

“You have the power to mitigate harm to people and societies around the world,” he said. “You have the power to change business models that profit from disinformation and hate”.

His comments come after several high-profile calls for social media companies to do more to protect vulnerable people, especially children.

Earlier in the month, New York passed legislation to protect children from addictive social media content. Similarly, the US Surgeon General, Vivek Murthy, called on social media apps to add warning labels reminding users they can cause harm to young people.

Writing in an op-ed for the New York Times, he said that while this would not make the platforms safe, it could increase awareness among young people and influence their behaviour.

And in the UK, a group of parents is calling on social media companies to grant access to their children’s data following their deaths. One ended their life after viewing harmful content, while another may have died after participating in a social media ‘challenge’.

Misinformation and disinformation is predicted to be the biggest global risk over the next two years. Image: World Economic Forum Global Risks Report 2024

AI boom sees Nvidia briefly become world’s most valuable company

Nvidia became the world’s most valuable company in June, overtaking Microsoft and Apple to hit a market value of $3.34 trillion. The company’s microchips are playing a key part in the development and advancement of artificial intelligence (AI) technology. Microsoft retook the top spot a few days later following a stock selloff.

Despite the fall, many investors expect Nvidia’s valuation to keep rising. The business has seen its stock surge 180% in 2024 alone. At the time of writing, The Guardian reported a 2.8% rise in the company’s shares in early trading on 25 June.

The company started the month by unveiling its new generation of processors – Rubin – less than three months after launching its predecessor, the Blackwell chip.

News in brief: Digital technology stories from around the world

McDonald’s is to halt testing of AI chatbots at drive-thrus. The systems, which had been implemented in over 100 US locations, featured an AI voice that could respond to customer orders. The company has not given a reason for ending its test run, but shortcomings in the technology, such as orders being duplicated or items added incorrectly, have been shared on social media.

A film credited to ChatGPT 4.0 has had its premiere cancelled. The Last Screenwriter was due to debut at the Prince Charles Cinema in London, but a backlash has seen the screening withdrawn. Speaking to the Daily Beast, director Peter Luisi said: “I think people don’t know enough about the project. All they hear is ‘first film written entirely by AI’ and they immediately see the enemy, and their anger goes towards us.”

SAP is restructuring 8,000 jobs to focus on AI-driven business areas, the company announced. It will spend €2 billion ($2.2 billion) to retrain employees or replace them through voluntary redundancy programs.

And US record labels are suing two prominent AI music generators, Suno and Udio, reports Wired. Universal Music Group, Warner Music Group and Sony Music Group filed lawsuits in US federal court, alleging copyright infringement. Speaking in a press release, Recording Industry Association of America chair and CEO Mitch Glazier said: “Unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work and exploit it for their own profit without consent or pay set back the promise of genuinely innovative AI for us all.”

More on digital technology on Agenda

Digital twin technologies have the potential to drive innovation in the industrial sector, allowing for improvements in efficiency without sacrificing productivity. See how the technology could support integrated digital ecosystems and help the energy sector in this piece.

As we face the fourth industrial revolution, better knowledge exchange between businesses and governments will help facilitate faster growth in tech-enabled industries. See how regions like Karnataka in India can become attractive locations for businesses developing AI solutions in this article.

Spatial computing, blockchain and AI have all generated excitement in recent years. But their potential is set to grow further as synergies between the technologies are realized. Learn about some of the industries already witnessing the transformative impact of these key technologies as they converge.

Article link: https://www.weforum.org/agenda/2024/07/ai-regulation-digital-news-july-2024/

Cryptography may offer a solution to the massive AI-labeling problem 

Posted by timmreardon on 07/03/2024
Posted in: Uncategorized.


An internet protocol called C2PA adds a “nutrition label” to images, video, and audio.

By Tate Ryan-Mosley

July 28, 2023

The White House wants big AI companies to disclose when content has been created using artificial intelligence, and very soon the EU will require some tech platforms to label their AI-generated images, audio, and video with “prominent markings” disclosing their synthetic origins. 

There’s a big problem, though: identifying material that was created by artificial intelligence is a massive technical challenge. The best options currently available—detection tools powered by AI, and watermarking—are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

But another approach has been attracting attention lately: C2PA. Launched two years ago, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. 

The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who—or what—created it. 

The project, part of the nonprofit Joint Development Foundation, was started by Adobe, Arm, Intel, Microsoft, and Truepic, which formed the Coalition for Content Provenance and Authenticity (from which C2PA gets its name). Over 1,500 companies are now involved in the project through the closely affiliated open-source community, Content Authenticity Initiative (CAI), including ones as varied and prominent as Nikon, the BBC, and Sony.

Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all of its AI-generated content, including images created with its DALL-E-powered AI image generator.

Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by “supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist’s creation versus AI-generated or modified art.”

What is C2PA and how is it being used?

Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt in to labeling their visual and audio content with information about where it came from. (At least for the moment, this does not apply to text-based posts.) 

Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone. 

Truepic, which sells content verification products, has demonstrated how the protocol works with a deepfake video made with Revel.ai. When a viewer hovers over a little icon at the top right corner of the screen, a box of information about the video appears that includes the disclosure that it “contains AI-generated content.”

Adobe has also already integrated C2PA, which it calls content credentials, into several of its products, including Photoshop and Adobe Firefly. “We think it’s a value-add that may attract more customers to Adobe tools,” Andy Parsons, senior director of the Content Authenticity Initiative at Adobe and a leader of the C2PA project, says. 

C2PA is secured through cryptography, which relies on a series of codes and keys to protect information from being tampered with and to record where information came from. More specifically, it works by encoding provenance information through a set of hashes that cryptographically bind to each pixel, says Jenks, who also leads Microsoft’s work on C2PA. 
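
To make the idea concrete, here is a minimal Python sketch of the general pattern described above: hash the asset, wrap the hash and a few provenance fields in a small manifest, and sign the manifest so that any later change to the pixels or the metadata breaks verification. It illustrates the concept only, not the actual C2PA specification; the file name, manifest fields and Ed25519 key are assumptions made for the example.

    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def make_manifest(image_path, creator, tool):
        # Hash the raw bytes of the asset; any pixel-level change alters this digest.
        digest = hashlib.sha256(open(image_path, "rb").read()).hexdigest()
        # A minimal "nutrition label": who or what created the content, and how.
        return {"asset_sha256": digest, "creator": creator, "tool": tool}

    signing_key = Ed25519PrivateKey.generate()            # in practice, a certified signing key
    manifest = make_manifest("photo.jpg", "Example Newsroom", "AI image generator")
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = signing_key.sign(payload)                 # binds the manifest to the signer

    # A viewer re-hashes the asset, re-serializes the manifest and verifies the signature;
    # verify() raises InvalidSignature if either the content or the label was tampered with.
    signing_key.public_key().verify(signature, payload)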

C2PA offers some critical benefits over AI detection systems, which use AI to spot AI-generated content and can in turn learn to get better at evading detection. It’s also a more standardized and, in some instances, more easily viewable system than watermarking, the other prominent technique used to identify AI-generated content. The protocol can work alongside watermarking and AI detection tools as well, says Jenks. 

The value of provenance information 

Adding provenance information to media to combat misinformation is not a new idea, and early research seems to show that it could be promising: one project from a master’s student at the University of Oxford, for example, found evidence that users were less susceptible to misinformation when they had access to provenance information about content. Indeed, in OpenAI’s update about its AI detection tool, the company said it was focusing on other “provenance techniques” to meet disclosure requirements.

That said, provenance information is far from a fix-all solution. C2PA is not legally binding, and without required internet-wide adoption of the standard, unlabeled AI-generated content will exist, says Siwei Lyu, a director of the Center for Information Integrity and professor at the University at Buffalo in New York. “The lack of over-board binding power makes intrinsic loopholes in this effort,” he says, though he emphasizes that the project is nevertheless important.

What’s more, since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how helpful the provision of metadata will be when it comes to media fluency of the public. Provenance labels do not necessarily mention whether the content is true or accurate. 

Ultimately, the coalition’s most significant challenge may be encouraging widespread adoption across the internet ecosystem, especially by social media platforms. The protocol is designed so that a photo, for example, would have provenance information encoded from the time a camera captured it to when it found its way onto social media. But if the social media platform doesn’t use the protocol, it won’t display the photo’s provenance data.

The major social media platforms have not yet adopted C2PA. Twitter had signed on to the project but dropped out after Elon Musk took over. (Twitter also stopped participating in other volunteer-based projects focused on curbing misinformation.)  

C2PA “[is] not a panacea, it doesn’t solve all of our misinformation problems, but it does put a foundation in place for a shared objective reality,” says Parsons. “Just like the nutrition label metaphor, you don’t have to look at the nutrition label before you buy the sugary cereal.

“And you don’t have to know where something came from before you share it on Meta, but you can. We think the ability to do that is critical given the astonishing abilities of generative media.”

This piece has been updated to clarify the relationship between C2PA and CAI.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2023/07/28/1076843/cryptography-ai-labeling-problem-c2pa-provenance/amp/

How should AI-generated content be labeled? – MIT Sloan

Posted by timmreardon on 07/03/2024
Posted in: Uncategorized.

By Brian Eastwood

 Nov 29, 2023

Why It Matters

Content labels are one way to identify content generated with artificial intelligence. A new study looks at what wording is most effective.

In late October, President Joe Biden issued a wide-ranging executive order on AI security and safety. The order includes new standards and best practices for clearly labeling AI-generated content, in part to help Americans determine whether communications that appear to be from the government are authentic.

This points to a concern that as generative AI becomes more widely used, manipulated content could easily spread false information. As the executive order indicates, content labels are one strategy for combatting the spread of misinformation. But what are the right terms to use? Which ones will be widely understood by the public as indicating that something has been generated or manipulated by artificial intelligence technology or is intentionally misleading?

A new working paper co-authored by MIT Sloan professor David Rand found that across the United States, Mexico, Brazil, India, and China, people associated certain terms, such as “AI generated” and “AI manipulated,” most closely with content created using AI. Conversely, the labels “deepfake” and “manipulated” were most associated with misleading content, whether AI created it or not.

These results show that most people have a reasonable understanding of what “AI” means, which is a good starting point. They also suggest that any effort to label content needs to consider the overarching goal, said Rand, a professor of management science and brain and cognitive sciences. Rand co-authored the paper with Ziv Epstein, SM ’19 and PhD ’23, a postdoctoral fellow at Stanford; MIT graduate researcher Cathy Fang, SM ’23; and Antonio A. Arechar, a professor at the Center for Research and Teaching in Economics in Aguascalientes, Mexico.

Rand also co-authored a recent policy brief about labeling AI-generated content. 

“A lot of AI-generated content is not misleading, and a lot of misleading content is not AI-generated,” Rand said. “Is the concern really about AI-generated content per se, or is it more about misleading content?”

Looking at how people understand various AI-related terms 

Governments, technology companies, and industry associations are wrestling with how to let viewers know that they are viewing artificially generated content, given that face-swapping and voice imitation tools can be used to create misleading content, and images can be generated that falsely depict people in compromising situations.

In addition to the recent executive order, U.S. Rep. Ritchie Torres has proposed the AI Disclosure Act of 2023, which would require a disclaimer on any content — including videos, photos, text, or audio — generated by AI. Meanwhile, the Coalition for Content Provenance and Authenticity has developed an open technical standard for tracing the origins of content and determining whether it has been manipulated.

Disclaimers, watermarks, or other labels would be useful to indicate how content was created or whether it is misleading; in fact, studies have indicated that social media users are less likely to believe or share content labeled as misleading. But before trying to label content that is generated by AI, platforms and policymakers need to know which terms are widely understood by the general population. If labels use a term that is overly jargony or confusing, it could interfere with the label’s goal.

To look at what terms were understood correctly most often, the researchers surveyed more than 5,100 people across five countries in four languages. Participants were randomly assigned one of nine terms: “AI generated,” “generated with an AI tool,” “artificial,” “synthetic,” “deepfake,” “manipulated,” “not real,” “AI manipulated,” or “edited.” They were then shown descriptions of 20 different content types and asked whether the assigned term applied to each type of content.
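
As a rough sketch of how such term-to-content associations can be tabulated (this is illustrative Python, not the authors’ analysis code, and the responses shown are invented), one could count, for each label, how often respondents said it applied to AI-generated versus misleading content types:

    from collections import defaultdict

    # Invented responses: (term shown, did respondent say it applies, is the content AI-generated, is it misleading)
    responses = [
        ("AI generated", True,  True,  False),   # image made by a text-to-image model
        ("AI generated", False, False, False),   # hand-edited vacation photo
        ("deepfake",     True,  True,  True),    # face-swapped political video
        ("manipulated",  True,  False, True),    # selectively cropped photo
    ]

    tallies = defaultdict(lambda: {"AI-generated": [0, 0], "misleading": [0, 0]})
    for term, said_applies, is_ai, is_misleading in responses:
        if is_ai:
            tallies[term]["AI-generated"][0] += said_applies
            tallies[term]["AI-generated"][1] += 1
        if is_misleading:
            tallies[term]["misleading"][0] += said_applies
            tallies[term]["misleading"][1] += 1

    for term, kinds in tallies.items():
        for kind, (hits, total) in kinds.items():
            if total:
                print(f"'{term}' applied to {kind} content types: {hits}/{total}")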

The phrases “AI generated,” “generated with an AI tool,” and “AI manipulated” were most closely associated with content generated using AI.

Alternatively, the researchers found that “deepfake” and “manipulated” were most closely associated with potentially misleading content. Terms such as “edited,” “synthetic,” or “not real” were not closely associated with either AI-generated content or misleading content.

The results were similar among the participants, regardless of age, gender, education, digital literacy, and familiarity with AI.

“The differences between ‘AI manipulated’ and ‘manipulated’ are quite striking: Simply adding the ‘AI’ qualifier dramatically changed which pieces of content participants understood the term as applying [to],” the researchers write.

The purpose of an AI label 

Content labels could serve two different purposes. One is to indicate that content was generated using AI. The other is to show that the content could mislead viewers, whether created by AI or not. That will be an important consideration as momentum builds to label AI-generated content.

RELATED ARTICLES

  • The legal issues presented by generative AI
  • AI needs to be more ‘pro-worker.’ These 5 policies can help
  • MIT Sloan research about social media and misinformation

“It could make sense to have different labels for misleading content that is AI-generated, versus content that’s not AI-generated,” Rand said.

How the labels are generated will also matter. Self-labeling has obvious disadvantages, as few creators will willingly admit that their content is intentionally misleading. Machine learning, crowdsourcing, and digital forensics are viable options, though relying on those approaches will become more challenging as the lines between content made by humans and generated by computers continue to blur. And under the principle of implied authenticity, the more content that gets labeled, the more that content without a label is assumed to be real.

Finally, researchers found that some labels will not work everywhere. For example, in the study, Chinese speakers associated the word “artificial” with human involvement, whereas the term connotes automation in English, Portuguese, and Spanish.

“You can’t just take labels shown to work well in the United States and blindly apply them cross-culturally,” Rand said. “Testing of labels will need to be done in different countries to ensure that terms resonate.”

READ THE PAPER: WHAT LABEL SHOULD BE APPLIED TO CONTENT PRODUCED BY GENERATIVE AI?

READ NEXT: STUDY GAUGES HOW PEOPLE PERCEIVE AI-GENERATED CONTENT

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/how-should-ai-generated-content-be-labeled?

AI Security

Posted by timmreardon on 07/01/2024
Posted in: Uncategorized.

Safeguarding Large Language Models and Why This Matters for the Future of Geopolitics

Event Details

Date:

July 18, 2024

Time:

4:00–6:00 p.m. Eastern

Location: American Association for the Advancement of Science (AAAS) auditorium
12th St. NW and H St. NW, Washington, D.C.

This event will also be livestreamed.

Program

Given the dramatic, rapid, and unpredictable rate of change of AI capabilities, there is an urgent need for robust, forward-thinking strategies to ensure the security of AI systems. As many national governments have acknowledged, AI models may soon be critical for national security: They could potentially drive advantages in strategic competition—and, in the wrong hands, enable significant harm.

Please join RAND on Thursday, July 18, 4:00 Eastern, for a moderated panel discussion on the increasingly important topic of securing AI, and the implications for national and homeland security. A brief reception will follow the discussion and Q&A.

Speakers

Vijay Bolina

Chief Information Security Officer (CISO), Google DeepMind

Lisa Einstein

Senior Advisor for AI and Executive Director of the Cybersecurity Advisory Committee, CISA

Sella Nevo

Director, RAND Meselson Center and RAND Senior Information Scientist

Learn More about AI Security

A recent RAND study, Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models, focuses on the potential theft and misuse of foundation AI model weights and details how promising security measures can be adapted specifically for model weights.

For more from RAND on AI, see our collection of featured research and commentary related to artificial intelligence topics.

Register for This Program

Please register to attend in person to join us in the auditorium at the American Association for the Advancement of Science (AAAS) in Washington, D.C., or register to join the livestream.

Contact securing-ai-panel@rand.org with questions about the event.

Article link: https://www.rand.org/events/2024/07/securing-ai.html?

The 15 Diseases of Leadership – HBR

Posted by timmreardon on 06/27/2024
Posted in: Uncategorized.

https://hbr.org/2015/04/the-15-diseases-of-leadership-according-to-pope-francis

DHS report details AI’s potential to amplify biological, chemical threats – Nextgov

Posted by timmreardon on 06/27/2024
Posted in: Uncategorized.

By Alexandra Kelley

June 24, 2024

As artificial intelligence and machine learning continue to intersect with sensitive research efforts, the Department of Homeland Security recommended increased communication and guidance to mitigate dangerous outcomes.

Artificial intelligence has the potential to unlock the secrets of developing weapons of mass destruction, particularly chemical and biological threats, for malicious actors, according to a Department of Homeland Security report publicly released last week.

The full report, which was teed up in April with the release of a fact sheet, examines the role of AI in aiding but also thwarting efforts by adversaries to research, develop and use chemical, biological, radiological and nuclear weapons. The report was required under President Joe Biden’s October 2023 executive order on AI.

“The increased proliferation and capabilities of AI tools … may lead to significant changes in the landscape of threats to U.S. national security over time, including by influencing the means, accessibility, or likelihood of a successful CBRN attack,” DHS’s Countering Weapons of Mass Destruction Office states in the report.

According to the report, “known limitations in existing U.S. biological and chemical security regulations and enforcement, when combined with increased use of AI tools, could increase the likelihood of both intentional and unintentional dangerous research outcomes that pose a risk to public health, economic security, or national security.”

Specifically, the report states that the proliferation of publicly available AI tools could lower the barrier to entry for malicious actors seeking information on the composition, development and delivery of chemical and biological weapons. While access to laboratory facilities is still a hurdle, the report notes that so-called “cloud labs” could allow threat actors to remotely develop components of weapons of mass destruction in the physical world, potentially under the cover of anonymity. 

CWMD recommended that the U.S. develop guidance covering the “tactical exclusion and/or protection of sensitive chemical and biological data” from public training materials for large language models, as well as more oversight governing access to remote-controlled lab facilities. 

The report also said that specific federal guidance is needed to govern how biological design tools and biological- and chemical-specific foundation models are used. This guidance would ideally include “granular release practices” for source code and specification for the weight calculations used to build a relevant language model.

More generally, the report seeks the development of consensus within U.S. government regulatory agencies on how to manage AI and machine learning technologies, in particular as they intersect with chemical and biological research.

Other recommendations include incorporating “safe harbor” vulnerability reporting practices into organizational proceedings, practicing internal evaluation and red teaming efforts, cultivating a broader culture of responsibility among expert life science communities and responsibly investigating the benefits AI and machine learning could have in biological, chemical and nuclear contexts.

The report also envisions a role for AI in mitigating existing CBRN risks through threat detection and response, including via disease surveillance, diagnostics, “and many other applications the national security and public health communities have not identified.”

While the findings in this report are not enforceable mandates, DHS said that the contents will help shape future policy and objectives within the CWMD office. 

“Moving forward, CWMD will explore how to operationalize the report’s recommendations through existing federal government coordination groups and associated efforts led by the White House,” a DHS spokesperson told Nextgov/FCW. “The Office will integrate AI analysis into established threat and risk assessments as well as into the planning and acquisition that it performs on behalf of federal, state, local, tribal and territorial partners.” 

Article link: https://www.nextgov.com/artificial-intelligence/2024/06/dhs-report-details-ais-potential-amplify-biological-chemical-threats/397607/?

Top 10 Emerging Technologies of 2024 – WEF

Posted by timmreardon on 06/26/2024
Posted in: Uncategorized.

These are the Top 10 Emerging #Technologies which could significantly impact #society and the #economy in the next 3-5 years. This World Economic Forum report, produced in collaboration with Frontiers, draws on the expertise of scientists, researchers and futurists and covers applications in health, communication, infrastructure and sustainability. Learn more about #emergingtech24 here: https://lnkd.in/e9qfH9Mz #AMNC2


Download PDF

The Top 10 Emerging Technologies report is a vital source of strategic intelligence. First published in 2011, it draws on insights from scientists, researchers and futurists to identify 10 technologies poised to significantly influence societies and economies. These emerging technologies are disruptive, attractive to investors and researchers, and expected to achieve considerable scale within five years. This edition expands its analysis by involving over 300 experts from the Forum’s Global Future Councils and a global network comprising over 2,000 chief editors worldwide from top institutions, through Frontiers, a leading publisher of academic research.

Explore the report

  • Report summary: Key Findings
  • Online reader: Full report
  • More on the topic: World Economic Forum Identifies Top 10 Emerging Technologies to Address Global Challenges

Article link: https://www.weforum.org/publications/top-10-emerging-technologies-2024/

Digital twins are helping scientists run the world’s most complex experiments – MIT Technology Review

Posted by timmreardon on 06/22/2024
Posted in: Uncategorized.


Engineers use the high-fidelity models to monitor operations, plan fixes, and troubleshoot problems.

By Sarah Scoles

June 10, 2024

In January 2022, NASA’s $10 billion James Webb Space Telescope was approaching the end of its one-million-mile trip from Earth. But reaching its orbital spot would be just one part of its treacherous journey. To ready itself for observations, the spacecraft had to unfold itself in a complicated choreography that, according to its engineers’ calculations, had 344 different ways to fail. A sunshield the size of a tennis court had to deploy exactly right, ending up like a giant shiny kite beneath the telescope. A secondary mirror had to swing down into the perfect position, relying on three legs to hold it nearly 25 feet from the main mirror. 

Finally, that main mirror—its 18 hexagonal pieces nestled together as in a honeycomb—had to assemble itself. Three golden mirror segments had to unfold from each side of the telescope, notching their edges against the 12 already fitted together. The sequence had to go perfectly for the telescope to work as intended.

“That was a scary time,” says Karen Casey, a technical director for Raytheon’s Air and Space Defense Systems business, which built the software that controls JWST’s movements and is now in charge of its flight operations. 

Over the multiple days of choreography, engineers at Raytheon watched the events unfold as the telescope did. The telescope, beyond the moon’s orbit, was way too distant to be visible, even with powerful instruments. But the telescope was feeding data back to Earth in real time, and software near-simultaneously used that data to render a 3D video of how the process was going, as it was going. It was like watching a very nerve-racking movie.

The 3D video represented a “digital twin” of the complex telescope: a computer-based model of the actual instrument, based on information that the instrument provided. “This was just transformative—to be able to see it,” Casey says.

The team watched tensely, during JWST’s early days, as the 344 potential problems failed to make their appearance. At last, JWST was in its final shape and looked as it should—in space and onscreen. The digital twin has been updating itself ever since.

The concept of building a full-scale replica of such a complicated bit of kit wasn’t new to Raytheon, in part because of the company’s work in defense and intelligence, where digital twins are more popular than they are in astronomy.

JWST, though, was actually more complicated than many of those systems, so the advances its twin made possible will now feed back into that military side of the business. It’s the reverse of a more typical story, where national security pursuits push science forward. Space is where non-defense and defense technologies converge, says Dan Isaacs, chief technology officer for the Digital Twin Consortium, a professional working group, and digital twins are “at the very heart of these collaborative efforts.”

As the technology becomes more common, researchers are increasingly finding these twins to be productive members of scientific society—helping humans run the world’s most complicated instruments, while also revealing more about the world itself and the universe beyond.  

800 million data points

The concept of digital twins was introduced in 2002 by Michael Grieves, a researcher whose work focused on business and manufacturing. He suggested that a digital model of a product, constantly updated with information from the real world, should accompany the physical item through its development. 

But the term “digital twin” actually came from a NASA employee named John Vickers, who first used it in 2010 as part of a technology road map report for the space agency. Today, perhaps unsurprisingly, Grieves is head of the Digital Twins Institute, and Vickers is still with NASA, as its principal technologist. 

Since those early days, technology has advanced, as it is wont to do. The Internet of Things has proliferated, hooking real-world sensors stuck to physical objects into the ethereal internet. Today, those devices number more than 15 billion, compared with mere millions in 2010. Computing power has continued to increase, and the cloud—more popular and powerful than it was in the previous decade—allows the makers of digital twins to scale their models up or down, or create more clones for experimentation, without investing in obscene amounts of hardware. Now, too, digital twins can incorporate artificial intelligence and machine learning to help make sense of the deluge of data points pouring in every second. 

Out of those ingredients, Raytheon decided to build its JWST twin for the same reason it also works on defense twins: there was little room for error. “This was a no-fail mission,” says Casey. The twin tracks 800 million data points about its real-world sibling every day, using all those 0s and 1s to create a real-time video that’s easier for humans to monitor than many columns of numbers. 

The JWST team uses the twin to monitor the observatory and also to predict the effects of changes like software updates. When testing these, engineers use an offline copy of the twin,  upload hypothetical changes, and then watch what happens next. The group also uses an offline version to train operators and to troubleshoot IRL issues—the nature of which Casey declines to identify. “We call them anomalies,” she says. 
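
Stripped of scale, a twin of this kind is essentially a continuously updated state model plus the ability to fork it for what-if testing. The Python sketch below illustrates only that pattern; it is not Raytheon’s system, and the telemetry fields and update rule are invented for the example.

    import copy

    class DigitalTwin:
        """Keeps a live state model in sync with telemetry from the real instrument."""

        def __init__(self):
            self.state = {"mirror_segments_deployed": 0, "sunshield_temp_k": 300.0}

        def ingest(self, telemetry):
            # Each telemetry packet overwrites the matching fields of the model.
            self.state.update(telemetry)

        def offline_copy(self):
            # Fork the twin so hypothetical changes can be tried without touching the live model.
            return copy.deepcopy(self)

    live = DigitalTwin()
    live.ingest({"mirror_segments_deployed": 18, "sunshield_temp_k": 50.2})

    sandbox = live.offline_copy()
    sandbox.ingest({"sunshield_temp_k": 80.0})    # simulate a software change or an anomaly

    print(live.state["sunshield_temp_k"])         # 50.2 (the live twin is untouched)
    print(sandbox.state["sunshield_temp_k"])      # 80.0

The offline copy is what lets operators rehearse an update or an anomaly response without disturbing the model that mirrors the real spacecraft.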

Science, defense, and beyond

JWST’s digital twin is not the first space-science instrument to have a simulated sibling. A digital twin of the Curiosity rover helped NASA solve the robot’s heat issues. At CERN, the European particle accelerator, digital twins help with detector development and more mundane tasks like monitoring cranes and ventilation systems. The European Space Agency wants to use Earth observation data to create a digital twin of the planet itself. 

At the Gran Telescopio Canarias, the world’s largest single-mirror telescope, the scientific team started building a twin about two years ago—before they’d even heard the term. Back then, Luis Rodríguez, head of engineering, came to Romano Corradi, the observatory’s director. “He said that we should start to interconnect things,” says Corradi. They could snag principles from industry, suggested Rodríguez, where machines regularly communicate with each other and with computers, monitor their own states, and automate responses to those states.

The team started adding sensors that relayed information about the telescope and its environment. Understanding the environmental conditions around an observatory is “fundamental in order to operate a telescope,” says Corradi. Is it going to rain, for instance, and how is temperature affecting the scope’s focus? 

After they had the sensors feeding data online, they created a 3D model of the telescope that rendered those facts visually. “The advantage is very clear for the workers,” says Rodríguez, referring to those operating the telescope. “It’s more easy to manage the telescope. The telescope in the past was really, really hard because it’s very complex.”

Right now, the Gran Telescopio twin just ingests the data, but the team is working toward a more interpretive approach, using AI to predict the instrument’s behavior. “With information you get in the digital twin, you do something in the real entity,” Corradi says. Eventually, they hope to have a “smart telescope” that responds automatically to its situation. 
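
One simple way a twin can move from displaying data to anticipating behavior is to fit a model from environmental readings to an operational quantity and query it ahead of time. The sketch below is an invented example along those lines, a least-squares fit of focus offset against dome temperature, and is not the Gran Telescopio team’s approach.

    import numpy as np

    # Invented history logged by the twin: dome temperature (deg C) and measured focus offset (microns).
    temps   = np.array([ 8.0,  9.5, 11.0, 12.5, 14.0])
    offsets = np.array([12.0, 14.8, 17.9, 21.1, 24.0])

    # Fit offset ~= slope * temperature + intercept with ordinary least squares.
    slope, intercept = np.polyfit(temps, offsets, deg=1)

    def predicted_focus_offset(temp_c: float) -> float:
        """Predict the focus correction to apply before conditions actually change."""
        return slope * temp_c + intercept

    print(round(predicted_focus_offset(10.0), 1))  # expected offset at 10 deg C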

Corradi says the team didn’t find out that what they were building had a name until they went to an Internet of Things conference last year. “We saw that there was a growing community in industry—and not in science, in industry—where everybody now is doing these digital twins,” he says.

The concept is, of course, creeping into science—as the particle accelerators and space agencies show. But it’s still got a firmer foothold at corporations. “Always the interest in industry precedes what happens in science,” says Corradi.  But he thinks projects like theirs will continue to proliferate in the broader astronomy community. For instance, the group planning the proposed Thirty Meter Telescope, which would have a primary mirror made up of hundreds of segments, called to request a presentation on the technology. “We just anticipated a bit of what was already happening in the industry,” says Corradi.

The defense industry really loves digital twins. The Space Force, for instance, used one to plan Tetra 5, an experiment to refuel satellites. In 2022, the Space Force also gave Slingshot Aerospace a contract to create a digital twin of space itself, showing what’s going on in orbit to prepare for incidents like collisions. 

Isaacs cites an example in which the Air Force sent a retired plane to a university so researchers could develop a “fatigue profile”—a kind of map of how the aircraft’s stresses, strains, and loads add up over time. A twin, made from that map, can help identify parts that could be replaced to extend the plane’s life, or to design a better plane in the future. Companies that work in both defense and science—common in the space industry in particular—thus have an advantage, in that they can port innovations from one department to another.

JWST’s twin, for instance, will have some relevance for projects on Raytheon’s defense side, where the company already works on digital twins of missile defense radars, air-launched cruise missiles, and aircraft. “We can reuse parts of it in other places,” Casey says. Any satellite the company tracks or sends commands to “could benefit from piece-parts of what we’ve done here.”  

Some of the tools and processes Raytheon developed for the telescope, she continues, “can copy-paste to other programs.” And in that way, the JWST digital twin will probably have twins of its own.

Sarah Scoles is a Colorado-based science journalist and the author, most recently, of the book Countdown: The Blinding Future of Nuclear Weapons.

Article link: https://www.linkedin.com/posts/mit-technology-review_digital-twins-are-helping-scientists-run-activity-7210230903908790272-cIQ4?

The Evolution Of AI And Mental Healthcare

Posted by timmreardon on 06/21/2024
Posted in: Uncategorized.

Rob Morris, Forbes Councils Member

Forbes Business Council

People have been using chatbots for decades, well before ChatGPT was released. One of the first chatbots was created in 1966, by Joseph Weizenbaum at MIT. It was called ELIZA, and it was designed to mimic the behaviors of a psychotherapist. Though Weizenbaum had no intention of using ELIZA for actual therapy (in fact, he rebelled against this idea), the concept was compelling.

Now, almost 60 years later, we are still imagining ways in which machines might help provide mental health support.

Indeed, AI offers many exciting new possibilities for mental health care. But understanding its benefits—while navigating its risks—is a complex challenge.

We can explore potential applications of AI and mental health by looking at two fundamental use cases: those related to the provider, and those related to the client.

Provider-Facing Opportunities

Training

AI can be used to help train mental health practitioners by simulating interactions with clients. For instance, ReflexAI uses AI to create a safe training environment for crisis line volunteers. Instead of learning in the moment, with a real caller, volunteers can rehearse and refine their skills with an AI agent.

Quality Monitoring

AI can also help monitor conversations between providers and clients. It has always been difficult to assess whether providers are adhering to evidence-based practices. AI has the potential to provide immediate feedback and suggestions to help improve client-provider interactions. Lyssn applies this concept to various domains, including crisis services like 988.

Suggested Responses

AI can also provide in-the-moment advice, offering suggestions and resources for providers. This could be especially helpful for crisis counselors, where the need to provide timely and empathetic feedback is extremely important.

Detection

There is also research suggesting that some mental health conditions can be inferred from various signals, such as one’s tone of voice, speech patterns and facial expressions. These biomarkers have the potential to greatly facilitate mental health screening. A good example is the work being done by Ellipsis Health.

Administrative

While less exciting and attention-grabbing than other opportunities, the greatest potential for near-term impact might relate to easing administrative burden. Companies like Eleos Health are turning behavioral health conversations into automated documentation and detailed clinical insights.

Client-Facing Opportunities

Chatbots like Woebot and Wysa are already capable of delivering evidence-based mental health support. However, as of this writing, they do not use generative AI. Instead, they guide the user through carefully crafted pre-scripted interactions. Let’s call this ELIZA 2.0.

But there are now several startups exploring something like ELIZA 3.0, where generative AI conducts the entire therapeutic process. Generative AI offers the potential to provide rich, nuanced interactions with the user, ideally improving the effectiveness of online interventions.

Users are also given much more control of the experience, potentially redirecting the chatbot toward different therapeutic approaches, as needed. New startups are already seizing this opportunity. For example, Sonia uses large language models to mimic cognitive-behavioral therapy.

Other companies (such as Replika and Character.ai) are providing companion bots that form bonds with end users, offering kind words and support. These AI chatbots are not trained to deliver anything resembling traditional therapy, but some users believe they have therapeutic benefits.

Risks

It seems clear that AI is well-positioned to enhance many elements of mental health care. However, significant risks remain for nearly all of the opportunities described thus far.

For providers, AI could become a crutch—something that is increasingly used without human scrutiny. For example, the current state-of-the-art models lack the situational awareness of providers to recognize complex and potentially very dangerous shifts in mood and behaviors.

In high-stakes environments, such as crisis helplines, AI could cause dangerous unanticipated consequences. This has happened before. For instance, a simple helpline bot designed to treat eating disorders was accidentally connected to a generative AI model, without the awareness of the researchers who were involved. Unfortunately, the bot then proceeded to advocate unhealthy diets and exercise for individuals struggling with disordered eating. Here, the root failure was likely a coordination issue between different organizations, but it exemplifies ways in which AI could be dangerous in real-world deployments.

For clients, AI has the potential to mislead users into thinking they are receiving acceptable care. A therapist bot may contend that it has clinical training and an advanced degree. Most users will probably know this is false, but these platforms tend to attract young people, and many may decide not to speak up about their mental health, believing these platforms provide sufficient care.

There are, of course, many other issues to consider, such as data privacy.

As we continue to explore AI in mental health, it’s important to balance its potential benefits with careful consideration of the risks to ensure it truly helps those in need.

Article link: https://www.forbes.com/sites/forbesbusinesscouncil/2024/06/21/the-evolution-of-ai-and-mental-healthcare/

U.S. Chip Subsidies Surge, 2024 Construction Funding Reportedly Exceeds Total of Previous 27 Years

Posted by timmreardon on 06/20/2024
Posted in: Uncategorized.

2024-06-14

The US government’s CHIPS and Science Act is reportedly driving investment into chip manufacturing at an unprecedented rate. According to a recent report by the U.S. Census Bureau, construction funding for computer and electrical manufacturing is growing at a remarkable pace: the amount being poured into this industry in 2024 alone is equivalent to the total of the previous 27 years combined.

Due to the substantial funding provided by the U.S. CHIPS Act, the construction industry in the United States is experiencing explosive growth. Companies such as TSMC, Intel, Samsung, and Micron have received billions of dollars to build new plants in the U.S.

Research by the Semiconductor Industry Association indicates that the U.S. will triple its domestic semiconductor manufacturing capacity by 2032. It is also projected that by the same year, the U.S. will account for 28% of the world’s advanced logic (below 10nm) manufacturing capacity, surpassing the goal of producing 20% of the world’s advanced chips announced by U.S. Commerce Secretary Gina Raimondo.

Currently, new plant constructions are underway. Despite the enormous expenditures, there have been construction delays across the United States affecting plants belonging to Samsung, TSMC, and Intel.

Notably, a previous report from South Korean media outlet BusinessKorea revealed Samsung has postponed the mass production timeline of its fab in Taylor, Texas, US from late 2024 to 2026. Similarly, a report from TechNews, which cited a research report from the Center for Security and Emerging Technology (CSET), noted the postponement of production at TSMC’s two plants in Arizona, US. Additionally, Intel, as per a previous report from the Wall Street Journal (WSJ), was also said to be delaying the construction timetable for its chip-manufacturing project in Ohio.

Article link: https://www.trendforce.com/news/2024/06/14/news-u-s-chip-subsidies-surge-2024-construction-funding-reportedly-exceeds-total-of-previous-27-years/

Read more

  • [News] A Subsidy Wave Sweeping in the US, Europe and Japan amid the Cut-Throat Chip Competition
  • [News] US Allocates USD 39 Billion Subsidy to Semiconductor Industry for Establishing Plants
