healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

I’ve watched 3 “revolutionary” healthcare technologies fail spectacularly.

Posted by timmreardon on 06/28/2025
Posted in: Uncategorized.

Each time, the technology was perfect.

The implementation was disastrous.

Google Health (shut down twice). Microsoft HealthVault (lasted 12 years, then folded). IBM Watson for Oncology (massively overpromised).

Billions invested. Solid technology. Total failure.

Not because the vision was wrong, but because healthcare adoption follows different rules than consumer tech.

Here’s what I learned building healthcare tech for 15 years:
1/ Healthcare moves at the speed of trust, not innovation
↳ Lives are at stake, so skepticism is protective
↳ Regulatory approval takes years, usually for good reason
↳ Doctors need extensive validation before adoption
↳ Patients want proven solutions, not beta testing

2/ Integration trumps innovation every time
↳ The best tool that no one uses is worthless
↳ Workflow integration matters more than features
↳ EMR compatibility determines adoption rates
↳ Training time is always underestimated

3/ The “cool factor” doesn’t predict success
↳ Flashy demos rarely translate to daily use
↳ Simple solutions often outperform complex ones
↳ User interface design beats artificial intelligence
↳ Reliability matters more than cutting-edge features

4/ Reimbursement determines everything
↳ No CPT code = no sustainable business model
↳ Insurance coverage drives provider adoption
↳ Value-based care is changing this slowly
↳ Free trials don’t create lasting change

5/ Clinical champions make or break technology
↳ One enthusiastic doctor can drive adoption
↳ Early adopters must see immediate benefits
↳ Word-of-mouth beats marketing every time
↳ Resistance from key stakeholders kills innovations

The pattern I’ve seen: companies build technology for the healthcare system they wish existed, not the one that actually exists.

They optimize for TechCrunch headlines instead of clinic workflows.

They design for Silicon Valley investors instead of 65-year-old physicians.

A successful healthcare technology I’ve implemented?

A simple visit summarization app that saved me time and let me focus on the patient.

No fancy interface, very lightweight, integrated into my clinical workflow, effortless to use.

It just solved a problem that users actually had.

Healthcare doesn’t need more revolutionary technology.

It needs evolutionary technology that works within existing systems.

⁉️ What’s the simplest technology that’s made the biggest difference in your healthcare experience? Sometimes basic beats brilliant.
♻️ Repost if you believe implementation beats innovation in healthcare
👉 Follow me (Reza Hosseini Ghomi, MD, MSE) for realistic perspectives on healthcare technology

Article link: https://www.linkedin.com/posts/rezahg_ive-watched-3-revolutionary-healthcare-activity-7342178230193295360-XWK_?

This AI Model Never Stops Learning – Wired

Posted by timmreardon on 06/21/2025
Posted in: Uncategorized.

Scientists at Massachusetts Institute of Technology have devised a way for large language models to keep learning on the fly—a step toward building AI that continually improves itself.

Modern large language models (LLMs) might write beautiful sonnets and elegant code, but they lack even a rudimentary ability to learn from experience.

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information, including a user’s interests and preferences.

The MIT scheme, called Self-Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.

“The initial idea was to explore if tokens [units of text fed to LLMs and generated by them] could cause a powerful update to a model,” says Jyothish Pari, a PhD student at MIT involved with developing SEAL. Pari says the idea was to see if a model’s output could be used to train it.

Adam Zweiger, an MIT undergraduate researcher involved with building SEAL, adds that although newer models can “reason” their way to better solutions by performing more complex inference, the model itself does not benefit from this reasoning over the long term.

SEAL, by contrast, generates new insights and then folds them into its own weights, or parameters. Given a statement about the challenges faced by the Apollo space program, for instance, the model generated new passages that try to describe the implications of the statement. The researchers compared this to the way a human student writes and reviews notes in order to aid their learning.

The system then updated the model using this data and tested how well the updated model could answer a set of questions. Finally, this performance provides a reinforcement-learning signal that guides the model toward updates that improve its overall abilities and help it carry on learning.
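The loop the article describes (generate self-edits, fine-tune on them, evaluate, reward the update) can be sketched in miniature. Everything below is a hypothetical toy, not the MIT code: the function names and the dictionary standing in for a "model" are invented for illustration.

```python
# Toy sketch of the SEAL outer loop described above. All names are
# invented stand-ins; a real implementation would do gradient updates
# on an actual LLM rather than append strings to a dictionary.

def generate_self_edits(model, passage):
    """The model drafts its own synthetic training data ('self-edits')
    from a new passage -- here faked as simple restatements."""
    return [f"Implication: {passage}", f"Restated: {passage}"]

def finetune(model, synthetic_data):
    """Stand-in for a weight update on the self-generated data."""
    updated = dict(model)
    updated["knowledge"] = updated.get("knowledge", []) + synthetic_data
    return updated

def evaluate(model, questions):
    """Score a model on held-out questions; this score is the
    reinforcement-learning reward that shapes future self-edits."""
    known = " ".join(model.get("knowledge", []))
    return sum(q in known for q in questions) / max(len(questions), 1)

def seal_step(model, passage, questions):
    edits = generate_self_edits(model, passage)
    candidate = finetune(model, edits)
    reward = evaluate(candidate, questions)
    # Keep the update only if it outperformed the old model
    # (a crude proxy for the RL signal in the paper).
    if reward > evaluate(model, questions):
        return candidate, reward
    return model, reward

model = {"knowledge": []}
model, reward = seal_step(model, "Apollo faced tight weight budgets", ["Apollo"])
```

The key design point the sketch preserves is that the reward is computed *after* the self-generated data has been folded into the weights, so the model is trained to produce self-edits that actually improve downstream answers.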

The researchers tested their approach on small and medium-size versions of two open source models, Meta’s Llama and Alibaba’s Qwen. They say that the approach ought to work for much larger frontier models too.

The researchers tested the SEAL approach on text as well as a benchmark called ARC that gauges an AI model’s ability to solve abstract reasoning problems. In both cases they saw that SEAL allowed the models to continue learning well beyond their initial training.

Pulkit Agrawal, a professor at MIT who oversaw the work, says that the SEAL project touches on important themes in AI, including how to get AI to figure out for itself what it should try to learn. He says it could well be used to help make AI models more personalized. “LLMs are powerful but we don’t want their knowledge to stop,” he says.

SEAL is not yet a way for AI to improve indefinitely. For one thing, as Agrawal notes, the LLMs tested suffer from what’s known as “catastrophic forgetting,” a troubling effect seen when ingesting new information causes older knowledge to simply disappear. This may point to a fundamental difference between artificial neural networks and biological ones. Pari and Zweiger also note that SEAL is computationally intensive, and it isn’t yet clear how best to schedule new periods of learning. One fun idea, Zweiger mentions, is that, like humans, perhaps LLMs could experience periods of “sleep” where new information is consolidated.

Still, for all its limitations, SEAL is an exciting new path for further AI research—and it may well be something that finds its way into future frontier AI models.

What do you think about AI that is able to keep on learning? Send an email to hello@wired.com to let me know.

Article link: https://www.wired.com/story/this-ai-model-never-stops-learning/?

FEHRM CTO Targets Two-Year Cloud Migration for Federal EHR

Posted by timmreardon on 06/20/2025
Posted in: Uncategorized.

WED, 06/18/2025 

Lance Scott touts new EHR tech advancements, including cloud migration, expanded data exchange and AI integration to improve care delivery.

The Federal Electronic Health Record Modernization Office is targeting new tech advancements for the federal EHR, including moving to the cloud, boosting interoperability through new information exchange programs and integrating AI, the office’s CTO Lance Scott explained earlier this month during the 2025 ACT-IAC Health Innovation Conference in Reston, Virginia.

Moving the Federal EHR to the Cloud

Federal EHR agencies will transition the EHR to the cloud in tranches, according to Scott, and the deployment could take nearly two years to complete as agencies develop “flexible scalability.”

“We want to take advantage of the inherent native cloud services that we’ve got. It’s no small feat. It’s going to take better part of 18 months to two years to do,” Scott said.

Scott said that his team is working to ensure that the transition is as seamless for the user as possible as the EHR continues to be developed and moved to the cloud. Ideally, the user would not recognize a significant change as the system switches over.

“We’re trying to keep as much functionality turmoil out of the mix as possible to make sure that we don’t impact the users too much now. However, what we’re doing is we’re setting the stage,” Scott said during the conference.

Despite the potential promise of the EHR, Scott said he still has lingering concerns about cost increases of the modernization effort as it moves to the cloud, specifically hidden costs that have yet to materialize.

“I think the biggest thing that I’m worried about is functionality that’s going to be enabled by us going to the cloud that we haven’t looked at yet, that will cost extra money,” Scott said. “As far as the general move to the cloud, I don’t think I’ve seen any use case that says that it makes more sense to stay on prem.”

Expanding the Seamless Exchange Program

Scott said the Department of Veterans Affairs’ Seamless Exchange program is “finally reaching fruition.” VA first piloted the program last year in Walla Walla, Washington. The pilot was successful enough that VA plans to launch the program on a wider scale in November of this year. The program offers new opportunities for interoperability between the Defense Department and VA, and DOD intends to roll out its own Seamless Exchange capability following the success of the VA program.

“The reason why it’s so exciting is years ago, my focus was to get more data, get more partners, do as much as we can to bring in data. Now we’ve got 96% of the U.S. market that we exchange data with. Now we’ve got another problem. The problem is information overflow,” Scott said.

The seamless data exchange is built upon three foundational pillars: data de-duplication, which Scott said has a huge impact on performance and cost; data provenance, as data shared over and over between partners loses its origin; and auto-ingestion, which brings in data from hundreds or even thousands of partners and needs to be analyzed by clinicians to drive best outcomes.
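The first two pillars, de-duplication and provenance, can be illustrated with a toy merge. The record fields and partner names below are invented for illustration and are not the actual exchange format:

```python
# Toy illustration of de-duplication with provenance tracking.
# Field names ("patient_id", "observation", "source") are invented.

def ingest(records):
    """Collapse duplicate records into one entry, but remember every
    partner that supplied a copy (the record's provenance)."""
    merged = {}
    for rec in records:
        key = (rec["patient_id"], rec["observation"], rec["date"])
        entry = merged.setdefault(
            key, {k: v for k, v in rec.items() if k != "source"}
        )
        entry.setdefault("sources", []).append(rec["source"])
    return list(merged.values())

feed = [
    {"patient_id": 1, "observation": "A1c 6.1%", "date": "2025-05-01", "source": "VA"},
    {"patient_id": 1, "observation": "A1c 6.1%", "date": "2025-05-01", "source": "DOD"},
    {"patient_id": 1, "observation": "BP 120/80", "date": "2025-05-01", "source": "VA"},
]
deduped = ingest(feed)  # two unique records, each tagged with its sources
```

Collapsing duplicates cuts storage and processing (Scott's performance and cost point), while the accumulated `sources` list keeps the origin of each copy from being lost as data is passed between partners.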

According to Scott, lessons learned from these pilots will directly affect the deployment of the EHR and lead to better outcomes overall. The VA is currently on track to deploy the EHR at 13 new sites in fiscal year 2026 following a nearly three-year deployment pause.

AI’s Role in the Future Federal EHR

Within the DOD, Scott pointed to U.S. Military Entrance Processing Command, which uses data gathered by the EHR to filter candidates looking to join the military. The influx of data has allowed employees to sift through candidates at a much more efficient pace and approve or decline candidates based on a number of factors, such as medical history or drug use.

In the future, Scott says the next generation of the EHR will be AI-enabled, with new technologies augmenting the ability of clinicians to provide quality care.

“They’re going to have digital assistants. They’re going to have ambient listening. There’s going to be agents listening into what the doctor and patient talk back and forth about,” Scott said. “They actually will draft up diagnoses and notes for the clinician to look at and finalize and sign.”

Article link: https://govciomedia.com/fehrm-cto-targets-two-year-cloud-migration-for-federal-ehr/

The American Sense of Fair Play

Posted by timmreardon on 06/20/2025
Posted in: Uncategorized.

Somehow we seem to have lost the American sense of fair play: the intuitive sense that people, regardless of status, can pursue their goals and interests without interference from the government. Depriving immigrants who are well situated and contributing to the American economy, paying taxes, living peacefully, committing no crimes, and simply trying to survive what can best be described as a meager existence in menial jobs is morally and ethically wrong and anti-Christian in nature. Those who wish to disrupt their pursuit of a peaceful life are monsters of chaos and misinformation, and hardship for the poor and disenfranchised is their condemnation. The American sense of fair play requires that they be given an equal opportunity to prosper, regardless of station in life. Fair play is not tax cuts for those who don’t need them at the expense of those barely surviving on the edge of life. Where is our collective humanity?

Our brain is quietly paying a price for using ChatGPT…

Posted by timmreardon on 06/17/2025
Posted in: Uncategorized.

A recent study from MIT researchers (12 June) explored what happens when people rely on AI tools like ChatGPT for tasks like essay writing.

One of the key findings (probably not very surprising):
—— Using ChatGPT reduced neural connectivity and cognitive engagement (compared to other groups)

Here’s what they did: 54 participants were split into three groups:

  • One group used ChatGPT
  • One used a search engine
  • One worked without any digital assistance

They wrote essays while their brain activity was tracked (using EEG), their writing was analyzed, and they were interviewed about the experience.

Other interesting findings:
—— People relying on ChatGPT felt less ownership of their work and struggled more to recall or quote it.

—— When switching tools, those moving from ChatGPT to brain-only found it harder to work unaided, while those moving the other way adapted quickly; some said it felt like gaining a superpower.

The study warns of cognitive debt: offloading too much thinking to AI could quietly erode critical thinking and deeper engagement over time.

The paper is quite long, full of rich details (I haven’t gone through it all yet!).

I’ll drop the link in the comments if you’re curious to explore it. Also, check page 5 first – it’s the “How to read this paper” guide from the authors, which is super helpful especially if you don’t have time for 200+ pages!

📍Btw, if you want to keep your brain active and build something great with AI, join our LeadWithAIAgents Hackathon (July 11–14)

Article link: https://www.linkedin.com/posts/alexwang2911_ai-cognitivescience-chatgpt-activity-7340798998154223618-8rvr?utm_source=share&utm_medium=member_ios&rcm=ACoAAAMNLzwBx4gZFYdrkprBeSa7F0HmSkFdYwU

VA official touts progress on EHR modernization project – Nextgov

Posted by timmreardon on 06/17/2025
Posted in: Uncategorized.

By Edward Graham, June 16, 2025, 04:38 PM ET

The agency is undertaking “a no-fail mission to deliver a Federal EHR at every VA medical center by 2031,” VA Deputy Secretary Paul Lawrence said.

The Department of Veterans Affairs is making “real progress” in its push to deploy its new electronic health record at 13 VA medical facilities next year, according to a top agency official. 

VA Deputy Secretary Paul Lawrence said in a Monday LinkedIn post that the agency is “currently up and running with deployment activities at 11 sites across Michigan, Southern Ohio, and Indiana going live in 2026,” and is also planning to begin activities at two additional medical facilities in Cleveland and Anchorage later this month.

VA initially signed a $10 billion contract — which was later revised to over $16 billion — with Cerner in May 2018 to modernize its legacy health record system and make it interoperable with the Pentagon’s new health record, which was also provided by Cerner. Oracle later acquired Cerner in 2022.

The agency paused most deployments of its modernized EHR system in April 2023, however, to address patient safety concerns, technical issues and usability challenges at the sites where the new software had been deployed. 

VA announced in December that it was moving out of its operational pause and was looking to deploy the new EHR system at four Michigan-based medical sites in mid-2026. VA Secretary Doug Collins subsequently announced in March that the agency was planning to implement the modernized software at nine additional medical facilities next year, bringing the total to 13 sites.

As of this month, the new EHR system has been fully deployed at just six of VA’s 170 medical centers. During a congressional hearing last month, however, Collins told lawmakers he was optimistic about efforts to speed up rollouts of the new software, saying that “once you get momentum, you can add more sites as you go.” 

The Trump administration’s fiscal year 2026 budget proposal, which was released in May, also included a roughly $2.2 billion boost for the rollout of the new EHR system.

Lawrence said he is holding biweekly working sessions with the team at Oracle Health about the modernization project, and that VA is making a concerted push to complete the deployment.

“We’re rolling up our sleeves to tackle the tough issues head-on, from pharmacy modules to referrals, and eliminating outdated processes that are holding us back,” Lawrence said. “This is a no-fail mission to deliver a Federal EHR at every VA medical center by 2031.”

Even as VA works to ramp up its activities ahead of next year’s planned deployments, congressional lawmakers are still looking to shore up the modernization project. Last week, Republican lawmakers on the House Veterans’ Affairs Committee put forward a discussion draft of legislation that seeks to improve oversight and governance of the EHR software’s rollout. 

Article link: https://www.nextgov.com/modernization/2025/06/va-official-touts-progress-ehr-modernization-project/406113/

New database details AI risks – MIT

Posted by timmreardon on 06/12/2025
Posted in: Uncategorized.

by Beth Stackpole

Nov 26, 2024

Why It Matters

The AI Risk Repository aims to provide industry, policymakers, and academics with a shared framework for monitoring and maintaining AI risk oversight.

As artificial intelligence sees unprecedented growth and industry use cases soar, concerns mount about the technology’s risks, including bias, data breaches, job loss, and misuse. 

According to research firm Arize AI, the number of Fortune 500 companies citing AI as a risk in their annual financial reports hit 281 this year. That represents a 473.5% increase from 2022, when just 49 companies flagged the technology as a risk factor.

Given the scope and seriousness of the risk climate, a team of researchers that included MIT Sloan research scientist Neil Thompson has created the AI Risk Repository, a living database of over 700 risks posed by AI, categorized by cause and risk domain. The project aims to provide industry, policymakers, academics, and risk evaluators with a shared framework for monitoring and maintaining oversight of AI risks. The repository can also aid organizations with their internal risk assessments, risk mitigation strategies, and research and training development. 

The AI Risk Database details 777 different risks cited in AI literature to date.

While other entities have attempted to classify AI risks, existing classifications have generally been focused on only a small part of the overall AI risk landscape.

“The risks posed by AI systems are becoming increasingly significant as AI adoption accelerates across industry and society,” said Peter Slattery, a researcher at MIT FutureTech and the project lead. “However, these risks are often discussed in fragmented ways, across different industries and academic fields, without a shared vocabulary or consistent framework.”

Creating a unified risk view

To create the risk repository, the researchers searched academic databases and consulted other resources to review existing taxonomies and structured classifications of AI risk. They found that two types of classification systems were common in existing literature: high-level categorizations of causes of AI risks, such as when and why risks from AI occur; and midlevel categorizations of hazards and harms from AI, such as using AI to develop weapons or training AI systems on limited data.

Both types of classification systems are used in the AI Risk Repository, which has three components:

  • The AI Risk Database captures 777 different risks from 43 documents, with quotes and page numbers included. It will be updated as new risks emerge.
  • The Causal Taxonomy of AI Risks classifies how, when, and why such risks occur, based on their root causes. Causes are broken out into three categories: the entity responsible (human or AI), the intentionality behind the risk (intentional or unintentional), and the timing of the risk (pre-deployment or post-deployment).
  • The Domain Taxonomy of AI Risks segments risks by the domain in which they occur, such as privacy, misinformation, or AI systems safety. This section covers seven domains and 23 subdomains. 

The two taxonomies can be used separately to filter the database for specific risks and domains, or they can be used in tandem to understand how each causal factor relates to each risk domain. For example, a user can use both filters to differentiate between discrimination and toxicity risks when AI is deliberately trained on toxic content from the outset, and instances of risk where AI inadvertently causes harm after the fact by displaying toxic content.
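In code, using the two taxonomies in tandem amounts to filtering records on both sets of fields. The two example records and field names below are invented for illustration; real repository entries carry quotes and page numbers from the source documents.

```python
# Hypothetical sketch of querying a risk database with both taxonomies.
# Records and field names are invented stand-ins for the repository schema.

risks = [
    {"risk": "model trained on toxic content emits slurs",
     "entity": "human", "intent": "intentional", "timing": "pre-deployment",
     "domain": "discrimination & toxicity"},
    {"risk": "deployed model surfaces toxic content unprompted",
     "entity": "AI", "intent": "unintentional", "timing": "post-deployment",
     "domain": "discrimination & toxicity"},
]

def filter_risks(db, **criteria):
    """Keep entries matching every causal- and domain-taxonomy filter."""
    return [r for r in db if all(r.get(k) == v for k, v in criteria.items())]

# The article's own example: separate deliberate toxic training from
# inadvertent post-deployment harm within the same risk domain.
deliberate = filter_risks(risks, intent="intentional", timing="pre-deployment")
inadvertent = filter_risks(risks, entity="AI", timing="post-deployment")
```

Applying the causal filters on top of a domain filter is what lets two risks in the same domain (here, discrimination and toxicity) be distinguished by who caused them, whether it was intentional, and when it occurred.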

As part of the exercise, the researchers uncovered some interesting insights about the current literature. Among them:

  • Most risks were attributed to AI systems rather than to humans (51% versus 34%).
  • Most of the risks discussed occurred after an AI model had been trained and deployed (65%) rather than before (10%).
  • Nearly an equal number of intentional (35%) and unintentional (37%) risks were identified.

Putting the AI Risk Repository to work 

The MIT AI Risk Repository will have different uses for different audiences.

Policymakers. The repository can serve as a guide for developing and enacting regulations on AI systems. For example, it can be used to identify the type and nature of risks and their sources as AI developers aim to comply with regulations like the EU AI Act. The tool also creates a common language and set of criteria for discussing AI risks at a global scale.

Auditors. The repository provides a shared understanding of risks from AI systems that can guide those in charge of evaluating and auditing AI risks. While some AI risk management frameworks had already been developed, they are much less comprehensive.

Academics. The taxonomy can be used to synthesize information about AI risks across studies and sources. It can also help identify gaps in current knowledge so efforts can be directed toward those areas. The AI Risk Repository can also play a role in education and training, acclimating students and professionals to the inner workings of the AI risk landscape.

Industry. The AI Risk Repository can be a critical tool for safe and responsible AI application development as organizations build new systems. The AI Risk Database can also help identify specific behaviors that mitigate risk exposure.

“The risks of AI are poised to become increasingly common and pressing,” the MIT researchers write. “Efforts to understand and address these risks must be able to keep pace with the advancements in deployment of AI systems. We hope our living, common frame of reference will help these endeavors to be more accessible, incremental, and successful.”

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/new-database-details-ai-risks?

Algorithms are everywhere – MIT Technology Review

Posted by timmreardon on 06/07/2025
Posted in: Uncategorized.


Three new books warn against turning into the person the algorithm thinks you are.

By Bryan Gardiner

February 27, 2024

Like a lot of Netflix subscribers, I find that my personal feed tends to be hit or miss. Usually more miss. The movies and shows the algorithms recommend often seem less predicated on my viewing history and ratings, and more geared toward promoting whatever’s newly available. Still, when a superhero movie starring one of the world’s most famous actresses appeared in my “Top Picks” list, I dutifully did what 78 million other households did and clicked.

As I watched the movie, something dawned on me: recommendation algorithms like the ones Netflix pioneered weren’t just serving me what they thought I’d like—they were also shaping what gets made. And not in a good way. 

The movie in question wasn’t bad, necessarily. The acting was serviceable, and it had high production values and a discernible plot (at least for a superhero movie). What struck me, though, was a vague sense of déjà vu—as if I’d watched this movie before, even though I hadn’t. When it ended, I promptly forgot all about it. 

That is, until I started reading Kyle Chayka’s recent book, Filterworld: How Algorithms Flattened Culture. A staff writer for the New Yorker, Chayka is an astute observer of the ways the internet and social media affect culture. “Filterworld” is his coinage for “the vast, interlocking … network of algorithms” that influence both our daily lives and the “way culture is distributed and consumed.” 

Music, film, the visual arts, literature, fashion, journalism, food—Chayka argues that algorithmic recommendations have fundamentally altered all these cultural products, not just influencing what gets seen or ignored but creating a kind of self-reinforcing blandness we are all contending with now.

That superhero movie I watched is a prime example. Despite my general ambivalence toward the genre, Netflix’s algorithm placed the film at the very top of my feed, where I was far more likely to click on it. And click I did. That “choice” was then recorded by the algorithms, which probably surmised that I liked the movie and then recommended it to even more viewers. Watch, wince, repeat.  

“Filterworld culture is ultimately homogenous,” writes Chayka, “marked by a pervasive sense of sameness even when its artifacts aren’t literally the same.” We may all see different things in our feeds, he says, but they are increasingly the same kind of different. Through these milquetoast feedback loops, what’s popular becomes more popular, what’s obscure quickly disappears, and the lowest-common-denominator forms of entertainment inevitably rise to the top again and again. 

This is actually the opposite of the personalization Netflix promises, Chayka notes. Algorithmic recommendations reduce taste—traditionally, a nuanced and evolving opinion we form about aesthetic and artistic matters—into a few easily quantifiable data points. That oversimplification subsequently forces the creators of movies, books, and music to adapt to the logic and pressures of the algorithmic system. Go viral or die. Engage. Appeal to as many people as possible. Be popular.  

A joke posted on X by a Google engineer sums up the problem: “A machine learning algorithm walks into a bar. The bartender asks, ‘What’ll you have?’ The algorithm says, ‘What’s everyone else having?’” “In algorithmic culture, the right choice is always what the majority of other people have already chosen,” writes Chayka. 

One challenge for someone writing a book like Filterworld—or really any book dealing with matters of cultural import—is the danger of (intentionally or not) coming across as a would-be arbiter of taste or, worse, an outright snob. As one might ask, what’s wrong with a little mindless entertainment? (Many asked just that in response to Martin Scorsese’s controversial Harper’s essay in 2021, which decried Marvel movies and the current state of cinema.) 

Chayka addresses these questions head on. He argues that we’ve really only traded one set of gatekeepers (magazine editors, radio DJs, museum curators) for another (Google, Facebook, TikTok, Spotify). Created and controlled by a handful of unfathomably rich and powerful companies (which are usually led by a rich and powerful white man), today’s algorithms don’t even attempt to reward or amplify quality, which of course is subjective and hard to quantify. Instead, they focus on the one metric that has come to dominate all things on the internet: engagement.

There may be nothing inherently wrong (or new) about paint-by-numbers entertainment designed for mass appeal. But what algorithmic recommendations do is supercharge the incentives for creating only that kind of content, to the point that we risk not being exposed to anything else.

“Culture isn’t a toaster that you can rate out of five stars,” writes Chayka, “though the website Goodreads, now owned by Amazon, tries to apply those ratings to books. There are plenty of experiences I like—a plotless novel like Rachel Cusk’s Outline, for example—that others would doubtless give a bad grade. But those are the rules that Filterworld now enforces for everything.”

Chayka argues that cultivating our own personal taste is important, not because one form of culture is demonstrably better than another, but because that slow and deliberate process is part of how we develop our own identity and sense of self. Take that away, and you really do become the person the algorithm thinks you are. 

Algorithmic omnipresence

As Chayka points out in Filterworld, algorithms “can feel like a force that only began to exist … in the era of social networks” when in fact they have “a history and legacy that has slowly formed over centuries, long before the Internet existed.” So how exactly did we arrive at this moment of algorithmic omnipresence? How did these recommendation machines come to dominate and shape nearly every aspect of our online and (increasingly) our offline lives? Even more important, how did we ourselves become the data that fuels them?

These are some of the questions Chris Wiggins and Matthew L. Jones set out to answer in How Data Happened: A History from the Age of Reason to the Age of Algorithms. Wiggins is a professor of applied mathematics and systems biology at Columbia University. He’s also the New York Times’ chief data scientist. Jones is now a professor of history at Princeton. Until recently, they both taught an undergrad course at Columbia, which served as the basis for the book.

They begin their historical investigation at a moment they argue is crucial to understanding our current predicament: the birth of statistics in the late 18th and early 19th century. It was a period of conflict and political upheaval in Europe. It was also a time when nations were beginning to acquire both the means and the motivation to track and measure their populations at an unprecedented scale.

“War required money; money required taxes; taxes required growing bureaucracies; and these bureaucracies needed data,” they write. “Statistics” may have originally described “knowledge of the state and its resources, without any particularly quantitative bent or aspirations at insights,” but that quickly began to change as new mathematical tools for examining and manipulating data emerged.

One of the people wielding these tools was the 19th-century Belgian astronomer Adolphe Quetelet. Famous for, among other things, developing the highly problematic body mass index (BMI), Quetelet had the audacious idea of taking the statistical techniques his fellow astronomers had developed to study the position of stars and using them to better understand society and its people. This new “social physics,” based on data about phenomena like crime and human physical characteristics, could in turn reveal hidden truths about humanity, he argued.

“Quetelet’s flash of genius—whatever its lack of rigor—was to treat averages about human beings as if they were real quantities out there that we were discovering,” write Wiggins and Jones. “He acted as if the average height of a population was a real thing, just like the position of a star.” 

From Quetelet and his “average man” to Francis Galton’s eugenics to Karl Pearson and Charles Spearman’s “general intelligence,” Wiggins and Jones chart a depressing progression of attempts—many of them successful—to use data as a scientific basis for racial and social hierarchies. Data added “a scientific veneer to the creation of an entire apparatus of discrimination and disenfranchisement,” they write. It’s a legacy we’re still contending with today. 

Another misconception that persists? The notion that data about people are somehow objective measures of truth. “Raw data is an oxymoron,” observed the media historian Lisa Gitelman a number of years ago. Indeed, all data collection is the result of human choice, from what to collect to how to classify it to who’s included and excluded. 

Whether it’s poverty, prosperity, intelligence, or creditworthiness, these aren’t real things that can be measured directly, note Wiggins and Jones. To quantify them, you need to choose an easily measured proxy. This “reification” (“literally, making a thing out of an abstraction about real things”) may be necessary in many cases, but such choices are never neutral or unproblematic. “Data is made, not found,” they write, “whether in 1600 or 1780 or 2022.”

Perhaps the most impressive feat Wiggins and Jones pull off in the book as they continue to chart data’s evolution throughout the 20th century and the present day is dismantling the idea that there is something inevitable about the way technology progresses. 

For Quetelet and his ilk, turning to numbers to better understand humans and society was not an obvious choice. Indeed, from the beginning, everyone from artists to anthropologists understood the inherent limitations of data and quantification, making some of the same critiques of statisticians that Chayka makes of today’s algorithmic systems (“Such statisticians ‘see quality not at all, but only quantity’”).

Whether they’re talking about the machine-learning techniques that underpin today’s AI efforts or an internet built to harvest our personal data and sell us stuff, Wiggins and Jones recount many moments in history when things could have just as likely gone a different way.

“The present is not a prison sentence, but merely our current snapshot,” they write. “We don’t have to use unethical or opaque algorithmic decision systems, even in contexts where their use may be technically feasible. Ads based on mass surveillance are not necessary elements of our society. We don’t need to build systems that learn the stratifications of the past and present and reinforce them in the future. Privacy is not dead because of technology; it’s not true that the only way to support journalism or book writing or any craft that matters to you is spying on you to service ads. There are alternatives.” 

A pressing need for regulation

If Wiggins and Jones’s goal was to reveal the intellectual tradition that underlies today’s algorithmic systems, including “the persistent role of data in rearranging power,” Josh Simons is more interested in how algorithmic power is exercised in a democracy and, more specifically, how we might go about regulating the corporations and institutions that wield it.

Currently a research fellow in political theory at Harvard, Simons has a unique background. Not only did he work for four years at Facebook, where he was a founding member of what became the Responsible AI team, but he previously served as a policy advisor for the Labour Party in the UK Parliament. 

In Algorithms for the People: Democracy in the Age of AI, Simons builds on the seminal work of authors like Cathy O’Neil, Safiya Noble, and Shoshana Zuboff to argue that algorithmic prediction is inherently political. “My aim is to explore how to make democracy work in the coming age of machine learning,” he writes. “Our future will be determined not by the nature of machine learning itself—machine learning models simply do what we tell them to do—but by our commitment to regulation that ensures that machine learning strengthens the foundations of democracy.”

Much of the first half of the book is dedicated to revealing all the ways we continue to misunderstand the nature of machine learning, and how its use can profoundly undermine democracy. And what if a “thriving democracy”—a term Simons uses throughout the book but never defines—isn’t always compatible with algorithmic governance? Well, it’s a question he never really addresses. 

Whether these are blind spots or Simons simply believes that algorithmic prediction is, and will remain, an inevitable part of our lives, the lack of clarity doesn’t do the book any favors. While he’s on much firmer ground when explaining how machine learning works and deconstructing the systems behind Google’s PageRank and Facebook’s Feed, there remain omissions that don’t inspire confidence. For instance, it takes an uncomfortably long time for Simons to even acknowledge one of the key motivations behind the design of the PageRank and Feed algorithms: profit. Not something to overlook if you want to develop an effective regulatory framework. 

Much of what’s discussed in the latter half of the book will be familiar to anyone following the news around platform and internet regulation (hint: we should be treating providers more like public utilities). And while Simons has some creative and intelligent ideas, I suspect even the most ardent policy wonks will come away feeling a bit demoralized given the current state of politics in the United States.

In the end, the most hopeful message these books offer is embedded in the nature of algorithms themselves. In Filterworld, Chayka includes a quote from the late, great anthropologist David Graeber: “The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.” It’s a sentiment echoed in all three books—maybe minus the “easily” bit. 

Algorithms may entrench our biases, homogenize and flatten culture, and exploit and suppress the vulnerable and marginalized. But these aren’t completely inscrutable systems or inevitable outcomes. They can do the opposite, too. Look closely at any machine-learning algorithm and you’ll inevitably find people—people making choices about which data to gather and how to weigh it, choices about design and target variables. And, yes, even choices about whether to use them at all. As long as algorithms are something humans make, we can also choose to make them differently. 

Bryan Gardiner is a writer based in Oakland, California.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/02/27/1088164/algorithms-book-reviews-kyle-chayka-chris-wiggins-matthew-l-jones-josh-simons/amp/

$400 Million Breakthrough: ASML’s New High-NA Machine Set to Transform Chipmaking – Techovedas

Posted by timmreardon on 05/29/2025
Posted in: Uncategorized.

By KUMAR PRIYADARSHI

MAY 29, 2025

INTERNATIONAL, SEMICONDUCTOR NEWS

Introduction

The semiconductor industry has reached a pivotal moment with ASML’s unveiling of its $400 million High-NA (High Numerical Aperture) chipmaking machine. 

This technological marvel promises to revolutionize how the world’s most advanced microchips are manufactured. 

By enabling unprecedented precision and speed in chip production, ASML’s latest innovation sets a new standard for the entire semiconductor supply chain. 

As the demand for smaller, faster, and more efficient chips skyrockets across industries like AI, smartphones, and data centers, this breakthrough machine could be the key to unlocking the next era of computing power.

Key Highlights:

$400 million per unit: ASML’s High NA is the world’s most expensive chipmaking machine.

Only 5 shipped so far: Intel, TSMC, and Samsung are early adopters.

Twice the reliability: Intel reports significant yield and throughput improvements.

Global assembly footprint: Modules come from the U.S., Germany, and the Netherlands.

EUV exclusivity: ASML is the only company globally producing EUV systems.


A Colossal Machine Changing the Chipmaking Landscape

Standing larger than a double-decker bus, the High NA system consists of four modules built across the U.S. (California and Connecticut), Germany, and the Netherlands.

Final assembly and testing take place in Veldhoven, the Netherlands, after which the machine is disassembled again for delivery. It takes seven Boeing 747s or 25 trucks to transport a single machine.

Only five High NA units have been delivered so far, with the first commercial installation at Intel’s Oregon fab in 2024. 

ASML expects adoption to expand to all its EUV customers, including Micron, SK Hynix, Rapidus, and others.

What Makes High NA Special?

ASML’s $400 million High NA machine builds on its EUV legacy by increasing the numerical aperture — the size of the lens opening used to project light onto silicon wafers. A larger aperture allows smaller, more precise patterns to be etched in fewer steps, improving chip performance and reducing production time.

According to ASML’s EVP of Technology Jos Benschop, the two primary benefits of High NA are:

  • “Shrink”: Fit more transistors onto a single wafer.
  • Faster throughput and higher yield by avoiding multiple patterning.

Intel reported producing 30,000 wafers with High NA and noted the tool is twice as reliable as ASML’s earlier EUV machines. Samsung claimed a 60% reduction in cycle time, indicating the potential for faster chips and lower costs.

The Physics Behind the Process

ASML’s High NA continues to use 13.5nm EUV light, made by firing 50,000 tin droplets per second with a powerful laser, creating plasma hotter than the sun. This light is projected through precision mirrors — the flattest surfaces on Earth — crafted by German optics partner Zeiss.

Because EUV is absorbed by all known materials, the entire lithography process happens in a vacuum, with light bounced and focused using specialized mirrors before reaching the silicon wafer.

Compared to ASML’s older DUV (Deep Ultraviolet) systems that use 193nm light and compete with Nikon and Canon, EUV — and now High NA — allows chipmakers to continue scaling down transistors in line with Moore’s Law.
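The scaling relationship behind this comparison can be sketched with the Rayleigh resolution criterion, which ties the smallest printable feature to wavelength and numerical aperture (CD ≈ k1 · λ / NA). The NA values and the k1 process factor below are illustrative textbook assumptions, not ASML specifications:

```python
# Sketch of the Rayleigh resolution criterion: CD = k1 * wavelength / NA.
# The k1 factor and NA values are illustrative assumptions for comparison,
# not official tool specifications.

def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.35) -> float:
    """Approximate smallest printable feature (critical dimension) in nm."""
    return k1 * wavelength_nm / na

# Compare the three generations of lithography discussed above.
systems = {
    "DUV immersion (193 nm, NA ~1.35)": (193.0, 1.35),
    "EUV (13.5 nm, NA ~0.33)": (13.5, 0.33),
    "High NA EUV (13.5 nm, NA ~0.55)": (13.5, 0.55),
}

for name, (wavelength, na) in systems.items():
    print(f"{name}: ~{min_feature_nm(wavelength, na):.1f} nm")
```

Under these assumptions, raising NA from 0.33 to 0.55 at the same 13.5nm wavelength shrinks the printable feature size by roughly 40% in a single exposure — which is why High NA can avoid the multiple-patterning steps older tools need.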

A Risk That Paid Off

Developing EUV technology took over 20 years and was once considered an impossible endeavor. 

“We barely made it… It’s been a very risky investment because there was no guarantee the technology would work,” ASML CEO Christophe Fouquet recalled.

Since proving EUV’s viability in 2018, ASML has cornered the global market. In 2024, the company sold 44 EUV machines at prices starting from $220 million. DUV sales, while lower-tech, remained strong at 374 units, with China being a key buyer.

Geopolitics and U.S. Export Controls

Despite booming global demand, U.S. export restrictions prevent ASML from selling its EUV machines to China. This ban, originating during Donald Trump’s presidency, remains in effect. China still buys DUV systems, which accounted for 49% of ASML’s sales in Q2 2024, driven by a backlog of orders.

Fouquet expects that figure to return to the historical norm of 20–25% in 2025. However, ASML is bracing for uncertainties, especially as Trump’s new tariff plans could disrupt its 800-part global supply chain. Each High NA machine involves imports and exports between the U.S., Germany, the Netherlands, and Asia.


Power Efficiency and AI Demands

High NA is not just about precision. It also tackles energy concerns in an AI-driven future. Fouquet warned, 

“If we don’t improve the power efficiency of our AI chips, training models could consume the world’s energy by 2035.”

ASML has reduced energy consumption per wafer by 60% since 2018, a crucial milestone as chipmakers seek sustainable growth amid rising demand for compute power.

ASML Expands U.S. Presence

Though headquartered in the Netherlands, ASML is deepening its U.S. footprint. In 2024, 17% of ASML’s sales came from the U.S., a figure expected to grow with new fabs under construction by Intel in Ohio and Arizona.

Of ASML’s 44,000 global employees, 8,500 are U.S.-based, spread across 18 offices. Fouquet called Intel a “very critical” partner for America’s goal of semiconductor independence — even as TSMC remains ahead in advanced manufacturing.


Outlook

ASML’s $400 million High NA system represents a technological leap that could reshape the global semiconductor landscape. With only a handful of companies able to afford it, and only ASML able to build it, the tool solidifies the company’s monopoly on advanced chip lithography.

Yet, challenges remain — from geopolitical tensions and tariffs to energy efficiency and production scalability.

For now, ASML’s High NA machine is not just a feat of engineering; it’s the centerpiece of the battle for semiconductor supremacy.

Article link: https://techovedas.com/400-million-breakthrough-asmls-new-high-na-machine-set-to-transform-chipmaking/

Insights from Nuclear History for AI Governance – RAND

Posted by timmreardon on 05/28/2025
Posted in: Uncategorized.

Benjamin Boudreaux, Gregory Smith, Edward Geist, Leah Dion

EXPERT INSIGHTS Published May 21, 2025

Report: RAND_PEA3652-1.pdf

There have been multiple proposals for the international governance of artificial intelligence (AI) that draw from the existing nuclear governance regimes. In this paper, the authors analyze lessons from the history of nuclear stability and draw analogies to building international governance of AI. The authors analyze two major episodes in nuclear governance, the failure of the Baruch Plan and the success of the Non-Proliferation Treaty, to understand what factors led to the failure or success of these governance initiatives. The authors also identify the challenges that proposals for global AI governance face that might complicate building a regime similar to the nuclear nonproliferation one. This paper is intended for those interested in potential models for global governance of AI that draw on past global governance efforts, such as nuclear nonproliferation.
