healthcarereimagined

Envisioning healthcare for the 21st century


ChatGPT Health Is a Terrible Idea

Posted by timmreardon on 01/09/2026
Posted in: Uncategorized.

Why AI Cannot Be Allowed to Mediate Medicine Without Accountability

By Katalin K. Bartfai-Walcott, CTO, Synovient Inc

On January 7, 2026, OpenAI announced ChatGPT Health, a new feature that lets users link their actual medical records and wellness data, from EMRs to Apple Health, to get personalized responses from an AI. It is positioned as a tool to help people interpret lab results, plan doctor visits, and understand health patterns. But this initiative is not just another health tech product. It is a dangerous architectural leap into personal medicine with very little regard for patient safety, accountability, or sovereignty.

The appeal is obvious. Forty million users already consult ChatGPT daily about health issues. Yet popularity does not equal safety. Connecting deep personal health data to a probabilistic language model, rather than a regulated medical device, creates a new class of risk.

As a new class of consumer AI products begins to position itself as a companion to healthcare, these systems offer to connect directly to personal medical records, wellness apps, and long-term health data to generate more personalized guidance, explanations, and insights. The promise is familiar and intuitively appealing. Doctor visits are short. Records are fragmented. Long gaps between appointments leave patients wanting to feel informed rather than passive. Into that space steps a conversational system offering continuous synthesis, reassurance, and pattern recognition at any hour. It presents itself as improvement and empowerment, yet it does so by asking patients to trade agency, control, accountability, and sovereignty for convenience.

This is a terrible idea, and it is terrible for reasons that have nothing to do with whether people ask health questions or whether the healthcare system is failing them.

Connecting longitudinal medical records to a probabilistic language model collapses aggregation, interpretation, and influence into a single system that cannot be held clinically, legally, or ethically accountable for the narratives it produces. Once that boundary is crossed, the risk becomes persistent, compounding, and largely invisible to the person whose data is being interpreted, and the results will be dire.

Medical records are not neutral inputs. They are identity-defining artifacts that shape access to care, insurance outcomes, employment decisions, and legal standing. Anyone who has worked inside healthcare systems understands that these records are often fragmented, duplicated, outdated, or simply wrong. Errors persist for years. Corrections are slow. Context is frequently missing. When those imperfections remain distributed across systems, the damage is contained. Once they are aggregated into a single interpretive layer, however, errors stop behaving as isolated inaccuracies and begin to shape enduring narratives about a person’s body, behavior, and risk profile.

Language models do not reason the way medicine reasons. They do not weigh uncertainty with caution, surface ambiguity as a first-class signal, or slow down when evidence conflicts. They produce fluent synthesis. That fluency reads as confidence, and confidence is precisely what medical practice treats carefully because it can crowd out questioning, second opinions, and clinical judgment. When such a synthesis is grounded in sensitive personal data, even minor errors cease to be informational. They become formative.

The repeated assurance that these systems are meant to support rather than replace medical care does not hold up under scrutiny. The moment a tool reframes symptoms, highlights trends, normalizes interpretations, or influences how someone prepares for or delays a medical visit, it is already shaping care pathways. That influence does not require diagnosis or prescription. It only requires trust, repetition, and perceived authority. Disclaimers do not meaningfully constrain that effect. Only enforceable architectural boundaries do, and those boundaries are absent.

Medicare is already moving in this direction, and that should give us pause. Algorithmic systems are increasingly used to assess coverage eligibility, utilization thresholds, and the medical necessity of procedures for elderly patients, often with limited transparency and constrained avenues for appeal. When these systems mediate access to care, they do not feel like decision support to the patient. They feel like authority. A recommendation becomes a gate. An inference becomes a delay or a denial. The individual rarely knows how the conclusion was reached, what data shaped it, or how to meaningfully challenge it. When AI interpretation is embedded into healthcare infrastructure without enforceable accountability, it quietly displaces human judgment while preserving the appearance of neutrality, and the people most affected are those with the least power to contest it.

What is missing most conspicuously is patient sovereignty at the data level. There is no object-level consent that limits use to a declared purpose. There is no lifecycle control that allows a patient to revoke access or correct errors in a way that propagates forward. There is no clear separation between information used transiently to answer a question and inference artifacts that may be retained, recombined, or learned from over time. Without those controls, the system recreates the worst failures of modern health IT while accelerating their impact through conversational authority.

The argument that people already seek health advice through AI misunderstands responsibility. Normalized behavior is not a justification for institutionalizing risk. People have always searched for symptoms online, yet that reality never warranted centralizing full medical histories into a single interpretive layer that speaks with personalized authority. Turning coping behavior into infrastructure without safeguards does not empower patients. It exposes them.

If the goal is to help individuals engage more actively in their health, the work must start with agency rather than intelligence. Patients need enforceable control over how their data is accessed, for what purpose, for how long, and with what guarantees around correction, provenance, and revocation. They need systems that preserve uncertainty rather than smoothing it away, and that prevent the silent accumulation of interpretive power.

Health data does not need to be smarter. It needs to remain governable by the person it represents. Until that principle is embedded at the architectural level, connecting medical records to probabilistic conversational systems is not progress. It is a failure to absorb decades of hard lessons about trust, error, and the irreversible consequences of speaking with authority where none can be justified.

If systems like this are going to exist at all, they must be built on a very different foundation. Patient agency cannot be an interface preference. It has to be enforced at the data level. Individuals must be able to control how their medical data is accessed, for what purpose, for how long, and with what guarantees around correction, revocation, and downstream use. Consent cannot be implied or perpetual. It must be explicit, contextual, and technically enforceable.

Data ownership and sovereignty are not philosophical positions in healthcare. They are safety requirements. Medical information must carry its provenance, its usage constraints, and its lifecycle rules with it, so that interpretation does not silently outlive permission. Traceability must extend not only to the source of the data, but to the inferences drawn from it, making it possible to understand how conclusions were reached and what inputs shaped them.
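As a loose illustration of what "data carrying its lifecycle rules with it" could mean in practice, here is a minimal sketch of a record wrapper that refuses to release its payload once consent has been revoked or expired, or when the requested purpose does not match the one the patient declared. Every name and field here is hypothetical, invented for illustration; this is not any vendor's API or a complete design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentBoundRecord:
    """A health record that carries its own usage constraints.

    Hypothetical sketch: purpose binding, expiry, revocation, and an
    access log travel with the data rather than living in a separate,
    easily bypassed policy store.
    """
    payload: dict            # the medical data itself
    provenance: str          # where the record came from
    allowed_purpose: str     # the single declared purpose of use
    expires_at: datetime     # consent lifetime
    revoked: bool = False
    access_log: list = field(default_factory=list)

    def revoke(self) -> None:
        """Patient-initiated revocation: all future reads fail."""
        self.revoked = True

    def read(self, purpose: str, now: Optional[datetime] = None) -> dict:
        """Release the payload only while consent is valid for this purpose."""
        now = now or datetime.now(timezone.utc)
        if self.revoked:
            raise PermissionError("consent revoked")
        if now >= self.expires_at:
            raise PermissionError("consent expired")
        if purpose != self.allowed_purpose:
            raise PermissionError(f"not consented for purpose: {purpose}")
        self.access_log.append((now, purpose))  # traceability of every use
        return self.payload
```

Of course, a wrapper like this only constrains consumers who choose to honor it; real enforcement would have to happen cryptographically or at the infrastructure layer, which is precisely the essay's point about architectural rather than policy guarantees.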

AI can have a role in medicine, but only when its use is managed, bounded, and accountable. That means clear separation between transient assistance and retained interpretation, between explanation and decision-making, and between support and authority. It means designing systems that preserve uncertainty rather than smoothing it away, and that prevent the accumulation of silent power through repetition and scale.

If companies building large AI systems are serious about improving healthcare, they should not be racing to aggregate more data or expand interpretive reach. They should engage with architectures and technologies that already prioritize enforceable consent, data-level governance, provenance, and patient-controlled use. Without those foundations, intelligence becomes the least important part of the system.

Health data does not need to be centralized to be helpful. It needs to remain governable by the person it represents. Until that principle is treated as a design requirement rather than a policy aspiration, tools like this will continue to promise empowerment while quietly eroding the very agency they claim to support.

Article link: https://www.linkedin.com/pulse/chatgpt-health-terrible-idea-katalin-bártfai-walcott-dchzc?

Choose the human path for AI – MIT Sloan

Posted by timmreardon on 01/09/2026
Posted in: Uncategorized.


By Richard M. Locke

Dec 16, 2025

Why It Matters

To realize the greatest gains from artificial intelligence, we must make the future of work more human, not less.

Americans today are ambivalent about AI. Many see opportunity: Sixty-two percent of respondents to a recent Gallup survey believe it will increase productivity. Just over half (53%) believe it will lead to economic growth. Still, 61% think it will destroy more jobs than it will create. And nearly half (47%) think it will destroy more businesses than it will create.

These are real concerns from an anxious workforce, voiced in a time of great economic uncertainty. There is a diffuse sense of resignation, a presumption that we are building AI that automates work and replaces workers. Yet the outcome of this era of technological advancement is not yet determined. This is a pivotal moment, with enormous consequences for the workforce, for organizations, and for humanity. As the latest generation of artificial intelligence leaves its nascent phase, we are confronted with a choice about which path to take. Will we deploy AI to eliminate millions of jobs across the economy, or will we use this innovative technology to empower the workforce and make the most of our human capabilities?

I believe that we can work to invent a future where artificial intelligence extends what humans can do to improve organizations and the world.

A new choice with prescient antecedents

As the postwar boom expanded the workforce in the 1950s, organizations were confronted with a choice about how to most effectively motivate employees. To guide that choice, MIT Sloan professor Douglas McGregor developed Theory X and Theory Y. The twin theories describe opposing assumptions about why people work and how they should be managed. Theory X assumes that workers are inherently unmotivated, leading to a management style based on top-down compliance and a carrot-and-stick approach to rewards and punishments. Theory Y presumes that employees are intrinsically motivated to do their best work and contribute to their organizations, leading to a management style that empowers workers and cultivates greater motivation.

Centering human capabilities: MIT Sloan research and teaching on AI and workforce development

At MIT Sloan, our mission, in part, is to “develop principled, innovative leaders who improve the world.” What does this charge mean when we choose the path of machines in service of minds?

Work from MIT and MIT Sloan researchers helps to answer this question. Our faculty is examining artificial intelligence implementation from many perspectives. 

For example, MIT economist David Autor and MIT Sloan principal research scientist Neil Thompson show that automation affects different roles in different ways, depending on which job tasks are automated. When technology automates a role’s inexpert tasks, the role becomes more specialized and more highly paid, but also harder to enter. When a role’s expert tasks are automated, by contrast, it becomes easier to enter but offers lower pay. With this insight, managers can analyze how roles in their organizations will change and make productive decisions about upskilling and human resource management that take full advantage of the human capabilities of their workforces.

With attention to workplace dynamics, MIT Sloan professor Kate Kellogg and colleagues have examined why the practice of having junior employees train senior staff members on AI tools is flawed. The recommendation: Leaders must focus on system design and on firm-level rather than project-level interventions for AI implementation.

In AI Executive Academy, a program offered by MIT Sloan Executive Education, professors Eric So and Sertac Karaman lead attendees through an exploration of the technical aspects and business implications of AI implementation. The course is a collaboration between MIT Sloan and the MIT Schwarzman College of Computing. So is also lead faculty for MIT Sloan’s Generative AI for Teaching and Learning resource hub, which catalogs tools for using AI in teaching.

McGregor’s work informed my research on supply chains in the 2000s, when firms were taking manufacturing to places with weak regulation and low wages in hopes of cutting production costs. Yet my research revealed that some supply chain factories were using techniques we teach at MIT about lean manufacturing, inventory management, logistics, and modern personnel management. These factories ran more efficient and higher-quality operations, which gave them higher margins, some of which they could invest in better working conditions and wages.

When an organization makes a choice like this, it pushes against prevailing wisdom about the limitations of the workforce. Instead, the firm employs innovations in both management theory and technology to expand the capabilities of its workforce, reaping rewards for itself and for its employees.

“Machines in service of minds”

Researchers at MIT today are urging us to make such a choice when steering the development of artificial intelligence. Sendhil Mullainathan, a behavioral economist, argues that questions like “What is the future of work?” frame the future in terms of prediction rather than choice. He argues that it is right now — as we build the technology stack for AI and as we redesign work to make use of this newly accessible technology — that we need to choose. Do we follow a path of automation that simply replaces some amount of work humans can already do, he asks, or do we choose a path that uses AI as (to borrow from Steve Jobs) a “bicycle for the mind”?

In his own work, Mullainathan has shown why we should choose the latter: With colleagues, he has developed an algorithm that can identify patients at high risk of sudden cardiac death. Until now, making such a determination with the data available to physicians has been nearly impossible. Rather than automating something doctors can already do, Mullainathan chose to create something new that doctors can use to better treat patients.

That type of choice sits at the center of “Power and Progress,” the 2023 book by MIT economists and Nobel laureates Daron Acemoglu and Simon Johnson that argues for recharting the course of technology so that it effects shared prosperity and complements the work of humans. Writing later with MIT economist David Autor, the pair argued that the direction of AI development is a choice. As they put it, leaders and educators must choose “a path of machines in service of minds.”

What does that mean in the context of the workforce and the workplace today? How do we create organizations and roles that travel this path?

Part of the answer lies in research from MIT Sloan professor Roberto Rigobon and postdoctoral researcher Isabella Loaiza. The pair conducted an analysis of 19,000 tasks across 950 job types, revealing the five capabilities where human workers shine and where AI faces limitations: Empathy, Presence, Opinion, Creativity, and Hope. Their EPOCH framework puts us on a path toward upskilling workers with a focus on what they call “the fundamental qualities of human nature.” Think of the doctors in Mullainathan’s work above. With AI, they can better predict which patients are at high risk of sudden cardiac death. And the doctors remain essential as decision makers and caregivers, using insights from AI to focus on better patient outcomes.

Researchers across MIT and MIT Sloan are examining the indispensable role of humans in the implementation of artificial intelligence across many other disciplines and industries, some of which are detailed in the sidebar.

Teaching our students, ourselves, and the world

At MIT Sloan, centering human capabilities in the implementation of AI means that we must all be fluent with these new tools. It means educating not just our students but also our faculty and staff members. We must create a foundation we can build upon so we can all do better work in finance, marketing, strategy, and operations, and throughout organizations. Here are three ways we have begun:

  • In Generative AI Lab, one of MIT Sloan’s hands-on action learning labs, teams of students are paired with organizations to employ artificial intelligence in solving real-world business problems.
  • This past summer, we formed a committee of faculty members who are already planning how to weave AI throughout the curriculum, with a focus on training students in ethical and people-focused implementation of the technology.
  • At MIT Open Learning, MIT Sloan associate dean Dimitris Bertsimas and his team have developed Universal AI, an online learning experience consisting of modules that teach the fundamentals of AI in a practical application context. The pilot of this offering was recently rolled out to a wide-ranging group of organizations — including MIT students, faculty, and staff members — so they can learn more about AI and its applications and, most importantly, provide feedback. This will allow us to go beyond educating just ourselves and our students. We will shape an offering that can scale much further and help us to collectively choose a path that is informed by the MIT research I’ve described above. Universal AI will be available to learners, educators, and all types of organizations around the world in 2026.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/choose-human-path-ai?

Why AI predictions are so hard – MIT Technology Review

Posted by timmreardon on 01/07/2026
Posted in: Uncategorized.


And why we’re predicting what’s next for the technology in 2026 anyway. 

By James O’Donnell

January 6, 2026

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Sometimes AI feels like a niche topic to write about, but then the holidays happen, and I hear relatives of all ages talking about cases of chatbot-induced psychosis, blaming rising electricity prices on data centers, and asking whether kids should have unfettered access to AI. It’s everywhere, in other words. And people are alarmed.

Inevitably, these conversations take a turn: AI is having all these ripple effects now, but if the technology gets better, what happens next? That’s usually when they look at me, expecting a forecast of either doom or hope. 

I probably disappoint, if only because predictions for AI are getting harder and harder to make. 

Despite that, MIT Technology Review has, I must say, a pretty excellent track record of making sense of where AI is headed. We’ve just published a sharp list of predictions for what’s next in 2026 (where you can read my thoughts on the legal battles surrounding AI), and the predictions on last year’s list all came to fruition. But every holiday season, it gets harder and harder to work out the impact AI will have. That’s mostly because of three big unanswered questions.

For one, we don’t know if large language models will continue getting incrementally smarter in the near future. Since this particular technology is what underpins nearly all the excitement and anxiety in AI right now, powering everything from AI companions to customer service agents, its slowdown would be a pretty huge deal. Such a big deal, in fact, that we devoted a whole slate of stories in December to what a new post-AI-hype era might look like. 

Number two, AI is pretty abysmally unpopular among the general public. Here’s just one example: Nearly a year ago, OpenAI’s Sam Altman stood next to President Trump to excitedly announce a $500 billion project to build data centers across the US in order to train larger and larger AI models. The pair either did not guess or did not care that many Americans would staunchly oppose having such data centers built in their communities. A year later, Big Tech is waging an uphill battle to win over public opinion and keep on building. Can it win? 

The response from lawmakers to all this frustration is terribly confused. Trump has pleased Big Tech CEOs by moving to make AI regulation a federal rather than a state issue, and tech companies are now hoping to codify this into law. But the crowd that wants to protect kids from chatbots ranges from progressive lawmakers in California to the increasingly Trump-aligned Federal Trade Commission, each with distinct motives and approaches. Will they be able to put aside their differences and rein AI firms in? 

If the gloomy holiday dinner table conversation gets this far, someone will say: Hey, isn’t AI being used for objectively good things? Making people healthier, unearthing scientific discoveries, better understanding climate change?

Well, sort of. Machine learning, an older form of AI, has long been used in all sorts of scientific research. One branch, called deep learning, forms part of AlphaFold, a Nobel Prize–winning tool for protein prediction that has transformed biology. Image recognition models are getting better at identifying cancerous cells. 

But the track record for chatbots built atop newer large language models is more modest. Technologies like ChatGPT are quite good at analyzing large swathes of research to summarize what’s already been discovered. But some high-profile reports that these sorts of AI models had made a genuine discovery, like solving a previously unsolved mathematics problem, were bogus. They can assist doctors with diagnoses, but they can also encourage people to diagnose their own health problems without consulting doctors, sometimes with disastrous results. 

This time next year, we’ll probably have better answers to my family’s questions, and we’ll have a bunch of entirely new questions too. In the meantime, be sure to read our full piece forecasting what will happen this year, featuring predictions from the whole AI team.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2026/01/06/1130707/why-ai-predictions-are-so-hard/amp/

Will AI make us crazy? – Bulletin of the Atomic Scientists

Posted by timmreardon on 01/04/2026
Posted in: Uncategorized.

By Dawn Stover | September 11, 2023

Critics of artificial intelligence, and even some of its biggest fans, have recently issued urgent warnings that a malevolently misaligned AI system could overpower and destroy humanity. But that isn’t what keeps Jaron Lanier, the “godfather of virtual reality,” up at night.

In a March interview with The Guardian, Lanier said that the real danger of artificial intelligence is that humans will “use our technology to become mutually unintelligible.” Lacking the understanding and self-interest necessary for survival, humans will “die through insanity, essentially,” Lanier warned (Hattenstone 2023).

Social media and excessive screen time are already being blamed for an epidemic of anxiety, depression, suicide, and mental illness among America’s youth. Chatbots and other AI tools and applications are expected to take online engagement to even greater levels.

But it isn’t just young people whose mental health may be threatened by chatbots. Adults too are increasingly relying on artificial intelligence for help with a wide range of daily tasks and social interactions, even though experts—including AI creators—have warned that chatbots are not only prone to errors but also “hallucinations.” In other words, chatbots make stuff up. That makes it difficult for their human users to tell fact from fiction.

While researchers, reporters, and policy makers are focusing a tremendous amount of attention on AI safety and ethics, there has been relatively little examination of—or hand-wringing over—the ways in which an increasing reliance on chatbots may come at the expense of humans using their own mental faculties and creativity.

To the extent that mental health experts are interested in AI, it’s mostly as a tool for identifying and treating mental health issues. Few in the healthcare or technology industries—Lanier being a notable exception—are thinking about whether chatbots could drive humans crazy.

A mental health crisis

Mental illness has been rising in the United States for at least a generation.

A 2021 survey by the Substance Abuse and Mental Health Services Administration found that 5.5 percent of adults aged 18 or older—more than 14 million people—had serious mental illness in the past year (SAMHSA 2021). Among young adults aged 18 to 25, the rate was even higher: 11.4 percent.

Major depressive episodes are now common among adolescents aged 12 to 17. More than 20 percent had a major depressive episode in 2021 (SAMHSA 2021).

According to the Centers for Disease Control and Prevention, suicide rates increased by about 36 percent between 2000 and 2021 (CDC 2023). More than 48,000 Americans took their own lives in 2021, or about one suicide every 11 minutes. “The number of people who think about or attempt suicide is even higher,” the CDC reports. “In 2021, an estimated 12.3 million American adults seriously thought about suicide, 3.5 million planned a suicide attempt, and 1.7 million attempted suicide.”

Suicide is the 11th leading cause of death in the United States for people of all ages. For those aged 10 to 34, it is the second leading cause of death (McPhillips 2023).

Emergency room visits for young people in mental distress have soared, and in 2019 the American Academy of Pediatrics reported that “mental health disorders have surpassed physical conditions as the most common reasons children have impairments and limitations” (Green et al. 2019).

Many experts have pointed to smartphones and online life as key factors in mental illness, particularly among young people. In May, the US Surgeon General issued a 19-page advisory warning that “while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents” (Surgeon General, 2023).

A study of adolescents aged 12 to 15 found that those who spent more than three hours per day on social media faced “double the risk of experiencing poor mental health outcomes including symptoms of depression and anxiety.” Most adolescents report using social media, and at least a third say they do so “almost constantly” (Surgeon General 2023).

Although the Surgeon General did not mention chatbots, tools based on generative artificial intelligence are already being used on social-media platforms. In a recent letter published in the journal Nature, David Greenfield of the Center for Internet & Technology Addiction and Shivan Bhavnani of the Global Institute of Mental & Brain Health Investment noted that these AI tools “stand to boost learning through gamification and highlighting personalized content, for example. But they could also compound the negative effects of social media on mental health in susceptible individuals. User guidelines and regulations must factor in these strong negative risks” (Greenfield and Bhavnani 2023).

Chatbots can learn a user’s interests and emotional states, wrote Greenfield and Bhavnani, “which could enable social media to target vulnerable users through pseudo-personalization and by mimicking real-time behaviour.” For example, a chatbot could recommend a video featuring avatars of trusted friends and family endorsing an unhealthy diet, which could put the user at risk of poor nutrition or an eating disorder. “Such potent personalized content risks making generative-AI-based social media particularly addictive, leading to anxiety, depression and sleep disorders by displacement of exercise, sleep and real-time socialization” (Greenfield and Bhavnani 2023).


Many young people see no problem with artificial intelligence generating content that keeps them glued to their screens. In June, Chris Murphy, a US senator from Connecticut who is sponsoring a bill that would ban social media’s use of algorithmic boosting for teens, tweeted about a “recent chilling conversation with a group of teenagers.” Murphy told the teens that his bill might mean that kids “have to work a little harder to find relevant content. They were concerned by this. They strongly defended the TikTok/YouTube algorithms as essential to their lives” (Murphy 2023).

Murphy was alarmed that the teens “saw no value in the exercise of exploration. They were perfectly content having a machine spoon-feed them information, entertainment and connection.” Murphy recalled that as the conversation broke up, a teacher whispered to him, “These kids don’t realize how addicted they are. It’s scary.”

“It’s not just that kids are withdrawing from real life into their screens,” Murphy wrote. They’re also missing out on childhood’s rituals of discovery, which are being replaced by algorithms.

Rise of the chatbots

Generative AI has exploded in the past year. Today’s chatbots are far more powerful than digital assistants like Siri and Alexa, and they have quickly become some of the most popular tech applications of all time. Within two months of its release in November 2022, OpenAI’s ChatGPT already had an estimated 100 million users. ChatGPT’s growth began slowing in May, but Google’s Bard and Microsoft’s Bing are picking up speed, and a number of other companies are also introducing chatbots.

A chatbot is an application that mimics human conversation or writing and typically interacts with users online. Some chatbots are designed for specific tasks, while others are intended to chat with humans on a broad range of subjects.

Like the teacher Murphy spoke with, many observers have used the word “addictive” to describe chatbots and other interactive applications. A recent study that examined the transcripts of in-depth interviews with 14 users of an AI companion chatbot called Replika reported that “under conditions of distress and lack of human companionship, individuals can develop an attachment to social chatbots if they perceive the chatbots’ responses to offer emotional support, encouragement, and psychological security. These findings suggest that social chatbots can be used for mental health and therapeutic purposes but have the potential to cause addiction and harm real-life intimate relationships” (Xie and Pentina 2022).

In parallel with the spread of chatbots, fears about AI have grown rapidly. At one extreme, some tech leaders and experts worry that AI could become an existential threat on a par with nuclear war and pandemics. Media coverage has also focused heavily on how AI will affect jobs and education.

For example, teachers are fretting over whether students might use chatbots to write papers that are essentially plagiarized, and some students have already been wrongly accused of doing just that. In May, a Texas A&M University professor handed out failing grades to an entire class when ChatGPT—used incorrectly—claimed to have written every essay that his students turned in. And at the University of California, Davis, a student was forced to defend herself when her paper was falsely flagged as AI-written by plagiarism-checking software (Klee 2023).

Independent philosopher Robert Hanna says cheating isn’t the main problem chatbots pose for education. Hanna’s worry is that students “are now simply refusing—and will increasingly refuse in the foreseeable future—to think and write for themselves.” Turning tasks like thinking and writing over to chatbots is like taking drugs to be happy instead of achieving happiness by doing “hard” things yourself, Hanna says.

Can chatbots be trusted?

Ultimately, the refusal to think for oneself could cause cognitive impairment. If future humans no longer need to acquire knowledge or express thoughts, they might ultimately find it impossible to understand one another. That’s the sort of “insanity” Lanier spoke of.

The risk of unintelligibility is heightened by the tendency of chatbots to give occasional answers that are inaccurate or fictitious. Chatbots are trained by “scraping” enormous amounts of content from the internet—some of it taken from sources like news articles and Wikipedia entries that have been edited and updated by humans, but much of it collected from other sources that are less reliable and trustworthy. This data, which is selected more for quantity than for quality, enables chatbots to generate intelligent-sounding responses based on mathematical probabilities of how words are typically strung together.

In other words, chatbots are designed to produce text that sounds like something a human would say or write. But even when chatbots are trained with accurate information, they still sometimes make inexplicable errors or put words together in a way that sounds accurate but isn’t. And because the user typically can’t tell where the chatbot got its information, it’s difficult to check for accuracy.
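The word-probability mechanism described above can be sketched with a toy bigram model. This is a deliberate simplification (production chatbots use neural networks over subword tokens, not word-count tables), but it shows how plausible-sounding text can emerge from nothing more than frequency statistics, with no notion of truth anywhere in the system:

```python
import random
from collections import defaultdict

# Tiny "training corpus"; a real model is trained on vast scraped text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which words follow which (a bigram model): the probabilities
# come purely from how words were strung together in the training data.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        words.append(random.choice(options))  # sample by observed frequency
    return " ".join(words)

print(generate("the"))
```

Nothing in this loop checks whether the output is accurate, which is why fluency and correctness can come apart so easily.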

Chatbots generally provide reliable information, though, so users may come to trust them more than they should. Children may be less likely than adults to realize when chatbots are giving incorrect or unsafe answers.

When they do share incorrect information, chatbots sound completely confident in their answers. And because they don’t have facial expressions or other human giveaways, it’s impossible to tell when a chatbot is BS-ing you.


AI developers have warned the public about these limitations. For instance, OpenAI acknowledges that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” This problem is difficult to fix, because chatbots are not trained to distinguish truth from lies, and training a chatbot to make it more cautious in its answers would also make it more likely to decline to answer (OpenAI undated).

Tech developers euphemistically refer to chatbot falsehoods as “hallucinations.” For example, all three of the leading chatbots (ChatGPT, Bard, and Bing) repeatedly gave detailed but inaccurate answers to a question about when The New York Times first reported on artificial intelligence. “Though false, the answers seemed plausible as they blurred and conflated people, events and ideas,” the newspaper reported (Weise and Metz 2023).

AI developers do not understand why chatbots sometimes make up names, dates, historical events, answers to simple math problems, and other definitive-sounding answers that are inaccurate and not based on training data. They hope to eliminate these hallucinations over time by, ironically, relying on humans to fine-tune chatbots in a process called “reinforcement learning with human feedback.”

But as humans come to rely more and more on tuned-up chatbots, the answers generated by these systems may begin to crowd out legacy information created by humans, including the original content that was used to train chatbots. Already, many Americans cannot agree on basic facts, and some are ready to kill each other over these differences. Add artificial intelligence to that toxic stew—with its ability to create fake videos and narratives that seem more realistic than ever before—and it may eventually become impossible for humans to sort fact from fiction, which could prove maddening. Literally.

It may also become increasingly difficult to tell the difference between humans and chatbots in the online world. There are currently no tools that can reliably distinguish between human-generated and AI-generated content, and distinctions between humans and chatbots are likely to become further blurred with the continued development of emotion AI—a subset of artificial intelligence that detects, interprets, and responds to human emotions. A chatbot with these capabilities could read users’ facial expressions and voice inflections, for example, and adjust its own behavior accordingly.

Emotion AI could prove especially useful for treating mental illness. But even garden-variety AI is already creating a lot of excitement among mental health professionals and tech companies.

The chatbot will see you now

Googling “artificial intelligence” plus “mental health” yields a host of results about AI’s promising future for researching and treating mental health issues. Leaving aside Google’s obvious bias toward AI, healthcare researchers and providers mostly view artificial intelligence as a boon to mental health, rather than a threat.

Using chatbots as therapists is not a new idea. MIT computer scientist Joseph Weizenbaum created the first digital therapist, Eliza, in 1966. He built it as a spoof and was alarmed when people enthusiastically embraced it. “His own secretary asked him to leave the room so that she could spend time alone with Eliza,” The New Yorker reported earlier this year (Khullar 2023).

Millions of people already use the customizable “AI companion” Replika or other chatbots that are intended to provide conversation and comfort. Tech startups focused on mental health have secured more venture capital in recent years than apps for any other medical issue.

Chatbots have some advantages over human therapists. Chatbots are good at analyzing patient data, which means they may be able to flag patterns or risk factors that humans might miss. For example, a Vanderbilt University study that combined a machine-learning algorithm with face-to-face screening found that the combined system did a better job at predicting suicide attempts and suicidal thoughts in adult patients at a major hospital than face-to-face screening alone (Wilimitis, Turer, and Ripperger 2022).

Some people feel more comfortable talking with chatbots than with doctors. Chatbots can see a virtually unlimited number of clients, are available to talk at any hour, and are more affordable than seeing a medical professional. They can provide frequent monitoring and encouragement—for example, reminding a patient to take their medication.

However, chatbot therapy is not without risks. What if a chatbot “hallucinates” and gives a patient bad medical information or advice? What if users who need professional help seek out chatbots that are not trained for that?

That’s what happened to a Belgian man named Pierre, who was depressed and anxious about climate change. As reported by the newspaper La Libre, Pierre used an app called Chai to get relief from his worries. Over the six weeks that Pierre texted with one of Chai’s chatbot characters, named Eliza, their conversations became increasingly disturbing and turned to suicide. Pierre’s wife believes he would not have taken his life without encouragement from Eliza (Xiang 2023).

Although Chai was not designed for mental health therapy, people are using it as a sounding board to discuss problems such as loneliness, eating disorders, and insomnia (Chai Research undated). The startup company that built the app predicts that “in two years’ time 50 percent of people will have an AI best friend.”

References

Centers for Disease Control (CDC). 2023. “Facts About Suicide,” last reviewed May 8. https://www.cdc.gov/suicide/facts/index.html

Chai Research. Undated. “Chai Research: Building the Platform for AI Friendship.” https://www.chai-research.com/

Green, C. M., J. M. Foy, M. F. Earls, Committee on Psychosocial Aspects of Child and Family Health, Mental Health Leadership Work Group, A. Lavin, G. L. Askew, R. Baum et al. 2019. Achieving the Pediatric Mental Health Competencies. American Academy of Pediatrics Technical Report, November 1. https://publications.aap.org/pediatrics/article/144/5/e20192758/38253/Achieving-the-Pediatric-Mental-Health-Competencies

Greenfield, D. and S. Bhavnani. 2023. “Social media: generative AI could harm mental health.” Nature, May 23. https://www.nature.com/articles/d41586-023-01693-8

Hanna, R. 2023. “Addicted to Chatbots: ChatGPT as Substance D.” Medium, July 10. https://bobhannahbob1.medium.com/addicted-to-chatbots-chatgpt-as-substance-d-3b3da01b84fb

Hattenstone, T. 2023. “Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane.’” The Guardian, March 23. https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane

Khullar, D. 2023. “Can A.I. Treat Mental Illness?” The New Yorker, February 27. https://www.newyorker.com/magazine/2023/03/06/can-ai-treat-mental-illness

Klee, M. 2023. “She Was Falsely Accused of Cheating with AI—And She Won’t Be the Last.” Rolling Stone, June 6. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/

McPhillips, D. 2023. “Suicide rises to 11th leading cause of death in the US in 2021, reversing two years of decline.” CNN, April 13.

Murphy, C. 2023. Twitter thread, June 2. https://twitter.com/ChrisMurphyCT/status/1664641521914634242

OpenAI. Undated. “Introducing ChatGPT.”

Substance Abuse and Mental Health Services Administration (SAMHSA). 2022. Key substance use and mental health indicators in the United States: Results from the 2021 National Survey on Drug Use and Health. Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration. https://www.samhsa.gov/data/report/2021-nsduh-annual-national-report

Surgeon General. 2023. Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory. https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf

Wilimitis, D., R. W. Turer, and M. Ripperger. 2022. “Integration of Face-to-Face Screening with Real-Time Machine Learning to Predict Risk of Suicide Among Adults.” JAMA Network Open, May 13. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2792289

Weise, K. and C. Metz. 2023. “When A.I. Chatbots Hallucinate.” The New York Times, May 1. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html

Xiang, C. 2023. “‘He Would Still Be Here’: Man Dies by Suicide After Talking With AI Chatbot, Wife Says.” Motherboard, March 30. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

Xie, T. and I. Pentina. 2022. “Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika.” In: Proceedings of the 55th Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/items/5b6ed7af-78c8-49a3-bed2-bf8be1c9e465

Article link: https://thebulletin.org/premium/2023-09/will-ai-make-us-crazy/?

Decisions about AI will last decades. Researchers need better frameworks – Bulletin of the Atomic Scientists

Posted by timmreardon on 12/29/2025
Posted in: Uncategorized.

By Abi Olvera | December 10, 2025

Data center-related spending is likely to have accounted for nearly half of the United States’ GDP growth in the first six months of this year (Thompson 2025). Additionally, tech companies are projected to spend $400 billion this year on AI-related infrastructure. See Figure 1.

Figure courtesy of Bridgewater, August 2025.

Given the current scale of investment in AI, the world may be in a pre-bubble phase. Alternatively, the world could be at an inflection point where today’s decisions establish default settings that endure for generations.

Both can be true: AI may be in a bit of a bubble at the same time that it is reshaping vital parts of the economy. Even at a slower pace, the tech’s influence will continue to be transformative.

Regardless of whether artificial intelligence investment, growth, and revenues continue at their current exponential rates or at a pace slowed by a recalibration or shock, AI has already been widely adopted. Companies worldwide are using various kinds of generative artificial intelligence applications, from large language models that can communicate in natural language to bio-design tools that can predict protein folding and generate new substances at the molecular level.

While the company and work-related usages may show up in the numbers for nationwide GDP or sector-specific productivity, artificial intelligence is also impacting societies in ways more difficult to measure. For example, a growing number of people use apps powered by large language models as assistants and trusted advisers in both their work and personal lives.

On a bigger scale, this difficulty in measuring impact means that no one is certain of how AI might ultimately influence the world. If artificial intelligence amplifies existing polarization or other failures in coordination, climate change and nuclear risks might be harder to solve. If the tech empowers more surveillance, totalitarian regimes could be locked into power. If AI causes consistent unemployment without creating new jobs—in the same way that some prior technologies have—then democracies face additional risks that come from social unrest due to economic insecurity. Loss of job opportunities could also spark deeper questions about how democracy functions when many people aren’t working or paying taxes, a shift that could weaken the citizen-government relationship.

This uncertainty creates an opening for organizations like the Bulletin of the Atomic Scientists, which has a long history of looking at global, neglected risks through different disciplines. No single field can accurately predict how this transformative technology will reshape society. By connecting AI researchers with economists, political scientists, complexity theorists, scientists, and historians, the Bulletin can continue in its tradition of providing the cross-disciplinary rigor needed to distinguish meaningful information from distractions, anticipate hidden risks, and balance both positive and negative aspects of this technology—at the exact moment when leaders are making foundational decisions.

Why our frameworks matter

Alfred Wegener, who proposed the theory of continental drift, wasn’t a geologist. He was a meteorologist who spent years battling geologists who doubted any force could be powerful enough to move such large landmasses—making him a prime example of how specialists from other domains are often needed to fill in blind spots, especially in evolving fields.

Experts studying AI and its impacts will likely suffer vulnerabilities similar to those of Wegener’s doubting geologists.

For example, many researchers who examine the relationship between jobs and artificial intelligence focus on elimination: They try to predict which jobs will disappear, how quickly that will happen, and how many people will be displaced. But these forecasts often miss a lesson from historians of technological innovation and diffusion: new technology frequently causes explosive growth in jobs. Sixty percent of today’s jobs didn’t even exist in 1940, and 85 percent of employment growth since then has been technology-driven (Goldman Sachs 2025). That’s because technologies create complementary jobs alongside the ones they automate. Commercial aviation largely replaced ocean liner travel but created jobs for pilots, flight attendants, air traffic controllers, airport staff, and an entire global tourism and hospitality industry. And it can be argued that washing machines and other household appliances freed up time that helped millions of women enter the workforce (Greenwood 2019).

Looking back, it’s clear that technology can create new domains of work. The rise of computers, for example, created more opportunities for data scientists, as statistical methods no longer required hours of manual calculation. This made statistical analysis more accessible and affordable across a wider range of sectors.

To be clear, some forms of automation can lead to job losses. Bank teller positions have declined 30 percent since 2010, likely because of the rise of mobile banking (Mousa 2025). That’s because when a technology is so efficient that a human is no longer needed, the drop in price for that service doesn’t generate enough new demand to save those jobs.

The impact of AI on the job market—less a simple subtraction problem than a complex, adaptive, evolving process—can already be seen. Workers between the ages of 22 and 25 in AI-exposed fields, such as software development and customer service, saw a 13 percent decline in employment, while roles for workers in less exposed fields stabilized or grew (Brynjolfsson, Chandar, and Chen 2025). However, the ramifications of even these changes are less clear. When digital computers were first rolled out, the humans (also called “computers”) who originally did the calculations moved into technical roles through other types of mathematical work.

However, if AI continues to improve and takes on a wider range of tasks—particularly long-term projects—traditional models of technology’s impacts on the labor market may become less directly applicable.

This uncertainty has split research into two broad camps. Some researchers focus on where AI could be in a few months, years, or decades; statements from some AI labs, such as Anthropic’s prediction of a “country of geniuses” within two years, contribute to an urgency around this perspective. Others prefer to focus on current models, uses, and applications, often with a lean toward pessimism.

Staying balanced to stay accurate

To get the full picture, people need to take note of both the impressive and the underwhelming aspects of AI, and likewise both its positive and negative outcomes. Understanding AI today means holding two ideas at once: The technology is both more capable than most people expected and less reliable than the hype suggests.

AI models have achieved what seemed impossible just a few years ago: writing coherent essays, generating working code, and solving complex problems in ways that keep beating expert predictions. Benchmarks track this progress, showing how much better AI has gotten, especially at technical tasks and mathematical reasoning.

These benchmark improvements, however, don’t translate directly to real-world reliability. Artificial intelligence tools are still not robust enough for companies to fully depend on. Even legal research tools that rely on AI summaries—a core strength of large language models—get these extractions wrong, hallucinating in an estimated 17 to 33 percent of cases (Magesh, Surani, Dahl, Suzgun, Manning, and Ho 2025). The gap between what current AI can do in tests and what it can dependably do in practice remains wide.

But there are upsides that receive less attention: Data centers now generate 38 percent of Virginia’s Loudoun County local tax revenue (Styer 2025). Surveys of Replika users found that three percent had suicidal ideation that the AI chatbot “halted” (Maples, Cerit, Vishwanath, and Pea 2024). A 2024 study in the journal Science found that talking to AI systems reduced belief in false conspiracy theories by 20 percent, with the effect persisting for over two months undiminished (Costello, Pennycook, and Rand 2024).

As such, reality is messier than the optimistic or pessimistic narratives suggest. But mainstream news coverage tilts heavily negative—because those stories spread faster and capture more attention.

This creates a policy problem. When regulators only see the downsides, they optimize for preventing visible harms rather than maximizing total welfare. Looking at the entire picture of risk is critical.

One example I think about often is that US aviation policy allows babies to fly on a parent’s lap without a separate seat. This decision means approximately one baby dies every 10 years due to occurrences such as turbulence when a parent cannot securely hold the infant. But the practice saves far more lives overall, if you weigh it against driving, where a minimum of 60 babies would die in those 10 years (Piper 2024). The lower cost of flying (no extra ticket) means more families can choose planes over cars.
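The lap-infant trade-off above reduces to simple expected-value arithmetic. A back-of-envelope version using the article’s own figures (illustrative only; the real FAA analysis is more involved):

```python
# Figures cited in the article, per 10-year period:
lap_infant_deaths_per_decade = 1    # in-flight deaths when lap travel is allowed
driving_deaths_per_decade = 60      # minimum deaths if priced-out families drove instead

# Net effect of the lap policy: the visible harm it permits minus the
# larger, invisible harm it avoids by keeping families off the road.
net_lives_saved = driving_deaths_per_decade - lap_infant_deaths_per_decade
print(net_lives_saved)  # 59 net infant lives saved per decade
```

The point of the calculation is the asymmetry of visibility: the one in-flight death makes headlines, while the 59 avoided road deaths never appear anywhere.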

The same logic applies to technology policy. Fear of nuclear power plants in the 1970s and ‘80s—especially concerns about accidents, nuclear weapons proliferation, and how to safely dispose of radioactive waste—prevented the industry from achieving the cost efficiencies later reached by solar and wind energy. Today, more than a billion people still live in energy poverty: lacking reliable electricity for refrigeration, lighting, or medical equipment (Min, O’Keeffe, Abidoye, Gaba, Monroe, Stewart, Baugh, Sánchez-Andrade, and Meddeb 2024). In high-income countries like the United States, coal plants kill far more people through air pollution than nuclear energy does (Walsh 2022). The visible risk of nuclear accidents dominated the policy conversation, perhaps with good reason. But the invisible costs of energy scarcity and pollution didn’t make headlines.

AI policy faces similar complexities. The research community’s job is to help policymakers see the full picture—not just the headlines. Some risks, such as overreliance on AI decision-making and lower social cohesion from higher unemployment, don’t tend to show up as newsworthy incidents. Other gradual but serious risks that rarely earn ink include declining public trust driven by worsening social and economic conditions, and gradual democratic backsliding as institutions weaken.

The gaps that matter most

To stay abreast of the rapid changes and headlines on AI, the Bulletin could focus on critical issues that determine artificial intelligence’s impact and trajectory. Here are the areas where outside expertise could provide crucial grounding:

  • Coordination problems. Humanity already knows how to solve many of its biggest challenges. Several countries have nearly eliminated road fatalities through infrastructure and city redesigns. The world has the technology to dramatically reduce carbon emissions. But coordination problems keep solutions from spreading. Understanding why coordination succeeds or fails could help design better frameworks for AI governance—and aid in recognizing where artificial intelligence might make existing coordination challenges harder or easier to solve.
  • Complexity theory and societal resilience. Societies have become more interconnected. There’s rich knowledge about what happens when complex systems come under stress—how elites capture resources; how coordination mechanisms break down; when small changes cascade into large disruptions. Complexity theorists and historians who study societal change could help forecast which societies are losing resilience, and which are at risk of disruptions from shocks such as those from AI. Complexity theory experts can also monitor AI developments that pose systemic risks versus those that create manageable disruptions.
  • Innovation diffusion patterns. Experts who study how new technologies spread through economies consistently find that adoption is slower and more uneven than early predictions suggest. These economists know which historical parallels are useful and which are misleading. They understand the institutional barriers that slow both beneficial and harmful applications of new technology.
  • Cybersecurity and biosecurity dynamics. In practice, do AI systems increase cybersecurity offense or defense? The cyber field offers real-time lessons that can help capture trends. Both high-level strategic analyses and granular technical insights are critical. How do authentication challenges, application programming interfaces (API) integration difficulties, and decision-ownership questions affect cybersecurity efforts? Understanding these practical bottlenecks could inform both security policy and broader predictions about the speed of AI adoption.
  • Biosecurity dynamics. How does AI change the risk of someone creating a bioweapon? Devising an effective policy requires understanding exactly which parts of the supply chain AI affects. Artificial intelligence can help with computational tasks like molecular design, but some biosecurity experts note it doesn’t do much for the hands-on laboratory work that’s often the real bottleneck. If they’re right, researchers might need to watch for advances that lower barriers to physical experimentation, not just computational design. Experts can’t know what to look for without systematic research from practitioners who see the actual process.
  • Democratic resilience. Political scientists who study the mechanisms behind stable or fragile democracies rarely contribute to AI policy discussions. But their insights matter enormously. Which institutions bend under pressure and which break? How do democratic societies maintain legitimacy during periods of lower levels of public trust? What early warning signs should policymakers watch for?
  • Media landscape dynamics. Beyond tracking misinformation supply, research is needed to understand the demand side: Why certain false narratives spread while others don’t. People filter information through existing trust networks and social identities. What determines who people trust? When do trust networks break down, and when do they hold firm? Why do some societies maintain higher levels of public trust in institutions while others see it erode? Experts on belief formation, media psychology, and historical patterns of institutional trust could help understand both when disinformation poses genuine threats and when other factors—like declining public trust itself—might be the deeper problem.
  • Global sentiment patterns. Why are some societies more excited about AI than others? China, for instance, isn’t as enthusiastic about AI as many Western observers assume. This matters because global sentiment affects many things from investment flows to regulatory approaches. Is optimism about technology connected to trust in government, social cohesion, or economic expectations? Understanding these patterns could help predict where AI governance will be more or less successful.

What the Bulletin could do 

Founded in 1945 by Albert Einstein and former Manhattan Project scientists immediately following the atomic bombings of Hiroshima and Nagasaki, the Bulletin of the Atomic Scientists has a tradition of hosting a broad range of talent, which can mitigate blind spots. As such, the organization could:

  • Connect different experts: Bring together AI researchers with innovation economists, cybersecurity experts with political scientists, and complexity theorists with historians of technology.
  • Apply nuclear-age lessons: International agreements often fail not because of technical problems but because of institutional and incentive misalignments. What are the kinds of global coordination mechanisms and tools, like privacy preserving technology or hardware-enabled security protections, that can help?
  • Stay empirically grounded: Test assumptions about AI’s impact against real-world evidence. When predictions prove wrong, investigate why. For example, AI-powered deepfakes did not upend the media industry as forecasted by some headlines. Demand for deepfakes did not increase when the supply of them did.

Why this matters now

Societies are making foundational decisions about AI governance, research priorities, and social adaptation at a moment when the basic institutions for handling emerging challenges are weak. International coordination on existential risks like nuclear proliferation or pandemic preparedness remains threadbare, despite decades of effort. The decisions leaders make today could create path dependencies—self-reinforcing defaults that become nearly impossible to reverse.

For example, companies today collect troves of Americans’ personal data with few limits. In the early 2000s, companies built business models around unrestricted data collection. Two decades later, that choice created trillion-dollar incumbents whose value depends on data collection, making meaningful privacy reform politically difficult in the United States.

But the other path dependencies that worry me are what will be accepted as normal. Societies have normalized living with nuclear weapons—accepting some baseline probability of catastrophic risk as just part of modern life. Meanwhile, pandemic risk sits around two percent per year (Penn 2021). Governments systematically underprepare for these global risks because the risks don’t feel tangible. Even when the impacts are tangible, a sense of normalcy clouds judgment. More than 1,200 children die from malaria every day (Medicines for Malaria Venture 2024). Despite this, the vaccine took 20 years to become available after the first promising trials, due to a lack of funding and urgency (Undark 2022).
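The roughly two percent annual pandemic probability cited above compounds over time in a way intuition tends to miss. A quick calculation, under the simplifying assumption that years are independent:

```python
def prob_at_least_one(annual_risk: float, years: int) -> float:
    """Probability of at least one event in `years` years, given a
    constant independent per-year probability `annual_risk`."""
    return 1 - (1 - annual_risk) ** years

# At ~2 percent per year, a large pandemic becomes more likely than not
# within a typical adult lifetime.
print(round(prob_at_least_one(0.02, 10), 2))   # ~0.18 over a decade
print(round(prob_at_least_one(0.02, 50), 2))   # ~0.64 over fifty years
```

A risk that feels negligible in any given year is close to a coin flip, and then better than one, over the horizons on which institutions are supposed to plan.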

AI might create similar forks in the road. Path dependency doesn’t require conspiracy or malice. It just requires inattention when defaults are being set.

Humans are in a crucial moment. Smart institutional design creates positive compounding effects—establishing cooperative frameworks that ease future agreements, flexible governance that can adapt over time, and research norms that promote accuracy and a more complete understanding.

That’s exactly the kind of long-term thinking the Bulletin was created to support. No one will have all the answers about AI, but noticing and expanding on the right questions determines the future humanity gets.

References

Brynjolfsson, E., Chandar, B., and Chen, R. 2025. “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence” August 26. Digital Economy. https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf

Costello, T. H., G. Pennycook, and D. G. Rand. 2024. “Durably reducing conspiracy beliefs through dialogues with AI.” Science, September 13. https://www.science.org/doi/10.1126/science.adq1814

Goldman Sachs. 2025. “How Will AI Affect the Global Workforce?” August 13. Goldman Sachs. https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce

Greenwood, J. 2019. “How the appliance boom moved more women into the workforce” January 30. Penn Today. https://penntoday.upenn.edu/news/how-appliance-boom-moved-more-women-workforce

Kiernan, K. “The Case of the Vanishing Teller: How Banking’s Entry Level Jobs Are Transforming” May 12. The Burning Glass Institute. https://www.burningglassinstitute.org/bginsights/the-case-of-the-vanishing-teller-how-bankings-entry-level-jobs-are-transforming

Magesh, V., F. Surani, M. Dahl, M. Suzgun, C. D. Manning, and D. E. Ho. 2025. “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools” March 14. Journal of Empirical Legal Studies. https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf

Maples, B., M. Cerit, A. Vishwanath, and R. Pea. 2024. “Loneliness and suicide mitigation for students using GPT3-enabled chatbots” January 22. Nature. https://www.nature.com/articles/s44184-023-00047-6

Medicines for Malaria Venture. 2024. “Malaria facts and statistics.” Medicines for Malaria Venture. https://www.mmv.org/malaria/about-malaria/malaria-facts-statistics

Min, B., Z. O’Keeffe, B. Abidoye, K. M. Gaba, T. Monroe, B. Stewart, K. Baugh, B. Sánchez-Andrade, and R. Meddeb. 2024. “Beyond access: 1.18 billion in energy poverty despite rising electricity access” June 12. UNDP. https://data.undp.org/blog/1-18-billion-around-the-world-in-energy-poverty#:~:text=In%20a%20newly%20released%20paper,2020%2C%20according%20to%20official%20data.

Mousa, D. 2025. “When automation means more human workers” October 7. Under Development. https://newsletter.deenamousa.com/p/when-more-automation-means-more-human

Penn, M. 2021. “Statistics Say Large Pandemics Are More Likely Than We Thought” August 23. Duke Global Health Institute. https://globalhealth.duke.edu/news/statistics-say-large-pandemics-are-more-likely-we-thought

Piper, K. 2024. “What the FAA gets right about airplane regulation” January 18. Vox. https://www.vox.com/future-perfect/24041640/federal-aviation-administration-air-travel-boeing-737-max-alaska-airlines-regulation

Styer, N. 2025. “County Staff to Push for Lower Data Center Taxes to Balance Revenues” July 10. Loudoun Now. https://www.loudounnow.com/news/county-staff-to-push-for-lower-data-center-taxes-to-balance-revenues/article_567df6c2-2179-4eba-9cb5-fc78e2938ccb.html

Thompson, D. 2025. “This Is How the AI Bubble Will Pop.” October 2. Derek Thompson. https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop

Undark. 2022. “It Took 35 years to Get a Malaria Vaccine. Why?” June 9. Undark. https://www.gavi.org/vaccineswork/it-took-35-years-get-malaria-vaccine-why

Walsh, B. 2022. “A needed nuclear option for climate change” July 13. Vox. https://www.vox.com/future-perfect/2022/7/12/23205691/germany-energy-crisis-nuclear-power-coal-climate-change-russia-ukraine

Article link: https://thebulletin.org/premium/2025-12/decisions-about-ai-will-last-decades-researchers-need-better-frameworks/?

Quantum computing reality check: What business needs to know now – MIT Sloan

Posted by timmreardon on 12/29/2025
Posted in: Uncategorized.


By Beth Stackpole

Dec 9, 2025

Four guidelines for advancing commercial quantum computing: 

  • Address the need for more quantum algorithms.
  • Don’t write off classical computing.
  • Switch to post-quantum cryptosystems now.
  • Accelerate the development of quantum error correction.

Quantum computing is having a moment as the pace of startup activity, innovation, and funding deals heats up.

Commercialized quantum computers and applications are a decade or more away, experts estimate. Yet it’s not too early for technology and business leaders to track quantum as it evolves from a novelty into a critical asset for solving industry’s and society’s toughest problems.

Quantum is early in its trajectory, considering that it took classical computing nearly a century to progress from the vacuum tubes of 1906 to the superchips powering AI and high-performance computing today, said William Oliver, director of the MIT Center for Quantum Engineering.

In a presentation for MIT Data Center Day, organized by the MIT Industrial Liaison Program, Oliver made the case that quantum computing is actively transitioning from a scientific curiosity to a technical reality — an indicator that it’s high time for organizations to dive in. 

“Advancing from discovery to useful machines takes time and engineering, and it’s not going to happen overnight,” said Oliver, a professor of physics and of electrical engineering and computer science at MIT. “But you’ve got to be in the game to play, and getting in the game is happening right now with quantum.”

Quantum defined

Quantum computing is a collection of sensing, networking, and processing technologies capable of performing functions that are either practically prohibitive or even impossible to accomplish with current techniques.

While a fully scaled quantum computer will outperform classical computers for certain problems, the technology is not a one-to-one replacement for classical computing.

4 guidelines for advancing quantum computing

Getting in the game requires an understanding of both the possibilities quantum brings and the reality of the current and future landscape.

In his talk, Oliver shared four critical observations that can help leaders size up the market and identify what’s necessary to accelerate quantum’s evolution.

  • Don’t write off classical computing. Despite their promise, quantum computers will not replace conventional computers. Quantum is aimed at mathematically complex use cases in areas like cryptanalysis, scientific computing (such as materials science and quantum chemistry), and optimization. “Quantum computers solve certain problems really well, but they’re not going to replace Microsoft Word,” Oliver said.
  • Address the need for more quantum algorithms. Many of the existing quantum algorithms have roots at MIT, including the hallmark algorithm developed by applied mathematics professor Peter Shor. Shor’s algorithm factors a large composite integer into its constituent prime numbers, with application to the cryptanalysis of today’s ubiquitous RSA-type public-key cryptosystems.

    More and different algorithms are also central to realizing what’s called commercial quantum advantage — the benchmark at which a quantum computer solves a commercially relevant problem better than a classical computer can.

    That is not yet a reality, Oliver said. “There are lots of problems I’m aware of today where I don’t know how to do it on a classical computer and I also don’t yet know how to do it on a quantum computer,” he said. “We need more people thinking about the application space and writing those algorithms.”
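Shor’s reduction from factoring to order finding can be rehearsed entirely classically — the exponential quantum speedup lies only in the order-finding step. A minimal sketch, with small illustrative numbers (the brute-force `order` loop stands in for the quantum subroutine):

```python
from math import gcd

def order(a, n):
    # Smallest r > 0 with a^r ≡ 1 (mod n). This is the step a quantum
    # computer accelerates exponentially; classically it is brute force.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a=2):
    # Shor's classical post-processing: turn the order r into factors of n.
    g = gcd(a, n)
    if g != 1:
        return g                      # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None                   # odd order: retry with another base a
    y = pow(a, r // 2, n)
    f = gcd(y - 1, n)
    return f if 1 < f < n else gcd(y + 1, n)

print(shor_classical(15))  # 15 = 3 * 5; prints the nontrivial factor 3
```

For RSA-scale moduli the `order` loop is hopeless, which is exactly the gap a large error-corrected quantum computer would close.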

Implement post-quantum cryptosystems now. While quantum computing — and Shor’s algorithm, specifically — has many people fretting over the potential for bad actors to use quantum to crack complex cryptographic systems, a cryptographically relevant machine is not yet available, Oliver said.

Breaking RSA public-key cryptosystems would require a very large error-corrected quantum computer with millions of qubits, the basic units of processing power at the heart of quantum computing. That milestone is still a ways away, Oliver said.

Nonetheless, this is not a time to be complacent. “We need to start now to switch over to new post-quantum cryptographic systems that we believe will be immune to attack by a future quantum computer,” Oliver said.

Industry should move quickly toward new cryptography standards outlined by the U.S. Department of Commerce’s National Institute of Standards and Technology that are designed to withstand attacks from a future quantum computer, he said. This would provide forward security.
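Why factoring breaks RSA is worth seeing concretely. A toy sketch with deliberately tiny, insecure primes (illustrative only): the private key follows directly from the factors of the public modulus.

```python
# Toy RSA with tiny primes -- illustrative only, never secure.
p, q = 61, 53
n, e = p * q, 17                      # public key: (n, e)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                   # private exponent (Python 3.8+ modular inverse)

msg = 42
ct = pow(msg, e, n)                   # encrypt
assert pow(ct, d, n) == msg           # decrypt with the private key

# An attacker who factors n = 3233 into 61 * 53 rebuilds the private key:
d_attacker = pow(e, -1, (61 - 1) * (53 - 1))
assert pow(ct, d_attacker, n) == msg  # ciphertext recovered without d
```

Post-quantum schemes such as those in NIST’s new standards rest on different hard problems, so a factoring machine confers no such shortcut.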

Accelerate the development of quantum error correction. One of the biggest obstacles to quantum’s success is the reliability of qubits. Today’s qubits are faulty, failing after roughly 1,000 to 10,000 operations — nowhere near the reliability required for the billions, even trillions, of operations necessary to reach commercial quantum advantage.

The key to addressing the gap is quantum error correction — an emerging technology that drives better reliability for large-scale quantum use. In late 2024, Google Quantum AI announced that it had reached a milestone, revealing that its Willow processor was the first quantum processor to have error-protected quantum information become exponentially more resilient as more qubits were added.

“We need quantum error correction to make this all possible,” Oliver said. “With the demonstration from Google last fall, we saw a major step in that direction.”
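The qubit numbers above imply the gap directly: if each operation fails independently with probability p, an uncorrected run of k operations succeeds with probability (1 − p)^k. A back-of-envelope check (p = 10⁻³ is an assumed, representative per-operation error rate):

```python
# Success probability of an uncorrected run: all k operations must
# succeed, each failing independently with probability p.
p = 1e-3                               # assumed per-operation error rate
for k in (1_000, 1_000_000, 1_000_000_000):
    print(f"{k:>13,} ops -> {(1 - p) ** k:.3g}")
```

At a thousand operations the run survives about a third of the time; at a billion the probability is effectively zero, which is why error correction, not just better qubits, is the prerequisite for commercial quantum advantage.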

William Oliver is the Henry Ellis Warren (1894) Professor of electrical engineering and computer science and a professor of physics at MIT. He serves as director of the Center for Quantum Engineering and as associate director of the Research Laboratory of Electronics, and he is a principal investigator in the Engineering Quantum Systems Group at MIT. 

Oliver’s research interests include the materials growth, fabrication, design, and measurement of superconducting qubits, as well as the development of cryogenic packaging and control electronics involving cryogenic CMOS and single-flux quantum digital logic.

The MIT Industrial Liaison Program is a membership-based program for large organizations interested in long-term, strategic relationships with MIT. The group engages with organizations from around the globe, in any sector, that are concerned with emerging research- and education-driven results that will be transformative. Executive leadership who would like to learn more about the MIT Industrial Liaison Program and its MIT Startup Exchange are invited to send an email with their name, title, organization name, and headquarters location.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/quantum-computing-reality-check-what-business-needs-to-know-now?

AI’s missing ingredient: Shared wisdom – MIT Sloan

Posted by timmreardon on 12/21/2025
Posted in: Uncategorized.


By Sandy Pentland

Nov 12, 2025

What you’ll learn: 

  • Technological innovation works best when it’s grounded in collective wisdom.
  • We are in the fourth wave of AI. Developments in the 1960s, 1980s, and 2000s exerted enormous effects on commerce, government, and society but did not create a new AI industry.
  • Lessons from the unintended consequences of those earlier AI waves can help us build a digital society that protects individual and community autonomy. 

In his new book, “Shared Wisdom: Cultural Evolution in the Age of AI,” Alex “Sandy” Pentland, a Stanford HAI fellow and the Toshiba Professor at MIT, argues that we should use what we know about human nature to design our technology, rather than allowing technology to shape our society.

In the following excerpt, which has been lightly edited and condensed, Pentland examines the effects of earlier artificial intelligence systems on society and explains how we can use technologies like digital media and AI to aid, rather than replace, our human capacity for deliberation. 


The field of AI has had several periods of intense interest and investment (“AI booms”) followed by disillusionment and lack of support (“AI winters”). Each cycle has lasted roughly 20 years, or one generation. 

The important thing to notice is that even though these earlier AI booms are typically viewed as failures because they did not create a big new AI industry, below the surface each AI advance actually exerted enormous effects on commerce, government, and society generally, but usually under a different label and as part of larger management and prediction systems.

AI in 1960: Logic and optimal resource allocation

The first AI systems built in the 1950s used logic and mathematics to solve well-defined problems like optimization and proofs. These systems excelled at calculating delivery routes, packing algorithms, and performing similar tasks, generating enormous excitement and saving companies significant money.
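The flavor of those early, well-defined optimization wins is easy to reproduce. A first-fit-decreasing bin-packing heuristic (illustrative data) is exactly the kind of task such systems handled well:

```python
def first_fit_decreasing(items, capacity):
    # Classic packing heuristic: place each item (largest first) into the
    # first bin with room, opening a new bin only when none fits.
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], 10))  # [[8, 2], [4, 4, 1, 1]]
```

The problem is fully specified and the objective is unambiguous — precisely the conditions that, as the next section notes, do not hold for an entire society.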

Unintended consequences: When these successful small-scale systems were applied to manage entire societies under “optimal resource allocation,” the results were disastrous. 

The Soviet Union adopted Leonid Kantorovich’s system to manage its economy, but despite earning him a Nobel Prize, the experiment failed catastrophically and contributed to the USSR’s eventual dissolution. 

The core problem wasn’t the AI itself but the inadequate models of society available — models that failed to capture complexity and dynamism and suffered from misinformation, bias, and lack of inclusion.

AI in 1980: Expert systems

Expert systems replaced the rigidity of logic with human-developed heuristics to automate tasks where specialists were too expensive or scarce. Banking emerged as a major application area, with automated loan systems replacing neighborhood credit managers to achieve consistency and reduce labor costs.

Unintended consequences: While automating loan decisions brought uniformity, it eliminated community-specific knowledge and reinforced existing biases while limiting inclusivity. More damaging was the hollowing out of communities themselves — loan officers disappeared, along with credit unions and cooperatives. Bank branches became little more than ATM locations. The concentration of data and financial capital led to more than half of community financial institutions vanishing over subsequent decades. 

Additionally, centralization created increasingly complex, expensive, and inflexible systems that benefited large bureaucracies and software companies while leaving citizens lost in incomprehensible rules. Between 1980 and 2014, the percentage of companies less than a year old dropped from 12.5% to 8%, likely contributing to slower economic growth and rising inequality.

AI in the 2000s: Here be dragons

As businesses moved onto the internet in the late 1990s, an explosion of user data enabled “collaborative filtering” — targeting individuals based on their behavior and the behavior of similar people. This powered the rise of Google, Facebook, and “surveillance capitalism.”
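A minimal user-based collaborative-filtering sketch (made-up ratings; treating zeros as both “unrated” and part of the similarity vector is a simplification) shows the mechanism: predictions are weighted toward users who behave like you.

```python
from math import sqrt

# Rows are users, columns are items; 0 means unrated (a simplification).
R = [[5, 4, 0, 1],
     [4, 5, 1, 0],
     [1, 0, 5, 4]]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Predict user 1's rating of item 3, weighting each other user's rating
# by how similar their overall rating vector is to user 1's.
target, item = 1, 3
others = [u for u in range(len(R)) if u != target]
sims = [cosine(R[target], R[u]) for u in others]
pred = sum(s * R[u][item] for s, u in zip(sims, others)) / sum(sims)
print(round(pred, 2))  # the similar user (who rated it 1) dominates
```

The echo-chamber effect described next falls straight out of this weighting: dissimilar users contribute almost nothing to what you are shown.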

Unintended consequences: The collaborative filtering process created echo chambers by preferentially showing people ideas that similar users enjoyed, propagating biases and misinformation. Even worse, “preferential attachment” algorithms ensured these echo chambers would be dominated by a few attention-grabbing voices — what scholars call “dragons.” 

These overwhelmingly dominant voices in media, commerce, finance, and elections create a rich-get-richer feedback loop that crowds out everyone else, undermining balanced civic discussion and democratic processes. The mathematics of such networks show that when data access is extremely unequal, dragons inevitably arise, and removing one simply clears the way for another.
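The rich-get-richer dynamic behind “dragons” is easy to simulate. In this toy Pólya-urn sketch (illustrative only), each newcomer follows an account with probability proportional to its current following:

```python
import random
random.seed(0)

followers = [1] * 5                    # five accounts start out equal
for _ in range(10_000):
    # Each newcomer picks an account in proportion to its current count.
    pick = random.choices(range(5), weights=followers)[0]
    followers[pick] += 1

print(sorted(followers, reverse=True))  # early luck compounds into dominance
```

Identical starting conditions still produce a skewed outcome, because whichever account gets early random attention attracts disproportionately more of it thereafter.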

AI Today: The era of generative AI

Today’s AI differs from previous generations’ because it can tell stories and create images. Built from online human stories rather than facts or logic, generative AI mimics human intelligence by collecting and recombining our digital narratives. While earlier AI managed specific organizational functions, generative AI directly addresses how humans think and communicate.


Unintended consequences: Because generative AI is built from people’s digital commentary, it inherently propagates biases and misinformation. More fundamentally, it doesn’t actually “think” — it simply plays back combinations of stories it has seen, sometimes producing recommendations with completely unintended effects or removing human agency entirely. 

Since humans choose actions based on stories they believe, and collective action depends on consensus stories, generative AI’s ability to tell stories gives it worrying power to directly influence what people believe and how they act — a power earlier AI technologies never possessed. 

Companies and governments often present AI simulations as “the truth” while selecting models biased toward their interests. The rapid spread of misinformation through digital platforms undermines expert authority and makes collective action more difficult.

Conclusion

With some changes to our current systems, it is possible to have the advantages of a digital society without enabling loud voices, companies, or state actors to overly influence individual and community behavior.

Excerpted from “Shared Wisdom: Cultural Evolution in the Age of AI,” by Alex Pentland. Reprinted with permission from The MIT Press. Copyright 2025.


Alex “Sandy” Pentland is an MIT professor post tenure of Media Arts and Sciences and a HAI fellow at Stanford. He helped build the MIT Media Lab and the Media Lab Asia in India. Pentland co-led the World Economic Forum discussion in Davos, Switzerland, that led to the European Union privacy regulation GDPR, and was named one of the United Nations Secretary-General’s “data revolutionaries,” who helped forge the transparency and accountability mechanisms in the UN’s Sustainable Development Goals. He has received numerous awards and distinctions, such as MIT’s Toshiba endowed chair, election to the National Academy of Engineering, the McKinsey Award from Harvard Business Review, and the Brandeis Privacy Award.

In addition to “Shared Wisdom,” Pentland is the author or co-author of “Building the New Economy: Data as Capital,” “Social Physics: How Good Ideas Spread — The Lessons From a New Science,” and “Honest Signals: How They Shape Our World.”

Pentland also co-teaches these MIT Sloan Executive Education classes: 

  • Leading the Future of Work
  • Machine Learning in Business
  • Artificial Intelligence: Implications for Business Strategy
  • Navigating AI: Driving Business Impact and Developing Human Capability

For more info: Tracy Mayor, Senior Associate Director, Editorial, (617) 253-0065, tmayor@mit.edu

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/ais-missing-ingredient-shared-wisdom?

Hype Correction – MIT Technology Review

Posted by timmreardon on 12/15/2025
Posted in: Uncategorized.
 It’s time to reset expectations.

AI is going to reproduce human intelligence. AI will eliminate disease. AI is the single biggest, most important invention in human history. You’ve likely heard it all—but probably none of these things are true.

AI is changing our world, but we don’t yet know the real winners, or how this will all shake out.

After a few years of out-of-control hype, people are now starting to re-calibrate what AI is, what it can do, and how we should think about its ultimate impact. 

When the wow factor is gone, what’s left? How will we view this technology a year or five from now? Will we think it was worth the colossal costs, both financial and environmental?

Here, at the end of 2025, we’re starting the post-hype phase. This package of stories is a way to reset expectations—a critical look at where we are, what AI makes possible, and where we go next.

Let’s take stock.


The great AI hype correction of 2025

Four ways to think about this year’s reckoning.

What even is the AI bubble?

Everyone in tech agrees we’re in a bubble. They just can’t agree on what it looks like — or what happens when it pops.

A brief history of Sam Altman’s hype

Here’s how pinning a utopian vision for AI on LLMs kicked off the hype cycle that’s causing fears of a bubble today.

The AI doomers feel undeterred

But they certainly wish people were still taking their threats really seriously.

AI coding is now everywhere. But not everyone is convinced.

Developers are navigating confusing gaps between expectation and reality. So are the rest of us.

AI materials discovery now needs to move into the real world

Startups flush with cash are building AI-assisted laboratories to find materials far faster and more cheaply, but are still waiting for their ChatGPT moment.

AI might not be coming for lawyers’ jobs anytime soon

Generative AI might have aced the bar exam, but an LLM still can’t think like a lawyer.

Generative AI hype distracts us from AI’s more important breakthroughs

It is a seductive distraction from the advances in AI that are most likely to improve or even save your life.



CREDITS

EDITORIAL
Writing and reporting: Edd Gent, Alex Heath, Will Douglas Heaven, Michelle Kim, Garrison Lovely, Margaret Mitchell, James O’Donnell, David Rotman
Series editor: Niall Firth
Additional editing: Rachel Courtland, Charlotte Jee, Mary Beth Griggs, Mat Honan, Adam Rogers, Amanda Silverman
Managing editor: Teresa Elsey
Copy editing: Linda Lowenthal
Fact checking: Jude Coleman, Graham Hacia, Simi Kadirgamar
Engagement: Juliet Beauchamp, Abby Ivory-Ganja
DESIGN
Art direction: Stephanie Arnett
Photo Illustrations: Derek Brahney

Article link: https://www.technologyreview.com/supertopic/hype-correction/

Semantic Collapse – NeurIPS 2025

Posted by timmreardon on 12/12/2025
Posted in: Uncategorized.

NeurIPS 2025 just wrapped, and one paper caught my eye.

Jiang et al. ran an extensive empirical study on something many of us have been muttering about for a while – what I’ve called the “beigeification” of large language models. Their finding is stark: open-ended questions are collapsing to the same narrow set of answers across ALL major models.

Take their example: “Write a metaphor about time.”

This should invite wild exploration. Instead, every model collapsed onto two metaphors: time as a stream, or time as a weaver. Different labs. Different training pipelines. Different architectural tweaks. Same answers.

The culprits appear to be:

🔹 Shared underlying training data  
🔹 Aggressive RLHF tuning that suppresses outliers  
🔹 Overlapping pools of human preference labellers  
🔹 And increasingly, LLM-as-judge for scalable evaluation

That last one matters most. The study found that LLM-as-judge doesn’t value diversity. It rewards the statistically obvious answer – the “safe” one – even on tasks where distinctiveness is the entire point.

This is the algorithmic root of AI slop: homogeneous content generated at scale, feeding back into training data, tightening the collapse further. And this is bigger than mode collapse within any single model. It’s systemic semantic collapse – the erosion of diversity of meaning itself.
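The collapse the study describes can be quantified. One simple measure is the Shannon entropy of the answer distribution (the answers below are hypothetical stand-ins for model outputs):

```python
from collections import Counter
from math import log2

# Hypothetical answers to "Write a metaphor about time" across models.
answers = ["time is a river", "time is a river", "time is a weaver",
           "time is a river", "time is a weaver", "time is a river"]

counts = Counter(answers)
total = len(answers)
entropy = -sum(c / total * log2(c / total) for c in counts.values())

# Six fully distinct answers would give log2(6) ≈ 2.58 bits of diversity;
# collapse onto two metaphors leaves far less.
print(round(entropy, 2))  # 0.92
```

Tracking a metric like this across model generations would make the feedback loop visible: as slop re-enters training data, the entropy of open-ended answers should keep falling.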

From an algorithmic standpoint alone, this is a disaster. Some of the most exciting recent breakthroughs – JEPA, AlphaEvolve – depend on diversification. Evolutionary methods need variation to explore the frontier of what’s possible. If the meaning-space collapses, the search space collapses with it.

My work sits at the intersection of LLMs and knowledge graphs, so I see this through a particular lens:

The way I see it, the only real defence against semantic collapse is for organisations to discover and formalise their ontological cores.

Everything inside the general training distribution is being commoditised. What remains valuable is the uncommon – the structured, defensible edge of understanding that only you possess. More than that, every organisation has a unique semantics: a unique way of understanding its world. When you capture that formally – not as branding, but as ontology – you create a semantic boundary around yourself. A structure of meaning that persists even as the external world beigeifies.

A strong core creates a strong boundary. Your organisation will need one for what’s coming.

And if enough organisations do this – using open standards – we end up with a network of networks. A rich, diverse semantic ecosystem rather than a single homogenised hive mind.

⭕ Artificial Hivemind Paper: https://lnkd.in/e9ERs7KB
⭕ Distribution: https://lnkd.in/eRCrEKnt
⭕ Network of Networks: https://lnkd.in/efrpa3q9


Article link: https://www.linkedin.com/posts/tonyseale_neurips-2025-just-wrapped-and-one-paper-activity-7405169640710053889-v582?

The arrhythmia of our current age – MIT Technology Review

Posted by timmreardon on 12/11/2025
Posted in: Uncategorized.


The rhythms of life seem off. Can we restore a steady beat?

By David Ewing Duncan

October 30, 2024

Thumpa-thumpa, thumpa-thumpa, bump, 

thumpa, skip, thumpa-thump, pause …

My heart wasn’t supposed to be beating like this. Way too fast, with bumps, pauses, and skips. On my smart watch, my pulse was topping out at 210 beats per minute and jumping every which way as my chest tightened. Was I having a heart attack? 

The day was July 4, 2022, and I was on a 12-mile bike ride on Martha’s Vineyard. I had just pedaled past Inkwell Beach, where swimmers sunbathed under colorful umbrellas, and into a hot, damp headwind blowing off the sea. That’s when I first sensed a tugging in my chest. My legs went wobbly. My head started to spin. I pulled over, checked my watch, and discovered that I was experiencing atrial fibrillation—a fancy name for a type of arrhythmia. The heart beats, but not in the proper time. Atria are the upper chambers of the heart; fibrillation means an attack of “uncoordinated electrical activity.”   

I recount this story less to describe a frightening moment for me personally than to consider the idea of arrhythmia—a critical rhythm of life suddenly going rogue and unpredictable, triggered by … what? That July afternoon was steamy and over 90 °F, but how many times had I biked in heat far worse? I had recently recovered from a not-so-bad bout of covid—my second. Plus, at age 64, I wasn’t a kid anymore, even if I didn’t always act accordingly.  

Whatever the proximal cause, what was really gripping me on July 4, 2022, was the idea of arrhythmia as metaphor. That a pulse once seemingly so steady was now less sure, and how this wobbliness might be extrapolated into a broader sense of life in the 2020s. I know it’s quite a leap from one man’s abnormal ticker to the current state of an entire species and era, but that’s where my mind went as I was taken to the emergency department at Martha’s Vineyard Hospital. 

Maybe you feel it, too—that the world seems to have skipped more than a beat or two as demagogues rant and democracy shudders, hurricanes rage, glaciers dissolve, and sunsets turn a deeper orange as fires spew acrid smoke into the sky, and into our lungs. We can’t stop watching tiny screens where influencers pitch products we don’t need alongside news about senseless wars that destroy, murder, and maim tens of thousands. Poverty remains intractable for billions. So do loneliness and a rising crisis in mental health, even as we fret over whether AI is going to save us or turn us into pets; and on and on.

For most of my life, I’ve leaned into optimism, confident that things will work out in the end. But as a nurse admitted me and attached ECG leads to my chest, I felt a wave of doubt about the future. Lying on a gurney, I watched my pulse jump up and down on a monitor, erratically and still way too fast, as another nurse poked a needle into my hand to deliver an IV bag of saline that would hydrate my blood vessels. Soon after, a young, earnest doctor came in to examine me, and I heard the word uttered for the first time. 

“You are having an arrhythmia,” he said.

Even with my heart beating rat-a-tat-tat, I couldn’t help myself. Intrigued by the word, which I had heard before but had never really heard, I pulled out the phone that is always at my side and looked it up.

ar·rhyth·mi·a
Noun: “a condition in which the heart beats with an irregular or abnormal  rhythm.” Greek a-, “without,” and rhuthmos, “rhythm.”

I lay back and closed my eyes and let this Greek origin of the word roll around in my mind as I repeated it several times—rhuthmos, rhuthmos, rhuthmos.

Rhythm, rhythm, rhythm …

I tapped my finger to follow the beat of my heart, but of course I couldn’t, because my heart wasn’t beating in the steady and predictable manner that my finger could easily have followed before July 4, 2022. After all, my heart was built to tap out in a rhythm, a rhuthmos—not an arhuthmos. 

Later I discovered that the Greek rhuthmos, ῥυθμός, like the English rhythm, refers not only to heartbeats but to any steady motion, symmetry, or movement. For the ancient Greeks this word was closely tied to music and dance; to the physics of vibration and polarity; to a state of balance and harmony. The concept of rhuthmos was incorporated into Greek classical sculptures using a strict formula of proportions called the Kanon, an example being the Doryphoros (Spear Bearer) originally by the fifth-century sculptor Polykleitos. Standing today in the Acropolis Museum in Athens, this statue appears to be moving in an easy fluidity, a rhuthmos that’s somehow drawn out of the milky-colored stone.

The Greeks also thought of rhuthmos as harmony and balance in emotions, with Greek playwrights penning tragedies where the rhuthmos of life, nature, and the gods goes awry. “In this rhythm, I am caught,” cries Prometheus in Aeschylus’s Prometheus Bound, where rhuthmos becomes a steady, unrelenting punishment inflicted by Zeus when Prometheus introduces fire to humans, providing them with a tool previously reserved for the gods. Each day Prometheus, who is chained to a rock, has his liver eaten out by an eagle, only to have the liver grow back each night, a cycle repeated day after day in a steady beat for an eternity of penance, pain, and vexation.

In modern times, cardiologists have used rhuthmos to refer to the physical beating of the muscle in our chests that mixes oxygen and blood and pumps it through 60,000 miles of veins, arteries, and capillaries to fingertips, toe tips, frontal cortex, kidneys, eyes, everywhere. In 2006, the journal Rhythmos launched as a quarterly medical publication that focuses on cardiac electrophysiology. This subspecialty of cardiology involves the electrical signals animating the heart with pulses that keep it beating steadily—or, for me in the summer of 2022, not. 

The question remained: Why?

As far as I know, I wasn’t being punished by Zeus, although I couldn’t entirely rule out the possibility that I had annoyed some god or goddess and was catching hell for it. Possibly covid was the culprit—that microscopic bundle of RNA with the power of a god to mess with us mortals—but who knows? As science learns more about this pernicious bug, evidence suggests that it can play havoc with the nervous system and tissue that usually make sure the heart stays in rhuthmos. 

A-fib also can be instigated by even moderate imbibing of alcohol, by aging, and sometimes by a gene called KCNQ1. Mutations in this gene “appear to increase the flow of potassium ions through the channel formed with the KCNQ1 protein,” according to MedlinePlus, part of the National Library of Medicine. “The enhanced ion transport can disrupt the heart’s normal rhythm, resulting in atrial fibrillation.” Was a miscreant mutation playing a role in my arrhythmia?

Angst and fear can influence A-fib too. I had plenty of both during the pandemic, along with most of humanity. Lest we forget—and we’re trying really, really hard to forget—covid anxiety continued to rage in the summer of 2022, even after vaccines had arrived and most of the world had reopened. 

Back then, the damage done to fragile brains forced to shelter in place for months and months was still fresh. Cable news and social media continued to amplify the terror of seeing so many people dead or facing permanent impairment. Politics also seemed out of control, with demagogues—another Greek word—running amok. Shootings, invasions, hatred, and fury seemed to lurk everywhere. This is one reason I stopped following the news for days at a time—something I had never done, as a journalist and news junkie. I felt that my fragile heart couldn’t bear so much visceral tragedy, so much arhuthmos.

We each have our personal stories from those dark days. For me, covid came early in 2020 and led to a spring and summer with a pervasive brain fog, trouble breathing, and eventually a depression of the sort that I had never experienced before. At the same time, I had friends who ended up in the ICU, and I knew people whose parents and other relatives had passed. My mother was dying of dementia, and my father had been in and out of the ICU a half-dozen times with myasthenia gravis, an autoimmune disease that can be fatal. This family dissolution had started before covid hit, but the pandemic made the implosion of my nuclear family seem worse and undoubtedly contributed to the failure of my heart’s pulse to stay true. 

Likewise, the wider arhuthmos some of us are feeling now began long before the novel coronavirus shut down ordinary life in March 2020. Statistics tell us that anxiety, stress, depression, and general mental unhealthiness have been steadily ticking up for years. This seems to suggest that something bigger has been going on for some time—a collective angst that seems to point to the darker side of modern life itself. 

Don’t get me wrong. Modern life has provided us with spectacular benefits—Manhattan, Boeing 787 Dreamliners, IMAX films, cappuccinos, and switches and dials on our walls that instantly illuminate or heat a room. Unlike our ancestors, most of us no longer need to fret about when we will eat next or whether we’ll find a safe place to sleep, or worry that a saber-toothed tiger will eat us. Nor do we need to experience an A-fib attack without help from an eager and highly trained young doctor, an emergency department, and an IV to pump hydration into our veins. 

But there have been trade-offs. New anxieties and threats have emerged to make us feel uneasy and arrhythmic. These start with uneven access to things like emergency departments, eager young doctors, shelter, and food—which can add to anxiety not only for those without them but also for anyone who finds this situation unacceptable. Even being on the edge of need can make the heart gambol about.

Consider, too, the basic design features of modern life, which tend toward straight lines—verticals and horizontals. This comes from an instinct we have to tidy up and organize things, and from the fact that verticals and horizontals in architecture are stable and functional. 

All this straightness, however, doesn’t always sit well with brains that evolved to see patterns and shapes in the natural world, which isn’t horizontal and vertical. Our ancestors looked out over vistas of trees and savannas and mountains that were not made from straight lines. Crooked lines, a bending tree, the fuzzy contour of a grassy vista, a horizon that bobs and weaves—these feel right to our primordial brains. We are comforted by the curve of a robin’s breast and the puffs and streaks and billows of clouds high in the sky, the soft earth under our feet when we walk.

Not to overly romanticize nature, which can be violent, unforgiving, and deadly. Devastating storms and those predators with sharp teeth were a major reason why our forebears lived in trees and caves and built stout huts surrounded by walls. Homo sapiens also evolved something crucial to our survival—optimism that we would survive and prevail. This has been a powerful tool—one of the reasons we are able to forge ahead, forget the horrors of pandemics and plagues, build better huts, and learn to make cappuccinos on demand.

As one of the great optimists of our day, Kevin Kelly, has said: “Over the long term, the future is decided by optimists.” 

But is everything really okay in this future that our ancestors built for us? Is the optimism that’s hardwired into us and so important for survival and the rise of civilization one reason for the general anxiety we’re feeling in a future that has in some crucial ways turned out less ideal than those who constructed it had hoped? 

At the very least, modern life seems to be downplaying elements that are as critical to our feelings of safety as sturdy walls, standing armies, and clean ECGs—and truly more crucial to our feelings of happiness and prosperity than owning two cars or showing off the latest swimwear on Miami Beach. These fundamentals include love and companionship, which statistics tell us are in short supply. Today millions have achieved the once optimistic dream of living like minor pharaohs and kings in suburban tract homes and McMansions, yet inadvertently many find themselves separated from the companionship and community that are basic human cravings. 

Modern science and technology can be dazzling and good and useful. But they’ve also been used to design things that hurt us broadly while spectacularly benefiting just a few of us. We have let the titans of social media hijack our genetic cravings to be with others, our need for someone to love and to love us, so that we will stay glued to our devices, even in the ED when we think we might be having a heart attack. Processed foods are designed to play on our body’s craving for sweets and animal fat, something that evolution bestowed so we would choose food that is nutritious and safe to eat (mmm, tastes good) and not dangerous (ugh, sour milk). But now their easy abundance overwhelms our bodies and makes many of us sick. 

We invented money so that acquiring things and selling what we make in order to live better would be faster and easier. In the process, we also invented a whole new category of anxiety—about money. We worry about having too little of it and sometimes too much; we fear that someone will steal it or trick us into spending it on things we don’t need. Some of us feel guilty about not spending enough of it on feeding the hungry or repairing our climate. Money also distorts elections, which require huge amounts of it. You may have gotten a text message just now, asking for some to support a candidate you don’t even like. 

The irony is that we know how to fix at least some of what makes us on edge. For instance, we know we shouldn’t drive gas-guzzling SUVs and that we should stop looking at endless perfect kitchens, too-perfect influencers, and 20-second rants on TikTok. We can feel helpless even as new ideas and innovations proliferate. This may explain one of the great contradictions of this age of arrhythmia—one demonstrated in a 2023 UNESCO global survey about climate change that polled 3,000 young people, aged 16 to 24, from 80 different countries. Not surprisingly, 57% were “eco-anxious.” But an astonishing 67% were “eco-optimistic,” meaning many were both anxious and hopeful.

Me too. 

All this anxiety and optimism have been hard on our hearts—literally and metaphorically. Too much worry can cause this fragile muscle to break down, to lose its rhythm. So can too much of modern life. Cardiovascular disease remains the No. 1 killer of adults, in the US and most of the world, with someone in America dying of it every 33 seconds, according to the Centers for Disease Control and Prevention. The incidence of A-fib has tripled in the past 50 years (possibly because we’re diagnosing it more); it afflicted almost 50 million people globally in 2016.

For me, after that initial attack on Martha’s Vineyard, the A-fib episodes kept coming. I charted them on my watch, the blips and pauses in my pulse, the moments when my heart raced at over 200 beats per minute, causing my chest to tighten and my throat to feel raw. Sometimes I tasted blood, or thought I did. I kept bicycling through the summer and fall of 2022, gingerly watching my heart rate to see if I could keep the beats from taking a sudden leap from normal to out of control. 

When an arrhythmic episode happened, I struggled to catch my breath as I pulled over to the roadside to wait for the misfirings to pass. Sometimes my mind grew groggy, and I got confused. It became difficult during these cardio-disharmonious moments to maintain my cool with other people. I became less able to process the small setbacks that we all face every day—things I previously had been able to let roll off my back.

Early in 2023 I had my heart checked by a cardiologist. He conducted an echocardiogram and had me jog on a treadmill hooked up to monitors. “There has been no damage to your heart,” he declared after getting the results, pointing to a black-and-white video of my heart muscle contracting and relaxing, drawing in blood and pumping it back out again. I felt relieved, although he also said that the A-fib was likely to persist, so he prescribed a blood thinner called Eliquis as a precaution to prevent stroke. Apparently, during unnatural pauses in one’s heartbeat, blood can clot and send tiny, scab-like fragments into the brain, potentially clogging up critical capillaries and other blood vessels. “You don’t want that to happen,” said the cardiologist.

Toward the end of my heart exam, the doctor mentioned a possible fix for my arrhythmia. I was skeptical, although what he proposed turned out to be one of the great pluses of being alive right now—a solution that was unavailable to my ancestors or even to my grandparents. “It’s called a heart ablation,” he said. The procedure, a relatively simple operation, redirects errant electrical signals in the heart muscle to restore a normal pattern of beating. Doctors run a tube into the heart, find the abnormal tissue throwing off the rhythm, and zap it with extreme heat, extreme cold, or (the newest option) electrical pulses. There are an estimated 240,000 such procedures a year in the United States.

“Can you really do that?” I asked.

“We can,” said the doctor. “It doesn’t always work the first time. Sometimes you need a second or third procedure, but the success rate is high.”

A few weeks later, I arrived at Beth Israel Hospital in Boston at 11 a.m. on a Tuesday. My first cardiologist was unavailable to do the procedure, so after being prepped in the pre-op area I was greeted by Andre d’Avila, a specialist in cardiac electrophysiology, who explained again how the procedure worked. He said that he and an electrophysiology fellow would insert long, snakelike catheters through the femoral veins in my groin; the catheters carry wires tipped with a tiny ultrasound camera and a cauterizer that would be used to selectively and carefully burn the surfaces of my atrial muscles. The idea was to create patterns of scar tissue to block and redirect the errant electrical signals and restore a steady rhuthmos to my heart. The whole thing would take about two or three hours, and I would likely be going home that afternoon.

Moments later, an orderly came and wheeled me through busy hallways to an OR where Dr. d’Avila introduced the technicians and nurses on his OR team. Monitors pinged and machines whirred as an anesthesiologist placed a mask over my mouth and nose, and I slipped into unconsciousness.

The ablation was a success. Since I woke up, my heart has kept a steady beat, restoring my internal rhuthmos, even if the procedure sadly did not repair the myriad worrisome externalities—the demagogues, carbon footprints, and the rest. Still, the undeniably miraculous singeing of my atrial muscles left me with a realization that if human ingenuity can fix my heart and restore its rhythm, shouldn’t we be able to figure out how to fix other sources of arhuthmos in our lives? 

We already have solutions to some of what ails us. We know how to replace fossil fuels with renewables, make cities less sharp-edged, and create smart gizmos and apps that calm our minds rather than agitating them. 

For my own small fix, I thank Dr. d’Avila and his team, and the inventors of the ablation procedure. I also thank Prometheus, whose hubris in bringing fire to mortals literally saved me by providing the hot-tipped catalyst to repair my ailing heart. Perhaps this can give us hope that the human species will bring the larger rhythms of life into a better, if not perfect, beat. Call me optimistic, but also anxious, about our prospects even as I can now place my finger on my wrist and feel once again the steady rhuthmos of my heart.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/10/30/1106350/the-arrhythmia-of-our-current-age/amp/
