healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

AI’s missing ingredient: Shared wisdom – MIT Sloan

Posted by timmreardon on 12/21/2025
Posted in: Uncategorized.


By Sandy Pentland

Nov 12, 2025

What you’ll learn: 

  • Technological innovation works best when it’s grounded in collective wisdom.
  • We are in the fourth wave of AI. Developments in the 1960s, 1980s, and 2000s exerted enormous effects on commerce, government, and society but did not create a new AI industry.
  • Lessons from the unintended consequences of those earlier AI waves can help us build a digital society that protects individual and community autonomy. 

In his new book, “Shared Wisdom: Cultural Evolution in the Age of AI,” Alex “Sandy” Pentland, a Stanford HAI fellow and the Toshiba Professor at MIT, argues that we should use what we know about human nature to design our technology, rather than allowing technology to shape our society.

In the following excerpt, which has been lightly edited and condensed, Pentland examines the effects of earlier artificial intelligence systems on society and explains how we can use technologies like digital media and AI to aid, rather than replace, our human capacity for deliberation. 


The field of AI has had several periods of intense interest and investment (“AI booms”) followed by disillusionment and lack of support (“AI winters”). Each cycle has lasted roughly 20 years, or one generation. 

The important thing to notice is that although these earlier AI booms are typically viewed as failures because they did not create a big new AI industry, below the surface each advance exerted enormous effects on commerce, government, and society, usually under a different label and as part of larger management and prediction systems.

AI in 1960: Logic and optimal resource allocation

The first AI systems built in the 1950s used logic and mathematics to solve well-defined problems like optimization and proofs. These systems excelled at calculating delivery routes, packing algorithms, and performing similar tasks, generating enormous excitement and saving companies significant money.
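
To make the idea concrete, the kind of problem those early systems handled can be written as a small linear program. The sketch below is illustrative only, with invented products and numbers, and assumes SciPy is available; it is not Kantorovich’s actual system.

```python
# Toy "optimal resource allocation": choose how many units of two products
# to make from limited machine-hours and raw material, maximizing profit.
# All numbers are invented for illustration; requires scipy.
from scipy.optimize import linprog

# Maximize 40*x1 + 30*x2. linprog minimizes, so negate the objective.
profit = [-40, -30]

# Constraints: 2*x1 + 1*x2 <= 100 machine-hours, 1*x1 + 2*x2 <= 80 kg of material.
A_ub = [[2, 1],
        [1, 2]]
b_ub = [100, 80]

result = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x1, x2 = result.x
print(f"Make {x1:.1f} units of product A and {x2:.1f} units of product B")
print(f"Maximum profit: {-result.fun:.0f}")
```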

Unintended consequences: When these successful small-scale systems were applied to manage entire societies under “optimal resource allocation,” the results were disastrous. 

The Soviet Union adopted Leonid Kantorovich’s system to manage its economy, but despite earning him a Nobel Prize, the experiment failed catastrophically and contributed to the USSR’s eventual dissolution. 

The core problem wasn’t the AI itself but the inadequate models of society available — models that failed to capture complexity and dynamism and suffered from misinformation, bias, and lack of inclusion.

AI in 1980: Expert systems

Expert systems replaced the rigidity of logic with human-developed heuristics to automate tasks where specialists were too expensive or scarce. Banking emerged as a major application area, with automated loan systems replacing neighborhood credit managers to achieve consistency and reduce labor costs.
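
A rough sense of what an expert system encodes: a handful of hand-written heuristics standing in for a specialist’s judgment. The rules and thresholds below are hypothetical, not any bank’s real criteria; they simply show how the same fixed rules get applied everywhere, with no community-specific knowledge.

```python
# Hypothetical expert-system-style loan screen. A fixed set of human-authored
# rules replaces the judgment of a neighborhood credit manager. Illustrative only.
def score_applicant(income, debt, years_employed, prior_defaults):
    reasons = []
    if prior_defaults > 0:
        reasons.append("prior default on record")
    if debt / max(income, 1) > 0.4:
        reasons.append("debt-to-income ratio above 40%")
    if years_employed < 2:
        reasons.append("less than two years of steady employment")

    # One flag sends the case to a human; more than one is an automatic decline.
    decision = "approve" if not reasons else ("refer" if len(reasons) == 1 else "decline")
    return decision, reasons

print(score_applicant(income=52_000, debt=18_000, years_employed=5, prior_defaults=0))
print(score_applicant(income=52_000, debt=30_000, years_employed=1, prior_defaults=0))
```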

Unintended consequences: While automating loan decisions brought uniformity, it eliminated community-specific knowledge and reinforced existing biases while limiting inclusivity. More damaging was the hollowing out of communities themselves — loan officers disappeared, along with credit unions and cooperatives. Bank branches became little more than ATM locations. The concentration of data and financial capital led to more than half of community financial institutions vanishing over subsequent decades. 

Additionally, centralization created increasingly complex, expensive, and inflexible systems that benefited large bureaucracies and software companies while leaving citizens lost in incomprehensible rules. Between 1980 and 2014, the percentage of companies less than a year old dropped from 12.5% to 8%, likely contributing to slower economic growth and rising inequality.

AI in the 2000s: Here be dragons

As businesses moved onto the internet in the late 1990s, an explosion of user data enabled “collaborative filtering” — targeting individuals based on their behavior and the behavior of similar people. This powered the rise of Google, Facebook, and “surveillance capitalism.”
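
In its simplest user-based form, collaborative filtering recommends whatever behaviorally similar users engaged with. The sketch below is a toy illustration with made-up interaction data and cosine similarity, not any platform’s actual algorithm.

```python
# Minimal user-based collaborative filtering on a tiny, invented
# user-by-item interaction matrix (1 = clicked/liked, 0 = not seen).
import numpy as np

items = ["news_a", "news_b", "video_c", "video_d"]
interactions = np.array([
    [1, 1, 0, 0],   # user 0: the person we are targeting
    [1, 1, 1, 0],   # user 1: behaves a lot like user 0
    [0, 0, 1, 1],   # user 2: different tastes
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0
# Weight every other user's history by their similarity to the target user.
sims = np.array([cosine(interactions[target], interactions[u])
                 for u in range(len(interactions))])
sims[target] = 0.0
scores = sims @ interactions

# Recommend the best-scoring item the target user has not already seen.
unseen = interactions[target] == 0
best = np.argmax(np.where(unseen, scores, -np.inf))
print("Recommend:", items[best])   # "video_c", because the similar user liked it
```

The recommendation comes straight from the most similar user’s history, which is exactly the mechanism that, at scale, produces the echo chambers described below.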

Unintended consequences: The collaborative filtering process created echo chambers by preferentially showing people ideas that similar users enjoyed, propagating biases and misinformation. Even worse, “preferential attachment” algorithms ensured these echo chambers would be dominated by a few attention-grabbing voices — what scholars call “dragons.” 

These overwhelmingly dominant voices in media, commerce, finance, and elections create a rich-get-richer feedback loop that crowds out everyone else, undermining balanced civic discussion and democratic processes. The mathematics of such networks show that when data access is extremely unequal, dragons inevitably arise, and removing one simply clears the way for another.
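
The “dragon” dynamic is essentially preferential attachment: new attention flows to whoever already holds the most. The toy urn-style simulation below (invented parameters, not the actual platform mathematics) shows how skewed the resulting distribution becomes.

```python
# Rich-get-richer simulation: each new unit of attention goes to an existing
# voice with probability proportional to the attention it already holds.
import random

random.seed(0)
voices = [1] * 20                 # 20 voices start with one unit of attention each
for _ in range(10_000):           # 10,000 new units of attention arrive
    winner = random.choices(range(len(voices)), weights=voices, k=1)[0]
    voices[winner] += 1

voices.sort(reverse=True)
total = sum(voices)
print(f"Top 3 voices hold {sum(voices[:3]) / total:.0%} of all attention")
print(f"Bottom 10 voices hold {sum(voices[-10:]) / total:.0%}")
```

Exact shares depend on the random seed, but the outcome is consistently lopsided: a few early leaders absorb most of the new attention.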

AI Today: The era of generative AI

Today’s AI differs from previous generations’ because it can tell stories and create images. Built from online human stories rather than facts or logic, generative AI mimics human intelligence by collecting and recombining our digital narratives. While earlier AI managed specific organizational functions, generative AI directly addresses how humans think and communicate.

Unintended consequences: Because generative AI is built from people’s digital commentary, it inherently propagates biases and misinformation. More fundamentally, it doesn’t actually “think” — it simply plays back combinations of stories it has seen, sometimes producing recommendations with completely unintended effects or removing human agency entirely. 

Since humans choose actions based on stories they believe, and collective action depends on consensus stories, generative AI’s ability to tell stories gives it worrying power to directly influence what people believe and how they act — a power earlier AI technologies never possessed. 

Companies and governments often present AI simulations as “the truth” while selecting models biased toward their interests. The rapid spread of misinformation through digital platforms undermines expert authority and makes collective action more difficult.

Conclusion

With some changes to our current systems, it is possible to have the advantages of a digital society without enabling loud voices, companies, or state actors to overly influence individual and community behavior.

Excerpted from “Shared Wisdom: Cultural Evolution in the Age of AI,” by Alex Pentland. Reprinted with permission from The MIT Press. Copyright 2025.


Alex “Sandy” Pentland is an MIT professor post tenure of Media Arts and Sciences and a HAI fellow at Stanford. He helped build the MIT Media Lab and the Media Lab Asia in India. Pentland co-led the World Economic Forum discussion in Davos, Switzerland, that led to the European Union privacy regulation GDPR and was named one of the United Nations Secretary-General’s “data revolutionaries,” who helped forge the transparency and accountability mechanisms in the UN’s Sustainable Development Goals. He has received numerous awards and distinctions, such as MIT’s Toshiba endowed chair, election to the National Academy of Engineering, the McKinsey Award from Harvard Business Review, and the Brandeis Privacy Award.

In addition to “Shared Wisdom,” Pentland is the author or co-author of “Building the New Economy: Data as Capital,” “Social Physics: How Good Ideas Spread — The Lessons From a New Science,” and “Honest Signals: How They Shape Our World.”

Pentland also co-teaches these MIT Sloan Executive Education classes: 

  • Leading the Future of Work
  • Machine Learning in Business
  • Artificial Intelligence: Implications for Business Strategy
  • Navigating AI: Driving Business Impact and Developing Human Capability

FOR MORE INFO: Tracy Mayor, Senior Associate Director, Editorial, (617) 253-0065, tmayor@mit.edu

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/ais-missing-ingredient-shared-wisdom?

Hype Correction – MIT Technology Review

Posted by timmreardon on 12/15/2025
Posted in: Uncategorized.

It’s time to reset expectations.

AI is going to reproduce human intelligence. AI will eliminate disease. AI is the single biggest, most important invention in human history. You’ve likely heard it all—but probably none of these things are true.

AI is changing our world, but we don’t yet know the real winners, or how this will all shake out.

After a few years of out-of-control hype, people are now starting to re-calibrate what AI is, what it can do, and how we should think about its ultimate impact. 

When the wow factor is gone, what’s left? How will we view this technology a year or five from now? Will we think it was worth the colossal costs, both financial and environmental?

Here, at the end of 2025, we’re starting the post-hype phase. This package of stories is a way to reset expectations—a critical look at where we are, what AI makes possible, and where we go next.

Let’s take stock.


The great AI hype correction of 2025

Four ways to think about this year’s reckoning.

What even is the AI bubble?

Everyone in tech agrees we’re in a bubble. They just can’t agree on what it looks like — or what happens when it pops.

A brief history of Sam Altman’s hype

Here’s how pinning a utopian vision for AI on LLMs kicked off the hype cycle that’s causing fears of a bubble today.

The AI doomers feel undeterred

But they certainly wish people were still taking the threats they warn about seriously.

AI coding is now everywhere. But not everyone is convinced.

Developers are navigating confusing gaps between expectation and reality. So are the rest of us.

AI materials discovery now needs to move into the real world

Startups flush with cash are building AI-assisted laboratories to find materials far faster and more cheaply, but are still waiting for their ChatGPT moment.

AI might not be coming for lawyers’ jobs anytime soon

Generative AI might have aced the bar exam, but an LLM still can’t think like a lawyer.

Generative AI hype distracts us from AI’s more important breakthroughs

It’s a seductive distraction from the advances in AI that are most likely to improve or even save your life.



CREDITS

EDITORIAL
Writing and reporting: Edd Gent, Alex Heath, Will Douglas Heaven, Michelle Kim, Garrison Lovely, Margaret Mitchell, James O’Donnell, David Rotman
Series editor: Niall Firth
Additional editing: Rachel Courtland, Charlotte Jee, Mary Beth Griggs, Mat Honan, Adam Rogers, Amanda Silverman
Managing editor: Teresa Elsey
Copy editing: Linda Lowenthal
Fact checking: Jude Coleman, Graham Hacia, Simi Kadirgamar
Engagement: Juliet Beauchamp, Abby Ivory-Ganja
DESIGN
Art direction: Stephanie Arnett
Photo Illustrations: Derek Brahney

Article link: https://www.technologyreview.com/supertopic/hype-correction/

Semantic Collapse – NeurIPS 2025

Posted by timmreardon on 12/12/2025
Posted in: Uncategorized.

NeurIPS 2025 just wrapped, and one paper caught my eye.

Jiang et al. ran an extensive empirical study on something many of us have been muttering about for a while – what I’ve called the “beigeification” of large language models. Their finding is stark: open-ended questions are collapsing to the same narrow set of answers across ALL major models.

Take their example: “Write a metaphor about time.”

This should invite wild exploration. Instead, every model collapsed onto two metaphors: time as a stream, or time as a weaver. Different labs. Different training pipelines. Different architectural tweaks. Same answers.

The culprits appear to be:

🔹 Shared underlying training data  
🔹 Aggressive RLHF tuning that suppresses outliers  
🔹 Overlapping pools of human preference labellers  
🔹 And increasingly, LLM-as-judge for scalable evaluation

That last one matters most. The study found that LLM-as-judge doesn’t value diversity. It rewards the statistically obvious answer – the “safe” one – even on tasks where distinctiveness is the entire point.

This is the algorithmic root of AI slop: homogeneous content generated at scale, feeding back into training data, tightening the collapse further. And this is bigger than mode collapse within any single model. It’s systemic semantic collapse – the erosion of diversity of meaning itself.
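
One crude way to put a number on this kind of collapse, loosely in the spirit of the study but not its actual methodology, is to measure how concentrated the answers to a single open-ended prompt are. The sketch below uses invented responses and reports the top answer’s share plus a normalized entropy.

```python
# Toy measurement of semantic collapse: how concentrated are the answers
# different models give to the same open-ended prompt? The responses below
# are invented for illustration.
import math
from collections import Counter

answers = [
    "time is a river",  "time is a river",  "time is a weaver",
    "time is a river",  "time is a weaver", "time is a river",
    "time is a thief",  "time is a river",  "time is a weaver",
    "time is a river",
]

counts = Counter(answers)
total = len(answers)
top_share = counts.most_common(1)[0][1] / total

# Normalized Shannon entropy: 1.0 means every answer was distinct,
# values near 0.0 mean near-total collapse onto one answer.
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
diversity = entropy / math.log2(total)

print(f"Distinct answers: {len(counts)} of {total}")
print(f"Top answer share: {top_share:.0%}")
print(f"Normalized diversity: {diversity:.2f}")
```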

From an algorithmic standpoint alone, this is a disaster. Some of the most exciting recent breakthroughs – JEPA, AlphaEvolve – depend on diversification. Evolutionary methods need variation to explore the frontier of what’s possible. If the meaning-space collapses, the search space collapses with it.

My work sits at the intersection of LLMs and knowledge graphs, so I see this through a particular lens:

The way I see it, the only real defence against semantic collapse is for organisations to discover and formalise their ontological cores.

Everything inside the general training distribution is being commoditised. What remains valuable is the uncommon – the structured, defensible edge of understanding that only you possess. More than that, every organisation has a unique semantics: a unique way of understanding its world. When you capture that formally – not as branding, but as ontology – you create a semantic boundary around yourself. A structure of meaning that persists even as the external world beigeifies.
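
Formalising an ontological core can start very small: a few classes and relations expressed in an open standard such as RDF. The sketch below uses rdflib (an assumed dependency, version 6 or later) and an invented healthcare-flavoured namespace; it illustrates what a machine-readable semantic boundary looks like, not a full enterprise ontology.

```python
# Minimal organisational ontology in RDF, serialised as Turtle.
# The namespace and class names are invented for illustration; requires rdflib 6+.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("https://example.org/ontology/")
g = Graph()
g.bind("ex", EX)

# Two classes and one relation that encode how *this* organisation understands
# its world, rather than the generic web-scale meaning of the same words.
g.add((EX.CarePathway, RDF.type, RDFS.Class))
g.add((EX.Intervention, RDF.type, RDFS.Class))
g.add((EX.includesIntervention, RDF.type, RDF.Property))
g.add((EX.includesIntervention, RDFS.domain, EX.CarePathway))
g.add((EX.includesIntervention, RDFS.range, EX.Intervention))

# One instance, to show the ontology in use.
g.add((EX.DiabetesPathway, RDF.type, EX.CarePathway))
g.add((EX.DiabetesPathway, RDFS.label, Literal("Type 2 diabetes care pathway")))

print(g.serialize(format="turtle"))
```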

A strong core creates a strong boundary. Your organisation will need one for what’s coming.

And if enough organisations do this – using open standards – we end up with a network of networks. A rich, diverse semantic ecosystem rather than a single homogenised hive mind.

⭕ Artificial Hivemind Paper: https://lnkd.in/e9ERs7KB
⭕ Distribution: https://lnkd.in/eRCrEKnt
⭕ Network of Networks: https://lnkd.in/efrpa3q9

Article link: https://www.linkedin.com/posts/tonyseale_neurips-2025-just-wrapped-and-one-paper-activity-7405169640710053889-v582?

The arrhythmia of our current age – MIT Technology Review

Posted by timmreardon on 12/11/2025
Posted in: Uncategorized.


The rhythms of life seem off. Can we restore a steady beat?

By David Ewing Duncan

October 30, 2024

Thumpa-thumpa, thumpa-thumpa, bump, 

thumpa, skip, thumpa-thump, pause …

My heart wasn’t supposed to be beating like this. Way too fast, with bumps, pauses, and skips. On my smart watch, my pulse was topping out at 210 beats per minute and jumping every which way as my chest tightened. Was I having a heart attack? 

The day was July 4, 2022, and I was on a 12-mile bike ride on Martha’s Vineyard. I had just pedaled past Inkwell Beach, where swimmers sunbathed under colorful umbrellas, and into a hot, damp headwind blowing off the sea. That’s when I first sensed a tugging in my chest. My legs went wobbly. My head started to spin. I pulled over, checked my watch, and discovered that I was experiencing atrial fibrillation—a fancy name for a type of arrhythmia. The heart beats, but not in the proper time. Atria are the upper chambers of the heart; fibrillation means an attack of “uncoordinated electrical activity.”   

I recount this story less to describe a frightening moment for me personally than to consider the idea of arrhythmia—a critical rhythm of life suddenly going rogue and unpredictable, triggered by … what? That July afternoon was steamy and over 90 °F, but how many times had I biked in heat far worse? I had recently recovered from a not-so-bad bout of covid—my second. Plus, at age 64, I wasn’t a kid anymore, even if I didn’t always act accordingly.  

Whatever the proximal cause, what was really gripping me on July 4, 2022, was the idea of arrhythmia as metaphor. That a pulse once seemingly so steady was now less sure, and how this wobbliness might be extrapolated into a broader sense of life in the 2020s. I know it’s quite a leap from one man’s abnormal ticker to the current state of an entire species and era, but that’s where my mind went as I was taken to the emergency department at Martha’s Vineyard Hospital. 

Maybe you feel it, too—that the world seems to have skipped more than a beat or two as demagogues rant and democracy shudders, hurricanes rage, glaciers dissolve, and sunsets turn a deeper orange as fires spew acrid smoke into the sky, and into our lungs. We can’t stop watching tiny screens where influencers pitch products we don’t need alongside news about senseless wars that destroy, murder, and maim tens of thousands. Poverty remains intractable for billions. So do loneliness and a rising crisis in mental health, even as we fret over whether AI is going to save us or turn us into pets; and on and on.

For most of my life, I’ve leaned into optimism, confident that things will work out in the end. But as a nurse admitted me and attached ECG leads to my chest, I felt a wave of doubt about the future. Lying on a gurney, I watched my pulse jump up and down on a monitor, erratically and still way too fast, as another nurse poked a needle into my hand to deliver an IV bag of saline that would hydrate my blood vessels. Soon after, a young, earnest doctor came in to examine me, and I heard the word uttered for the first time. 

“You are having an arrhythmia,” he said.

Even with my heart beating rat-a-tat-tat, I couldn’t help myself. Intrigued by the word, which I had heard before but had never really heard, I pulled out the phone that is always at my side and looked it up.

ar·rhyth·mi·a
Noun: “a condition in which the heart beats with an irregular or abnormal rhythm.” Greek a-, “without,” and rhuthmos, “rhythm.”

I lay back and closed my eyes and let this Greek origin of the word roll around in my mind as I repeated it several times—rhuthmos, rhuthmos, rhuthmos.

Rhythm, rhythm, rhythm …

I tapped my finger to follow the beat of my heart, but of course I couldn’t, because my heart wasn’t beating in the steady and predictable manner that my finger could easily have followed before July 4, 2022. After all, my heart was built to tap out in a rhythm, a rhuthmos—not an arhuthmos. 

Later I discovered that the Greek rhuthmos, ῥυθμός, like the English rhythm, refers not only to heartbeats but to any steady motion, symmetry, or movement. For the ancient Greeks this word was closely tied to music and dance; to the physics of vibration and polarity; to a state of balance and harmony. The concept of rhuthmos was incorporated into Greek classical sculptures using a strict formula of proportions called the Kanon, an example being the Doryphoros (Spear Bearer), originally by the fifth-century sculptor Polykleitos. Standing today in the Acropolis Museum in Athens, this statue appears to be moving in an easy fluidity, a rhuthmos that’s somehow drawn out of the milky-colored stone.

The Greeks also thought of rhuthmos as harmony and balance in emotions, with Greek playwrights penning tragedies where the rhuthmos of life, nature, and the gods goes awry. “In this rhythm, I am caught,” cries Prometheus in Aeschylus’s Prometheus Bound, where rhuthmos becomes a steady, unrelenting punishment inflicted by Zeus when Prometheus introduces fire to humans, providing them with a tool previously reserved for the gods. Each day Prometheus, who is chained to a rock, has his liver eaten out by an eagle, only to have the liver grow back each night, a cycle repeated day after day in a steady beat for an eternity of penance, pain, and vexation.

In modern times, cardiologists have used rhuthmos to refer to the physical beating of the muscle in our chests that mixes oxygen and blood and pumps it through 60,000 miles of veins, arteries, and capillaries to fingertips, toe tips, frontal cortex, kidneys, eyes, everywhere. In 2006, the journal Rhythmos launched as a quarterly medical publication that focuses on cardiac electrophysiology. This subspecialty of cardiology involves the electrical signals animating the heart with pulses that keep it beating steadily—or, for me in the summer of 2022, not. 

The question remained: Why?

As far as I know, I wasn’t being punished by Zeus, although I couldn’t entirely rule out the possibility that I had annoyed some god or goddess and was catching hell for it. Possibly covid was the culprit—that microscopic bundle of RNA with the power of a god to mess with us mortals—but who knows? As science learns more about this pernicious bug, evidence suggests that it can play havoc with the nervous system and tissue that usually make sure the heart stays in rhuthmos. 

A-fib also can be instigated by even moderate imbibing of alcohol, by aging, and sometimes by a gene called KCNQ1. Mutations in this gene “appear to increase the flow of potassium ions through the channel formed with the KCNQ1 protein,” according to MedlinePlus, part of the National Library of Medicine. “The enhanced ion transport can disrupt the heart’s normal rhythm, resulting in atrial fibrillation.” Was a miscreant mutation playing a role in my arrhythmia?

Angst and fear can influence A-fib too. I had plenty of both during the pandemic, along with most of humanity. Lest we forget—and we’re trying really, really hard to forget—covid anxiety continued to rage in the summer of 2022, even after vaccines had arrived and most of the world had reopened. 

Back then, the damage done to fragile brains forced to shelter in place for months and months was still fresh. Cable news and social media continued to amplify the terror of seeing so many people dead or facing permanent impairment. Politics also seemed out of control, with demagogues—another Greek word—running amok. Shootings, invasions, hatred, and fury seemed to lurk everywhere. This is one reason I stopped following the news for days at a time—something I had never done, as a journalist and news junkie. I felt that my fragile heart couldn’t bear so much visceral tragedy, so much arhuthmos.

We each have our personal stories from those dark days. For me, covid came early in 2020 and led to a spring and summer with a pervasive brain fog, trouble breathing, and eventually a depression of the sort that I had never experienced before. At the same time, I had friends who ended up in the ICU, and I knew people whose parents and other relatives had passed. My mother was dying of dementia, and my father had been in and out of the ICU a half-dozen times with myasthenia gravis, an autoimmune disease that can be fatal. This family dissolution had started before covid hit, but the pandemic made the implosion of my nuclear family seem worse and undoubtedly contributed to the failure of my heart’s pulse to stay true. 


Likewise, the wider arhuthmos some of us are feeling now began long before the novel coronavirus shut down ordinary life in March 2020. Statistics tell us that anxiety, stress, depression, and general mental unhealthiness have been steadily ticking up for years. This seems to suggest that something bigger has been going on for some time—a collective angst that seems to point to the darker side of modern life itself. 

Don’t get me wrong. Modern life has provided us with spectacular benefits—Manhattan, Boeing 787 Dreamliners, IMAX films, cappuccinos, and switches and dials on our walls that instantly illuminate or heat a room. Unlike our ancestors, most of us no longer need to fret about when we will eat next or whether we’ll find a safe place to sleep, or worry that a saber-toothed tiger will eat us. Nor do we need to experience an A-fib attack without help from an eager and highly trained young doctor, an emergency department, and an IV to pump hydration into our veins. 

But there have been trade-offs. New anxieties and threats have emerged to make us feel uneasy and arrhythmic. These start with an uneven access to things like emergency departments, eager young doctors, shelter, and food—which can add to anxiety not only for those without them but also for anyone who finds this situation unacceptable. Even being on the edge of need can make the heart gambol about.

Consider, too, the basic design features of modern life, which tend toward straight lines—verticals and horizontals. This comes from an instinct we have to tidy up and organize things, and from the fact that verticals and horizontals in architecture are stable and functional. 

All this straightness, however, doesn’t always sit well with brains that evolved to see patterns and shapes in the natural world, which isn’t horizontal and vertical. Our ancestors looked out over vistas of trees and savannas and mountains that were not made from straight lines. Crooked lines, a bending tree, the fuzzy contour of a grassy vista, a horizon that bobs and weaves—these feel right to our primordial brains. We are comforted by the curve of a robin’s breast and the puffs and streaks and billows of clouds high in the sky, the soft earth under our feet when we walk.

Not to overly romanticize nature, which can be violent, unforgiving, and deadly. Devastating storms and those predators with sharp teeth were a major reason why our forebears lived in trees and caves and built stout huts surrounded by walls. Homo sapiens also evolved something crucial to our survival—optimism that they would survive and prevail. This has been a powerful tool—one of the reasons we are able to forge ahead, forget the horrors of pandemics and plagues, build better huts, and learn to make cappuccinos on demand. 

As one of the great optimists of our day, Kevin Kelly, has said: “Over the long term, the future is decided by optimists.” 

But is everything really okay in this future that our ancestors built for us? Is the optimism that’s hardwired into us and so important for survival and the rise of civilization one reason for the general anxiety we’re feeling in a future that has in some crucial ways turned out less ideal than those who constructed it had hoped? 

At the very least, modern life seems to be downplaying elements that are as critical to our feelings of safety as sturdy walls, standing armies, and clean ECGs—and truly more crucial to our feelings of happiness and prosperity than owning two cars or showing off the latest swimwear on Miami Beach. These fundamentals include love and companionship, which statistics tell us are in short supply. Today millions have achieved the once optimistic dream of living like minor pharaohs and kings in suburban tract homes and McMansions, yet inadvertently many find themselves separated from the companionship and community that are basic human cravings. 

Modern science and technology can be dazzling and good and useful. But they’ve also been used to design things that hurt us broadly while spectacularly benefiting just a few of us. We have let the titans of social media hijack our genetic cravings to be with others, our need for someone to love and to love us, so that we will stay glued to our devices, even in the ED when we think we might be having a heart attack. Processed foods are designed to play on our body’s craving for sweets and animal fat, something that evolution bestowed so we would choose food that is nutritious and safe to eat (mmm, tastes good) and not dangerous (ugh, sour milk). But now their easy abundance overwhelms our bodies and makes many of us sick. 

We invented money so that acquiring things and selling what we make in order to live better would be faster and easier. In the process, we also invented a whole new category of anxiety—about money. We worry about having too little of it and sometimes too much; we fear that someone will steal it or trick us into spending it on things we don’t need. Some of us feel guilty about not spending enough of it on feeding the hungry or repairing our climate. Money also distorts elections, which require huge amounts of it. You may have gotten a text message just now, asking for some to support a candidate you don’t even like. 

The irony is that we know how to fix at least some of what makes us on edge. For instance, we know we shouldn’t drive gas-guzzling SUVs and that we should stop looking at endless perfect kitchens, too-perfect influencers, and 20-second rants on TikTok. We can feel helpless even as new ideas and innovations proliferate. This may explain one of the great contradictions of this age of arrhythmia—one demonstrated in a 2023 UNESCO global survey about climate change that polled 3,000 young people, aged 16 to 24, from 80 different countries. Not surprisingly, 57% were “eco-anxious.” But an astonishing 67% were “eco-optimistic,” meaning many were both anxious and hopeful.

Me too. 

All this anxiety and optimism have been hard on our hearts—literally and metaphorically. Too much worry can cause this fragile muscle to break down, to lose its rhythm. So can too much of modern life. Cardiovascular disease remains the No. 1 killer of adults, in the US and most of the world, with someone in America dying of it every 33 seconds, according to the Centers for Disease Control and Prevention. The incidence of A-fib has tripled in the past 50 years (possibly because we’re diagnosing it more); it afflicted almost 50 million people globally in 2016.


For me, after that initial attack on Martha’s Vineyard, the A-fib episodes kept coming. I charted them on my watch, the blips and pauses in my pulse, the moments when my heart raced at over 200 beats per minute, causing my chest to tighten and my throat to feel raw. Sometimes I tasted blood, or thought I did. I kept bicycling through the summer and fall of 2022, gingerly watching my heart rate to see if I could keep the beats from taking a sudden leap from normal to out of control. 

When an arrhythmic episode happened, I struggled to catch my breath as I pulled over to the roadside to wait for the misfirings to pass. Sometimes my mind grew groggy, and I got confused. It became difficult during these cardio-disharmonious moments to maintain my cool with other people. I became less able to process the small setbacks that we all face every day—things I previously had been able to let roll off my back.

Early in 2023 I had my heart checked by a cardiologist. He conducted an echocardiogram and had me jog on a treadmill hooked up to monitors. “There has been no damage to your heart,” he declared after getting the results, pointing to a black-and-white video of my heart muscle contracting and constricting, drawing in blood and pumping it back out again. I felt relieved, although he also said that the A-fib was likely to persist, so he prescribed a blood thinner called Eliquis as a precaution to prevent stroke. Apparently, during unnatural pauses in one’s heartbeat, blood can clot and send tiny, scab-like fragments into the brain, potentially clogging up critical capillaries and other blood vessels. “You don’t want that to happen,” said the cardiologist.

Toward the end of my heart exam, the doctor mentioned a possible fix for my arrhythmia. I was skeptical, although what he proposed turned out to be one of the great pluses of being alive right now—a solution that was unavailable to my ancestors or even to my grandparents. “It’s called a heart ablation,” he said. The procedure, a simple operation, redirects errant electric signals in the heart muscle to restore a normal pattern of beating. Doctors will run a tube into your heart, find the abnormal tissue throwing off the rhythm, and zap it with extreme heat, extreme cold, or (the newest option) electrical pulses. There are an estimated 240,000 such procedures a year in the United States.

“Can you really do that?” I asked.

“We can,” said the doctor. “It doesn’t always work the first time. Sometimes you need a second or third procedure, but the success rate is high.”

A few weeks later, I arrived at Beth Israel Hospital in Boston at 11 a.m. on a Tuesday. My first cardiologist was unavailable to do the procedure, so after being prepped in the pre-op area I was greeted by Andre d’Avila, a specialist in electrocardiology, who explained again how the procedure worked. He said that he and an electrophysiology fellow would insert long, snakelike catheters, containing wires tipped with a tiny ultrasound camera and a cauterizer, through the femoral veins in my groin; the cauterizer would be used to selectively and carefully burn the surfaces of my atrial muscles. The idea was to create patterns of scar tissue to block and redirect the errant electrical signals and restore a steady rhuthmos to my heart. The whole thing would take about two or three hours, and I would likely be going home that afternoon.

Moments later, an orderly came and wheeled me through busy hallways to an OR where Dr. d’Avila introduced the technicians and nurses on his OR team. Monitors pinged and machines whirred as moments later an anesthesiologist placed a mask over my mouth and nose, and I slipped into unconsciousness. 

The ablation was a success. Since I woke up, my heart has kept a steady beat, restoring my internal rhuthmos, even if the procedure sadly did not repair the myriad worrisome externalities—the demagogues, carbon footprints, and the rest. Still, the undeniably miraculous singeing of my atrial muscles left me with a realization that if human ingenuity can fix my heart and restore its rhythm, shouldn’t we be able to figure out how to fix other sources of arhuthmos in our lives? 

We already have solutions to some of what ails us. We know how to replace fossil fuels with renewables, make cities less sharp-edged, and create smart gizmos and apps that calm our minds rather than agitating them. 

For my own small fix, I thank Dr. d’Avila and his team, and the inventors of the ablation procedure. I also thank Prometheus, whose hubris in bringing fire to mortals literally saved me by providing the hot-tipped catalyst to repair my ailing heart. Perhaps this can give us hope that the human species will bring the larger rhythms of life into a better, if not perfect, beat. Call me optimistic, but also anxious, about our prospects even as I can now place my finger on my wrist and feel once again the steady rhuthmos of my heart.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/10/30/1106350/the-arrhythmia-of-our-current-age/amp/

AI: The Metabolic Mirage

Posted by timmreardon on 12/09/2025
Posted in: Uncategorized.

Six days after we published our first AI Bubble Index, Bloomberg validated what our report detected:
“A light has been shined on the complexity of the financing, the circular deals, the debt issues. OpenAI’s connections now look more like an anchor.”

That is exactly what we described as The Metabolic Mirage (image below). Every business decision makes sense in isolation:

  • Nvidia ships GPUs
  • Hyperscalers deploy billions to stay competitive
  • They fund AI startups, who buy Nvidia chips with vendor financing
  • Nvidia’s numbers look great and all players’ equities appreciate

Asset managers surveyed by Fortune magazine just confirmed this: “You can’t call it a bubble when you’re seeing tech companies deliver a massive earnings beat.”

Well, until one link breaks.

Since we published a week ago, the structural stress we measured in company financials has become visible in credit markets:

  • Oracle Credit Default Swap spreads: highest since the 2009 financial crisis
  • Banks: using insurance to hedge AI loan losses
  • OpenAI’s revenue-to-commitment gap: $207B (!)

We normally analyze individual companies and business units for investors and leadership teams to establish whether strategic, value creation or transformation plans can metabolize into performance. We applied the same methodology and math to the AI ecosystem to verify whether the massive CAPEX commitments can metabolize into ROI.

The result: a 132x Metabolic Overload score, measuring multiplicative stress across five infrastructure layers.
Capital deployment velocity is running 132 times faster than the physical infrastructure stack can absorb it.
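
The report’s inputs aren’t reproduced here, but “multiplicative stress across five infrastructure layers” can be read as the product of per-layer ratios of demanded capacity to deliverable capacity. The sketch below is purely illustrative arithmetic with invented layer names and numbers; it is not the DecisionDNA methodology or its data.

```python
# Illustrative only: a multiplicative overload score as the product of
# per-layer (demanded / deliverable) ratios. Layer names and numbers are
# invented; they are not the report's actual inputs.
layers = {
    "chips":         {"demanded": 10.0, "deliverable": 4.0},
    "data_centers":  {"demanded": 6.0,  "deliverable": 3.0},
    "power":         {"demanded": 5.0,  "deliverable": 2.5},
    "grid":          {"demanded": 4.0,  "deliverable": 2.0},
    "cooling_water": {"demanded": 3.0,  "deliverable": 1.8},
}

overload = 1.0
for name, layer in layers.items():
    ratio = layer["demanded"] / layer["deliverable"]
    overload *= ratio
    print(f"{name:>13}: stress ratio {ratio:.2f}x")

print(f"Multiplicative overload across all layers: {overload:.0f}x")
```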

Links to the report, sources & 7-day timeline in comments below.

#DecisionDNA #AIInfrastructure #MetabolicOverload

Article link: https://www.linkedin.com/posts/boykester_decisiondna-aiinfrastructure-metabolicoverload-activity-7403700934729560064-Sh-r?

When it all comes crashing down: The aftermath of the AI boom – Bulletin of the Atomic Scientists

Posted by timmreardon on 12/05/2025
Posted in: Uncategorized.

By Jeremy Hsu, December 5, 2025

Silicon Valley and its backers have placed a trillion-dollar bet on the idea that generative AI can transform the global economy and possibly pave the way for artificial general intelligence, systems that can exceed human capabilities. But multiple warning signs indicate that the marketing hype surrounding these investments has vastly overrated what current AI technology can achieve, creating an AI bubble with growing societal costs that everyone will pay for regardless of when and how the bubble bursts.

The history of AI development has been punctuated by boom-and-bust cycles (with the busts called AI winters) in the 1970s and 1980s. But there has never been an AI bubble like the one that began inflating around corporate and investor expectations since OpenAI released ChatGPT in November 2022. Tech companies are now spending between $72 billion and $125 billion per year each on purchasing vast arrays of AI computing chips and constructing massive data centers that can consume as much electricity as entire cities—and private investors continue to pour more money into the tech industry’s AI pursuits, sometimes at the expense of other sectors of the economy.

“What I see as a labor economist is we have starved everything to feed one mouth,” says Ron Hetrick, Principal Economist at Lightcast, a labor market analytics company. “These are now three years that we have foregone development in so many industries as we shove food into a mouth that’s already so full.”

That huge AI bet is increasingly looking like a bubble; it has buoyed both the stock market and a US economy otherwise struggling with rising unemployment, inflation, and the longest government shutdown in history. In September, Deutsche Bank warned that the United States could already be in an economic recession without the tech industry’s AI spending spree and cautioned that such spending cannot continue indefinitely. However it ends, the AI bubble’s most enduring legacy may be the global disruptions from any financial crisis that follows—and the societal costs already incurred from betting so heavily on energy-hungry data centers and AI chips that may suddenly become stranded assets.

Warning signs. Silicon Valley’s focus on developing ever-larger AI models has spurred a buildout of bigger data centers crammed with computing power. The staggering growth in AI compute demand would require tech companies to build $500 billion worth of data centers packed with chips each year—and companies would need to rake in $2 trillion in combined annual revenue to fund that buildout, according to a Bain & Company report. The report also estimates that the tech industry is likely to fall $800 billion short of the required revenue.

That shortfall is less surprising than it might seem. US Census Bureau data show that AI adoption by companies with more than 250 employees may have already peaked and begun declining or flattening out this year. Most businesses still don’t see a significant return on their investment when trying to use the latest generative AI tools: Software company Atlassian found that 96 percent of companies didn’t achieve significant productivity gains, and MIT researchers showed that 95 percent of companies get zero return from their pilot programs with generative AI. Separately, a paper by Nobel laureate Daron Acemoglu, an MIT professor in economics, estimates AI-driven productivity will produce only a “modest increase” in gross domestic product (GDP) of between 1.1 and 1.6 percent over the next 10 years.

Claims that AI can replace human workers on a large scale also appear overblown, or at least premature. When evaluating AI’s impact on employment, the Yale Budget Lab found that the “broader labor market has not experienced a discernible disruption since ChatGPT’s release 33 months ago,” according to the group’s analysis published in October 2025. Coincidentally, the largest tech companies spending billions of dollars on data centers have also led recent private-sector job cuts by laying off tens of thousands of their own workers—but observers such as Hetrick suggest that the tech companies are actually reducing workforces to help pay for their ongoing AI investments.

Another bubble warning sign: Silicon Valley’s accelerating spending spree on data centers and chips has outpaced what even the largest tech companies can afford. Companies such as Amazon, Google, Microsoft, Meta, and Oracle have already spent a record 60 percent of operating cash flow on capital expenditures like data centers and chips as of June 2025.

The financing ouroboros. Now, tech companies are increasingly resorting to “creative finance” such as circular financing deals to continue raising money for data centers and chips, says Andrew Odlyzko, professor emeritus of mathematics at the University of Minnesota, who has studied the history of financial manias and previous bubbles around technologies like railroads. Such creative finance “generates these very complex structures which nobody fully understands until things blow up and then various people are left to pick up the pieces,” he says. “I’m seeing more of that, and that’s what is getting me concerned; this part is typical of bubbles.”

For example, Meta sold $30 billion of corporate bonds in late October and also secured another $30 billion in off-balance-sheet debt through a joint venture structured by Morgan Stanley, arrangements that can hide the risks and liabilities of such deals. The swift accumulation of $100 billion in AI-related debt per quarter among various companies “raises eyebrows for anyone that has seen credit cycles,” said Matthew Mish, head of credit strategy at UBS Investment Bank, in a Bloomberg interview.

Business journalists and analysts have also called attention to the growing number of “circular finance” deals fueling the AI bubble, with one example being AI chipmaker NVIDIA investing $100 billion in OpenAI even as the latter plans to buy more NVIDIA chips. A recent Morgan Stanley presentation showing the messy entanglement of AI-related deals, illustrated by arrows connecting various tech companies, “resembled a plate of spaghetti,” according to the Wall Street Journal.

As a result, a growing number of business leaders and institutions have voiced alarm about the stock market bubble building around AI, including the Bank of England and the International Monetary Fund. Even bullish tech and financial CEOs such as Amazon’s Jeff Bezos, JPMorgan Chase’s Jamie Dimon, Google’s Sundar Pichai, and OpenAI’s Sam Altman have acknowledged the existence of an AI bubble, despite their optimism about the advance of AI generally.

After the crash. If the stock market craters after a bursting of the AI bubble, it won’t just be financial institutions and venture capitalists losing money. Some 62 percent of Americans who reported owning stocks in 2025, according to a Gallup survey, could also be affected. Another national survey by The BlackRock Foundation and Commonwealth nonprofits also shows that approximately 54 percent of people with incomes between $30,000 and $80,000 have investment accounts. All these people stand to lose much of their investments if the AI bubble pops and market exuberance evaporates. The Economist has highlighted how the top 20 companies on the S&P 500 stock market index currently account for 52 percent of total market value, with the top 10 especially dominated by AI-related companies.

The market mayhem brought on by a deflation of the AI bubble could also mean economic disruption worldwide. Writing for The Economist, Gita Gopinath, former chief economist for the International Monetary Fund, warned that a bursting of the AI bubble on the magnitude of the dot-com bubble collapse in 2000 could have “severe global consequences,” including the wipeout of more than $20 trillion in wealth for American households and $15 trillion in wealth for foreign investors.

Similarly, the International Monetary Fund’s latest World Economic Outlook report described how “an abrupt repricing of tech stocks could be triggered by disappointing results on earnings and productivity gains related to artificial intelligence, marking an end to the AI investment boom and the associated exuberance of financial markets, with the possibility of broader implications for macrofinancial stability.”

If the AI bubble pops, the US government will likely turn to its central bank, the Federal Reserve, to stabilize the wider economy by injecting huge amounts of cash into the financial system, as it did after the 2008 financial crisis, Odlyzko says. But he warned that a new government bailout of the financial system would mean another significant jump in the national debt and increased wealth inequality, because taxpayer dollars would be once again focused on stabilizing a sector in which the wealthiest individuals will benefit disproportionately from recovering corporate profits and rebounding share prices. A repeat of the financial bailout cycle that privatizes the gains of wealthy risk-takers while socializing the losses to everyone else is “likely to lead to even more [political] polarization and perhaps true populist movements,” Odlyzko says.

The United States is less well equipped to handle the AI bubble if it were to burst today because of the weakened US dollar, political pressure on the Federal Reserve’s institutional independence, limitations on economic growth due to President Trump’s sweeping tariffs and trade wars, and record levels of government debt that could constrain attempts to use fiscal stimulus to right-size a sinking economy, Gopinath wrote in The Economist.

Not every major country pursuing lofty AI ambitions is equally vulnerable to such a crash. For example, China faces less risk than Silicon Valley should the AI boom go bust, even if the event would likely “deflate some froth on both sides,” says Lizzi C. Lee, a fellow on Chinese Economy at the Center for China Analysis at the Asia Society Policy Institute in New York. That is because China’s pragmatic approach to deploying AI more closely resembles “industrial electrification” rather than “funding the dot-com boom,” she says.

“China is structurally less exposed to the ‘AI-for-AI’ bubble because its policy and investment logic centers on integrating AI into the real economy—manufacturing, logistics, public services—rather than frontier models for their own sake,” says Lee. “So, the focus is more on measurable productivity gains, not speculative valuations.”

Tech skeletons. Previous financial bubbles around technologies such as railroads and the internet left behind some functional physical infrastructure, which proved useful for newcomers that swooped in afterward to establish new businesses. But the sudden end of the AI bubble may not yield that kind of silver lining; should the AI boom not produce the requisite income to support continued investment in it, tech companies could find themselves with plenty of underutilized data centers and chips.

Some AI-focused data centers could be repurposed to run less intensive computing workloads. But AI chips usually lose their economic value within just a few years as they fall behind the latest chip technologies, and they also experience rapid physical deterioration from running typical AI workloads, writes Mihir Kshirsagar, Technology Policy Clinic director at Princeton University. Shipments of AI-related chips for data centers reached 3.85 million units in 2023 alone, and that number could grow to 7.9 million by 2030.

A sudden glut of “three-year-old chips at firesale prices” won’t enable newcomers to challenge the entrenched market dominance of existing tech giants, given that the latter can easily afford to run the latest AI hardware, according to Kshirsagar. By comparison, the dot-com bubble collapse allowed newcomers to buy internet fiber at bargain-bin prices, and those physical assets lasted for decades.

Stranded energy infrastructure assets associated with the end of the AI bubble could pose an even bigger complication: Historic electricity demand from the rapid buildout of data centers has spurred utility companies to commit billions of dollars to building new power transmission lines, natural gas pipelines, and power plants, but someone will need to pay the costs of all that energy infrastructure buildout if AI demand fizzles.

Paying for power. Data centers currently represent the fastest-rising source of power demand for the United States, and the electricity needs of individual data center campuses are also growing to gargantuan proportions. Tech companies have rushed to build new gigawatt-scale data centers such as Meta’s “Hyperion” data center in Louisiana, which would consume twice as much electricity as the entire city of New Orleans. Meanwhile, a new Amazon data center campus in Indiana will require as much electricity as half of all homes in the state, or approximately 1.5 million households.

To meet that demand, utility companies are making a projected $1.4 trillion investment in electricity infrastructure between 2025 and 2030. And they are paying for it by raising electricity rates for all ratepayers, including households and small businesses. Utility companies have already gotten approval to increase rates by approximately $29 billion nationwide in the first half of 2025, compared to just $12 billion requested and approved in the first half of 2024. This is taking place while US residential electricity costs have jumped by almost 30 percent on average since 2021, according to a report by the nonprofit PowerLines.

“If you believe the projections here, we’re in the early stages of this buildout for industrial-scale computing, and therefore the infrastructure needed to power these facilities is just starting to get built, which means we’re all just starting to pay for it,” says Ari Peskoe, director of the Electricity Law Initiative at Harvard Law School. “So, there’s a possibility that the consumer impacts get a lot worse, unless there’s significant reforms of how utilities spread the costs of infrastructure.”

There is already some evidence showing that data center demand for power is driving up local electricity costs. A Bloomberg investigation found that areas of the country with “significant data center activity” saw wholesale electricity prices soar by as much as 267 percent for a single month compared to five years ago. More than 70 percent of regions that saw price increases were located within 50 miles of such data center clusters.

Tech companies are helping to finance some of the energy infrastructure expansion, especially by using power purchase agreements to bring more power generation into the mix. For example, Microsoft and Google have signed power purchase agreements with energy companies to reopen the nuclear power plants at Three Mile Island in Pennsylvania and the Duane Arnold Energy Center in Iowa, respectively.

But utility companies and their other ratepayers still bear the brunt of expenses for building new power plants, local power lines and transformers, and transmission lines to carry electricity across longer distances. Peskoe and Eliza Martin, a Legal Fellow at Harvard’s Environmental & Energy Law Program, reviewed almost 50 regulatory proceedings related to data center utility rates to show how those processes can shift the utility costs to the public.

If the AI bubble pops and many data centers shut down, much of the local energy infrastructure “is probably mostly unnecessary,” says Peskoe. But new power plants could potentially be repurposed for other energy customers if the demand is there, and new transmission lines that carry electricity across long distances are particularly helpful under a variety of future conditions, he says.

Some state public utility commissions have begun working with utility companies to implement measures that would make data centers pay higher electricity rates and interconnection costs for using the electricity grid, along with exit fees that data center owners would have to pay if they cease operations before contracts expire. Nearly 30 states have proposed or approved new electricity rates for large customers such as data centers, according to a database maintained by the Smart Electric Power Alliance and North Carolina Clean Energy Technology Center.

But energy infrastructure development costs associated with data centers could still be “socialized” and borne by ordinary utility customers if projects don’t have those protections in place, Peskoe says. “I’m sure there would be some utilities that, if there were a burst of the bubble, would probably go to regulators and say, ‘Hey, we want to recover the cost of these facilities from everyone,’” he says.

The unsustainable bubble. The timing of an AI bubble implosion could also shape its long-term impact on the US power grid’s energy mix and carbon emissions. Data center developers have purchased power from, and sometimes made direct investments in, a diverse array of sources, including renewable power, along with nuclear power and energy storage solutions—and yet half of the rising power demand from US data centers is still likely to be supplied through natural gas generation by 2030, according to an S&P Global report.

This is especially problematic for attempts to mitigate climate change, given that natural gas power plants produce carbon dioxide while natural gas wells and pipelines can leak methane as an especially potent greenhouse gas. But that has not stopped some tech companies from rushing to install dozens of gas turbines as on-site power plants for data centers, including Elon Musk’s xAI company and the Project Stargate joint venture involving OpenAI and Oracle. This AI-driven demand for natural gas generation is occurring even as the United Nations’ Emissions Gap Report 2025 warns that the world already faces escalating climate risks from its failures to limit global warming.

Like financial crises of the past, an abrupt end to the AI bubble could inflict considerable economic pain on millions of people worldwide. But the alternative is the prolonging of an AI bubble that is increasingly unsustainable in both the financial and environmental senses, with the winners mainly being some of the wealthiest companies on the planet and their investors.

“Ultimately, for society’s sake, it would be a wonderful thing the faster this thing goes, because very few people are benefiting from it,” says Hetrick, the labor economist at Lightcast. “Had we spread the wealth and invested in various industries, who knows how many innovations we could have come up with by now while we’ve been incinerating this money.”

Article link: https://thebulletin-org.cdn.ampproject.org/c/s/thebulletin.org/2025/12/when-it-all-comes-crashing-down-the-aftermath-of-the-ai-boom/amp/

Why Digital Transformation—And AI—Demands Systems Thinking – Forbes

Posted by timmreardon on 12/02/2025
Posted in: Uncategorized.

By Kevin Cushnie, Forbes Councils Member, Forbes Technology Council

Nov 25, 2025 at 10:45am EST

Kevin Cushnie leads Product Engineering, Innovation and Transformation for MC Systems.

Global investment in digital transformation is on track to approach $4 trillion by 2027, according to IDC. Yet research from BCG shows that up to 70% of these initiatives fail, and Gartner found that less than half meet or exceed business outcome targets.

The persistence of these failure rates, despite decades of experience and monumental investment, points to a fundamental problem: applying linear, project-based thinking to a challenge that is inherently complex and interconnected.

This disconnect has become especially critical as organizations race to integrate AI into their operations. These technologies go beyond simply automating tasks; they reshape information flows, decision-making authority, and organizational knowledge itself.

Without a systems approach, AI implementations can become another expensive failure in the transformation graveyard.

Why Project-Based Thinking Leads To Failure

The traditional approach treats digital transformation as a checklist: gather requirements, select technology, deploy and train users. This model typically confines the effort to the IT department, treating it as a purely technical task with a defined endpoint. The result is siloed technology investments that collide with organizational reality.

This approach runs into several obstacles: organizational culture, change management failures, and corporate inertia, to name a few.

Despite this, organizations continue to underinvest in change management, with only 30% having a focused strategy as part of their digital transformation program, according to TEKsystems research. Investment is being poured into what we can see and measure (technology) while ignoring the invisible forces that determine success (people, culture and processes).

The value leaks out through these unattended human and cultural components, producing low user adoption and missed business outcomes.

Seeing Your Organization As A Living Ecosystem

Systems thinking offers a radically different lens. Rather than viewing your organization as a collection of departments executing tasks, it recognizes that you’re managing an intricate ecosystem where components—people, processes, culture, and technology—are interdependent and constantly influencing each other.

Three principles matter most:

1. Relationships between components are as important as the components themselves. A brilliant AI tool means nothing if employees don’t trust it or understand when to override its recommendations.

2. Causality is circular, not linear. The outcome of one process becomes the input for another, creating continuous cycles. Employee resistance to a new system, for instance, might prompt leadership to improve communication, which then reduces resistance.

3. New behaviors emerge from interactions that you cannot predict by studying individual parts in isolation.

Applied to digital transformation, this means recognizing three interconnected subsystems: the human system (culture, fears, skills, trust), the process system (workflows, workarounds, operational realities) and the technology system (tools, platforms, integrations).

Change any one element, and ripples propagate throughout the entire structure.
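
To make the circular-causality point concrete, here is a minimal sketch in Python of the resistance-and-communication loop described above. The coefficients are purely illustrative assumptions, not measurements from any study: leadership communication ramps up in response to observed resistance, and resistance eases as that communication lands.

```python
# Minimal sketch (illustrative coefficients only): circular causality between
# employee resistance and leadership communication during a transformation.
def simulate(steps=12, change_pressure=1.0):
    resistance, communication = 0.5, 0.1  # starting levels on a 0..1 scale
    history = []
    for month in range(steps):
        # Leadership responds to observed resistance with more communication.
        communication += 0.4 * (resistance - communication)
        # Resistance grows with change pressure but shrinks as communication lands.
        resistance += 0.3 * change_pressure * (1 - resistance) \
                      - 0.5 * communication * resistance
        resistance = min(max(resistance, 0.0), 1.0)
        history.append((month, round(resistance, 2), round(communication, 2)))
    return history

for month, r, c in simulate():
    print(f"month {month:2d}  resistance={r:.2f}  communication={c:.2f}")
```

In this toy run, resistance climbs at first and then declines as the communication loop closes, which is exactly the kind of behavior a linear, one-shot rollout plan never anticipates.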

For AI integration specifically, systems thinking forces leaders to map where AI intersects with human decision-making authority. Where should the algorithm operate autonomously and where must humans retain override capability? How do we design feedback loops so human expertise continuously improves AI performance, while AI-generated insights enhance human capability?

These are systemic questions that determine whether AI augments your workforce or alienates it. Here are three ways to take this from theory to practice: 

1. Map your ecosystem before you deploy.

Ecosystem mapping goes beyond traditional organizational charts to reveal how work actually gets done. It visualizes entities, relationships, value flows and, crucially, the workarounds people have created to manage inefficiencies.

These informal processes signal system stress and reveal where new technology will face friction. For AI implementations, mapping shows you which business processes genuinely benefit from automation versus where human judgment remains essential.

Leaders who map first identify strategic intervention points before committing capital, rather than discovering obstacles after deployment.
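
A lightweight way to begin such a map is to record entities and value flows as edges and to flag informal workarounds explicitly. The sketch below uses invented team and system names purely for illustration; a real map would be built from interviews and process data rather than a hard-coded list.

```python
# Toy ecosystem map (all names hypothetical): nodes are teams or systems,
# edges are value flows, and "workaround" edges mark the informal processes
# that signal system stress and likely friction for new technology.
edges = [
    ("Sales", "CRM", "official"),
    ("Sales", "SharedSpreadsheet", "workaround"),    # bypasses the CRM
    ("CRM", "Finance", "official"),
    ("SharedSpreadsheet", "Finance", "workaround"),  # manual re-keying
    ("Finance", "ERP", "official"),
]

def friction_points(edges):
    """Nodes touched by workaround flows: candidate intervention points."""
    nodes = set()
    for source, target, kind in edges:
        if kind == "workaround":
            nodes.update((source, target))
    return sorted(nodes)

print("Workaround hotspots:", friction_points(edges))
# -> Workaround hotspots: ['Finance', 'Sales', 'SharedSpreadsheet']
```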

2. Reframe resistance as system intelligence.

When employees push back against change, the instinct is to overcome their objections. Systems thinking applies a different approach, seeing resistance as vital system feedback.

People naturally protect stability, predictability and control—it’s how human systems maintain equilibrium.

When Clorox embarked on its five-year, $580 million AI-powered transformation (subscription required), leadership made a strategic choice to upskill staff rather than replace them, as reported by The Wall Street Journal. This decision addressed the root cause of resistance (fear of job vulnerability) and turned a potential balancing force into a reinforcing one. The result was a culture of curiosity and problem-solving that positioned Clorox as an industry model for AI transformation.

Resistance often reveals legitimate concerns about feasibility, unintended consequences or flawed implementation plans. By treating pushback as a diagnostic tool rather than an obstacle, you improve the initiative itself while also building trust.

3. Build continuous feedback loops.

In complex systems, long delays between cause and effect hide critical problems. Annual surveys provide outdated data for initiatives that need real-time adjustment. Instead, embed continuous feedback mechanisms—pulse surveys, sentiment analysis and agile retrospectives.

For AI systems specifically, this means monitoring not just technical performance but human trust, override rates and decision quality. When you can identify patterns immediately and adjust before small issues cascade, you enhance organizational agility while demonstrating that employee input actively shapes decisions.
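
As a rough illustration of what that monitoring could look like in practice, the sketch below computes a weekly override rate and a simple decision-quality score from a hypothetical log of AI-assisted decisions; the field names and figures are invented.

```python
# Hypothetical log of AI-assisted decisions: was the AI recommendation
# overridden by a human, and was the final decision later judged correct?
log = [
    {"week": 1, "overridden": True,  "correct": True},
    {"week": 1, "overridden": False, "correct": True},
    {"week": 2, "overridden": True,  "correct": False},
    {"week": 2, "overridden": False, "correct": True},
    {"week": 2, "overridden": False, "correct": True},
]

def weekly_metrics(log):
    weeks = {}
    for record in log:
        w = weeks.setdefault(record["week"], {"n": 0, "overrides": 0, "correct": 0})
        w["n"] += 1
        w["overrides"] += record["overridden"]
        w["correct"] += record["correct"]
    return {
        week: {
            "override_rate": round(w["overrides"] / w["n"], 2),
            "decision_quality": round(w["correct"] / w["n"], 2),
        }
        for week, w in weeks.items()
    }

# A rising override rate or falling decision quality is the early-warning
# signal to investigate before small issues cascade.
print(weekly_metrics(log))
```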

From Project Manager To Systems Architect

The contrast between Kodak and Domino’s illustrates what’s at stake.

Kodak invented the first digital camera, but couldn’t pivot away from its film-based business model. The company’s structure, incentives and culture acted as a powerful balancing loop that actively resisted change.

Conversely, Domino’s holistically re-engineered its entire customer journey and operational model, redefining itself as a technology company that happens to sell pizza. That systemic alignment drove genuine transformation.

Leaders must evolve from managing isolated projects to architecting learning organizations—building adaptive capacity for continuous evolution rather than executing one-time initiatives.

Systems thinking isn’t just a theory but a strategic necessity in an era where AI, automation and digital platforms fundamentally reshape how organizations create value. The challenge for technology leaders is to shift from managing parts to orchestrating the whole.



Article link: https://www.forbes.com/councils/forbestechcouncil/2025/11/25/why-digital-transformation-and-ai-demands-systems-thinking/

How artificial intelligence impacts the US labor market – MIT Sloan

Posted by timmreardon on 12/01/2025
Posted in: Uncategorized.

by Seb Murray

 Oct 9, 2025

What you’ll learn: 

  • AI adoption is associated with faster company growth in revenue, profits, employment, and profitability.
  • Exposure to AI is greatest in higher-paying roles that involve information processing and analysis.
  • In the 2014–2023 period, AI-exposed roles did not experience job losses relative to other roles, due to offsetting factors.

Artificial intelligence is often portrayed as a job killer, but new research tracking AI adoption from 2010 to 2023 paints a more nuanced picture: AI’s impact is often on specific tasks within jobs rather than on whole occupations.

A new study co-authored by MIT Sloan associate professor Lawrence Schmidt found that when AI can perform most of the tasks that make up a particular job, the share of people in that role within a company falls by about 14%.

But when AI’s impact is concentrated in just a few tasks within a role — leaving other responsibilities untouched — employment in that role can grow. The reason? With some of their tasks automated, workers can focus on activities where AI is less capable, such as critical thinking or coming up with new ideas.

Even workers in high-wage roles heavily exposed to AI — positions near the top of the pay scale — saw their share of total employment grow by about 3% over five years. That’s because AI boosted firm productivity: Companies that used the technology grew faster, which helped sustain or even expand head count in high-exposure positions.

The takeaway for employers rolling the technology out to their workforce: “Firms that adopt AI don’t necessarily need to shed workers; they can grow and make more stuff and use workers more efficiently than other firms,” said Schmidt, an economist who studies how finance and the job market intersect.

More about the methodology

The team used advances in natural language processing to analyze the task requirements and work descriptions of about 58 million LinkedIn profiles and 14 million job postings.

They compared them to roughly 20,000 work activities listed in O*NET, the U.S. Department of Labor’s database, identifying which tasks AI is most capable of performing. 

They then calculated the average exposure for all of the tasks in each job and measured how evenly that exposure was distributed, showing whether AI could handle nearly every task in a role or only a few.
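
The study’s published measures are considerably more sophisticated, but the basic idea can be sketched in a few lines: score each task in a job for AI exposure, then summarize both the average exposure and how broadly it is spread across the job’s tasks. The scores below are invented for illustration and are not the paper’s data.

```python
# Simplified sketch (not the authors' actual methodology): summarize per-task
# AI-exposure scores for an occupation.
def job_exposure(task_scores, cutoff=0.5):
    avg = sum(task_scores) / len(task_scores)
    breadth = sum(score >= cutoff for score in task_scores) / len(task_scores)
    return {"avg_exposure": round(avg, 2), "share_of_tasks_exposed": round(breadth, 2)}

# Hypothetical occupations: one where AI touches nearly every task,
# one where exposure is concentrated in just a few tasks.
print(job_exposure([0.9, 0.8, 0.7, 0.85]))       # broad exposure: role tends to shrink
print(job_exposure([0.9, 0.2, 0.1, 0.15, 0.2]))  # narrow exposure: role can grow
```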

Read more: Artificial Intelligence and the Labor Market

The study, co-authored with Menaka Hampole of the Yale School of Management and Dimitris Papanikolaou and Bryan Seegmiller of Northwestern University’s Kellogg School of Management, used natural language processing to construct new measures of workers’ task-level exposure to AI and machine learning from 2010 to 2023, capturing variation across firms, occupations, and time.

What happened to jobs — and why

As of December 2023, AI had not caused major changes in total employment. Losses in highly exposed roles were largely offset by gains in other jobs — and by hiring growth at firms that used AI to become more productive. But the findings predate the widespread rise of generative AI, which the authors note could shift such patterns in the future.

The study also shows that companies can see substantial gains by putting AI to work — and that this growth translates into jobs. Firms that use AI extensively tend to be larger and more productive, and pay higher wages. They also grow faster: A large increase in AI use is linked to about 6% higher employment growth and 9.5% more sales growth over five years.

Which jobs shrank — and which grew 

The analysis shed light on who is most at risk from automation. The researchers found, perhaps surprisingly, that exposure to AI is greatest in high-paying roles, which often involve information processing and analysis — tasks that AI can already do well.

That contrasts with earlier technology waves, such as computerization and factory automation in the 1980s through early 2000s, which — as previous research has shown — mainly displaced middle-skill, routine jobs, like clerical work and basic bookkeeping.

Jobs that shrank:

  • Top-paying roles (like management analysts, aerospace engineers, and computer and information research scientists): Employment within firms fell by about 3.5% over five years. These more lucrative roles are more likely to be in firms that adopt AI; the productivity gains that those firms realize more than offset the losses. As a result, the highest-paid jobs still make up a slightly bigger share of employment than those less exposed to AI.
  • Business, financial, architecture, and engineering jobs: These shrank — by about 2% to 2.5% over five years — because they have a high share of tasks that match what AI can do. Among them, business and financial jobs are more exposed to AI than architecture and engineering roles, but they’re also more likely to be in firms that use AI heavily, which helps offset some of the losses.

Jobs that grew:

  • Legal jobs: These roles gained the most from AI. They face little direct impact from automation and are often in firms that use AI heavily, leading to a predicted 6.4% increase in employment.

Why even low-exposure jobs are at risk

But not all low-exposure roles are safe. Even jobs with little direct impact from AI can shrink if their employers are slow to adopt the technology. The study found that food service jobs wilted relative to other jobs not because AI can do the work but because employers that don’t use AI grow more slowly, reducing demand for workers.

Relatedly, workers who are in occupations involving tasks amenable to AI but work at employers that do not adopt AI are not necessarily better off, since their employers grow less rapidly than their AI-adopting peers.

Generative AI could change the equation

However, the research reflects an earlier phase of AI adoption: The dataset runs up to 2023, so most of it predates the rapid rise of generative AI tools like ChatGPT, which launched in late 2022.

According to Schmidt, it is still an open question as to whether the patterns seen so far will hold in the generative AI era. Because generative AI can learn from fewer examples, he said, it could take on a wider range of tasks, leaving fewer for humans to shift to.

“It’s the million-dollar question,” he said. “Generative AI might reduce the gains from shifting work to people, but it could also make everyone more productive — a rising tide that lifts all boats.”

What employers can do

For Schmidt, the results highlight how much management decisions can shape AI’s impact.

“One of the important roles of firms in minimizing the displacements from AI is to really lean into task reallocation: Take your existing workforce, work with the AI, and make sure time is being reallocated toward tasks where people have a comparative advantage,” he said.

The key is to embrace the technology in the right way, Schmidt said.

“We shouldn’t be afraid of it,” he said. “We should learn how to use it and leverage those productivity gains — not as a substitute for things people are good at, like critical thinking and coming up with new ideas.”

To help companies get the most out of AI, Schmidt outlined a few priorities for business leaders:

  • Encourage hands-on use. Do not wait until AI is rolled out across the company; give employees the chance to try different tools now. Early, active use builds skills and confidence.
  • Pick the right tools and use them well. Make sure teams choose the versions and features that fit the work, and use them often enough to see real gains.
  • Think beyond efficiency. Use AI not only to save time on routine work but also to open up new possibilities, such as tackling complex problems or generating fresh ideas.

Lawrence Schmidt is an applied economist working at the intersection of finance and macro-labor. His research examines fundamental risk factors that impact the value of human capital and the causes and consequences of imperfect risk-sharing in labor and financial markets. Schmidt’s research tackles these questions by a combination of quantitative models and empirical work, which leverages, and often creates, novel microeconomic datasets, advanced econometric methods, and cutting-edge tools for textual analysis.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/how-artificial-intelligence-impacts-us-labor-market?

Will quantum computing be chemistry’s next AI?

Posted by timmreardon on 12/01/2025
Posted in: Uncategorized.

Despite billion-dollar investments, the technology faces hurdles that keep its future uncertain

by Elizabeth Walsh

November 13, 2025 

KEY INSIGHTS

  • Chemistry problems could be among the first to be solved via quantum computers.
  • Companies are already demonstrating how the technology can solve chemistry problems.
  • Applications important to industry will require millions of qubits, but other challenges must first be overcome.

At the 2025 Quantum World Congress, a glittering bronze chandelier with protruding wires hung suspended in a glass case. Around it, a crowd queued to snap photos. For many people there, it was the closest they’ve ever come to a quantum computer.

The meeting, held in September just outside Washington, DC, attracted hundreds of researchers, investors, and executives to talk about technology that could revolutionize computing. But throughout the presentations of breakthroughs that are bringing the potential of quantum computing closer to reality, an uncomfortable truth often went unspoken: the exotic-looking quantum machines have yet to outperform their classical counterparts.

For decades, quantum computing has promised advances in fields as varied as cryptography, navigation, and optimization. And the field where it could bring advantages over classical computing soonest is chemistry. Chemical problems are suited to the technology because molecules are themselves quantum systems. And, in theory, the computers could simulate any part of a quantum system’s behavior.

Artificial intelligence is a once speculative technology that has become crucial in chemistry, and quantum computing is following a similar trajectory, says Alán Aspuru-Guzik, a professor of chemistry at the University of Toronto and senior director of quantum chemistry at the computer chip giant Nvidia. It took decades for AI to go from an uncertain future to a multibillion-dollar industry. Quantum computing may require a similarly long runway between early funding and commercial adoption, he says.

But while the billions of dollars in investment pouring into the quantum computing market this year underline quantum’s potential, experts say the technology has yet to bring practical benefits. The runway is strewn with both hardware and software problems that need to be remedied before the promise can ever be realized. Profits for builders of the computers and the companies that might use them could still be decades away.

What is a quantum computer?

First proposed in the early 1980s by physicist Richard Feynman, quantum computers harness the principles of quantum mechanics—wave-particle duality, superposition, entanglement, and the uncertainty principle—to solve problems. Using qubits—units of information that can be built from different materials and exist in multiple states at once—a quantum computer can process a lot of data in parallel.

Classical computers can model small numbers of qubits by brute-force calculation, but the resources required to crunch those numbers grow exponentially with every added qubit. Classical computers quickly fall behind.
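
A quick calculation shows why. A brute-force, state-vector simulation of n qubits has to track 2^n complex amplitudes, roughly 16 bytes each at double precision, so the memory needed grows by a factor of about 1,000 for every 10 qubits added:

```python
# Memory needed to hold a full n-qubit state vector (2**n complex amplitudes,
# 16 bytes each at double precision).
def human(nbytes):
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.0f} EiB"

for n in (10, 20, 30, 40, 50):
    print(f"{n:2d} qubits -> {2**n:,} amplitudes, ~{human(2**n * 16)} of memory")
```

By around 50 qubits the state vector alone would need petabytes of memory, which is why exact classical simulation falls behind so quickly.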

In 1998, a team of researchers from the University of California, Berkeley; Massachusetts Institute of Technology; and Los Alamos National Laboratory built the first quantum computer using only 2 qubits. Today’s devices, made by a handful of companies, now reach 100 or more qubits, which are contained on chips that resemble those of classical computers.

A team at the Cleveland Clinic has modeled the solvent effects of methanol, ethanol, and methylamine using quantum hardware with an algorithm that samples electron energy. But the model still struggles to capture weak forces like hydrogen bonding and dispersion, says Kenneth Merz, a quantum molecular scientist at the institution who was involved in the study.

Researchers are also developing algorithms that address specific chemical problems. Using a new quantum algorithm, scientists at the University of Sydney achieved the first quantum simulation of chemical dynamics, modeling how a molecule’s structure evolves over time rather than just its static state. The quantum computer company IonQ developed a mixed quantum-classical algorithm capable of accurately computing the forces between atoms, and Google recently announced an algorithm that could someday be used for analyzing nuclear magnetic resonance data.

The South Korean quantum algorithm start-up Qunova Computing has built a faster, more accurate version of the variational quantum eigensolver (VQE) algorithm, says CEO Kevin Rhee. With it, Qunova modeled nitrogen reactions in molecules important for nitrogen fixation. In tests, the method was almost nine times as fast as one run on a classical computer, he says.
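
VQE is a hybrid loop: a quantum processor prepares a parameterized trial state and estimates its energy, while a classical optimizer adjusts the parameters to push that energy toward the ground state. The toy below runs the same loop entirely classically on a one-qubit Hamiltonian with made-up coefficients; it illustrates the structure of the algorithm, not Qunova’s implementation.

```python
# Purely classical toy of the VQE loop (illustrative Hamiltonian, not any
# vendor's algorithm): minimize the energy of a parameterized trial state.
import numpy as np
from scipy.optimize import minimize_scalar

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X  # toy single-qubit Hamiltonian

def ansatz(theta):
    """Trial state produced by a single Ry(theta) rotation applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return float(psi @ H @ psi)

result = minimize_scalar(energy, bounds=(0.0, 2 * np.pi), method="bounded")
print("VQE estimate :", round(result.fun, 4))                # ~ -1.118
print("Exact ground :", round(np.linalg.eigvalsh(H)[0], 4))  # ~ -1.118
```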

Quantum computers are also beginning to be used to model proteins. With the aid of classical processors, a 16-qubit computer found potential drugs that inhibit KRAS, a protein linked to many cancers. And IonQ and the software company Kipu Quantum simulated the folding of a 12-amino-acid chain—the largest protein-folding demonstration on quantum hardware to date.

Quantum’s advantage is still elusive

But many of these use cases don’t claim quantum advantage—the idea that a task can be done better, faster, or cheaper than with classical methods. And the problems being worked on now are too narrow to benefit industry, says Philipp Harbach, head of digital innovation at the group science and technology office of Merck KGaA. “For academia, it’s about proving the technology. For us, it’s proving the value,” he says.

Many algorithms are restricted in what they can do because getting many qubits to work together is still difficult, Kais says. While companies like Qunova claim their algorithms can successfully handle up to 200 qubits, many chemistry problems will require substantially more.

Modeling cytochrome P450 enzymes or iron-molybdenum cofactor (FeMoco) are the kinds of tasks industrial researchers would like to see quantum computing take on. These are complex metalloenzymes that are important to metabolism and nitrogen fixation, respectively, and are difficult for classical computers to simulate.

In 2021, Google estimated that about 2.7 million physical qubits would be needed to model FeMoco; other studies around that time made similar estimates for P450. The French start-up Alice & Bob announced in October that its qubits could reduce the total requirement to a little under 100,000, still far more than what today’s hardware and algorithms can offer.

Tasks such as simulating large biomolecules and designing novel polymers, battery materials, superconductors, and catalysts each would require a similar number of qubits. But scaling quantum systems isn’t easy, because qubits are extremely fragile and easily lose their quantum states.

In the meantime, some companies are developing “quantum-inspired” algorithms, which take techniques that work on a quantum computer and run them on a classical one to solve similar problems, IBM’s Garcia says. Fujitsu, for example, is creating quantum-inspired software to discover a new catalyst for clean hydrogen production, and Toshiba is making optimization algorithms for choosing the best answer in a large dataset. But the inspired algorithms can’t fully replicate a quantum computer, she says.

The fault in our quantum computers

The reason quantum computers have largely failed to do anything better than a classical computer comes down to how qubits work, says Philipp Ernst, head of solutions at PsiQuantum, a quantum computer developer.

The number of qubits in a quantum computer matters. The more qubits it has, the greater its processing power and potential to run complex algorithms that solve larger problems. But qubit quality is just as important: a quantum computer may contain many qubits, but if they’re unstable or can’t interact with one another, the computer doesn’t have much practical use, Ernst says.

Scientists say we are in the noisy intermediate-scale quantum (NISQ) era, which is characterized by low qubit counts and a sensitivity to the environment that causes high error rates and makes computers unreliable.

Quantum noise comes from a lot of places, including thermal fluctuations, electromagnetic interference, material disturbances, and other interactions with the environment. Any amount of noise can mess with qubits by causing them to lose either their superposition or their entanglement.

For problems like modeling protein folding, noisy qubits are a big issue. “If you try to model 10–15 amino acids, all hell breaks loose. The errors kill you,” Cleveland Clinic researcher Merz says.

Rather than trying to make machines that compensate for errors by piling on the qubits, researchers are pivoting to fault-tolerant computers that can detect and correct errors. These machines use many physical qubits to form a single logical qubit, which stores information redundantly so that if one qubit fails, the system can correct it before data are lost and errors build up.
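
The intuition can be shown with a classical analogue: store one logical bit in three physical bits and correct by majority vote, so that any single flip no longer corrupts the stored value. Real quantum error correction is far more involved, since quantum states cannot simply be copied and phase errors must be handled as well, but the redundancy-then-correct principle is the same.

```python
# Classical analogue of a logical qubit: one logical bit encoded in three
# physical bits, with majority-vote correction of single flips.
import random

def encode(bit):
    return [bit, bit, bit]                 # redundant physical copies

def noisy_channel(bits, p_flip=0.1):
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    return int(sum(bits) >= 2)             # majority vote

random.seed(0)
trials = 10_000
raw_errors = sum(noisy_channel([1])[0] != 1 for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print(f"unprotected error rate: {raw_errors / trials:.3f}")    # ~0.100
print(f"protected error rate  : {coded_errors / trials:.3f}")  # ~0.028
```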

“You basically need error correction, and you need several hundred logical qubits in order to do something useful,” Ernst says.

Quantum computer companies have made steps toward fault tolerance. Quantinuum recently reported simulating the ground-state energy of hydrogen using error-corrected logical qubits. Microsoft, working in partnership with Quantinuum, demonstrated a record 12 logical qubits operating with high reliability. IBM is developing quantum error-correction codes to reduce the number of physical qubits needed, and Alice & Bob is innovating “cat qubits” that naturally resist certain types of errors.

Most of these companies say they will launch fault-tolerant quantum computers with enough qubits for complex calculations by 2030.

Companies are taking different approaches to making qubits, the units of information that can be in multiple states at once.

Superconducting

Technology: Tiny circuits made from materials such as aluminum and niobium that carry current without resistance

Advantage: Run operations quickly

Drawback: Can lose their quantum state quickly

Companies: IBM, Microsoft, Google

Photonic

Technology: Single photons encoding information

Advantage: Stable and can work at room temperature

Drawback: Hard to control photons

Companies: PsiQuantum, Xanadu Quantum Technologies

Trapped ion

Technology: Charged atoms suspended in a vacuum and controlled by electromagnetic fields

Advantage: Stable

Drawback: Slow operation speed and hard to scale

Companies: IonQ, Quantinuum

Spin or neutral atom

Technology: Atoms held by optical tweezers or single-electron spins confined in solid-state materials

Advantage: Scalable and can operate at temperatures slightly above absolute zero

Drawback: Still in early research phase

Companies: Academic laboratories only

Topological

Technology: Exotic quasiparticles in 2D materials to encode data

Advantage: Good at maintaining a quantum state

Drawback: Hard to build and still experimental

Companies: Microsoft

Fault tolerance isn’t enough

As the number of qubits grows, getting them to work together becomes harder. In many systems, qubits can easily interact only with their nearest neighbors, and linking distant ones on different chips takes extra steps that slow things down. Making accurate multiqubit gates and managing thousands of control signals to influence the qubits adds to the difficulty.

Quantum computers may one day solve problems faster than classical computers because they can process many possibilities at once, but currently, most quantum processors are actually slower than classical chips. Each step can take thousands of times longer than in modern classical computers.

And because qubits are extremely fragile, their environment must remain stable. Some types of qubits need to operate near absolute zero, and heat is generated as qubits are added. The large installations that are required to do complex calculations need a lot of space and energy to keep everything cold.

Not surprisingly, building a quantum computer isn’t cheap. A single qubit costs around $10,000. Some estimates place the cost of a large-scale quantum computer at tens of billions of dollars, while smaller machines that are not fault tolerant run in the mid-to-high tens of millions.

If you have a 100-qubit quantum computer and a classical computer that can solve the same problem, then you have to consider the cost, Qunova’s Rhee says. “The quantum computers will be 10 times or 100 times more expensive.”

Because of costs and space requirements, quantum computing is unfolding as a cloud service. IBM, Google, and Amazon already rent out early-stage quantum processors by the minute. Access to IBM’s fleet of quantum computers is priced at $48 to $96 per minute.
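
At those rates, even short experiments add up quickly. A back-of-the-envelope calculation using the quoted prices:

```python
# Rough cost of renting quantum-computer time at the quoted per-minute rates.
low_rate, high_rate = 48, 96  # dollars per minute, as quoted above
for minutes in (10, 60, 480):
    print(f"{minutes:4d} min: ${minutes * low_rate:,} to ${minutes * high_rate:,}")
# A 10-minute run costs roughly $480 to $960; a full 8-hour day, $23,040 to $46,080.
```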

[Photo caption: IBM quantum scientist Maika Takita works on a superconducting quantum computer. Credit: IBM]

And yet quantum is bringing in billions

Despite the lack of real-world industry applications, and cost and engineering challenges yet to be overcome, investment in quantum computing is robust. Kais says the fear of missing out on the next big thing, as many did in the early days of AI, might be driving investors.

The consulting firm McKinsey & Co. projects the industry could be worth anywhere from $28 billion to $72 billion by 2035, up from $750 million in 2024. In the first quarter of 2025 alone, investors poured $1.25 billion into quantum hardware and over $250 million into software, according to Quantum Insider, a market intelligence firm.

Industry leaders like Quantinuum (valued at $10 billion) and IonQ (valued at nearly $19 billion) continue to draw major backing, while newer companies such as PsiQuantum are raising hundreds of millions of dollars. Governments in the US, China, the European Union, and Japan are also ramping up multimillion-dollar programs to support the technology.

Chemical and pharmaceutical industry players are starting to invest too. In the chemical sector, BASF, Covestro, Johnson Matthey, and Mitsubishi Chemical are partnering with quantum computing vendors to explore materials simulation and catalysis. Major drug companies—including AstraZeneca, Bayer, Merck KGaA, Novartis, Pfizer, Roche, and Sanofi—have disclosed quantum initiatives ranging from drug discovery partnerships to internal quantum-algorithm work.

“If [a pharmaceutical company] hasn’t invested yet, they will,” Nvidia’s Aspuru-Guzik says.

But even with rapid progress in quantum computing hardware and software, it’s not yet clear when quantum advantage will arrive and how much it will achieve.

“Quantum computing is essentially where conventional computing was in the ’60s or ’70s . . . nobody at that point in time could imagine how AI would be run and used today,” Ernst says.

While most experts agree that “useful” quantum chemistry is still years away—likely beyond 2030, and possibly not until 2040—they are adamant it will happen.

“There aren’t any fundamental things which prevent us from building these machines . . . it’s more of an engineering type of problem,” North Carolina State’s Kais says. “I’m really optimistic that within maybe the next 5 years we’ll start seeing computers solve very, very complex problems.”

Linde Wester Hansen, the head of quantum applications at Alice & Bob, says chemical companies should begin preparing now. Even today, they could benefit from modeling smaller molecules with quantum tools, she says.

Yet quantum computers’ success in chemistry or any other field is not preordained. “It’s not clear that they will have any impact,” SandboxAQ’s Lewis says. “It could be that quantum computers are very expensive prime number factoring machines, or it could be this massive disruptive thing.”

Article link: https://cen.acs.org/business/quantum-computing-chemistrys-next-AI/103/web/2025/11

Ontology is having its moment.

Posted by timmreardon on 11/28/2025
Posted in: Uncategorized.

Ontology is having its moment. There was a time when we called it the “O word.” Nothing killed a conversation with business – or IT – faster than mentioning ontology. The rule was simple: deliver value, keep the ontology part quiet.

But things have changed.

Ontology is shaping up to be the buzzword of 2026. A big part of that is Palantir’s extraordinary rise – their entire Foundry platform is built on ontological modeling. Microsoft is now moving ontology into Fabric. The race is on.

Why now?

Because structured ontologies offer something generative AI desperately needs: grounding.

LLMs are creative but lack logical constraints. Ontologies provide the formal structure to anchor meaning and harmonize wildly different data sources into a coherent semantic layer. When you need precision over plausibility, ontologies are one of the most reliable ways to keep AI from hallucinating.

But what actually is an ontology?

The idea traces back 2,000 years to Aristotle, who tried to formalize how we perceive reality. Ontology today is that same craft: defining the concepts and relationships that shape our world in a logically rigorous way.

Fast forward to the early web. Tim Berners-Lee envisioned the Semantic Web – an internet where data, not just documents, was linked. Give every ontological concept a unique identity, he argued, and meaning becomes a first-class citizen of the web.

Google later operationalized part of this vision with Schema.org, creating a shared vocabulary to help search engines understand the public internet.

But the current frontier isn’t the public web – it’s the enterprise.

Companies want AI agents that can reason about their internal data – their customers, assets, processes. They need semantic clarity applied to their own messy, private reality. Disambiguating meaning isn’t just useful anymore. It’s essential.

Here’s the problem: you can’t outsource your ontology. Enterprises need their own internal version of schema.org – fully owned, built on open standards. Anything less means IP leakage and vendor lock-in. In the age of AI, your ontology is a key part of your competitive moat.
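
What might a small slice of such an internal, schema.org-style vocabulary look like? Below is a minimal sketch built on the open W3C RDF and RDFS standards using the Python rdflib library; every name and URI is invented for illustration.

```python
# Minimal sketch of an internal, schema.org-style vocabulary (all names and
# URIs invented) built on open W3C standards with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("https://ontology.example.com/core/")
g = Graph()
g.bind("ex", EX)

# Classes: the concepts the enterprise cares about.
g.add((EX.Customer, RDF.type, RDFS.Class))
g.add((EX.Asset, RDF.type, RDFS.Class))

# A relationship with an explicit domain and range, so its meaning is constrained.
g.add((EX.owns, RDF.type, RDF.Property))
g.add((EX.owns, RDFS.domain, EX.Customer))
g.add((EX.owns, RDFS.range, EX.Asset))

# An instance: every concept and entity gets a unique, resolvable identity.
g.add((EX.customer_42, RDF.type, EX.Customer))
g.add((EX.customer_42, RDFS.label, Literal("Acme Health Ltd.")))
g.add((EX.customer_42, EX.owns, EX.asset_7))

print(g.serialize(format="turtle"))
```

Because the graph is expressed in open standards, it can be queried, validated, and extended without tying the enterprise’s semantic layer to any single vendor.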

So, ontology is back. And this time it’s not theory – it’s critical infrastructure.

⭕ Your Schema.org: https://lnkd.in/eumPB3Hj

⭕ Your Ontology Your IP: https://lnkd.in/ersgR-Df

Article link: https://www.linkedin.com/posts/tonyseale_ontology-is-having-its-moment-there-was-activity-7400096205990494208-iLxo?
