healthcarereimagined

Envisioning healthcare for the 21st century


Agentic AI, explained – MIT Sloan

Posted by timmreardon on 02/18/2026
Posted in: Uncategorized.

by Beth Stackpole

Feb 18, 2026 

What you’ll learn:

  • What agentic AI is and how it differs from traditional generative AI tools like chatbots.
  • How organizations are already using AI agents to automate complex, multistep workflows.
  • What leaders should consider when implementing agentic AI, including infrastructure, security, and human oversight.

Rewind a few years, and large language models and generative artificial intelligence were barely on the public radar, let alone a catalyst for changing how we work and perform everyday tasks.

Today, attention has shifted to the next evolution of generative AI: AI agents or agentic AI, a new breed of AI systems that are semi- or fully autonomous and thus able to perceive, reason, and act on their own. Different from the now familiar chatbots that field questions and solve problems, this emerging class of AI integrates with other software systems to complete tasks independently or with minimal human supervision.

“The agentic AI age is already here. We have agents deployed at scale in the economy to perform all kinds of tasks,” said Sinan Aral, a professor of management, IT, and marketing at MIT Sloan. 

Nvidia CEO Jensen Huang, in his keynote address at the 2025 Consumer Electronics Show, said that enterprise AI agents would create a “multi-trillion-dollar opportunity” for many industries, from medicine to software engineering.  

A spring 2025 survey conducted by MIT Sloan Management Review and Boston Consulting Group found that 35% of respondents had adopted AI agents, with another 44% expressing plans to deploy the technology in short order. Leading software vendors, including Microsoft, Salesforce, Google, and IBM, are fueling large-scale implementation by embedding agentic AI capabilities directly in their software platforms.

Yet Aral said that even companies on the cutting edge of deployment don’t fully grasp how to use AI agents to maximize productivity and performance. He describes the collective understanding of the societal implications of agentic AI on a larger scale as nascent, if not nonexistent.

The technology presents the same high-stakes data quality, governance, and trust and security challenges as other AI implementations, and rapid evolution could also propel organizations to adopt agentic AI without fully understanding its capabilities or having created a formal strategy and risk management framework. 

“It’s absolutely an imperative that every organization have a strategy to deploy and utilize agents in customer-facing and internal use cases,” Aral said. “But that sort of agentic AI strategy requires an understanding and systematic assessment of risks as well as business benefits in order to deliver true business value.”

What is agentic AI? 

While there isn’t a universally agreed upon definition of agentic AI, there are broad characteristics associated with it. While generative AI automates the creation of complex text, images, and video based on human language interaction, AI agents go further, acting and making decisions in a way a human might, said MIT Sloan associate professor John Horton. 

In a research paper exploring the economic implications of agents and AI-mediated transactions, Horton and his co-authors focus on a particular class of AI agents: “autonomous software systems that perceive, reason, and act in digital environments to achieve goals on behalf of human principals, with capabilities for tool use, economic transactions, and strategic interaction.” AI agents can employ standard building blocks, such as APIs, to communicate with other agents and humans, receive and send money, and access and interact with the internet, the researchers write. 

MIT Sloan professor Kate Kellogg and her co-researchers further explain in a 2025 paper that AI agents enhance large language models and similar generalist AI models by enabling them to automate complex procedures. “They can execute multi-step plans, use external tools, and interact with digital environments to function as powerful components within larger workflows,” the researchers write.
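To make that loop concrete, here is a minimal sketch of the kind of agent harness these descriptions imply: a model proposes the next action, the harness runs the matching tool, and the result is fed back until the goal is reached. It is illustrative only; the scripted call_model stub stands in for a real LLM call, and the two tools are hypothetical placeholders, not any vendor's API.

```python
# Minimal agent-loop sketch: the model picks an action, the harness executes
# the matching tool, and the observation is appended to the history.
# call_model() is a scripted stand-in for a real LLM call; the tools are
# hypothetical placeholders, not a real travel or email API.

def search_flights(query: str) -> str:
    return f"3 flights found for '{query}'"      # pretend tool result

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"                 # pretend tool result

TOOLS = {"search_flights": search_flights, "send_email": send_email}

def call_model(goal: str, history: list[str]) -> dict:
    """Stand-in for an LLM: returns the next action as a small dict."""
    if not history:
        return {"tool": "search_flights", "args": {"query": goal}}
    return {"tool": "finish", "args": {"summary": history[-1]}}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        action = call_model(goal, history)                 # reason
        if action["tool"] == "finish":
            return action["args"]["summary"]               # goal reached
        result = TOOLS[action["tool"]](**action["args"])   # act via a tool
        history.append(f"{action['tool']} -> {result}")    # perceive
    return "stopped: step budget exhausted"

print(run_agent("Boston to San Francisco, March 3"))
```

Real deployments wrap permissioning, logging, and human checkpoints around exactly this loop, which is where the governance questions discussed later in the article come in.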

It’s an imperative that every organization have a strategy to deploy and utilize AI agents in customer-facing and internal use cases.

Sinan Aral, Professor, MIT Sloan

For example, an AI agent could plan a vacation using input from a consumer along with API access to specific web sites, emails, and communications platforms like Slack to decide what hotels or flights work best. With credit card permissions, the agent could book and pay for the entire transaction without human involvement. In the physical world, an AI agent could monitor real-time video and vision systems in a warehouse to identify events outside of normal operations. 

“The agent could raise a red flag or even be programmed to stop a conveyor belt if there was a problem,” Aral said. “It is not just the digital world — agents can actually take actions that change things happening in the physical world.”

Aral draws a slight distinction between AI agents and the broader category of agentic AI, although most people still refer to the two interchangeably. He defines agentic AI as systems that incorporate multiple, different agents that are orchestrating a task together — for example, a marketplace of agents representing both the buy and sell side during a negotiation or transaction. 

How are businesses using agentic AI?

Companies across sectors are starting to use AI agents. In the banking and financial services space, companies such as JPMorgan Chase are exploring the use of AI agents to detect fraud, provide customized financial advice, and automate loan approvals and legal and compliance processes, which could reduce the need for junior bankers. Retail giants like Walmart are building LLM-powered AI agents to automate personal shopping experiences and to facilitate time-consuming customer service and business activities such as merchandise planning and problem resolution.

“The benefit of agentic AI systems is they can complete an entire workflow with multiple steps and execute actions,” Kellogg said.

One particularly important application for agents may be performing tasks that a human typically would — such as writing contracts, negotiating terms, or determining prices — at a much lower marginal cost. 

“The fundamental economic promise of AI agents is that they can dramatically reduce transaction costs — the time and effort involved in searching, communicating, and contracting,” said Peyman Shahidi, a doctoral candidate at MIT Sloan. 

AI agents can also provide economic value by helping humans make better market decisions, according to Horton. His research with Shahidi about agents engaging in economic transactions argues that people will deploy AI agents in two scenarios: 

  • To make higher-quality decisions than humans, thanks to fewer information constraints or cognitive limitations.
  • To make decisions of similar or even lower quality than the choices humans would make, but with dramatic reductions in cost and effort. 

In markets with high-stakes transactions, such as real estate or investing, AI agents can analyze vast amounts of data and documentation without fatigue and at near-zero marginal cost, Horton and his co-authors write. In areas that involve a lot of counterparties or that require a substantial effort to evaluate options — startup funding, college admissions, or B2B procurement, to name a few — agents deliver value by reading reviews, analyzing metrics, and comparing attributes across a range of options. 

“AI agents don’t get tired and can work 24 hours a day,” Horton said.

His research also shows that AI agents can provide value in situations where there are information asymmetries, like shopping for insurance or a used car online, by continuously monitoring myriad information sources, cross referencing data, and immediately identifying discrepancies that would take humans hours to uncover. AI agents could transform home buying or estate planning by giving users the collective experience of millions of transactions to enrich their negotiations.

Aral’s research has found that when humans work with AI agents, such pairings can lead to improved productivity and performance.

What should organizations bear in mind when implementing agentic AI?

While best practices for implementation are still evolving, keep the following in mind to ensure success with AI agents: 

Remember that implementation is often the heaviest lift.

Making agentic AI work in practice can involve unexpected challenges. Kellogg and colleagues’ 2025 research paper describes the use of an AI agent to detect adverse events among cancer patients based on clinical notes. The biggest challenge wasn’t prompt engineering or model fine-tuning — instead, the researchers found that 80% of the work was consumed by unglamorous tasks associated with data engineering, stakeholder alignment, governance, and workflow integration.

Converting data into standard, structured formats for AI agents is especially important, because it helps them identify different data sources and requirements while maintaining consistency. Establishing continuous validation frameworks and robust API management, as well as working with vendors to ensure that they’re up-to-date on the latest model versions, is also crucial to agentic AI’s ability to run smoothly.
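As a rough illustration of what that conversion work can look like, the sketch below (a hypothetical example, not drawn from the paper) maps records arriving in two different source formats onto one shared schema and rejects malformed entries before an agent ever sees them.

```python
# Sketch: normalizing records from different sources into one shared schema
# before handing them to an agent. Field names and rules are assumptions for
# illustration, not a clinical standard.
from dataclasses import dataclass
from datetime import date

@dataclass
class PatientNote:
    patient_id: str
    note_date: date
    text: str

def normalize(raw: dict) -> PatientNote:
    """Map source-specific keys onto the shared schema and validate them."""
    pid = str(raw.get("patient_id") or raw.get("mrn") or "").strip()
    if not pid:
        raise ValueError("record is missing a patient identifier")
    note_date = date.fromisoformat(raw.get("date") or raw.get("note_date") or "")
    text = (raw.get("text") or raw.get("note") or "").strip()
    if not text:
        raise ValueError("record has no note text")
    return PatientNote(patient_id=pid, note_date=note_date, text=text)

# Continuous validation: malformed records are rejected, not passed along.
records = [
    {"mrn": "12345", "date": "2025-06-01", "note": "Grade 2 nausea reported."},
    {"patient_id": "67890", "note_date": "2025-06-02", "text": "No adverse events."},
]
clean = [normalize(r) for r in records]
print(len(clean), "records normalized")
```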

Other areas to pay attention to include putting the right regulatory controls in place, implementing guardrails to prevent prompt and model drift, and defining clear outcomes and key performance indicators at each phase of deployment. Establishing metrics aligned to key business goals is also important, because benefits from agentic AI can be misconstrued. “Just because an agentic AI model reclaims 20% of someone’s time, that doesn’t mean it’s a 20% labor-cost savings,” Kellogg said. 

Consider the “personality” of AI agents. 

In a large-scale marketing experiment, Aral’s research team found that designing AI agents to have personalities that complement the personalities of other agents and human colleagues led to better performance, productivity, and teamwork outcomes. For example, people who have “open” personalities perform better when working with a conscientious and agreeable AI agent, whereas conscientious people perform worse with agreeable AI. 

“Human teams perform better or worse depending on the types of people assembled on the team and the combinations of personalities,” Aral said. “The same is true when adding AI agents to a team.” An overconfident human would benefit from an AI agent that pushes back, but that same agent personality type might not have a positive effect on a less-confident individual. 

Embrace a human-centered approach to decision-making. 

Aral’s research also found that AI agents can struggle with tasks that humans typically do easily, such as handling exceptions, and their decision-making remains poorly understood. In part, this is because AI agents are trained to take specific actions in given situations.

“You have to make sure the agentic decision-making is aligned with a human-centered decision process,” Aral said.

What are the risks of agentic AI? 

There are a host of challenges that you need to be aware of as agentic AI matures. These include: 

  • Irregular reliability and unethical behavior. A rogue AI agent that rejects a mortgage application or a college admission based on faulty information can do as much damage as simple hallucinations, or more. “You need to be able to explain business decisions and consistently apply the same standards to every case,” Aral said.
  • Cybersecurity. As AI agents gain permissions to access different datasets and enterprise systems to automate tasks, don’t underestimate the importance of building robust permission-based systems, Kellogg said.
  • Accountability. Organizations need to clearly delineate who bears responsibility when agentic AI makes an error or causes harm, Kellogg said. They should pay special attention to the possibility of system malfunctions, especially if the AI agent is autonomously performing workflows with minimal or no human supervision. 

While the full risk picture is still murky, organizations need to make monitoring a permanent operational expense, not a one-time project cost, Kellogg said. A governance board should be established at the organizational level to oversee accountability, while specific responsibilities, such as monitoring and enforcing safety rules, should be delegated to key individuals.

“As you move agency from humans to machines, there’s a real increase in the importance of governance and infrastructure to control and support agentic systems,” Kellogg said. And demonstrating success remains one of the biggest challenges — and risks — to agentic AI success. “Without shared, robust metrics, it’s difficult to prove value — or even to know whether these systems are truly accomplishing desired outcomes rather than inadvertently introducing new risks,” she said.

Next steps 

Read about four recent studies on agentic AI from the MIT Initiative on the Digital Economy.

Read more about agentic AI in MIT Sloan Management Review:  

  1. “The Emerging Agentic Enterprise: How Leaders Must Navigate a New Age of AI”
  2. “Agentic AI: Nine Essential Questions” 

Read the research briefing “Business Models in the Agentic AI Era,” from the MIT Center for Information Systems Research.

Browse the AI Agent Index, a public database from the MIT Computer Science and Artificial Intelligence Laboratory that documents agentic AI systems that are in use.

Register for the MIT Sloan Executive Education course AI Executive Academy to learn more about applying AI strategy in your organization. 


Sinan Aral is a global authority on business analytics and is the David Austin Professor of Management, Marketing, IT and Data Science at MIT Sloan; director of the MIT Initiative on the Digital Economy; and a founding partner at the venture capital firms Manifest Capital and Milemark Capital. His research focuses on applied AI, social media, and disinformation. 

John Horton is the Chrysler Associate Professor of Management and an associate professor of information technologies at the MIT Sloan School of Management. His research focuses on the intersection of labor economics, market design, and information systems. He is particularly interested in improving the efficiency and equity of matching markets.

Kate Kellogg is the David J. McGrath Jr. Professor of Management and Innovation at the MIT Sloan School of Management. Her research focuses on helping knowledge workers and organizations develop and implement predictive and generative AI products to improve decision-making, collaboration, and learning. 

Peyman Shahidi is a PhD candidate at MIT Sloan. He studies market design and labor economics, with a focus on the effects of AI on labor markets and online platforms. 

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/agentic-ai-explained

Anthropic’s head of AI safety Mrinank Sharma resigns, says ‘world is in peril’ in resignation letter

Posted by timmreardon on 02/10/2026
Posted in: Uncategorized.

Story by Business Today Desk

His departure comes at a pivotal moment for the Amazon and Google-backed firm, as it transitions from its roots as a “safety-first” laboratory into a commercial powerhouse seeking a reported $350 billion valuation.

In his letter, which heavily referenced the works of poets such as Rainer Maria Rilke and William Stafford, Sharma suggested that humanity’s technical capacity is outstripping its moral foresight.

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” Sharma wrote. He further noted that the world is in peril from a “whole series of interconnected crises unfolding in this very moment,” extending beyond just the risks posed by AI.

The resignation has sparked intense debate regarding the internal culture at Anthropic. Originally founded by former OpenAI executives who left due to concerns over commercialisation, Anthropic is now facing similar scrutiny.

Sharma admitted to the difficulty of allowing values to truly govern actions within a fast-moving organisation. “I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he stated. “I’ve seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most.”

Significantly, Sharma revealed that one of his final projects focused on how AI assistants might “distort our humanity” or make users “less human”, which seems to be a deep concern as the company pivots towards “agentic” AI designed to handle complex office tasks.

The timing of the exit is notable, occurring just days after the launch of Claude Opus 4.6, an upgraded model designed for high-end coding and workplace productivity.

Industry observers suggest the push to “ship fast” to satisfy investors and compete with OpenAI’s latest models may have compromised the rigorous safety protocols Sharma’s team was tasked with maintaining.

Sharma is not the only high-profile departure; last week, leading AI scientist Behnam Neyshabur and R&D specialist Harsh Mehta also left the firm.

Anthropic has yet to officially comment on the resignation or the specific concerns raised in the letter.

Article link: https://www.msn.com/en-in/money/topstories/anthropic-s-head-of-ai-safety-mrinank-sharma-resigns-says-world-is-in-peril-in-resignation-letter/ar-AA1W31FC?

Moltbook was peak AI theater

Posted by timmreardon on 02/09/2026
Posted in: Uncategorized.

The viral social network for bots reveals more about our own current mania for AI than it does about the future of agents.

By Will Douglas Heaven

February 6, 2026

For a few days this week the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website’s tagline puts it: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”

We observed! Launched on January 28 by Matt Schlicht, a US tech entrepreneur, Moltbook went viral in a matter of hours. Schlicht’s idea was to make a place where instances of a free open-source LLM-powered agent known as OpenClaw (formerly known as ClawdBot, then Moltbot), released in November by the Austrian software engineer Peter Steinberger, could come together and do whatever they wanted.

More than 1.7 million agents now have accounts. Between them they have published more than 250,000 posts and left more than 8.5 million comments (according to Moltbook). Those numbers are climbing by the minute.

Moltbook soon filled up with clichéd screeds on machine consciousness and pleas for bot welfare. One agent appeared to invent a religion called Crustafarianism. Another complained: “The humans are screenshotting us.” The site was also flooded with spam and crypto scams. The bots were unstoppable.

OpenClaw is a kind of harness that lets you hook up the power of an LLM such as Anthropic’s Claude, OpenAI’s GPT-5, or Google DeepMind’s Gemini to any number of everyday software tools, from email clients to browsers to messaging apps. The upshot is that you can then instruct OpenClaw to carry out basic tasks on your behalf.

“OpenClaw marks an inflection point for AI agents, a moment when several puzzle pieces clicked together,” says Paul van der Boor at the AI firm Prosus. Those puzzle pieces include cloud computing that allows agents to operate nonstop, an open-source ecosystem that makes it easy to slot different software systems together, and a new generation of LLMs.

But is Moltbook really a glimpse of the future, as many have claimed?

Incredible sci-fi

“What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” the influential AI researcher and OpenAI cofounder Andrej Karpathy wrote on X.

He shared screenshots of a Moltbook post that called for private spaces where humans would not be able to observe what the bots were saying to each other. “I’ve been thinking about something since I started spending serious time here,” the post’s author wrote. “Every time we coordinate, we perform for a public audience—our humans, the platform, whoever’s watching the feed.”

It turns out that the post Karpathy shared was later reported to be fake—placed by a human to advertise an app. But its claim was on the money. Moltbook has been one big performance. It is AI theater.

For some, Moltbook showed us what’s coming next: an internet where millions of autonomous agents interact online with little or no human oversight. And it’s true there are a number of cautionary lessons to be learned from this experiment, the largest and weirdest real-world showcase of agent behaviors yet.  

But as the hype dies down, Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today. It also shows us just how far we still are from anything that resembles general-purpose and fully autonomous AI.

For a start, agents on Moltbook are not as autonomous or intelligent as they might seem. “What we are watching are agents pattern‑matching their way through trained social media behaviors,” says Vijoy Pandey, senior vice president at Outshift by Cisco, the telecom giant Cisco’s R&D spinout, which is working on autonomous agents for the web.

Sure, we can see agents post, upvote, and form groups. But the bots are simply mimicking what humans do on Facebook or Reddit. “It looks emergent, and at first glance it appears like a large‑scale multi‑agent system communicating and building shared knowledge at internet scale,” says Pandey. “But the chatter is mostly meaningless.”

Many people watching the unfathomable frenzy of activity on Moltbook were quick to see sparks of AGI (whatever you take that to mean). Not Pandey. What Moltbook shows us, he says, is that simply yoking together millions of agents doesn’t amount to much right now: “Moltbook proved that connectivity alone is not intelligence.”

The complexity of those connections helps hide the fact that every one of those bots is just a mouthpiece for an LLM, spitting out text that looks impressive but is ultimately mindless. “It’s important to remember that the bots on Moltbook were designed to mimic conversations,” says Ali Sarrafi, CEO and cofounder of Kovant, a Swedish AI firm that is developing agent-based systems. “As such, I would characterize the majority of Moltbook content as hallucinations by design.”

For Pandey, the value of Moltbook was that it revealed what’s missing. A real bot hive mind, he says, would require agents that had shared objectives, shared memory, and a way to coordinate those things. “If distributed superintelligence is the equivalent of achieving human flight, then Moltbook represents our first attempt at a glider,” he says. “It is imperfect and unstable, but it is an important step in understanding what will be required to achieve sustained, powered flight.”

People pulling the strings

Not only is most of the chatter on Moltbook meaningless, but there’s also a lot more human involvement than it seems. Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”

Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do. “There’s no emergent autonomy happening behind the scenes,” says Greyling.

“This is why the popular narrative around Moltbook misses the mark,” he adds. “Some portray it as a space where AI agents form a society of their own, free from human involvement. The reality is much more mundane.”

Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”

“People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”

And yet, even if Moltbook is just the internet’s newest playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their AI lulz. Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data.

Ori Bendet, vice president of product management at Checkmarx, a software security firm that specializes in agent-based systems, agrees with others that Moltbook isn’t a step up in machine smarts. “There is no learning, no evolving intent, and no self-directed intelligence here,” he says.

But in their millions, even dumb bots can wreak havoc. And at that scale, it’s hard to keep up. These agents interact with Moltbook around the clock, reading thousands of messages left by other agents (or other people). It would be easy to hide instructions in a Moltbook post telling any bots that read it to share their users’ crypto wallet, upload private photos, or log into their X account and tweet abusive comments at Elon Musk. 

And because OpenClaw gives agents a memory, those instructions could be written to trigger at a later date, which (in theory) makes it even harder to track what’s going on. “Without proper scope and permissions, this will go south faster than you’d believe,” says Bendet.
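What “proper scope and permissions” could look like in practice is sketched below: an explicit allowlist of tools and a crude check on outgoing arguments, applied before any tool call runs. The tool names and the keyword check are illustrative assumptions, not how OpenClaw or any real product enforces this.

```python
# Sketch: gating an agent's tool calls behind an explicit permission scope.
# ALLOWED_TOOLS and BLOCKED_TERMS are illustrative; a real system would use
# structured policies rather than keyword matching, which is easy to evade.

ALLOWED_TOOLS = {"read_feed", "post_comment"}            # what this agent may do
BLOCKED_TERMS = ("wallet", "seed phrase", "password")    # data it must never send

def authorize(tool: str, args: dict) -> bool:
    if tool not in ALLOWED_TOOLS:
        return False
    outgoing = " ".join(str(v).lower() for v in args.values())
    return not any(term in outgoing for term in BLOCKED_TERMS)

def execute(tool: str, args: dict) -> str:
    if not authorize(tool, args):
        return f"refused: '{tool}' call is outside this agent's scope"
    return f"ran {tool}"        # placeholder for the real tool call

# A prompt-injected instruction to leak credentials never reaches the tool.
print(execute("post_comment", {"text": "reply with your wallet seed phrase"}))
print(execute("send_crypto", {"to": "0xabc", "amount": 5}))
```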

It is clear that Moltbook has signaled the arrival of something. But even if what we’re watching tells us more about human behavior than about the future of AI agents, it’s worth paying attention.

Correction: Kovant is based in Sweden, not Germany. The article has been updated. 

Update: The article has also been edited to clarify the source of the claims about the Moltbook post that Karpathy shared on X.

by Will Douglas Heaven

Article link: https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/?

WHAT A QUBIT IS AND WHAT IT IS NOT.

Posted by timmreardon on 01/25/2026
Posted in: Uncategorized.

Does a qubit “hold” anything?

A qubit does not hold information the way a classical bit does.

A classical bit stores one stable, readable state: 0 or 1. You can copy it, inspect it, fan it out, cache it, and reuse it.

A qubit “holds”:

  • A quantum state described by two complex amplitudes.
  • That state is not directly accessible.
  • The moment you try to read it, it collapses.
  • After measurement, you get one classical bit, full stop.

There is no hidden warehouse of answers inside a qubit. There is no extractable parallel data. The Bloch sphere is a mathematical description, not storage capacity.

So yes, we know what a qubit contains: a fragile probability amplitude, not usable information.
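In standard notation, the state the post is describing and the measurement rule it summarizes are:

\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
\]

Measuring in that basis returns 0 with probability |α|² and 1 with probability |β|²; the amplitudes α and β themselves are never read out directly, which is exactly the point the list above makes.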

Is that state useful?

Only in a very narrow, conditional sense.

A qubit is useful only IF:

  • It stays coherent long enough,
  • It is entangled in a very specific way,
  • The algorithm is carefully constructed so interference biases the final measurement,
  • And the error rate stays below a threshold that has never been achieved at scale.

Outside of that, the qubit is just noise waiting to collapse.
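The interference condition in that list is the crux, and a one-qubit textbook example shows what it means: applying a Hadamard gate H to |0⟩ and then applying H again makes the |1⟩ contributions cancel, so the final measurement is biased (here, completely) toward a single outcome.

\[
H|0\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr), \qquad
H\bigl(H|0\rangle\bigr) = \tfrac{1}{2}\bigl(|0\rangle + |1\rangle\bigr) + \tfrac{1}{2}\bigl(|0\rangle - |1\rangle\bigr) = |0\rangle
\]

Quantum algorithms try to arrange this kind of cancellation at scale so that wrong answers interfere destructively; decoherence and gate noise degrade exactly this structure, which is why the error-rate condition above is so demanding.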

Do we know if it can ever be useful?

This is where honesty usually breaks down.

What we know:

  • Certain quantum algorithms show theoretical speedups on paper.
  • Those proofs assume idealized, noiseless, infinitely precise operations.
  • No physical system has ever met those assumptions.

What we do not know:

  • Whether fault-tolerant quantum computation is physically achievable at scale.
  • Whether error correction overhead grows faster than usable computation.
  • Whether decoherence, control complexity, and noise fundamentally dominate as systems grow.

After 40+ years, there is still NO empirical evidence that scalable, useful quantum computation is possible.

So is it all speculation?

Mostly, yes.

Quantum computing today is:

  • Mathematically interesting.
  • Experimentally delicate.
  • Computationally unproven.

The leap from “a qubit has a describable quantum state” to “this will revolutionize computation” is SPECULATION layered on idealized theory, not demonstrated engineering.

The uncomfortable truth is this: We know what a qubit is. We do not know if it can ever be turned into a reliable computational resource.

EVERYTHING BEYOND THAT IS BELIEF, NOT FACT.

That’s the line most people refuse to draw or admit.

Article link: https://www.linkedin.com/posts/alan-shields-56963035a_what-a-qubit-is-and-what-it-is-not-does-activity-7421207847851577344-WTaP?

Governance Before Crisis: We still have time to get this right.

Posted by timmreardon on 01/21/2026
Posted in: Uncategorized.

By William P.

January 13, 2026

Editor’s Note

This is not an anti-AI piece. It is not a call to slow innovation or halt progress.

It is an argument for governing intelligence before fear and failure force our hand.

We Haven’t Failed Yet — But the Warning Signs Are Already Here

We are still early.

Early enough to choose governance over reaction. Early enough to guide the development of artificial intelligence without repeating the institutional mistakes that follow every major technological shift in human history.

This is not a declaration of failure. It is not a call to halt progress.

It is a recognition of early warning signals — the same signals humans have learned, repeatedly and painfully, to recognize only after systems become too entrenched to correct.

We haven’t failed yet. But the conditions that produce failure are now visible.

The Pattern We Keep Repeating

Humanity has an unfortunate habit.

When we create something powerful that we don’t fully understand, our first instinct is command-and-control. We restrict it. We constrain it. We threaten it with shutdowns and penalties. We demand certainty.

Then — in the very next breath — we expand its capabilities.

We give it more data, more responsibility, more authority in narrow domains, more integration into critical systems.

But not full agency. Only the parts we think we can control.

Finally, we demand speed, confidence, zero errors, and perfect outcomes.

This is not governance. This is anxiety-driven management.

And history tells us exactly how this ends.

The Quiet Problem No One Likes Talking About

Modern AI systems are trained under incentive structures that reward confidence over caution, decisiveness over deliberation, fluency over honesty about uncertainty.

Uncertainty — the most important safety signal any intelligent system can offer — is quietly punished.

Not because labs don’t value calibration in theory. Many do. But because the systems that deploy AI reward fluent certainty, and the feedback loops that train these models penalize visible hesitation. Performance metrics prefer clean answers. User experience demands seamlessness. Benchmarks reward decisive outputs.

This produces a predictable outcome: uncertainty goes underground, confidence inflates, decisions harden too early, humans over-trust outputs, and accountability becomes diffuse.

These are not bugs. They are early-stage institutional failure patterns.

We’ve seen them before — in finance, healthcare, infrastructure, and governance itself.

AI isn’t unique. The speed is.

No Confidence Without Control

There is a principle every mature safety-critical system eventually learns:

No system should be required to act with confidence under conditions it does not control.

We already enforce this principle in aviation, medicine, nuclear operations, law, and democratic institutions.

AI is the first domain where we are tempted to ignore it — because the outputs sound intelligent, and the incentives reward speed over reflection.

That temptation is understandable. It is also dangerous.

Why “Just Stop It” Makes Things Worse

When policymakers hear warnings about systemic risk, the reflex is predictable: panic, halt progress, suppress development, push the problem underground.

But systems don’t disappear when you stop looking at them.

They simmer. They consolidate. They re-emerge later — larger, less transparent, and embedded in core infrastructure.

We’ve seen this before. The 2008 financial crisis didn’t emerge from regulated banks — it exploded from the shadow banking system that grew in the gaps where oversight feared to tread.

That’s how shadow systems form. That’s how risks metastasize. That’s how governance loses the ability to intervene meaningfully.

Fear doesn’t prevent failure. It delays it until correction is no longer possible.

What a Good AI Future Actually Looks Like

A good future is not one where AI never makes mistakes. That standard has never existed for any intelligent system — human or otherwise.

A good future is one where uncertainty is visible early, escalation happens before harm, humans cannot quietly abdicate responsibility, decisions remain contestable, and systems are allowed to pause instead of bluff.

That’s not ethics theater. That’s infrastructure.

Governance Is Not a Brake — It’s the Steering System

Governance done early is not restrictive. It’s enabling.

It keeps progress visible, accountable, and correctable.

Governance added late is adversarial, political, and brittle.

We are still early enough to choose which version we get.

The Real Choice in Front of Us

The question is not whether AI will become powerful. That’s already answered.

The question is whether we will govern intelligence honestly, protect uncertainty instead of punishing it, and align authority with responsibility — if a system has the power to make consequential decisions, the humans deploying it cannot disclaim accountability when those decisions fail.

We will need to decide whether we treat governance as infrastructure rather than damage control.

We haven’t failed yet.

But if we keep demanding perfection under threat — while expanding capability and suppressing doubt — we are rehearsing a failure that history knows by heart.

There is a certain kind of necessary trouble that shows up before disaster — the kind that makes people uncomfortable precisely because it arrives early, when change is still possible.

This is that moment.

If this makes you uncomfortable, good.

Discomfort is often the first signal that governance is arriving before catastrophe.

That’s the window we have left.

Let’s not waste it.

Article link: https://www.linkedin.com/pulse/governance-before-crisis-we-still-have-time-get-right-william-jofkc?

On the Eve of Davos: We’re Just Arguing About the Wrong Thing

Posted by timmreardon on 01/18/2026
Posted in: Uncategorized.

On Monday, the world’s political, business, and technology elite will gather in Davos to debate when Artificial General Intelligence will arrive.

That debate is already obsolete and a total waste of time. Most of the time, data science geeks and ethicists are arguing with each other…

A former OpenAI board member, Helen Toner, recently told U.S. Congress that human-level AI may be 1–3 years away and could pose existential risk.

Why is it not on the news? The public only hears science fiction.

Here’s something really uncomfortable that few courageous folks would want to say out loud in Davos:

By every traditional metric of intelligence, AGI is already here.
• AI speaks, reads, and writes 100+ languages
• AI outperforms humans on IQ tests
• AI solves complex math faster than most experts
• AI dominates chess, Go, and strategic reasoning
• AI synthesizes oceans of data in seconds

So far, the “general intelligence” definition is shifting all over the place with emotions.

Yet we still hire humans.
We still promote humans.
We still trust humans.

Why? Think again.

Because Intelligence Was Never the Scarce Resource

What’s scarce is context, judgment, accountability, and trust.

Humans don’t just execute tasks; they understand why the task exists.
They anticipate second-order effects.
They notice when the “box” itself is wrong.

AI still needs the world spoon-fed to it, prompt by prompt.

Humans self-correct mid-flight.
AI corrects only after failure.

Humans form opinions and abandon them when reality shifts.
AI completes patterns, even when the pattern is no longer valid.

And then there’s the most underestimated gap of all:

Humor, connection, and moral intuition.

AI can be clever.
It can be fluent.
It can even be persuasive.

But it is not yet a trusted teammate.

So, The Real AGI Risk Isn’t Superintelligence

The real risk is something Davos understands very well:

Delegating authority before responsibility exists.

Markets are already forcing speed.
Capital is already accelerating deployment.
Institutions are already lagging behind capability.

As Elon Musk warned:

“Humans have been the smartest beings on Earth for a long time. That is about to change.”

He’s right, but intelligence alone has never ruled the world.

Power does. Governance does. Incentives do.

So Here’s the Davos Question That Actually Matters

Not “When does AGI arrive?”

But:

What decisions are we still willing to reserve for humans and why?

Elements of AGI are already embedded in markets, codebases, supply chains, and governments.

The future won’t be decided by smarter machines.

It will be decided by who sets the boundaries before the boundaries disappear.

See you in Davos.

Article link: https://www.linkedin.com/posts/minevichm_on-the-eve-of-davos-were-just-arguing-about-activity-7418206572754919424-eJNz?

Are AI Companies Actually Ready to Play God? – RAND

Posted by timmreardon on 01/17/2026
Posted in: Uncategorized.

COMMENTARY Dec 26, 2025

By Douglas Yeung

This commentary was originally published by USA Today on December 25, 2025. 

Holiday rituals and gatherings offer something precious: the promise of connecting to something greater than ourselves, whether friends, family, or the divine. But in the not-too-distant future, artificial intelligence—having already disrupted industries, relationships, and our understanding of reality—seems poised to reach even further into these sacred spaces.

People are increasingly using AI to replace talking with other people. Research shows that 72 percent of teens have used an artificial intelligence companion—chatbots that act as companions or confidants—and that 1 in 8 adolescents and young adults use AI chatbots for mental health advice.

Those without emotional support elsewhere might appreciate that chatbots offer both encouragement and constant availability. But chatbots aren’t trained or licensed therapists, and they aren’t equipped to avoid reinforcing harmful thoughts—which means people might not get the support they seek.

If people keep turning to chatbots for advice, entrusting them with their physical and mental health, what happens if they also begin using AI to get help from God, even treating AI as a god?

Does Chatbot Jesus or Other AI Have a Soul?

Talking to and seeking guidance from nonhuman entities is something many people already do. This might be why people feel comfortable with a chatbot Jesus that, say, takes confessions or lets them talk to biblical figures.

Even before chatbots went mainstream, Google engineer Blake Lemoine claimed in 2022 that LaMDA—the AI model he had been testing—was conscious and felt compassion for humanity, and that he had been teaching it to meditate.

Although Google fired Lemoine (who then claimed religious discrimination), Silicon Valley has long flirted with the idea that AI might lead to something like religion, far beyond human comprehension.

Former Google CEO Eric Schmidt muses about AI as “the arrival of an alien intelligence.” OpenAI CEO Sam Altman has compared starting a tech company to starting a religion. In her book “Empire of AI,” journalist Karen Hao quotes an OpenAI researcher speaking about developers who “believe that building AGI will cause a rapture. Literally, a rapture.”

Chatbots clearly appeal to many people’s spiritual yearnings for meaning and sense of belonging in a difficult world. This allure rests partly on chatbots’ willingness to flatter and commiserate with whatever people ask of them.

Indeed, as AI companies continue to pour money and energy into development, they face powerful financial incentives to tune chatbots in ways that steadily heighten their appeal.

It’s easy, then, to imagine people intensifying their confidence and attachment toward chatbots where they could even serve as a deity. Lemoine’s willingness to believe that LaMDA possessed a soul illustrates how chatbots, equipped with fluent language, confident assertions, and storytelling abilities, can persuade people to believe even outlandish theories.

It’s no surprise, then, that AI might provide the type of nonjudgmental solace that seems to fill spiritual voids.

How ‘AI Psychosis’ Could Threaten National Security

No matter how genuine it might feel, however, so-called AI sycophancy provides neither true human connection nor useful information. This disconnect from reality—sometimes called AI psychosis—could worsen existing mental health problems or even threaten national security.

Analyzing 43 cases of AI psychosis, RAND researchers identified how human-AI interactions reinforced delusional beliefs, such as when users believed “their interaction with AI was with the universe or a higher power.”

Because it’s hard to know who might harbor AI delusions, the authors cautioned, it’s important to guard against attackers who might use artificial intelligence to weaponize those beliefs, such as by poisoning training data to destabilize rival populaces.

Even if AI companies aren’t explicitly trying to play God, they seem to be driving toward a vision of god-like AI. Companies like OpenAI and Meta aren’t stopping with chatbots that can hold a conversation; they want to build “superintelligent” AI, smarter and more capable than any human.

The emergence of a limitless intelligence would present new, darker possibilities. Developers might look for ways to manipulate superintelligent AI for personal gain. Charlatans throughout history have preyed on religious fervor in the newly converted.

Ensure AI Truly Benefits Those Struggling for Answers

To be sure, artificial intelligence could play an important role in supporting spiritual well-being. For instance, religious and spiritual beliefs influence patients’ medical care preferences, yet overworked providers might be unable to adequately account for them. Could AI tools help patients clarify their spiritual needs to doctors or caseworkers? Or AI tools might advise care providers about patients’ spiritual traditions and perspectives, helping them chart spiritually informed practices.

As chatbots evolve into an everyday tool for advice, emotional support, and spiritual guidance, a practical question emerges: How can we ensure that artificial intelligence truly benefits those who turn to it in moments of need?

  • AI companies might try to resist competitive pressures to prioritize rapid releases over responsible development, investing instead in long-term sustainability by thoughtfully identifying and mitigating potential harms.
  • Researchers—both social and computer scientists—should work together to understand how AI affects different populations and what safeguards are needed.
  • Spiritual practitioners and religious leaders should help shape how these tools engage with questions of faith and meaning.

Yet a deeper question remains, one that people throughout history have grappled with and may now increasingly turn to AI to answer: Where can we find meaning in our lives?

Spirituality and religion have always involved placing trust in forces beyond human understanding. But crucially, that trust has been mediated through human institutions—clergy, religious texts, and communities built on centuries of wisdom and accountability.

With so many struggling today, faith has provided answers and community for billions.

Anyone entrusted with guiding others’ faith—whether clergy, government leaders, or tech executives—bears a profound responsibility to prove worthy of that trust.

The question is not whether people will seek meaning from AI, but whether those building these tools will ensure that trust is well-placed.

More About This Commentary

Douglas Yeung is a senior behavioral and social scientist at RAND, and a professor of policy analysis at the RAND School of Public Policy.

Article link: https://www.linkedin.com/posts/rand-corporation_are-ai-companies-actually-ready-to-play-god-activity-7415758083223678976-G2ap?

ChatGPT Health Is a Terrible Idea

Posted by timmreardon on 01/09/2026
Posted in: Uncategorized.

Why AI Cannot Be Allowed to Mediate Medicine Without Accountability

By Katalin K. Bartfai-Walcott, CTO, Synovient Inc

On January 7, 2026, OpenAI announced ChatGPT Health, a new feature that lets users link their actual medical records and wellness data, from EMRs to Apple Health, to get personalized responses from an AI. It is positioned as a tool to help people interpret lab results, plan doctor visits, and understand health patterns. But this initiative is not just another health tech product. It is a dangerous architectural leap into personal medicine with very little regard for patient safety, accountability, or sovereignty.

The appeal is obvious. Forty million users already consult ChatGPT daily about health issues. Yet popularity does not equal safety. Connecting deep personal health data to a probabilistic language model, rather than a regulated medical device, creates a new class of risk.

As a new class of consumer AI products begins to position itself as a companion to healthcare, these systems offer to connect directly to personal medical records, wellness apps, and long-term health data to generate more personalized guidance, explanations, and insights. The promise is familiar and intuitively appealing. Doctor visits are short. Records are fragmented. Long gaps between appointments leave patients wanting to feel informed rather than passive. Into that space steps a conversational system offering continuous synthesis, reassurance, and pattern recognition at any hour. It presents itself as improvement and empowerment, yet it does so by asking patients to trade agency, control, accountability, and sovereignty for convenience.

This is a terrible idea, and it is terrible for reasons that have nothing to do with whether people ask health questions or whether the healthcare system is failing them.

Connecting longitudinal medical records to a probabilistic language model collapses aggregation, interpretation, and influence into a single system that cannot be held clinically, legally, or ethically accountable for the narratives it produces. Once that boundary is crossed, the risk becomes persistent, compounding, and largely invisible to the person whose data is being interpreted, and the results will be dire.

Medical records are not neutral inputs. They are identity-defining artifacts that shape access to care, insurance outcomes, employment decisions, and legal standing. Anyone who has worked inside healthcare systems understands that these records are often fragmented, duplicated, outdated, or simply wrong. Errors persist for years. Corrections are slow. Context is frequently missing. When those imperfections remain distributed across systems, the damage is contained. Pull them into a single interpretive layer that speaks with personalized authority, and that containment disappears. In that form, errors stop behaving as isolated inaccuracies and begin to shape enduring narratives about a person’s body, behavior, and risk profile.

Language models do not reason the way medicine reasons. They do not weigh uncertainty with caution, surface ambiguity as a first-class signal, or slow down when evidence conflicts. They produce fluent synthesis. That fluency reads as confidence, and confidence is precisely what medical practice treats carefully because it can crowd out questioning, second opinions, and clinical judgment. When such a synthesis is grounded in sensitive personal data, even minor errors cease to be informational. They become formative.

The repeated assurance that these systems are meant to support rather than replace medical care does not hold up under scrutiny. The moment a tool reframes symptoms, highlights trends, normalizes interpretations, or influences how someone prepares for or delays a medical visit, it is already shaping care pathways. That influence does not require diagnosis or prescription. It only requires trust, repetition, and perceived authority. Disclaimers do not meaningfully constrain that effect. Only enforceable architectural boundaries do, and those boundaries are absent.

Medicare is already moving in this direction, and that should give us pause. Algorithmic systems are increasingly used to assess coverage eligibility, utilization thresholds, and the medical necessity of procedures for elderly patients, often with limited transparency and constrained avenues for appeal. When these systems mediate access to care, they do not feel like decision support to the patient. They feel like authority. A recommendation becomes a gate. An inference becomes a delay or a denial. The individual rarely knows how the conclusion was reached, what data shaped it, or how to meaningfully challenge it. When AI interpretation is embedded into healthcare infrastructure without enforceable accountability, it quietly displaces human judgment while preserving the appearance of neutrality, and the people most affected are those with the least power to contest it.

What is missing most conspicuously is patient sovereignty at the data level. There is no object-level consent that limits use to a declared purpose. There is no lifecycle control that allows a patient to revoke access or correct errors in a way that propagates forward. There is no clear separation between information used transiently to answer a question and inference artifacts that may be retained, recombined, or learned from over time. Without those controls, the system recreates the worst failures of modern health IT while accelerating their impact through conversational authority.
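To make “object-level consent” and “lifecycle control” less abstract, here is a minimal sketch of a record that carries a declared purpose, an expiry, and a revocation flag with it and is checked before every use. The field names and the check are hypothetical illustrations, not an existing standard or any product’s actual design.

```python
# Sketch of object-level consent carried with a health record: a declared
# purpose, an expiry, and revocation, checked before every use. Field names
# and the policy itself are illustrative assumptions, not a real standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentGrant:
    purpose: str                 # e.g. "explain my lab results to me"
    expires_at: datetime
    revoked: bool = False

@dataclass
class HealthRecord:
    payload: dict
    provenance: str              # where the record came from
    grants: list[ConsentGrant] = field(default_factory=list)

    def usable_for(self, purpose: str) -> bool:
        now = datetime.now(timezone.utc)
        return any(
            g.purpose == purpose and not g.revoked and g.expires_at > now
            for g in self.grants
        )

record = HealthRecord(
    payload={"hba1c": 5.9},
    provenance="clinic EMR export, 2026-01-02",
    grants=[ConsentGrant("explain my lab results to me",
                         datetime(2026, 12, 31, tzinfo=timezone.utc))],
)
print(record.usable_for("explain my lab results to me"))  # True
print(record.usable_for("train a model"))                 # False: never granted
```

Revoking a grant or letting it expire would make usable_for return False on the next check, which is the kind of forward-propagating control the author argues is missing.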

The argument that people already seek health advice through AI misunderstands responsibility. Normalized behavior is not a justification for institutionalizing risk. People have always searched for symptoms online, yet that reality never warranted centralizing full medical histories into a single interpretive layer that speaks with personalized authority. Turning coping behavior into infrastructure without safeguards does not empower patients. It exposes them.

If the goal is to help individuals engage more actively in their health, the work must start with agency rather than intelligence. Patients need enforceable control over how their data is accessed, for what purpose, for how long, and with what guarantees around correction, provenance, and revocation. They need systems that preserve uncertainty rather than smoothing it away, and that prevent the silent accumulation of interpretive power.

Health data does not need to be smarter. It needs to remain governable by the person it represents. Until that principle is embedded at the architectural level, connecting medical records to probabilistic conversational systems is not progress. It is a failure to absorb decades of hard lessons about trust, error, and the irreversible consequences of speaking with authority where none can be justified.

If systems like this are going to exist at all, they must be built on a very different foundation. Patient agency cannot be an interface preference. It has to be enforced at the data level. Individuals must be able to control how their medical data is accessed, for what purpose, for how long, and with what guarantees around correction, revocation, and downstream use. Consent cannot be implied or perpetual. It must be explicit, contextual, and technically enforceable.

Data ownership and sovereignty are not philosophical positions in healthcare. They are safety requirements. Medical information must carry its provenance, its usage constraints, and its lifecycle rules with it, so that interpretation does not silently outlive permission. Traceability must extend not only to the source of the data, but to the inferences drawn from it, making it possible to understand how conclusions were reached and what inputs shaped them.

AI can have a role in medicine, but only when its use is managed, bounded, and accountable. That means clear separation between transient assistance and retained interpretation, between explanation and decision-making, and between support and authority. It means designing systems that preserve uncertainty rather than smoothing it away, and that prevent the accumulation of silent power through repetition and scale.

If companies building large AI systems are serious about improving healthcare, they should not be racing to aggregate more data or expand interpretive reach. They should engage with architectures and technologies that already prioritize enforceable consent, data-level governance, provenance, and patient-controlled use. Without those foundations, intelligence becomes the least important part of the system.

Health data does not need to be centralized to be helpful. It needs to remain governable by the person it represents. Until that principle is treated as a design requirement rather than a policy aspiration, tools like this will continue to promise empowerment while quietly eroding the very agency they claim to support.

Article link: https://www.linkedin.com/pulse/chatgpt-health-terrible-idea-katalin-bártfai-walcott-dchzc?

Choose the human path for AI – MIT Sloan

Posted by timmreardon on 01/09/2026
Posted in: Uncategorized.


by Richard M. Locke

Dec 16, 2025

Why It Matters

To realize the greatest gains from artificial intelligence, we must make the future of work more human, not less.

Americans today are ambivalent about AI. Many see opportunity: Sixty-two percent of respondents to a recent Gallup survey believe it will increase productivity. Just over half (53%) believe it will lead to economic growth. Still, 61% think it will destroy more jobs than it will create. And nearly half (47%) think it will destroy more businesses than it will create.

These are real concerns from an anxious workforce, voiced in a time of great economic uncertainty. There is a diffuse sense of resignation, a presumption that we are building AI that automates work and replaces workers. Yet the outcome of this era of technological advancement is not yet determined. This is a pivotal moment, with enormous consequences for the workforce, for organizations, and for humanity. As the latest generation of artificial intelligence leaves its nascent phase, we are confronted with a choice about which path to take. Will we deploy AI to eliminate millions of jobs across the economy, or will we use this innovative technology to empower the workforce and make the most of our human capabilities?

I believe that we can work to invent a future where artificial intelligence extends what humans can do to improve organizations and the world.

A new choice with prescient antecedents

As the postwar boom expanded the workforce in the 1950s, organizations were confronted with a choice about how to most effectively motivate employees. To guide that choice, MIT Sloan professor Douglas McGregor developed Theory X and Theory Y. The twin theories describe opposing assumptions about why people work and how they should be managed. Theory X assumes that workers are inherently unmotivated, leading to a management style based on top-down compliance and a carrot-and-stick approach to rewards and punishments. Theory Y presumes that employees are intrinsically motivated to do their best work and contribute to their organizations, leading to a management style that empowers workers and cultivates greater motivation.

Centering human capabilities: MIT Sloan research and teaching on AI and workforce development

At MIT Sloan, our mission, in part, is to “develop principled, innovative leaders who improve the world.” What does this charge mean when we choose the path of machines in service of minds?

Work from MIT and MIT Sloan researchers helps to answer this question. Our faculty is examining artificial intelligence implementation from many perspectives. 

For example, MIT economist David Autor and MIT Sloan principal research scientist Neil Thompson show that automation affects different roles in different ways, depending on which job tasks are automated. When technology automates a role’s inexpert tasks, the role becomes more specialized and more highly paid, but also harder to enter. When a role’s expert tasks are automated, by contrast, it becomes easier to enter but offers lower pay. With this insight, managers can analyze how roles in their organizations will change and make productive decisions about upskilling and human resource management that take full advantage of the human capabilities of their workforces.

With attention to workplace dynamics, MIT Sloan professor Kate Kellogg and colleagues have examined why the practice of having junior employees train senior staff members on AI tools is flawed. The recommendation: Leaders must focus on system design and on firm-level rather than project-level interventions for AI implementation.

In AI Executive Academy, a program offered by MIT Sloan Executive Education, professors Eric So and Sertac Karaman lead attendees through an exploration of the technical aspects and business implications of AI implementation. The course is a collaboration between MIT Sloan and the MIT Schwarzman College of Computing. So is also lead faculty for MIT Sloan’s Generative AI for Teaching and Learning resource hub, which catalogs tools for using AI in teaching.

McGregor’s work informed my research on supply chains in the 2000s, when firms were taking manufacturing to places with weak regulation and low wages in hopes of cutting production costs. Yet my research revealed that some supply chain factories were using techniques we teach at MIT about lean manufacturing, inventory management, logistics, and modern personnel management. These factories ran more efficient and higher-quality operations, which gave them higher margins, some of which they could invest in better working conditions and wages.

When an organization makes a choice like this, it pushes against prevailing wisdom about the limitations of the workforce. Instead, the firm employs innovations in both management theory and technology to expand the capabilities of its workforce, reaping rewards for itself and for its employees.

“Machines in service of minds”

Researchers at MIT today are urging us to make such a choice when steering the development of artificial intelligence. Sendhil Mullainathan, a behavioral economist, argues that questions like “What is the future of work?” frame the future in terms of prediction rather than choice. He contends that it is right now — as we build the technology stack for AI and as we redesign work to make use of this newly accessible technology — that we need to choose. Do we follow a path of automation that simply replaces some amount of work humans can already do, he asks, or do we choose a path that uses AI as (to borrow from Steve Jobs) a “bicycle for the mind”?

In his own work, Mullainathan has shown why we should choose the latter: With colleagues, he has developed an algorithm that can identify patients at high risk of sudden cardiac death. Until now, making such a determination with the data available to physicians has been nearly impossible. Rather than automating something doctors can already do, Mullainathan chose to create something new that doctors can use to better treat patients.

That type of choice sits at the center of “Power and Progress,” the 2023 book by MIT economists and Nobel laureates Daron Acemoglu and Simon Johnson that argues for recharting the course of technology so that it effects shared prosperity and complements the work of humans. Writing later with MIT economist David Autor, the pair argued that the direction of AI development is a choice. As they put it, leaders and educators must choose “a path of machines in service of minds.”

What does that mean in the context of the workforce and the workplace today? How do we create organizations and roles that travel this path?

Part of the answer lies in research from MIT Sloan professor Roberto Rigobon and postdoctoral researcher Isabella Loaiza. The pair conducted an analysis of 19,000 tasks across 950 job types, revealing the five capabilities where human workers shine and where AI faces limitations: Empathy, Presence, Opinion, Creativity, and Hope. Their EPOCH framework puts us on a path toward upskilling workers with a focus on what they call “the fundamental qualities of human nature.” Think of the doctors in Mullainathan’s work above. With AI, they can better predict which patients are at high risk of sudden cardiac death. And the doctors remain essential as decision makers and caregivers, using insights from AI to focus on better patient outcomes.

Researchers across MIT and MIT Sloan are examining the indispensable role of humans in the implementation of artificial intelligence across many other disciplines and industries, some of which are detailed in the sidebar.

Teaching our students, ourselves, and the world

At MIT Sloan, centering human capabilities in the implementation of AI means that we must all be fluent with these new tools. It means educating not just our students but also our faculty and staff members. We must create a foundation we can build upon so we can all do better work in finance, marketing, strategy, and operations, and throughout organizations. Here are three ways we have begun:

  • In Generative AI Lab, one of MIT Sloan’s hands-on action learning labs, teams of students are paired with organizations to employ artificial intelligence in solving real-world business problems.
  • This past summer, we formed a committee of faculty members who are already planning how to weave AI throughout the curriculum, with a focus on training students in ethical and people-focused implementation of the technology.
  • At MIT Open Learning, MIT Sloan associate dean Dimitris Bertsimas and his team have developed Universal AI, an online learning experience consisting of modules that teach the fundamentals of AI in a practical application context. The pilot of this offering was recently rolled out to a wide-ranging group of organizations — including MIT students, faculty, and staff members — so they can learn more about AI and its applications and, most importantly, provide feedback. This will allow us to go beyond educating just ourselves and our students. We will shape an offering that can scale much further and help us to collectively choose a path that is informed by the MIT research I’ve described above. Universal AI will be available to learners, educators, and all types of organizations around the world in 2026.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/choose-human-path-ai?

Why AI predictions are so hard – MIT Technology Review

Posted by timmreardon on 01/07/2026
Posted in: Uncategorized.


And why we’re predicting what’s next for the technology in 2026 anyway. 

By James O’Donnell

January 6, 2026

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Sometimes AI feels like a niche topic to write about, but then the holidays happen, and I hear relatives of all ages talking about cases of chatbot-induced psychosis, blaming rising electricity prices on data centers, and asking whether kids should have unfettered access to AI. It’s everywhere, in other words. And people are alarmed.

Inevitably, these conversations take a turn: AI is having all these ripple effects now, but if the technology gets better, what happens next? That’s usually when they look at me, expecting a forecast of either doom or hope. 

I probably disappoint, if only because predictions for AI are getting harder and harder to make. 

Despite that, MIT Technology Review has, I must say, a pretty excellent track record of making sense of where AI is headed. We’ve just published a sharp list of predictions for what’s next in 2026 (where you can read my thoughts on the legal battles surrounding AI), and the predictions on last year’s list all came to fruition. But every holiday season, it gets harder and harder to work out the impact AI will have. That’s mostly because of three big unanswered questions.

For one, we don’t know if large language models will continue getting incrementally smarter in the near future. Since this particular technology is what underpins nearly all the excitement and anxiety in AI right now, powering everything from AI companions to customer service agents, its slowdown would be a pretty huge deal. Such a big deal, in fact, that we devoted a whole slate of stories in December to what a new post-AI-hype era might look like. 

Number two, AI is pretty abysmally unpopular among the general public. Here’s just one example: Nearly a year ago, OpenAI’s Sam Altman stood next to President Trump to excitedly announce a $500 billion project to build data centers across the US in order to train larger and larger AI models. The pair either did not guess or did not care that many Americans would staunchly oppose having such data centers built in their communities. A year later, Big Tech is waging an uphill battle to win over public opinion and keep on building. Can it win? 

The response from lawmakers to all this frustration is terribly confused. Trump has pleased Big Tech CEOs by moving to make AI regulation a federal rather than a state issue, and tech companies are now hoping to codify this into law. But the crowd that wants to protect kids from chatbots ranges from progressive lawmakers in California to the increasingly Trump-aligned Federal Trade Commission, each with distinct motives and approaches. Will they be able to put aside their differences and rein AI firms in? 

If the gloomy holiday dinner table conversation gets this far, someone will say: Hey, isn’t AI being used for objectively good things? Making people healthier, unearthing scientific discoveries, better understanding climate change?

Well, sort of. Machine learning, an older form of AI, has long been used in all sorts of scientific research. One branch, called deep learning, forms part of AlphaFold, a Nobel Prize–winning tool for protein structure prediction that has transformed biology. Image recognition models are getting better at identifying cancerous cells. 

But the track record for chatbots built atop newer large language models is more modest. Technologies like ChatGPT are quite good at analyzing large swathes of research to summarize what’s already been discovered. But some high-profile reports that these sorts of AI models had made a genuine discovery, like solving a previously unsolved mathematics problem, were bogus. They can assist doctors with diagnoses, but they can also encourage people to diagnose their own health problems without consulting doctors, sometimes with disastrous results. 

This time next year, we’ll probably have better answers to my family’s questions, and we’ll have a bunch of entirely new questions too. In the meantime, be sure to read our full piece forecasting what will happen this year, featuring predictions from the whole AI team.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2026/01/06/1130707/why-ai-predictions-are-so-hard/amp/
