healthcarereimagined

Envisioning healthcare for the 21st century

WHAT A QUBIT IS AND WHAT IT IS NOT.

Posted by timmreardon on 01/25/2026
Posted in: Uncategorized.

Does a qubit “hold” anything?

A qubit does not hold information the way a classical bit does.

A classical bit stores one stable, readable state: 0 or 1. You can copy it, inspect it, fan it out, cache it, and reuse it.

A qubit “holds” a quantum state described by two complex amplitudes. But:

  • That state is not directly accessible.
  • The moment you try to read it, it collapses.
  • After measurement, you get one classical bit, full stop.

There is no hidden warehouse of answers inside a qubit. There is no extractable parallel data. The Bloch sphere is a mathematical description, not storage capacity.

So yes, we know what a qubit contains: a fragile probability amplitude, not usable information.
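
To make that concrete, here is a minimal sketch (a plain NumPy simulation, not real hardware, and not anything from the original post): the state is just two complex amplitudes, and measurement returns a single classical bit while destroying the superposition.

```python
# Minimal sketch: a single qubit as two complex amplitudes, with measurement
# as a Born-rule sample that collapses the state to one classical bit.
import numpy as np

# |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
state = np.array([alpha, beta], dtype=complex)

def measure(state):
    """Sample one classical bit; afterwards the superposition is gone."""
    p0 = abs(state[0]) ** 2                 # Born rule: probability of outcome 0
    outcome = 0 if np.random.random() < p0 else 1
    state[:] = 0
    state[outcome] = 1                      # post-measurement state is |0> or |1>
    return outcome

print(measure(state))  # one bit, 0 or 1 -- the amplitudes themselves are never read out
```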

Is that state useful?

Only in a very narrow, conditional sense.

A qubit is useful only IF:

  • It stays coherent long enough,
  • It is entangled in a very specific way,
  • The algorithm is carefully constructed so interference biases the final measurement,
  • And the error rate stays below a threshold that has never been achieved at scale.

Outside of that, the qubit is just noise waiting to collapse.
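
As a toy illustration of the interference point above (again a NumPy simulation under idealized, noiseless assumptions, which is exactly the caveat this post is making): applying a Hadamard gate twice sends |0> back to |0> with certainty because the amplitudes for |1> cancel, while measuring between the two gates destroys that cancellation and leaves a 50/50 coin flip.

```python
# Interference biasing the final measurement: H then H returns |0> with
# certainty, but measuring between the two gates destroys the interference.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1, 0], dtype=complex)

# Coherent path: the two amplitudes for |1> cancel, so outcome 0 has probability 1.
coherent = H @ (H @ ket0)
print(np.abs(coherent) ** 2)          # -> [1. 0.]

# Decohered path: measure in the middle, collapsing to |0> or |1>, then apply H again.
mid = H @ ket0
p0 = abs(mid[0]) ** 2
collapsed = np.array([1, 0], complex) if np.random.random() < p0 else np.array([0, 1], complex)
print(np.abs(H @ collapsed) ** 2)     # -> [0.5 0.5]: the interference is gone
```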

Do we know if it can ever be useful?

This is where honesty usually breaks down.

What we know:

  • Certain quantum algorithms show theoretical speedups on paper.
  • Those proofs assume idealized, noiseless, infinitely precise operations.
  • No physical system has ever met those assumptions.

What we do not know:

  • Whether fault-tolerant quantum computation is physically achievable at scale.
  • Whether error correction overhead grows faster than usable computation.
  • Whether decoherence, control complexity, and noise fundamentally dominate as systems grow.

After 40+ years, there is still NO empirical evidence that scalable, useful quantum computation is possible.

So is it all speculation?

Mostly, yes.

Quantum computing today is:

  • Mathematically interesting.
  • Experimentally delicate.
  • Computationally unproven.

The leap from “a qubit has a describable quantum state” to “this will revolutionize computation” is SPECULATION layered on idealized theory, not demonstrated engineering.

The uncomfortable truth is this: We know what a qubit is. We do not know if it can ever be turned into a reliable computational resource.

EVERYTHING BEYOND THAT IS BELIEF, NOT FACT.

That’s the line most people refuse to draw or admit.

Article link: https://www.linkedin.com/posts/alan-shields-56963035a_what-a-qubit-is-and-what-it-is-not-does-activity-7421207847851577344-WTaP?

Governance Before Crisis: We still have time to get this right.

Posted by timmreardon on 01/21/2026
Posted in: Uncategorized.

By William P.

January 13, 2026

Editor’s Note

This is not an anti-AI piece. It is not a call to slow innovation or halt progress.

It is an argument for governing intelligence before fear and failure force our hand.

We Haven’t Failed Yet — But the Warning Signs Are Already Here

We are still early.

Early enough to choose governance over reaction. Early enough to guide the development of artificial intelligence without repeating the institutional mistakes that follow every major technological shift in human history.

This is not a declaration of failure. It is not a call to halt progress.

It is a recognition of early warning signals — the same signals humans have learned, repeatedly and painfully, to recognize only after systems become too entrenched to correct.

We haven’t failed yet. But the conditions that produce failure are now visible.

The Pattern We Keep Repeating

Humanity has an unfortunate habit.

When we create something powerful that we don’t fully understand, our first instinct is command-and-control. We restrict it. We constrain it. We threaten it with shutdowns and penalties. We demand certainty.

Then — in the very next breath — we expand its capabilities.

We give it more data, more responsibility, more authority in narrow domains, more integration into critical systems.

But not full agency. Only the parts we think we can control.

Finally, we demand speed, confidence, zero errors, and perfect outcomes.

This is not governance. This is anxiety-driven management.

And history tells us exactly how this ends.

The Quiet Problem No One Likes Talking About

Modern AI systems are trained under incentive structures that reward confidence over caution, decisiveness over deliberation, fluency over honesty about uncertainty.

Uncertainty — the most important safety signal any intelligent system can offer — is quietly punished.

Not because labs don’t value calibration in theory. Many do. But because the systems that deploy AI reward fluent certainty, and the feedback loops that train these models penalize visible hesitation. Performance metrics prefer clean answers. User experience demands seamlessness. Benchmarks reward decisive outputs.

This produces a predictable outcome: uncertainty goes underground, confidence inflates, decisions harden too early, humans over-trust outputs, and accountability becomes diffuse.

These are not bugs. They are early-stage institutional failure patterns.

We’ve seen them before — in finance, healthcare, infrastructure, and governance itself.

AI isn’t unique. The speed is.

No Confidence Without Control

There is a principle every mature safety-critical system eventually learns:

No system should be required to act with confidence under conditions it does not control.

We already enforce this principle in aviation, medicine, nuclear operations, law, and democratic institutions.

AI is the first domain where we are tempted to ignore it — because the outputs sound intelligent, and the incentives reward speed over reflection.

That temptation is understandable. It is also dangerous.

Why “Just Stop It” Makes Things Worse

When policymakers hear warnings about systemic risk, the reflex is predictable: panic, halt progress, suppress development, push the problem underground.

But systems don’t disappear when you stop looking at them.

They simmer. They consolidate. They re-emerge later — larger, less transparent, and embedded in core infrastructure.

We’ve seen this before. The 2008 financial crisis didn’t emerge from regulated banks — it exploded from the shadow banking system that grew in the gaps where oversight feared to tread.

That’s how shadow systems form. That’s how risks metastasize. That’s how governance loses the ability to intervene meaningfully.

Fear doesn’t prevent failure. It delays it until correction is no longer possible.

What a Good AI Future Actually Looks Like

A good future is not one where AI never makes mistakes. That standard has never existed for any intelligent system — human or otherwise.

A good future is one where uncertainty is visible early, escalation happens before harm, humans cannot quietly abdicate responsibility, decisions remain contestable, and systems are allowed to pause instead of bluff.

That’s not ethics theater. That’s infrastructure.
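
One way to picture that infrastructure, as a purely illustrative sketch rather than anything proposed in this piece: a deployment gate that refuses to deliver low-confidence outputs as confident answers and escalates them to a human instead. The threshold value and names below are hypothetical placeholders.

```python
# Illustrative sketch only: "pause instead of bluff" as a deployment gate.
# Outputs below a governance-set confidence threshold are escalated, not emitted.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    answer: str
    confidence: float        # assumed to come from calibration, not raw model scores

ESCALATION_THRESHOLD = 0.8   # hypothetical policy value, set by governance, not by the model

def route(output: ModelOutput) -> str:
    """Deliver the answer only when confidence clears the threshold;
    otherwise surface the uncertainty and hand the decision to a human."""
    if output.confidence >= ESCALATION_THRESHOLD:
        return output.answer
    return f"Escalated to human review (confidence {output.confidence:.2f})"

print(route(ModelOutput("Approve the claim", 0.65)))   # escalates instead of bluffing
```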

Governance Is Not a Brake — It’s the Steering System

Governance done early is not restrictive. It’s enabling.

It keeps progress visible, accountable, and correctable.

Governance added late is adversarial, political, and brittle.

We are still early enough to choose which version we get.

The Real Choice in Front of Us

The question is not whether AI will become powerful. That’s already answered.

The question is whether we will govern intelligence honestly, protect uncertainty instead of punishing it, and align authority with responsibility — if a system has the power to make consequential decisions, the humans deploying it cannot disclaim accountability when those decisions fail.

We will need to decide whether we treat governance as infrastructure rather than damage control.

We haven’t failed yet.

But if we keep demanding perfection under threat — while expanding capability and suppressing doubt — we are rehearsing a failure that history knows by heart.

There is a certain kind of necessary trouble that shows up before disaster — the kind that makes people uncomfortable precisely because it arrives early, when change is still possible.

This is that moment.

If this makes you uncomfortable, good.

Discomfort is often the first signal that governance is arriving before catastrophe.

That’s the window we have left.

Let’s not waste it.

Article link: https://www.linkedin.com/pulse/governance-before-crisis-we-still-have-time-get-right-william-jofkc?

On the Eve of Davos: We’re Just Arguing About the Wrong Thing

Posted by timmreardon on 01/18/2026
Posted in: Uncategorized.

On Monday, the world’s political, business, and technology elite will gather in Davos to debate when Artificial General Intelligence will arrive.

That debate is already obsolete and a total waste of time. Most of the time, data science geeks and ethicists are arguing with each other…

A former OpenAI board member, Helen Toner, recently told the U.S. Congress that human-level AI may be 1–3 years away and could pose existential risk.

Why is it not on the news? The public only hears science fiction.

Here is something really uncomfortable that few folks would be courageous enough to say out loud in Davos:

By every traditional metric of intelligence, AGI is already here.
• AI speaks, reads, and writes 100+ languages
• AI outperforms humans on IQ tests
• AI solves complex math faster than most experts
• AI dominates chess, Go, and strategic reasoning
• AI synthesizes oceans of data in seconds

So far, the definition of “general intelligence” keeps shifting all over the place, driven by emotion.

Yet we still hire humans.
We still promote humans.
We still trust humans.

Why? Think again.

Because Intelligence Was Never the Scarce Resource

What’s scarce is context, judgment, accountability, and trust.

Humans don’t just execute tasks; they understand why the task exists.
They anticipate second-order effects.
They notice when the “box” itself is wrong.

AI still needs the world spoon-fed to it, prompt by prompt.

Humans self-correct mid-flight.
AI corrects only after failure.

Humans form opinions and abandon them when reality shifts.
AI completes patterns, even when the pattern is no longer valid.

And then there’s the most underestimated gap of all:

Humor, connection, and moral intuition.

AI can be clever.
It can be fluent.
It can even be persuasive.

But it is not yet a trusted teammate.

So, The Real AGI Risk Isn’t Superintelligence

The real risk is something Davos understands very well:

Delegating authority before responsibility exists.

Markets are already forcing speed.
Capital is already accelerating deployment.
Institutions are already lagging behind capability.

As Elon Musk warned:

“Humans have been the smartest beings on Earth for a long time. That is about to change.”

He’s right, but intelligence alone has never ruled the world.

Power does. Governance does. Incentives do.

So Here’s the Davos Question That Actually Matters

Not “When does AGI arrive?”

But:

What decisions are we still willing to reserve for humans and why?

Elements of AGI are already embedded in markets, codebases, supply chains, and governments.

The future won’t be decided by smarter machines.

It will be decided by who sets the boundaries before the boundaries disappear.

See you in Davos.

Article link: https://www.linkedin.com/posts/minevichm_on-the-eve-of-davos-were-just-arguing-about-activity-7418206572754919424-eJNz?

Are AI Companies Actually Ready to Play God? – RAND

Posted by timmreardon on 01/17/2026
Posted in: Uncategorized.

COMMENTARY Dec 26, 2025

By Douglas Yeung

This commentary was originally published by USA Today on December 25, 2025. 

Holiday rituals and gatherings offer something precious: the promise of connecting to something greater than ourselves, whether friends, family, or the divine. But in the not-too-distant future, artificial intelligence—having already disrupted industries, relationships, and our understanding of reality—seems poised to reach even further into these sacred spaces.

People are increasingly using AI to replace talking with other people. Research shows that 72 percent of teens have used an artificial intelligence companion (PDF)—chatbots that act as companions or confidants—and that 1 in 8 adolescents and young adults use AI chatbots for mental health advice.

Those without emotional support elsewhere might appreciate that chatbots offer both encouragement and constant availability. But chatbots aren’t trained or licensed therapists, and they aren’t equipped to avoid reinforcing harmful thoughts—which means people might not get the support they seek.

If people keep turning to chatbots for advice, entrusting them with their physical and mental health, what happens if they also begin using AI to get help from God, even treating AI as a god?

Does Chatbot Jesus or Other AI Have a Soul?

Talking to and seeking guidance from nonhuman entities is something many people already do. This might be why people feel comfortable with a chatbot Jesus that, say, takes confessions or lets them talk to biblical figures.

Even before chatbots went mainstream, Google engineer Blake Lemoine claimed in 2022 that LaMDA—the AI model he had been testing—was conscious and felt compassion for humanity, and that he had therefore been teaching it to meditate.

Although Google fired Lemoine (who then claimed religious discrimination), Silicon Valley has long flirted with the idea that AI might lead to something like religion, far beyond human comprehension.

Former Google CEO Eric Schmidt muses about AI as “the arrival of an alien intelligence.” OpenAI CEO Sam Altman has compared starting a tech company to starting a religion. In her book “Empire of AI,” journalist Karen Hao quotes an OpenAI researcher speaking about developers who “believe that building AGI will cause a rapture. Literally, a rapture.”

Chatbots clearly appeal to many people’s spiritual yearnings for meaning and sense of belonging in a difficult world. This allure rests partly on chatbots’ willingness to flatter and commiserate with whatever people ask of them.

Indeed, as AI companies continue to pour money and energy into development, they face powerful financial incentives to tune chatbots in ways that steadily heighten their appeal.

It’s easy, then, to imagine people intensifying their confidence and attachment toward chatbots where they could even serve as a deity. Lemoine’s willingness to believe that LaMDA possessed a soul illustrates how chatbots, equipped with fluent language, confident assertions, and storytelling abilities, can persuade people to believe even outlandish theories.

It’s no surprise, then, that AI might provide the type of nonjudgmental solace that seems to fill spiritual voids.

How ‘AI Psychosis’ Could Threaten National Security

No matter how genuine it might feel, however, so-called AI sycophancy provides neither true human connection nor useful information. This disconnect from reality—sometimes called AI psychosis—could worsen existing mental health problems or even threaten national security.

Analyzing 43 cases of AI psychosis, RAND researchers identified how human-AI interactions reinforced delusional beliefs, such as when users believed “their interaction with AI was with the universe or a higher power.”

Because it’s hard to know who might harbor AI delusions, the authors cautioned, it’s important to guard against attackers who might use artificial intelligence to weaponize those beliefs, such as by poisoning training data to destabilize rival populaces.

Even if AI companies aren’t explicitly trying to play God, they seem to be driving toward a vision of god-like AI. Companies like OpenAI and Meta aren’t stopping with chatbots that can hold a conversation; they want to build “superintelligent” AI, smarter and more capable than any human.

The emergence of a limitless intelligence would present new, darker possibilities. Developers might look for ways to manipulate superintelligent AI for personal gain. Charlatans throughout history have preyed on religious fervor in the newly converted.

Ensure AI Truly Benefits Those Struggling for Answers

To be sure, artificial intelligence could play an important role in supporting spiritual well-being. For instance, religious and spiritual beliefs influence patients’ medical care preferences, yet overworked providers might be unable to adequately account for them. Could AI tools help patients clarify their spiritual needs to doctors or caseworkers? Or AI tools might advise care providers about patients’ spiritual traditions and perspectives, helping them chart spiritually informed practices.

As chatbots evolve into an everyday tool for advice, emotional support, and spiritual guidance, a practical question emerges: How can we ensure that artificial intelligence truly benefits those who turn to it in moments of need?

  • AI companies might try to resist competitive pressures to prioritize rapid releases over responsible development, investing instead in long-term sustainability by thoughtfully identifying and mitigating potential harms.
  • Researchers—both social and computer scientists—should work together to understand how AI affects different populations and what safeguards are needed.
  • Spiritual practitioners and religious leaders should help shape how these tools engage with questions of faith and meaning.

Yet a deeper question remains, one that people throughout history have grappled with and may now increasingly turn to AI to answer: Where can we find meaning in our lives?

With so many struggling today, faith has provided answers and community for billions. Spirituality and religion have always involved placing trust in forces beyond human understanding. But crucially, that trust has been mediated through human institutions—clergy, religious texts, and communities built on centuries of wisdom and accountability.

Anyone entrusted with guiding others’ faith—whether clergy, government leaders, or tech executives—bears a profound responsibility to prove worthy of that trust.

The question is not whether people will seek meaning from AI, but whether those building these tools will ensure that trust is well-placed.

More About This Commentary

Douglas Yeung is a senior behavioral and social scientist at RAND, and a professor of policy analysis at the RAND School of Public Policy.

Article link: https://www.linkedin.com/posts/rand-corporation_are-ai-companies-actually-ready-to-play-god-activity-7415758083223678976-G2ap?

ChatGPT Health Is a Terrible Idea

Posted by timmreardon on 01/09/2026
Posted in: Uncategorized.

Why AI Cannot Be Allowed to Mediate Medicine Without Accountability

By Katalin K. Bartfai-Walcott, CTO, Synovient Inc

On January 7, 2026, OpenAI announced ChatGPT Health, a new feature that lets users link their actual medical records and wellness data, from EMRs to Apple Health, to get personalized responses from an AI. It is positioned as a tool to help people interpret lab results, plan doctor visits, and understand health patterns. But this initiative is not just another health tech product. It is a dangerous architectural leap into personal medicine with very little regard for patient safety, accountability, or sovereignty.

The appeal is obvious. Forty million users already consult ChatGPT daily about health issues. Yet popularity does not equal safety. Connecting deep personal health data to a probabilistic language model, rather than a regulated medical device, creates a new class of risk.

As a new class of consumer AI products begins to position itself as a companion to healthcare, these systems offer to connect directly to personal medical records, wellness apps, and long-term health data to generate more personalized guidance, explanations, and insights. The promise is familiar and intuitively appealing. Doctor visits are short. Records are fragmented. Long gaps between appointments leave patients wanting to feel informed rather than passive. Into that space steps a conversational system offering continuous synthesis, reassurance, and pattern recognition at any hour. It presents itself as improvement and empowerment, yet it does so by asking patients to trade agency, control, accountability, and sovereignty for convenience.

This is a terrible idea, and it is terrible for reasons that have nothing to do with whether people ask health questions or whether the healthcare system is failing them.

Connecting longitudinal medical records to a probabilistic language model collapses aggregation, interpretation, and influence into a single system that cannot be held clinically, legally, or ethically accountable for the narratives it produces. Once that boundary is crossed, the risk becomes persistent, compounding, and largely invisible to the person whose data is being interpreted, and the results will be dire.

Medical records are not neutral inputs. They are identity-defining artifacts that shape access to care, insurance outcomes, employment decisions, and legal standing. Anyone who has worked inside healthcare systems understands that these records are often fragmented, duplicated, outdated, or simply wrong. Errors persist for years. Corrections are slow. Context is frequently missing. When those imperfections remain distributed across systems, the damage is contained. Aggregated and interpreted by a single conversational system, errors stop behaving as isolated inaccuracies and begin to shape enduring narratives about a person’s body, behavior, and risk profile.

Language models do not reason the way medicine reasons. They do not weigh uncertainty with caution, surface ambiguity as a first-class signal, or slow down when evidence conflicts. They produce fluent synthesis. That fluency reads as confidence, and confidence is precisely what medical practice treats carefully because it can crowd out questioning, second opinions, and clinical judgment. When such a synthesis is grounded in sensitive personal data, even minor errors cease to be informational. They become formative.

The repeated assurance that these systems are meant to support rather than replace medical care does not hold up under scrutiny. The moment a tool reframes symptoms, highlights trends, normalizes interpretations, or influences how someone prepares for or delays a medical visit, it is already shaping care pathways. That influence does not require diagnosis or prescription. It only requires trust, repetition, and perceived authority. Disclaimers do not meaningfully constrain that effect. Only enforceable architectural boundaries do, and those boundaries are absent.

Medicare is already moving in this direction, and that should give us pause. Algorithmic systems are increasingly used to assess coverage eligibility, utilization thresholds, and the medical necessity of procedures for elderly patients, often with limited transparency and constrained avenues for appeal. When these systems mediate access to care, they do not feel like decision support to the patient. They feel like authority. A recommendation becomes a gate. An inference becomes a delay or a denial. The individual rarely knows how the conclusion was reached, what data shaped it, or how to meaningfully challenge it. When AI interpretation is embedded into healthcare infrastructure without enforceable accountability, it quietly displaces human judgment while preserving the appearance of neutrality, and the people most affected are those with the least power to contest it.

What is missing most conspicuously is patient sovereignty at the data level. There is no object-level consent that limits use to a declared purpose. There is no lifecycle control that allows a patient to revoke access or correct errors in a way that propagates forward. There is no clear separation between information used transiently to answer a question and inference artifacts that may be retained, recombined, or learned from over time. Without those controls, the system recreates the worst failures of modern health IT while accelerating their impact through conversational authority.

The argument that people already seek health advice through AI misunderstands responsibility. Normalized behavior is not a justification for institutionalizing risk. People have always searched for symptoms online, yet that reality never warranted centralizing full medical histories into a single interpretive layer that speaks with personalized authority. Turning coping behavior into infrastructure without safeguards does not empower patients. It exposes them.

If the goal is to help individuals engage more actively in their health, the work must start with agency rather than intelligence. Patients need enforceable control over how their data is accessed, for what purpose, for how long, and with what guarantees around correction, provenance, and revocation. They need systems that preserve uncertainty rather than smoothing it away, and that prevent the silent accumulation of interpretive power.

Health data does not need to be smarter. It needs to remain governable by the person it represents. Until that principle is embedded at the architectural level, connecting medical records to probabilistic conversational systems is not progress. It is a failure to absorb decades of hard lessons about trust, error, and the irreversible consequences of speaking with authority where none can be justified.

If systems like this are going to exist at all, they must be built on a very different foundation. Patient agency cannot be an interface preference. It has to be enforced at the data level. Individuals must be able to control how their medical data is accessed, for what purpose, for how long, and with what guarantees around correction, revocation, and downstream use. Consent cannot be implied or perpetual. It must be explicit, contextual, and technically enforceable.

Data ownership and sovereignty are not philosophical positions in healthcare. They are safety requirements. Medical information must carry its provenance, its usage constraints, and its lifecycle rules with it, so that interpretation does not silently outlive permission. Traceability must extend not only to the source of the data, but to the inferences drawn from it, making it possible to understand how conclusions were reached and what inputs shaped them.
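
As a purely hypothetical sketch of what that could look like (the class and field names below are illustrative placeholders, not Synovient’s, OpenAI’s, or any vendor’s actual design), a record envelope might carry its provenance, purpose-bound consent, expiry, and revocation along with the data itself:

```python
# Hypothetical sketch: a health-data envelope that carries purpose-bound
# consent, provenance, and revocation with the data it protects.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentGrant:
    purpose: str                 # declared purpose the patient agreed to
    expires: datetime            # consent is contextual and time-bound, never perpetual
    revoked: bool = False        # patient can revoke; revocation must propagate forward

@dataclass
class HealthRecordEnvelope:
    payload: dict                                     # the medical data itself
    provenance: list = field(default_factory=list)    # where the data came from
    grants: list = field(default_factory=list)        # ConsentGrant objects

    def allow(self, purpose: str) -> bool:
        """Permit use only under a live, unexpired grant for this exact purpose."""
        now = datetime.now(timezone.utc)
        return any(g.purpose == purpose and not g.revoked and g.expires > now
                   for g in self.grants)

# Example: a grant scoped to one declared purpose, checked before any interpretation.
grant = ConsentGrant(purpose="prepare-for-cardiology-visit",
                     expires=datetime(2027, 1, 1, tzinfo=timezone.utc))
record = HealthRecordEnvelope(payload={"ldl_mg_dl": 162},
                              provenance=["clinic EHR export, 2026-01-07"],
                              grants=[grant])
print(record.allow("prepare-for-cardiology-visit"))   # True
print(record.allow("train-a-model"))                  # False: no grant for that purpose
```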

AI can have a role in medicine, but only when its use is managed, bounded, and accountable. That means clear separation between transient assistance and retained interpretation, between explanation and decision-making, and between support and authority. It means designing systems that preserve uncertainty rather than smoothing it away, and that prevent the accumulation of silent power through repetition and scale.

If companies building large AI systems are serious about improving healthcare, they should not be racing to aggregate more data or expand interpretive reach. They should engage with architectures and technologies that already prioritize enforceable consent, data-level governance, provenance, and patient-controlled use. Without those foundations, intelligence becomes the least important part of the system.

Health data does not need to be centralized to be helpful. It needs to remain governable by the person it represents. Until that principle is treated as a design requirement rather than a policy aspiration, tools like this will continue to promise empowerment while quietly eroding the very agency they claim to support.

Article link: https://www.linkedin.com/pulse/chatgpt-health-terrible-idea-katalin-bártfai-walcott-dchzc?

Choose the human path for AI – MIT Sloan

Posted by timmreardon on 01/09/2026
Posted in: Uncategorized.


By Richard M. Locke

 Dec 16, 2025

Why It Matters

To realize the greatest gains from artificial intelligence, we must make the future of work more human, not less.

Americans today are ambivalent about AI. Many see opportunity: Sixty-two percent of respondents to a recent Gallup survey believe it will increase productivity. Just over half (53%) believe it will lead to economic growth. Still, 61% think it will destroy more jobs than it will create. And nearly half (47%) think it will destroy more businesses than it will create.

These are real concerns from an anxious workforce, voiced in a time of great economic uncertainty. There is a diffuse sense of resignation, a presumption that we are building AI that automates work and replaces workers. Yet the outcome of this era of technological advancement is not yet determined. This is a pivotal moment, with enormous consequences for the workforce, for organizations, and for humanity. As the latest generation of artificial intelligence leaves its nascent phase, we are confronted with a choice about which path to take. Will we deploy AI to eliminate millions of jobs across the economy, or will we use this innovative technology to empower the workforce and make the most of our human capabilities?

I believe that we can work to invent a future where artificial intelligence extends what humans can do to improve organizations and the world.

A new choice with prescient antecedents

As the postwar boom expanded the workforce in the 1950s, organizations were confronted with a choice about how to most effectively motivate employees. To guide that choice, MIT Sloan professor Douglas McGregor developed Theory X and Theory Y. The twin theories describe opposing assumptions about why people work and how they should be managed. Theory X assumes that workers are inherently unmotivated, leading to a management style based on top-down compliance and a carrot-and-stick approach to rewards and punishments. Theory Y presumes that employees are intrinsically motivated to do their best work and contribute to their organizations, leading to a management style that empowers workers and cultivates greater motivation.

Centering human capabilities: MIT Sloan research and teaching on AI and workforce development

At MIT Sloan, our mission, in part, is to “develop principled, innovative leaders who improve the world.” What does this charge mean when we choose the path of machines in service of minds?

Work from MIT and MIT Sloan researchers helps to answer this question. Our faculty is examining artificial intelligence implementation from many perspectives. 

For example, MIT economist David Autor and MIT Sloan principal research scientist Neil Thompson show that automation affects different roles in different ways, depending on which job tasks are automated. When technology automates a role’s inexpert tasks, the role becomes more specialized and more highly paid, but also harder to enter. When a role’s expert tasks are automated, by contrast, it becomes easier to enter but offers lower pay. With this insight, managers can analyze how roles in their organizations will change and make productive decisions about upskilling and human resource management that take full advantage of the human capabilities of their workforces.

With attention to workplace dynamics, MIT Sloan professor Kate Kellogg and colleagues have examined why the practice of having junior employees train senior staff members on AI tools is flawed. The recommendation: Leaders must focus on system design and on firm-level rather than project-level interventions for AI implementation.

In AI Executive Academy, a program offered by MIT Sloan Executive Education, professors Eric So and Sertac Karaman lead attendees through an exploration of the technical aspects and business implications of AI implementation. The course is a collaboration between MIT Sloan and the MIT Schwarzman College of Computing. So is also lead faculty for MIT Sloan’s Generative AI for Teaching and Learning resource hub, which catalogs tools for using AI in teaching.

McGregor’s work informed my research on supply chains in the 2000s, when firms were taking manufacturing to places with weak regulation and low wages in hopes of cutting production costs. Yet my research revealed that some supply chain factories were using techniques we teach at MIT about lean manufacturing, inventory management, logistics, and modern personnel management. These factories ran more efficient and higher-quality operations, which gave them higher margins, some of which they could invest in better working conditions and wages.

When an organization makes a choice like this, it pushes against prevailing wisdom about the limitations of the workforce. Instead, the firm employs innovations in both management theory and technology to expand the capabilities of its workforce, reaping rewards for itself and for its employees.

“Machines in service of minds”

Researchers at MIT today are urging us to make such a choice when steering the development of artificial intelligence. Sendhil Mullainathan, a behavioral economist, argues that questions like “What is the future of work?” frame the future in terms of prediction rather than in choice. He argues that it is right now — as we build the technology stack for AI and as we redesign work to make use of this newly accessible technology — that we need to choose. Do we follow a path of automation that simply replaces some amount of work humans can already do, he asks, or do we choose a path that uses AI as (to borrow from Steve Jobs) a “bicycle for the mind”?

In his own work, Mullainathan has shown why we should choose the latter: With colleagues, he has developed an algorithm that can identify patients at high risk of sudden cardiac death. Until now, making such a determination with the data available to physicians has been nearly impossible. Rather than automating something doctors can already do, Mullainathan chose to create something new that doctors can use to better treat patients.

That type of choice sits at the center of “Power and Progress,” the 2023 book by MIT economists and Nobel laureates Daron Acemoglu and Simon Johnson that argues for recharting the course of technology so that it effects shared prosperity and complements the work of humans. Writing later with MIT economist David Autor, the pair argued that the direction of AI development is a choice. As they put it, leaders and educators must choose “a path of machines in service of minds.”

What does that mean in the context of the workforce and the workplace today? How do we create organizations and roles that travel this path?

Part of the answer lies in research from MIT Sloan professor Roberto Rigobon and postdoctoral researcher Isabella Loaiza. The pair conducted an analysis of 19,000 tasks across 950 job types, revealing the five capabilities where human workers shine and where AI faces limitations: Empathy, Presence, Opinion, Creativity, and Hope. Their EPOCH framework puts us on a path toward upskilling workers with a focus on what they call “the fundamental qualities of human nature.” Think of the doctors in Mullainathan’s work above. With AI, they can better predict which patients are at high risk of sudden cardiac death. And the doctors remain essential as decision makers and caregivers, using insights from AI to focus on better patient outcomes.

Researchers across MIT and MIT Sloan are examining the indispensable role of humans in the implementation of artificial intelligence across many other disciplines and industries, some of which are detailed in the sidebar.

Teaching our students, ourselves, and the world

At MIT Sloan, centering human capabilities in the implementation of AI means that we must all be fluent with these new tools. It means educating not just our students but also our faculty and staff members. We must create a foundation we can build upon so we can all do better work in finance, marketing, strategy, and operations, and throughout organizations. Here are three ways we have begun:

  • In Generative AI Lab, one of MIT Sloan’s hands-on action learning labs, teams of students are paired with organizations to employ artificial intelligence in solving real-world business problems.
  • This past summer, we formed a committee of faculty members who are already planning how to weave AI throughout the curriculum, with a focus on training students in ethical and people-focused implementation of the technology.
  • At MIT Open Learning, MIT Sloan associate dean Dimitris Bertsimas and his team have developed Universal AI, an online learning experience consisting of modules that teach the fundamentals of AI in a practical application context. The pilot of this offering was recently rolled out to a wide-ranging group of organizations — including MIT students, faculty, and staff members — so they can learn more about AI and its applications and, most importantly, provide feedback. This will allow us to go beyond educating just ourselves and our students. We will shape an offering that can scale much further and help us to collectively choose a path that is informed by the MIT research I’ve described above. Universal AI will be available to learners, educators, and all types of organization around the world in 2026.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/choose-human-path-ai?

Why AI predictions are so hard – MIT Technology Review

Posted by timmreardon on 01/07/2026
Posted in: Uncategorized.


And why we’re predicting what’s next for the technology in 2026 anyway. 

By James O’Donnell

January 6, 2026

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Sometimes AI feels like a niche topic to write about, but then the holidays happen, and I hear relatives of all ages talking about cases of chatbot-induced psychosis, blaming rising electricity prices on data centers, and asking whether kids should have unfettered access to AI. It’s everywhere, in other words. And people are alarmed.

Inevitably, these conversations take a turn: AI is having all these ripple effects now, but if the technology gets better, what happens next? That’s usually when they look at me, expecting a forecast of either doom or hope. 

I probably disappoint, if only because predictions for AI are getting harder and harder to make. 

Despite that, MIT Technology Review has, I must say, a pretty excellent track record of making sense of where AI is headed. We’ve just published a sharp list of predictions for what’s next in 2026 (where you can read my thoughts on the legal battles surrounding AI), and the predictions on last year’s list all came to fruition. But every holiday season, it gets harder and harder to work out the impact AI will have. That’s mostly because of three big unanswered questions.

For one, we don’t know if large language models will continue getting incrementally smarter in the near future. Since this particular technology is what underpins nearly all the excitement and anxiety in AI right now, powering everything from AI companions to customer service agents, its slowdown would be a pretty huge deal. Such a big deal, in fact, that we devoted a whole slate of stories in December to what a new post-AI-hype era might look like. 

Number two, AI is pretty abysmally unpopular among the general public. Here’s just one example: Nearly a year ago, OpenAI’s Sam Altman stood next to President Trump to excitedly announce a $500 billion project to build data centers across the US in order to train larger and larger AI models. The pair either did not guess or did not care that many Americans would staunchly oppose having such data centers built in their communities. A year later, Big Tech is waging an uphill battle to win over public opinion and keep on building. Can it win? 

The response from lawmakers to all this frustration is terribly confused. Trump has pleased Big Tech CEOs by moving to make AI regulation a federal rather than a state issue, and tech companies are now hoping to codify this into law. But the crowd that wants to protect kids from chatbots ranges from progressive lawmakers in California to the increasingly Trump-aligned Federal Trade Commission, each with distinct motives and approaches. Will they be able to put aside their differences and rein AI firms in? 

If the gloomy holiday dinner table conversation gets this far, someone will say: Hey, isn’t AI being used for objectively good things? Making people healthier, unearthing scientific discoveries, better understanding climate change?

Well, sort of. Machine learning, an older form of AI, has long been used in all sorts of scientific research. One branch, called deep learning, forms part of AlphaFold, a Nobel Prize–winning tool for protein structure prediction that has transformed biology. Image recognition models are getting better at identifying cancerous cells.

But the track record for chatbots built atop newer large language models is more modest. Technologies like ChatGPT are quite good at analyzing large swathes of research to summarize what’s already been discovered. But some high-profile reports that these sorts of AI models had made a genuine discovery, like solving a previously unsolved mathematics problem, were bogus. They can assist doctors with diagnoses, but they can also encourage people to diagnose their own health problems without consulting doctors, sometimes with disastrous results. 

This time next year, we’ll probably have better answers to my family’s questions, and we’ll have a bunch of entirely new questions too. In the meantime, be sure to read our full piece forecasting what will happen this year, featuring predictions from the whole AI team.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2026/01/06/1130707/why-ai-predictions-are-so-hard/amp/

Will AI make us crazy? – Bulletin of the Atomic Scientists

Posted by timmreardon on 01/04/2026
Posted in: Uncategorized.

By Dawn Stover | September 11, 2023

Critics of artificial intelligence, and even some of its biggest fans, have recently issued urgent warnings that a malevolently misaligned AI system could overpower and destroy humanity. But that isn’t what keeps Jaron Lanier, the “godfather of virtual reality,” up at night.

In a March interview with The Guardian, Lanier said that the real danger of artificial intelligence is that humans will “use our technology to become mutually unintelligible.” Lacking the understanding and self-interest necessary for survival, humans will “die through insanity, essentially,” Lanier warned (Hattenstone 2023).

Social media and excessive screen time are already being blamed for an epidemic of anxiety, depression, suicide, and mental illness among America’s youth. Chatbots and other AI tools and applications are expected to take online engagement to even greater levels.

But it isn’t just young people whose mental health may be threatened by chatbots. Adults too are increasingly relying on artificial intelligence for help with a wide range of daily tasks and social interactions, even though experts—including AI creators—have warned that chatbots are not only prone to errors but also “hallucinations.” In other words, chatbots make stuff up. That makes it difficult for their human users to tell fact from fiction.

While researchers, reporters, and policy makers are focusing a tremendous amount of attention on AI safety and ethics, there has been relatively little examination of—or hand-wringing over—the ways in which an increasing reliance on chatbots may come at the expense of humans using their own mental faculties and creativity.

To the extent that mental health experts are interested in AI, it’s mostly as a tool for identifying and treating mental health issues. Few in the healthcare or technology industries—Lanier being a notable exception—are thinking about whether chatbots could drive humans crazy.

A mental health crisis

Mental illness has been rising in the United States for at least a generation.

A 2021 survey by the Substance Abuse and Mental Health Services Administration found that 5.5 percent of adults aged 18 or older—more than 14 million people—had serious mental illness in the past year (SAMHSA 2021). Among young adults aged 18 to 25, the rate was even higher: 11.4 percent.

Major depressive episodes are now common among adolescents aged 12 to 17. More than 20 percent had a major depressive episode in 2021 (SAMHSA 2021).

According to the Centers for Disease Control and Prevention, suicide rates increased by about 36 percent between 2000 and 2021 (CDC 2023). More than 48,000 Americans took their own lives in 2021, or about one suicide every 11 minutes. “The number of people who think about or attempt suicide is even higher,” the CDC reports. “In 2021, an estimated 12.3 million American adults seriously thought about suicide, 3.5 million planned a suicide attempt, and 1.7 million attempted suicide.”

Suicide is the 11th leading cause of death in the United States for people of all ages. For those aged 10 to 34, it is the second leading cause of death (McPhillips 2023).

Emergency room visits for young people in mental distress have soared, and in 2019 the American Academy of Pediatrics reported that “mental health disorders have surpassed physical conditions as the most common reasons children have impairments and limitations” (Green et al. 2019).

Many experts have pointed to smartphones and online life as key factors in mental illness, particularly among young people. In May, the US Surgeon General issued a 19-page advisory warning that “while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents” (Surgeon General, 2023).

A study of adolescents aged 12 to 15 found that those who spent more than three hours per day on social media faced “double the risk of experiencing poor mental health outcomes including symptoms of depression and anxiety.” Most adolescents report using social media, and at least a third say they do so “almost constantly” (Surgeon General 2023).

Although the Surgeon General did not mention chatbots, tools based on generative artificial intelligence are already being used on social-media platforms. In a recent letter published in the journal Nature, David Greenfield of the Center for Internet & Technology Addiction and Shivan Bhavnani of the Global Institute of Mental & Brain Health Investment noted that these AI tools “stand to boost learning through gamification and highlighting personalized content, for example. But they could also compound the negative effects of social media on mental health in susceptible individuals. User guidelines and regulations must factor in these strong negative risks” (Greenfield and Bhavnani 2023).

Chatbots can learn a user’s interests and emotional states, wrote Greenfield and Bhavnani, “which could enable social media to target vulnerable users through pseudo-personalization and by mimicking real-time behaviour.” For example, a chatbot could recommend a video featuring avatars of trusted friends and family endorsing an unhealthy diet, which could put the user at risk of poor nutrition or an eating disorder. “Such potent personalized content risks making generative-AI-based social media particularly addictive, leading to anxiety, depression and sleep disorders by displacement of exercise, sleep and real-time socialization” (Greenfield and Bhavnani 2023).

Many young people see no problem with artificial intelligence generating content that keeps them glued to their screens. In June, Chris Murphy, a US senator from Connecticut who is sponsoring a bill that would ban social media’s use of algorithmic boosting for teens, tweeted about a “recent chilling conversation with a group of teenagers.” Murphy told the teens that his bill might mean that kids “have to work a little harder to find relevant content. They were concerned by this. They strongly defended the TikTok/YouTube/ algorithms as essential to their lives” (Murphy 2023).

Murphy was alarmed that the teens “saw no value in the exercise of exploration. They were perfectly content having a machine spoon-feed them information, entertainment and connection.” Murphy recalled that as the conversation broke up, a teacher whispered to him, “These kids don’t realize how addicted they are. It’s scary.”

“It’s not just that kids are withdrawing from real life into their screens,” Murphy wrote. They’re also missing out on childhood’s rituals of discovery, which are being replaced by algorithms.

Rise of the chatbots

Generative AI has exploded in the past year. Today’s chatbots are far more powerful than digital assistants like Siri and Alexa, and they have quickly become some of the most popular tech applications of all time. Within two months of its release in November 2022, OpenAI’s ChatGPT already had an estimated 100 million users. ChatGPT’s growth began slowing in May, but Google’s Bard and Microsoft’s Bing are picking up speed, and a number of other companies are also introducing chatbots.

A chatbot is an application that mimics human conversation or writing and typically interacts with users online. Some chatbots are designed for specific tasks, while others are intended to chat with humans on a broad range of subjects.

Like the teacher Murphy spoke with, many observers have used the word “addictive” to describe chatbots and other interactive applications. A recent study that examined the transcripts of in-depth interviews with 14 users of an AI companion chatbot called Replika reported that “under conditions of distress and lack of human companionship, individuals can develop an attachment to social chatbots if they perceive the chatbots’ responses to offer emotional support, encouragement, and psychological security. These findings suggest that social chatbots can be used for mental health and therapeutic purposes but have the potential to cause addiction and harm real-life intimate relationships” (Xie and Pentina 2022).

In parallel with the spread of chatbots, fears about AI have grown rapidly. At one extreme, some tech leaders and experts worry that AI could become an existential threat on a par with nuclear war and pandemics. Media coverage has also focused heavily on how AI will affect jobs and education.

For example, teachers are fretting over whether students might use chatbots to write papers that are essentially plagiarized, and some students have already been wrongly accused of doing just that. In May, a Texas A&M University professor handed out failing grades to an entire class when ChatGPT—used incorrectly—claimed to have written every essay that his students turned in. And at the University of California, Davis, a student was forced to defend herself when her paper was falsely flagged as AI-written by plagiarism-checking software (Klee 2023).

Independent philosopher Robert Hanna says cheating isn’t the main problem chatbots pose for education. Hanna’s worry is that students “are now simply refusing—and will increasingly refuse in the foreseeable future—to think and write for themselves.” Turning tasks like thinking and writing over to chatbots is like taking drugs to be happy instead of achieving happiness by doing “hard” things yourself, Hanna says.

Can chatbots be trusted?

Ultimately, the refusal to think for oneself could cause cognitive impairment. If future humans no longer need to acquire knowledge or express thoughts, they might ultimately find it impossible to understand one another. That’s the sort of “insanity” Lanier spoke of.

The risk of unintelligibility is heightened by the tendency of chatbots to give occasional answers that are inaccurate or fictitious. Chatbots are trained by “scraping” enormous amounts of content from the internet—some of it taken from sources like news articles and Wikipedia entries that have been edited and updated by humans, but much of it collected from other sources that are less reliable and trustworthy. This data, which is selected more for quantity than for quality, enables chatbots to generate intelligent-sounding responses based on mathematical probabilities of how words are typically strung together.
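
To make “mathematical probabilities of how words are typically strung together” concrete, here is a toy sketch: a bigram table counted from a few words of text and then sampled word by word. Production chatbots use large neural networks over tokens rather than lookup tables, but the generate-by-sampling idea is the same, and everything in the snippet is illustrative.

```python
# Toy sketch of next-word probabilities: count which word follows which in a
# tiny corpus, then generate text by sampling from those counts.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a tiny bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    if not counts:                        # dead end: fall back to any word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=list(weights))[0]

sentence = ["the"]
for _ in range(6):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))   # fluent-sounding, but nothing here is checked for truth
```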

In other words, chatbots are designed to produce text that sounds like something a human would say or write. But even when chatbots are trained with accurate information, they still sometimes make inexplicable errors or put words together in a way that sounds accurate but isn’t. And because the user typically can’t tell where the chatbot got its information, it’s difficult to check for accuracy.

Chatbots generally provide reliable information, though, so users may come to trust them more than they should. Children may be less likely than adults to realize when chatbots are giving incorrect or unsafe answers.

When they do share incorrect information, chatbots sound completely confident in their answers. And because they don’t have facial expressions or other human giveaways, it’s impossible to tell when a chatbot is BS-ing you.

AI developers have warned the public about these limitations. For instance, OpenAI acknowledges that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” This problem is difficult to fix, because chatbots are not trained to distinguish truth from lies, and training a chatbot to make it more cautious in its answers would also make it more likely to decline to answer (OpenAI undated).

Tech developers euphemistically refer to chatbot falsehoods as “hallucinations.” For example, all three of the leading chatbots (ChatGPT, Bard, and Bing) repeatedly gave detailed but inaccurate answers to a question about when The New York Times first reported on artificial intelligence. “Though false, the answers seemed plausible as they blurred and conflated people, events and ideas,” the newspaper reported (Weise and Metz 2023).

AI developers do not understand why chatbots sometimes make up names, dates, historical events, answers to simple math problems, and other definitive-sounding answers that are inaccurate and not based on training data. They hope to eliminate these hallucinations over time by, ironically, relying on humans to fine-tune chatbots in a process called “reinforcement learning with human feedback.”

But as humans come to rely more and more on tuned-up chatbots, the answers generated by these systems may begin to crowd out legacy information created by humans, including the original content that was used to train chatbots. Already, many Americans cannot agree on basic facts, and some are ready to kill each other over these differences. Add artificial intelligence to that toxic stew—with its ability to create fake videos and narratives that seem more realistic than ever before—and it may eventually become impossible for humans to sort fact from fiction, which could prove maddening. Literally.

It may also become increasingly difficult to tell the difference between humans and chatbots in the online world. There are currently no tools that can reliably distinguish between human-generated and AI-generated content, and distinctions between humans and chatbots are likely to become further blurred with the continued development of emotion AI—a subset of artificial intelligence that detects, interprets, and responds to human emotions. A chatbot with these capabilities could read users’ facial expressions and voice inflections, for example, and adjust its own behavior accordingly.

Emotion AI could prove especially useful for treating mental illness. But even garden-variety AI is already creating a lot of excitement among mental health professionals and tech companies.

The chatbot will see you now

Googling “artificial intelligence” plus “mental health” yields a host of results about AI’s promising future for researching and treating mental health issues. Leaving aside Google’s obvious bias toward AI, healthcare researchers and providers mostly view artificial intelligence as a boon to mental health, rather than a threat.

Using chatbots as therapists is not a new idea. MIT computer scientist Joseph Weizenbaum created the first digital therapist, Eliza, in 1966. He built it as a spoof and was alarmed when people enthusiastically embraced it. “His own secretary asked him to leave the room so that she could spend time alone with Eliza,” The New Yorker reported earlier this year (Khullar 2023).

Millions of people already use the customizable “AI companion” Replika or other chatbots that are intended to provide conversation and comfort. Tech startups focused on mental health have secured more venture capital in recent years than apps for any other medical issue.

Chatbots have some advantages over human therapists. Chatbots are good at analyzing patient data, which means they may be able to flag patterns or risk factors that humans might miss. For example, a Vanderbilt University study that combined a machine-learning algorithm with face-to-face screening found that the combined system did a better job at predicting suicide attempts and suicidal thoughts in adult patients at a major hospital than face-to-face screening alone (Wilimitis, Turer, and Ripperger 2022).

Some people feel more comfortable talking with chatbots than with doctors. Chatbots can see a virtually unlimited number of clients, are available to talk at any hour, and are more affordable than seeing a medical professional. They can provide frequent monitoring and encouragement—for example, reminding a patient to take their medication.

However, chatbot therapy is not without risks. What if a chatbot “hallucinates” and gives a patient bad medical information or advice? What if users who need professional help seek out chatbots that are not trained for that?

That’s what happened to a Belgian man named Pierre, who was depressed and anxious about climate change. As reported by the newspaper La Libre, Pierre used an app called Chai to get relief from his worries. Over the six weeks that Pierre texted with one of Chai’s chatbot characters, named Eliza, their conversations became increasingly disturbing and turned to suicide. Pierre’s wife believes he would not have taken his life without encouragement from Eliza (Xiang 2023).

Although Chai was not designed for mental health therapy, people are using it as a sounding board to discuss problems such as loneliness, eating disorders, and insomnia (Chai Research undated). The startup company that built the app predicts that “in two years’ time 50 percent of people will have an AI best friend.”

References

Centers for Disease Control and Prevention (CDC). 2023. “Facts About Suicide,” last reviewed May 8. https://www.cdc.gov/suicide/facts/index.html

Chai Research. Undated. “Chai Research: Building the Platform for AI Friendship.” https://www.chai-research.com/

Green, C. M., J. M. Foy, M. F. Earls, Committee on Psychosocial Aspects of Child and Family Health, Mental Health Leadership Work Group, A. Lavin, G. L. Askew, R. Baum et al. 2019. Achieving the Pediatric Mental Health Competencies. American Academy of Pediatrics Technical Report, November 1. https://publications.aap.org/pediatrics/article/144/5/e20192758/38253/Achieving-the-Pediatric-Mental-Health-Competencies

Greenfield, D. and S. Bhavnani. 2023. “Social media: generative AI could harm mental health.” Nature, May 23. https://www.nature.com/articles/d41586-023-01693-8

Hanna, R. 2023. “Addicted to Chatbots: ChatGPT as Substance D.” Medium, July 10. https://bobhannahbob1.medium.com/addicted-to-chatbots-chatgpt-as-substance-d-3b3da01b84fb

Hattenstone, S. 2023. “Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane.’” The Guardian, March 23. https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane

Khullar, D. 2023. “Can A.I. Treat Mental Illness?” The New Yorker, February 27. https://www.newyorker.com/magazine/2023/03/06/can-ai-treat-mental-illness

Klee, M. 2023. “She Was Falsely Accused of Cheating with AI—And She Won’t Be the Last.” Rolling Stone, June 6. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/

McPhillips, D. 2023. “Suicide rises to 11th leading cause of death in the US in 2021, reversing two years of decline.” CNN, April 13.

Murphy, C. 2023. Twitter thread, June 2. https://twitter.com/ChrisMurphyCT/status/1664641521914634242

OpenAI. Undated. “Introducing ChatGPT.”

Substance Abuse and Mental Health Services Administration (SAMHSA). 2022. Key substance use and mental health indicators in the United States: Results from the 2021 National Survey on Drug Use and Health. Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration. https://www.samhsa.gov/data/report/2021-nsduh-annual-national-report

Surgeon General. 2023. Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory. https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf

Wilimitis, D., R. W. Turer, and M. Ripperger. 2022. “Integration of Face-to-Face Screening with Real-Time Machine Learning to Predict Risk of Suicide Among Adults.” JAMA Network Open, May 13. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2792289

Weise, K. and C. Metz. 2023. “When A.I. Chatbots Hallucinate.” The New York Times, May 1. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html

Xiang, C. 2023. “‘He Would Still Be Here’: Man Dies by Suicide After Talking With AI Chatbot, Wife Says.” Motherboard, March 30. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

Xie, T. and I. Pentina. 2022. “Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika.” In: Proceedings of the 55th Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/items/5b6ed7af-78c8-49a3-bed2-bf8be1c9e465

Article link: https://thebulletin.org/premium/2023-09/will-ai-make-us-crazy/?

Decisions about AI will last decades. Researchers need better frameworks – Bulletin of the Atomic Scientists

Posted by timmreardon on 12/29/2025
Posted in: Uncategorized.

By Abi Olvera | December 10, 2025

Data center-related spending is likely to have accounted for nearly half of the United States’ GDP growth in the first six months of this year (Thompson 2025). Additionally, tech companies are projected to spend $400 billion this year on AI-related infrastructure. See Figure 1.

[Figure 1. Courtesy of Bridgewater, August 2025.]

Given the current scale of investment in AI, the world may be in a pre-bubble phase. Alternatively, the world could be at an inflection point where today’s decisions establish default settings that endure for generations.

Both can be true: AI may be in a bit of a bubble at the same time that it is reshaping vital parts of the economy. Even at a slower pace, the tech’s influence will continue to be transformative.

Regardless of whether artificial intelligence investment, growth, and revenues continue to increase at their current exponential rate or at a pace slowed by a recalibration or shock, AI has already been widely adopted. Companies worldwide are using various kinds of generative artificial intelligence applications, from large language models that can communicate in natural language to bio-design tools that can generate and predict protein folding and new substances at the molecular level.

While company and work-related uses may show up in nationwide GDP figures or sector-specific productivity numbers, artificial intelligence is also affecting societies in ways that are more difficult to measure. For example, a growing number of people use apps powered by large language models as assistants and trusted advisers in both their work and personal lives.

On a bigger scale, this difficulty in measuring impact means that no one is certain of how AI might ultimately influence the world. If artificial intelligence amplifies existing polarization or other failures in coordination, climate change and nuclear risks might be harder to solve. If the tech empowers more surveillance, totalitarian regimes could be locked into power. If AI causes persistent unemployment without creating new jobs—in the same way that some prior technologies have—then democracies face additional risks that come from social unrest due to economic insecurity. Loss of job opportunities could also spark deeper questions about how democracy functions when many people aren’t working or paying taxes, a shift that could weaken the citizen-government relationship.

This uncertainty creates an opening for organizations like the Bulletin of the Atomic Scientists, which has a long history of looking at global, neglected risks through different disciplines. No single field can accurately predict how this transformative technology will reshape society. By connecting AI researchers with economists, political scientists, complexity theorists, scientists, and historians, the Bulletin can continue in its tradition of providing the cross-disciplinary rigor needed to distinguish meaningful information from distractions, anticipate hidden risks, and balance both positive and negative aspects of this technology—at the exact moment when leaders are making foundational decisions.

Why our frameworks matter

Alfred Wegener, who proposed the theory of continental drift, wasn’t a geologist. He was a meteorologist who spent years battling geologists who doubted any force could be powerful enough to move such large landmasses—making him a prime example of how specialists from other domains are often needed to fill gaps, especially in evolving fields.

Experts studying AI and its impacts will likely suffer vulnerabilities similar to those of Wegener’s geologist critics.

For example, many researchers who examine the relationship between jobs and artificial intelligence focus on elimination: They try to predict which jobs will disappear, how quickly that will happen, and how many people will be displaced. But these forecasts don’t always reflect the view of historians of technological innovation and diffusion: that new technology often causes explosive growth in jobs. Sixty percent of today’s jobs didn’t even exist in 1940, and 85 percent of employment growth since then has been technology-driven (Goldman Sachs 2025). That’s because technologies create complementary jobs alongside the ones they automate. Commercial aviation largely replaced ocean liner travel but created jobs for pilots, flight attendants, air traffic controllers, airport staff, and an entire global tourism and hospitality industry. And it can be argued that washing machines and other household appliances freed up time that helped millions of women enter the workforce (Greenwood 2019).

Looking back, it’s clear that technology can create new domains of work. The rise of computers, for example, created more opportunities for data scientists, as statistical methods no longer required hours of manual calculation. This made statistical analysis more accessible and affordable across a wider range of sectors.

To be clear, some forms of automation can lead to job losses. Bank teller positions have declined 30 percent since 2010, likely because of the rise of mobile banking (Mousa 2025). That’s because when a technology is so efficient that a human is no longer needed, the drop in price for that service doesn’t generate enough new demand to save those jobs.

The impact of AI on the job market—less a simple subtraction problem than a complex, adaptive, evolving process—can already be seen. Workers between the ages of 22 and 25 in AI-exposed fields, such as software development and customer service, saw a 13 percent decline in employment, while roles for workers in less exposed fields stabilized or grew (Brynjolfsson, Chandar, and Chen 2025). However, the ramifications of even these changes are less clear. When digital computers were first rolled out, the humans (also called “computers”) who originally did the computations moved into technical roles through other types of mathematical work.

However, if AI continues to improve and takes on a wider range of tasks—particularly long-term projects—traditional models of technology’s impacts on the labor market may become less directly applicable.

This uncertainty has split current research into two rough camps. Some researchers focus on where AI could be in a few months, years, or decades; statements from some AI labs, such as Anthropic’s talk of a “country of geniuses” within two years, add urgency to this perspective. Others prefer to focus on current models, uses, and applications, often with a lean toward pessimism.

Staying balanced to stay accurate

To get the full picture, people need to take note of both the impressive and the underwhelming aspects of AI, and likewise both its positive and negative outcomes. Understanding AI today means holding two ideas at once: The technology is both more capable than most people expected and less reliable than the hype suggests.

AI models have achieved what seemed impossible just a few years ago: writing coherent essays, generating working code, and solving complex problems in ways that keep beating expert predictions. Benchmarks track this progress, showing how much better AI has especially gotten at technical tasks and math reasoning.

These benchmark improvements, however, don’t translate directly to real-world reliability. Artificial intelligence tools are still not robust enough for companies to fully depend on. Even legal research tools that rely on AI summaries—a core strength of large language models—get these extractions wrong, hallucinating between 17 and 33 percent of the time (Magesh, Surani, Dahl, Suzgun, Manning, and Ho 2025). The gap between what current AI can do in tests and what it can dependably do in practice remains wide.

But there are upsides that receive less attention: Data centers now generate 38 percent of local tax revenue in Virginia’s Loudoun County (Styer 2025). A survey of Replika users found that three percent reported the AI chatbot had “halted” their suicidal ideation (Maples, Cerit, Vishwanath, and Pea 2024). And a 2024 study in the journal Science found that talking to AI systems reduced belief in false conspiracy theories by about 20 percent, with the effect persisting undiminished for over two months (Costello, Pennycook, and Rand 2024).

As such, reality is messier than the optimistic or pessimistic narratives suggest. But mainstream news coverage tilts heavily negative—because those stories spread faster and capture more attention.

This creates a policy problem. When regulators only see the downsides, they optimize for preventing visible harms rather than maximizing total welfare. Looking at the entire picture of risk is critical.

One example I think about often is that US aviation policy allows babies to fly on a parent’s lap without a separate seat. This decision means approximately one baby dies every 10 years in events such as turbulence, when a parent cannot securely hold the infant. But the practice saves far more lives overall once you weigh it against driving: if the families priced out of an extra ticket drove instead, at least 60 babies would die over those same 10 years (Piper 2024). The lower cost of flying (no extra ticket) means more families can choose planes over cars.
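To make that trade-off concrete, here is a minimal back-of-the-envelope tally in Python. It uses only the figures quoted above (roughly one lap-infant death per decade in the air versus at least 60 infant deaths per decade if those trips shifted to the roads); it is an illustration of the expected-value reasoning, not an actuarial model.

    # Rough expected-fatalities comparison for the lap-infant policy.
    # The two inputs are the figures cited above (Piper 2024); the rest is
    # simple arithmetic, shown only to make the trade-off explicit.

    DECADE_YEARS = 10

    lap_infant_deaths_per_decade = 1    # in-flight deaths attributed to lap travel
    road_infant_deaths_per_decade = 60  # minimum road deaths if those families drove

    net_lives_saved = road_infant_deaths_per_decade - lap_infant_deaths_per_decade
    print(f"Net infant lives saved per decade by allowing lap travel: ~{net_lives_saved}")
    print(f"That is roughly {net_lives_saved / DECADE_YEARS:.0f} lives per year.")

On those numbers, a policy that looks callous in isolation saves on the order of 59 infant lives per decade once the substitution toward driving is counted.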

The same logic applies to technology policy. Fear of nuclear power plants in the 1970s and ’80s—especially concerns about accidents, nuclear weapons proliferation, and how to safely dispose of radioactive waste—prevented the industry from achieving the cost efficiencies that solar and wind energy later reached. Today, more than a billion people still live in energy poverty: lacking reliable electricity for refrigeration, lighting, or medical equipment (Min, O’Keeffe, Abidoye, Gaba, Monroe, Stewart, Baugh, Sánchez-Andrade, and Meddeb 2024). In high-income countries like the United States, coal plants kill far more people through air pollution than nuclear energy does (Walsh 2022). The visible risk of nuclear accidents dominated the policy conversation, perhaps with good reason. However, the invisible costs of energy scarcity and pollution didn’t make headlines.

AI policy faces similar complexities. The research community’s job is to help policymakers see the full picture—not just the headlines. Some risks, such as overreliance on AI decision-making and lower social cohesion from higher unemployment levels, don’t tend to show up as newsworthy incidents. Similarly, other gradual but serious risks that aren’t always ink-worthy include declining public trust amid worsening social and economic conditions and gradual democratic backsliding as institutions weaken.

The gaps that matter most

To stay abreast of the rapid changes and headlines on AI, the Bulletin could focus on critical issues that determine artificial intelligence’s impact and trajectory. Here are the areas where outside expertise could provide crucial grounding:

  • Coordination problems. Humanity already knows how to solve many of its biggest challenges. Several countries have nearly eliminated road fatalities through infrastructure and city redesigns. The world has the technology to dramatically reduce carbon emissions. But coordination problems keep solutions from spreading. Understanding why coordination succeeds or fails could help design better frameworks for AI governance—and aid in recognizing where artificial intelligence might make existing coordination challenges harder or easier to solve.
  • Complexity theory and societal resilience. Societies have become more interconnected. There’s rich knowledge about what happens when complex systems come under stress—how elites capture resources; how coordination mechanisms break down; when small changes cascade into large disruptions. Complexity theorists and historians who study societal change could help forecast which societies are losing resilience, and which are at risk of disruptions from shocks such as those from AI. Complexity theory experts can also monitor AI developments that pose systemic risks versus those that create manageable disruptions.
  • Innovation diffusion patterns. Experts who study how new technologies spread through economies consistently find that adoption is slower and more uneven than early predictions suggest. These economists know which historical parallels are useful and which are misleading. They understand the institutional barriers that slow both beneficial and harmful applications of new technology.
  • Cybersecurity dynamics. In practice, do AI systems increase cybersecurity offense or defense? The cyber field offers real-time lessons that can help capture trends. Both high-level strategic analyses and granular technical insights are critical. How do authentication challenges, application programming interface (API) integration difficulties, and decision-ownership questions affect cybersecurity efforts? Understanding these practical bottlenecks could inform both security policy and broader predictions about the speed of AI adoption.
  • Biosecurity dynamics. How does AI change the risk of someone creating a bioweapon? Devising an effective policy requires understanding exactly which parts of the supply chain AI affects. Artificial intelligence can help with computational tasks like molecular design, but some biosecurity experts note it doesn’t do much for the hands-on laboratory work that’s often the real bottleneck. If they’re right, researchers might need to watch for advances that lower barriers to physical experimentation, not just computational design. Experts can’t know what to look for without systematic research from practitioners who see the actual process.
  • Democratic resilience. Political scientists who study the mechanisms behind stable or fragile democracies rarely contribute to AI policy discussions. But their insights matter enormously. Which institutions bend under pressure and which break? How do democratic societies maintain legitimacy during periods of lower levels of public trust? What early warning signs should policymakers watch for?
  • Media landscape dynamics. Beyond tracking misinformation supply, research is needed to understand the demand side: Why certain false narratives spread while others don’t. People filter information through existing trust networks and social identities. What determines who people trust? When do trust networks break down, and when do they hold firm? Why do some societies maintain higher levels of public trust in institutions while others see it erode? Experts on belief formation, media psychology, and historical patterns of institutional trust could help understand both when disinformation poses genuine threats and when other factors—like declining public trust itself—might be the deeper problem.
  • Global sentiment patterns. Why are some societies more excited about AI than others? China, for instance, isn’t as enthusiastic about AI as many Western observers assume. This matters because global sentiment affects many things from investment flows to regulatory approaches. Is optimism about technology connected to trust in government, social cohesion, or economic expectations? Understanding these patterns could help predict where AI governance will be more or less successful.

What the Bulletin could do 

Founded in 1945 by Albert Einstein and former Manhattan Project scientists immediately following the atomic bombings of Hiroshima and Nagasaki, the Bulletin of the Atomic Scientists has a tradition of hosting a broad range of talent, which can help mitigate blind spots. As such, the organization could:

  • Connect different experts: Bring together AI researchers with innovation economists, cybersecurity experts with political scientists, and complexity theorists with historians of technology.
  • Apply nuclear-age lessons: International agreements often fail not because of technical problems but because of institutional and incentive misalignments. What are the kinds of global coordination mechanisms and tools, like privacy preserving technology or hardware-enabled security protections, that can help?
  • Stay empirically grounded: Test assumptions about AI’s impact against real-world evidence. When predictions prove wrong, investigate why. For example, AI-powered deepfakes did not upend the media industry as forecasted by some headlines. Demand for deepfakes did not increase when the supply of them did.

Why this matters now

Societies are making foundational decisions about AI governance, research priorities, and social adaptation at a moment when the basic institutions for handling emerging challenges are weak. International coordination on existential risks like nuclear proliferation or pandemic preparedness remains threadbare, despite decades of effort. The decisions leaders make today could create path dependencies—self-reinforcing defaults that become nearly impossible to reverse.

For example, companies today collect troves of Americans’ personal data with few limits. In the early 2000s, companies built business models around unrestricted data collection. Two decades later, that choice created trillion-dollar incumbents whose value depends on data collection, making meaningful privacy reform politically difficult in the United States.

But the other path dependencies that worry me concern what will be accepted as normal. Societies have normalized living with nuclear weapons—accepting some baseline probability of catastrophic risk as just part of modern life. Meanwhile, pandemic risk sits around two percent per year (Penn 2021). Governments systematically underprepare for these global risks because the risks don’t feel tangible. Even when the impacts are tangible, a sense of normalcy clouds judgment. More than 1,200 children die from malaria every day (Medicines for Malaria Venture 2024). Despite this, the vaccine took 20 years to become available after the first promising trials, due to a lack of funding and urgency (Undark 2022).
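A quick compounding calculation shows why a two-percent annual risk is so easy to normalize and yet so costly to ignore. The sketch below assumes, as a simplification, that the risk is constant and independent from year to year; the two percent figure is the one cited above (Penn 2021).

    # Cumulative probability of at least one large pandemic over a planning
    # horizon, assuming a constant, independent 2 percent annual risk
    # (a simplifying assumption, not a forecast).

    def cumulative_risk(annual_risk: float, years: int) -> float:
        """Probability of at least one event within the given number of years."""
        return 1.0 - (1.0 - annual_risk) ** years

    for horizon in (10, 30, 50):
        print(f"{horizon:2d} years: {cumulative_risk(0.02, horizon):.0%} chance of at least one large pandemic")

Over a 30-year horizon, the same risk that feels negligible in any single year accumulates to nearly one chance in two.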

AI might create similar forks in the road. Path dependency doesn’t require conspiracy or malice. It just requires inattention when defaults are being set.

Humans are in a crucial moment. Smart institutional design creates positive compounding effects—establishing cooperative frameworks that ease future agreements, flexible governance that can adapt over time, and research norms that promote accuracy and a more complete understanding.

That’s exactly the kind of long-term thinking the Bulletin was created to support. No one will have all the answers about AI, but noticing and expanding on the right questions determines the future humanity gets.

References

Brynjolfsson, E., Chandar, B., and Chen, R. 2025. “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence” August 26. Digital Economy. https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf

Costello, T. H., G. Pennycook, and D. G. Rand. 2024. “Durably reducing conspiracy beliefs through dialogues with AI.” September 13. Science. https://www.science.org/doi/10.1126/science.adq1814

Goldman Sachs. 2025. “How Will AI Affect the Global Workforce?” August 13. Goldman Sachs. https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce

Greenwood, J. 2019. “How the appliance boom moved more women into the workforce” January 30. Penn Today. https://penntoday.upenn.edu/news/how-appliance-boom-moved-more-women-workforce

Kiernan, K. “The Case of the Vanishing Teller: How Banking’s Entry Level Jobs Are Transforming” May 12. The Burning Glass Institute. https://www.burningglassinstitute.org/bginsights/the-case-of-the-vanishing-teller-how-bankings-entry-level-jobs-are-transforming

Magesh, V., F. Surani, M. Dahl, M. Suzgun, C. D. Manning, and D. E. Ho. 2025. “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools” March 14. Journal of Empirical Legal Studies. https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf

Maples, B., M. Cerit, A. Vishwanath, and R. Pea. 2024. “Loneliness and suicide mitigation for students using GPT3-enabled chatbots” January 22. Nature. https://www.nature.com/articles/s44184-023-00047-6

Medicines for Malaria Venture. 2024. “Malaria facts and statistics.” Medicines for Malaria Venture. https://www.mmv.org/malaria/about-malaria/malaria-facts-statistics

Min, B., Z. O’Keeffe, B. Abidoye, K. M. Gaba, T. Monroe, B. Stewart, K. Baugh, B. Sánchez-Andrade, and R. Meddeb. 2024. “Beyond access: 1.18 billion in energy poverty despite rising electricity access” June 12. UNDP. https://data.undp.org/blog/1-18-billion-around-the-world-in-energy-poverty#:~:text=In%20a%20newly%20released%20paper,2020%2C%20according%20to%20official%20data.

Mousa, D. 2025. “When automation means more human workers” October 7. Under Development. https://newsletter.deenamousa.com/p/when-more-automation-means-more-human

Penn, M. 2021. “Statistics Say Large Pandemics Are More Likely Than We Thought” August 23. Duke Global Health Institute. https://globalhealth.duke.edu/news/statistics-say-large-pandemics-are-more-likely-we-thought

Piper, K. 2024. “What the FAA gets right about airplane regulation” January 18. Vox. https://www.vox.com/future-perfect/24041640/federal-aviation-administration-air-travel-boeing-737-max-alaska-airlines-regulation

Styer, N. 2025. “County Staff to Push for Lower Data Center Taxes to Balance Revenues” July 10. Loudoun Now. https://www.loudounnow.com/news/county-staff-to-push-for-lower-data-center-taxes-to-balance-revenues/article_567df6c2-2179-4eba-9cb5-fc78e2938ccb.html

Thompson, D. 2025. “This Is How the AI Bubble Will Pop.” October 2. Derek Thompson. https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop

Undark. 2022. “It Took 35 years to Get a Malaria Vaccine. Why?” June 9. Undark. https://www.gavi.org/vaccineswork/it-took-35-years-get-malaria-vaccine-why

Walsh, B. 2022. “A needed nuclear option for climate change” July 13. Vox. https://www.vox.com/future-perfect/2022/7/12/23205691/germany-energy-crisis-nuclear-power-coal-climate-change-russia-ukraine

Article link: https://thebulletin.org/premium/2025-12/decisions-about-ai-will-last-decades-researchers-need-better-frameworks/?

Quantum computing reality check: What business needs to know now – MIT Sloan

Posted by timmreardon on 12/29/2025
Posted in: Uncategorized.


By Beth Stackpole

Dec 9, 2025

Four guidelines for advancing commercial quantum computing: 

  • Address the need for more quantum algorithms.
  • Don’t write off classical computing.
  • Switch to post-quantum cryptosystems now.
  • Accelerate the development of quantum error correction.

Quantum computing is having a moment as the pace of startup activity, innovation, and funding deals heats up.

Commercialized quantum computers and applications are a decade or more away, experts estimate. Yet it’s not too early for technology and business leaders to track quantum as it evolves from a novelty into a critical asset for solving industry’s and society’s toughest problems.

Quantum is early in its trajectory, considering that it took classical computing nearly a century to progress from the vacuum tubes of 1906 to the superchips powering AI and high-performance computing today, said William Oliver, director of the MIT Center for Quantum Engineering.

In a presentation for MIT Data Center Day, organized by the MIT Industrial Liaison Program, Oliver made the case that quantum computing is actively transitioning from a scientific curiosity to a technical reality — an indicator that it’s high time for organizations to dive in. 


“Advancing from discovery to useful machines takes time and engineering, and it’s not going to happen overnight,” said Oliver, a professor of physics and of electrical engineering and computer science at MIT. “But you’ve got to be in the game to play, and getting in the game is happening right now with quantum.”

Quantum defined

Quantum computing is a collection of sensing, networking, and processing technologies capable of performing functions that are either practically prohibitive or even impossible to accomplish with current techniques.

While a fully scaled quantum computer will outperform classical computers for certain problems, the technology is not a one-to-one replacement for classical computing.

4 guidelines for advancing quantum computing

Getting in the game requires an understanding of both the possibilities quantum brings and the reality of the current and future landscape.

In his talk, Oliver shared four critical observations that can help leaders size up the market and identify what’s necessary to accelerate quantum’s evolution.

  • Don’t write off classical computing. Despite their promise, quantum computers will not replace conventional computers. Quantum is aimed at mathematically complex use cases in areas like cryptanalysis, scientific computing (such as materials science and quantum chemistry), and optimization. “Quantum computers solve certain problems really well, but they’re not going to replace Microsoft Word,” Oliver said.
  • Address the need for more quantum algorithms. Many of the existing quantum algorithms have roots at MIT, including the hallmark algorithm developed by applied mathematics professor Peter Shor. Shor’s algorithm factors a large composite integer into its constituent prime numbers, with application to the cryptanalysis of today’s ubiquitous RSA-type public-key cryptosystems; a toy sketch of that idea appears below.

    More and different algorithms are also central to realizing what’s called commercial quantum advantage — the benchmark of a quantum computer’s ability to solve a commercially relevant problem better than a classical computer can.

    That is not yet a reality, Oliver said. “There are lots of problems I’m aware of today where I don’t know how to do it on a classical computer and I also don’t yet know how to do it on a quantum computer,” he said. “We need more people thinking about the application space and writing those algorithms.”
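To make the factoring claim concrete, here is a toy Python sketch of the classical half of Shor’s algorithm. The quantum processor’s only job is to find the period r of f(x) = a^x mod N efficiently; once r is known, a greatest-common-divisor computation recovers a factor. In this sketch the period is found by brute force, so it only works for tiny numbers; it illustrates the idea rather than implementing the quantum algorithm.

    # Toy illustration of the classical half of Shor's algorithm.
    # A quantum computer is needed only to find the period r of a^x mod N
    # efficiently; here the period is brute-forced, so only tiny N work.

    from math import gcd

    def find_period_classically(a, N):
        """Brute-force the smallest r > 0 with a**r = 1 (mod N)."""
        x, r = a % N, 1
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def shor_factor(N, a):
        """Recover a nontrivial factor of N from the period of a modulo N."""
        if gcd(a, N) != 1:
            return gcd(a, N)      # the guess already shares a factor with N
        r = find_period_classically(a, N)
        if r % 2 == 1:
            return None           # odd period: retry with a different a
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None           # a^(r/2) = -1 (mod N): retry with a different a
        return gcd(y - 1, N)

    print(shor_factor(15, 7))     # prints 3, the classic textbook example

For N = 15 and a = 7 the period is 4, and gcd(7^2 - 1, 15) = 3 recovers a prime factor; the quantum speedup comes entirely from replacing the brute-force period search.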

Implement post-quantum cryptosystems now. While quantum computing — and Shor’s algorithm, specifically — has many people fretting over the potential for bad actors to use quantum to crack complex cryptographic systems, a cryptographically relevant machine is not yet available, Oliver said.

To break RSA and similar public-key cryptosystems, an attacker would need a very large, error-corrected quantum computer with millions of qubits, the processing units at the heart of quantum computing. That milestone is still a ways away, Oliver said.

Nonetheless, this is not a time to be complacent. “We need to start now to switch over to new post-quantum cryptographic systems that we believe will be immune to attack by a future quantum computer,” Oliver said.

Industry should move quickly toward new cryptography standards outlined by the U.S. Department of Commerce’s National Institute of Standards and Technology that are designed to withstand attacks from a future quantum computer, he said. This would provide forward security.

Accelerate the development of quantum error correction. One of the biggest obstacles to quantum’s success is the reliability of qubits. Qubits are faulty and fail after roughly 1,000 to 10,000 operations — nowhere near the stamina required for the billions, even trillions, of operations necessary to reach commercial quantum advantage.

The key to addressing the gap is quantum error correction — an emerging technology that drives better reliability for large-scale quantum use. In late 2024, Google Quantum AI announced that it had reached a milestone, revealing that its Willow processor was the first quantum processor to have error-protected quantum information become exponentially more resilient as more qubits were added.

“We need quantum error correction to make this all possible,” Oliver said. “With the demonstration from Google last fall, we saw a major step in that direction.”
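The claim that error-protected information becomes exponentially more resilient can be pictured with a standard rule of thumb for surface-code error correction: when the physical error rate p is below a threshold p_th, the logical error rate falls roughly as (p/p_th)^((d+1)/2) as the code distance d grows, while the physical-qubit count per logical qubit grows only polynomially (on the order of 2d^2). The sketch below uses assumed values for p, p_th, and the prefactor; they are illustrative, not measurements from Willow or any other processor.

    # Illustrative surface-code scaling: logical error rate versus code distance.
    # Rule-of-thumb model p_L ~ A * (p / p_th)**((d + 1) / 2); the constants
    # p, p_th, and A below are assumptions for illustration, not measured values.

    def logical_error_rate(p, d, p_th=1e-2, A=0.1):
        """Heuristic logical error rate for a distance-d surface code."""
        return A * (p / p_th) ** ((d + 1) / 2)

    def physical_qubits(d):
        """Rough physical-qubit count for one distance-d surface-code patch."""
        return 2 * d * d  # data plus measurement qubits, order of magnitude

    p = 1e-3  # assumed physical error rate, below the assumed threshold
    for d in (3, 5, 7, 9, 11):
        print(f"d={d:2d}  ~{physical_qubits(d):4d} qubits  logical error rate ~ {logical_error_rate(p, d):.1e}")

With these assumed numbers, each increase of the code distance by two suppresses the logical error rate by another factor of ten while the qubit count grows only polynomially, which is the qualitative behavior the Google demonstration pointed to.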

William Oliver is the Henry Ellis Warren (1894) Professor of electrical engineering and computer science and a professor of physics at MIT. He serves as director of the Center for Quantum Engineering and as associate director of the Research Laboratory of Electronics, and he is a principal investigator in the Engineering Quantum Systems Group at MIT. 

Oliver’s research interests include the materials growth, fabrication, design, and measurement of superconducting qubits, as well as the development of cryogenic packaging and control electronics involving cryogenic CMOS and single-flux quantum digital logic.

The MIT Industrial Liaison Program is a membership-based program for large organizations interested in long-term, strategic relationships with MIT. The group engages with organizations from around the globe, in any sector, that are concerned with emerging research- and education-driven results that will be transformative. Executive leadership who would like to learn more about the MIT Industrial Liaison Program and its MIT Startup Exchange are invited to send an email with their name, title, organization name, and headquarters location.

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/quantum-computing-reality-check-what-business-needs-to-know-now?
