healthcarereimagined

Envisioning healthcare for the 21st century


New MIT report captures state of quantum computing – MIT Sloan

Posted by timmreardon on 10/27/2025
Posted in: Uncategorized.

by Beth Stackpole

 Aug 19, 2025

Why It Matters

Quantum computing is evolving into a tangible technology that holds significant business and commercial promise, although the exact timing of when it will impact those areas remains unclear, according to a new report led by researchers at the MIT Initiative on the Digital Economy.

The “Quantum Index Report 2025” charts the technology’s momentum with a comprehensive, data-driven assessment of the state of quantum technologies and the global landscape, from patents to the quantum workforce.

The inaugural report aims to make quantum computing and networking technologies more accessible to entrepreneurs, investors, teachers, and business decision makers — all of whom will play a critical role in how quantum computing is developed, commercialized, and governed. 

“There are a lot of folks who are interested in what’s going on in quantum, but the field is impenetrable to them,” said Jonathan Ruane, a research scientist at MIT IDE and editor-in-chief of the “Quantum Index Report.” The report is co-authored by researchers Elif Kiesow and Johannes Galatsanos from MIT IDE, and Carl Dukatz, Edward Blomquist, and Prashant Shukla from Accenture.

Senior business executives across industries are fast becoming what Ruane calls “quantum curious,” inspired in part by the rapid rise of artificial intelligence. “The speed at which AI is transforming industries has alerted managers to the concept that technologies that are simmering in the background can explode really quickly and have tremendous impact,” Ruane said. “They want to make sure they have competency and insights into quantum so they don’t get caught out on missing the next big thing.”

A wide range of quantum impacts

The “Quantum Index Report” team considered activity in the quantum sector through a broad range of perspectives, from both publicly available data and novel, original data. The entire report, raw data, and data visualizations are available on an interactive website. 

While the research team acknowledges the still-nascent nature of the quantum computing field and some inherent bias in the 2025 research, including a U.S. focus, Ruane stressed that there is substantial market momentum underway.

Insights from the “Quantum Index Report 2025” include the following:

Quantum processor performance is improving, with the U.S. leading the field. Two dozen manufacturers now commercially offer more than 40 quantum processing units (QPUs), the processing hardware for a quantum computer. This is an indicator that the technology is becoming more accessible to business. While there have been impressive advancements in performance, QPUs do not yet meet the requirements for running large-scale commercial applications such as chemical simulations or cryptanalysis.

Quantum technology patents are soaring, with the total number increasing fivefold from 2014 to 2024. Corporations and universities are spearheading innovation efforts, accounting for 91% of the patents filed, with corporations holding 54% and universities 37%. China held 60% of quantum patents as of 2024, followed by the U.S. and Japan.

Venture capital funding for quantum technology reached a new high point in 2024. Quantum computing firms received the most funding ($1.6 billion in publicly announced investments), followed by quantum software companies at $621 million. The researchers note that quantum received less than 1% of total venture capital funding worldwide.

Businesses are buzzing over quantum computing. The report tracks how often the technology was mentioned across more than 50,000 corporate communication vehicles, including press releases and earnings calls, from 2022 to 2024. Mentions rose significantly each quarter in 2024, outpacing previous years by a substantial margin. The researchers said this positively correlates with the maturing of the quantum market and the growing presence of quantum technology in mainstream business discourse.

Quantum skills and training are growing in importance as companies begin to focus on workforce development. The demand for quantum skills has nearly tripled since 2018, according to the report. In response, universities are establishing quantum hubs and standing up programs that connect business leaders with researchers. 

“What we are seeing here is rapid progress and developments across a range of vectors — not just the improvement of technology benchmarks or the performance of quantum processing units,” Ruane said. “We are also seeing impact across a wide range of areas that are important to business leaders. It sends a signal that there’s breadth and depth in development.”

READ THE “QUANTUM INDEX REPORT 2025” 

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/new-mit-report-captures-state-quantum-computing?

Introducing Quantum Echoes: a breakthrough algorithm on our Willow quantum chip – Google Research

Posted by timmreardon on 10/22/2025
Posted in: Uncategorized.

Today, we’re announcing a major algorithmic breakthrough published in Nature that marks a significant step toward the first beyond-classical, verifiable real-world application of quantum computing.

Google Research’s Quantum AI team has demonstrated the first-ever verifiable quantum advantage running the out-of-time-order correlator (OTOC) algorithm, which we call Quantum Echoes.

This is the first time any quantum computer has successfully run a verifiable algorithm on hardware that surpasses the ability of classical supercomputers.

What you should know about Quantum Echoes ✨ :

✨ Verifiable Advantage: The algorithm calculates a specific, predictable value, meaning its result can be verified by another quantum computer of similar caliber. This contrasts with non-verifiable random sampling experiments.

✨ Scale and Speed: Quantum Echoes is useful for learning the structure of quantum systems, from molecules to magnets to black holes, and we’ve demonstrated that it runs 13,000 times faster on our Willow quantum processor than the best classical algorithm on a supercomputer. This beyond-classical performance is enabled by the low error rates and long coherence times of our hardware.

✨ Near-Term Application:  A separate proof-of-principle experiment showed how data from Nuclear Magnetic Resonance (NMR) can be used to gain more information about chemical structure than existing methods, opening an avenue for a near-term application only possible on quantum computers.

Quantum computing enhanced NMR could become a powerful tool in drug discovery, helping determine how potential medicines bind to their targets, or in materials science for characterizing the molecular structure of new materials like polymers, battery components or even the materials that comprise our quantum bits (qubits).
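For readers curious about the quantity being measured, an out-of-time-order correlator can be simulated classically for a handful of qubits. The sketch below is a toy illustration only: the Haar-random evolution and the choice of Pauli operators are stand-ins, not Google's actual algorithm or the Willow hardware.

```python
import numpy as np

# Toy classical simulation of an out-of-time-order correlator (OTOC)
# on 3 qubits. Illustrative only; not Google's Quantum Echoes protocol.
rng = np.random.default_rng(0)
d = 2**3  # Hilbert-space dimension for 3 qubits

# A Haar-random unitary stands in for the system's forward time evolution U
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(A)

X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z
I2 = np.eye(2, dtype=complex)
W = np.kron(Z, np.kron(I2, I2))   # perturbation ("butterfly") on qubit 0
V = np.kron(I2, np.kron(I2, X))   # probe on qubit 2

Wt = U.conj().T @ W @ U           # Heisenberg picture: W(t) = U† W U

psi = np.zeros(d, dtype=complex)
psi[0] = 1.0                      # start in |000>

# OTOC: evolve forward, perturb, evolve backward (the "echo"), compare.
otoc = psi.conj() @ Wt.conj().T @ V.conj().T @ Wt @ V @ psi
print(abs(otoc))  # near 1 for trivial dynamics; shrinks as U scrambles
```

Since the operator product is unitary, the correlator's magnitude is at most 1; how quickly it decays from 1 is what signals information scrambling in the system.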

We remain focused on scaling our systems toward a full-scale, error-corrected quantum computer. The next step is Milestone 3 on our quantum hardware roadmap: a long-lived logical qubit.

Bravo to the Google Research Quantum AI team!

More in the blog by Vadim Smelyanskiy and Hartmut Neven: https://lnkd.in/dg8n7UiV

The Nature paper: https://lnkd.in/dGCgar4z

Technical blog on verifiable quantum advantage by Xiao Mi and Kostyantyn Kechedzhi: https://goo.gle/3JiHUc7

New paper on Quantum computation of molecular geometry via many-body nuclear spin echoes: https://lnkd.in/d-pTxba3

Article link: https://www.linkedin.com/posts/yossimatias_today-were-announcing-a-major-algorithmic-activity-7386780979618738176-Myh9?

Tell me about QUANTUM COMPUTING in 2-minutes or less, using language my kid can understand.

Posted by timmreardon on 10/17/2025
Posted in: Uncategorized.

Challenge accepted.

This was a question I got recently in a Q&A. I tried to channel my inner Hemingway. Big ideas, small words and short sentences!

So if you fancy learning something new today – here’s my take, and some useful resources worth checking out if you want a deeper dive.

⬇️

Imagine a computer that doesn’t just think in ones and zeros, like the ones we use today. A quantum computer uses “qubits” instead of bits. A bit can be a 1 or a 0. But a qubit can be both at the same time — this is called “superposition”. It’s like flipping a coin and having it be heads and tails until you look.
 
Quantum computers also use something called entanglement. When two qubits are entangled, what happens to one instantly affects the other, even if they’re far apart. This lets quantum computers connect ideas in powerful new ways.
 
Because of superposition and entanglement, a quantum computer can explore many answers at once instead of one by one. That makes it super fast for some problems. It could help discover new medicines, protect data (search “quantum safe”), fight climate change, or even train smarter (ethical) AI.
 
But quantum computers are very hard to build. Qubits are delicate and can lose their power if they get too hot or too noisy. Scientists all over the world are racing to make them stronger and more stable. Quantum computers have to be kept at extremely low temperatures (-459°F) which is even colder than in outer space!
 
If they succeed, quantum computers could solve problems so big that today’s fastest supercomputers would take thousands of years to finish. Quantum computers won’t replace classical computers – but they will help us to solve many problems that we’ve never been able to solve before.
 
Quantum computers are not just faster – they give us a whole new way to understand the world.

[263 words / 2 minutes]
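For readers comfortable with a little code, the two ideas above, superposition and entanglement, can be sketched with a tiny state-vector simulation. This is a toy model in plain NumPy, not real quantum hardware:

```python
import numpy as np

# Superposition: a Hadamard gate puts one qubit into an equal mix of 0 and 1
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
q = H @ np.array([1.0, 0.0])           # (|0> + |1>) / sqrt(2)
print(np.abs(q)**2)                    # 50/50 chance of seeing 0 or 1

# Entanglement: Hadamard followed by CNOT makes a Bell pair of two qubits
state = np.kron(q, np.array([1.0, 0.0]))   # second qubit starts in |0>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
bell = CNOT @ state
print(np.round(np.abs(bell)**2, 3))    # only |00> and |11> have weight:
                                       # measuring one qubit fixes the other
```

The four numbers in `bell` are the amplitudes for |00>, |01>, |10>, and |11>; the "coin that is heads and tails at once" is just those amplitudes being nonzero in more than one slot.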

⬇️

Want a Deeper Dive?

🥶 WATCH: Quantum computers explained by MKBHD [17 mins]
https://lnkd.in/eNdRycfu

📒 READ: Wired’s Easy Guide to Quantum Computing – Why It Works & How It Could Change The World
https://lnkd.in/eiuAHxnQ

📖 FREE book “The Quantum Decade” from IBM Institute for Business Value
https://lnkd.in/ejMCnKTX

🗺️ FUTURE: The Next 5 Years? Technology Atlas by IBM
https://lnkd.in/ePaWdATp

📝 LEARN: 10 FREE courses (most courses cost $2,500+; these 10 will get you started) https://lnkd.in/eM3k-Dtt

Article link: https://www.linkedin.com/posts/jeremypaulwaite_tell-me-about-quantum-computing-in-2-minutes-activity-7384129167895990272-XsbQ?

Why some quantum materials stall while others scale – MIT News

Posted by timmreardon on 10/15/2025
Posted in: Uncategorized.

In a new study, MIT researchers evaluated quantum materials’ potential for scalable commercial success — and identified promising candidates.

Zach Winn | MIT News

Publication Date:

October 15, 2025

People tend to think of quantum materials — whose properties arise from quantum mechanical effects — as exotic curiosities. But some quantum materials have become a ubiquitous part of our computer hard drives, TV screens, and medical devices. Still, the vast majority of quantum materials never accomplish much outside of the lab.

What makes certain quantum materials commercial successes and others commercially irrelevant? If researchers knew, they could direct their efforts toward more promising materials — a big deal since they may spend years studying a single material.

Now, MIT researchers have developed a system for evaluating the scale-up potential of quantum materials. Their framework combines a material’s quantum behavior with its cost, supply chain resilience, environmental footprint, and other factors. The researchers used their framework to evaluate over 16,000 materials, finding that the materials with the strongest quantum fluctuations in their electrons, that is, the highest quantum weight, also tend to be more expensive and environmentally damaging. The researchers also identified a set of materials that achieve a balance between quantum functionality and sustainability for further study.

The team hopes their approach will help guide the development of more commercially viable quantum materials that could be used for next generation microelectronics, energy harvesting applications, medical diagnostics, and more.

“People studying quantum materials are very focused on their properties and quantum mechanics,” says Mingda Li, associate professor of nuclear science and engineering and the senior author of the work. “For some reason, they have a natural resistance during fundamental materials research to thinking about the costs and other factors. Some told me they think those factors are too ‘soft’ or not related to science. But I think within 10 years, people will routinely be thinking about cost and environmental impact at every stage of development.”

The paper appears in Materials Today. Joining Li on the paper are co-first authors and PhD students Artittaya Boonkird, Mouyang Cheng, and Abhijatmedhi Chotrattanapituk, along with PhD students Denisse Cordova Carrizales and Ryotaro Okabe; former graduate research assistants Thanh Nguyen and Nathan Drucker; postdoc Manasi Mandal; Instructor Ellan Spero of the Department of Materials Science and Engineering (DMSE); Professor Christine Ortiz, also of DMSE; Professor Liang Fu of the Department of Physics; Professor Tomas Palacios of the Department of Electrical Engineering and Computer Science (EECS); Associate Professor Farnaz Niroui of EECS; Assistant Professor Jingjie Yeo of Cornell University; and PhD student Vsevolod Belosevich and Assistant Professor Qiong Ma of Boston College.

Materials with impact

Cheng and Boonkird say that materials science researchers often gravitate toward quantum materials with the most exotic quantum properties rather than the ones most likely to be used in products that change the world.

“Researchers don’t always think about the costs or environmental impacts of the materials they study,” Cheng says. “But those factors can make them impossible to do anything with.”

Li and his collaborators wanted to help researchers focus on quantum materials with more potential to be adopted by industry. For this study, they developed methods for evaluating factors like the materials’ price and environmental impact using their elements and common practices for mining and processing those elements. At the same time, they quantified the materials’ level of “quantumness” using an AI model created by the same group last year, based on a concept proposed by MIT professor of physics Liang Fu, termed quantum weight.

“For a long time, it’s been unclear how to quantify the quantumness of a material,” Fu says. “Quantum weight is very useful for this purpose. Basically, the higher the quantum weight of a material, the more quantum it is.”

The researchers focused on a class of quantum materials with exotic electronic properties known as topological materials, eventually assigning over 16,000 materials scores on environmental impact, price, import resilience, and more.

For the first time, the researchers found a strong correlation between the material’s quantum weight and how expensive and environmentally damaging it is.

“That’s useful information because the industry really wants something very low-cost,” Spero says. “We know what we should be looking for: high quantum weight, low-cost materials. Very few materials being developed meet that criteria, and that likely explains why they don’t scale to industry.”

The researchers identified 200 environmentally sustainable materials and further refined the list down to 31 material candidates that achieved an optimal balance of quantum functionality and high-potential impact.
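As a rough illustration of this kind of multi-criteria screen, the sketch below filters a synthetic population of materials on quantum weight, cost, and environmental impact. All numbers, distributions, and cutoffs here are invented for the example; the paper's actual data and scoring method differ.

```python
import numpy as np

# Toy multi-criteria screen in the spirit of the framework described above.
rng = np.random.default_rng(1)
n = 16000
quantum_weight = rng.gamma(2.0, 1.0, size=n)  # higher = "more quantum"
# Cost and impact drawn correlated with quantum weight, echoing the
# study's finding that high quantum weight tends to cost more.
env_impact = 0.5 * quantum_weight + rng.lognormal(0.0, 0.5, size=n)
cost = 0.3 * quantum_weight + rng.lognormal(0.0, 1.0, size=n)

# Keep materials that are strongly quantum yet cheap and low-impact,
# mirroring the "high quantum weight, low-cost" target quoted above.
functional = quantum_weight > np.quantile(quantum_weight, 0.80)
sustainable = env_impact < np.quantile(env_impact, 0.20)
affordable = cost < np.quantile(cost, 0.50)
candidates = np.flatnonzero(functional & sustainable & affordable)

print(len(candidates))  # only a handful survive all three cuts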

The researchers also found that several widely studied materials exhibit high environmental impact scores, indicating they will be hard to scale sustainably. “Considering the scalability of manufacturing and environmental availability and impact is critical to ensuring practical adoption of these materials in emerging technologies,” says Niroui.

Guiding research

Many of the topological materials evaluated in the paper have never been synthesized, which limited the accuracy of the study’s environmental and cost predictions. But the authors are already working with companies to study some of the promising materials identified in the paper.

“We talked with people at semiconductor companies that said some of these materials were really interesting to them, and our chemist collaborators also identified some materials they find really interesting through this work,” Palacios says. “Now we want to experimentally study these cheaper topological materials to understand their performance better.”

“Solar cells have an efficiency limit of 34 percent, but many topological materials have a theoretical limit of 89 percent. Plus, you can harvest energy across all electromagnetic bands, including our body heat,” Fu says. “If we could reach those limits, you could easily charge your cell phone using body heat. These are performances that have been demonstrated in labs, but could never scale up. That’s the kind of thing we’re trying to push forward.”

This work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.

Article link: https://news.mit.edu/2025/why-some-quantum-materials-stall-while-others-scale-1015

What’s next for AI in 2025 – MIT Technology Review

Posted by timmreardon on 10/11/2025
Posted in: Uncategorized.


You already know that agents and small language models are the next big things. Here are five other hot trends you should watch out for this year.

By James O’Donnell, Will Douglas Heaven, and Melissa Heikkilä

January 8, 2025

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

For the last couple of years we’ve had a go at predicting what’s coming next in AI. A fool’s game given how fast this industry moves. But we’re on a roll, and we’re doing it again.

How did we score last time round? Our four hot trends to watch out for in 2024 included what we called customized chatbots—interactive helper apps powered by multimodal large language models (check: we didn’t know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now); generative video (check: few technologies have improved so fast in the last 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within a week of each other this December); and more general-purpose robots that can do a wider range of tasks (check: the payoffs from large language models continue to trickle down to other parts of the tech industry, and robotics is top of the list). 

We also said that AI-generated election disinformation would be everywhere, but here—happily—we got it wrong. There were many things to wring our hands over this year, but political deepfakes were thin on the ground. 

So what’s coming in 2025? We’re going to ignore the obvious here: You can bet that agents and smaller, more efficient language models will continue to shape the industry. Instead, here are five alternative picks from our AI team.

1. Generative virtual playgrounds 

If 2023 was the year of generative images and 2024 was the year of generative video—what comes next? If you guessed generative virtual worlds (a.k.a. video games), high fives all round.

2. Large language models that “reason”

The buzz was justified. When OpenAI revealed o1 in September, it introduced a new paradigm in how large language models work. Two months later, the firm pushed that paradigm forward in almost every way with o3—a model that just might reshape this technology for good. 

Most models, including OpenAI’s flagship GPT-4, spit out the first response they come up with. Sometimes it’s correct; sometimes it’s not. But the firm’s new models are trained to work through their answers step by step, breaking down tricky problems into a series of simpler ones. When one approach isn’t working, they try another. This technique, known as “reasoning” (yes—we know exactly how loaded that term is), can make this technology more accurate, especially for math, physics, and logic problems.
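The try-check-backtrack pattern described above can be sketched as a toy control loop. This is a loose analogy for the behavior, not how a reasoning model is actually implemented; the helper names below are invented for the example.

```python
def solve(problem, approaches):
    """Try each approach in turn. An approach is a list of step functions
    that return None on failure; on failure, backtrack and try the next
    approach, just as the models described above switch strategies."""
    for steps in approaches:
        results = []
        for step in steps:
            out = step(problem, results)
            if out is None:       # this line of attack failed...
                break             # ...so abandon it and try another
            results.append(out)
        else:
            return results        # every step succeeded
    return None                   # all approaches exhausted

# Example: find x with x*x == 36. The first approach (guess odd numbers)
# fails; the fallback (scan all small integers) succeeds.
odd_guess = [lambda p, r: next((x for x in range(1, 20, 2) if x * x == p), None)]
full_scan = [lambda p, r: next((x for x in range(20) if x * x == p), None)]
print(solve(36, [odd_guess, full_scan]))  # -> [6]
```

The point of the analogy is the structure: decompose, check each intermediate result, and switch strategies on failure rather than committing to the first answer.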

It’s also crucial for agents.

In December, Google DeepMind revealed an experimental new web-browsing agent called Mariner. In the middle of a preview demo that the company gave to MIT Technology Review, Mariner seemed to get stuck. Megha Goel, a product manager at the company, had asked the agent to find her a recipe for Christmas cookies that looked like the ones in a photo she’d given it. Mariner found a recipe on the web and started adding the ingredients to Goel’s online grocery basket.

Then it stalled; it couldn’t figure out what type of flour to pick. Goel watched as Mariner explained its steps in a chat window: “It says, ‘I will use the browser’s Back button to return to the recipe.’”

It was a remarkable moment. Instead of hitting a wall, the agent had broken the task down into separate actions and picked one that might resolve the problem. Figuring out you need to click the Back button may sound basic, but for a mindless bot it’s akin to rocket science. And it worked: Mariner went back to the recipe, confirmed the type of flour, and carried on filling Goel’s basket.

Google DeepMind is also building an experimental version of Gemini 2.0, its latest large language model, that uses this step-by-step approach to problem solving, called Gemini 2.0 Flash Thinking.

But OpenAI and Google are just the tip of the iceberg. Many companies are building large language models that use similar techniques, making them better at a whole range of tasks, from cooking to coding. Expect a lot more buzz about reasoning (we know, we know) this year.

—Will Douglas Heaven

3. It’s boom time for AI in science 

One of the most exciting uses for AI is speeding up discovery in the natural sciences. Perhaps the greatest vindication of AI’s potential on this front came last October, when the Royal Swedish Academy of Sciences awarded the Nobel Prize for chemistry to Demis Hassabis and John M. Jumper from Google DeepMind for building the AlphaFold tool, which can solve protein folding, and to David Baker for building tools to help design new proteins.

Expect this trend to continue next year, and to see more data sets and models that are aimed specifically at scientific discovery. Proteins were the perfect target for AI, because the field had excellent existing data sets that AI models could be trained on. 

The hunt is on to find the next big thing. One potential area is materials science. Meta has released massive data sets and models that could help scientists use AI to discover new materials much faster, and in December, Hugging Face, together with the startup Entalpic, launched LeMaterial, an open-source project that aims to simplify and accelerate materials research. Their first project is a data set that unifies, cleans, and standardizes the most prominent material data sets. 

AI model makers are also keen to pitch their generative products as research tools for scientists. OpenAI let scientists test its latest o1 model and see how it might support them in research. The results were encouraging. 

Having an AI tool that can operate in a similar way to a scientist is one of the fantasies of the tech sector. In a manifesto published in October last year, Anthropic founder Dario Amodei highlighted science, especially biology, as one of the key areas where powerful AI could help. Amodei speculates that in the future, AI could be not only a method of data analysis but a “virtual biologist who performs all the tasks biologists do.” We’re still a long way away from this scenario. But next year, we might see important steps toward it. 

—Melissa Heikkilä

4. AI companies get cozier with national security

There is a lot of money to be made by AI companies willing to lend their tools to border surveillance, intelligence gathering, and other national security tasks. 

The US military has launched a number of initiatives that show it’s eager to adopt AI, from the Replicator program—which, inspired by the war in Ukraine, promises to spend $1 billion on small drones—to the Artificial Intelligence Rapid Capabilities Cell, a unit bringing AI into everything from battlefield decision-making to logistics. European militaries are under pressure to up their tech investment, triggered by concerns that Donald Trump’s administration will cut spending to Ukraine. Rising tensions between Taiwan and China weigh heavily on the minds of military planners, too. 

In 2025, these trends will continue to be a boon for defense-tech companies like Palantir, Anduril, and others, which are now capitalizing on classified military data to train AI models. 

The defense industry’s deep pockets will tempt mainstream AI companies into the fold too. OpenAI in December announced it is partnering with Anduril on a program to take down drones, completing a year-long pivot away from its policy of not working with the military. It joins the ranks of Microsoft, Amazon, and Google, which have worked with the Pentagon for years. 

Other AI competitors, which are spending billions to train and develop new models, will face more pressure in 2025 to think seriously about revenue. It’s possible that they’ll find enough non-defense customers who will pay handsomely for AI agents that can handle complex tasks, or creative industries willing to spend on image and video generators. 

But they’ll also be increasingly tempted to throw their hats in the ring for lucrative Pentagon contracts. Expect to see companies wrestle with whether working on defense projects will be seen as a contradiction to their values. OpenAI’s rationale for changing its stance was that “democracies should continue to take the lead in AI development,” the company wrote, reasoning that lending its models to the military would advance that goal. In 2025, we’ll be watching others follow its lead. 

—James O’Donnell

5. Nvidia sees legitimate competition

For much of the current AI boom, if you were a tech startup looking to try your hand at making an AI model, Jensen Huang was your man. As CEO of Nvidia, the world’s most valuable corporation, Huang helped the company become the undisputed leader of chips used both to train AI models and to ping a model when anyone uses it, called “inferencing.”

A number of forces could change that in 2025. For one, behemoth competitors like Amazon, Broadcom, AMD, and others have been investing heavily in new chips, and there are early indications that these could compete closely with Nvidia’s—particularly for inference, where Nvidia’s lead is less solid. 

A growing number of startups are also attacking Nvidia from a different angle. Rather than trying to marginally improve on Nvidia’s designs, startups like Groq are making riskier bets on entirely new chip architectures that, with enough time, promise to provide more efficient or effective training. In 2025 these experiments will still be in their early stages, but it’s possible that a standout competitor will change the assumption that top AI models rely exclusively on Nvidia chips.

Underpinning this competition, the geopolitical chip war will continue. That war thus far has relied on two strategies. On one hand, the West seeks to limit exports to China of top chips and the technologies to make them. On the other, efforts like the US CHIPS Act aim to boost domestic production of semiconductors.

Donald Trump may escalate those export controls and has promised massive tariffs on any goods imported from China. In 2025, such tariffs would put Taiwan—on which the US relies heavily because of the chip manufacturer TSMC—at the center of the trade wars. That’s because Taiwan has said it will help Taiwanese firms operating in China relocate back to the island to help them avoid the proposed tariffs. That could draw further criticism from Trump, who has expressed frustration with US spending to defend Taiwan from China. 

It’s unclear how these forces will play out, but it will only further incentivize chipmakers to reduce reliance on Taiwan, which is the entire purpose of the CHIPS Act. As spending from the bill begins to circulate, next year could bring the first evidence of whether it’s materially boosting domestic chip production. 

—James O’Donnell

Correction: we have clarified that Taiwan’s Economy Minister was talking about Taiwanese firms being relocated back to Taiwan.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2025/01/08/1109188/whats-next-for-ai-in-2025/amp/

Harvard researchers hail quantum computing breakthrough with machine that can run for two hours — atomic loss quashed by experimental design, systems that can run forever just 3 years away

Posted by timmreardon on 10/08/2025
Posted in: Uncategorized.

By Jowi Morales | Published October 2, 2025

That’s an over 55,000% increase in operational time.

A group of physicists from Harvard and MIT just built a quantum computer that ran continuously for more than two hours. Although it doesn’t sound like much versus regular computers (like servers that run 24/7 for months, if not years), this is a huge breakthrough in quantum computing. As reported by The Harvard Crimson, most current quantum computers run for only a few milliseconds, with record-breaking machines only able to operate for a little over 10 seconds.

Although two hours is still a bit limited, researchers say that the concept behind this could allow future quantum computers to run for much longer, maybe even indefinitely. “There is still a way to go and scale from where we are now,” says research associate Tout T. Wang, “But the roadmap is now clear based on the breakthrough experiments that we’ve done here at Harvard.”

The main difference between “regular” and quantum computing is that the latter uses qubits, quantum bits encoded here in individual atoms, to hold and process data. But unlike classical bits, which hold their information dependably, quantum computers can lose their qubits in a process called “atom loss”. This results in information loss and eventually system failure.

The research team addressed this by developing an “optical lattice conveyor belt” and “optical tweezers” to replace qubits as they’re lost. The system holds 3,000 qubits and can inject 300,000 atoms per second into the quantum computer, overcoming the qubit loss. “There’s now fundamentally nothing limiting how long our usual atom and quantum computers can run for,” said Wang. “Even if atoms get lost with a small probability, we can bring fresh atoms in to replace them and not affect the quantum information being stored in the system.”
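A back-of-the-envelope simulation shows why replenishment changes the picture: with a refill mechanism, the atom population reaches a steady state instead of decaying away. The loss rate and injection budget below are assumptions chosen for illustration, not the experiment's measured numbers.

```python
import numpy as np

# Toy model of atom loss plus conveyor-belt replenishment.
rng = np.random.default_rng(2)
target = 3000    # atom-array size reported in the article
p_loss = 0.01    # per-step atom-loss probability (assumed)
budget = 50      # replacement atoms available per step (assumed)

n, history = target, []
for _ in range(500):
    n -= rng.binomial(n, p_loss)      # random atom loss this step
    n += min(budget, target - n)      # refill toward the target size
    history.append(n)

print(min(history))  # the array stays essentially full the whole run
```

Without the refill line, the same loop decays geometrically (about 1% of atoms lost per step), which is the qualitative difference between a machine that dies in seconds and one that can, in principle, run indefinitely.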

Other team members believe that this breakthrough will allow us to have quantum computers that can run forever in about three years. Before this, experts said that it was at least half a decade away, if not longer. Quantum computing has the potential to change the way we do computing, breaking barriers in cryptography, finance, medicine, and more. However, despite these advancements, it’s unlikely that we’ll have personal quantum computers in our living rooms and offices within the next decade, unless you’re a physicist or researcher working on these cutting-edge devices.

Article link: https://www.tomshardware.com/tech-industry/quantum-computing/harvard-researchers-hail-quantum-computing-breakthrough-with-machine-that-can-run-for-two-hours-atomic-loss-quashed-by-experimental-design-systems-that-can-run-forever-just-3-years-away

A short history of AI, and what it is (and isn’t) – MIT Technology Review

Posted by timmreardon on 10/07/2025
Posted in: Uncategorized.


Maybe it’s magic, maybe it’s math—nobody can decide.

By Melissa Heikkilä

July 16, 2024

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s the simplest questions that are often the hardest to answer. That applies to AI, too. Even though it’s a technology being sold as a solution to the world’s problems, nobody seems to know what it really is. It’s a label that’s been slapped on technologies ranging from self-driving cars to facial recognition, chatbots to fancy Excel. But in general, when we talk about AI, we talk about technologies that make computers do things we think need intelligence when done by people. 

For months, my colleague Will Douglas Heaven has been on a quest to go deeper to understand why everybody seems to disagree on exactly what AI is, why nobody even knows, and why you’re right to care about it. He’s been talking to some of the biggest thinkers in the field, asking them, simply: What is AI? It’s a great piece that looks at the past and present of AI to see where it is going next. You can read it here. 

Here’s a taste of what to expect: 

Artificial intelligence almost wasn't called "artificial intelligence" at all. The computer scientist John McCarthy is credited with coming up with the term in 1955 when writing a funding application for a summer research program at Dartmouth College in New Hampshire. But more than one of McCarthy's colleagues hated it. "The word 'artificial' makes you think there's something kind of phony about this," said one. Others preferred the terms "automata studies," "complex information processing," "engineering psychology," "applied epistemology," "neural cybernetics," "non-numerical computing," "neural dynamics," "advanced automatic programming," and "hypothetical automata." Not quite as cool and sexy as AI.

AI has several zealous fandoms. AI has acolytes, with a faith-like belief in the technology’s current power and inevitable future improvement. The buzzy popular narrative is shaped by a pantheon of big-name players, from Big Tech marketers in chief like Sundar Pichai and Satya Nadella to edgelords of industry like Elon Musk and Sam Altman to celebrity computer scientists like Geoffrey Hinton. As AI hype has ballooned, a vocal anti-hype lobby has risen in opposition, ready to smack down its ambitious, often wild claims. As a result, it can feel as if different camps are talking past one another, not always in good faith.

This sometimes seemingly ridiculous debate has huge consequences that affect us all. AI has a lot of big egos and vast sums of money at stake. But more than that, these disputes matter when industry leaders and opinionated scientists are summoned by heads of state and lawmakers to explain what this technology is and what it can do (and how scared we should be). They matter when this technology is being built into software we use every day, from search engines to word-processing apps to assistants on your phone. AI is not going away. But if we don’t know what we’re being sold, who’s the dupe?

For example, meet the TESCREALists. A clunky acronym (pronounced "tes-cree-all") replaces an even clunkier list of labels: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. It was coined by Timnit Gebru, founder of the Distributed AI Research Institute and formerly Google's ethical AI co-lead, and Émile Torres, a philosopher and historian at Case Western Reserve University. Some anticipate human immortality; others predict humanity's colonization of the stars. The common tenet is that an all-powerful technology is not only within reach but inevitable. TESCREALists believe that artificial general intelligence, or AGI, could not only fix the world's problems but level up humanity. Gebru and Torres link several of these worldviews—with their common focus on "improving" humanity—to the racist eugenics movements of the 20th century.

Is AI math or magic? Either way, people have strong, almost religious beliefs in one or the other. “It’s offensive to some people to suggest that human intelligence could be re-created through these kinds of mechanisms,” Ellie Pavlick, who studies neural networks at Brown University, told Will. “People have strong-held beliefs about this issue—it almost feels religious. On the other hand, there’s people who have a little bit of a God complex. So it’s also offensive to them to suggest that they just can’t do it.”

Will’s piece really is the definitive look at this whole debate. No spoilers—there are no simple answers, but lots of fascinating characters and viewpoints. I’d recommend you read the whole thing here—and see if you can make your mind up about what AI really is.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/07/16/1095001/a-short-history-of-ai-and-what-it-is-and-isnt/amp/

Why handing over total control to AI agents would be a huge mistake – MIT Technology Review

Posted by timmreardon on 10/05/2025
Posted in: Uncategorized.

When AI systems can control multiple sources simultaneously, the potential for harm explodes. We need to keep humans in the loop.

By Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, Giada Pistilli

March 24, 2025

AI agents have set the tech industry abuzz. Unlike chatbots, these groundbreaking new systems operate outside of a chat window, navigating multiple applications to execute complex tasks, like scheduling meetings or shopping online, in response to simple user commands. As agents are developed to become more capable, a crucial question emerges: How much control are we willing to surrender, and at what cost? 

New frameworks and functionalities for AI agents are announced almost weekly, and companies promote the technology as a way to make our lives easier by completing tasks we can’t do or don’t want to do. Prominent examples include “computer use,” a function that enables Anthropic’s Claude system to act directly on your computer screen, and the “general AI agent” Manus, which can use online tools for a variety of tasks, like scouting out customers or planning trips.

These developments mark a major advance in artificial intelligence: systems designed to operate in the digital world without direct human oversight.

The promise is compelling. Who doesn’t want assistance with cumbersome work or tasks there’s no time for? Agent assistance could soon take many different forms, such as reminding you to ask a colleague about their kid’s basketball tournament or finding images for your next presentation. Within a few weeks, they’ll probably be able to make presentations for you. 

There’s also clear potential for deeply meaningful differences in people’s lives. For people with hand mobility issues or low vision, agents could complete tasks online in response to simple language commands. Agents could also coordinate simultaneous assistance across large groups of people in critical situations, such as by routing traffic to help drivers flee an area en masse as quickly as possible when disaster strikes. 

But this vision for AI agents brings significant risks that might be overlooked in the rush toward greater autonomy. Our research team at Hugging Face has spent years implementing and investigating these systems, and our recent findings suggest that agent development could be on the cusp of a very serious misstep. 

Giving up control, bit by bit

The core issue lies at the heart of what's most exciting about AI agents: The more autonomous an AI system is, the more we cede human control. AI agents are developed to be flexible, capable of completing a diverse array of tasks that don't have to be directly programmed. 

For many systems, this flexibility is made possible because they're built on large language models, which are unpredictable and prone to significant (and sometimes comical) errors. When an LLM generates text in a chat interface, any errors stay confined to that conversation. But when a system can act independently and with access to multiple applications, it may perform actions we didn't intend, such as manipulating files, impersonating users, or making unauthorized transactions. The very feature being sold—reduced human oversight—is the primary vulnerability.

Levels of AI Agents

The more autonomous the system, the more we’ve ceded human control. Multi-agent systems may combine agents with different agentic levels. These levels don’t tell the whole story, but provide a basic framework to help understand what AI agents are. Each level brings with it many potential benefits, but also risks. For more details on agents and agentic levels, please see our course on AI agents.

To understand the overall risk-benefit landscape, it’s useful to characterize AI agent systems on a spectrum of autonomy. The lowest level consists of simple processors that have no impact on program flow, like chatbots that greet you on a company website. The highest level, fully autonomous agents, can write and execute new code without human constraints or oversight—they can take action (moving around files, changing records, communicating in email, etc.) without your asking for anything. Intermediate levels include routers, which decide which human-provided steps to take; tool callers, which run human-written functions using agent-suggested tools; and multistep agents that determine which functions to do when and how. Each represents an incremental removal of human control.
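The spectrum above can be sketched as a simple ordering. The level names and the toy "control remaining" metric below are illustrative, not a standard taxonomy:

```python
from enum import IntEnum

class AgentAutonomy(IntEnum):
    """Rough spectrum described in the article; names are illustrative."""
    SIMPLE_PROCESSOR = 0   # no impact on program flow (e.g., a greeter chatbot)
    ROUTER = 1             # decides which human-provided steps to take
    TOOL_CALLER = 2        # runs human-written functions with agent-suggested tools
    MULTISTEP_AGENT = 3    # determines which functions to run, when, and how
    FULLY_AUTONOMOUS = 4   # writes and executes new code without oversight

def human_control_remaining(level: AgentAutonomy) -> float:
    """Toy metric: each step up the ladder cedes another increment of control."""
    return 1.0 - level / (len(AgentAutonomy) - 1)

print(human_control_remaining(AgentAutonomy.SIMPLE_PROCESSOR))  # 1.0
print(human_control_remaining(AgentAutonomy.FULLY_AUTONOMOUS))  # 0.0
```

The ordering makes the article's point concrete: each level is strictly "more agent, less human" than the one below it, and a multi-agent system can mix components sitting at different rungs.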


It’s clear that AI agents can be extraordinarily helpful for what we do every day. But this brings clear privacy, safety, and security concerns. Agents that help bring you up to speed on someone would require that individual’s personal information and extensive surveillance over your previous interactions, which could result in serious privacy breaches. Agents that create directions from building plans could be used by malicious actors to gain access to unauthorized areas. 

And when systems can control multiple information sources simultaneously, potential for harm explodes. For example, an agent with access to both private communications and public platforms could share personal information on social media. That information might not be true, but it would fly under the radar of traditional fact-checking mechanisms and could be amplified with further sharing to create serious reputational damage. We imagine that “It wasn’t me—it was my agent!!” will soon be a common refrain to excuse bad outcomes.

Keep the human in the loop

Historical precedent demonstrates why maintaining human oversight is critical. In 1980, computer systems falsely indicated that over 2,000 Soviet missiles were heading toward North America. This error triggered emergency procedures that brought us perilously close to catastrophe. What averted disaster was human cross-verification between different warning systems. Had decision-making been fully delegated to autonomous systems prioritizing speed over certainty, the outcome might have been catastrophic.

Some will counter that the benefits are worth the risks, but we’d argue that realizing those benefits doesn’t require surrendering complete human control. Instead, the development of AI agents must occur alongside the development of guaranteed human oversight in a way that limits the scope of what AI agents can do.
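One way to picture oversight that limits an agent's scope is an approval gate: the agent proposes an action, low-risk actions run directly, and anything touching files, email, or money requires an explicit human yes. The action names and registry below are hypothetical, a minimal sketch rather than any real framework's API:

```python
# Actions an agent may not take without explicit human approval (hypothetical).
HIGH_RISK = {"delete_file", "send_email", "make_payment"}

def execute_with_oversight(action, args, approve, registry):
    """Run `action` only if it is low-risk or the human approver says yes."""
    if action in HIGH_RISK and not approve(action, args):
        return f"blocked: {action}"
    return registry[action](**args)

# Stand-in tools; a real system would wrap actual applications.
registry = {
    "search_web": lambda query: f"results for {query!r}",
    "send_email": lambda to, body: f"sent to {to}",
}

# An approver that rejects everything (e.g., no one is at the console).
deny_all = lambda action, args: False
print(execute_with_oversight("search_web", {"query": "quantum"}, deny_all, registry))
print(execute_with_oversight("send_email", {"to": "a@b.c", "body": "hi"}, deny_all, registry))
```

The design choice is the authors' point in miniature: the gate is enforced outside the agent, so no amount of model unpredictability can widen the scope of what it is allowed to do.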

Open-source agent systems are one way to address risks, since these systems allow for greater human oversight of what systems can and cannot do. At Hugging Face we’re developing smolagents, a framework that provides sandboxed secure environments and allows developers to build agents with transparency at their core so that any independent group can verify whether there is appropriate human control. 

This approach stands in stark contrast to the prevailing trend toward increasingly complex, opaque AI systems that obscure their decision-making processes behind layers of proprietary technology, making it impossible to guarantee safety.

As we navigate the development of increasingly sophisticated AI agents, we must recognize that the most important feature of any technology isn’t increasing efficiency but fostering human well-being. 

This means creating systems that remain tools rather than decision-makers, assistants rather than replacements. Human judgment, with all its imperfections, remains the essential component in ensuring that these systems serve rather than subvert our interests.

Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, Giada Pistilli all work for Hugging Face, a global startup in responsible open-source AI.

Dr. Margaret Mitchell is a machine learning researcher and Chief Ethics Scientist at Hugging Face, connecting human values to technology development.

Dr. Sasha Luccioni is Climate Lead at Hugging Face, where she spearheads research, consulting and capacity-building to elevate the sustainability of AI systems. 

Dr. Avijit Ghosh is an Applied Policy Researcher at Hugging Face working at the intersection of responsible AI and policy. His research and engagement with policymakers has helped shape AI regulation and industry practices.

Dr. Giada Pistilli is a philosophy researcher working as Principal Ethicist at Hugging Face.

Article link: https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake/?

MIT report: 95% of generative AI pilots at companies are failing – Fortune

Posted by timmreardon on 09/27/2025
Posted in: Uncategorized.

BY SHERYL ESTRADA

August 18, 2025 at 6:54 AM EDT

Good morning. Companies are betting on AI—yet nearly all enterprise pilots are stuck at the starting line.

The GenAI Divide: State of AI in Business 2025, a new report published by MIT’s NANDA initiative, reveals that while generative AI holds promise for enterprises, most initiatives to drive rapid revenue growth are falling flat.

Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.

To unpack these findings, I spoke with Aditya Challapally, the lead author of the report, and a research contributor to project NANDA at MIT.

“Some large companies’ pilots and younger startups are really excelling with generative AI,” Challapally said. Startups led by 19- or 20-year-olds, for example, “have seen revenues jump from zero to $20 million in a year,” he said. “It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools,” he added.

But for 95% of companies in the dataset, generative AI implementation is falling short. “The 95% failure rate for enterprise AI solutions represents the clearest manifestation of the GenAI Divide,” the report states. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.

The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.

What’s behind successful AI deployments?

How companies adopt AI is crucial. Purchasing AI tools from specialized vendors and building partnerships succeed about 67% of the time, while internal builds succeed only one-third as often.

This finding is particularly relevant in financial services and other highly regulated sectors, where many firms are building their own proprietary generative AI systems in 2025. Yet, MIT’s research suggests companies see far more failures when going solo.

Companies surveyed were often hesitant to share failure rates, Challapally noted. “Almost everywhere we went, enterprises were trying to build their own tool,” he said, but the data showed purchased solutions delivered more reliable results.

Other key factors for success include empowering line managers—not just central AI labs—to drive adoption, and selecting tools that can integrate deeply and adapt over time.

Workforce disruption is already underway, especially in customer support and administrative roles. Rather than mass layoffs, companies are increasingly not backfilling positions as they become vacant. Most changes are concentrated in jobs previously outsourced due to their perceived low value.

The report also highlights the widespread use of “shadow AI”—unsanctioned tools like ChatGPT—and the ongoing challenge of measuring AI’s impact on productivity and profit.

Looking ahead, the most advanced organizations are already experimenting with agentic AI systems that can learn, remember, and act independently within set boundaries—offering a glimpse at how the next phase of enterprise AI might unfold.

Article link: https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/?

We need to start wrestling with the ethics of AI agents – MIT Technology Review

Posted by timmreardon on 09/27/2025
Posted in: Uncategorized.


AI could soon not only mimic our personality, but go out and act on our behalf. There are some things we need to sort out before then.

By James O’Donnell

November 26, 2024

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

Generative AI models have become remarkably good at conversing with us, and creating images, videos, and music for us, but they’re not all that good at doing things for us. 

AI agents promise to change that. Think of them as AI models with a script and a purpose. They tend to come in one of two flavors. 

The first, called tool-based agents, can be coached using natural human language (rather than coding) to complete digital tasks for us. Anthropic released one such agent in October—the first from a major AI model-maker—that can translate instructions (“Fill in this form for me”) into actions on someone’s computer, moving the cursor to open a web browser, navigating to find data on relevant pages, and filling in a form using that data. Salesforce has released its own agent too, and OpenAI reportedly plans to release one in January. 
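As a caricature of the idea, a tool-based agent maps a natural-language instruction to a sequence of tool calls. Real systems use an LLM for this planning step; the keyword rules and tool names below are purely illustrative:

```python
# Hypothetical instruction-to-tool mapping; a real agent would use an LLM here.
TOOLS = {
    "open": "open_browser",
    "find": "extract_data",
    "fill": "fill_form",
}

def plan(instruction: str) -> list[str]:
    """Return the sequence of tool calls a very simple agent might take."""
    return [TOOLS[word] for word in instruction.lower().split() if word in TOOLS]

print(plan("Open the site, find the data, and fill in this form for me"))
```

Even this trivial planner shows the shape of the loop: parse intent, pick tools, then execute them against the user's screen or applications.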

The other type of agent is called a simulation agent, and you can think of these as AI models designed to behave like human beings. The first people to work on creating these agents were social science researchers. They wanted to conduct studies that would be expensive, impractical, or unethical to do with real human subjects, so they used AI to simulate subjects instead. This trend particularly picked up with the publication of an oft-cited 2023 paper by Joon Sung Park, a PhD candidate at Stanford, and colleagues called “Generative Agents: Interactive Simulacra of Human Behavior.” 

Last week Park and his team published a new paper on arXiv called “Generative Agent Simulations of 1,000 People.” In this work, researchers had 1,000 people participate in two-hour interviews with an AI. Shortly after, the team was able to create simulation agents that replicated each participant’s values and preferences with stunning accuracy.

There are two really important developments here. First, it’s clear that leading AI companies think it’s no longer good enough to build dazzling generative AI tools; they now have to build agents that can accomplish things for people. Second, it’s getting easier than ever to get such AI agents to mimic the behaviors, attitudes, and personalities of real people. What were once two distinct types of agents—simulation agents and tool-based agents—could soon become one thing: AI models that can not only mimic your personality but go out and act on your behalf. 

Research on this is underway. Companies like Tavus are hard at work helping users create “digital twins” of themselves. But the company’s CEO, Hassaan Raza, envisions going further, creating AI agents that can take the form of therapists, doctors, and teachers. 

If such tools become cheap and easy to build, it will raise lots of new ethical concerns, but two in particular stand out. The first is that these agents could create even more personal, and even more harmful, deepfakes. Image generation tools have already made it simple to create nonconsensual pornography using a single image of a person, but this crisis will only deepen if it’s easy to replicate someone’s voice, preferences, and personality as well. (Park told me he and his team spent more than a year wrestling with ethical issues like this in their latest research project, engaging in many conversations with Stanford’s ethics board and drafting policies on how the participants could withdraw their data and contributions.) 

The second is the fundamental question of whether we deserve to know whether we’re talking to an agent or a human. If you complete an interview with an AI and submit samples of your voice to create an agent that sounds and responds like you, are your friends or coworkers entitled to know when they’re talking to it and not to you? On the other side, if you ring your cell service provider or doctor’s office and a cheery customer service agent answers the line, are you entitled to know whether you’re talking to an AI?

This future feels far off, but it isn’t. There’s a chance that when we get there, there will be even more pressing and pertinent ethical questions to ask. In the meantime, read more from my piece on AI agents here, and ponder how well you think an AI interviewer could get to know you in two hours.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/11/26/1107309/we-need-to-start-wrestling-with-the-ethics-of-ai-agents/amp/
