healthcarereimagined

Envisioning healthcare for the 21st century


Microchips – their past, present and future – WEF

Posted by timmreardon on 04/03/2024
Posted in: Uncategorized.

Mar 27, 2024

Victoria Masterson

Senior Writer, Forum Agenda

  • Tech company Nvidia has unveiled a new artificial intelligence (AI) chip that can perform some tasks 30 times faster than before.
  • Microchip history began in 1947 with the invention of the transistor and today is being advanced to meet the growing power and energy needs of AI technologies.
  • Both generative AI and sustainable computing make the World Economic Forum’s Top 10 Emerging Technologies list.

Microchips are entering a whole new world of possibility.

Artificial intelligence (AI), with its vast need for power and speed, is driving a new generation of microchip innovations. Examples include the AI chip just unveiled by Californian technology corporation Nvidia.

The company says the chip can do some jobs 30 times faster than its predecessor, reports the BBC.

It has also introduced a line of chips designed to run chatbots in cars and discussed chips intended to power humanoid robots.

So, what are microchips and how have they changed our world?

What are microchips?

A microchip is a set of electronic circuits on a small, flat wafer of silicon, explains semiconductor specialist ASML.

Silicon is hard and brittle, like crystal, and is the second most abundant chemical element in the Earth’s crust after oxygen. It is refined from sand and has unique electrical and thermal properties that suit it to chip manufacture.

A brief history of microchips

An important date in the history of the microchip is 1947, when the transistor, a key precursor to the microchip, was invented at Bell Labs, an American telecoms research and development company known for its pioneering innovations.

Transistors are essentially tiny switches that turn electrical currents on or off. Today, a tiny chip can hold many billions of transistors, explains the BBC’s Made on Earth series.

Then in 1958, an electrical engineer at electronics company Texas Instruments, Jack Kilby, created the first integrated circuit. This was a foundational breakthrough for modern microchips, according to electronics news site Electropages.


Integrated circuits are small electronic chips made up of interconnected components, including transistors.

Another critical innovation followed in 1959 when physicist Robert Noyce developed the first practical integrated circuit, made in one piece, that could be mass-produced.

In the early 1960s, NASA helped to drive the development of microchip technology as an early adopter.

How have microchips changed the world?

Without microchips, everyday technology – from the internet to handheld calculators – would be the stuff of science fiction, suggests news site Slate.

Microchips have brought increasingly smaller, more powerful and more efficient electronic devices.

For example, ultrasound scanners are typically large machines wheeled around on trolleys in hospitals and clinics. Shrinking microchip technology means they’re now available as pocket-sized mobile devices, Texas Instruments says.

A new dawn for microchips

Scientists are unlocking next-generation advances in microchip technology by using particles of light to carry data – instead of electricity.

Boston-based startup Lightmatter, for example, is using light to multiply the processing power and cut the huge energy demand of chips used in AI technologies.

Light is way more energy efficient in transmitting information than electrical signals travelling over wires, the company tells Reuters.

German startup, Semron, is developing a chip to run AI programs locally on smartphones, earbuds, virtual reality headsets and other mobile devices, explains news site TechCrunch.

Instead of electrical currents to perform calculations – the conventional way that computer chips have worked – Semron’s chips use electrical fields. This improves the energy efficiency and cuts the manufacturing cost of the chips, the company says.

AI and other technologies shaping our future

Generative AI and sustainable computing are two emerging technologies explored in the World Economic Forum’s Top 10 Emerging Technologies of 2023 report.

Right now, generative AI is still mostly focused on producing text, computer programming, images and sound, the report explains. In the future, however, it could have applications in drug design, space travel, food production and other industries.

For example, NASA is looking at AI systems that can build lightweight spaceflight instruments 10 times faster than currently, but with improved structural performance.

Sustainable computing looks at how to address the spiralling energy demand of data processing as technologies multiply and advance.

From Google searches and sending an email to using AI or metaverse applications, the data centres processing these requests consume an estimated 1% of the electricity produced globally, the Forum says. And this will only increase as people and technologies drive more demand for data.

And we could soon see net-zero data centres, created by using AI and other technologies to reduce their energy consumption and improve their efficiency.

Article link: https://www.weforum.org/agenda/2024/03/ai-drives-microchip-technology/?

Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI – MIT Sloan

Posted by timmreardon on 03/30/2024
Posted in: Uncategorized.


by Sara Brown

May 23, 2023

Why It Matters

Geoffrey Hinton, a respected researcher who recently stepped down from Google, said it’s time to confront the existential dangers of artificial intelligence.

A deep learning pioneer is raising concerns about rapid advancements in artificial intelligence and how they will affect humans.

Geoffrey Hinton, 75, a professor emeritus at the University of Toronto and until recently a vice president and engineering fellow at Google, announced in early May that he was leaving the company — in part because of his age, he said, but also because he’s changed his mind about the relationship between humans and digital intelligence.

In a widely discussed interview with The New York Times, Hinton said generative AI could spread misinformation and, eventually, threaten humanity.

Speaking two days after that article was published, Hinton reiterated his concerns. “I’m sounding the alarm, saying we have to worry about this,” he said at the EmTech Digital conference, hosted by MIT Technology Review.

Hinton said he is worried about the increasingly powerful machines’ ability to outperform humans in ways that are not in the best interest of humanity, and the likely inability to limit AI development.

The growing power of AI 

In 2018, Hinton shared a Turing Award for work related to neural networks. He has been called “a godfather of AI,” in part for his fundamental research about using back-propagation to help machines learn.  

I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.

Geoffrey Hinton, Former Vice President and Engineering Fellow, Google

Hinton said he long thought that computer models weren’t as powerful as the human brain. Now, he sees artificial intelligence as a relatively imminent “existential threat.”

Computer models are outperforming humans, including doing things humans can’t do. Large language models like GPT-4 use neural networks with connections like those in the human brain and are starting to do commonsense reasoning, Hinton said.

These AI models have far fewer neural connections than humans do, but they manage to know a thousand times as much as a human, Hinton said.

In addition, models are able to continue learning and easily share knowledge. Many copies of the same AI model can run on different hardware but do exactly the same thing.

“Whenever one [model] learns anything, all the others know it,” Hinton said. “People can’t do that. If I learn a whole lot of stuff about quantum mechanics and I want you to know all that stuff about quantum mechanics, it’s a long, painful process of getting you to understand it.”

AI is also powerful because it can process vast quantities of data — much more than a single person can. And AI models can detect trends in data that aren’t otherwise visible to a person — just like a doctor who had seen 100 million patients would notice more trends and have more insights than a doctor who had seen only a thousand.  

AI concerns: Manipulating humans, or even replacing them 

Hinton’s concern with this burgeoning power centers around the alignment problem — how to ensure that AI is doing what humans want it to do. “What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial for us,” Hinton said. “But we need to try and do that in a world where there [are] bad actors who want to build robot soldiers that kill people. And it seems very hard to me.”

Humans have inherent motivations, such as finding food and shelter and staying alive, but AI doesn’t. “My big worry is, sooner or later someone will wire into them the ability to create their own subgoals,” Hinton said. (Some versions of the technology, like ChatGPT, already have the ability to do that, he noted.)

“I think it’ll very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals,” Hinton said. “And if these things get carried away with getting more control, we’re in trouble.”

Artificial intelligence can also learn bad things — like how to manipulate people “by reading all the novels that ever were and everything Machiavelli ever wrote,” for example. “And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on,” Hinton said. “So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.”

At worst, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence,” Hinton said. Biological intelligence evolved to create digital intelligence, which can absorb everything humans have created and start getting direct experience of the world. 

“It may keep us around for a while to keep the power stations running, but after that, maybe not,“ he added. “We’ve figured out how to build beings that are immortal. These digital intelligences, when a piece of hardware dies, they don’t die. If … you can find another piece of hardware that can run the same instructions, you can bring it to life again. So we’ve got immortality, but it’s not for us.”

Barriers to stopping AI advancement 

Hinton said he does not see any clear or straightforward solutions. “I wish I had a nice, simple solution I could push, but I don’t,” he said. “But I think it’s very important that people get together and think hard about it and see whether there is a solution.”

More than 27,000 people, including several tech executives and researchers, have signed an open letter calling for a pause on training the most powerful AI systems for at least six months because of “profound risks to society and humanity,” and several leaders from the Association for the Advancement of Artificial Intelligence signed a letter calling for collaboration to address the promise and risks of AI.

It might be rational to stop developing artificial intelligence, but that’s naive and unlikely, Hinton said, in part because of competition between companies and countries.

“If you’re going to live in a capitalist system, you can’t stop Google [from] competing with Microsoft,” he said, noting that he doesn’t think Google, his former employer, has done anything wrong in developing AI programs. “It’s just inevitable in the capitalist system or a system with competition between countries like the U.S. and China that this stuff will be developed,” he said.

It is also hard to stop developing AI because there are benefits in fields like medicine, he noted.  

Researchers are looking at guardrails for these systems, but there is the chance that AI can learn to write and execute programs itself. “Smart things can outsmart us,” Hinton said.

One note of hope: Everyone faces the same risk. “If we allow it to take over, it will be bad for all of us,” Hinton said. “We’re all in the same boat with respect to the existential threat. So we all ought to be able to cooperate on trying to stop it.”

Article link: https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai?

The Danger Of Unmanaged AI In The Enterprise – Forbes

Posted by timmreardon on 03/29/2024
Posted in: Uncategorized.

Arti Raman Forbes Councils Member

Forbes Technology Council COUNCIL POST

Mar 22, 2024, 09:45am EDT

Arti Raman is the founder and CEO of Portal26. She is an expert on managing and mitigating risk for enterprise GenAI and data.

There’s no doubt that generative AI (GenAI) is rapidly evolving in the enterprise, with more than 50% of organizations gearing up to implement AI in the coming year. While the surge in AI is both powerful and necessary, it brings forth a new array of vulnerabilities that demand attention. To stay ahead while maintaining infrastructure integrity and combatting unmanaged AI, it becomes paramount for these companies and their security teams to enact governance policies for monitoring employee usage.

Ways AI Is Unmanaged

Unmanaged AI refers to GenAI use within organizations that lacks proper oversight, control or governance. You can’t manage what you can’t see.

Unmanaged AI takes various forms. For instance, organizations eager to stay competitive may overlook security considerations in the rush to adopt the technology amid the AI boom. This rapid adoption, without due regard for security, creates vulnerabilities. Moreover, not fully comprehending AI’s potential in the workplace can result in unforeseen security risks, emphasizing the importance of GenAI training and robust company policies.

Unmanaged AI, be it through trends like Bring Your Own AI (BYOAI), shadow AI or intentional and unintentional misuse, exposes organizations to potential attacks and damage to their bottom line. Consequently, addressing the chronic lack of visibility native to many GenAI tools has never been more crucial for organizations.

Intentional Misuse Of GenAI Tools

While there are many ways GenAI tools can be intentionally misused, there are three particularly harmful ways. The first method is AI data poisoning. This occurs when undetectable inaccuracies are intentionally embedded during the model’s training. Even a small amount of bad data can ruin outcomes in a calculation, projection or analysis.
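As a concrete illustration (not drawn from the article), the following scikit-learn sketch flips the labels of a growing fraction of training rows and measures the effect on test accuracy; the synthetic dataset and logistic-regression model are arbitrary stand-ins chosen only to keep the example self-contained. Random label flipping is the crudest form of poisoning; targeted attacks can do more damage with fewer corrupted rows.

```python
# Illustrative sketch (not from the article): how deliberately mislabeled
# training data can degrade a model's accuracy on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(poison_fraction):
    """Flip the labels of a fraction of training rows, retrain, and score."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.05, 0.15, 0.30):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_with_poisoning(frac):.3f}")
```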

GenAI’s ability to generate large volumes of data also introduces the potential for malicious use of generated data. This can jeopardize sensitive information and critical assets, which can be challenging to recover from in terms of the trust lost between consumers and key stakeholders.

Finally, there is fake content propagation. A complex aspect of GenAI ethics involves the surfacing of deepfakes and the consequent spread of misinformation. Insufficient governance of GenAI inputs due to limited education or standards may lead to deepfake generation. Organizations then bear the responsibility of identifying and countering the spread of false information, necessitating innovative solutions to detect and combat the proliferation of fake content.

Shadow AI

Forbes Brand Contributor Sharon Maher defines shadow AI as “unsanctioned or ad hoc generative AI use within an organization that’s outside IT governance.” Highlighting the threat, a 2023 survey revealed that a significant 93% of respondents are concerned about shadow AI.

Shadow AI primarily arises when employees seek AI-based assistance in their roles from tools like ChatGPT; without adequate user education, those tools quickly become shadow AI. For example, a recent study found that 6% of employees utilize GenAI by copying and pasting sensitive intellectual property into tools like ChatGPT. Moreover, the survey indicated that over 50% of organizations provide fewer than five hours of annual education and training on GenAI issues.

Unintentional Misuse Of GenAI Tools

While there are many ways to misuse GenAI intentionally, there are instances where we may unintentionally and unknowingly deploy AI irresponsibly. One form of unintentional misuse is AI bias. This occurs when systems generate biased results, reflecting and perpetuating historical and current human biases. AI systems learn decision-making from training data, and if data is mislabeled or over- or underrepresents certain groups or characteristics, the model can produce skewed results, eroding trust in the organization.

Similarly, human bias influences AI models as we unconsciously imbue them with our own experiences and biases. Furthermore, the consequences extend beyond mistrust; according to a Scientific American article, individuals interacting with AI models can unconsciously incorporate the distorted data into their decision-making, creating a cycle of mutual amplification of stereotypes and inequalities between humans and AI models.

The BYOAI Dilemma

BYOAI, or Bring Your Own AI, is a relatively new term that refers to employees using any form of mainstream or experimental AI tool to accomplish business tasks whether their organization approves of it or not. These tools could be virtual assistants like Siri or Alexa, AI writing tools like Grammarly or AI transcription tools like Otter.ai.

The key aspect of BYOAI is that the business does not sanction the tool. Because AI use is not slowing down anytime soon, employers need a forward-thinking approach to managing the BYOAI trend. Recent surveys show that 58% of organizations are already managing more than five tools. While BYOAI tools can help overall employee efficiency, they also inadvertently create security risks by adding unmonitored tools.

Undoubtedly, GenAI stands as an excellent tool for enhancing productivity and streamlining processes within organizations. However, when unmanaged, there are significant risks, such as loss of intellectual property (IP) and sensitive data, inadvertent use of false model output and legal and compliance risks. As GenAI use increases in businesses, prioritizing guardrails that focus on security, privacy and legal risks is paramount.

The spectrum of unmanaged AI risks—from shadow AI and BYOAI to intentional and unintentional misuse—underscores the need for holistic organizational visibility and governance. A thorough visibility and governance platform can aid businesses in effectively managing their AI.

Through rich visibility into usage and the ability to create and enforce GenAI governance policies, organizations can defend against compliance, privacy, and IP-related risks. However, to ensure effectiveness, GenAI ethics and governance methods must be fully integrated and accessible across the entire enterprise.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

Article link: https://www-forbes-com.cdn.ampproject.org/c/s/www.forbes.com/sites/forbestechcouncil/2024/03/22/the-danger-of-unmanaged-ai-in-the-enterprise/amp/

White House sets ‘binding requirements’ for agencies to vet AI tools before using them

Posted by timmreardon on 03/29/2024
Posted in: Uncategorized.

The Biden administration is calling on federal agencies to step up their use of artificial intelligence tools, but keep risks in check

Jory Heckman | @jheckmanWFED

March 28, 2024 5:01 am

The Biden administration is calling on federal agencies to step up their use of artificial intelligence tools, but in a way that keeps the risk of misuse in check.

The Office of Management and Budget on Thursday released its first governmentwide policy on how agencies should mitigate the risks of AI while harnessing its benefits.

Among its mandates, OMB will require agencies to publicly report on how they’re using AI, the risks involved and how they’re managing those risks.

Senior administration officials told reporters Wednesday that OMB’s guidance will give agency leaders, such as their chief AI officers or AI governance boards, the information they need to independently assess their use of AI tools, identify flaws, prevent biased or discriminatory results and suggest improvements.

Vice President Kamala Harris told reporters in a call that OMB’s guidance sets up several “binding requirements to promote the safe, secure and responsible use of AI by our federal government.”

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Harris said.


‘Concrete safeguards’ for agency AI use

OMB is giving agencies until Dec. 1, 2024, to implement “concrete safeguards” that protect Americans’ rights or safety when agencies use AI tools.

“These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI,” OMB wrote in a fact sheet.

By putting these safeguards in place, OMB says, travelers in airports will be able to opt out of AI facial recognition tools used by the Transportation Security Administration “without any delay or losing their place in line.”

The Biden administration also expects AI algorithms used in the federal health care system will have a human being overseeing the process to verify the AI algorithm’s results and avoid biased results.

“If the Veterans Administration wants to use AI in VA hospitals, to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses,” Harris said.

A senior administration official said OMB is providing overarching AI guidelines for the entire federal government, “as well as individual guidelines for specific agencies.”

“Each agency is in its own unique place in its technology and innovation journey related to AI. So we will make sure, based on the policy, that we will know how all government agencies are using AI, what steps agencies are taking to mitigate risks. We will be providing direct input on the government’s most useful impacts of AI. And we will make sure, based on the guidance, that any member of the public is able to seek remedy when AI potentially leads to misinformation or false decisions about them.”

OMB’s first-of-its-kind guidance covers all federal use of AI, including projects developed internally by federal officials and those purchased from federal contractors.

Under OMB’s policy, agencies that don’t follow these steps “must cease using the AI system,” except in some limited cases where doing so would create an “unacceptable impediment to critical agency operations.”

OMB is requiring agencies to release expanded inventories of their AI use cases every year, including identifying use cases that impact rights or safety, and how the agency is addressing the relevant risks.

Agencies have already identified hundreds of AI use cases on AI.gov.

“The American people have a right to know when and how their government is using AI, that it is being used in a responsible way. And we want to do it in a way that holds leaders accountable for the responsible use of AI,” Harris said.

OMB will also require agencies to release government-owned AI code, models and data — as long as it doesn’t pose a risk to the public or government operations.

The guidance requires agencies to designate chief AI officers — although many agencies have already done so after OMB released its draft guidance last May. Those agency chief AI officers have recently met with OMB and other White House officials as part of the recently launched Chief AI Officer Council.

OMB’s guidance also gives agencies until May 27 to establish AI governance boards that will be led by their deputy secretaries or an equivalent executive.

The Departments of Defense, Veterans Affairs, Housing and Urban Development and State have already created their AI governance boards.

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said.

A senior administration official said the OMB guidance expects federal agency leadership, in many cases, to assess whether AI tools adopted by the agency adhere to risk management standards and standards to protect the public.

Federal government ‘leading by example’ on AI

OMB Director Shalanda Young said the finalized guidance “demonstrates that the federal government is leading by example in its own use of AI.”

“AI presents not only risks, but also a tremendous opportunity to improve public services,” Young said. “When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services.”

Young said the OMB guidance will make it easier for agencies to share and collaborate across government, as well as with industry partners. She said it’ll also “remove unnecessary barriers to the responsible use of AI in government.”

Many agencies are already putting AI tools to work.

The Centers for Disease Control and Prevention is using AI to predict the spread of disease and detect illegal opioids, while the Centers for Medicare and Medicaid Services is using AI to reduce waste and identify anomalies in drug costs.

The Federal Aviation Administration is using AI to manage air traffic in major metropolitan areas and improve travel time.

OMB’s guidance encourages agencies to “responsibly experiment” with generative AI, with adequate safeguards in place. The administration notes that many agencies have already started this work, including by using AI chatbots to improve customer experience.

100 new AI hires coming to agencies by this summer

Young said the federal government is on track to hire at least 100 AI professionals into the federal workforce this summer, and is holding a career fair on April 18 to fill AI roles across the federal government.

President Joe Biden called for an “AI talent surge” across the government in his executive order last fall.

As federal agencies increasingly adopt AI, Young said agencies must also “not leave the existing federal workforce behind.”

OMB is calling on agencies to adopt the Labor Department’s upcoming principles for mitigating AI’s potential harm to employees.

The White House says the Labor Department is leading by example, consulting with federal employees and labor unions on the development of those principles, as well as on its own governance and use of AI.

Later this year, OMB will take additional steps to ensure agencies’ AI contracts align with its new policy and protect the rights and safety of the public from AI-related risks.

OMB will be taking further action later this year to address federal procurement of AI. It released a request for information on Thursday, to collect public input on that work.

A senior administration official said OMB, as part of the RFI, is looking for feedback on how to “support a strong and diverse and competitive federal ecosystem of AI vendors,” as well as how to incorporate OMB’s new AI risk management requirements into federal contracts.

The public has until April 28 to respond to the RFI.

Article link: https://federalnewsnetwork.com/artificial-intelligence/2024/03/omb-sets-binding-requirements-for-agencies-to-vet-ai-tools-before-using-them/

The State of the Federal EHR – FEHRM

Posted by timmreardon on 03/26/2024
Posted in: Uncategorized.

On April 4, from noon to 2 p.m. ET, the Federal Electronic Health Record Modernization (FEHRM) office will host The State of the Federal EHR, an event held twice a year to discuss the current and future state of the federal electronic health record (EHR), health information technology, and health information exchange. The event highlights the progress made in implementing a single, common federal EHR and related capabilities across the Department of Defense, Department of Veterans Affairs, Department of Homeland Security’s U.S. Coast Guard, Department of Commerce’s National Oceanic and Atmospheric Administration, and other federal agencies.

The event will provide the latest updates and insights about the federal EHR including the recent deployment of the federal EHR at the Captain James A. Lovell Federal Health Care Center.

This event will be virtual via Microsoft Teams and open to the public. We invite active participation from individuals who possess relevant broad-based knowledge and experience. Additional details regarding the agenda and meeting will be distributed to registered participants prior to the event.

If you have any questions about this event, please send an email to FEHRMcommunications@va.gov and include “The State of the Federal EHR” in the subject line.

RSVP before April 3, 2024.

Article link: https://www.fehrm.gov/fehrm-industry-interoperability-roundtable/

How Generative AI Works – IBM

Posted by timmreardon on 03/23/2024
Posted in: Uncategorized.

Generative AI typically goes through three distinct stages in its development:

1️⃣ Training
2️⃣ Tuning
3️⃣ Generation, evaluation and retuning

Discover how these phases enable generative AI to produce useful and high-quality outputs below.
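As a rough illustration of how the three stages fit together (a toy sketch, not IBM’s methodology), the snippet below uses a tiny character-level bigram model as a stand-in for a foundation model: it is first trained on broad text, then tuned on domain text, then used to generate output that is evaluated, with weak results feeding back into another round of tuning. All corpora and scores here are made up for the example.

```python
# Toy sketch of the three stages: a tiny character-level bigram model stands in
# for a large foundation model. Everything here is illustrative, not IBM's.
from collections import Counter, defaultdict
import random

def train(counts, text):
    """Stages 1/2 - accumulate bigram statistics from text (training or tuning)."""
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=40):
    """Stage 3a - sample new text from the learned statistics."""
    out = [start]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

def evaluate(counts, held_out):
    """Stage 3b - fraction of held-out bigrams the model has already seen."""
    seen = sum(1 for a, b in zip(held_out, held_out[1:]) if counts[a][b] > 0)
    return seen / max(1, len(held_out) - 1)

counts = defaultdict(Counter)
train(counts, "generative ai learns patterns from broad data ")   # 1. training
train(counts, "clinical notes and claims data tune the model ")   # 2. tuning
print(generate(counts, "g"))                                       # 3. generation
print("coverage:", evaluate(counts, "tuned model generates data")) # evaluation
# If coverage is low, gather more domain text and repeat the tuning step (retuning).
```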

Article link: https://www.linkedin.com/posts/ibm-watsonx_how-generative-ai-works-activity-7177280432990748672-CddB?

IBM Shares Quantum Use Cases In Dazzling New Book – Forbes

Posted by timmreardon on 03/22/2024
Posted in: Uncategorized.

Karl Freund, Contributor. Opinions expressed by Forbes Contributors are their own.

Founder and Principal Analyst, Cambrian-AI Research LLC

The IBM Institute for Business Value (IBV) has published a beautiful book, good to read and also a substantial addition to any executive office: the fourth edition of The Quantum Decade. The 168-page tome, written by over 70 professionals across every industry, clearly lays out the appropriate problems, the approaches to solutions, and the amazing technology being invented as we speak. With dozens of use cases and in-depth portrayals, this book is a must-read for every CEO and CTO. Here’s a summary of select sections I found particularly interesting, and a few use cases from the book.

Quantum Thinking

The IBV did a CEO study in 2021 that revealed that 89% of over 3,000 chief executives surveyed did not consider quantum computing as a key technology for delivering business results in the next two to three years. While this lack of recognition may be understandable in the short term, given quantum computing’s disruptive potential in the coming decade, CEOs need to start mobilizing resources to understand and engage with quantum technology now. IBV research also finds that in 2023, organizations invested 7% of their R&D budget in quantum computing, up 29% from 2021. By 2025, this is expected to further increase by another 25%.

Ignoring quantum computing could pose significant risks, the authors assert, with consequences potentially greater than missing out on the opportunity presented by artificial intelligence a decade ago. Phase 1 of the quantum computing playbook involves acknowledging that the computing landscape is undergoing a fundamental shift. This shift from analytics to the discovery of forward-looking models that can run on quantum computers opens up possibilities for uncovering solutions that were previously impossible.

Phase 2 involves asking important questions: How might quantum computing disrupt and reshape your business model? How could it enhance your existing AI and classical computing workflows? What could be the “killer app” for quantum computing in your industry? How can your organization deepen its quantum computing capabilities, either internally or through partnerships with ecosystems? This is the time to experiment, iterate with scenario planning, and cultivate talent proficient in quantum computing to educate internal stakeholders and leverage deep tech resources.

IBM says it is important to note that quantum computing doesn’t replace classical computing. Instead, quantum forms a progressive partnership with classical computing and AI, where the three work together iteratively, becoming more powerful as a collective than they are individually. In IBM’s hardware configurations, each quantum chassis is surrounded by classical computers, likely including racks of inference processing servers. So one needs to think about how to factor the solution to take advantage of these closely knit but disparate systems.

Phase 3, known as Quantum Advantage, marks a significant milestone where quantum computing demonstrates its ability to perform specific tasks more efficiently, cost-effectively, or with better quality than classical computers. Today, IBM’s quantum systems deliver utility-scale performance: the point at which quantum computers can now serve as scientific tools to explore new classes of problems beyond brute-force, classical simulation of quantum mechanics. Quantum utility is an important step toward “advantage,” when the combination of quantum computers with classical systems enables significantly better performance than classical systems alone. As advancements in hardware, software, and algorithms in quantum computing converge, they enable substantial performance improvements over classical computing, unlocking new opportunities for competitive advantage across industries.

Use Cases

However, achieving business value from quantum computing requires prioritizing the right use cases—those with the potential to truly transform an organization or an entire industry. Identifying and focusing on these strategic use cases is crucial for realizing the benefits of quantum technology. Here are a few examples that IBM articulates in the book.

ExxonMobil and the Global Supply Chain

ExxonMobil is exploring the potential of quantum computing to optimize global shipping routes, a crucial component of international trade that relies heavily on maritime transport. With around 90% of the world’s trade carried by sea, involving over 50,000 ships and potentially 20,000 containers per ship, optimizing these routes is a complex challenge beyond the capabilities of classical computers. In partnership with IBM, ExxonMobil is leveraging the IBM Quantum Network, which it joined in 2019 as the first energy company, to develop methods for mapping the global routing of merchant ships to quantum computers.

The core advantage of quantum computing in this context lies in its ability to minimize incorrect solutions and enhance correct ones, making it particularly suited for complex optimization problems. Utilizing the Qiskit quantum optimization module, ExxonMobil has tested various quantum algorithms to find the most effective ones for this task. They found that heuristic quantum algorithms and the Variational Quantum Eigensolver (VQE)-based optimization showed promise, particularly when the right ansatz (a physics term for an educated guess) is chosen.
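The details of the ExxonMobil formulation are not public here, but the general pattern is to cast the routing decision as a quadratic unconstrained binary optimization (QUBO) problem, the form that VQE-based heuristics like those mentioned above then minimize. The sketch below builds a toy "assign shipments to ships" QUBO with made-up costs and solves it by brute force in place of a quantum solver, purely to show the structure.

```python
# Generic sketch (illustrative assumptions, not ExxonMobil's actual model): cast
# a tiny shipment-to-ship assignment as a QUBO. A brute-force search stands in
# for the quantum heuristic so the example stays self-contained.
import itertools
import numpy as np

n_shipments, n_ships = 3, 2
cost = np.array([[1.0, 3.0],       # cost[i, j] = cost of putting shipment i on ship j
                 [2.0, 1.5],
                 [4.0, 1.0]])
penalty = 10.0                     # weight enforcing "each shipment on exactly one ship"

def qubo_energy(x):
    """x is a flat 0/1 vector; x[i*n_ships + j] = 1 if shipment i rides on ship j."""
    x = np.asarray(x).reshape(n_shipments, n_ships)
    assignment_cost = float(np.sum(cost * x))
    constraint = float(np.sum((x.sum(axis=1) - 1) ** 2))   # squared constraint violation
    return assignment_cost + penalty * constraint

best = min(itertools.product([0, 1], repeat=n_shipments * n_ships), key=qubo_energy)
print("best bitstring:", best, "energy:", qubo_energy(best))
```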

This exploration into quantum computing for maritime shipping optimization not only has the potential to significantly impact the logistics and transportation sectors but also demonstrates broader applications in other industries facing similar optimization challenges, such as goods delivery, ride-sharing services, and urban waste management.

The University of California and Machine Learning

Researchers from IBM Quantum and the University of California, Berkeley have developed a breakthrough algorithm in quantum machine learning, demonstrating a theoretical Quantum Advantage. Traditional quantum machine learning algorithms often required quantum states of data, but this new approach works with classical data, making it more applicable to real-world scenarios.

The team focused on supervised machine learning, where they utilized quantum circuits to map classical data into a higher dimensional space—a task naturally suited for quantum computing due to the high-dimensional nature of multiple qubit states. They then estimated a quantum kernel, a measure of similarity between data points, which was used within a classical support vector machine to effectively separate the data.

In late 2020, the researchers provided solid proof that their quantum feature map circuit outperforms all possible binary classical classifiers when only classical data is available. This advancement opens up new possibilities for quantum computing in various applications, such as forecasting, predicting properties from data, or conducting risk analysis, marking a significant step forward in the field of quantum machine learning.
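Structurally, the approach looks like the following sketch: compute a kernel matrix of pairwise similarities between data points, then hand it to a classical support vector machine. Here a simple classical feature map stands in for the quantum circuit (on hardware, each kernel entry would come from estimating the overlap of two quantum feature states); the dataset and the map itself are illustrative assumptions.

```python
# Structural sketch of a precomputed-kernel classifier. On quantum hardware the
# kernel entries would come from a quantum feature-map circuit; here a classical
# map stands in so the example runs anywhere.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def feature_map(x):
    """Placeholder for the feature map that lifts data into a larger space."""
    return np.array([np.cos(np.pi * x[0]), np.sin(np.pi * x[0]),
                     np.cos(np.pi * x[1]), np.sin(np.pi * x[1])])

def kernel_matrix(A, B):
    """Kernel entry = squared overlap of mapped points (mimicking |<phi(a)|phi(b)>|^2)."""
    FA = np.array([feature_map(a) for a in A])
    FB = np.array([feature_map(b) for b in B])
    return (FA @ FB.T) ** 2

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="precomputed").fit(kernel_matrix(X_tr, X_tr), y_tr)
print("test accuracy:", svm.score(kernel_matrix(X_te, X_tr), y_te))
```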

E.ON and Machine Learning

E.ON, a major energy operator in Europe, is leveraging quantum computing to enhance risk management and achieve its emission reduction goals. With a vast customer base and a significant increase in renewable assets expected by 2030, the company faces the challenge of managing weather-related risks and ensuring affordable energy costs. Collaborating with IBM, E.ON has implemented quantum computing strategies to conduct complex Monte Carlo simulations across various factors like locations, contracts, and weather conditions.

Key quantum computing applications include:

  • Using quantum nonlinear transformations for calculating energy contract gross margins via quantum Taylor expansions.
  • Performing risk analysis with quantum amplitude estimation to improve dynamic circuit leveraging.
  • Integrating quadratic speed-ups in classical Monte Carlo methods to optimize hardware resources.

These strategies have enabled real-time planning, finer risk diversification, and more frequent portfolio risk reassessments, thus aiding in the renegotiation of hedging contracts. E.ON views quantum computing as a pivotal technology for advancing machine learning, risk analysis, accelerated Monte Carlo techniques, and combinatorial optimization for logistics and scheduling, marking a significant shift in managing energy-related challenges.
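For a sense of the kind of calculation involved (illustrative numbers only, not E.ON's models), the sketch below runs a plain classical Monte Carlo estimate of an energy contract's gross-margin distribution under weather-driven uncertainty. Classically the estimation error shrinks roughly as 1/sqrt(N) in the number of samples; quantum amplitude estimation is the routine behind the quadratic speed-up mentioned above.

```python
# Classical Monte Carlo sketch (illustrative parameters, not E.ON's) of a
# weather-dependent gross-margin estimate, the type of risk calculation that
# quantum amplitude estimation targets.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Illustrative drivers: wind output (MWh) and spot price (EUR/MWh).
wind_output = np.maximum(rng.normal(loc=120.0, scale=30.0, size=N), 0.0)
spot_price = rng.lognormal(mean=np.log(60.0), sigma=0.3, size=N)

contracted_volume = 100.0          # MWh sold forward
contract_price = 65.0              # EUR/MWh

# Margin = forward sales plus selling (or buying back) the imbalance at spot.
margin = (contracted_volume * contract_price
          + (wind_output - contracted_volume) * spot_price)

print(f"expected margin : {margin.mean():,.0f} EUR")
print(f"5% value-at-risk: {np.percentile(margin, 5):,.0f} EUR")
```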

Wells Fargo and Financial Trading

Wells Fargo is actively exploring the potential of quantum computing for practical applications in the financial sector, partnering with IBM within the IBM Quantum Network. This collaboration grants Wells Fargo access to IBM’s quantum computers via the cloud, allowing for pioneering work in quantum computing use cases, including sampling, optimization, and machine learning, aimed at deriving valuable results from quantum technologies.

A notable area of investigation between Wells Fargo and IBM is sequence modeling, particularly for predicting mid-price movements in financial markets. This involves analyzing the Limit Order Book, which records ask-and-bid orders on exchanges, and focuses on the mid-price—the average between the lowest ask and the highest bid prices at any moment.

Wells Fargo has explored using quantum hidden Markov models (QHMMs) for stochastic generation, a quantum approach to sequence modeling. QHMMs aim to generate sequences of likely symbols (e.g., representing price increases or decreases) from a given start state, similar to how large language models generate text. This quantum approach has shown to be more efficient than its classical counterpart, hidden Markov models (HMMs), offering new ways to enhance artificial intelligence technology in finance through the more efficient definition of stochastic process languages.
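To see what "stochastic generation" means here, the sketch below implements the classical counterpart: a small hidden Markov model with toy parameters (not Wells Fargo's) that emits sequences of "U"/"D" symbols for mid-price up- and down-moves from a chosen start state. A QHMM plays the same generative role but uses quantum states as the hidden memory, which is where the claimed efficiency gain comes from.

```python
# Classical HMM sketch (toy regimes and probabilities) that generates "U"/"D"
# symbols for mid-price moves, the classical counterpart of a QHMM generator.
import numpy as np

rng = np.random.default_rng(7)

# Two hidden regimes: 0 = "calm", 1 = "volatile".
transition = np.array([[0.9, 0.1],      # P(next regime | current regime)
                       [0.2, 0.8]])
emission = np.array([[0.6, 0.4],        # P(symbol | regime); columns are U, D
                     [0.4, 0.6]])
symbols = np.array(["U", "D"])

def generate(start_state, length):
    """Emit a symbol sequence of the requested length from the given start state."""
    state, out = start_state, []
    for _ in range(length):
        out.append(rng.choice(symbols, p=emission[state]))
        state = rng.choice(2, p=transition[state])
    return "".join(out)

print(generate(start_state=0, length=30))
```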

JSR and Chip Manufacturing

IBM and JSR are exploring how quantum computing could shape the future of computer chip manufacturing. Gordon Moore famously predicted in 1965 that the number of transistors on a computer chip would double approximately every two years, a forecast that has held true for decades, known as “Moore’s Law.” This progress has been largely enabled by innovations in semiconductor manufacturing, notably the development of a photoresist-based method by IBM in the 1980s. This technique, which uses a light-sensitive material to print transistors on chips, became widespread, with companies like JSR Corporation becoming leading producers.

The continuous miniaturization and performance improvement of chips are challenged by the costs and complexities of designing new photoresist molecules, a task for which modern supercomputers struggle due to the difficulty of simulating quantum-scale phenomena. Quantum computing, which operates on the principles of quantum mechanics, offers a potential solution by efficiently simulating molecular systems, including those comprising photoresist materials.
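The sketch below illustrates why brute-force classical simulation hits a wall: the Hamiltonian matrix to diagonalize doubles in size with every additional quantum degree of freedom. A small transverse-field Ising chain stands in for a photoresist molecule; the model and parameters are illustrative, not JSR's chemistry.

```python
# Sketch of the exponential cost of exact classical simulation: the matrix
# doubles with every added spin, so exact diagonalization quickly becomes
# infeasible for molecule-sized systems.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def ising_hamiltonian(n, J=1.0, h=0.5):
    """Transverse-field Ising chain: H = -J sum Z_i Z_{i+1} - h sum X_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * kron_chain([Z if k in (i, i + 1) else I for k in range(n)])
    for i in range(n):
        H -= h * kron_chain([X if k == i else I for k in range(n)])
    return H

for n in (2, 4, 8, 10):
    H = ising_hamiltonian(n)
    ground = np.linalg.eigvalsh(H)[0]
    print(f"{n:2d} spins -> matrix {H.shape[0]}x{H.shape[0]}, ground energy {ground:.3f}")
```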

In a collaborative effort, IBM and JSR Corporation have started to explore the application of quantum computing in this field. A 2022 study demonstrated the use of IBM Quantum hardware to simulate small molecules akin to parts of a photoresist. This research represents a step toward utilizing quantum computing for developing new materials, potentially ensuring that Moore’s Law can continue to apply well into the future by enabling further advancements in semiconductor technology.

Conclusions

As you can see, the new edition of IBM’s Quantum Decade is a fabulous resource that should start more conversations and exploration in board rooms around the world. And that’s exactly what IBM intended; by collaborating with early thinkers, we can jump-start the quantum journey and accelerate the time to real-world solutions.

Article link: https://www-forbes-com.cdn.ampproject.org/c/s/www.forbes.com/sites/karlfreund/2024/03/15/ibm-shares-quantum-use-cases-in-dazzling-new-book/amp/

Bosonic Qiskit – IBM

Posted by timmreardon on 03/22/2024
Posted in: Uncategorized.

Bosonic Qiskit is an open-source software package from the Qiskit ecosystem that invites users to construct, simulate, and analyze quantum circuits using bosonic modes. In our latest for the IBM Quantum blog, two of the developers behind Bosonic Qiskit show users how to use their package to explore one of the most promising applications of hybrid qubit-bosonic hardware: bosonic error correction.

In this blog post, Bosonic Qiskit developers Zixiong Liu and Kevin C. Smith show users how to simulate the binomial “kitten code,” a simple variant of one of the three bosonic codes that have recently been demonstrated to surpass the “break-even” point. Take a look: https://ibm.co/3x3Ivbi

Date

20 Mar 2024

Authors

Zixiong Liu

Kevin C. Smith

In a previous post on the Qiskit Medium, we introduced Bosonic Qiskit, an open-source project in the Qiskit ecosystem that allows users to construct, simulate, and analyze quantum circuits that contain bosonic modes (also known as oscillators or qumodes) alongside qubits. Now, we’re following up on that work with a closer look at one of the most promising applications of hybrid qubit-bosonic hardware: bosonic error correction.

Bosonic error correction is an error handling strategy in which logical qubits are encoded into the infinite-dimensional Hilbert space of a type of quantum system known as the quantum harmonic oscillator. Much like the quantum systems we use to encode qubits, a quantum harmonic oscillator has discrete or “quantized” energy levels. However, qubits have only two energy levels while oscillators are formally described as having infinite levels, a quality that offers key advantages for applications like quantum error correction. Physical realizations of quantum harmonic oscillators include the electromagnetic modes of a microwave cavity and the vibrational modes of a chain of ions, to name a few examples.

In this article, we will demonstrate the utility of Bosonic Qiskit for exploring this promising approach to quantum error correction through an explicit, tutorial-style demonstration of one of the simplest bosonic error correction codes — the Binomial “kitten” code. We’ve also created a companion Jupyter notebook that serves as a tutorial on the Bosonic Qiskit GitHub repository. The notebook provides many further details on the basics of error correction, the kitten code, and its implementation in Bosonic Qiskit.

Benefits of bosonic error correction

A common approach to quantum error correction is to redundantly encode a logical qubit using many physical qubits. A well-known example of such a strategy is the surface code, where logical information is nonlocally “smeared” across many qubits such that it cannot be corrupted by local errors.

In practice, a major hurdle for implementing such many-qubit-based codes is the fact that, as the code “scales-up” in the number of errors it can correct, there is a corresponding increase in the number of noisy physical qubits over which the logical information must be spread. This, in turn, creates more opportunities for errors to occur. Implementing an error correction code reliable enough to tame this vicious cycle and improve the lifetime of logical information — despite the increase in additional parts — is not a trivial task.

One promising alternative strategy is to instead encode a logical qubit into a single bosonic mode, using its infinite-dimensional Hilbert space to provide the “extra space” to encode redundancy and detect errors. An immediate advantage of this approach is its hardware efficiency, requiring just a single mode (of, for example, a microwave cavity) to encode an error-corrected logical qubit. Another advantage is the simplicity of the dominant error channel, photon loss.

Technically speaking, an ancillary nonlinear element such as a superconducting qubit is also needed for universal control and measurement of the bosonic mode.

These advantages have recently been realized experimentally in superconducting platforms, with three distinct bosonic codes demonstrated to surpass the “break-even” point, meaning the lifetime of the logical qubit exceeds that of all base components in the system. These include the cat code [1], the GKP code [2], and the binomial code [3]. In the next section of this blog post (and in the companion Jupyter notebook), we will discuss a simple variant of the binomial code known as the “kitten code.” In particular, we will describe its basic properties, and show how the built-in functionality of Bosonic Qiskit allows us to simulate and analyze this code.

Error correction with the binomial “kitten” code

The binomial code refers to a family of codes which seek to approximately correct amplitude damping noise (i.e., photon loss errors). This is a desirable property given that photon loss is the dominant error of bosonic modes. The simplest version of the binomial code is commonly referred to as the ‘kitten’ code, which can correct a single photon loss error. Multiple photon loss events can result in a logical error.

The code words of bosonic error correcting codes — the physical states that encode logical information — can be represented as superpositions of Fock states. Fock states are quantum states with definite photon number n, labelled by |n⟩. In the kitten code, our two logical code words are:

|0L⟩ = (|0⟩ + |4⟩)/√2 and |1L⟩ = |2⟩

Beyond the property that single photon loss errors do not result in a logical error, there are a few additional advantages to defining our logical states as such.

First, the average photon number for both code words is two. As the average rate of photon loss for a qumode is proportional to the photon number, this rate will be the same in both code words. Crucially, this means it is therefore impossible to infer logical information based on the rate of photon loss detection.

Second, as both code words have even photon number parity, single photon loss errors will map the logical states onto odd parity Fock states (|1L⟩ → |1⟩, |0L⟩ → |3⟩), conveniently enabling the detection of single photon loss errors by measuring the photon number parity (i.e., whether it is even or odd). Importantly, this is done without learning the photon number. By extracting only the minimal information needed to detect errors, the logical information will be preserved.
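Before turning to Bosonic Qiskit itself, the plain-NumPy sketch below checks this algebra directly: it builds the two code words, applies the annihilation operator to model a single photon loss, and confirms that both code words have average photon number two and that a loss flips the parity. (The Fock-space cutoff and the code-word definitions follow the description above; this stand-alone check is an illustration, not part of the Bosonic Qiskit package.)

```python
# Plain-NumPy check of the kitten-code properties described above.
import numpy as np

dim = 8                                        # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
n_op = a.conj().T @ a                          # photon-number operator
parity = np.diag((-1.0) ** np.arange(dim))     # +1 for even, -1 for odd photon number

def fock(n):
    v = np.zeros(dim)
    v[n] = 1.0
    return v

# Kitten-code code words: |0_L> = (|0> + |4>)/sqrt(2), |1_L> = |2>.
zero_L = (fock(0) + fock(4)) / np.sqrt(2)
one_L = fock(2)

for name, state in [("0_L", zero_L), ("1_L", one_L)]:
    mean_n = state @ n_op @ state
    par_before = state @ parity @ state
    lost = a @ state
    lost /= np.linalg.norm(lost)               # normalized state after one photon loss
    par_after = lost @ parity @ lost
    print(f"|{name}>: <n> = {mean_n:.1f}, parity = {par_before:+.0f}, "
          f"parity after one loss = {par_after:+.0f}")
```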

Implementing the kitten code in Bosonic Qiskit

Now that we have a sense of how the kitten code works, let’s see how we can use it to simulate error correction by leveraging the built-in functionality of Bosonic Qiskit. In brief, we’ll need to implement the following components:

  1. The photon-loss error channel (for simulating errors)
  2. Measurement of the photon-number parity (for detecting errors)
  3. Recovery operation (when an error is detected)

Below, we briefly highlight the various functionalities that support our simulations. Please refer to our Jupyter Notebook for further implementation details.

Photon loss

Photon loss is implemented using Bosonic Qiskit’s PhotonLossNoisePass(), which takes as input the loss rate of the mode. Here, we choose 0.5 per millisecond, corresponding to a lifetime of 2 ms — roughly speaking, the average amount of time that Fock state |1⟩ will “survive.” We model idling time between error correction steps using the cv_delay() gate.

Error detection

As previously discussed, we can detect single photon loss errors by measuring the photon-number parity. In practice, this is achieved via the Hadamard test, a technique that relies on phase kickback using an ancillary qubit. In particular, we can use a controlled-rotation operation of the form CR(π/2) = exp(-i(π/2) σz a†a) such that the final qubit state depends on the photon-number parity (see Figure 1).


Figure 1: Circuit diagram for the parity check subcircuit. This subcircuit uses the Hadamard test to measure the parity of the qumode. The second uncontrolled rotation R(θ) = exp(-iθ a†a) removes a global phase. As discussed in our previous blog post, Bosonic Qiskit represents a single qumode (with some cutoff) using a logarithmic number of qubits.
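The same check can be written out in plain NumPy (a stand-alone illustration; the companion notebook builds the corresponding subcircuit in Bosonic Qiskit, and the exact angle conventions used here are an assumption): sandwiching the controlled rotation between two Hadamards on the ancilla makes the ancilla measurement outcome track the photon-number parity.

```python
# Plain-NumPy check of the parity-measurement idea behind Figure 1.
import numpy as np

dim = 8                                                     # Fock-space cutoff
n = np.arange(dim)
H_gate = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard on the ancilla
I_mode = np.eye(dim)

def fock(k):
    v = np.zeros(dim)
    v[k] = 1.0
    return v

# CR(pi/2) = exp(-i (pi/2) sigma_z x a†a), followed by an uncontrolled R(pi/2)
# on the qumode; both are diagonal in the |qubit> x |n> basis, so build them directly.
CR = np.diag(np.exp(-1j * (np.pi / 2) * np.kron(np.array([1.0, -1.0]), n)))
R = np.diag(np.exp(-1j * (np.pi / 2) * np.kron(np.ones(2), n)))
circuit = np.kron(H_gate, I_mode) @ R @ CR @ np.kron(H_gate, I_mode)

for k in range(5):
    state = circuit @ np.kron(np.array([1.0, 0.0]), fock(k))   # ancilla starts in |0>
    p_one = np.linalg.norm(state.reshape(2, dim)[1]) ** 2      # P(ancilla reads 1)
    print(f"Fock |{k}>: P(ancilla = 1) = {p_one:.2f} ({'odd' if k % 2 else 'even'} parity)")
```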

Recovery operation

When an error is detected, we need to apply a correction that maps the system back onto the appropriate logical state. For the purposes of this tutorial, we will not go into detail on the specific sequence of gates needed to physically implement the error recovery operators for the binomial code. Instead, we will simply state the expected theoretical state transfer operation that maps the error states back onto our logical qubits.

If parity changes due to photon loss, the qumode will end up in one of two error states depending on the initial logical state: |1L⟩ → |1⟩ ≡ |E1⟩ or |0L⟩ → |3⟩ ≡ |E0⟩. To correct the error state back to the logical state, we therefore require some unitary operation Ûodd that realizes the following mapping:

Ûodd|E1⟩ = Ûodd|1⟩ = |1L⟩ and Ûodd|E0⟩ = Ûodd|3⟩ = |0L⟩

What if an error is not detected? You might guess that the answer is to do nothing, but this turns out to be incorrect. Quite non-intuitively, not detecting photon loss itself produces error!

To see why, imagine someone gives you a qumode prepared either in Fock state |0⟩ or |2⟩, and you have the ability to detect single photon loss through parity measurements. Now imagine that you repeatedly measure the parity over a timescale much longer than the expected time between photon loss events, and keep finding that no photon loss has occurred. You might begin to suspect that the qumode is in the state |0⟩, and each repeated measurement will only enhance this suspicion.

In this way, even when the mode is in a superposition of |0⟩ and |2⟩, the state will decay toward |0⟩ through this phenomenon known as “no-jump backaction”. Consequently, we can further improve our error correction protocol by applying a correction Ûeven that undoes this no-jump backaction when even parity is measured.
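The effect is easy to reproduce numerically. In the sketch below (plain NumPy, using the same 2 ms lifetime quoted earlier), the conditional "no photon lost" evolution multiplies each Fock amplitude by exp(-κ·t·n/2); conditioned on repeated no-loss outcomes, an equal superposition of |0⟩ and |4⟩ visibly drifts toward |0⟩, which is exactly the backaction Ûeven is designed to undo.

```python
# NumPy sketch of no-jump backaction: conditioned on *not* detecting photon
# loss, an even superposition drifts toward |0>. Rates match the 2 ms lifetime
# used in the photon-loss section; the time step is illustrative.
import numpy as np

dim = 8
n = np.arange(dim)

def fock(k):
    v = np.zeros(dim)
    v[k] = 1.0
    return v

kappa = 0.5          # loss rate per ms (lifetime of 2 ms)
dt = 1.0             # ms between parity checks

state = (fock(0) + fock(4)) / np.sqrt(2)       # |0_L> of the kitten code
no_jump = np.exp(-kappa * dt * n / 2)          # conditional no-loss evolution per step

for step in range(4):
    print(f"after {step} no-loss checks: P(|0>) = {state[0] ** 2:.2f}, "
          f"P(|4>) = {state[4] ** 2:.2f}")
    state = no_jump * state
    state /= np.linalg.norm(state)             # renormalize on the no-loss outcome
```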

Putting it all together

Finally, we can put these components together to realize the protocol illustrated in Figure 2a. As this simulation does not include any other errors beyond photon loss, it is possible for us to achieve arbitrary enhancement by increasing the rate of error-correction. (Note: This would not be the case in a more realistic model where ancilla and gate errors are included. We’ve included the ability to add these additional error types as an option in the full Jupyter notebook).

In Figure 2b, we show the average fidelity of the initially prepared logical state |1L ⟩ as a function of time for various error-correction rates. These results clearly demonstrate the expected trend that the lifetime of the logical state increases with the rate of error-correction.

For simplicity, here we have shown the results for the application of Û_odd only. For complete results that include Û_even , see the full implementation in the Jupyter notebook.


Figure 2: Implementation of the binomial “kitten” error correcting code. (a) A circuit implementing the error correction cycle. Here, cv_delay(T) implements the error channel for chosen time T. The parity check (during which errors can also occur) is used to detect errors. Finally, we apply a recovery operation dependent on the outcome of this measurement. This cycle is repeated for 12 ms. (b) State fidelity averaged over 1000 shots as a function of time for various error correction rates. The no error correction case is shown in blue, while successive colors illustrate the (theoretical) improvement in lifetime enabled by error correction. To extract estimates of the error-corrected lifetime, we have fit exponential curves to each data set.

Try it yourself!

In this article, we have described the basics of the binomial “kitten” code and have illustrated how to simulate it using the built-in features of Bosonic Qiskit. For more details on the full implementation that includes additional options (such as inclusion of Ûeven and ancillary qubit errors), we recommend that interested readers download the full Jupyter notebook, designed to be interactive such that users can generate error-correction simulations using parameters of their choosing.

For more information about Bosonic Qiskit, including installation instructions, see the GitHub repository. We particularly encourage new users to look at the tutorials and PyTest test cases. For those unfamiliar with bosonic quantum computation in general, we recommend the recently developed Bosonic Qiskit Textbook written by members of the Yale undergraduate Quantum Computing club Ben McDonough, Jeb Cui, and Gabriel Marous.

Bosonic Qiskit is a Qiskit ecosystem community project. Click here to discover more Qiskit ecosystem projects and add your own!

Article link: https://www.ibm.com/quantum/blog/bosonic-error-correction?social_post=sf187169101&sf187169101=1


Get started with Qiskit

View documentation

The National Strategy on Microelectronics Research – DARPA

Posted by timmreardon on 03/21/2024
Posted in: Uncategorized.

ICYMI: The National Strategy on Microelectronics Research, recently released by the White House Office of Science and Technology Policy, provides a future-focused framework to shape U.S. leadership in #microelectronics. This guidance outlines four central goals supporting research and development, manufacturing infrastructure, an innovation ecosystem, and a robust technical workforce for next-gen advances. While DARPA is not funded by the CHIPS Act, agency leaders remain part of the cross-agency strategy, working collaboratively to accelerate U.S. #semiconductor R&D and strengthen national and economic security.

Article link: https://www.linkedin.com/posts/darpa_national-strategy-on-microelectronics-research-activity-7176220495833395200-d4wO?

The AI Act is done. Here’s what will (and won’t) change – MIT Technology Review

Posted by timmreardon on 03/20/2024
Posted in: Uncategorized.

The hard work starts now.

By Melissa Heikkilä

March 19, 2024

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s official. After three years, the AI Act, the EU’s new sweeping AI law, jumped through its final bureaucratic hoop last week when the European Parliament voted to approve it. (You can catch up on the five main things you need to know about the AI Act with this story I wrote last year.) 

This also feels like the end of an era for me personally: I was the first reporter to get the scoop on an early draft of the AI Act in 2021, and have followed the ensuing lobbying circus closely ever since. 

But the reality is that the hard work starts now. The law will enter into force in May, and people living in the EU will start seeing changes by the end of the year. Regulators will need to get set up to enforce the law properly, and companies will have up to three years to comply with the law.

Here’s what will (and won’t) change:

1. Some AI uses will get banned later this year

The Act places restrictions on AI use cases that pose a high risk to people’s fundamental rights, such as in healthcare, education, and policing. These will be outlawed by the end of the year. 

It also bans some uses that are deemed to pose an “unacceptable risk.” They include some pretty out-there and ambiguous use cases, such as AI systems that deploy “subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making,” or exploit vulnerable people. The AI Act also bans systems that infer sensitive characteristics such as someone’s political opinions or sexual orientation, and the use of real-time facial recognition software in public places. The creation of facial recognition databases by scraping the internet à la Clearview AI will also be outlawed. 

There are some pretty huge caveats, however. Law enforcement agencies are still allowed to use sensitive biometric data, as well as facial recognition software in public places to fight serious crime, such as terrorism or kidnappings. Some civil rights organizations, such as digital rights organization Access Now, have called the AI Act a “failure for human rights” because it did not ban controversial AI use cases such as facial recognition outright. And while companies and schools are not allowed to use software that claims to recognize people’s emotions, they can if it’s for medical or safety reasons.

2. It will be more obvious when you’re interacting with an AI system

Tech companies will be required to label deepfakes and AI-generated content and notify people when they are interacting with a chatbot or other AI system. The AI Act will also require companies to develop AI-generated media in a way that makes it possible to detect. This is promising news in the fight against misinformation, and will give research around watermarking and content provenance a big boost. 

However, this is all easier said than done, and research lags far behind what the regulation requires. Watermarks are still an experimental technology and easy to tamper with. It is still difficult to reliably detect AI-generated content. Some efforts show promise, such as the C2PA, an open-source internet protocol, but far more work is needed to make provenance techniques reliable, and to build an industry-wide standard. 

3. Citizens can complain if they have been harmed by an AI

The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement (and they are hiring). Thanks to the AI Act, citizens in the EU can submit complaints about AI systems when they suspect they have been harmed by one, and can receive explanations of why those systems made the decisions they did. It’s an important first step toward giving people more agency in an increasingly automated world. However, this will require citizens to have a decent level of AI literacy and to be aware of how algorithmic harms happen. For most people, these are still very foreign and abstract concepts.

4. AI companies will need to be more transparent

Most AI uses will not require compliance with the AI Act. It’s only AI companies developing technologies in “high risk” sectors, such as critical infrastructure or healthcare, that will have new obligations when the Act fully comes into force in three years. These include better data governance, ensuring human oversight and assessing how these systems will affect people’s rights.

AI companies that are developing “general-purpose AI models,” such as language models, will also need to create and keep technical documentation showing how they built the model and how they respect copyright law, and to publish a publicly available summary of the training data that went into the model.

This is a big change from the current status quo, where tech companies are secretive about the data that went into their models, and will require an overhaul of the AI sector’s messy data management practices. 

The companies with the most powerful AI models, such as GPT-4 and Gemini, will face more onerous requirements, such as having to perform model evaluations and risk-assessments and mitigations, ensure cybersecurity protection, and report any incidents where the AI system failed. Companies that fail to comply will face huge fines or their products could be banned from the EU. 

It’s also worth noting that free open-source AI models that share every detail of how the model was built, including the model’s architecture, parameters, and weights, are exempt from many of the obligations of the AI Act.

Article link: https://www.technologyreview.com/2024/03/19/1089919/the-ai-act-is-done-heres-what-will-and-wont-change/
