healthcarereimagined

Envisioning healthcare for the 21st century

HRSA Announces Organ Procurement and Transplantation Network Modernization Initiative – HRSA

Posted by timmreardon on 03/22/2023
Posted in: Uncategorized.

Initiative includes the release of new organ donor and transplant data; prioritization of modernization of the OPTN IT system; and call for Congress to make specific reforms in the National Organ Transplant Act

Today, the Health Resources and Services Administration (HRSA), an agency of the U.S. Department of Health and Human Services (HHS), announced a Modernization Initiative that includes several actions to strengthen accountability and transparency in the Organ Procurement and Transplantation Network (OPTN):

  1. Data dashboards detailing individual transplant center and organ procurement organization data on organ retrieval, waitlist outcomes, and transplants, and demographic data on organ donation and transplant;
  2. Modernization of the OPTN IT system in line with industry-leading standards, improving OPTN governance, and increasing transparency and accountability in the system to better serve the needs of patients and families;
  3. HRSA’s intent to issue contract solicitations for multiple awards to manage the OPTN in order to foster competition and ensure OPTN Board of Directors’ independence;
  4. The President’s Fiscal Year 2024 Budget proposal to more than double investment in organ procurement and transplantation with a $36 million increase over Fiscal Year 2023 for a total of $67 million; and,
  5. A request to Congress included in the Fiscal Year 2024 Budget to update the nearly 40-year-old National Organ Transplant Act to take actions such as:
    • Removing the appropriations cap on the OPTN contract(s) to allow HRSA to better allocate resources, and
    • Expanding the pool of eligible contract entities to enhance performance and innovation through increased competition.

“Every day, patients and families across the United States rely on the Organ Procurement and Transplantation Network to save the lives of their loved ones who experience organ failure,” said Carole Johnson, HRSA Administrator. “At HRSA, our stewardship and oversight of this vital work is a top priority. That is why we are taking action to both bring greater transparency to the system and to reform and modernize the OPTN. The individuals and families that depend on this life-saving work deserve no less.”

Today, HRSA is posting on its website a new data dashboard to share de-identified information on organ donors, organ procurement, transplant waitlists, and transplant recipients. Patients, families, clinicians, researchers, and others can use this data to inform decision-making as well as process improvements. Today’s launch is an initial data set, which HRSA intends to refine over time and update regularly. 

This announcement also includes a plan to strengthen accountability, equity, and performance in the organ donation and transplantation system. This iterative plan will specifically focus on five key areas: technology; data transparency; governance; operations; and quality improvement and innovation. In implementing this plan, HRSA intends to issue contract solicitations for multiple awards to manage and improve the OPTN. HRSA also intends to further the OPTN Board of Directors’ independence through the contracting process and the use of multiple contracts. Ensuring robust competition in every industry is a key priority of the Biden-Harris Administration and will help meet the OPTN Modernization Initiative’s goals of promoting innovation and the best quality of service for patients. 

Finally, the President’s Budget for Fiscal Year 2024 would more than double HRSA’s budget for organ-related work, including OPTN contracting and the implementation of the Modernization Initiative, to total $67 million. In addition, the Budget requests statutory changes to the National Organ Transplant Act to remove the decades-old ceiling on the amount of appropriated funding that can be awarded to the statutorily required vendor(s) for the OPTN.  It also requests that Congress expand the pool of eligible contract entities to enhance performance and innovation through increased competition, particularly with respect to information technology vendors.  

HRSA recognizes that while modernization work is complex, the integrity of the organ matching process is paramount and cannot be disrupted. That is why HRSA’s work will be guided by and centered around several key priorities, including the urgent needs of the more than 100,000 individuals and their families awaiting transplant; the 24/7 life-saving nature of the system; and patient safety and health. HRSA intends to engage with a wide and diverse group of stakeholders early and often to ensure a human-centered design approach that reflects pressing areas of need and ensures that the experiences of system users, such as patients, are addressed first. As a part of this commitment, HRSA has created an OPTN Modernization Website to keep stakeholders informed about the Modernization Initiative and provide regular progress updates.


See News & Announcements on HRSA.gov.

Researchers Spot Silicon-Level Hardware Trojans in Chips, Release Their Algorithm for All to Try – Hackster.io

Posted by timmreardon on 03/22/2023
Posted in: Uncategorized.

Using thousands of electron microscope images and the original chip layouts, the researchers spotted 37 of 40 deliberate modifications.

Researchers at the Ruhr University Bochum and the Max Planck Institute for Security and Privacy (MPI-SP) have come up with an approach to analyzing die photos of real-world microchips to reveal hardware Trojan attacks — and are releasing their imagery and algorithm for all to try.

“It’s conceivable that tiny changes might be inserted into the designs in the factories shortly before production that could override the security of the chips,” says Steffen Becker, PhD and co-author of the paper detailing the work, of the problem the team set about to solve. “In extreme cases, such hardware Trojans could allow an attacker to paralyze parts of the telecommunications infrastructure at the push of a button.”

High-resolution die shots and original layout files have proven enough to automatically flag potentially malicious modifications in CMOS chips. (📷: Puschner et al)

Looking at chips built on 28nm, 40nm, 65nm, and 90nm process nodes, the team set about automating the process of inspecting finished silicon for hardware-level tampering. Using designs created by Thorben Moos, PhD, the researchers devised a way to test their approach: comparing the physical chips Moos had already built against design files containing minor after-the-fact modifications, so that silicon and layout were no longer a direct match.

“Comparing the chip images and the construction plans turned out to be quite a challenge, because we first had to precisely superimpose the data,” says first author Endres Puschner. “On the smallest chip, which was built with 28-nanometer technology, a single speck of dust or a hair can obscure a whole row of standard cells.”

Despite these challenges, the analysis algorithm showed promise, detecting 37 of the 40 modifications — including all the modifications made to the chips built on process nodes between 40nm and 90nm. The algorithm did, admittedly, throw up 500 false positives — but, says Puschner, “with more than 1.5 million standard cells examined, this is a very good rate.”
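
In outline, the detection approach can be pictured in a few lines: superimpose the die image on a raster of the layout, then flag standard-cell sites whose pixels diverge. The snippet below is an illustrative NumPy sketch under simplifying assumptions (translation-only alignment, a hypothetical cell_boxes input), not the team’s released code, which is published on GitHub.

```python
# Illustrative sketch of image-vs-layout trojan screening; not the authors' code.
import numpy as np

def align_offset(image, reference):
    """Estimate the integer (dy, dx) translation that superimposes image on
    reference, using FFT-based cross-correlation (rotation/scale ignored)."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

def flag_modified_cells(image, reference, cell_boxes, threshold=0.25):
    """Return the (y0, y1, x0, x1) standard-cell boxes whose mean absolute
    pixel difference after alignment exceeds the threshold."""
    dy, dx = align_offset(image, reference)
    shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
    return [(y0, y1, x0, x1)
            for (y0, y1, x0, x1) in cell_boxes
            if np.abs(shifted[y0:y1, x0:x1] - reference[y0:y1, x0:x1]).mean() > threshold]
```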

The team’s approach picked up on modifications (left) compared to the expected design output (right) automatically. (📷: Puschner et al)

The desire to analyze silicon-level hardware to detect either malicious modifications or counterfeit hardware was also behind recent work by engineer Andrew “bunnie” Huang, who developed a technique for peering inside packaged chips and uncovering the silicon within. Huang’s approach lacks the resolution, however, for cell-level analysis — which this research team managed through electron microscopy.

The team’s paper is available under open-access terms on the IACR Cryptology ePrint Archive, while the full imagery and source code behind the paper have been published to GitHub under the permissive MIT license. “We […] hope that other groups will use our data for follow-up studies,” Becker says. “Machine learning could probably improve the detection algorithm to such an extent that it would also detect the changes on the smallest chips that we missed.”

Article link: https://www-hackster-io.cdn.ampproject.org/c/s/www.hackster.io/news/researchers-spot-silicon-level-hardware-trojans-in-chips-release-their-algorithm-for-all-to-try-ba00bbd56248.amp

How We’re Bringing the Power of Quantum Computing to Medical Research – Cleveland Clinic

Posted by timmreardon on 03/21/2023
Posted in: Uncategorized.

Cleveland Clinic is a Founding Partner in Quantum Innovation Hub

Based in Greater Washington D.C., the nonprofit development organization Connected DMV and a coalition of partners are developing the new Life Sciences and Healthcare Quantum Innovation Hub. Its purpose is to prepare the healthcare sector for the burgeoning quantum era and to align with key national and global efforts in life sciences and quantum technologies.

The U.S. Department of Commerce’s Economic Development Administration has awarded more than $600,000 to Connected DMV for development of the Hub. This will include formation of a collaborative of at least 25 organizations specializing in quantum technology and end-use applications.

Cleveland Clinic was invited to join the Quantum Innovation Hub because of its work to advance medical research through quantum computing. As the lead healthcare system in the coalition, Cleveland Clinic will help define quantum’s role in the future of healthcare and educate other health systems on the technology’s possibilities.

Quantum’s potential

Quantum computing is a radically different approach to information processing and data analysis. It is based on the principles of quantum physics, which describe how subatomic particles behave. By manipulating and measuring the actions of quantum particles, quantum computers can in theory solve problems too massive and complex for traditional computers, which are bound by the laws of classical physics. Quantum computers are still in an early phase of development, but they have the potential to advance medical research.
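
As a toy illustration of that principle, the snippet below simulates the smallest possible quantum system, a single qubit, with ordinary linear algebra. It conveys the idea of superposition and measurement probabilities, and of course requires no quantum hardware.

```python
# A single simulated qubit: state vector, one gate, measurement probabilities.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
qubit = np.array([1.0, 0.0])                  # the definite state |0>
qubit = H @ qubit                             # now an equal superposition
probabilities = np.abs(qubit) ** 2            # Born rule for measurement odds
print(probabilities)                          # [0.5 0.5]: either outcome, 50/50
```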

 “We believe quantum computing holds great promise for accelerating the pace of scientific discovery,” says Lara Jehi, MD, MHCDS, Cleveland Clinic’s Chief Research Information Officer. “As an academic medical center, research, innovation and education are integral parts of Cleveland Clinic’s mission. Quantum, AI [artificial intelligence] and other emerging technologies have the potential to revolutionize medicine. We look forward to working with partners across healthcare and life sciences to solve complex medical problems and change the course of diseases like cancer, heart conditions and neurodegenerative disorders.”

Collaborating with IBM

Last year, Cleveland Clinic announced a 10-year partnership with IBM to establish the Discovery Accelerator, a joint center focused on easing traditional bottlenecks in medical research through innovative technologies such as quantum computing, the hybrid cloud and artificial intelligence.

The partnership combines Cleveland Clinic’s medical expertise with the technology expertise of IBM, including the company’s leadership in quantum technology. IBM is installing the first private-sector on-premises Quantum System One computer in the United States on Cleveland Clinic’s main campus.

The Discovery Accelerator will allow Cleveland Clinic to contribute to Connected DMV’s hub by advancing the pace of discovery, using the IBM quantum computer in areas such as drug design, deciphering complex biological processes and developing personalized disease therapies.

Scaling up

The hub will be part of Connected DMV’s Potomac Quantum Innovation Center initiative, which aims to:

  • Accelerate investment in, and research and development of, quantum computing.
  • Develop an equitable and scalable talent pipeline to work in quantum computing.
  • Increase collaboration among the public sector, academia, industry, community and investors to accelerate the value of quantum computing.

“Innovation is always iterative, and requires sustained collaboration between research, development and technology, and the industries that will benefit from the value generated,” says George Thomas, Chief Innovation Officer of Connected DMV and the leader of the Potomac Quantum Innovation Center initiative.

“Quantum has the potential to have a substantive impact on our society in the near future,” Thomas says. “The Life Sciences and Healthcare Quantum Innovation Hub will serve as the foundation for sustained focus and investment to accelerate and scale our path into the era of quantum.”

Article link: https://consultqd.clevelandclinic.org/how-were-bringing-the-power-of-quantum-computing-to-medical-research/

The Age of AI has begun – GatesNotes

Posted by timmreardon on 03/21/2023
Posted in: Uncategorized.

By Bill Gates

March 21, 2023 · 14-minute read

In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary.

The first time was in 1980, when I was introduced to a graphical user interface—the forerunner of every modern operating system, including Windows. I sat with the person who had shown me the demo, a brilliant programmer named Charles Simonyi, and we immediately started brainstorming about all the things we could do with such a user-friendly approach to computing. Charles eventually joined Microsoft, Windows became the backbone of Microsoft, and the thinking we did after that demo helped set the company’s agenda for the next 15 years.

The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn’t been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts—it asks you to think critically about biology.) If you can do that, I said, then you’ll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months.

In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam—and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5—the highest possible score, and the equivalent of getting an A or A+ in a college-level biology course.

Once it had aced the test, we asked it a non-scientific question: “What do you say to a father with a sick child?” It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

This inspired me to think about all the things that AI can achieve in the next five to 10 years.

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Philanthropy is my full-time job these days, and I’ve been thinking a lot about how—in addition to helping people be more productive—AI can reduce some of the world’s worst inequities. Globally, the worst inequity is in health: 5 million children under the age of 5 die every year. That’s down from 10 million two decades ago, but it’s still a shockingly high number. Nearly all of these children were born in poor countries and die of preventable causes like diarrhea or malaria. It’s hard to imagine a better use of AIs than saving the lives of children.

I’ve been thinking a lot about how AI can reduce some of the world’s worst inequities.

In the United States, the best opportunity for reducing inequity is to improve education, particularly making sure that students succeed at math. The evidence shows that having basic math skills sets students up for success, no matter what career they choose. But achievement in math is going down across the country, especially for Black, Latino, and low-income students. AI can help turn that trend around.

Climate change is another issue where I’m convinced AI can make the world more equitable. The injustice of climate change is that the people who are suffering the most—the world’s poorest—are also the ones who did the least to contribute to the problem. I’m still thinking and learning about how AI can help, but later in this post I’ll suggest a few areas with a lot of potential.

In short, I’m excited about the impact that AI will have on issues that the Gates Foundation works on, and the foundation will have much more to say about AI in the coming months. The world needs to make sure that everyone—and not just people who are well-off—benefits from artificial intelligence. Governments and philanthropy will need to play a major role in ensuring that it reduces inequity and doesn’t contribute to it. This is the priority for my own work related to AI.

Any new technology that’s so disruptive is bound to make people uneasy, and that’s certainly true with artificial intelligence. I understand why—it raises hard questions about the workforce, the legal system, privacy, bias, and more. AIs also make factual mistakes and experience hallucinations. Before I suggest some ways to mitigate the risks, I’ll define what I mean by AI, and I’ll go into more detail about some of the ways in which it will help empower people at work, save lives, and improve education.

Defining artificial intelligence

Technically, the term artificial intelligence refers to a model created to solve a specific problem or provide a particular service. What is powering things like ChatGPT is artificial intelligence. It is learning how to do chat better but can’t learn other tasks. By contrast, the term artificial general intelligence refers to software that’s capable of learning any task or subject. AGI doesn’t exist yet—there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all.

Developing AI and AGI has been the great dream of the computing industry. For decades, the question was when computers would be better than humans at something other than making calculations. Now, with the arrival of machine learning and large amounts of computing power, sophisticated AIs are a reality and they will get better very fast.

I think back to the early days of the personal computing revolution, when the software industry was so small that most of us could fit onstage at a conference. Today it is a global industry. Since a huge portion of it is now turning its attention to AI, the innovations are going to come much faster than what we experienced after the microprocessor breakthrough. Soon the pre-AI period will seem as distant as the days when using a computer meant typing at a C:> prompt rather than tapping on a screen.

Productivity enhancement

Although humans are still better than GPT at a lot of things, there are many jobs where these capabilities are not used much. For example, many of the tasks done by a person in sales (digital or phone), service, or document handling (like payables, accounting, or insurance claim disputes) require decision-making but not the ability to learn continuously. Corporations have training programs for these activities and in most cases, they have a lot of examples of good and bad work. Humans are trained using these data sets, and soon these data sets will also be used to train the AIs that will empower people to do this work more efficiently.

As computing power gets cheaper, GPT’s ability to express ideas will increasingly be like having a white-collar worker available to help you with various tasks. Microsoft describes this as having a co-pilot. Fully incorporated into products like Office, AI will enhance your work—for example by helping with writing emails and managing your inbox.

Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you’ll be able to write a request in plain English. (And not just English—AIs will understand languages from around the world. In India earlier this year, I met with developers who are working on AIs that will understand many of the languages spoken there.)

In addition, advances in AI will enable the creation of a personal agent. Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with. This will both improve your work on the tasks you want to do and free you from the ones you don’t want to do.

Advances in AI will enable the creation of a personal agent.

You’ll be able to use natural language to have this agent help you with scheduling, communications, and e-commerce, and it will work across all your devices. Because of the cost of training the models and running the computations, creating a personal agent is not feasible yet, but thanks to the recent advances in AI, it is now a realistic goal. Some issues will need to be worked out: For example, can an insurance company ask your agent things about you without your permission? If so, how many people will choose not to use it?

Company-wide agents will empower employees in new ways. An agent that understands a particular company will be available for its employees to consult directly and should be part of every meeting so it can answer questions. It can be told to be passive or encouraged to speak up if it has some insight. It will need access to the sales, support, finance, product schedules, and text related to the company. It should read news related to the industry the company is in. I believe that the result will be that employees will become more productive.
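
As a rough sketch of one piece of such an agent, the toy below ranks internal documents against an employee’s question — the retrieval step that would ground the agent’s answers in company data. The corpus and scoring are hypothetical stand-ins, not any vendor’s product.

```python
# Toy document retrieval for a company agent: bag-of-words cosine ranking.
from collections import Counter
import math

docs = {  # hypothetical internal documents
    "sales_q3.txt": "q3 sales grew eight percent led by the new product line",
    "support_faq.txt": "reset a customer password from the support console",
    "roadmap.txt": "the product schedule targets a march launch for version two",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_document(question: str) -> str:
    q = Counter(question.lower().split())
    return max(docs, key=lambda name: cosine(q, Counter(docs[name].split())))

print(top_document("when is the version two launch on the product schedule"))
# -> roadmap.txt
```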

When productivity goes up, society benefits because people are freed up to do other things, at work and at home. Of course, there are serious questions about what kind of support and retraining people will need. Governments need to help workers transition into other roles. But the demand for people who help other people will never go away. The rise of AI will free people up to do things that software never will—teaching, caring for patients, and supporting the elderly, for example.

Global health and education are two areas where there’s great need and not enough workers to meet those needs. These are areas where AI can help reduce inequity if it is properly targeted. These should be a key focus of AI work, so I will turn to them now.

Health

I see several ways in which AIs will improve health care and the medical field.

For one thing, they’ll help health-care workers make the most of their time by taking care of certain tasks for them—things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor’s visit. I expect that there will be a lot of innovation in this area.

Other AI-driven improvements will be especially important for poor countries, where the vast majority of under-5 deaths happen.

For example, many people in those countries never get to see a doctor, and AIs will help the health workers they do see be more productive. (The effort to develop AI-powered ultrasound machines that can be used with minimal training is a great example of this.) AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.

The AI models used in poor countries will need to be trained on different diseases than in rich countries. They will need to work in different languages and factor in different challenges, such as patients who live very far from clinics or can’t afford to stop working if they get sick.

People will need to see evidence that health AIs are beneficial overall, even though they won’t be perfect and will make mistakes. AIs have to be tested very carefully and properly regulated, which means it will take longer for them to be adopted than in other areas. But then again, humans make mistakes too. And having no access to medical care is also a problem.

In addition to helping with care, AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it’s hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way.

The next generation of tools will be much more efficient, and they’ll be able to predict side effects and figure out dosing levels. One of the Gates Foundation’s priorities in AI is to make sure these tools are used for the health problems that affect the poorest people in the world, including AIDS, TB, and malaria.

Similarly, governments and philanthropy should create incentives for companies to share AI-generated insights into crops or livestock raised by people in poor countries. AIs can help develop better seeds based on local conditions, advise farmers on the best seeds to plant based on the soil and weather in their area, and help develop drugs and vaccines for livestock. As extreme weather and climate change put even more pressure on subsistence farmers in low-income countries, these advances will be even more important.

Education

Computers haven’t had the effect on education that many of us in the industry have hoped for. There have been some good developments, including educational games and online sources of information like Wikipedia, but they haven’t had a meaningful effect on any of the measures of students’ achievement.

But I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to. It will give immediate feedback.

There are many ways that AIs can assist teachers and administrators, including assessing a student’s understanding of a subject and giving advice on career planning. Teachers are already using tools like ChatGPT to provide comments on their students’ writing assignments.

Of course, AIs will need a lot of training and further development before they can do things like understand how a certain student learns best or what motivates them. Even once the technology is perfected, learning will still depend on great relationships between students and teachers. It will enhance—but never replace—the work that students and teachers do together in the classroom.

New tools will be created for schools that can afford to buy them, but we need to ensure that they are also created for and available to low-income schools in the U.S. and around the world. AIs will need to be trained on diverse data sets so they are unbiased and reflect the different cultures where they’ll be used. And the digital divide will need to be addressed so that students in low-income households do not get left behind.

I know a lot of teachers are worried that students are using GPT to write their essays. Educators are already discussing ways to adapt to the new technology, and I suspect those conversations will continue for quite some time. I’ve heard about teachers who have found clever ways to incorporate the technology into their work—like by allowing students to use GPT to create a first draft that they have to personalize.

Risks and problems with AI

You’ve probably read about problems with the current AI models. For example, they aren’t necessarily good at understanding the context for a human’s request, which leads to some strange results. When you ask an AI to make up something fictional, it can do that well. But when you ask for advice about a trip you want to take, it may suggest hotels that don’t exist. This is because the AI doesn’t understand the context for your request well enough to know whether it should invent fake hotels or only tell you about real ones that have rooms available.

There are other issues, such as AIs giving wrong answers to math problems because they struggle with abstract reasoning. But none of these are fundamental limitations of artificial intelligence. Developers are working on them, and I think we’re going to see them largely fixed in less than two years and possibly much faster.

Other concerns are not simply technical. For example, there’s the threat posed by humans armed with AI. Like most inventions, artificial intelligence can be used for good purposes or malign ones. Governments need to work with the private sector on ways to limit the risks.

Then there’s the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.

Superintelligent AIs are in our future. Compared to a computer, our brains operate at a snail’s pace: An electrical signal in the brain moves at 1/100,000th the speed of the signal in a silicon chip! Once developers can generalize a learning algorithm and run it at the speed of a computer—an accomplishment that could be a decade away or a century away—we’ll have an incredibly powerful AGI. It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. This will be a profound change.

These “strong” AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time.

But none of the breakthroughs of the past few months have moved us substantially closer to strong AI. Artificial intelligence still doesn’t control the physical world and can’t establish its own goals. A recent New York Times article about a conversation with ChatGPT where it declared it wanted to become a human got a lot of attention. It was a fascinating look at how human-like the model’s expression of emotions can be, but it isn’t an indicator of meaningful independence.

Three books have shaped my own thinking on this subject: Superintelligence, by Nick Bostrom; Life 3.0 by Max Tegmark; and A Thousand Brains, by Jeff Hawkins. I don’t agree with everything the authors say, and they don’t agree with each other either. But all three books are well written and thought-provoking.

The next frontiers

There will be an explosion of companies working on new uses of AI as well as ways to improve the technology itself. For example, companies are developing new chips that will provide the massive amounts of processing power needed for artificial intelligence. Some use optical switches—lasers, essentially—to reduce their energy consumption and lower the manufacturing cost. Ideally, innovative chips will allow you to run an AI on your own device, rather than in the cloud, as you have to do today.

On the software side, the algorithms that drive an AI’s learning will get better. There will be certain domains, such as sales, where developers can make AIs extremely accurate by limiting the areas that they work in and giving them a lot of training data that’s specific to those areas. But one big open question is whether we’ll need many of these specialized AIs for different uses—one for education, say, and another for office productivity—or whether it will be possible to develop an artificial general intelligence that can learn any task. There will be immense competition on both approaches.

No matter what, the subject of AIs will dominate the public discussion for the foreseeable future. I want to suggest three principles that should guide that conversation.

First, we should try to balance fears about the downsides of AI—which are understandable and valid—with its ability to improve people’s lives. To make the most of this remarkable new technology, we’ll need to both guard against the risks and spread the benefits to as many people as possible.

Second, market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity. Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world’s best AIs on its biggest problems. 

Although we shouldn’t wait for this to happen, it’s interesting to think about whether artificial intelligence would ever identify inequity and try to reduce it. Do you need to have a sense of morality in order to see inequity, or would a purely rational AI also see it? If it did recognize inequity, what would it suggest that we do about it?

Finally, we should keep in mind that we’re only at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it.

I’m lucky to have been involved with the PC revolution and the Internet revolution. I’m just as excited about this moment. This new technology can help people everywhere improve their lives. At the same time, the world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities.

Article link: https://www.gatesnotes.com/The-Age-of-AI-Has-Begun?WT.mc_id=20230321100000_Artificial-Intelligence_BG-LI_&WT.tsrc=BGLI

CX Primer: OPM’s Cavallo Maps Persona-Driven Website Overhaul – Meritalk

Posted by timmreardon on 03/21/2023
Posted in: Uncategorized.

The Office of Personnel Management (OPM) is driving forward on one of the Biden administration’s key technology agenda items – citizen service improvement – by reworking the agency’s main OPM.gov website so it makes more sense to the several different populations who rely on it the most.

Speaking this week during an event organized by GovExec, OPM Chief Information Officer (CIO) Guy Cavallo framed the overhaul project as one that hews to the needs of the various “personas” that need to interact with the agency.

Because OPM’s mission covers a lot of ground – from getting hired by the government, to human resources management, and through retirement – the agency has several big customer constituent groups that care about different things the agency provides.

Cavallo – who is one of the Federal government’s most vocal proponents of moving to cloud technologies – said that OPM is now in the midst of a two-year “sprint” to cloud services and away from maintaining its own legacy data centers and mainframe operations, with the goal of moving “as many of our applications to the cloud as we can in the next two years.”

“There’s not just one area I can focus on modernizing,” said Cavallo. “We’re modernizing across the board.”

Since OPM touches most Federal employees – but in some very different ways – the CIO said that “a lot needs to be done to improve the customer experience … a lot of our legacy technologies weren’t designed around making a friendly customer experience.”

On the website revamp front, Cavallo explained that “most government agencies find that their top contact with citizens is through their website.” Speaking of OPM’s website, he said, “if you’re an OPM employee, you can navigate it because it’s designed the way a lot of sites were originally designed – by departmental functions. So, if you don’t know what each department does, you won’t even know where to go look for something.”

“We’re totally going to change that approach,” he said. “We’re going to switch to user personas instead of being departmental focused.”

“People come to OPM for several reasons,” he said. “One is they’re currently a Federal employee, and they want to find out about pay raises, vacation time, maternity leave, anything like that – that if you’re not a Federal employee you’re not going to care about. Right now, you’d have to go to multiple places on the website to find that. So we will have a persona that you’re a Federal employee, and it will take you on that path.”

“We have citizens that are interested in finding a Federal job, and we will walk them through a persona that takes them and focuses on that,” he said.

Another persona envisioned by the new website will deal with personnel policies for the entire Federal government, including things like changes in hiring practices, Cavallo said.

“That’s going to be a radical change from ‘here’s all of our program offices, you figure out what you need to look for,’” he said. “You’ll be able to pick what type of persona you have and follow that path.”

“It will also be designed to work on a mobile phone versus trying to read our 100-page documents on a little screen of a mobile phone today,” he said. “That’s just one area that we’re improving.”
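
In implementation terms, the persona approach Cavallo describes amounts to mapping a visitor’s self-selected persona to a curated content path rather than a department tree. A minimal sketch, with entirely hypothetical persona names and paths:

```python
# Hypothetical persona-to-path routing for a persona-driven site redesign.
PERSONA_PATHS = {
    "federal_employee": ["/pay-leave", "/benefits", "/family-leave"],
    "job_seeker": ["/find-a-job", "/hiring-process", "/usajobs-help"],
    "hr_policy_lead": ["/policy-updates", "/hiring-authorities", "/workforce-data"],
}

def landing_path(persona: str) -> list[str]:
    """Guided path for a persona; unknown visitors fall back to every topic."""
    default = sorted(p for path in PERSONA_PATHS.values() for p in path)
    return PERSONA_PATHS.get(persona, default)

print(landing_path("federal_employee"))  # ['/pay-leave', '/benefits', '/family-leave']
```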

Ever the cloud advocate, Cavallo said the website will move to the cloud from its current on-premises hosting where, he said, “we don’t have that elasticity, or expandability, on premise that we will have with the cloud.”

In other moves to improve citizen service, Cavallo said that OPM just launched its first chatbot to start making it easier for people to find information on the agency’s website.

“We have a very limited set of questions there, but we’re currently looking at our call centers and seeing what type of questions we’re getting from citizens and from retirees, so we’ll continue to build out that technology,” he said.

Cavallo said that amid the agency’s push to the cloud, “we’re also really looking at the customer experience that we’re providing, and not just move to the cloud, but let’s make it more user friendly, easier to use, mobile friendly – things that weren’t designed into our mainframe systems many years ago.”

Article link: https://www.meritalk.com/articles/cx-primer-opms-cavallo-maps-persona-driven-website-overhaul/

The Business Case for Quantum Computing – MIT Sloan Management Review

Posted by timmreardon on 03/19/2023
Posted in: Uncategorized.

Quantum computers may deliver an economic advantage to business, even on tasks that classical computers can perform.

Francesco Bova, Avi Goldfarb, and Roger Melko March 06, 2023

Imagine that a pharmaceutical company was able to cut the research time for innovative drugs by an order of magnitude. It could expand its development pipeline, hit fewer dead ends, and bring cures and treatments to market much faster, to the benefit of millions of people around the world.

Or imagine that a logistics company could dynamically manage the routes for its fleet of thousands of trucks. It could not only take a mind-numbing range of variables into account and adjust quickly as opportunities or constraints arose; it could also get fresher products to store shelves faster and prevent tons of carbon emissions every year.

Quantum computing has the potential to transform these and many more visions into reality — which is why technology companies, private investors, and governments are investing billions of dollars in supporting ecosystems of quantum startups.1 Much of the quantum research community is focused on showing quantum advantage, which means that a quantum computer can perform a calculation, no matter how arbitrary, that is impossible on a classical, or binary, computer. (See “A Quantum Glossary.”) Running a calculation thousands of times faster could create enormous economic value if the calculation itself is useful to some stakeholder in the market.

A Quantum Glossary

Tech-fluent executives should be familiar with these basic quantum computing terms as they monitor the technology and consider potential applications in their own business domains:

Qubit: A qubit is a fundamental unit of quantum information, encoded in delicate physical properties of light or matter and manipulated to produce calculations in a quantum computer. It is analogous to a bit in a classical (binary) computer.

Fault-tolerant quantum computer: These general-purpose digital quantum computers will be able to engage with a broad range of problems with flexibility and reliability. Fault-tolerant computers are proven to have cases of quantum advantage, such as Shor’s algorithm. But they may be many years away from being realized at scale because of the complex error-correction protocols required for qubits.

Noisy: The quantum computers of today and the near term are noisy, similar to the AM/FM radios that existed long before their digital equivalents were possible. Quantum noise is a much more difficult problem to solve in delicate qubits than electronic and magnetic noise in conventional computer bits.

Quantum speedup: Speedup is a way to measure the relative performance of two computers solving the same problem. Quantum speedup is the improvement that the quantum computer has over a classical competitor in solving the problem. There are many ways to define and characterize speedup. One important metric is how it scales with increasing qubit numbers.

Quantum advantage: This occurs when quantum computing solves an “impossible” problem, or rather one that a classical computer cannot solve within a feasible or realistic amount of time. The clearest cases of quantum advantage are defined via an exponentially scaling quantum speedup.

Quantum economic advantage: This occurs when quantum computing solves an economically relevant problem either differently or significantly faster than a classical computer. Quantum economic advantage can occur in cases where quantum speedup is less than exponential — that is, when the scaling is quadratic or polynomial.

However, the expense of building quantum hardware, coupled with the steady improvement of classical computers, means that the commercial relevance of quantum computing won’t be apparent unless researchers and investors shift their focus to the pursuit of what we call quantum economic advantage. A business achieves a quantum economic advantage when a quantum computer provides a commercially relevant solution, even if only moderately faster than a classical computer could, or when a quantum computer provides viable solutions that differ from what a classical computer yields.

Why Quantum Economic Advantage Matters

In a dramatic first, a team at Google led by John Martinis made headlines around the world in 2019 when their quantum machine appeared to complete a calculation (cross-entropy benchmarking) in seconds that would take tens of thousands of years on a classical computer.2 Other companies have made similar claims of quantum advantage, including researchers from quantum computing startup Xanadu, which recently completed a well-defined task (Gaussian boson sampling) in under a second that would have taken the best classical supercomputer over 9,000 years.3

Hundreds of articles and many of the top minds in the field have focused on showing these kinds of quantum advantage.4 Their demonstrations represent important milestones in the development of quantum computers. But because they typically involve esoteric computations that might not be relevant to the kinds of problems many businesses need to solve, managers might assume that the technology isn’t yet useful and economically viable for businesses. We contend that quantum technology need not provide a quantum advantage to be economically useful, as long as it can still provide different or timelier outputs than its classical counterparts. Any quantum speedup provides an opportunity for quantum economic advantage. (See “Landscape of Quantum Economic Advantage.”)

Landscape of Quantum Economic Advantage

The diagram below shows computer algorithms and applications ranked by their potential for quantum speedup, and their estimated commercial value. Large exponential speedups to date have been demonstrated on applications with no commercial value. Applications with a moderate quantum speedup but high commercial value have a good chance of showing quantum economic advantage.

While quantum advantage will be directly useful in some cases, many of the most important uses of quantum computers will arise from providing cost-effectiveness and speed rather than from performing a calculation that is impossible on a classical computer. In other words, a quantum economic advantage may exist without a quantum advantage.5

Quantum speedups create an appetite for solutions to business challenges where the ability to solve complex problems extremely quickly would confer a powerful competitive advantage. Considerable effort in quantum computing is dedicated to searching for potential speedups for business problems, although evidence for a robust quantum speedup for commercially relevant problems has been elusive. Nonetheless, identifying and realizing such commercial potential provides a crucial incentive to build quantum computers and significantly influences their design.

Classical computing remains a valuable tool for solving complex problems; the pioneering AlexNet (deep learning) and AlphaFold (protein structure) systems are two examples with high commercial value. Quantum computing might be the better tool to solve a business problem if it provides the solution more quickly than a classical competitor. Running the same calculation on both quantum and classical computers may also provide two different answers, either of which might be better. When the commercial stakes are high enough, having both classical and quantum solutions can be useful, meaning the quantum solutions might still be commercially valuable.

The question of when existing or forthcoming improved quantum computers will generate substantial commercial opportunities is therefore of immediate interest, long before clear-cut quantum advantage will become obvious in future fault-tolerant machines.6

Opening Valuable Commercial Frontiers

Identifying the commercial potential of a quantum computer does not require an understanding of the quantum physics that undergird the technology. Instead, the focus should be on what quantum computers can do better, faster, or perhaps differently than classical computers and on the stakeholders who will see those outcomes as valuable. The visions from the pharmaceutical and logistics sectors that we shared at the start of this article illustrate the transformative effects that quantum computing could have. The examples we highlight below show the high value that quantum computing can already unlock within existing workflows.

Make better investment decisions faster. The finance industry faces many optimization problems. Finding the optimal trading trajectory, for example, involves determining the best trading strategy for an investment portfolio over a specific period. Portfolio optimization is a valuable problem to solve, given that over $100 trillion in assets are under management globally.7 Even small improvements in optimization techniques are valuable because of the absolute level of assets invested.

In some cases, determining the optimal investment strategy requires a search through all possible trading trajectories. That effort grows exponentially more challenging as the number of possible securities in the portfolio and the number of opportunities to change the portfolio increase. Recent work has attempted to tackle a portfolio optimization problem that has 10^1,300 possible trading strategies — a quantity far in excess of the number of atoms in the visible universe (10^80).8

Quantum machines may be able to help.9 Researchers at the quantum software company Multiverse Computing compared six different methods for solving the optimal trading trajectory problem. Only two of the six optimization methods provided a solution for the most complex version of the problem assessed, and the classical solution took almost 700 times longer to generate than the quantum solution. The quantum tools also yielded a different solution — one with higher profits but lower risk-adjusted returns — than the one suggested by strictly classical techniques.

These outcomes are not an example of quantum advantage, because it remains possible that an exhaustive assessment of all possible approaches to solving this problem might generate solutions that are the same as or better than the quantum approaches the researchers at Multiverse used. But it may nevertheless represent an example of quantum economic advantage, because generating a solution quickly is valuable. (See “Gaining an Edge When Time Is Money.”)

Gaining an Edge When Time Is Money

Imagine that a value-at-risk calculation could be completed in seconds as opposed to hours. How would that change an investment decision in real time? The financial advantage of informing investment decisions in near real time is a straightforward example of potential quantum economic advantage in a high-stakes application.

Quantum speedups would be valuable for a wide range of applications in the financial sector. Banks often use value-at-risk calculations to estimate the likelihood and size of potential losses.i These calculations can involve a tool called Monte Carlo simulation to run a large number of scenarios with numerous factors to model explicitly. “Quantum” Monte Carlo may help speed up these processes, which can otherwise be time-consuming.ii This may be particularly important in times of extreme market volatility.

Monte Carlo simulation is also used in the pricing of complex financial derivatives. Trillions of dollars in derivative contracts are traded every year for a variety of purposes, including hedging risk and enabling speculation. Most of these trades are for straightforward contracts whose pricing can be determined dynamically with easily calculated formulas. But pricing for more complex derivatives often requires a simulation that can take minutes, hours, or even days on a classical computer. A quantum speedup could enable faster calculations and open new growth opportunities in a market that is already worth billions of dollars.
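
To make that concrete, here is a minimal classical Monte Carlo value-at-risk sketch with assumed portfolio figures. Quantum amplitude estimation is expected to reach comparable accuracy with quadratically fewer samples, which is where the speedup described above would come from.

```python
# Classical Monte Carlo 99% one-day value-at-risk; all figures are assumptions.
import numpy as np

rng = np.random.default_rng(42)
portfolio_value = 1_000_000             # dollars under management (assumed)
daily_mu, daily_sigma = 0.0002, 0.012   # assumed daily return mean / volatility

returns = rng.normal(daily_mu, daily_sigma, size=1_000_000)  # simulated scenarios
losses = -portfolio_value * returns
var_99 = np.percentile(losses, 99)      # loss exceeded on only 1% of days
print(f"99% 1-day VaR: ${var_99:,.0f}")
```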

Determining which of the two solutions is better — quantum or classical — remains a challenge. While financial market stakeholders currently don’t have access to fully fault-tolerant quantum computers, they also typically don’t have easy access to the classical supercomputers that are used to benchmark quantum advantage. Thus, comparing current quantum approaches to classical optimizers that are sold commercially still provides a valuable benchmark for the efficacy of a quantum approach. The best classical supercomputers might still generate a similar or superior solution in a timely manner — that is, in a time span that is less than one human lifetime.

Solve seemingly unsolvable business trade-offs. Almost any business problem that involves complex trade-offs — from day-to-day planning to long-term strategic decisions — could be well suited for quantum computers. Think of retailers that are assessing where to place certain products in their stores to maximize revenue, or educators trying to assess which questions to ask and in which order to maximize learning. These trade-off challenges are known as combinatorial optimization problems. The creation of the best-tasting recipe at a restaurant is also a combinatorial optimization, as are the logistics challenges we described at the beginning of this article. Even modest improvements can have a major impact on a company’s profitability.

Business leaders often rely on human intuition to solve these optimization problems. As businesses grow, they may come to rely on computing power to identify the best solutions. For the most complex of these problems, even today’s most powerful computers can provide only an approximation. Quantum computers could, however, perform a search through all possible combinations of arrangements or sequences to find the best solution, providing a moderate speedup over comparable classical searches.10
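
A toy version shows why such searches overwhelm classical machines as they scale, and what a quadratic quantum speedup would buy. The brute-force sketch below scores every way to promote 3 of 20 products under an invented revenue model; with 3 of 20 there are only 1,140 candidates, but the count is a binomial coefficient that explodes as the catalog grows.

```python
# Brute-force combinatorial optimization over an invented retail revenue model.
from itertools import combinations
import random

random.seed(0)
revenue = {i: random.uniform(1.0, 10.0) for i in range(20)}   # standalone revenue
synergy = {(i, j): random.uniform(-1.0, 2.0)                  # pairwise interactions
           for i in range(20) for j in range(i + 1, 20)}

def score(bundle):
    pair_terms = sum(synergy[pair] for pair in combinations(sorted(bundle), 2))
    return sum(revenue[i] for i in bundle) + pair_terms

best = max(combinations(range(20), 3), key=score)  # exhaustive: C(20, 3) = 1,140
print(best, round(score(best), 2))
```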

That’s where they can have a wide range of applications in almost any business sector. For instance, identifying the reasons for machine failure when failure rates are low is a challenging combinatorial optimization problem in advanced manufacturing.11 Finding the cause of failure quickly is important, because downtime can be costly. If quantum computing can speed up the process of determining why a manufacturing process failed, it could be valuable even in settings where classical approaches could eventually find the same reason for failure.

Discover better materials. The timeliness and quality of quantum computing solutions should also improve the efficiency of R&D processes that lead to new materials and medicines, because they reduce the cost and quicken the pace of discovery relative to classical techniques.

In materials design, computers already aim to simulate the complex behavior of constituent atoms and molecules and reliably predict the structure-property relationships of molecules. In typical applications, however, classical computers face significant limits on the size of molecules they can simulate. Even simulations involving the smallest molecules are computationally intensive, and the addition of even one atom or electron can make a classical simulation drastically slower. This renders many avenues of computer-aided design unavailable for the larger molecules that are of interest to the pharmaceutical, chemical, and materials industries.

Greater computing speed confers the ability to simulate larger and more complex molecules in a practically useful time frame, and quantum computers are poised to make a significant impact in this area.12 They are believed to be able to provide speedups for the calculations required to predict the electronic structure of atoms and molecules, although the precise nature of these speedups is currently a matter of intense debate within the scientific community.13
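
The scaling wall is easy to quantify. An exact classical simulation of n two-level degrees of freedom (spins, orbitals) must store 2^n complex amplitudes, so each addition doubles the memory; the back-of-envelope below assumes 16 bytes per amplitude.

```python
# Memory for an exact state-vector simulation doubles with every added orbital.
for n in (20, 30, 40, 50):
    gib = (2 ** n * 16) / 2 ** 30   # 16 bytes per complex amplitude
    print(f"{n} orbitals -> {gib:,.2f} GiB")
# 20 -> 0.02 GiB; 30 -> 16 GiB; 40 -> 16,384 GiB; 50 -> ~16.8 million GiB
```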

The Enduring Value of Quantum Advantage

Recent claims of quantum advantage may have no commercial application — and thus no quantum economic advantage — but they are nonetheless important because they establish the possibility that quantum computers can perform certain tasks that classical computers cannot.

The application of Shor’s algorithm is perhaps the most frequently referenced example of how quantum advantage might affect society. American mathematician Peter Shor, who recently won science’s most lucrative prize, the Breakthrough Prize in Fundamental Physics, demonstrated that a quantum computer could factor a large integer in cases where classical computers could not.14 A sufficiently large quantum computer could factor these larger integers in days or less, whereas a classical computer might need more time than it would take for the sun to run out of hydrogen.15

While this might sound abstract, the difficulty that classical computers have in factoring very large numbers is actually what enables modern encryption. The relative ease with which quantum computers could theoretically perform the calculations involved in decryption provides an example of where we expect clear-cut quantum advantage to exist. If a quantum computer could implement Shor’s algorithm, then much of the information encrypted in the past could be decoded — and that includes a great deal of encrypted data that has been stolen from organizations in cyberattacks.
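
The structure of Shor’s algorithm can be sketched with classical code. Factoring N reduces to finding the multiplicative order r of some a modulo N; the brute-force loop below does that step by trial, and it is precisely this loop — infeasible classically at cryptographic sizes — that a quantum computer performs efficiently.

```python
# Toy classical analogue of Shor's reduction: factor N via order finding.
from math import gcd

def factor_via_order(N, a=2):
    if gcd(a, N) > 1:
        return gcd(a, N)           # lucky: a already shares a factor with N
    r, x = 1, a % N
    while x != 1:                  # find the order r of a mod N -- the step
        x = (x * a) % N            # a quantum computer does exponentially faster
        r += 1
    if r % 2:
        return None                # odd order: retry with a different a
    p = gcd(pow(a, r // 2, N) - 1, N)
    return p if 1 < p < N else None

print(factor_via_order(15))        # 3, since 2^4 = 16 = 1 (mod 15)
```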

This threat is remote right now, because the quantum community is years away from building fault-tolerant quantum computers large enough to use Shor’s algorithm to break codes. But it warrants the attention of managers in almost all industries, who will eventually make up a large market for new, quantum-robust encryption standards. Here, quantum computers’ ability to generate random numbers can play a role in supporting more advanced cybersecurity defenses.

Quantum computers are probabilistic, which means they can generate truly random numbers.16 Their classical counterparts, in contrast, are deterministic and thus can generate only pseudo-random numbers. However, even when a quantum computer can do something with a clear practical application that a classical computer cannot do, business leaders must still weigh the trade-offs. In some cases, the classical approach may be deemed good enough, limiting the organization’s incentives to switch to quantum. This may be the case with some other potential applications for quantum random number generators (RNGs), such as in the lottery and casino gambling sectors.17
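The difference is easy to demonstrate. In this minimal sketch (ours), a seeded classical generator reproduces a "random" lottery-style draw exactly, which is precisely what a quantum RNG cannot be made to do:

```python
# A classical PRNG is deterministic: the same seed yields the same stream.
import random

random.seed(42)
first_draw = sorted(random.sample(range(1, 50), 6))
random.seed(42)
second_draw = sorted(random.sample(range(1, 50), 6))

assert first_draw == second_draw  # identical: knowing the seed means knowing the draw
print(first_draw)
# A quantum RNG has no seed or internal state to recover; each measurement
# outcome is irreducibly random.
```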

Lotteries select winning numbers either electronically or physically, such as by drawing numbered balls from bins. The resulting numbers are not truly random. Because the process is deterministic, it would be possible for someone to predict the numbers if they knew something about the underlying process that generated them, or to manipulate the process to generate certain numbers.

While such manipulations have indeed occurred, lotteries will probably not abandon their current processes anytime soon, despite what appears to be a clear argument for using quantum computing. First, lottery fraud through manipulation of the number-generating process is rare, perhaps due to the significant costs of getting caught, strict rules against insiders participating in lotteries, and the existence of powerful analytical tools to detect fraud. Second, in the absence of fraud, pseudo-RNGs yield results that are often indistinguishable from their quantum counterparts. While it may be theoretically possible to predict the outcome of a lottery before drawing balls from multiple bins, it is practically infeasible to do so. Lotteries thus have little incentive to switch to quantum machines. Managers confronting similar cases where quantum computing provides a capability that is entirely lacking in classical computing will likewise need to weigh the relative costs and benefits of the optimal versus the good-enough solution.

Preparing for Quantum Computing in Business

To fulfill its promise and create new value and new commercial opportunities, a quantum machine does not have to accomplish a currently impossible task. It only needs to accomplish something useful. That time is coming, as billions of dollars in investments from venture capitalists, major tech companies, and national governments fuel rapid improvements in quantum computers that will make them more efficient than classical ones.

The consensus among most government and industry players seems to be that large-scale fault-tolerant quantum computers almost certainly won’t appear before the end of this decade. Although it may take years for commercially relevant quantum computers to exist at scale, business leaders can already take several steps to prepare their businesses for this era.

Make a list of your “If we could only …” or “What if …?” challenges. Most businesses have such daunting challenges but rarely address them because they are too resource-intensive and those resources have better short-term uses. The speedups and alternative solutions from quantum computing can make the transformative solutions to these problems feasible. What elements of your business are constrained by combinatorial optimization? And how much would a solution be worth to you?

Help your organization become quantum ready. We anticipate that the impact and scale of commercial applications will accelerate rapidly once fully fault-tolerant quantum computers emerge. Organizations have several ways to prepare themselves. Companies with a higher likelihood of lucrative applications — financial services companies, pharmaceutical manufacturers, and makers of specialty materials — can invest in hardware and software and develop a network of experts. Other organizations can familiarize themselves with the basics of quantum computing, connect with academics, and start to train team members.

Start experimenting now. Companies can already allocate a portion of their R&D resources to experimenting with near-term quantum hardware. They can set up problems in ways the computers can understand, even if existing hardware does not yet allow them to capitalize on those opportunities. These investments are important to the ongoing development of the technology: Quantum computing will not scale through academic research alone.

The startups we have worked with in the Creative Destruction Lab at the University of Toronto’s Rotman School of Management have already achieved short-term benefits from experimenting with quantum — for example, by creating innovations that have led to new materials. The long-term benefit from experimenting with quantum computing today is that a company will be prepared when sufficiently coherent, fault-tolerant quantum computers exist at scale. Those organizations will have a considerable first-mover advantage and be well positioned to capture new opportunities as this emerging technology comes to market.

Francesco Bova is an associate professor at the Rotman School of Management at the University of Toronto. Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare at the Rotman School of Management. Roger Melko is a professor in the Department of Physics & Astronomy at the University of Waterloo and an associate faculty member at the Perimeter Institute for Theoretical Physics.

Reprint 64319.

Copyright © Massachusetts Institute of Technology, 2023. All rights reserved.

1. C. Metinko, “Quantum Technology Gains Momentum as Computing Gets Closer to Reality,” Crunchbase, May 13, 2022, http://news.crunchbase.com; “What America’s Largest Technology Firms Are Investing In,” The Economist, Jan. 22, 2022, http://www.economist.com; and M. Aboy, T. Minssen, and M. Kop, “Mapping the Patent Landscape of Quantum Technologies: Patenting Trends, Innovation, and Policy Implications,” International Review of Intellectual Property and Competition Law 53, no. 10 (November 2022): 853-882.

2. F. Arute, K. Arya, R. Babbush, et al., “Quantum Supremacy Using a Programmable Superconducting Processor,” Nature 574, no. 7779 (Oct. 24, 2019): 505-510.

3. L.S. Madsen, F. Laudenbach, M.F. Askarani, et al., “Quantum Computational Advantage With a Programmable Photonic Processor,” Nature 606, no. 7912 (June 2, 2022): 75-81.

4. A.K. Fedorov, N. Gisin, S.M. Beloussov, et al., “Quantum Computing at the Quantum Advantage Threshold: A Down-to-Business Review” (preprint, submitted in March 2022), https://arxiv.org; L. Mueck, C. Palacios-Berraquero, and D.M. Persaud, “Towards a Quantum Advantage,” Physics World, Feb. 5, 2020, https://physicsworld.com; and S. Chen, “Quantum Advantage Showdowns Have No Clear Winners,” Wired, July 11, 2022, http://www.wired.com.

5. F. Bova, A. Goldfarb, and R.G. Melko, “Quantum Economic Advantage,” Management Science, Articles in Advance, published online Dec. 2, 2022.

6. “Near term” in this context refers to the quantum computers of the next few years, which will be better than today’s but still not fully fault tolerant.

7. “Value of Assets Under Management Worldwide in Selected Years From 2003 to 2021,” Statista, July 8, 2022, http://www.statista.com.

8. “Wall Street’s Latest Shiny New Thing: Quantum Computing,” The Economist, Dec. 19, 2020, http://www.economist.com.

9. S. Mugel, C. Kuchkovsky, E. Sanchez, et al., “Dynamic Portfolio Optimization With Real Datasets Using Quantum Processors and Quantum-Inspired Tensor Networks,” Physical Review Research 4, no. 1 (January 2022): 1-12.

10. L.K. Grover, “A Fast Quantum Mechanical Algorithm for Database Search,” in “STOC ’96: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing” (Philadelphia: Association for Computing Machinery, 1996).

11. F. Bova, A. Goldfarb, and R.G. Melko, “Commercial Applications of Quantum Computing,” EPJ Quantum Technology 8 (2021): 1-13.

12. S.N. Genin, I.G. Ryabinkin, N.R. Paisley, et al., “Estimating Phosphorescent Emission Energies in Ir(III) Complexes Using Large-Scale Quantum Computing Simulations” (preprint, submitted in November 2021), https://arxiv.org. Since atoms and electrons themselves are quantum objects, they are natural candidates for study by quantum computers. In fact, such “simulation” (or emulation) was the first value proposition made for a quantum computer by California Institute of Technology physicist Richard Feynman in 1982.

13. S. Lee, J. Lee, H. Zhai, et al., “Is There Evidence for Exponential Quantum Advantage in Quantum Chemistry?” (preprint, submitted in August 2022), https://arxiv.org.

14. P.W. Shor, “Algorithms for Quantum Computation: Discrete Logarithms and Factoring,” in “Proceedings 35th Annual Symposium on Foundations of Computer Science” (Santa Fe, New Mexico: IEEE, 1994).

15. R. Van Meter and D. Horsman, “A Blueprint for Building a Quantum Computer,” Communications of the ACM 56, no. 10 (October 2013): 84-93.

16. It should be noted that quantum computers are not the only quantum technology that can produce true random numbers.

17. J. Melia, “The Odds Are in Quantum Security’s Favor,” QuintessenceLabs, March 2, 2017, http://www.quintessencelabs.com; and “Rigged Random Number Generators,” QuintessenceLabs, May 27, 2016, http://www.quintessencelabs.com.

i. Bova, Goldfarb, and Melko, “Commercial Applications of Quantum Computing.”

ii. D. Layden, G. Mazzola, R.V. Mishmash, et al., “Quantum-Enhanced Markov Chain Monte Carlo” (preprint, submitted in March 2022), https://arxiv.org; and S. Chakrabarti, J. Krishnakumar, G. Mazzola, et al., “A Threshold for Quantum Advantage in Derivative Pricing” (preprint, submitted in December 2020), https://arxiv.org.

Article link: https://sloanreview.mit.edu/article/the-business-case-for-quantum-computing/

Virtual health for all: Closing the digital divide to expand access – McKinsey

Posted by timmreardon on 03/17/2023
Posted in: Uncategorized.

March 16, 2023 | Article

By Fadesola Adetosoye, Danielle Hinton, Gayatri Shenai, and Ethan Thayumanavan

Affordable broadband with wraparound support could expand access to cost-efficient virtual health for underserved communities. Seven actions could help state and local leaders unlock this opportunity.


Listen to Article: https://on.soundcloud.com/QAzAs

Article link: https://www.mckinsey.com/industries/public-and-social-sector/our-insights/virtual-health-for-all-closing-the-digital-divide-to-expand-access?

Enhancing Care Delivery – FEHRM

Posted by timmreardon on 03/16/2023
Posted in: Uncategorized.

An electronic health record (EHR) is software used to securely document, store, retrieve, share, and analyze information about individual patient care; in effect, it is a digital version of your health record. The Federal Electronic Health Record Modernization (FEHRM) office, along with other federal agencies, is implementing a single, common federal EHR to enhance patient care and provider effectiveness.

A federal EHR means that, as a provider, you will be able to access patient information entered into the EHR by different doctors, pharmacies, labs, and other points of care throughout your patient’s care. There is broad recognition that the federal EHR saves providers time and enables more standard workflows that support enhanced clinical decision-making and patient safety.

This means you benefit:

  • Your patients spend less time repeating their health history, undergoing duplicative tests and managing printed health records.
  • You have access to patient data such as service treatment records, Service medals and honors, housing status and other information to ensure patients receive all their earned benefits as they transition to civilian life in a seamless, timely fashion.
  • You can start conversations more productively with your patients since you already have a more comprehensive picture of their health before their appointments.
  • You will make more informed decisions about your patient’s care as you have access to more relevant data.
  • You will have a more seamless care experience. Regardless of whether your patient gets care from the Department of Defense, Department of Veterans Affairs, Department of Homeland Security’s U.S. Coast Guard or another health care system participating in the joint health information exchange, you will be able to easily access and exchange patient health information to enhance quality of care and satisfaction.

Learn more about how the federal EHR is enhancing delivery of care for providers like you.

Article link: https://www.fehrm.gov/enhancing-care-delivery

US gets an ‘F’ for erratic, unfocused funding of defense innovation, says Reagan Foundation – Breaking Defense

Posted by timmreardon on 03/15/2023
Posted in: Uncategorized.

By Sydney J. Freedberg Jr. on March 14, 2023 at 4:55 PM

WASHINGTON — Despite spending over $800 billion a year, the Defense Department chronically under-invests in cutting-edge technology and struggles to leverage the even greater investments being made by private-sector innovators in fields from AI and networks to space and biotech.

That’s the verdict of a new report by the Reagan Foundation and Institute, released Monday. But there’s good news: Both in the report and at a conference the foundation hosted Tuesday, experts from the Pentagon, Capitol Hill, and industry were full of ideas for improvements.

The central insight from the report and morning panels: For all the reforms of recent years, the US government as a whole — not just the Defense Department — is still an unappealing customer for the most innovative firms in the economy. Even a modest government investment in a nascent, promising technology at a critical, early phase could encourage matching or even greater investment from the private sector, but the government isn’t good at making those targeted investments. So Congress needs to provide more stable, flexible funding, freed from the restrictions and delays of a frequently gridlocked appropriations process — and bureaucrats need the courage to use the authorities Congress has already granted.

“What happens many times is those tools are there, [but] they don’t get used,” said Rep. Rob Wittman, R-Va., chairman of the tactical land and air forces subcommittee. “There are OTAs and MTAs — Other Transaction Authorities and Mid-Tier [Acquisition] — that aren’t used enough.”

“The problem is sometimes, too, Congress says, ‘whoa, whoa, wait a minute’… because we like to have the control,” Wittman continued. That forces Pentagon programs to slow down and wait for a budget cycle that takes a year to grind through Congress, on top of years of planning and infighting at the Pentagon.

RELATED: Oversight or overkill? DoD faces new congressional order to detail Mid-Tier Acquisitions

“It’s the culture, it’s the incentives,” emphasized retired House Armed Services Committee chairman Mac Thornberry, who advised on the scorecard. “It’s not just the Pentagon. It’s Congress too.”

Structured like a report card, the Reagan report grades the government in 10 major areas and dozens of subcategories, all related to what it calls the “National Security Innovation Base” — a construct carefully crafted to include innovative companies, laboratories, and universities outside the traditional defense industry. The scorecard is actually somewhat sanguine about America’s overall technological leadership in the world, giving it an A- despite a surge of patents, PhDs, and tech investment in China. But it all goes downhill from there.

Along with a bunch of Bs and Cs, the Reagan scorecard gave the country as a whole a D+ when it comes to fostering top-flight technical talent — a complex issue tied to education and immigration policies like the H-1B visa. Still worse, it gave the US a blunt D in what it calls “customer clarity”: In essence, how well the government connects to industry, not just through issuing policy statements and publishing strategies, but, most importantly, by providing tangible funding that gives innovators an incentive to work on tech the military wants.

A section of the Reagan Institute’s “scorecard” (Reagan Institute graphic)

That D for being a lousy customer breaks down into three subcategories, the scorecard continues. The government gets a C+ for patchy communication of policy, hampered by many agencies’ slowness in issuing public strategies and Congress’s slowness to confirm key officials, leaving key leadership slots vacant. The government gets a C- for making only limited use of streamlined acquisition processes that bypass the sclerotic traditional system. And it gets an F — the only failing grade in the entire report card — for providing erratic and ill-focused funding, with Congress routinely passing appropriations weeks or months late and frequently plussing-up traditional programs instead of new technology.

“The big companies have this problem with this inconsistent demand signal from government,” making it hard to plan long-term investments, said Eric Fanning, former Army Secretary (and Acting Air Force Secretary) who’s now CEO of the Aerospace Industries Association. “The smaller companies have all sorts of other barriers,” especially with the administrative overhead necessary to meet government requirements. And, he said, innovators whose technology has wide commercial applications — like drones or artificial intelligence — can afford to give up on government sales if the bureaucracy proves too difficult a customer.

Being a good customer is crucial, agreed Sreenivas Ramaswamy, a senior advisor at the Commerce Department. “I’ve been in this role for two years,” he said, “and one of the things I’ve seen … is how much people underestimate the role of the Defense Department in issuing the demand signal.”

A Government Solution

So how to fix this? The government needs to make “bigger, bolder, more flexible bets” on emerging, high-potential technologies, the report recommends, and it needs to “foster stronger relationships” with the private-sector investors that drive most innovation in the modern economy, using government funding as seed money to draw larger private investments.

This does not require massive government purchases to build up entire industries, Ramaswamy emphasized. Instead, he argued, the government can help kickstart new sectors by buying a relatively small amount of new technology early on, when the price is still too high for most commercial applications.

But encouraging and exploiting private-sector investment is not something the government usually thinks about. “It wasn’t until I became chairman that I became focused on outside investment and talked to investors,” said Thornberry, “[because] we don’t vote on that.”

“Government… as a whole does not understand the private capital that is leveraged by the Department of Defense,” said Fanning. A modest Pentagon purchase can incentivize the private sector to make its own investments in the hope of landing future sales — if those investors feel they can rely on that future government funding to materialize.

So what mechanisms would work for this kind of funding? Congress and the Pentagon have enacted significant reforms, like Mid-Tier Acquisition — although there’s been recent pushback about MTA on Capitol Hill — and the Software Pathway. But for both fiscal and cultural reasons, the experts said at today’s conference, it’s been a struggle to scale these up to have a major impact.

“The government is not an easy customer to move from prototype … to production,” said Fanning, “especially if you’ve developed it yourself” outside the government’s R&D pipelines.

“Starting prototype programs has never been easier. This is a good thing,” said assistant Army secretary Douglas Bush, the service’s chief acquisition officer. “[But] it’s still difficult to just carve out the money to take those prototypes and put them into production because again, when you get to that stage, you’re now competing with everything else, fighting for those limited production dollars… It’s a shark tank.”

“Congress often does some of the hard work of pushing innovations and innovative solutions on the Department,” Bush added. “That’s good, [but] we need to do a better job on that on our own.”

“There’s no question, there is a cultural challenge [about] being able to take a risk,” Thornberry said. But often, he said, Congress reinforces bureaucratic over-caution by hammering officials who tried something new that didn’t work out, or by imposing strict controls up front.

“We need to do our funding a little differently,” he said, “so that there is a pool of money for a particular purpose” — for example, artificial intelligence — that isn’t tied to a particular program, so Pentagon officials can rapidly respond to emerging opportunities in fast-paced tech sectors.

That way, “you can move fast [without] asking ‘Mother may I?’ two years in advance,” Thornberry said. “There’s got to be parameters around it, but I’d really like to give Doug [Bush] some pots of money for a purpose and then let him make the adjustments as the technology is developed.”

“That would be amazing,” Bush said. The audience burst into laughter.

Article link: https://breakingdefense-com.cdn.ampproject.org/c/s/breakingdefense.com/2023/03/us-gets-an-f-for-erratic-unfocused-funding-of-defense-innovation-says-reagan-foundation/?

Helping patients protect their electronic health information – Oracle Health

Posted by timmreardon on 03/15/2023
Posted in: Uncategorized.

As our industry continues to move toward more people-focused care, we’re driven to improve patient engagement and enhance outcomes to create a better healthcare experience for everyone. By making it more convenient for people to access their health data wherever and whenever they need it, we can help empower individuals and clinicians to make better health and well-being decisions.

“We imagine a world where individuals are empowered and have their healthcare data at their fingertips,” says Rob Helton, vice president of platform development, Oracle Health. “They have ownership of their data and are able to review and add to it. They can choose to share it, and who it gets shared to. We believe by putting patients in control of their data, they’ll become active participants in managing their health.”

The US federal government is continuing its push to place patients at the center of their own care, including ensuring that patients have access to all of their electronic health information (EHI). The 21st Century Cures Act Information Blocking Rule, among other things, ensures patients have electronic access to their EHI in the manner they choose, a right originating in HIPAA’s individual right of access.

Additionally, the federal government is setting minimum standards for health IT (HIT) whose developers choose to participate in the HIT certification program under the Office of the National Coordinator for HIT. While most healthcare organizations are striving to comply, some struggle to stay on top of the extra work needed to realize the full potential of interoperability. Fortunately, technology can help streamline the process, reducing the regulatory burden on providers and making it easier for people to securely access their data.

Automating patient health data access

Historically, for a patient to use a health app of their choosing, a health system would have to review and approve the app beforehand. With Oracle Health’s newly released functionality, called consumer automatic provision, this process is streamlined, giving patients easier access to their health data.

Consumer automatic provision allows consumer-facing health apps to be immediately provisioned for FHIR API endpoints upon registration. From there, health data stored within the electronic health record (EHR) system can be retrieved by the app upon explicit authorization from the patient or their authorized representative. Patients maintain full control over authorizing an app to access any of their EHI or only certain data types.
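For readers unfamiliar with the plumbing, the sketch below shows the general SMART on FHIR pattern a consumer app follows once a patient grants authorization. It is ours, not Oracle Health’s code: the endpoint URL, token, and patient ID are hypothetical placeholders, and a real app would obtain the token through an OAuth 2.0 authorization-code flow in which the patient approves or restricts scopes.

```python
# Hedged sketch of a consumer app reading a FHIR resource after patient consent.
import requests

FHIR_BASE = "https://fhir.example-ehr.com/r4"        # hypothetical R4 endpoint
ACCESS_TOKEN = "token-issued-after-patient-consent"  # placeholder

def get_patient(patient_id: str) -> dict:
    """Fetch a Patient resource; the server enforces the scopes the patient granted."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=10,
    )
    resp.raise_for_status()  # 401/403 here means consent was absent or revoked
    return resp.json()
```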

Advancing meaningful health insights and maintaining regulatory compliance

When people can use apps on their smartphones to measure and record critical information like physical activity, weight, sleep, heartbeat, and blood sugar level or track their medicine to ensure they don’t miss a dose, they can make better decisions about their health.

In addition to the clinical benefits, consumer automatic provision aligns with several regulatory requirements that affect healthcare providers, including rules governing the Medicare Promoting Interoperability program, health IT certification, HIPAA privacy, and information blocking and sharing.

The bottom line of these regulations is that patients have the right to access their information electronically for any purpose they choose, and health IT developers and healthcare providers should facilitate that right. Automatically provisioning successfully registered consumer apps for your FHIR API endpoint helps patients access their health information through any app of their choice, in a timely fashion, without additional action by your organization.

Protecting patient health data

Protections are built into the FHIR APIs to help keep health data private and secure, but there are potential risks involved with authorizing the disclosure of EHI to a third-party app. While these risks are largely mitigated by protections built into the API connections and access, we all have a responsibility to help educate patients on how to keep their data safe. Although the constraints and protections that Oracle Health can put in place under the 21st Century Cures Act regulations are limited, here are the actions we’re taking to further protect consumer privacy and make the health app ecosystem more reliable and secure for consumers.

  • Employing industry-standard privacy and security protections as a built-in feature of the APIs
  • Developing enhancements to provide consumers with details of an app’s posture in relation to certain security-related best practices as part of the authorization user interface
  • Instructing developers connected with Oracle Health APIs to align with industry best practices for the safe and ethical handling of EHI communicated by the CARIN Alliance
  • Implementing patient-directed scope selection and redaction in the consumer app authorization workflow

The CARIN Alliance provides useful tips for consumers including:

  • Only download from trusted app stores
  • Always review app privacy terms and conditions
  • Know the app developer
  • Find out if the app is sharing or selling your data
  • Use strong and unique credentials

View the full list of recommendations.

Streamlining patients’ ability to access personal health data through an app of their choice is an important part of prioritizing the human experience in healthcare. Consumer automatic provision is one step of several that we’re taking to help patients control their own health and care, enhance clinical decision-making, and improve regulatory compliance for greater interoperability.

For more information, check out our resources on information blocking.

Related content:

  • How an open EHR advances usability for healthcare providers
  • The prior authorization process: What makes it painful and how to improve it
  • HIPAA violations and consequences: How the recent updates impact the healthcare industry

Article link: https://blogs.oracle.com/healthcare/post/helping-patients-protect-their-electronic-health-information?
