This story is a part of MIT Technology Review’s What’s Next series, where we look across industries, trends, and technologies to give you a first look at the future
In 2023, progress in quantum computing will be defined less by big hardware announcements than by researchers consolidating years of hard work, getting chips to talk to one another, and shifting away from trying to make do with noise as the field gets ever more international in scope.
For years, quantum computing’s news cycle was dominated by headlines about record-setting systems. Researchers at Google and IBM have had spats over who achieved what—and whether it was worth the effort. But the time for arguing over who’s got the biggest processor seems to have passed: firms are heads-down and preparing for life in the real world. Suddenly, everyone is behaving like grown-ups.
As if to emphasize how much researchers want to get off the hype train, IBM is expected to announce a processor in 2023 that bucks the trend of putting ever more quantum bits, or “qubits,” into play. Qubits, the processing units of quantum computers, can be built from a variety of technologies, including superconducting circuitry, trapped ions, and photons, the quantum particles of light.
IBM has long pursued superconducting qubits, and over the years the company has been making steady progress in increasing the number it can pack on a chip. In 2021, for example, IBM unveiled one with a record-breaking 127 of them. In November, it debuted its 433-qubit Osprey processor, and the company aims to release a 1,121-qubit processor called Condor in 2023.
But this year IBM is also expected to debut its Heron processor, which will have just 133 qubits. It might look like a backwards step, but as the company is keen to point out, Heron’s qubits will be of the highest quality. And, crucially, each chip will be able to connect directly to other Heron processors, heralding a shift from single quantum computing chips toward “modular” quantum computers built from multiple processors connected together—a move that is expected to help quantum computers scale up significantly.
Heron is a signal of larger shifts in the quantum computing industry. Thanks to some recent breakthroughs, aggressive roadmapping, and high levels of funding, we may see general-purpose quantum computers earlier than many would have anticipated just a few years ago, some experts suggest. “Overall, things are certainly progressing at a rapid pace,” says Michele Mosca, deputy director of the Institute for Quantum Computing at the University of Waterloo.
Here are a few areas where experts expect to see progress.
Stringing quantum computers together
IBM’s Heron project is just a first step into the world of modular quantum computing. The chips will be connected with conventional electronics, so they will not be able to maintain the “quantumness” of information as it moves from processor to processor. But the hope is that such chips, ultimately linked together with quantum-friendly fiber-optic or microwave connections, will open the path toward distributed, large-scale quantum computers with as many as a million connected qubits. That may be how many are needed to run useful, error-corrected quantum algorithms. “We need technologies that scale both in size and in cost, so modularity is key,” says Jerry Chow, director at IBM Quantum Hardware System Development.
Other companies are beginning similar experiments. “Connecting stuff together is suddenly a big theme,” says Peter Shadbolt, chief scientific officer of PsiQuantum, which uses photons as its qubits. PsiQuantum is putting the finishing touches on a silicon-based modular chip. Shadbolt says the last piece it requires—an extremely fast, low-loss optical switch—will be fully demonstrated by the end of 2023. “That gives us a feature-complete chip,” he says. Then warehouse-scale construction can begin: “We’ll take all of the silicon chips that we’re making and assemble them together in what is going to be a building-scale, high-performance computer-like system.”
The desire to shuttle qubits among processors means that a somewhat neglected quantum technology will now come to the fore, according to Jack Hidary, CEO of SandboxAQ, a quantum technology company that was spun out of Alphabet last year. Quantum communications, where coherent qubits are transferred over distances as large as hundreds of kilometers, will be an essential part of the quantum computing story in 2023, he says.
“The only pathway to scale quantum computing is to create modules of a few thousand qubits and start linking them to get coherent linkage,” Hidary told MIT Technology Review. “That could be in the same room, but it could also be across campus, or across cities. We know the power of distributed computing from the classical world, but for quantum, we have to have coherent links: either a fiber-optic network with quantum repeaters, or some fiber that goes to a ground station and a satellite network.”
Many of these communication components have been demonstrated in recent years. In 2017, for example, China’s Micius satellite showed that coherent quantum communications could be accomplished between nodes separated by 1,200 kilometers. And in March 2022, an international group of academic and industrial researchers demonstrated a quantum repeater that effectively relayed quantum information over 600 kilometers of fiber optics.
Taking on the noise
At the same time that the industry is linking up qubits, it is also moving away from an idea that came into vogue in the last five years—that chips with just a few hundred qubits might be able to do useful computing, even though noise easily disrupts their operations.
This notion, called “noisy intermediate-scale quantum” (NISQ), would have been a way to see some short-term benefits from quantum computing, potentially years before reaching the ideal of large-scale quantum computers with many hundreds of thousands of qubits devoted to correcting errors. But optimism about NISQ seems to be fading. “The hope was that these computers could be used well before you did any error correction, but the emphasis is shifting away from that,” says Joe Fitzsimons, CEO of Singapore-based Horizon Quantum Computing.
Some companies are taking aim at the classic form of error correction, using some qubits to correct errors in others. Last year, both Google Quantum AI and Quantinuum, a new company formed by Honeywell and Cambridge Quantum Computing, issued papers demonstrating that qubits can be assembled into error-correcting ensembles that outperform the underlying physical qubits.
Other teams are trying to see if they can find a way to make quantum computers “fault tolerant” without as much overhead. IBM, for example, has been exploring characterizing the error-inducing noise in its machines and then programming in a way to subtract it (similar to what noise-canceling headphones do). It’s far from a perfect system—the algorithm works from a prediction of the noise that is likely to occur, not what actually shows up. But it does a decent job, Chow says: “We can build an error-correcting code, with a much lower resource cost, that makes error correction approachable in the near term.”
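IBM has published several error-mitigation schemes along these lines, and the article does not say exactly which one Chow is referring to. The following is only a minimal sketch of zero-noise extrapolation, one of the simplest published approaches, written in plain Python with a toy stand-in for the hardware:

```python
# Illustrative sketch of zero-noise extrapolation (ZNE), one published
# error-mitigation idea; not necessarily the exact scheme IBM uses.
import numpy as np

def noisy_expectation(scale, true_value=1.0, noise_rate=0.05):
    """Toy stand-in for running a circuit with its noise amplified by
    `scale`; on hardware this is done by, e.g., stretching gate times."""
    return true_value * np.exp(-noise_rate * scale)

# Measure the same observable at deliberately amplified noise levels...
scales = np.array([1.0, 1.5, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# ...then fit a model and extrapolate back to the zero-noise limit.
# A linear fit in log space matches the exponential decay assumed above.
slope, intercept = np.polyfit(scales, np.log(values), 1)
mitigated = np.exp(intercept)  # extrapolated value at scale = 0

print(f"raw (scale 1): {values[0]:.4f}  mitigated: {mitigated:.4f}")
```

The trade-off is the one described above: instead of spending extra qubits on correction, you spend extra circuit runs to characterize the noise and subtract its estimated effect.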
Maryland-based IonQ, which is building trapped-ion quantum computers, is doing something similar. “The majority of our errors are imposed by us as we poke at the ions and run programs,” says Chris Monroe, chief scientist at IonQ. “That noise is knowable, and different types of mitigation have allowed us to really push our numbers.”
Getting serious about software
For all the hardware progress, many researchers feel that more attention needs to be given to programming. “Our toolbox is definitely limited, compared to what we need to have 10 years down the road,” says Michal Stechly of Zapata Computing, a quantum software company based in Boston.
The way code runs on a cloud-accessible quantum computer is generally “circuit-based,” which means the data is put through a specific, predefined series of quantum operations before a final quantum measurement is made, giving the output. That’s problematic for algorithm designers, Fitzsimons says. Conventional programming routines tend to involve looping some steps until a desired output is reached, and then moving into another subroutine. In circuit-based quantum computing, getting an output generally ends the computation: there is no option for going round again.
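To make the limitation concrete, here is a minimal sketch in Qiskit, IBM's open-source framework (one common toolkit for circuit-based programming; exact API details vary across versions). The program is a fixed sequence of gates ending in a measurement, with no way to branch or loop on intermediate results:

```python
# A minimal circuit-model program in Qiskit: a fixed gate sequence, a
# final measurement, and no way to loop back. (Qiskit API names as of
# recent releases; details may differ across versions.)
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # the measurement ends the computation

# The output is a distribution over bitstrings; rerunning the circuit
# from scratch is the only way to "go round again."
counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)                # e.g. {'00': 492, '11': 508}
```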
Horizon Quantum Computing is one of the companies that have been building programming tools to allow these flexible computation routines. “That gets you to a different regime in terms of the kinds of things you’re able to run, and we’ll start rolling out early access in the coming year,” Fitzsimons says.
Helsinki-based Algorithmiq is also innovating in the programming space. “We need nonstandard frameworks to program current quantum devices,” says CEO Sabrina Maniscalco. Algorithmiq’s newly launched drug discovery platform, Aurora, combines the results of a quantum computation with classical algorithms. Such “hybrid” quantum computing is a growing area, and it’s widely acknowledged as the way the field is likely to function in the long term. The company says it expects to achieve a useful quantum advantage—a demonstration that a quantum system can outperform a classical computer on real-world, relevant calculations—in 2023.
Competition around the world
Change is likely coming on the policy front as well. Government representatives including Alan Estevez, US undersecretary of commerce for industry and security, have hinted that trade restrictions surrounding quantum technologies are coming.
Tony Uttley, COO of Quantinuum, says that he is in active dialogue with the US government about making sure this doesn’t adversely affect what is still a young industry. “About 80% of our system is components or subsystems that we buy from outside the US,” he says. “Putting a control on them doesn’t help, and we don’t want to put ourselves at a disadvantage when competing with other companies in other countries around the world.”
And there are plenty of competitors. Last year, the Chinese search company Baidu opened access to a 10-superconducting-qubit processor that it hopes will help researchers make forays into applying quantum computing to fields such as materials design and pharmaceutical development. The company says it has recently completed the design of a 36-qubit superconducting quantum chip. “Baidu will continue to make breakthroughs in integrating quantum software and hardware and facilitate the industrialization of quantum computing,” a spokesman for the company told MIT Technology Review. The tech giant Alibaba also has researchers working on quantum computing with superconducting qubits.
In Japan, Fujitsu is working with the Riken research institute to offer companies access to the country’s first home-grown quantum computer in the fiscal year starting April 2023. It will have 64 superconducting qubits. “The initial focus will be on applications for materials development, drug discovery, and finance,” says Shintaro Sato, head of the quantum laboratory at Fujitsu Research.
Not everyone is following the well-trodden superconducting path, however. In 2020, the Indian government pledged to spend 80 billion rupees ($1.12 billion when the announcement was made) on quantum technologies. A good chunk will go to photonics technologies—for satellite-based quantum communications, and for innovative “qudit” photonics computing.
Qudits expand the data encoding scope of qubits—they offer three, four, or more dimensions, as opposed to just the traditional binary 0 and 1, without necessarily increasing the scope for errors to arise. “This is the kind of work that will allow us to create a niche, rather than competing with what has already been going on for several decades elsewhere,” says Urbasi Sinha, who heads the quantum information and computing laboratory at the Raman Research Institute in Bangalore, India.
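As a back-of-the-envelope illustration (ours, not Sinha's): a d-level qudit is described by a normalized d-dimensional complex state vector, and n qudits span a d^n-dimensional space, which is why even a modest d buys extra encoding room:

```python
# Back-of-the-envelope comparison of qubit and qudit state spaces.
import numpy as np

def random_state(d):
    """A random normalized d-dimensional quantum state (a 'qudit')."""
    amps = np.random.randn(d) + 1j * np.random.randn(d)
    return amps / np.linalg.norm(amps)

for d in (2, 3, 4):                  # qubit, qutrit, ququart
    psi = random_state(d)
    probs = np.abs(psi) ** 2         # Born-rule outcome probabilities
    print(f"d={d}: {np.log2(d):.2f} bits per measurement, "
          f"probabilities sum to {probs.sum():.3f}")

# n qudits span a d**n-dimensional space, versus 2**n for qubits:
n = 10
print(f"ten qutrits: {3**n} basis states; ten qubits: {2**n}")
```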
Though things are getting serious and internationally competitive, quantum technology remains largely collaborative—for now. “The nice thing about this field is that competition is fierce, but we all recognize that it’s necessary,” Monroe says. “We don’t have a zero-sum-game mentality: there are different technologies out there, at different levels of maturity, and we all play together right now. At some point there’s going to be some kind of consolidation, but not yet.”
Michael Brooks is a freelance science journalist based in the UK.
By placing equity at the center of public health programs, government agencies could more effectively address disparities and improve health outcomes for all Americans.
The approach centers on equity as an explicit and nonnegotiable objective of every aspect of the program, from setting strategic goals and metrics to designing a new operating model. The term “health equity” can mean different things to different people. Aligning on a clear definition of health equity goals—including how they are measured—is a critical first step in this approach.
Set ambitious health equity goals. Under the integrated approach, the highest office or entity sets and commits to aspirational equity targets at the outset of the program or, for existing programs, at important milestones. Each goal is structured as “SMART”—that is, specific, measurable, attainable, relevant, and time-bound. The specific directive to achieve 85 percent coverage of first-dose vaccinations for the adult population in the 25 highest-need locations (based on a social-vulnerability index) within three months is an example of a SMART goal.
Leverage existing data to delineate metrics. After equity goals have been set, leaders can identify clear metrics to track their progress. A common concern is the lack of precise data; for example, state vaccination rates can fluctuate because of interstate travel and overlap with vaccinations given by federal entities. In cases like these, it may be necessary to take a minimum viable product (MVP) approach, in which metrics are tracked using the best data available while the organization continuously works to improve data quality.
Adapt goals based on progress and data availability. As progress is made toward equity goals, and as better data becomes available, program leaders may need to adjust their initial goals to meet evolving circumstances. Examples include updating goals to include additional demographics as vaccines become available, increasing the granularity of data and metrics from the county to the zip code level, and expanding efforts to related program delivery areas such as maternal health or nutrition. Whenever goals change, it is important to gain stakeholder buy-in, acknowledge the shift, celebrate wins, and invest in any additional capabilities required to reach the new objectives. These actions can go a long way toward mitigating concerns about shifting goalposts that can derail goodwill and lead to burnout.
2. Embrace new ways of working
The next step is to implement new ways of working to ensure that ambitious equity goals are clearly translated and reinforced in day-to-day work.
Empower change from the top. In addition to setting equity-based goals, the approach calls for the highest office or leadership to publicly empower their teams to deliver on those objectives. During the COVID-19 response, this often meant creating new modes of engagement—for example, by breaking down agency-level silos; creating partnerships among public, private, and academic institutions; working across partisan lines; collaborating with the community in new ways; and harnessing the power of nontraditional media.
Adapt governance and decision making. Under this approach, governance structures and processes adapt to support these new ways of working and deliver equitable impact. An effective governance model could include periodic touchpoints to track outcomes against equity targets, clear accountability across teams, and escalation pathways to address challenges or shortfalls against targets.
Create a collaborative and interdisciplinary working model. A collaborative and cross-functional operating model builds alignment among internal stakeholders (such as agency heads, data analysts, logistics teams, communications teams, and local and regional officials) and external partners (such as community and faith-based organizations, private-sector organizations, and schools). Weekly “thematic” touchpoints can also facilitate close coordination and communication among stakeholders.
Implement agile ways of working. Agile practices, such as sprint-based and iterative work cycles, can help organizations continuously track progress and adjust the strategy within a constantly shifting landscape. While this may be particularly important in times of crisis, such as the COVID-19 pandemic, long-term programs can also benefit from the greater flexibility and speed of agile principles. Feedback is critical; successful initiatives and learnings should be shared and scaled across the organization, while failures and setbacks should be quickly identified and addressed.
3. Walk the talk
The third element of the approach is relentless execution with a strong emphasis on data-driven decision making, capability building, and local engagement.
Fact check with data. Data can be a powerful tool for organizations to use to establish a robust fact base, create transparency around equity outcomes, and “myth bust” perceived roadblocks or misinformation. Organizations must usually meet two conditions: data that is granular enough to drive actionable insights, and decision makers at all levels with access to the latest data and insights. Then organizations can harness this data to make better decisions and quickly verify (or debunk) their assumptions. For example, after a public health agency tested various COVID-19 vaccine strategies (including adding multiple pop-up sites and extending hours for permanent sites), it found that low vaccine uptake in “hesitant” areas was due to lack of access rather than hesitancy, as had previously been assumed.
Build capabilities to execute. Translating data to insights, and insights to action, often requires a wide range of skills and capabilities that may not exist within the organization. For many institutions, the pandemic highlighted a need to train employees on unconstrained problem solving, stakeholder management, data analysis, event coordination, and verbal communication. Agencies found success through various capability-building innovations, including running on-the-ground trainings and workshops in response to immediate needs (for example, crisis management), collaborating with private and academic partners (such as in supply chain and logistics management), establishing rewards for high performance, flattening the traditional organizational hierarchy, and creating new career paths for those on the front lines.
Understand the local context. Even the most rigorous, data-driven initiative could fail if it is not grounded in the local context of the people it is meant to serve. Several COVID-19 vaccination efforts were hamstrung by a lack of cultural sensitivity and awareness of local needs. For example, the vaccine delivery program had limited success when it offered only pediatric vaccines in certain locations where multigenerational families preferred to get vaccinated together. This lesson—that all vaccine events need to provide all doses to all age groups—was then applied more broadly to other locations. Organizations can hire diverse teams and engage experts and community partners to ensure that local perspectives are consulted in the design and execution of any initiative.
While this approach found some success in vaccine delivery, we recognize that there are certain constraints and criteria for success. First, the approach does not address the various and interconnected aspects of systemic and structural inequity that shape individuals’ abilities to make certain choices about their health. In other words, it focuses on improving access to healthcare rather than changing consumer behaviors and decisions. Second, the transformational nature of the approach requires a significant level of commitment from public health agencies—including sustained engagement from the highest levels of leadership, a focus on talent management and capability building, a broader cultural shift in institutional ways of working, and effective execution at all levels of the organization.
The COVID-19 pandemic revealed the profound need to embed equity into public health programs at the federal, state, and local levels. As we emerge from the crisis, public and social institutions have an opportunity to reimagine their approach to delivering public health services and contribute to a more equitable and resilient future for all Americans.
ABOUT THE AUTHOR(S)
Angie Cui is an alumna of McKinsey’s New Jersey office, Ellen Feehan, MD, is a partner in the New Jersey office, JP Julien is a partner in the Philadelphia office, and Neeraja Nagarajan, MD, is an associate partner in the Washington, DC, office.
The authors wish to thank Jason Forrest, Dipti Pai, and Ashley Pitt for their contributions to this article.
Initiative includes the release of new organ donor and transplant data; prioritization of modernization of the OPTN IT system; and call for Congress to make specific reforms in the National Organ Transplant Act
Today, the Health Resources and Services Administration (HRSA), an agency of the U.S. Department of Health and Human Services (HHS), announced a Modernization Initiative that includes several actions to strengthen accountability and transparency in the Organ Procurement and Transplantation Network (OPTN):
Data dashboards detailing individual transplant center and organ procurement organization data on organ retrieval, waitlist outcomes, and transplants, and demographic data on organ donation and transplant;
Modernization of the OPTN IT system in line with industry-leading standards, improving OPTN governance, and increasing transparency and accountability in the system to better serve the needs of patients and families;
HRSA’s intent to issue contract solicitations for multiple awards to manage the OPTN in order to foster competition and ensure OPTN Board of Directors’ independence;
The President’s Fiscal Year 2024 Budget proposal to more than double investment in organ procurement and transplantation with a $36 million increase over Fiscal Year 2023 for a total of $67 million; and,
A request to Congress included in the Fiscal Year 2024 Budget to update the nearly 40-year-old National Organ Transplant Act to take actions such as:
Removing the appropriations cap on the OPTN contract(s) to allow HRSA to better allocate resources, and
Expanding the pool of eligible contract entities to enhance performance and innovation through increased competition.
“Every day, patients and families across the United States rely on the Organ Procurement and Transplantation Network to save the lives of their loved ones who experience organ failure,” said Carole Johnson, HRSA Administrator. “At HRSA, our stewardship and oversight of this vital work is a top priority. That is why we are taking action to both bring greater transparency to the system and to reform and modernize the OPTN. The individuals and families that depend on this life-saving work deserve no less.”
Today, HRSA is posting on its website a new data dashboard to share de-identified information on organ donors, organ procurement, transplant waitlists, and transplant recipients. Patients, families, clinicians, researchers, and others can use this data to inform decision-making as well as process improvements. Today’s launch is an initial data set, which HRSA intends to refine over time and update regularly.
This announcement also includes a plan to strengthen accountability, equity, and performance in the organ donation and transplantation system. This iterative plan will specifically focus on five key areas: technology; data transparency; governance; operations; and quality improvement and innovation. In implementing this plan, HRSA intends to issue contract solicitations for multiple awards to manage and improve the OPTN. HRSA also intends to further the OPTN Board of Directors’ independence through the contracting process and the use of multiple contracts. Ensuring robust competition in every industry is a key priority of the Biden-Harris Administration and will help meet the OPTN Modernization Initiative’s goals of promoting innovation and the best quality of service for patients.
Finally, the President’s Budget for Fiscal Year 2024 would more than double HRSA’s budget for organ-related work, including OPTN contracting and the implementation of the Modernization Initiative, to total $67 million. In addition, the Budget requests statutory changes to the National Organ Transplant Act to remove the decades-old ceiling on the amount of appropriated funding that can be awarded to the statutorily required vendor(s) for the OPTN. It also requests that Congress expand the pool of eligible contract entities to enhance performance and innovation through increased competition, particularly with respect to information technology vendors.
HRSA recognizes that while modernization work is complex, the integrity of the organ matching process is paramount and cannot be disrupted. That is why HRSA’s work will be guided by and centered around several key priorities, including the urgent needs of the more than 100,000 individuals and their families awaiting transplant; the 24/7 life-saving nature of the system; and patient safety and health. HRSA intends to engage with a wide and diverse group of stakeholders early and often to ensure a human-centered design approach that reflects pressing areas of need and ensures that the experiences of system users, such as patients, are addressed first. As a part of this commitment, HRSA has created an OPTN Modernization Website to keep stakeholders informed about the Modernization Initiative and provide regular progress updates.
Using thousands of electron microscope images and the original chip layouts, researchers spotted 37 of 40 deliberate modifications.
Researchers at the Ruhr University Bochum and the Max Planck Institute for Security and Privacy (MPI-SP) have come up with an approach to analyzing die photos of real-world microchips to reveal hardware Trojan attacks — and are releasing their imagery and algorithm for all to try.
“It’s conceivable that tiny changes might be inserted into the designs in the factories shortly before production that could override the security of the chips,” says Steffen Becker, PhD, co-author of the paper detailing the work, describing the problem the team set out to solve. “In extreme cases, such hardware Trojans could allow an attacker to paralyze parts of the telecommunications infrastructure at the push of a button.”
High-resolution die shots and original layout files have proven enough to automatically flag potentially malicious modifications in CMOS chips. (📷: Puschner et al)
Looking at chips built on 28nm, 40nm, 65nm, and 90nm process nodes, the team set about automating the process of inspecting the finished silicon chips for hardware-level tampering. Using designs created by Thorben Moos, PhD, the researchers devised a way to test their approach: taking the physical chips Moos had already built and comparing them against design files into which minor modifications had been introduced after fabrication, so that the chips and the plans no longer matched exactly.
“Comparing the chip images and the construction plans turned out to be quite a challenge, because we first had to precisely superimpose the data,” says first author Endres Puschner. “On the smallest chip, which uses 28-nanometer technology, a single speck of dust or a hair can obscure a whole row of standard cells.”
Despite these challenges, the analysis algorithm showed promise, detecting 37 of the 40 modifications — including all the modifications made to the chips built on process nodes between 40nm and 90nm. The algorithm did, admittedly, throw up 500 false positives — but, says Puschner, “with more than 1.5 million standard cells examined, this is a very good rate.”
The team’s approach picked up on modifications (left) compared to the expected design output (right) automatically. (📷: Puschner et al)
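The published pipeline is considerably more elaborate, but the core align-then-compare idea can be sketched with standard tools. The code below is an illustrative reconstruction using numpy and scipy, not the authors' released source:

```python
# Illustrative reconstruction of the align-then-compare idea, using
# numpy and scipy; this is not the authors' released pipeline.
import numpy as np
from scipy.signal import fftconvolve

def align_offset(image, layout):
    """Estimate the (dy, dx) shift between a die image and a rendered
    layout via cross-correlation (sign conventions simplified)."""
    corr = fftconvolve(image - image.mean(),
                       (layout - layout.mean())[::-1, ::-1], mode="same")
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy - image.shape[0] // 2, dx - image.shape[1] // 2

def flag_suspicious_cells(image, layout, cell_px=32, threshold=0.25):
    """Compare the aligned image against the layout cell by cell and
    return grid positions whose mean absolute difference is large."""
    dy, dx = align_offset(image, layout)
    aligned = np.roll(image, (-dy, -dx), axis=(0, 1))
    flags = []
    for y in range(0, layout.shape[0] - cell_px + 1, cell_px):
        for x in range(0, layout.shape[1] - cell_px + 1, cell_px):
            diff = np.abs(aligned[y:y + cell_px, x:x + cell_px]
                          - layout[y:y + cell_px, x:x + cell_px]).mean()
            if diff > threshold:     # dust, a hair, or a real Trojan
                flags.append((y // cell_px, x // cell_px, diff))
    return flags
```

Note that dust and hairs land in the same bin as genuine modifications, which is exactly why 500 false positives across 1.5 million cells counts as a good rate.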
The desire to analyze silicon-level hardware to detect either malicious modifications or counterfeit hardware was also behind recent work by engineer Andrew “bunnie” Huang, who developed a technique for peering inside packaged chips and uncovering the silicon within. Huang’s approach lacks the resolution, however, for cell-level analysis — which this research team managed through electron microscopy.
The team’s paper is available under open-access terms on the IACR Cryptology ePrint Archive, while the full imagery and source code behind the paper have been published to GitHub under the permissive MIT license. “We […] hope that other groups will use our data for follow-up studies,” Becker says. “Machine learning could probably improve the detection algorithm to such an extent that it would also detect the changes on the smallest chips that we missed.”
Cleveland Clinic is a Founding Partner in Quantum Innovation Hub
Based in Greater Washington D.C., the nonprofit development organization Connected DMV and a coalition of partners are developing the new Life Sciences and Healthcare Quantum Innovation Hub. Its purpose is to prepare the healthcare sector for the burgeoning quantum era and to align with key national and global efforts in life sciences and quantum technologies.
The U.S. Department of Commerce’s Economic Development Administration has awarded more than $600,000 to Connected DMV for development of the Hub. This will include formation of a collaborative of at least 25 organizations specializing in quantum technology and end-use applications.
Cleveland Clinic was invited to join the Quantum Innovation Hub because of its work to advance medical research through quantum computing. As the lead healthcare system in the coalition, Cleveland Clinic will help define quantum’s role in the future of healthcare and educate other health systems on the technology’s possibilities.
Quantum’s potential
Quantum computing is a radically different approach to information processing and data analysis. It is based on the principles of quantum physics, which describe how subatomic particles behave. By manipulating and measuring the actions of quantum particles, quantum computers can in theory solve problems too massive and complex for traditional computers, which are bound by the laws of classical physics. Quantum computers are still in an early phase of development, but they have the potential to advance medical research.
“We believe quantum computing holds great promise for accelerating the pace of scientific discovery,” says Lara Jehi, MD, MHCDS, Cleveland Clinic’s Chief Research Information Officer. “As an academic medical center, research, innovation and education are integral parts of Cleveland Clinic’s mission. Quantum, AI [artificial intelligence] and other emerging technologies have the potential to revolutionize medicine. We look forward to working with partners across healthcare and life sciences to solve complex medical problems and change the course of diseases like cancer, heart conditions and neurodegenerative disorders.”
Collaborating with IBM
Last year, Cleveland Clinic announced a 10-year partnership with IBM to establish the Discovery Accelerator, a joint center focused on easing traditional bottlenecks in medical research through innovative technologies such as quantum computing, the hybrid cloud and artificial intelligence.
The partnership combines Cleveland Clinic’s medical expertise with the technology expertise of IBM, including the company’s leadership in quantum technology. IBM is installing the first private-sector on-premises Quantum System One computer in the United States on Cleveland Clinic’s main campus.
The Discovery Accelerator will allow Cleveland Clinic to contribute to Connected DMV’s hub by advancing the pace of discovery, using the IBM quantum computer in areas such as drug design, deciphering complex biological processes and developing personalized disease therapies.
The hub’s goals are to:
Accelerate investment in, and research and development of, quantum computing.
Develop an equitable and scalable talent pipeline to work in quantum computing.
Increase collaboration among the public sector, academia, industry, community and investors to accelerate the value of quantum computing.
“Innovation is always iterative, and requires sustained collaboration between research, development and technology, and the industries that will benefit from the value generated,” says George Thomas, Chief Innovation Officer of Connected DMV and the leader of the Potomac Quantum Innovation Center initiative.
“Quantum has the potential to have a substantive impact on our society in the near future,” Thomas says. “The Life Sciences and Healthcare Quantum Innovation Hub will serve as the foundation for sustained focus and investment to accelerate and scale our path into the era of quantum.”
In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary.
The first time was in 1980, when I was introduced to a graphical user interface—the forerunner of every modern operating system, including Windows. I sat with the person who had shown me the demo, a brilliant programmer named Charles Simonyi, and we immediately started brainstorming about all the things we could do with such a user-friendly approach to computing. Charles eventually joined Microsoft, Windows became the backbone of Microsoft, and the thinking we did after that demo helped set the company’s agenda for the next 15 years.
The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn’t been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts—it asks you to think critically about biology.) If you can do that, I said, then you’ll have made a true breakthrough.
I thought the challenge would keep them busy for two or three years. They finished it in just a few months.
In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam—and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5—the highest possible score, and the equivalent of getting an A or A+ in a college-level biology course.
Once it had aced the test, we asked it a non-scientific question: “What do you say to a father with a sick child?” It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.
I knew I had just seen the most important advance in technology since the graphical user interface.
This inspired me to think about all the things that AI can achieve in the next five to 10 years.
The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.
Philanthropy is my full-time job these days, and I’ve been thinking a lot about how—in addition to helping people be more productive—AI can reduce some of the world’s worst inequities. Globally, the worst inequity is in health: 5 million children under the age of 5 die every year. That’s down from 10 million two decades ago, but it’s still a shockingly high number. Nearly all of these children were born in poor countries and die of preventable causes like diarrhea or malaria. It’s hard to imagine a better use of AIs than saving the lives of children.
I’ve been thinking a lot about how AI can reduce some of the world’s worst inequities.
In the United States, the best opportunity for reducing inequity is to improve education, particularly making sure that students succeed at math. The evidence shows that having basic math skills sets students up for success, no matter what career they choose. But achievement in math is going down across the country, especially for Black, Latino, and low-income students. AI can help turn that trend around.
Climate change is another issue where I’m convinced AI can make the world more equitable. The injustice of climate change is that the people who are suffering the most—the world’s poorest—are also the ones who did the least to contribute to the problem. I’m still thinking and learning about how AI can help, but later in this post I’ll suggest a few areas with a lot of potential.
In short, I’m excited about the impact that AI will have on issues that the Gates Foundation works on, and the foundation will have much more to say about AI in the coming months. The world needs to make sure that everyone—and not just people who are well-off—benefits from artificial intelligence. Governments and philanthropy will need to play a major role in ensuring that it reduces inequity and doesn’t contribute to it. This is the priority for my own work related to AI.
Any new technology that’s so disruptive is bound to make people uneasy, and that’s certainly true with artificial intelligence. I understand why—it raises hard questions about the workforce, the legal system, privacy, bias, and more. AIs also make factual mistakes and experience hallucinations. Before I suggest some ways to mitigate the risks, I’ll define what I mean by AI, and I’ll go into more detail about some of the ways in which it will help empower people at work, save lives, and improve education.
Defining artificial intelligence
Technically, the term artificial intelligence refers to a model created to solve a specific problem or provide a particular service. What is powering things like ChatGPT is artificial intelligence. It is learning how to do chat better but can’t learn other tasks. By contrast, the term artificial general intelligence refers to software that’s capable of learning any task or subject. AGI doesn’t exist yet—there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all.
Developing AI and AGI has been the great dream of the computing industry. For decades, the question was when computers would be better than humans at something other than making calculations. Now, with the arrival of machine learning and large amounts of computing power, sophisticated AIs are a reality and they will get better very fast.
I think back to the early days of the personal computing revolution, when the software industry was so small that most of us could fit onstage at a conference. Today it is a global industry. Since a huge portion of it is now turning its attention to AI, the innovations are going to come much faster than what we experienced after the microprocessor breakthrough. Soon the pre-AI period will seem as distant as the days when using a computer meant typing at a C:> prompt rather than tapping on a screen.
Productivity enhancement
Although humans are still better than GPT at a lot of things, there are many jobs where these capabilities are not used much. For example, many of the tasks done by a person in sales (digital or phone), service, or document handling (like payables, accounting, or insurance claim disputes) require decision-making but not the ability to learn continuously. Corporations have training programs for these activities and in most cases, they have a lot of examples of good and bad work. Humans are trained using these data sets, and soon these data sets will also be used to train the AIs that will empower people to do this work more efficiently.
As computing power gets cheaper, GPT’s ability to express ideas will increasingly be like having a white-collar worker available to help you with various tasks. Microsoft describes this as having a co-pilot. Fully incorporated into products like Office, AI will enhance your work—for example by helping with writing emails and managing your inbox.
Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you’ll be able to write a request in plain English. (And not just English—AIs will understand languages from around the world. In India earlier this year, I met with developers who are working on AIs that will understand many of the languages spoken there.)
In addition, advances in AI will enable the creation of a personal agent. Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with. This will both improve your work on the tasks you want to do and free you from the ones you don’t want to do.
Advances in AI will enable the creation of a personal agent.
You’ll be able to use natural language to have this agent help you with scheduling, communications, and e-commerce, and it will work across all your devices. Because of the cost of training the models and running the computations, creating a personal agent is not feasible yet, but thanks to the recent advances in AI, it is now a realistic goal. Some issues will need to be worked out: For example, can an insurance company ask your agent things about you without your permission? If so, how many people will choose not to use it?
Company-wide agents will empower employees in new ways. An agent that understands a particular company will be available for its employees to consult directly and should be part of every meeting so it can answer questions. It can be told to be passive or encouraged to speak up if it has some insight. It will need access to the sales, support, finance, product schedules, and text related to the company. It should read news related to the industry the company is in. I believe that the result will be that employees will become more productive.
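One plausible shape for such a company-wide agent, sketched below purely for illustration, is a retrieve-then-answer loop. Every function here is a hypothetical stand-in (for a document index and a language model), not a real API:

```python
# Hypothetical sketch of a company-wide agent loop. Every helper below
# is a stand-in (for a document index and a language model), not a
# real API.
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. "sales", "support", "finance", "product schedule"
    text: str

def search_company_index(query, docs, k=3):
    """Stand-in retrieval: rank documents by naive keyword overlap.
    A production system would use an embedding-based index."""
    words = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: -len(words & set(d.text.lower().split())))
    return ranked[:k]

def answer_with_llm(question, context):
    """Stand-in for a call to a large language model."""
    sources = ", ".join(d.source for d in context)
    return f"[model answer to {question!r}, grounded in: {sources}]"

def company_agent(question, docs):
    # Retrieve the most relevant internal material, then answer from it.
    context = search_company_index(question, docs)
    return answer_with_llm(question, context)
```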
When productivity goes up, society benefits because people are freed up to do other things, at work and at home. Of course, there are serious questions about what kind of support and retraining people will need. Governments need to help workers transition into other roles. But the demand for people who help other people will never go away. The rise of AI will free people up to do things that software never will—teaching, caring for patients, and supporting the elderly, for example.
Global health and education are two areas where there’s great need and not enough workers to meet those needs. These are areas where AI can help reduce inequity if it is properly targeted. These should be a key focus of AI work, so I will turn to them now.
Health
I see several ways in which AIs will improve health care and the medical field.
For one thing, they’ll help health-care workers make the most of their time by taking care of certain tasks for them—things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor’s visit. I expect that there will be a lot of innovation in this area.
Other AI-driven improvements will be especially important for poor countries, where the vast majority of under-5 deaths happen.
For example, many people in those countries never get to see a doctor, and AIs will help the health workers they do see be more productive. (The effort to develop AI-powered ultrasound machines that can be used with minimal training is a great example of this.) AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
The AI models used in poor countries will need to be trained on different diseases than in rich countries. They will need to work in different languages and factor in different challenges, such as patients who live very far from clinics or can’t afford to stop working if they get sick.
People will need to see evidence that health AIs are beneficial overall, even though they won’t be perfect and will make mistakes. AIs have to be tested very carefully and properly regulated, which means it will take longer for them to be adopted than in other areas. But then again, humans make mistakes too. And having no access to medical care is also a problem.
In addition to helping with care, AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it’s hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way.
The next generation of tools will be much more efficient, and they’ll be able to predict side effects and figure out dosing levels. One of the Gates Foundation’s priorities in AI is to make sure these tools are used for the health problems that affect the poorest people in the world, including AIDS, TB, and malaria.
Similarly, governments and philanthropy should create incentives for companies to share AI-generated insights into crops or livestock raised by people in poor countries. AIs can help develop better seeds based on local conditions, advise farmers on the best seeds to plant based on the soil and weather in their area, and help develop drugs and vaccines for livestock. As extreme weather and climate change put even more pressure on subsistence farmers in low-income countries, these advances will be even more important.
Education
Computers haven’t had the effect on education that many of us in the industry have hoped for. There have been some good developments, including educational games and online sources of information like Wikipedia, but they haven’t had a meaningful effect on any of the measures of students’ achievement.
But I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to. It will give immediate feedback.
There are many ways that AIs can assist teachers and administrators, including assessing a student’s understanding of a subject and giving advice on career planning. Teachers are already using tools like ChatGPT to provide comments on their students’ writing assignments.
Of course, AIs will need a lot of training and further development before they can do things like understand how a certain student learns best or what motivates them. Even once the technology is perfected, learning will still depend on great relationships between students and teachers. It will enhance—but never replace—the work that students and teachers do together in the classroom.
New tools will be created for schools that can afford to buy them, but we need to ensure that they are also created for and available to low-income schools in the U.S. and around the world. AIs will need to be trained on diverse data sets so they are unbiased and reflect the different cultures where they’ll be used. And the digital divide will need to be addressed so that students in low-income households do not get left behind.
I know a lot of teachers are worried that students are using GPT to write their essays. Educators are already discussing ways to adapt to the new technology, and I suspect those conversations will continue for quite some time. I’ve heard about teachers who have found clever ways to incorporate the technology into their work—like by allowing students to use GPT to create a first draft that they have to personalize.
Risks and problems with AI
You’ve probably read about problems with the current AI models. For example, they aren’t necessarily good at understanding the context for a human’s request, which leads to some strange results. When you ask an AI to make up something fictional, it can do that well. But when you ask for advice about a trip you want to take, it may suggest hotels that don’t exist. This is because the AI doesn’t understand the context for your request well enough to know whether it should invent fake hotels or only tell you about real ones that have rooms available.
There are other issues, such as AIs giving wrong answers to math problems because they struggle with abstract reasoning. But none of these are fundamental limitations of artificial intelligence. Developers are working on them, and I think we’re going to see them largely fixed in less than two years and possibly much faster.
Other concerns are not simply technical. For example, there’s the threat posed by humans armed with AI. Like most inventions, artificial intelligence can be used for good purposes or malign ones. Governments need to work with the private sector on ways to limit the risks.
Then there’s the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.
Superintelligent AIs are in our future. Compared to a computer, our brains operate at a snail’s pace: An electrical signal in the brain moves at 1/100,000th the speed of the signal in a silicon chip! Once developers can generalize a learning algorithm and run it at the speed of a computer—an accomplishment that could be a decade away or a century away—we’ll have an incredibly powerful AGI. It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. This will be a profound change.
These “strong” AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time.
But none of the breakthroughs of the past few months have moved us substantially closer to strong AI. Artificial intelligence still doesn’t control the physical world and can’t establish its own goals. A recent New York Times article about a conversation with ChatGPT where it declared it wanted to become a human got a lot of attention. It was a fascinating look at how human-like the model’s expression of emotions can be, but it isn’t an indicator of meaningful independence.
Three books have shaped my own thinking on this subject: Superintelligence, by Nick Bostrom; Life 3.0 by Max Tegmark; and A Thousand Brains, by Jeff Hawkins. I don’t agree with everything the authors say, and they don’t agree with each other either. But all three books are well written and thought-provoking.
The next frontiers
There will be an explosion of companies working on new uses of AI as well as ways to improve the technology itself. For example, companies are developing new chips that will provide the massive amounts of processing power needed for artificial intelligence. Some use optical switches—lasers, essentially—to reduce their energy consumption and lower the manufacturing cost. Ideally, innovative chips will allow you to run an AI on your own device, rather than in the cloud, as you have to do today.
On the software side, the algorithms that drive an AI’s learning will get better. There will be certain domains, such as sales, where developers can make AIs extremely accurate by limiting the areas that they work in and giving them a lot of training data that’s specific to those areas. But one big open question is whether we’ll need many of these specialized AIs for different uses—one for education, say, and another for office productivity—or whether it will be possible to develop an artificial general intelligence that can learn any task. There will be immense competition on both approaches.
No matter what, the subject of AIs will dominate the public discussion for the foreseeable future. I want to suggest three principles that should guide that conversation.
First, we should try to balance fears about the downsides of AI—which are understandable and valid—with its ability to improve people’s lives. To make the most of this remarkable new technology, we’ll need to both guard against the risks and spread the benefits to as many people as possible.
Second, market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity. Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world’s best AIs on its biggest problems.
Although we shouldn’t wait for this to happen, it’s interesting to think about whether artificial intelligence would ever identify inequity and try to reduce it. Do you need to have a sense of morality in order to see inequity, or would a purely rational AI also see it? If it did recognize inequity, what would it suggest that we do about it?
Finally, we should keep in mind that we’re only at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it.
I’m lucky to have been involved with the PC revolution and the Internet revolution. I’m just as excited about this moment. This new technology can help people everywhere improve their lives. At the same time, the world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities.
The Office of Personnel Management (OPM) is driving forward on one of the Biden administration’s key technology agenda items – citizen service improvement – by reworking the agency’s main OPM.gov website so it makes more sense to the several different populations who rely on it the most.
Speaking this week during an event organized by GovExec, OPM Chief Information Officer (CIO) Guy Cavallo framed the overhaul project as one that hews to the needs of the various “personas” that need to interact with the agency.
Because OPM’s mission covers a lot of ground – from getting hired by the government, through human resources management, to retirement – the agency has several big customer constituent groups that care about different things the agency provides.
Cavallo – who is one of the Federal government’s most vocal proponents of moving to cloud technologies – said that OPM is now in the midst of a two-year “sprint” to cloud services and away from maintaining its own legacy data centers and mainframe operations, with the goal of moving “as many of our applications to the cloud as we can in the next two years.”
“There’s not just one area I can focus on modernizing,” said Cavallo. “We’re modernizing across the board.”
Since OPM touches most Federal employees – but in some very different ways – the CIO said that “a lot needs to be done to improve the customer experience … a lot of our legacy technologies weren’t designed around making a friendly customer experience.”
On the website revamp front, Cavallo explained that “most government agencies find that their top contact with citizens is through their website.” Speaking of OPM’s website, he said, “if you’re an OPM employee, you can navigate it because it’s designed the way a lot of sites were originally designed – by departmental functions. So, if you don’t know what each department does, you won’t even know where to go look for something.”
“We’re totally going to change that approach,” he said. “We’re going to switch to user personas instead of being departmental focused.”
“People come to OPM for several reasons,” he said. “One is they’re currently a Federal employee, and they want to find out about pay raises, vacation time, maternity leave, anything like that – that if you’re not a Federal employee you’re not going to care about. Right now, you’d have to go to multiple places on the website to find that. So we will have a persona that you’re a Federal employee, and it will take you on that path.”
“We have citizens that are interested in finding a Federal job, and we will walk them through a persona that takes them and focuses on that,” he said.
Another persona envisioned by the new website will deal with personnel policies for the entire Federal government, including things like changes in hiring practices, Cavallo said.
“That’s going to be a radical change from ‘here’s all of our program offices, you figure out what you need to look for,’” he said. “You’ll be able to pick what type of persona you have and follow that path.”
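OPM has not published technical details of the revamp, but the persona-first model Cavallo describes can be pictured as a simple routing table. The Python sketch below is purely illustrative; the persona names and paths are our own stand-ins, not OPM’s actual design.

```python
# Illustrative sketch only: a persona-first routing table of the kind
# Cavallo describes. Persona names and paths are invented, not OPM's.
PERSONA_ROUTES = {
    "federal-employee": "/paths/pay-leave-and-benefits",
    "job-seeker": "/paths/find-a-federal-job",
    "retiree": "/paths/retirement-services",
    "hr-policy": "/paths/personnel-policy",
}

def landing_path(persona: str) -> str:
    """Route each visitor down a persona path instead of a department tree."""
    return PERSONA_ROUTES.get(persona, "/choose-your-path")

print(landing_path("job-seeker"))  # /paths/find-a-federal-job
print(landing_path("visitor"))     # fallback: ask the visitor to pick a persona
```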
“It will also be designed to work on a mobile phone versus trying to read our 100-page documents on a little screen of a mobile phone today,” he said. “That’s just one area that we’re improving.”
Ever the cloud advocate, Cavallo said the website will move to the cloud from its current on-premise status where, he said, “we don’t have that elasticity, or expandability, on premise that we will have with the cloud.”
In other moves to improve citizen service, Cavallo said that OPM just launched its first chatbot to start making it easier for people to find information on the agency’s website.
“We have a very limited set of questions there, but we’re currently looking at our call centers and seeing what type of questions we’re getting from citizens and from retirees, so we’ll continue to build out that technology,” he said.
Cavallo said that amid the agency’s push to the cloud, “we’re also really looking at the customer experience that we’re providing, and not just move to the cloud, but let’s make it more user friendly, easier to use, mobile friendly – things that weren’t designed into our mainframe systems many years ago.”
Quantum computers may deliver an economic advantage to business, even on tasks that classical computers can perform.
Francesco Bova, Avi Goldfarb, and Roger Melko March 06, 2023
Imagine that a pharmaceutical company could cut the research time for innovative drugs by an order of magnitude. It could expand its development pipeline, hit fewer dead ends, and bring cures and treatments to market much faster, to the benefit of millions of people around the world.
Or imagine that a logistics company could dynamically manage the routes for its fleet of thousands of trucks. It could not only take a mind-numbing range of variables into account and adjust quickly as opportunities or constraints arose; it could also get fresher products to store shelves faster and prevent tons of carbon emissions every year.
Quantum computing has the potential to transform these and many more visions into reality — which is why technology companies, private investors, and governments are investing billions of dollars in supporting ecosystems of quantum startups.1 Much of the quantum research community is focused on showing quantum advantage, which means that a quantum computer can perform a calculation, no matter how arbitrary, that is impossible on a classical, or binary, computer. (See “A Quantum Glossary.”) Running a calculation thousands of times faster could create enormous economic value if the calculation itself is useful to some stakeholder in the market.
A Quantum Glossary
Tech-fluent executives should be familiar with these basic quantum computing terms as they monitor the technology and consider potential applications in their own business domains:
Qubit: A qubit is a fundamental unit of quantum information, encoded in delicate physical properties of light or matter and manipulated to produce calculations in a quantum computer. It is analogous to a bit in a classical (binary) computer.
Fault-tolerant quantum computer: These general-purpose digital quantum computers will be able to engage with a broad range of problems with flexibility and reliability. Fault-tolerant quantum computers have proven cases of quantum advantage, such as Shor’s algorithm. But they may be many years away from being realized at scale because of the complex error-correction protocols required for qubits.
Noisy: The quantum computers of today and the near term are noisy, similar to the AM/FM radios that existed long before their digital equivalents were possible. Quantum noise is a much more difficult problem to solve in delicate qubits than electronic and magnetic noise in conventional computer bits.
Quantum speedup: Speedup is a way to measure the relative performance of two computers solving the same problem. Quantum speedup is the improvement that the quantum computer has over a classical competitor in solving the problem. There are many ways to define and characterize speedup. One important metric is how it scales with increasing qubit numbers. (For a concrete sense of scaling, see the sketch after this glossary.)
Quantum advantage: This occurs when quantum computing solves an “impossible” problem, or rather one that a classical computer cannot solve within a feasible or realistic amount of time. The clearest cases of quantum advantage are defined via an exponentially scaling quantum speedup.
Quantum economic advantage: This occurs when quantum computing solves an economically relevant problem either differently or significantly faster than a classical computer. Quantum economic advantage can occur in cases where quantum speedup is less than exponential — that is, when the scaling is quadratic or polynomial.
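To make the glossary’s scaling language concrete, the back-of-the-envelope Python sketch below (our own illustrative arithmetic, not a benchmark of any real machine) compares how the number of required steps grows for a classical exhaustive search, a Grover-style quadratic quantum speedup, and an exponential quantum speedup:

```python
import math

# Illustrative only: how required steps scale with search-space size N for
# a classical exhaustive search (~N), a Grover-style quadratic speedup
# (~sqrt(N)), and an exponential speedup (~log2(N)).
for n_bits in (20, 40, 60):
    N = 2 ** n_bits                  # size of the search space
    classical = N                    # check every candidate
    quadratic = math.isqrt(N)        # Grover-style: ~sqrt(N) queries
    exponential = n_bits             # exponential speedup: ~log2(N) steps
    print(f"N = 2^{n_bits}: classical ~{classical:.1e}, "
          f"quadratic ~{quadratic:.1e}, exponential ~{exponential}")
```

The gap, not the absolute numbers, is the point: a quadratic speedup grows more valuable as problems get bigger, and an exponential one turns impossible problems into tractable ones.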
However, the expense of building quantum hardware, coupled with the steady improvement of classical computers, means that the commercial relevance of quantum computing won’t be apparent unless researchers and investors shift their focus to the pursuit of what we call quantum economic advantage. A business achieves a quantum economic advantage when a quantum computer provides a commercially relevant solution, even if only moderately faster than a classical computer could, or when a quantum computer provides viable solutions that differ from what a classical computer yields.
Why Quantum Economic Advantage Matters
In a dramatic first, a team at Google led by John Martinis made headlines around the world in 2019 when their quantum machine appeared to complete a calculation (cross-entropy benchmarking) in seconds that would take tens of thousands of years on a classical computer.2 Other companies have made similar claims of quantum advantage, including researchers from quantum computing startup Xanadu, which recently completed a well-defined task (Gaussian boson sampling) in under a second that would have taken the best classical supercomputer over 9,000 years.3
Hundreds of articles and many of the top minds in the field have focused on showing these kinds of quantum advantage.4 Their demonstrations represent important milestones in the development of quantum computers. But because they typically involve esoteric computations that might not be relevant to the kinds of problems many businesses need to solve, managers might assume that the technology isn’t yet useful and economically viable for businesses. We contend that quantum technology need not provide a quantum advantage to be economically useful, as long as it can still provide different or timelier outputs than its classical counterparts. Any quantum speedup provides an opportunity for quantum economic advantage. (See “Landscape of Quantum Economic Advantage.”)
Landscape of Quantum Economic Advantage
The diagram below shows computer algorithms and applications ranked by their potential for quantum speedup, and their estimated commercial value. Large exponential speedups to date have been demonstrated on applications with no commercial value. Applications with a moderate quantum speedup but high commercial value have a good chance of showing quantum economic advantage.
While quantum advantage will be directly useful in some cases, many of the most important uses of quantum computers will arise from providing cost-effectiveness and speed rather than from performing a calculation that is impossible on a classical computer. In other words, a quantum economic advantage may exist without a quantum advantage.5
Quantum speedups create an appetite for solutions to business challenges where the ability to solve complex problems extremely quickly would confer a powerful competitive advantage. Considerable effort in quantum computing is dedicated to searching for potential speedups for business problems, although evidence for a robust quantum speedup for commercially relevant problems has been elusive. Nonetheless, identifying and realizing such commercial potential provides a crucial incentive to build quantum computers and significantly influences their design.
Classical computing remains a valuable tool for solving complex problems; the pioneering AlexNet (deep learning) and AlphaFold (protein structure prediction) systems are two examples with high commercial value. Quantum computing might be the better tool to solve a business problem if it provides the solution more quickly than a classical competitor. Running the same calculation on both quantum and classical computers may also produce two different answers, either of which might be better. When the commercial stakes are high enough, having both classical and quantum solutions can be useful, meaning the quantum solutions might still be commercially valuable.
The question of when existing or forthcoming improved quantum computers will generate substantial commercial opportunities is therefore of immediate interest, long before clear-cut quantum advantage becomes obvious in future fault-tolerant machines.6
Opening Valuable Commercial Frontiers
Identifying the commercial potential of a quantum computer does not require an understanding of the quantum physics that undergirds the technology. Instead, the focus should be on what quantum computers can do better, faster, or perhaps differently than classical computers and on the stakeholders who will see those outcomes as valuable. The visions from the pharmaceutical and logistics sectors that we shared at the start of this article illustrate the transformative effects that quantum computing could have. The examples we highlight below show the high value that quantum computing can already unlock within existing workflows.
Make better investment decisions faster. The finance industry faces many optimization problems. Finding the optimal trading trajectory, for example, involves determining the best trading strategy for an investment portfolio over a specific period. Portfolio optimization is a valuable problem to solve, given that over $100 trillion in assets are under management globally.7 Even small improvements in optimization techniques are valuable because of the absolute level of assets invested.
In some cases, determining the optimal investment strategy requires a search through all possible trading trajectories. That effort grows exponentially more challenging as the number of possible securities in the portfolio and the number of opportunities to change the portfolio increase. Recent work has attempted to tackle a portfolio optimization problem that has 10¹³⁰⁰ possible trading strategies — a quantity far in excess of the number of atoms in the visible universe (10⁸⁰).8
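As a toy illustration of why no classical enumeration can keep up (the portfolio figures below are our own invented assumptions, not the setup of the study cited), consider letting each of 100 assets be bought, held, or sold at each of 30 rebalancing dates:

```python
import math

# Toy illustration with invented numbers: 100 assets, 3 actions per asset
# (buy / hold / sell), and 30 rebalancing dates yields 3**(100 * 30)
# possible trading trajectories.
assets, actions, dates = 100, 3, 30
exponent = assets * dates * math.log10(actions)  # log10 of the trajectory count
print(f"about 10^{exponent:.0f} trajectories")   # ~10^1431, versus ~10^80 atoms
                                                 # in the visible universe
```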
Quantum machines may be able to help.9 Researchers at the quantum software company Multiverse Computing compared six different methods for solving the optimal trading trajectory problem. Only two of the six provided a solution for the most complex version of the problem assessed, and the classical solution took almost 700 times longer to generate than the quantum solution. The quantum tools also yielded a different solution — one with higher profits but lower risk-adjusted returns — than the one suggested by strictly classical techniques.
These outcomes are not an example of quantum advantage, because it remains possible that an exhaustive assessment of all possible approaches to solving this problem might generate solutions that are the same as or better than the quantum approaches the researchers at Multiverse used. But it may nevertheless represent an example of quantum economic advantage, because generating a solution quickly is valuable. (See “Gaining an Edge When Time Is Money.”)
Gaining an Edge When Time Is Money
Imagine that a value-at-risk calculation could be completed in seconds as opposed to hours. How would that change an investment decision in real time? The financial advantage of informing investment decisions in near real time is a straightforward example of potential quantum economic advantage in a high-stakes application.
Quantum speedups would be valuable for a wide range of applications in the financial sector. Banks often use value-at-risk calculations to estimate the likelihood and size of potential losses.i These calculations can involve a tool called Monte Carlo simulation to run a large number of scenarios, each with numerous explicitly modeled factors. “Quantum” Monte Carlo may help speed up these processes, which can otherwise be time-consuming.ii This may be particularly important in times of extreme market volatility.
Monte Carlo simulation is also used in the pricing of complex financial derivatives. Trillions of dollars in derivative contracts are traded every year for a variety of purposes, including hedging risk and enabling speculation. Most of these trades are for straightforward contracts whose pricing can be determined dynamically with easily calculated formulas. But pricing for more complex derivatives often requires a simulation that can take minutes, hours, or even days on a classical computer. A quantum speedup could enable faster calculations and open new growth opportunities in a market that is already worth billions of dollars.
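The classical workhorse here is easy to sketch. The toy Monte Carlo pricer below uses textbook Black-Scholes assumptions, with parameter values we chose for illustration; its error shrinks only as 1/√N with the number of simulated paths, which is the cost that quantum amplitude estimation is expected to reduce roughly quadratically.

```python
import numpy as np

# Toy classical Monte Carlo pricer for a European call option under
# textbook Black-Scholes dynamics. All parameter values are our own.
rng = np.random.default_rng(0)
s0, k, r, sigma, t = 100.0, 105.0, 0.02, 0.25, 1.0  # spot, strike, rate, vol, years

def mc_call_price(n_paths: int) -> float:
    z = rng.standard_normal(n_paths)                 # random market scenarios
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return float(np.exp(-r * t) * np.maximum(s_t - k, 0.0).mean())

# Classical error shrinks ~1/sqrt(N): 100x more paths buys 10x more accuracy.
for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} paths -> price estimate {mc_call_price(n):.4f}")
```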
Determining which of the two solutions is better — quantum or classical — remains a challenge. While financial market stakeholders currently don’t have access to fully fault-tolerant quantum computers, they also typically don’t have easy access to the classical supercomputers that are used to benchmark quantum advantage. Thus, comparing current quantum approaches to classical optimizers that are sold commercially still provides a valuable benchmark for the efficacy of a quantum approach. The best classical supercomputers might still generate a similar or superior solution in a timely manner — that is, in a time span that is less than one human lifetime.
Solve seemingly unsolvable business trade-offs. Almost any business problem that involves complex trade-offs — from day-to-day planning to long-term strategic decisions — could be well suited for quantum computers. Think of retailers that are assessing where to place certain products in their stores to maximize revenue, or educators trying to assess which questions to ask and in which order to maximize learning. These trade-off challenges are known as combinatorial optimization problems. The creation of the best-tasting recipe at a restaurant is also a combinatorial optimization, as are the logistics challenges we described at the beginning of this article. Even modest improvements can have a major impact on a company’s profitability.
Business leaders often rely on human intuition to solve these optimization problems. As businesses grow, they may come to rely on computing power to identify the best solutions. For the most complex of these problems, even today’s most powerful computers can provide only an approximation. Quantum computers could, however, perform a search through all possible combinations of arrangements or sequences to find the best solution, providing a moderate speedup over comparable classical searches.10
That’s where they can have a wide range of applications in almost any business sector. For instance, identifying the reasons for machine failure when failure rates are low is a challenging combinatorial optimization problem in advanced manufacturing.11 Finding the cause of failure quickly is important, because downtime can be costly. If quantum computing can speed up the process of determining why a manufacturing process failed, it could be valuable even in settings where classical approaches could eventually find the same reason for failure.
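To see why these searches strain classical machines, consider the brute-force sketch below, a made-up shelf-placement toy whose revenue rule we invented purely for illustration. It checks every arrangement, and the number of arrangements grows factorially; a Grover-style quantum search would need only on the order of the square root as many queries.

```python
from itertools import permutations
import math

# Made-up toy: place 3 of 6 products into 3 ordered shelf slots to
# maximize revenue. The revenue rule is invented for illustration.
products = ["soda", "chips", "salsa", "candy", "gum", "soap"]

def revenue(arrangement):
    # Toy rule: higher-numbered slots amplify longer product names.
    return sum((slot + 1) * len(p) for slot, p in enumerate(arrangement))

best = max(permutations(products, 3), key=revenue)
print(best, revenue(best), f"({math.perm(len(products), 3)} arrangements checked)")
# At 20 products and 6 slots the count is math.perm(20, 6) = 27,907,200;
# a Grover-style search would need on the order of sqrt(N), roughly 5,300 queries.
```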
Discover better materials. The timeliness and quality of quantum computing solutions should also improve the efficiency of R&D processes that lead to new materials and medicines, because they reduce the cost and quicken the pace of discovery relative to classical techniques.
In materials design, computers already aim to simulate the complex behavior of constituent atoms and molecules and reliably predict the structure-property relationships of molecules. In typical applications, however, classical computers face significant limits on the size of molecules they can simulate. Even simulations involving the smallest molecules are computationally intensive, and the addition of even one atom or electron can make a classical simulation drastically slower. This renders many avenues of computer-aided design unavailable for the larger molecules that are of interest to the pharmaceutical, chemical, and materials industries.
Greater computing speed confers the ability to simulate larger and more complex molecules in a practically useful time frame, and quantum computers are poised to make a significant impact in this area.12 They are believed to be able to provide speedups for the calculations required to predict the electronic structure of atoms and molecules, although the precise nature of these speedups is currently a matter of intense debate within the scientific community.13
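The bottleneck is easy to quantify. A quantum state of n two-level components (electron spin orbitals, say) requires 2^n complex amplitudes, so merely storing the state exhausts classical memory long before any chemistry gets done. The arithmetic below is our own back-of-the-envelope sketch, assuming 16 bytes per double-precision complex amplitude.

```python
# Why classical simulation hits a wall: the state of n two-level quantum
# systems needs 2**n complex amplitudes at 16 bytes each (our own
# back-of-the-envelope assumption).
for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gigabytes = 16 * amplitudes / 1e9
    print(f"n = {n}: {amplitudes:.1e} amplitudes, {gigabytes:,.2f} GB")
# n = 30 already strains a laptop (~17 GB); n = 50 needs ~18 million GB,
# and adding a single component doubles the requirement again.
```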
The Enduring Value of Quantum Advantage
Recent claims of quantum advantage may have no commercial application — and thus no quantum economic advantage — but they are nonetheless important because they establish the possibility that quantum computers can perform certain tasks that classical computers cannot.
The application of Shor’s algorithm is perhaps the most frequently referenced example of how quantum advantage might affect society. American mathematician Peter Shor, who recently won science’s most lucrative prize, the Breakthrough Prize in Fundamental Physics, demonstrated that a quantum computer could factor a large integer in cases where classical computers could not.14 A sufficiently large quantum computer could factor these larger integers in days or less, whereas a classical computer might need more time than it would take for the sun to run out of hydrogen.15
While this might sound abstract, the difficulty that classical computers have in factoring very large numbers is actually what enables modern encryption. The relative ease with which quantum computers could theoretically perform the calculations involved in decryption provides an example of where we expect clear-cut quantum advantage to exist. If a quantum computer could implement Shor’s algorithm, then much of the information encrypted in the past could be decoded — and that includes a great deal of encrypted data that has been stolen from organizations in cyberattacks.
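The link between factoring and encryption can be demonstrated at toy scale. In the Python sketch below, built on small primes we picked for illustration (real RSA moduli are hundreds of digits long), recovering the private key is exactly one integer factorization away; trial division manages it here, but its cost explodes with the size of the number, and that is the step Shor’s algorithm would make fast.

```python
# Toy RSA demo with small primes chosen for illustration; real moduli are
# hundreds of digits long, far beyond trial division.
p, q, e = 1_000_003, 1_000_033, 65_537   # secret primes, public exponent
n = p * q                                # public modulus
ciphertext = pow(42, e, n)               # encrypt the message 42

def factor(n: int):
    """Trial division: fine at toy scale, hopeless at real key sizes."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no odd factor found")

p2, q2 = factor(n)                       # the step Shor's algorithm speeds up
d = pow(e, -1, (p2 - 1) * (q2 - 1))      # rebuild the private exponent
print(pow(ciphertext, d, n))             # -> 42: the secret is recovered
```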
This threat is remote right now, because the quantum community is years away from building fault-tolerant quantum computers large enough to use Shor’s algorithm to break codes. But it warrants the attention of managers in almost all industries, who will eventually make up a large market for new, quantum-robust encryption standards. Here, quantum computers’ ability to generate random numbers can play a role in supporting more advanced cybersecurity defenses.
Quantum computers are probabilistic, which means they can generate truly random numbers.16 Their classical counterparts, in contrast, are deterministic and thus can generate only pseudo-random numbers. However, even when a quantum computer can do something with a clear practical application that a classical computer cannot do, business leaders must still weigh the trade-offs. In some cases, the classical approach may be deemed good enough, limiting the organization’s incentives to switch to quantum. This may be the case with some other potential applications for quantum random number generators (RNGs), such as in the lottery and casino gambling sectors.17
Lotteries select winning numbers either electronically or physically, such as by drawing numbered balls from bins. The resulting numbers are not truly random. Because the process is deterministic, it would be possible for someone to predict the numbers if they knew something about the underlying process that generated them, or to manipulate the process to generate certain numbers.
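The determinism is easy to see in software. In the sketch below (standard-library Python; the “lottery” itself is our invention), anyone who learns the seed can reproduce the draw exactly, which is precisely the property a quantum RNG lacks: its output has no seed to leak.

```python
import random

# A pseudo-random "lottery" (our own toy example): the draw is entirely
# determined by the seed, so knowing the seed means knowing the outcome.
def draw(seed: int) -> list[int]:
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, 50), 6))

print(draw(2023))  # one "winning" combination...
print(draw(2023))  # ...reproduced exactly by anyone who knows the seed
```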
While such manipulations have indeed occurred, lotteries will probably not abandon their current processes anytime soon, despite what appears to be a clear argument for using quantum computing. First, lottery fraud through manipulation of the number-generating process is rare, perhaps due to the significant costs of getting caught, strict rules against insiders participating in lotteries, and the existence of powerful analytical tools to detect fraud. Second, in the absence of fraud, pseudo-RNGs yield results that are often indistinguishable from their quantum counterparts. While it may be theoretically possible to predict the outcome of a lottery before drawing balls from multiple bins, it is practically infeasible to do so. Lotteries thus have little incentive to switch to quantum machines. Managers confronting similar cases where quantum computing provides a capability that is entirely lacking in classical computing will likewise need to weigh the relative costs and benefits of the optimal versus the good-enough solution.
Preparing for Quantum Computing in Business
To fulfill its promise and create new value and new commercial opportunities, a quantum machine does not have to accomplish a currently impossible task. It only needs to accomplish something useful. That time is coming, as billions of dollars in investments from venture capitalists, major tech companies, and national governments fuel rapid improvements in quantum computers that will make them more efficient than classical ones.
The consensus among most government and industry players seems to be that large-scale fault-tolerant quantum computers almost certainly won’t appear before the end of this decade. Although it may take years for commercially relevant quantum computers to exist at scale, business leaders can already take several steps to prepare their businesses for this era.
Make a list of your “If we could only …” or “What if …?” challenges. Most businesses have such daunting challenges but rarely address them because they are too resource-intensive and those resources have better short-term uses. The speedups and alternative solutions from quantum computing can make the transformative solutions to these problems feasible. What elements of your business are constrained by combinatorial optimization? And how much would a solution be worth to you?
Help your organization become quantum ready. We anticipate that the impact and scale of commercial applications will accelerate rapidly once fully fault-tolerant quantum computers emerge. Organizations have several ways to prepare themselves. Companies with a higher likelihood of lucrative applications — financial services companies, pharmaceutical manufacturers, and makers of specialty materials — can invest in hardware and software and develop a network of experts. Other organizations can familiarize themselves with the basics of quantum computing, connect with academics, and start to train team members.
Start experimenting now. Companies can already allocate a portion of R&D resources to experiment with near-term quantum hardware. They can set up problems in ways the computers can understand, even if existing hardware might not allow them to capitalize on those opportunities yet. These investments are important to the ongoing development of the technology: Quantum computing will not scale solely through academic research.
The startups we have worked with in the Creative Destruction Lab at the University of Toronto’s Rotman School of Management have already achieved short-term benefits from experimenting with quantum — for example, by creating innovations that have led to new materials. The long-term benefit from experimenting with quantum computing today is that a company will be prepared when sufficiently coherent, fault-tolerant quantum computers exist at scale. Those organizations will have a considerable first-mover advantage and be well positioned to capture new opportunities as this emerging technology comes to market.
Francesco Bova is an associate professor at the Rotman School of Management at the University of Toronto. Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare at the Rotman School of Management. Roger Melko is a professor in the Department of Physics & Astronomy at the University of Waterloo and an associate faculty member at the Perimeter Institute for Theoretical Physics.
5. F. Bova, A. Goldfarb, and R.G. Melko, “Quantum Economic Advantage,” Management Science, Articles in Advance, published online Dec. 2, 2022.
6. “Near term” in this context refers to the quantum computers of the next few years, which will be better than today’s but still not fully fault tolerant.
10. L.K. Grover, “A Fast Quantum Mechanical Algorithm for Database Search,” in “STOC ’96: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing” (Philadelphia: Association for Computing Machinery, 1996).
12. S.N. Genin, I.G. Ryabinkin, N.R. Paisley, et al., “Estimating Phosphorescent Emission Energies in Ir(III) Complexes Using Large-Scale Quantum Computing Simulations” (preprint, submitted in November 2021), https://arxiv.org. Since atoms and electrons themselves are quantum objects, they are natural candidates for study by quantum computers. In fact, such “simulation” (or emulation) was the first value proposition made for a quantum computer by California Institute of Technology physicist Richard Feynman in 1982.
By Fadesola Adetosoye, Danielle Hinton, Gayatri Shenai, and Ethan Thayumanavan
Affordable broadband with wraparound support could expand access to cost-efficient virtual health for underserved communities. Seven actions could help state and local leaders unlock this opportunity.
An electronic health record (EHR) is software that’s used to securely document, store, retrieve, share and analyze information about individual patient care. It enables a digital version of your health record. The Federal Electronic Health Record Modernization (FEHRM) office, along with other federal agencies, is implementing a single, common federal EHR to enhance patient care and provider effectiveness.
A federal EHR means that, as a provider, you will be able to access patient information entered into the EHR by different doctors, pharmacies, labs, and other points of care throughout your patient’s care. There is recognition across the board that the federal EHR saves providers time and enables more standard workflows to support enhanced clinical decision-making and patient safety.
This means you benefit:
Your patients spend less time repeating their health history, undergoing duplicative tests and managing printed health records.
You have access to patient data such as service treatment records, Service medals and honors, housing status and other information to ensure patients receive all their earned benefits as they transition to civilian life in a seamless, timely fashion.
You can start conversations more productively with your patients since you already have a more comprehensive picture of their health before their appointments.
You will make more informed decisions about your patient’s care as you have access to more relevant data.
You will have a more seamless care experience. Regardless of whether your patient gets care from the Department of Defense, Department of Veterans Affairs, Department of Homeland Security’s U.S. Coast Guard or another health care system participating in the joint health information exchange, you will be able to easily access and exchange patient health information to enhance quality of care and satisfaction.