Businessman Mark Cuban is making it his “mission” to shake up U.S. health care.
Cuban is known for owning the Dallas Mavericks and for his role as an angel investor on the ABC reality TV show “Shark Tank.”
He hasn’t shied away from the political arena as a prominent critic of President Trump. But Cuban says he has no interest in running for higher office, opting instead to disrupt health care.
“If I could change health care in this country, that would be amazing,” Cuban said in an interview for The Hill’s “Health Next Summit.”
For Cuban, that starts with his company Cost Plus Drugs, which launched in 2022 with the goal of letting pharmacies leapfrog middleman wholesalers.
“I’m not saying I’m going to be able to pull it off, but I know we’ve had a significant impact with CostPlusDrugs.com, and I think getting there and focusing on that and just changing people’s lives for the better on the health care side, that’s my mission,” he said.
Cuban said the company carries more than 2,500 medications and that when customers type in the name of a medication, they immediately see its actual cost followed by the markup, which is always 15 percent. The businessman said the company’s transparency is what sets it apart from its competitors, which include Amazon Prime, Costco and GoodRx.
“As crazy as it sounds, we are the only company that publishes their entire price list,” he said.
However, Cuban said Cost Plus Drugs has run into the roadblock of pharmacy benefit managers (PBMs). Those are the intermediaries that manage drug coverage for businesses across the country. PBMs and brand drug manufacturers negotiate discounts in the form of rebates, and the PBM then passes most of the rebates on to employers.
Critics argue PBMs do not always find the best deals and are forcing independent drug stores out of business by not paying them enough to cover their costs. The PBM industry says it saves employers and patients billions on drug costs.
“When we talk to manufacturers and we say, ‘Look, why can’t we get in the same rebate programs as these big PBMs?’ They tell us without telling us that they don’t want to lose their position on the formulary, which is hundreds of millions or billions of dollars in sales,” Cuban said.
“The big PBMs just won’t let them deal with us. That is our problem. Period. End of story,” he added.
But Cuban said he and his team are slowly making progress by going directly to drug manufacturer CEOs and working with pass-through PBMs, which pass discounts, rebates and fees directly through to the payer.
“All of the rebates are passed through so you actually get significant savings, and slowly but surely we’re having inroads,” he said.
Cuban is also touting the company’s partnership with independent pharmacies, paying them a $12 fill fee per prescription that fully reimburses the pharmacy and allows it to turn a profit.
“I want independent pharmacies to stay in business,” Cuban said. “I think as a country, we don’t want to see pharmacy deserts. We need them to stay in business. All those people, senior citizens that have been going to the same pharmacist for decades and that pharmacist knows them, they know their family, that’s important. That saves lives.”
“The problem is that the biggest PBMs won’t fully reimburse,” he said.
Cuban argued that Cost Plus Drugs would be able to dramatically cut the cost of brand medication if the pharmaceutical industry allows the company to participate in rebate programs.
“The amount of money that we can save taxpayers and patients will be, to paraphrase, like you’ve never seen before. Most in history ever,” he said, appearing to mimic Trump.
Cuban said he is hopeful the Trump administration will work to lower prices through the Centers for Medicare and Medicaid Services (CMS) and the Department of Health and Human Services (HHS). However, he added that he believes Republicans have been afraid to work with him, particularly those from the Department of Government Efficiency (DOGE).
Cuban described the approach of the Trump administration and DOGE head Elon Musk as “ready, fire, aim,” which, he added, “is no way to govern.”
“Particularly when the you-know-what, you know, rolls downhill onto the small-to-medium-sized communities and cities where all of a sudden, who knows how many people are losing jobs, who knows how many companies have to close because their grants have been cut, and who knows the impact on that community in terms of services they’re going to be able to offer, raising taxes,” he said.
Cuban is one of the most outspoken critics of Trump from the business community, and he opted to campaign with then-Vice President Kamala Harris last year.
Cuban said he does not particularly care about the future of the Democratic Party because of his status as an independent. However, he encouraged Democrats to go into communities to get a sense of how Trump’s tariffs and DOGE cuts are playing out locally.
“You’re not going to get people to all of the sudden turn on Donald Trump, but what you can do, as we’ve seen with some of the town halls, is get people to turn on some of the Republican House members who are going to have to make a really tough choice,” he said.
“If their communities are being negatively impacted by the all-at-once cuts of DOGE and the impact of the tariffs, well, put the pressure on those House Republicans to make a choice. Either you support Donald Trump, or you go against Donald Trump and say these tariffs and these cuts are awful for my town, or you lose your job,” he said.
What happens when quantum computers can finally crack encryption and break into the world’s best-kept secrets? It’s called Q-Day—the worst holiday maybe ever.
ONE DAY SOON, at a research lab near Santa Barbara or Seattle or a secret facility in the Chinese mountains, it will begin: the sudden unlocking of the world’s secrets. Your secrets.
Cybersecurity analysts call this Q-Day—the day someone builds a quantum computer that can crack the most widely used forms of encryption. These math problems have kept humanity’s intimate data safe for decades, but on Q-Day, everything could become vulnerable, for everyone: emails, text messages, anonymous posts, location histories, bitcoin wallets, police reports, hospital records, power stations, the entire global financial system.
“We’re kind of playing Russian roulette,” says Michele Mosca, who coauthored the most recent “Quantum Threat Timeline” report from the Global Risk Institute, which estimates how long we have left. “You’ll probably win if you only play once, but it’s not a good game to play.” When Mosca and his colleagues surveyed cybersecurity experts last year, the forecast was sobering: a one-in-three chance that Q-Day happens before 2035. And the chances it has already happened in secret? Some people I spoke to estimated 15 percent—about the same as you’d get from one spin of the revolver cylinder.
The corporate AI wars may have stolen headlines in recent years, but the quantum arms race has been heating up too. Where today’s AI pushes the limits of classical computing—the kind that runs on 0s and 1s—quantum technology represents an altogether different form of computing. By harnessing the spooky mechanics of the subatomic world, it can run on 0s, 1s, or anything in between. This makes quantum computers pretty terrible at, say, storing data but potentially very good at, say, finding the recipe for a futuristic new material (or your email password). The classical machine is doomed to a life of stepwise calculation: Try one set of ingredients, fail, scrap everything, try again. But quantum computers can explore many potential recipes simultaneously.
So, naturally, tech giants such as Google, Huawei, IBM, and Microsoft have been chasing quantum’s myriad positive applications—not only for materials science but also communications, drug development, and market analysis. China is plowing vast resources into state-backed efforts, and both the US and the European Union have pledged millions in funding to support homegrown quantum industries. Of course, whoever wins the race won’t just have the next great engine of world-saving innovation. They’ll also have the greatest code-breaking machine in history. So it’s normal to wonder: What kind of Q-Day will humanity get—and is there anything we can do to prepare?
If you had a universal picklock, you might tell everyone—or you might keep it hidden in your pocket for as long as you possibly could. From a typical person’s vantage point, maybe Q-Day wouldn’t be recognizable as Q-Day at all. Maybe it would look like a series of strange and apparently unconnected news stories spread out over months or years. London’s energy grid goes down on election day, plunging the city into darkness. A US submarine on a covert mission surfaces to find itself surrounded by enemy ships. Embarrassing material starts to show up online in greater and greater quantities: classified intelligence cables, presidential cover-ups, billionaires’ dick pics. In this scenario, it might be decades before we’re able to pin down exactly when Q-Day actually happened.
Then again, maybe the holder of the universal picklock prefers the disaster-movie outcome: everything, everywhere, all at once. Destroy the grid. Disable the missile silos. Take down the banking system. Open all the doors and let the secrets out.
SUPPOSE YOU ASK a classical computer to solve a simple math problem: Break the number 15 into its smallest prime factors. The computer would try all the options one by one and give you a near-instantaneous answer: 3 and 5. If you then ask the computer to factor a number with 1,000 digits, it would tackle the problem in exactly the same way—but the calculation would take millennia. This is the key to a lot of modern cryptography.
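The one-by-one approach the article describes can be sketched in a few lines of Python. This is a minimal illustration, not an implementation anyone would use for cryptanalysis: it factors 15 instantly, but the loop count grows with the square root of the input, which is why a 1,000-digit number is out of reach for any classical machine.

```python
# Classical trial-division factoring: try candidate divisors one by one.
# Fast for tiny numbers like 15; hopeless for the 1,000-digit numbers
# used in real cryptography.
def prime_factors(n: int) -> list[int]:
    """Return the prime factorization of n, smallest factors first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(15))  # [3, 5]
```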
Take RSA encryption, developed in the late 1970s and still used for securing email, websites, and much more. In RSA, you (or your encrypted messaging app of choice) create a private key, which consists of two or more large prime numbers. Those numbers, multiplied together, form part of your public key. When someone wants to send you a message, they use your public key to encrypt it. You’re the only person who knows the original prime numbers, so you’re the only person who can decrypt it. Until, that is, someone else builds a quantum computer that can use its spooky powers of parallel computation to derive the private key from the public one—not in millennia but in minutes. Then the whole system collapses.
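The whole scheme can be seen in miniature with a standard textbook example using deliberately tiny primes (p = 61, q = 53). This is a sketch of the idea the paragraph describes, not real cryptography: the security comes entirely from choosing primes hundreds of digits long, which this toy omits.

```python
# A toy RSA round trip. The private key is the pair of primes; the public
# key is their product n plus an exponent e. Anyone can encrypt with
# (e, n), but decryption requires d, which is easy to compute only if
# you know the primes.
p, q = 61, 53            # private primes (absurdly small, for illustration)
n = p * q                # 3233, published as part of the public key
phi = (p - 1) * (q - 1)  # 3120, computable only if you can factor n
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # sender encrypts with the public key
decrypted = pow(ciphertext, d, n)  # only the keyholder can undo it
print(decrypted)  # 65
```

A quantum attacker running Shor’s algorithm would recover p and q from the public n, and with them d, breaking every message encrypted under that key.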
The algorithm to do this already exists. In 1994, decades before anyone had built a real quantum computer, an AT&T Bell Labs researcher named Peter Shor designed the killer Q-Day app. Shor’s algorithm takes advantage of the fact that quantum computers run not on bits but on qubits. Rather than being locked in a state of 0 or 1, they can exist as both simultaneously—in superposition. When you run an operation on a handful of qubits in a given quantum state, you’re actually running that same operation on those same qubits in all their potential quantum states. With qubits, you’re not confined to trial and error. A quantum computer can explore all potential solutions simultaneously. You’re calculating probability distributions, waves of quantum feedback that pile onto each other and peak at the correct answer. With Shor’s algorithm, carefully designed to amplify certain mathematical patterns, that’s exactly what happens: Large numbers go in one end, factors come out the other.
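The quantum hardware is only needed for one step of Shor’s algorithm: finding the period of a number’s powers modulo N. The rest is classical number theory, which can be run today for a tiny case like N = 15. In this sketch the period search is done by brute force, standing in for the part a quantum computer would do in superposition.

```python
# The number-theoretic skeleton of Shor's algorithm, run classically
# for N = 15 with a = 7. A quantum computer's only job is the period-
# finding step, which it performs on all exponents in superposition.
from math import gcd

N, a = 15, 7  # a must be coprime to N

# Find the period r: the smallest r > 0 with a^r ≡ 1 (mod N).
# (This brute-force loop is the step Shor's algorithm does quantumly.)
r = 1
while pow(a, r, N) != 1:
    r += 1

# For an even period, gcd(a^(r/2) ± 1, N) yields nontrivial factors.
assert r % 2 == 0
f1 = gcd(pow(a, r // 2) - 1, N)
f2 = gcd(pow(a, r // 2) + 1, N)
print(r, f1, f2)  # 4 3 5
```

Here the period is 4, and the two gcd computations recover the factors 3 and 5: large numbers go in one end, factors come out the other.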
In theory, at least. Qubits are incredibly difficult to build in real life, because the slightest environmental interference can nudge them out of the delicate state of superposition, where they balance like a spinning coin. But Shor’s algorithm ignited interest in the field, and by the 2010s, a number of projects were starting to make progress on building the first qubits. In 2016, perhaps sensing the nascent threat of Q-Day, the US National Institute of Standards and Technology (NIST) launched a competition to develop quantum-proof encryption algorithms. These largely work by presenting quantum computers with complex multidimensional mazes, called structured lattices, that even they can’t navigate without directions.
In 2019, Google’s quantum lab in Santa Barbara claimed that it had achieved “quantum supremacy.” Its 53-qubit chip could complete in just 200 seconds a task that would have taken 100,000 conventional computers about 10,000 years. Google’s latest quantum processor, Willow, has 105 qubits. But to break encryption with Shor’s algorithm, a quantum computer will need thousands or even millions.
There are now hundreds of companies trying to build quantum computers using wildly different methods, all geared toward keeping qubits isolated from the environment and under control: superconducting circuits, trapped ions, molecular magnets, carbon nanospheres. While progress on hardware inches forward, computer scientists are refining quantum algorithms, trying to reduce the number of qubits required to run them. Each step brings Q-Day closer.
That’s bad news not just for RSA but also for a dizzying array of other systems that will be vulnerable on Q-Day. Security consultant Roger A. Grimes lists some of them in his book Cryptography Apocalypse: the DSA encryption used by many US government agencies until recently, the elliptic-curve cryptography used to secure cryptocurrencies like Bitcoin and Ethereum, the VPNs that let political activists and porn aficionados browse the web in secrecy, the random number generators that power online casinos, the smartcards that let you tap through locked doors at work, the security on your home Wi-Fi network, the two-factor authentication you use to log in to your email account.
Experts from one national security agency told me they break the resulting threats down into two broad areas: confidentiality and authentication. In other words, keeping secrets and controlling access to critical systems. Chris Demchak, a former US Army officer who is a professor of cybersecurity at the US Naval War College and spoke with me in a personal capacity, says that a Q-Day computer could let an adversary eavesdrop on classified military data in real time. “It would be very bad if they knew exactly where all of our submarines were,” Demchak says. “It would be very bad if they knew exactly what our satellites are looking at. And it would be very bad if they knew exactly how many missiles we had and their range.” The balance of geopolitical power in, say, the Taiwan Strait could quickly tilt.
Beyond that real-time threat to confidentiality, there’s also the prospect of “harvest now, decrypt later” attacks. Hackers aligned with the Chinese state have reportedly been hoovering up encrypted data for years in hopes of one day having a quantum computer that can crack it. “They wolf up everything,” Demchak told me. (The US almost certainly does this too.) The question then becomes: How long will your sensitive data remain valuable? “There might be some needles in that haystack,” says Brian Mullins, the CEO of Mind Foundry, which helps companies implement quantum technology. Your current credit card details might be irrelevant in 10 years, but your fingerprint won’t be. A list of intelligence assets from the end of the Iraq War might seem useless until one of those assets becomes a prominent politician.
The threat to authentication may be even scarier. “Pretty much anything that says a person is who they say they are is underpinned by encryption,” says Deborah Frincke, a computer scientist and national security expert at Sandia National Laboratories. “Some of the most sensitive and valuable infrastructure that we have would be open to somebody coming in and pretending to be the rightful owner and issuing some kind of command: to shut down a network, to influence the energy grid, to create financial disruption by shutting down the stock market.”
THE EXACT LEVEL of Q-Day chaos will depend on who has access to the first cryptographically relevant quantum computers. If it’s the United States, there will be a “fierce debate” at the highest levels of government, Demchak believes, over whether to release it for scientific purposes or keep it secret and use it for intelligence. “If a private company gets there first, the US will buy it and the Chinese will try to hack it,” she claims. If it’s one of the US tech companies, the government could put it under the strict export controls that now apply to AI chips.
Most nation-state attacks are on private companies—say, someone trying to break into a defense contractor like Lockheed Martin and steal plans for a next-generation fighter jet. But over time, as quantum computers become more widely available, the focus of the attacks could broaden. The likes of Microsoft and Amazon are already offering researchers access to their primitive quantum devices on the cloud—and big tech companies haven’t always been great at policing who uses their platforms. (The soldier who blew up a Cybertruck outside the Trump International Hotel in Las Vegas early this year queried ChatGPT to help plan the attack.) You could have a bizarre scenario where a cybercriminal uses Amazon’s cloud quantum computing platform to break into Amazon Web Services.
Cybercriminals with access to a quantum computer could use it to go after the same targets more effectively, or take bigger swings: hijacking the SWIFT international payments system to redirect money transfers, or conducting corporate espionage to collect kompromat. The earliest quantum computers probably won’t be able to run Shor’s algorithm that quickly—they might only get one or two keys a day. But combining a quantum computer with an artificial intelligence that can map out an organization’s weaknesses and highlight which keys to decrypt to cause the most damage could yield devastating results.
And then there’s Bitcoin. The cryptocurrency is exquisitely vulnerable to Q-Day. Because each block in the Bitcoin blockchain captures the data from the previous block, Bitcoin cannot be upgraded to post-quantum cryptography, according to Kapil Dhiman, CEO of Quranium, a post-quantum blockchain security company. “The only solution to that seems to be a hard fork—give birth to a new chain and the old chain dies.”
But that would require a massive organizational effort. First, 51 percent of Bitcoin node operators would have to agree. Then everyone who holds bitcoin would have to manually move their funds from the old chain to the new one (including the elusive Satoshi Nakamoto, the Bitcoin developer who controls wallets containing around $100 billion of the cryptocurrency). If Q-Day happens before the hard fork, there’s nothing to stop bitcoin going to zero. “It’s like a time bomb,” says Dhiman.
THAT BOMB GOING off will only be the beginning. When Q-Day becomes public knowledge, either via grim governmental address or cheery big-tech press release, the world will enter the post-quantum age. It will be an era defined by mistrust and panic—the end of digital security as we know it. “And then the scramble begins,” says Demchak.
All confidence in the confidentiality of our communications will collapse. Of course, it’s unlikely that everyone’s messages will actually be targeted, but the perception that you could be spied on at any time will change the way we live. And if NIST’s quantum-proof algorithms haven’t rolled out to your devices by that point, you face a real problem—because any attempts to install updates over the cloud will also be suspect. What if that download from Apple isn’t actually from Apple? Can you trust the instructions telling you to transfer your crypto to a new quantum-secure wallet?
Grimes, the author of Cryptography Apocalypse, predicts enormous disruptions. We might have to revert to Cold War methods of transmitting sensitive data. (After a major hack in 2011, one contractor reportedly asked its staff to stop using email for six weeks.) Fill a hard drive, lock it in a briefcase, put someone you trust on a plane with the payload handcuffed to their wrist. Or use one-time pads—books of pre-agreed codes to encrypt and decrypt messages. Quantum-secure, but not very scalable. Expect major industries—energy, finance, health care, manufacturing, transportation—to slow to a crawl as companies with sensitive data switch to paper-based methods of doing business and scramble to hire expensive cryptography consultants. There will be a spike in inflation. Most people might just accept the inevitable: a post-privacy society in which any expectation of secrecy evaporates unless you’re talking to someone in person in a secluded area with your phones switched off. Big Quantum is Watching You.
The best-case scenario looks something like Y2K, where we have a collective panic, make the necessary upgrades to encryption, and by the time Q-Day rolls around it’s such an anticlimax that it becomes a joke. That outcome may still be possible. Last summer, NIST released its first set of post-quantum encryption standards. One of Joe Biden’s last acts as president was to sign an executive order changing the deadline for government agencies to implement NIST’s algorithms from 2035 to “as soon as practicable.”
Already, NIST’s post-quantum cryptography has been rolled out on messaging platforms such as Signal and iMessage. Sources told me that sensitive national security data is probably being locked up in ways that are quantum-secure. But while your email account can easily be Q-proofed over the internet (assuming the update doesn’t come from a quantum imposter!), other things can’t. Public bodies like the UK’s National Health Service are still using hardware and software from the 1990s. “Microsoft is not going to upgrade some of its oldest operating systems to be post-quantum secure,” says Ali El Kaafarani, the CEO of PQShield, a company that makes quantum-resistant hardware. Updates to physical infrastructure can take decades, and some of that infrastructure has vulnerable cryptography in places it can’t be changed: The energy grid, military hardware, and satellites could all be at risk.
And there’s a balance to be struck. Rushing the transition risks introducing vulnerabilities that weren’t there before. “How do you make transitions slow enough that you can be confident and fast enough that you don’t dawdle?” asks Chris Ballance, CEO of Oxford Ionics, a quantum computing company. Some of those vulnerabilities might even be there by design: Memos leaked by Edward Snowden indicate that the NSA may have inserted a backdoor into a pseudorandom number generator that was adopted by NIST in 2006. “Anytime anybody says you should use this particular algorithm and there’s a nation-state behind it, you’ve got to wonder whether there’s a vested interest,” says Rob Young, director of Lancaster University’s Quantum Technology Centre.
Then again, several people I spoke to pointed out that any nation-state with the financial muscle and technical knowledge to build a quantum device that can run Shor’s algorithm could just as easily compromise the financial system, the energy grid, or an enemy’s security apparatus through conventional methods. Why invent a new computing paradigm when you can just bribe a janitor?
Long before quantum technology is good enough to break encryption, it will be commercially and scientifically useful enough to tilt the global balance. As researchers solve the engineering challenge of isolating qubits from the environment, they’ll develop exquisitely sensitive quantum sensors that will be able to unmask stealth ships and map hidden bunkers, or give us new insight into the human body. Similarly, pharma companies of the future could use quantum to steal a rival’s inventions—or use it to dream up even better ones. So ultimately the best way to stave off Q-Day may be to share those benefits around: Take the better batteries, the miracle drugs, the far-sighted climate forecasting, and use them to build a quantum utopia of new materials and better lives for everyone. Or—let the scramble begin.
MIT researchers developed a photon-shuttling “interconnect” that can facilitate remote entanglement, a key step toward a practical quantum computer.
Adam Zewe | MIT News
Publication Date: March 21, 2025
Quantum computers have the potential to solve complex problems that would be impossible for the most powerful classical supercomputer to crack.
Just like a classical computer has separate, yet interconnected, components that must work together, such as a memory chip and a CPU on a motherboard, a quantum computer will need to communicate quantum information between multiple processors.
Current architectures used to interconnect superconducting quantum processors are “point-to-point” in connectivity, meaning they require a series of transfers between network nodes, with compounding error rates.
On the way to overcoming these challenges, MIT researchers developed a new interconnect device that can support scalable, “all-to-all” communication, such that all superconducting quantum processors in a network can communicate directly with each other.
They created a network of two quantum processors and used their interconnect to send microwave photons back and forth on demand in a user-defined direction. Photons are particles of light that can carry quantum information.
The device includes a superconducting wire, or waveguide, that shuttles photons between processors and can be routed as far as needed. The researchers can couple any number of modules to it, efficiently transmitting information between a scalable network of processors.
They used this interconnect to demonstrate remote entanglement, a type of correlation between quantum processors that are not physically connected. Remote entanglement is a key step toward developing a powerful, distributed network of many quantum processors.
“In the future, a quantum computer will probably need both local and nonlocal interconnects. Local interconnects are natural in arrays of superconducting qubits. Ours allows for more nonlocal connections. We can send photons at different frequencies, times, and in two propagation directions, which gives our network more flexibility and throughput,” says Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) and lead author of a paper on the interconnect.
Her co-authors include Beatriz Yankelevich, a graduate student in the EQuS Group; senior author William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science (EECS) and professor of Physics, director of the Center for Quantum Engineering, and associate director of RLE; and others at MIT and Lincoln Laboratory. The research appears today in Nature Physics.
In the new work, the researchers took that architecture a step further, connecting two modules to a waveguide so that one can emit photons in a desired direction and the other can absorb them at the far end.
Each module is composed of four qubits, which serve as an interface between the waveguide carrying the photons and the larger quantum processors.
The qubits coupled to the waveguide emit and absorb photons, which are then transferred to nearby data qubits.
The researchers use a series of microwave pulses to add energy to a qubit, which then emits a photon. Carefully controlling the phase of those pulses enables a quantum interference effect that allows them to emit the photon in either direction along the waveguide. Reversing the pulses in time enables a qubit in another module any arbitrary distance away to absorb the photon.
“Pitching and catching photons enables us to create a ‘quantum interconnect’ between nonlocal quantum processors, and with quantum interconnects comes remote entanglement,” explains Oliver.
“Generating remote entanglement is a crucial step toward building a large-scale quantum processor from smaller-scale modules. Even after that photon is gone, we have a correlation between two distant, or ‘nonlocal,’ qubits. Remote entanglement allows us to take advantage of these correlations and perform parallel operations between two qubits, even though they are no longer connected and may be far apart,” Yankelevich explains.
However, transferring a photon between two modules is not enough to generate remote entanglement. The researchers need to prepare the qubits and the photon so the modules “share” the photon at the end of the protocol.
Generating entanglement
The team did this by halting the photon emission pulses halfway through their duration. In quantum mechanical terms, the photon is both retained and emitted. Classically, one can think that half-a-photon is retained and half is emitted.
Once the receiver module absorbs that “half-photon,” the two modules become entangled.
But as the photon travels, joints, wire bonds, and connections in the waveguide distort the photon and limit the absorption efficiency of the receiving module.
To generate remote entanglement with high enough fidelity, or accuracy, the researchers needed to maximize how often the photon is absorbed at the other end.
“The challenge in this work was shaping the photon appropriately so we could maximize the absorption efficiency,” Almanakly says.
They used a reinforcement learning algorithm to “predistort” the photon. The algorithm optimized the protocol pulses in order to shape the photon for maximal absorption efficiency.
When they implemented this optimized absorption protocol, they were able to show photon absorption efficiency greater than 60 percent.
This absorption efficiency is high enough to prove that the resulting state at the end of the protocol is entangled, a major milestone in this demonstration.
“We can use this architecture to create a network with all-to-all connectivity. This means we can have multiple modules, all along the same bus, and we can create remote entanglement among any pair of our choosing,” Yankelevich says.
In the future, they could improve the absorption efficiency by optimizing the path over which the photons propagate, perhaps by integrating modules in 3D instead of having a superconducting wire connecting separate microwave packages. They could also make the protocol faster so there are fewer chances for errors to accumulate.
“In principle, our remote entanglement generation protocol can also be expanded to other kinds of quantum computers and bigger quantum internet systems,” Almanakly says.
This work was funded, in part, by the U.S. Army Research Office, the AWS Center for Quantum Computing, and the U.S. Air Force Office of Scientific Research.
Last year, Nvidia’s annual GTC conference—hailed as the “Woodstock of AI”—drew a crowd of 18,000 to a packed arena befitting rock legends like the Rolling Stones. On stage, CEO Jensen Huang, clad in a shiny black leather jacket, delivered his keynote for the AI chip behemoth’s annual developer’s conference with the flair of a headlining act.
Today, a year later, Huang was onstage once again, shooting off a series of T-shirt cannons and clad this time in an edgy motorcycle black leather jacket worthy of a halftime show. This time, Nvidia-watchers tossed around the metaphor of the “Super Bowl of AI” like a football. Nvidia did not shy away from the pigskin comparison, offering a keynote “pre-game” event and a live broadcast that had guest commentators like Dell CEO Michael Dell calling plays on how Nvidia would continue to rule the AI world.
As Huang took the stage in front of a stadium-sized image of the Nvidia headquarters—making sure to highlight the “gaussian splatting” 3D rendering tech behind it to his high-tech audience—his message was clear, even if unspoken: Nvidia’s best defense is a strong offense. With recent reasoning models from Chinese startup DeepSeek shaking up AI, followed by others from companies including OpenAI, Baidu and Google, Nvidia wants its business customers to know they need its GPUs and software more than ever.
That’s because DeepSeek’s R1 model, which debuted in January, created some doubts about Nvidia’s momentum. The new model, its maker claimed, had been trained for a fraction of the cost and computing power of U.S. models. As a result, Nvidia’s stock took a beating from investors worried that companies would no longer need to buy as many of Nvidia’s chips.
Reasoning models require more computing power
But Huang thinks those selling off made a big mistake. Reasoning models, he said, require more computing power, not less. A lot more, in fact, because of the extra work they perform to produce more detailed answers, known in the parlance of AI folks as “inference.” The ChatGPT revolution was about a chatbot spitting out answers to queries—but today’s models must “think” harder, which requires more “tokens,” the fundamental units text models use, whether a whole word or just part of one.
The more tokens used, the more efficiency customers demand, and the more computing power AI reasoning models will require. So making sure Nvidia customers can process more tokens, faster, is the not-so-secret Nvidia play—and Huang did not need to mention DeepSeek until one hour into the keynote to get that point across.
All of the Nvidia GTC announcements that followed were positioned with that in mind. Stock-watchers might well have wanted to see an accelerated timeline for Nvidia’s new AI chip, the Vera Rubin, slated for release at the end of 2026, or more details about the company’s short-term roadmap. But Huang focused instead on a different point: while AI pundits had insisted over the past year that the pace of AI’s once-rapid improvements was slowing, Nvidia believes the computing scale that AI demands is growing faster than ever. Of course, that would be to Nvidia’s benefit in terms of revenue. “The amount of computation we need as a result of agentic AI, as a result of reasoning, is easily 100 times more than we thought we needed this time last year,” Huang said.
Will Nvidia’s efforts to drive growth be enough to win?
Nvidia’s announcements that followed were all about making sure customers understand they will have everything they need to keep up in a world where extreme speed at providing detailed answers and better reasoning will be the difference between a company’s AI success and failure. Blackwell GPUs, Nvidia’s latest top-of-the-line AI chips, are in full production, with 3.6 million of them already in use. An upgraded version, the Blackwell Ultra, boasts three times the performance. The new Vera Rubin chip and infrastructure are on the way. Nvidia’s “world’s smallest AI supercomputer” is at the ready. And software for AI agents is quickly moving into the physical world, including self-driving cars, robotics, and manufacturing.
But will Nvidia’s efforts to drive growth be enough to keep enterprise companies investing in Nvidia products? Will buying Nvidia’s costly AI chips—which can cost between $30,000 and $40,000 each—prove too expensive, given the still-unclear ROI of AI investments? Ultimately, Nvidia’s premium picks and shovels require enough customers willing to keep digging.
Huang is confident that there are enough—and that Nvidia’s Super Bowl win is not just a victory for the 31-year-old company. “Everyone wins,” he insisted.
Perhaps, but there is no doubt that as Nvidia seeks to establish a dynasty in the AI era, expectations remain higher than ever. Huang, for his part, appears undaunted even as AI continues to evolve at high speed. He’s always reaching for the brass ring, it seems—or in this case, the Super Bowl ring.
Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.
The past few months have demonstrated how AI can bring us together. Meta released a model that can translate speech from more than 100 languages, and people across the world are finding solace, assistance, and even romance with chatbots. However, it’s also abundantly clear how the technology is dividing us—for example, the Pentagon is using AI to detect humans on its “kill list.” Elsewhere, the changes Mark Zuckerberg has made to his social media company’s guidelines mean that hate speech is likely to become far more prevalent on our timelines.
Estimates for how much it would cost VA to fully deploy its new electronic health record system have ranged from $16.1 billion to almost $50 billion.
The Department of Veterans Affairs needs to figure out the cost of its troubled electronic health record modernization program as it moves to restart deployments of the new software, lawmakers and witnesses warned during a House hearing on Monday.
VA initially signed a $10 billion contract — which was later revised to over $16 billion — with Cerner in May 2018 to modernize its legacy health record system and make it interoperable with the Pentagon’s new health record, which was also provided by Cerner. Oracle later acquired Cerner in 2022.
Almost as soon as the new EHR system went live in 2020 at the Mann-Grandstaff VA Medical Center in Spokane, Washington, however, the modernization project was beset by a host of problems, including cost overruns, patient safety concerns and technical glitches. VA subsequently paused deployments of the EHR system in April 2023 as part of a “reset” to address problems at the facilities where the software had been deployed.
Following the successful joint VA-DOD rollout of the EHR software at the Captain James A. Lovell Federal Health Care Center in North Chicago, Illinois, last March — the sixth facility to use the new system — and citing significant improvements at the other sites using the Oracle Cerner EHR, VA announced in December that it was “beginning early-stage planning” to restart deployments in mid-2026.
Even as VA looks to implement the Oracle Cerner EHR system at four of its Michigan-based medical facilities in 2026, members of the House Veterans’ Affairs Technology Modernization Subcommittee pressed the department on Monday to pin down the long-term costs and time commitments needed to complete the modernization project.
Rep. Tom Barrett, R-Mich., the panel’s chairman, said he was not convinced VA has fixed all of the EHR program’s problems during its reset period and noted that — despite the fact VA is 7 years into the original 10-year contract for the project — “Congress has not received a schedule nor an up-to-date cost estimate to evaluate this program’s current state.”
Cost estimates for the entirety of the project have varied, with GAO Information Technology and Cybersecurity Director Carol Harris noting that these have ranged from $16.1 billion to almost $50 billion, the latter figure coming from an independent analysis.
“While the latter is more realistic, neither reflects the many changes and delays to the program,” Harris said.
Seema Verma, Oracle Health’s executive vice president, said that an accelerated deployment of the new system — bolstered by progress made during the reset phase — would help drive down the project’s total cost and said that the company did not agree with the $50 billion figure cited by Harris.
The department and Oracle Cerner previously renegotiated their existing contract in May 2023 to include more accountability provisions in the agreement and to change the terms of the remaining contract from a 5-year term to five 1-year terms.
With the current contract slated to fully expire in May 2028, and VA still needing to deploy the new software to more than 160 other VA medical facilities, Neil Evans — acting program executive director of VA’s Electronic Health Record Modernization Integration Office — said the project would not be completed by that time.
To get a better understanding of the full scope of the modernization project, acting VA Inspector General David Case said the department “must develop and maintain an integrated master schedule to clearly track and project the program’s cost to completion.”
Evans told the lawmakers that the department is committed to developing a detailed integrated master schedule and updated life cycle cost estimate to help guide the modernization project’s path forward.
“We recognize the importance of providing this information to inform decisionmaking and ensure the success of future deployments,” he said.
The department said it will be adding nine sites to its 2026 deployment schedule, although it added that it will announce the specific facilities later this year.
The Department of Veterans Affairs is planning to increase deployments of its new Oracle Cerner electronic health record system to 13 VA medical facilities in 2026, with the ultimate goal of completing the troubled modernization project as early as 2031.
VA said in December that it would be rolling out the new EHR software at a total of four medical sites in mid-2026. In a Thursday release outlining its new plans, however, VA said it would announce the nine additional medical facilities slated to receive the modernized EHR system later this year, after consulting with VA officials, clinicians and Oracle Cerner representatives.
The department said it is “pursuing a market-based approach to site selection for its deployments going forward,” which will enable it “to scale up the number of concurrent deployments, while also enabling staff to work as efficiently as possible.”
The announcement comes as VA moves out of an operational pause on most rollouts of the new software that was instituted in April 2023 following a series of technical glitches, patient safety concerns and training challenges.
VA has deployed the software at just six of its 170 medical centers. One of those rollouts occurred at a joint VA-Defense Department medical facility in North Chicago last March during the agency’s “reset” period, with that deployment being seen, in part, as a crucial test of the efforts to right the modernization project.
During his January confirmation hearing, current VA Secretary Doug Collins echoed bipartisan concerns about the EHR modernization project but told lawmakers that the effort — which is intended to help streamline the delivery of medical records for servicemembers transitioning from active duty to civilian life — was necessary and that “there’s no reason in the world we cannot get this done.”
At the time, Collins also told lawmakers that he believed the department could restart deployments faster than the mid-2026 timeframe that the then-Biden administration set for resuming EHR system rollouts.
“America’s Veterans deserve a medical records system that’s integrated across all VA and DOD components, and that is exactly what we will deliver,” Collins said in a Thursday statement. “We can and will move faster on this important priority. But we’re going to listen to our doctors, nurses and vendor partners along the way in order to ensure patient safety, quality and customer service.”
Although VA said its goal is to complete all deployments of the new EHR system as soon as 2031, that timeframe already exceeds the limits of the current contract it has with Oracle Cerner.
The department initially entered into a 10-year, $10 billion contract with Cerner in May 2018 to modernize its legacy health record system. The cost of that contract was later revised to over $16 billion, and Cerner was subsequently acquired by Oracle in 2022.
VA and Oracle Cerner subsequently renegotiated their existing contract in May 2023 to include more accountability provisions in the agreement, as well as to revise the remaining contract from an additional 5-year term to five 1-year terms.
In addition to pressing VA to develop a master schedule for future EHR system deployments, lawmakers have pushed for the department to figure out the total cost of its modernization project.
During a House hearing last month, Republicans and Democrats both called for the VA official overseeing the EHR system’s rollout to develop an updated lifecycle cost estimate. Estimates for the project’s full completion have ranged from $16.1 billion to almost $50 billion.
Last Thursday, OpenAI released GPT-4.5, a new version of its flagship large language model. With each release of its GPT models, OpenAI has shown that bigger means better. But there has been a lot of talk about how that approach is hitting a wall—including remarks from OpenAI’s former chief scientist Ilya Sutskever. In this edition of What’s Next in Tech, find out everything you need to know about OpenAI’s latest model.
OpenAI says GPT-4.5 is its biggest and best chat model yet—but it could be the last release in the company’s classic LLM lineup.
Since the releases of its so-called reasoning models o1 and o3, OpenAI has been pushing two product lines. GPT-4.5 is part of the non-reasoning lineup—what Nick Ryder, a research scientist at the company, calls “an installment in the classic GPT series.”
All large language models pick up patterns across the billions of documents they are trained on. Smaller models learn syntax and basic facts; bigger models can find more specific patterns, such as the emotional cues that signal when a speaker’s words are hostile, says Ryder: “All of these subtle patterns that come through a human conversation—those are the bits that these larger and larger models will pick up on.”
OpenAI won’t say exactly how big its new model is. But it claims the jump in scale from GPT-4o to GPT-4.5 is the same as the jump from GPT-3.5 to GPT-4o. Read the full story to learn more about how GPT-4.5 was trained, the benchmarks it has been tested on, and some of the model’s skills.
While Modular Open Systems Approach (MOSA) is familiar within the DoD, a new guidebook sheds light on the statutes and policies that require it. The Implementing a Modular Open Systems Approach in Department of Defense Programs guidebook delivers practical advice on planning, executing, and assessing MOSA.

The Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)) Systems Engineering and Architecture (SE&A) prepared this guidebook. The office works to ensure architectures are modular and open to enhance competition, incorporate innovation, support interoperability, and enable rapid insertion of technology in DoD acquisitions, providing cutting-edge capabilities to the warfighter.

Inside this new guidebook you will find:
✅ Implementation principles and best practices
📈 Benefits and challenges of MOSA
💡 Real-world insights from DoD and industry experts

Learn more about MOSA: https://lnkd.in/dexBVdVN
Download the guidebook here: https://lnkd.in/eQnE4m-W

#MOSA #DoD #Acquisition #Innovation #OpenSystems