healthcarereimagined

Envisioning healthcare for the 21st century

Photonic processor could enable ultrafast AI computations with extreme energy efficiency – MIT News

Posted by timmreardon on 12/26/2024
Posted in: Uncategorized.

This new device uses light to perform the key operations of a deep neural network on a chip, opening the door to high-speed processors that can learn in real-time.

Adam Zewe | MIT News

December 2, 2024

The deep neural network models that power today’s most demanding machine-learning applications have grown so large and complex that they are pushing the limits of traditional electronic computing hardware.

Photonic hardware, which can perform machine-learning computations with light, offers a faster and more energy-efficient alternative. However, there are some types of neural network computations that a photonic device can’t perform, requiring the use of off-chip electronics or other techniques that hamper speed and efficiency.

Building on a decade of research, scientists from MIT and elsewhere have developed a new photonic chip that overcomes these roadblocks. They demonstrated a fully integrated photonic processor that can perform all the key computations of a deep neural network optically on the chip.

The optical device was able to complete the key computations for a machine-learning classification task in less than half a nanosecond while achieving more than 92 percent accuracy — performance that is on par with traditional hardware.

The chip, composed of interconnected modules that form an optical neural network, is fabricated using commercial foundry processes, which could enable the scaling of the technology and its integration into electronics.

In the long run, the photonic processor could lead to faster and more energy-efficient deep learning for computationally demanding applications like lidar, scientific research in astronomy and particle physics, or high-speed telecommunications.

“There are a lot of cases where how well the model performs isn’t the only thing that matters, but also how fast you can get an answer. Now that we have an end-to-end system that can run a neural network in optics, at a nanosecond time scale, we can start thinking at a higher level about applications and algorithms,” says Saumil Bandyopadhyay ’17, MEng ’18, PhD ’23, a visiting scientist in the Quantum Photonics and AI Group within the Research Laboratory of Electronics (RLE) and a postdoc at NTT Research, Inc., who is the lead author of a paper on the new chip.

Bandyopadhyay is joined on the paper by Alexander Sludds ’18, MEng ’19, PhD ’23; Nicholas Harris PhD ’17; Darius Bunandar PhD ’19; Stefan Krastanov, a former RLE research scientist who is now an assistant professor at the University of Massachusetts at Amherst; Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research; Matthew Streshinsky, a former silicon photonics lead at Nokia who is now co-founder and CEO of Enosemi; Michael Hochberg, president of Periplous, LLC; and Dirk Englund, a professor in the Department of Electrical Engineering and Computer Science, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE, and senior author of the paper. The research appears today in Nature Photonics.

Machine learning with light

Deep neural networks are composed of many interconnected layers of nodes, or neurons, that operate on input data to produce an output. One key operation in a deep neural network involves the use of linear algebra to perform matrix multiplication, which transforms data as it is passed from layer to layer.

But in addition to these linear operations, deep neural networks perform nonlinear operations that help the model learn more intricate patterns. Nonlinear operations, like activation functions, give deep neural networks the power to solve complex problems.
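
To make these two kinds of operations concrete, here is a minimal sketch of a single network layer in Python; the shapes and the choice of ReLU are illustrative, not details from the paper.

```python
import numpy as np

# A single layer: a linear transform (matrix multiplication) followed by
# a nonlinear activation. Shapes and activation are illustrative only.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))       # weights mapping 8 inputs to 4 outputs
x = rng.normal(size=8)            # input vector from the previous layer

linear = W @ x                    # the operation photonic hardware handles well
output = np.maximum(linear, 0.0)  # ReLU: the nonlinear step
print(output)
```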

In 2017, Englund’s group, along with researchers in the lab of Marin Soljačić, the Cecil and Ida Green Professor of Physics, demonstrated an optical neural network on a single photonic chip that could perform matrix multiplication with light.

But at the time, the device couldn’t perform nonlinear operations on the chip. Optical data had to be converted into electrical signals and sent to a digital processor to perform nonlinear operations.

“Nonlinearity in optics is quite challenging because photons don’t interact with each other very easily. That makes it very power consuming to trigger optical nonlinearities, so it becomes challenging to build a system that can do it in a scalable way,” Bandyopadhyay explains.

They overcame that challenge by designing devices called nonlinear optical function units (NOFUs), which combine electronics and optics to implement nonlinear operations on the chip.

The researchers built an optical deep neural network on a photonic chip using three layers of devices that perform linear and nonlinear operations.

A fully-integrated network

At the outset, their system encodes the parameters of a deep neural network into light. Then, an array of programmable beamsplitters, which was demonstrated in the 2017 paper, performs matrix multiplication on those inputs.

The data then pass to programmable NOFUs, which implement nonlinear functions by siphoning off a small amount of light to photodiodes that convert optical signals to electric current. This process, which eliminates the need for an external amplifier, consumes very little energy.

“We stay in the optical domain the whole time, until the end when we want to read out the answer. This enables us to achieve ultra-low latency,” Bandyopadhyay says.
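
As a rough mental model of the NOFU described above, consider the toy sketch below. The tap ratio and response curve are assumptions for illustration, not the device physics reported in the paper.

```python
import numpy as np

def nofu(field, tap=0.1):
    """Toy model of a nonlinear optical function unit (NOFU).

    A small fraction `tap` of the optical power is siphoned to a
    photodiode; the photocurrent then biases a modulator acting on the
    light that stayed on the optical path, so transmission depends on
    the signal's own intensity -- a nonlinearity. All constants here
    are illustrative assumptions.
    """
    tapped_power = tap * np.abs(field) ** 2          # power seen by the photodiode
    through = np.sqrt(1.0 - tap) * field             # light that stays optical
    transmission = np.cos(np.pi * tapped_power / 2)  # photocurrent-driven modulator
    return through * transmission

signal = np.linspace(0.0, 2.0, 5)
print(nofu(signal))  # output is no longer proportional to the input
```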

Achieving such low latency enabled them to efficiently train a deep neural network on the chip, a process known as in situ training that typically consumes a huge amount of energy in digital hardware.

“This is especially useful for systems where you are doing in-domain processing of optical signals, like navigation or telecommunications, but also in systems that you want to learn in real time,” he says.

The photonic system achieved more than 96 percent accuracy during training tests and more than 92 percent accuracy during inference, which is comparable to traditional hardware. In addition, the chip performs key computations in less than half a nanosecond.     

“This work demonstrates that computing — at its essence, the mapping of inputs to outputs — can be compiled onto new architectures of linear and nonlinear physics that enable a fundamentally different scaling law of computation versus effort needed,” says Englund.

The entire circuit was fabricated using the same infrastructure and foundry processes that produce CMOS computer chips. This could enable the chip to be manufactured at scale, using tried-and-true techniques that introduce very little error into the fabrication process.

Scaling up their device and integrating it with real-world electronics like cameras or telecommunications systems will be a major focus of future work, Bandyopadhyay says. In addition, the researchers want to explore algorithms that can leverage the advantages of optics to train systems faster and with better energy efficiency.

This research was funded, in part, by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research.

Article link: https://news.mit.edu/2024/photonic-processor-could-enable-ultrafast-ai-computations-1202

Procedures for deploying and using IPv6 in Department of Defense – DOD CIO

Posted by timmreardon on 12/26/2024
Posted in: Uncategorized.

The Office of the DoD Chief Information Officer approved DoDI 8440.02 for publication. This instruction establishes policy, assigns responsibilities, and prescribes procedures for deploying and using IPv6 in Department of Defense information systems. The transition to IPv6 is critical to ensure availability of address space for an increasing number of connected devices. One of the many reasons for the transition is to address the Internet of Things (IoT) and future battlefields in which all weapons and fighters have an IPv6 address. This brings the network onto the battlefield, something the Department has been working toward since 2005. The policy maintains momentum for the transition, which has accelerated across the federal government since the fall of 2020 with the publication of Office of Management and Budget Memorandum M-21-07. Implementation of the policy will harmonize the approach and pace of DoD IPv6 implementation.
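
The address-space argument is easy to make concrete; the comparison below uses only the two protocols' standard address widths (32 and 128 bits).

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_addresses = 2**32
ipv6_addresses = 2**128
print(f"IPv4: {ipv4_addresses:,}")    # ~4.3 billion, effectively exhausted
print(f"IPv6: {ipv6_addresses:.2e}")  # ~3.4e38, room for pervasive IoT devices
```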

https://lnkd.in/gJBpHCt5

Article link: https://www.linkedin.com/posts/dod-cio_dodi-844002-dod-implementation-of-internet-activity-7278087848069623808-0Bxr?

AI’s emissions are about to skyrocket even further – MIT Technology Review

Posted by timmreardon on 12/22/2024
Posted in: Uncategorized.

Data center emissions have tripled since 2018. As more complex AI models like OpenAI’s Sora see broad release, those figures will likely go through the roof.

By James O’Donnell

December 13, 2024

It’s no secret that the current AI boom is using up immense amounts of energy. Now we have a better idea of how much. 

A new paper, from teams at the Harvard T.H. Chan School of Public Health and UCLA Fielding School of Public Health, examined 2,132 data centers operating in the United States (78% of all facilities in the country). These facilities—essentially buildings filled to the brim with rows of servers—are where AI models get trained, and they also get “pinged” every time we send a request through models like ChatGPT. They require huge amounts of energy both to power the servers and to keep them cool. 

Since 2018, carbon emissions from data centers in the US have tripled. For the 12 months ending August 2024, data centers were responsible for 105 million metric tons of CO2, accounting for 2.18% of national emissions (for comparison, domestic commercial airlines are responsible for about 131 million metric tons). About 4.59% of all the energy used in the US goes toward data centers, a figure that’s doubled since 2018.
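
A quick back-of-the-envelope check, using only the figures quoted above, shows the numbers are mutually consistent:

```python
# Back-of-the-envelope check of the article's figures.
data_centers_mt = 105        # Mt CO2, 12 months ending August 2024
share_of_national = 0.0218   # 2.18 percent of US emissions
airlines_mt = 131            # Mt CO2, domestic commercial airlines

implied_us_total = data_centers_mt / share_of_national
print(f"Implied US total: {implied_us_total:,.0f} Mt CO2")              # ~4,800 Mt
print(f"Data centers vs airlines: {data_centers_mt / airlines_mt:.0%}") # ~80%
```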

It’s difficult to put a number on how much AI in particular, which has been booming since ChatGPT launched in November 2022, is responsible for this surge. That’s because data centers process lots of different types of data—in addition to training or pinging AI models, they do everything from hosting websites to storing your photos in the cloud. However, the researchers say, AI’s share is certainly growing rapidly as nearly every segment of the economy attempts to adopt the technology.

“It’s a pretty big surge,” says Eric Gimon, a senior fellow at the think tank Energy Innovation, who was not involved in the research. “There’s a lot of breathless analysis about how quickly this exponential growth could go. But it’s still early days for the business in terms of figuring out efficiencies, or different kinds of chips.”

Notably, the sources for all this power are particularly “dirty.” Since so many data centers are located in coal-producing regions, like Virginia, the “carbon intensity” of the energy they use is 48% higher than the national average. The paper, which was published on arXiv and has not yet been peer-reviewed, found that 95% of data centers in the US are built in places with sources of electricity that are dirtier than the national average. 

There are causes other than simply being located in coal country, says Falco Bargagli-Stoffi, an author of the paper and Assistant Professor at UCLA Fielding School of Public Health. “Dirtier energy is available throughout the entire day,” he says, and plenty of data centers require that to maintain peak operation 24-7. “Renewable energy, like wind or solar, might not be as available.” Political or tax incentives, and local pushback, can also affect where data centers get built.  

One key shift in AI right now means that the field’s emissions are soon likely to skyrocket. AI models are rapidly moving from fairly simple text generators like ChatGPT toward highly complex image, video, and music generators. Until now, many of these “multimodal” models have been stuck in the research phase, but that’s changing. 

OpenAI released its video generation model Sora to the public on December 9, and its website has been so flooded with traffic from people eager to test it out that it is still not functioning properly. Competing models, like Veo from Google and Movie Gen from Meta, have still not been released publicly, but if those companies follow OpenAI’s lead as they have in the past, they might be soon. Music generation models from Suno and Udio are growing (despite lawsuits), and Nvidia released its own audio generator last month. Google is working on its Astra project, which will be a video-AI companion that can converse with you about your surroundings in real time.

“As we scale up to images and video, the data sizes increase exponentially,” says Gianluca Guidi, a PhD student in artificial intelligence at the University of Pisa and IMT Lucca and visiting researcher at Harvard, who is the paper’s lead author. Combine that with wider adoption, he says, and emissions will soon jump.

One of the goals of the researchers was to build a more reliable way to get snapshots of just how much energy data centers are using. That’s been a more complicated task than you might expect, given that the data is dispersed across a number of sources and agencies. They’ve now built a portal that shows data center emissions across the country. The long-term goal of the data pipeline is to inform future regulatory efforts to curb emissions from data centers, which are predicted to grow enormously in the coming years.

“There’s going to be increased pressure, between the environmental and sustainability-conscious community and Big Tech,” says Francesca Dominici, director of the Harvard Data Science Initiative, Harvard Professor and another coauthor. “But my prediction is that there is not going to be regulation. Not in the next four years.”

by James O’Donnell

Article link: https://www.technologyreview.com/2024/12/13/1108719/ais-emissions-are-about-to-skyrocket-even-further/?

How Silicon Valley is disrupting democracy – MIT Technology Review

Posted by timmreardon on 12/22/2024
Posted in: Uncategorized.

Two books explore the price we’ve paid in handing over unprecedented power to Big Tech—and explain why it’s imperative we start taking it back.

By Bryan Gardiner

December 13, 2024

The internet loves a good neologism, especially if it can capture a purported vibe shift or explain a new trend. In 2013, the columnist Adrian Wooldridge coined a word that eventually did both. Writing for the Economist, he warned of the coming “techlash,” a revolt against Silicon Valley’s rich and powerful fueled by the public’s growing realization that these “sovereigns of cyberspace” weren’t the benevolent bright-future bringers they claimed to be.

While Wooldridge didn’t say precisely when this techlash would arrive, it’s clear today that a dramatic shift in public opinion toward Big Tech and its leaders did in fact happen—and is arguably still happening. Say what you will about the legions of Elon Musk acolytes on X, but if an industry and its executives can bring together the likes of Elizabeth Warren and Lindsey Graham in shared condemnation, it’s definitely not winning many popularity contests.

To be clear, there have always been critics of Silicon Valley’s very real excesses and abuses. But for the better part of the last two decades, many of those voices of dissent were either written off as hopeless Luddites and haters of progress or drowned out by a louder and far more numerous group of techno-optimists. Today, those same critics (along with many new ones) have entered the fray once more, rearmed with popular Substacks, media columns, and—increasingly—book deals.

Two of the more recent additions to the flourishing techlash genre—Rob Lalka’s The Venture Alchemists: How Big Tech Turned Profits into Power and Marietje Schaake’s The Tech Coup: How to Save Democracy from Silicon Valley—serve as excellent reminders of why it started in the first place. Together, the books chronicle the rise of an industry that is increasingly using its unprecedented wealth and power to undermine democracy, and they outline what we can do to start taking some of that power back.

Lalka is a business professor at Tulane University, and The Venture Alchemists focuses on how a small group of entrepreneurs managed to transmute a handful of novel ideas and big bets into unprecedented wealth and influence. While the names of these demigods of disruption will likely be familiar to anyone with an internet connection and a passing interest in Silicon Valley, Lalka also begins his book with a page featuring their nine (mostly) young, (mostly) smiling faces. 

There are photos of the famous founders Mark Zuckerberg, Larry Page, and Sergey Brin; the VC funders Keith Rabois, Peter Thiel, and David Sacks; and a more motley trio made up of the disgraced former Uber CEO Travis Kalanick, the ardent eugenicist and reputed father of Silicon Valley Bill Shockley (who, it should be noted, died in 1989), and a former VC and the future vice president of the United States, JD Vance.

To his credit, Lalka takes this medley of tech titans and uses their origin stories and interrelationships to explain how the so-called Silicon Valley mindset (mind virus?) became not just a fixture in California’s Santa Clara County but also the preeminent way of thinking about success and innovation across America.

This approach to doing business, usually cloaked in a barrage of cringey innovation-speak—disrupt or be disrupted, move fast and break things, better to ask for forgiveness than permission—can often mask a darker, more authoritarian ethos, according to Lalka. 

One of the nine entrepreneurs in the book, Peter Thiel, has written that “I no longer believe that freedom and democracy are compatible” and that “competition [in business] is for losers.” Many of the others think that all technological progress is inherently good and should be pursued at any cost and for its own sake. A few also believe that privacy is an antiquated concept—even an illusion—and that their companies should be free to hoard and profit off our personal data. Most of all, though, Lalka argues, these men believe that their newfound power should be unconstrained by governments, regulators, or anyone else who might have the gall to impose some limitations.

Where exactly did these beliefs come from? Lalka points to people like the late free-market economist Milton Friedman, who famously asserted that a company’s only social responsibility is to increase profits, as well as to Ayn Rand, the author, philosopher, and hero to misunderstood teenage boys everywhere who tried to turn selfishness into a virtue. 

The Venture Alchemists: How Big Tech Turned Profits into Power
Rob Lalka

COLUMBIA BUSINESS SCHOOL PUBLISHING, 2024

It’s a somewhat reductive and not altogether original explanation of Silicon Valley’s libertarian inclinations. What ultimately matters, though, is that many of these “values” were subsequently encoded into the DNA of the companies these men founded and funded—companies that today shape how we communicate with one another, how we share and consume news, and even how we think about our place in the world. 

The Venture Alchemists is at its strongest when it’s describing the early-stage antics and on-campus controversies that shaped these young entrepreneurs or, in many cases, simply revealed who they’ve always been. Lalka is a thorough and tenacious researcher, as the book’s 135 pages of endnotes suggest. And while nearly all these stories have been told before in other books and articles, he still manages to provide new perspectives and insights from sources like college newspapers and leaked documents.

One thing the book is particularly effective at is deflating the myth that these entrepreneurs were somehow gifted seers of (and investors in) a future the rest of us simply couldn’t comprehend or predict. 

Sure, someone like Thiel made what turned out to be a savvy investment in Facebook early on, but he also made some very costly mistakes with that stake. As Lalka points out, Thiel’s Founders Fund dumped tens of millions of shares shortly after Facebook went public, and Thiel himself went from owning 2.5% of the company in 2012 to 0.000004% less than a decade later (around the same time Facebook hit its trillion-dollar valuation). Throw in his objectively terrible wagers in 2008, 2009, and beyond, when he effectively shorted what turned out to be one of the longest bull markets in world history, and you get the impression he’s less oracle and more ideologue who happened to take some big risks that paid off. 

One of Lalka’s favorite mantras throughout The Venture Alchemists is that “words matter.” Indeed, he uses a lot of these entrepreneurs’ own words to expose their hypocrisy, bullying, juvenile contrarianism, casual racism, and—yes—outright greed and self-interest. It is not a flattering picture, to say the least. 

Unfortunately, instead of simply letting those words and deeds speak for themselves, Lalka often feels the need to interject with his own, frequently enjoining readers against finger-pointing or judging these men too harshly even after he’s chronicled their many transgressions. Whether this is done to try to convey some sense of objectivity or simply to remind readers that these entrepreneurs are complex and complicated men making difficult decisions, it doesn’t work. At all.

For one thing, Lalka clearly has his own strong opinions about the behavior of these entrepreneurs—opinions he doesn’t try to disguise. At one point in the book he suggests that Kalanick’s alpha-male, dominance-at-any-cost approach to running Uber is “almost, but not quite” like rape, which is maybe not the comparison you’d make if you wanted to seem like an arbiter of impartiality. And if he truly wants readers to come to a different conclusion about these men, he certainly doesn’t provide many reasons for doing so. Simply telling us to “judge less, and discern more” seems worse than a cop-out. It comes across as “almost, but not quite” like victim-blaming—as if we’re somehow just as culpable as they are for using their platforms and buying into their self-mythologizing. 

Equally frustrating is the crescendo of empty platitudes that ends the book. “The technologies of the future must be pursued thoughtfully, ethically, and cautiously,” Lalka says after spending 313 pages showing readers how these entrepreneurs have willfully ignored all three adverbs. What they’ve built instead are massive wealth-creation machines that divide, distract, and spy on us. Maybe it’s just me, but that kind of behavior seems ripe not only for judgment, but also for action.

So what exactly do you do with a group of men seemingly incapable of serious self-reflection—men who believe unequivocally in their own greatness and who are comfortable making decisions on behalf of hundreds of millions of people who did not elect them, and who do not necessarily share their values?

You regulate them, of course. Or at least you regulate the companies they run and fund. In Marietje Schaake’s The Tech Coup, readers are presented with a road map for how such regulation might take shape, along with an eye-opening account of just how much power has already been ceded to these corporations over the past 20 years.

There are companies like NSO Group, whose powerful Pegasus spyware tool has been sold to autocrats, who have in turn used it to crack down on dissent and monitor their critics. Billionaires are now effectively making national security decisions on behalf of the United States and using their social media companies to push right-wing agitprop and conspiracy theories, as Musk does with his Starlink satellites and X. Ride-sharing companies use their own apps as propaganda tools and funnel hundreds of millions of dollars into ballot initiatives to undo laws they don’t like. The list goes on and on. According to Schaake, this outsize and largely unaccountable power is changing the fundamental ways that democracy works in the United States. 

“In many ways, Silicon Valley has become the antithesis of what its early pioneers set out to be: from dismissing government to literally taking on equivalent functions; from lauding freedom of speech to becoming curators and speech regulators; and from criticizing government overreach and abuse to accelerating it through spyware tools and opaque algorithms,” she writes.

Schaake, who’s a former member of the European Parliament and the current international policy director at Stanford University’s Cyber Policy Center, is in many ways the perfect chronicler of Big Tech’s power grab. Beyond her clear expertise in the realms of governance and technology, she’s also Dutch, which makes her immune to the distinctly American disease that seems to equate extreme wealth, and the power that comes with it, with virtue and intelligence. 

This resistance to the various reality-distortion fields emanating from Silicon Valley plays a pivotal role in her ability to see through the many justifications and self-serving solutions that come from tech leaders themselves. Schaake understands, for instance, that when someone like OpenAI’s Sam Altman gets in front of Congress and begs for AI regulation, what he’s really doing is asking Congress to create a kind of regulatory moat between his company and any other startups that might threaten it, not acting out of some genuine desire for accountability or governmental guardrails. 

The Tech Coup: How to Save Democracy from Silicon Valley

Marietje Schaake

PRINCETON UNIVERSITY PRESS, 2024

Like Shoshana Zuboff, the author of The Age of Surveillance Capitalism, Schaake believes that “the digital” should “live within democracy’s house”—that is, technologies should be developed within the framework of democracy, not the other way around. To accomplish this realignment, she offers a range of solutions, from banning what she sees as clearly antidemocratic technologies (like face-recognition software and other spyware tools) to creating independent teams of expert advisors to members of Congress (who are often clearly out of their depth when attempting to understand technologies and business models). 

Predictably, all this renewed interest in regulation has inspired its own backlash in recent years—a kind of “tech revanchism,” to borrow a phrase from the journalist James Hennessy. In addition to familiar attacks, such as trying to paint supporters of the techlash as somehow being antitechnology (they’re not), companies are also spending massive amounts of money to bolster their lobbying efforts. 

Some venture capitalists, like LinkedIn cofounder Reid Hoffman, who made big donations to the Kamala Harris presidential campaign, wanted to evict Federal Trade Commission chair Lina Khan, claiming that regulation is killing innovation (it isn’t) and removing the incentives to start a company (it’s not). And then of course there’s Musk, who now seems to be in a league of his own when it comes to how much influence he may exert over Donald Trump and the government that his companies have valuable contracts with.

What all these claims of victimization and subsequent efforts to buy their way out of regulatory oversight miss is that there’s actually a vast and fertile middle ground between simple techno-optimism and techno-skepticism. As the New Yorker contributor Cal Newport and others have noted, it’s entirely possible to support innovations that can significantly improve our lives without accepting that every popular invention is good or inevitable.

Regulating Big Tech will be a crucial part of leveling the playing field and ensuring that the basic duties of a democracy can be fulfilled. But as both Lalka and Schaake suggest, another battle may prove even more difficult and contentious. This one involves undoing the flawed logic and cynical, self-serving philosophies that have led us to the point where we are now. 

What if we admitted that constant bacchanals of disruption are in fact not all that good for our planet or our brains? What if, instead of “creative destruction,” we started fetishizing stability, and in lieu of putting “dents in the universe,” we refocused our efforts on fixing what’s already broken? What if—and hear me out—we admitted that technology might not be the solution to every problem we face as a society, and that while innovation and technological change can undoubtedly yield societal benefits, they don’t have to be the only measures of economic success and quality of life? 

When ideas like these start to sound less like radical concepts and more like common sense, we’ll know the techlash has finally achieved something truly revolutionary. 

Bryan Gardiner is a writer based in Oakland, California.

by Bryan Gardiner

Article link: https://www.technologyreview.com/2024/12/13/1108459/book-review-silicon-valley-democracy-techlash-rob-lalka-venture-alchemists-marietje-schaake-tech-coup/?

It’s the end of the internet as we know it—and I feel fine – Fastcompany

Posted by timmreardon on 12/18/2024
Posted in: Uncategorized.

Interesting reflection on the state of the internet reveals a sobering yet necessary truth:

The platforms that once defined the digital age are crumbling under the weight of their own success. Once celebrated for their utility, innovation, and democratizing power, today’s online spaces are increasingly marred by corporate greed, declining quality, and a disheartening indifference toward users.

Search engines, once gateways to meaningful information, are now rife with low-quality results, SEO-driven content, and aggressive advertising.

Amazon’s virtual shelves echo with a similar degradation, prioritizing paid visibility over quality or relevance.

Social media has devolved into fragmented spaces: algorithmically manipulated, ad-saturated, and devoid of their original vibrancy. Even platforms that still foster joy, like TikTok, face existential threats rooted in geopolitics and regulation.

Adding to the erosion is the rise of generative AI, flooding the web with artificial content that blurs the line between reality and fabrication. The “slop” Nover describes (fake images, untrustworthy reviews, and hollow engagement) underscores a more significant issue: the commodification of our online experience. Today’s digital landscape is not broken but neglected, treated by tech giants as a vessel for profit rather than a public good.

How much should we rely on systems that no longer value us? What does a healthier, more authentic digital future look like?

If this truly marks the end of the internet “as we know it,” perhaps that’s not a loss but a call to reimagine what the web can and could be.

#Internet #Web #AI #Trust

Maybe this was the last year of the usable web. If so, blame corporate greed.

BY SCOTT NOVER

The internet feels like it’s falling apart. 

Not literally. Structurally, it’s sound. There are plenty of fiber optic cables lining ocean floors, cell towers looming above cityscapes, and server-filled data centers. But the very foundations of the utilitarian web—the platforms that undergird our everyday experiences online—feel shaky, pulsing with the first foreshocks of a collapse.

To start, nothing seems to work anymore. Google’s search engine once provided directory-level assistance to the denizens of the internet. Now it’s chock full of ads, sidebars, SEO-optimized clickbait, and artificial intelligence-powered guesstimations of possible answers to people’s questions. Earlier this year, a group of German researchers found that Google ranked product-review pages highly when they had low-quality text, tons of affiliate links to ecommerce sites, and were riddled with SEO tricks that don’t exactly coincide with quality. In other words, Google was letting its platform get co-opted by the lowest of the low.

On Amazon, the digital shelves are littered with sponsored products and cheap replicas of popular items. On either Amazon or Google, you’ll often need to scroll for a bit to get anything remotely helpful or relevant when you search. Government antitrust complaints against both companies have essentially called them toll booths for advertisers, who need to pay to play to get noticed, which has degraded those services in the process. When big tech giants make their relied-on services worse, that’s bad for consumers—even if they don’t have to pay more.

On social media, the situation is even more dire. Facebook is functionally good for fighting with high school friends about politics, getting birthday reminders, and learning who is married or pregnant. There’s almost no news on the platform anymore, and my feed is full of meme pages that I would never follow, repurposed TikToks posted as Reels, and—you guessed it—low-quality ads. X is a right-wing cesspool full of Elon Musk sycophants, tech bro hustle posters, and—good lord—the worst ads you’ve ever seen outside of Truth Social. TikTok, one of the only interesting, serendipitous, and (usually) joyful places on the internet, is in danger of being banned from the United States in the next month unless the conservative Supreme Court or President-elect Donald Trump himself intervenes to save it.

And the rise of generative AI has meant that every one of these platforms is now infused with what’s most commonly called slop: insultingly bad fake images often designed to trick or enrage people. You can find Facebook Groups fawning over beautiful landscapes whose members don’t realize that the scenery melts away if you look closely enough, or that the gorgeous Instagram model in the photo has far too many fingers.

The internet, of course, is controlled by the largest, richest, most powerful companies in the world. It’s not a dead internet, as some have posited, because we primarily consume artificial content; rather, it’s living and neglected, merely damned by corporate greed, indolence, and indifference. Silicon Valley’s giants no longer compete and no longer innovate; instead they cut costs, boost profit margins, and block out competitors in order to maintain consumer habit and market dominance. Online platforms give us convenience, but no novelty, and they have vanishing utility in our increasingly digital lives.

In 2025, perhaps the whole thing will explode. But hopefully, people will begin to rethink their reliance on digital platforms that treat them with utter contempt, like they’re consumers, like they’re “users.” If it’s the end of the internet as we know it, then I feel fine.

Article link: https://www.fastcompany.com/91246383/its-the-end-of-the-internet-as-we-know-it-and-i-feel-fine

The Google chip called “Willow.”

Posted by timmreardon on 12/15/2024
Posted in: Uncategorized.

Google’s new quantum computer chip solves in 5 minutes problems that would take supercomputers 10 septillion years to complete — longer than the universe has existed:

The achievement, published in Nature, marks a major step forward for quantum computing, proving it can perform tasks beyond the capabilities of classical computers.

The chip is called “Willow.”

This breakthrough centers on the random circuit sampling (RCS) benchmark, a complex statistical test used to assess computational randomness. Hartmut Neven, founder of Google Quantum AI, called Willow “the most convincing prototype” of a quantum computer to date, highlighting its potential to tackle real-world problems previously considered unsolvable.

Unlike traditional computers that process information as binary bits (ones and zeros), quantum computers use qubits, which can exist in multiple states simultaneously due to quantum superposition.

This allows them to exponentially increase computing power. However, qubits are notoriously fragile and prone to errors, a challenge Google claims to have overcome with Willow’s built-in error suppression system.
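
A small numerical sketch makes the scaling point concrete; it illustrates textbook superposition and state-vector size, not anything specific to Willow.

```python
import numpy as np

# A qubit's state is a unit vector of two complex amplitudes; measurement
# probabilities are the squared magnitudes of those amplitudes.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = (zero + one) / np.sqrt(2)  # equal superposition of |0> and |1>
print(np.abs(plus) ** 2)          # [0.5 0.5]

# n qubits need 2**n amplitudes, which is why classically tracking even
# modest quantum circuits becomes intractable so quickly.
for n in (10, 50, 100):
    print(n, "qubits ->", 2.0**n, "amplitudes")
```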

Google now aims to leverage this progress to address real-world applications, such as advancements in medicine, nuclear fusion, and clean energy. Neven emphasized that quantum computing, alongside artificial intelligence, will drive transformational changes in technology, pushing humanity closer to solving otherwise insurmountable challenges.

Learn more: https://blog.google/technology/research/google-willow-quantum-chip/

Why the RVU system makes attaining the quadruple aim laughable – KevinMD

Posted by timmreardon on 12/13/2024
Posted in: Uncategorized.

MICK CONNORS, MD 

PHYSICIAN 

OCTOBER 31, 2024

The quadruple aim represents an ambitious, holistic vision for the future of health care: improving population health, enhancing the patient experience, reducing per capita costs, and improving the work-life balance of health care providers. While many health care systems have adopted this framework, the widespread use of the relative value unit (RVU) system fundamentally undermines these goals. Far from facilitating the quadruple aim, the RVU system creates a chasm between what health care is and what it aspires to be, making the attainment of these aims seem, at times, almost laughable.

The four pillars of the quadruple aim and how the RVU system undermines them.

1. Improving population health: procedures over prevention. The first pillar of the quadruple aim emphasizes improving population health through preventive care, management of chronic diseases, and addressing health disparities. These goals require long-term, holistic care strategies that go beyond episodic, procedure-based interventions. However, the RVU system is fundamentally biased toward procedures and volume-based care, rather than preventive care and long-term patient outcomes.

Why it’s laughable: The RVU system rewards health care providers for doing more procedures, not for preventing them. Providers are incentivized to perform surgeries, diagnostics, and interventions because these actions translate into higher RVUs and, thus, higher compensation. In contrast, preventive services—like diet counseling, chronic disease management, or mental health care—are poorly compensated because they generate fewer RVUs. This creates an absurd situation where the health care system is essentially structured to focus on “sick care” rather than “health care.”
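
To see the incentive at work in numbers, here is a deliberately simplified sketch; the RVU values, visit volumes, and conversion factor are hypothetical, chosen only to illustrate the volume bias, not actual CMS figures.

```python
# Illustrative sketch of volume-based RVU compensation. Payment is roughly
# total RVUs times a dollar conversion factor; all numbers below are
# hypothetical, for illustration only.
CONVERSION_FACTOR = 33.0  # dollars per RVU (assumed)

services = {
    # service: (work RVUs per encounter, encounters per day)
    "procedure (e.g., endoscopy)": (7.5, 10),
    "preventive counseling visit": (1.3, 10),
}

for name, (rvus, per_day) in services.items():
    daily_pay = rvus * per_day * CONVERSION_FACTOR
    print(f"{name}: ${daily_pay:,.0f} per day")
# At identical volume, the procedure pays several times more than the
# preventive visit -- the bias toward "sick care" described above.
```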

Trend: Chronic diseases like diabetes and hypertension have skyrocketed over the past 30 years, yet prevention and management strategies remain undervalued. While public health initiatives are making strides, the RVU system continues to undervalue the very services that would improve population health in the long run.

2. Enhancing the patient experience: rushed, fragmented care

Patients increasingly expect not only competent care but also care that is empathetic, personalized, and well-coordinated. The RVU system, however, pressures providers to maximize the number of patients they see or the procedures they perform, effectively turning health care into an assembly line. This compromises the quality of the patient-provider relationship.

Why it’s laughable: The RVU system puts providers on a hamster wheel of patient throughput. Doctors are encouraged to see as many patients as possible within a limited time frame to meet RVU quotas, leading to shorter visits, rushed care, and an inevitable reduction in the quality of interactions. It’s laughable to think we can enhance the patient experience when physicians are forced to spend more time checking boxes in an electronic health record to document RVUs than engaging with their patients.

Trend: Surveys over the past 30 years, such as those conducted by the Agency for Healthcare Research and Quality (AHRQ), indicate that while patient satisfaction scores have become a prominent metric, the patient experience itself is often degraded by the very system that measures these outcomes. Short visits and fragmented care dominate, making genuine, patient-centered interactions rare.

3. Reducing per capita costs: the perverse incentive of overutilization

Reducing health care costs has been a central concern for policymakers, especially in the U.S., which consistently spends more on health care per capita than any other developed country. The RVU system, however, drives up costs through its emphasis on procedures, diagnostics, and volume—often at the expense of actual health outcomes.

Why it’s laughable: The RVU system actively incentivizes overutilization of health care services. The more tests, procedures, and interventions a provider can perform, the more RVUs they generate and, therefore, the more money they make. This directly opposes the goal of reducing health care costs. It’s an open secret that much of health care spending goes to unnecessary procedures, tests, or repeat visits that generate high RVUs but do little to improve patient outcomes.

Trend: Over the past 30 years, U.S. health care spending has skyrocketed. According to the Centers for Medicare & Medicaid Services (CMS), health care spending grew from $1.2 trillion in 1990 to nearly $4 trillion by 2020. This upward trend is largely due to the high utilization of procedures, diagnostics, and tests—all incentivized by the RVU model. Attempts to control costs, such as through bundled payments or capitation models, have not yet been widely enough adopted to counterbalance the pervasive influence of RVU-driven care.
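
The CMS endpoints quoted above imply a compound annual growth rate that is easy to verify:

```python
# Implied average annual growth from the CMS figures quoted above.
spend_1990, spend_2020 = 1.2e12, 4.0e12  # dollars
years = 30
cagr = (spend_2020 / spend_1990) ** (1 / years) - 1
print(f"{cagr:.1%} per year")            # ~4.1% compounded over 30 years
```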

4. Improving provider work-life balance: the burnout epidemic

The quadruple aim added provider well-being as a critical element to emphasize that improving the health care system also requires supporting the mental and physical health of providers. However, the RVU system is a major contributor to physician burnout, which has reached epidemic levels in the last decade.

Why it’s laughable: The RVU system puts intense pressure on providers to maintain productivity at the expense of their well-being. Doctors are often overworked, with more administrative duties related to documenting services and more patients to see, all while dealing with a fragmented and inefficient health care infrastructure. The demand to produce high RVUs leads to emotional exhaustion, depersonalization, and a reduced sense of accomplishment, classic symptoms of burnout. It’s ironic, if not absurd, to speak of improving provider well-being while tethering them to a system that drains their mental and physical reserves.

Trend: Studies over the past decade have shown alarming rates of physician burnout. According to the Mayo Clinic Proceedings, over 50 percent of U.S. physicians experience burnout. Burnout is not just an individual issue—it leads to higher rates of medical errors, physician turnover, and lower quality care, which perpetuates the vicious cycle of a broken health care system. The RVU system plays a central role in this, creating a toxic work environment where productivity is prioritized over professional satisfaction.

The trends: a chasm between the RVU system and the quadruple aim

Over the last 30 years, trends in health care outcomes, costs, patient experience, and provider well-being paint a clear picture: the RVU system is a primary driver of many of the very issues that the quadruple aim seeks to address. The chasm between the goals of the quadruple aim and the reality of the RVU-driven system is wide and growing.

  • Health care costs have continued to rise due to RVU-driven overutilization.
  • Provider burnout has worsened, with many doctors feeling more like cogs in a machine than healers.
  • The patient experience remains fragmented and depersonalized as providers are forced to focus on volume.
  • Population health outcomes are lagging, particularly in areas that rely on preventive care and chronic disease management—fields undercompensated by the RVU system.

Conclusion: the quadruple aim and RVU system—an irreconcilable difference

The goals of the quadruple aim and the realities of the RVU system are in direct opposition. The RVU system prioritizes productivity, volume, and procedures, which undermines the holistic, value-based care model that the quadruple aim aspires to. To suggest that health care providers can meet the quadruple aim within the constraints of RVU-driven care is not just difficult—it’s laughable. The trends over the past few decades demonstrate how broken the system truly is and how deep the divide is between our health care aspirations and the perverse incentives that keep us from achieving them.

To truly move toward a health care system that meets the quadruple aim, the RVU model must be rethought, if not entirely replaced, with systems that reward value over volume, prevention over intervention, and well-being over burnout. Until then, the chasm between where we are and where we need to be will remain wide—and laughable in its absurdity.

Mick Connors is a pediatric emergency physician.

Article link: https://kevinmd.com/2024/10/why-the-rvu-system-makes-attaining-the-quadruple-aim-laughable-a-deep-dive-into-a-broken-health-care-model.html

The hidden $935 billion problem in U.S. health care no one is talking about—and how to solve it – Kevin MD

Posted by timmreardon on 12/11/2024
Posted in: Uncategorized.

SHAKEEL AHMED, MD

POLICY 

OCTOBER 26, 2024 

“Waste is worse than loss. The time is coming when every person who lays claim to ability will keep the question of waste before him constantly.”
– Thomas Edison

The escalating challenge of waste in U.S. medicine

The U.S. health care system is struggling with inefficiencies and waste that weaken its effectiveness, reducing accessibility and sustainability as a whole. According to a study in JAMA, between $760 billion and $935 billion is wasted annually within the U.S. health care system, an outrageous sum that represents almost 25 percent of the nation’s health care costs.
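
A quick check of that share (the total-spending denominator is an assumption for illustration, not a figure from the study):

```python
# Quick check of the "almost 25 percent" share quoted above.
waste_low, waste_high = 760e9, 935e9  # JAMA annual waste estimate, dollars
total_spend = 3.8e12                  # assumed US annual health spend, dollars
print(f"{waste_low / total_spend:.0%} to {waste_high / total_spend:.0%}")  # 20% to 25%
```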

Three main drivers of this waste are administrative complexity, over-treatment, and lack of care coordination—all driving up costs without improving patient outcomes.

Administrative complexity—or burritos with extra lettuce

In the U.S. health care system, one can argue that administrative complexity wastes far more money than fraud. That’s dramatic, but it makes the point. Such a fragmented structure exists partly because every payer (public and private) requires extensive billing and insurance-related activities, which are time- and resource-intensive.

According to The Commonwealth Fund, administrative costs make up almost 8 percent of U.S. health care spending, compared with less than 2 percent in other developed countries. The complexity arises from the need to work with multiple insurance providers, all of which have their own individual billing codes, coverage policies, and procedures. This has resulted in an elaborate system that providers must navigate, leaving much less time for actual patient care. As a result, doctors and their staff waste far too many hours managing paperwork and navigating billing systems, leading to inefficient patient care.

The problem of over-treatment and overuse

Another substantial waste in our health care system is over-treatment. As a clinician with 30 years of practice, I can assure you that defensive medicine is a large driver of this issue, where providers order more tests or treatments than necessary to avoid being sued.

Aside from this, the dominant fee-for-service system in the U.S. encourages more services, as doctors and medical institutions are compensated per procedure or test, rather than for what care is best. The Institute of Medicine estimates that up to 30 percent of U.S. health care spending goes toward services that do not contribute meaningfully to patient care. These practices not only increase costs but also put patient safety at risk—either from untoward effects of unnecessary medications or complications from procedures that were unwarranted.

The challenge of insufficient care coordination

The United States’ fragmented health care system makes it difficult to coordinate care for patients with chronic conditions who need continued treatment and management across multiple in-network providers. This lack of coordination leads to duplicated tests, conflicting treatments, and gaps in care, all resulting in increased costs and compromised patient safety.

For instance, a patient with diabetes might receive care from an endocrinologist, a primary-care physician, and a cardiologist concurrently. Lack of communication or coordination between these specialists can result in contradictory advice, repeated tests, and conflicting treatments, leading to a disjointed care experience. This can lead to avoidable comorbidities, hospital visits, and emergency room visits, putting more stress on the health care system.

Improving through perspective: solutions and best practices

These systemic problems cannot be fixed without a fundamental reorientation of the U.S. health care system, particularly in how care is delivered and reimbursed. One effective solution is the shift to value-based care. While the traditional fee-for-service model incentivizes volume and is often associated with high costs and low quality, value-based care rewards providers not for treating a higher volume of patients or ordering more services but for providing better treatment that leads to healthier outcomes. This method incentivizes disease prevention and optimal management of chronic conditions, translating into less need for high-priced interventions.

The Cleveland Clinic practices value-based care, with a focus on coordinating care and improving patient outcomes. With these measures, they have been able to drive down health care costs while increasing patient satisfaction. This change eliminates unnecessary medical interventions and aligns the interests of health care providers with those of patients, delivering a more comprehensive health plan.

In addition, streamlining administrative burdens is essential for limiting inefficiencies in health care. Two ways to achieve this are through streamlined billing systems and the adoption of standard electronic health records (EHRs). These steps are likely to reduce the administrative burden on health care providers and enable them to direct their expertise toward treating patients. This streamlined process not only lowers costs but also improves the quality of care by placing timely and accurate patient information in providers’ hands.

Lastly, integrated care systems can be quite effective in revolutionizing the patient care experience, offering a holistic approach that places patients at the center of their health journeys.

An example of such integration is Kaiser Permanente. Kaiser Permanente has the advantage of being both a health care provider and insurer, enabling it to provide its members with a top-down continuum of care from preventive services to specialized treatments. As a result, they have improved health outcomes and cost savings through fewer hospitalizations, reduced emergency room utilization, and fewer unnecessary tests.

As Peter Drucker once said, “Efficiency is doing things right; effectiveness is doing the right things.” Let’s start doing what’s right now.

Shakeel Ahmed is a gastroenterologist. 

Article link: https://kevinmd.com/2024/10/the-hidden-935-billion-problem-in-u-s-health-care-no-one-is-talking-about-and-how-to-solve-it.html

Photonic processor could enable ultrafast AI computations with extreme energy efficiency – MIT News

Posted by timmreardon on 12/09/2024
Posted in: Uncategorized.

This new device uses light to perform the key operations of a deep neural network on a chip, opening the door to high-speed processors that can learn in real-time.

Adam Zewe | MIT News

Publication Date:

December 2, 2024

The deep neural network models that power today’s most demanding machine-learning applications have grown so large and complex that they are pushing the limits of traditional electronic computing hardware.

Photonic hardware, which can perform machine-learning computations with light, offers a faster and more energy-efficient alternative. However, there are some types of neural network computations that a photonic device can’t perform, requiring the use of off-chip electronics or other techniques that hamper speed and efficiency.

Building on a decade of research, scientists from MIT and elsewhere have developed a new photonic chip that overcomes these roadblocks. They demonstrated a fully integrated photonic processor that can perform all the key computations of a deep neural network optically on the chip.

The optical device was able to complete the key computations for a machine-learning classification task in less than half a nanosecond while achieving more than 92 percent accuracy — performance that is on par with traditional hardware.

The chip, composed of interconnected modules that form an optical neural network, is fabricated using commercial foundry processes, which could enable the scaling of the technology and its integration into electronics.

In the long run, the photonic processor could lead to faster and more energy-efficient deep learning for computationally demanding applications like lidar, scientific research in astronomy and particle physics, or high-speed telecommunications.

“There are a lot of cases where how well the model performs isn’t the only thing that matters, but also how fast you can get an answer. Now that we have an end-to-end system that can run a neural network in optics, at a nanosecond time scale, we can start thinking at a higher level about applications and algorithms,” says Saumil Bandyopadhyay ’17, MEng ’18, PhD ’23, a visiting scientist in the Quantum Photonics and AI Group within the Research Laboratory of Electronics (RLE) and a postdoc at NTT Research, Inc., who is the lead author of a paper on the new chip.

Bandyopadhyay is joined on the paper by Alexander Sludds ’18, MEng ’19, PhD ’23; Nicholas Harris PhD ’17; Darius Bunandar PhD ’19; Stefan Krastanov, a former RLE research scientist who is now an assistant professor at the University of Massachusetts at Amherst; Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research; Matthew Streshinsky, a former silicon photonics lead at Nokia who is now co-founder and CEO of Enosemi; Michael Hochberg, president of Periplous, LLC; and Dirk Englund, a professor in the Department of Electrical Engineering and Computer Science, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE, and senior author of the paper. The research appears today in Nature Photonics.

Machine learning with light

Deep neural networks are composed of many interconnected layers of nodes, or neurons, that operate on input data to produce an output. One key operation in a deep neural network involves the use of linear algebra to perform matrix multiplication, which transforms data as it is passed from layer to layer.

But in addition to these linear operations, deep neural networks perform nonlinear operations that help the model learn more intricate patterns. Nonlinear operations, like activation functions, give deep neural networks the power to solve complex problems.
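
To make these two operation types concrete, here is a minimal NumPy sketch of a single layer: a matrix multiplication (the linear step) followed by an elementwise activation function (the nonlinear step). The layer sizes and the ReLU activation are illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear step: a weight matrix transforms the input vector.
W = rng.normal(size=(4, 8))   # 8 inputs -> 4 outputs (sizes are illustrative)
x = rng.normal(size=8)

z = W @ x                     # matrix multiplication: the linear operation

# Nonlinear step: an elementwise activation function such as ReLU.
y = np.maximum(z, 0.0)

print(y)
```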

In 2017, Englund’s group, along with researchers in the lab of Marin Soljačić, the Cecil and Ida Green Professor of Physics, demonstrated an optical neural network on a single photonic chip that could perform matrix multiplication with light.

But at the time, the device couldn’t perform nonlinear operations on the chip. Optical data had to be converted into electrical signals and sent to a digital processor to perform nonlinear operations.

“Nonlinearity in optics is quite challenging because photons don’t interact with each other very easily. That makes it very power consuming to trigger optical nonlinearities, so it becomes challenging to build a system that can do it in a scalable way,” Bandyopadhyay explains.

They overcame that challenge by designing devices called nonlinear optical function units (NOFUs), which combine electronics and optics to implement nonlinear operations on the chip.

The researchers built an optical deep neural network on a photonic chip using three layers of devices that perform linear and nonlinear operations.

A fully integrated network

At the outset, their system encodes the parameters of a deep neural network into light. Then, an array of programmable beamsplitters, which was demonstrated in the 2017 paper, performs matrix multiplication on those inputs.
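
The article doesn’t specify the mesh layout, but a programmable beamsplitter is commonly modeled as a Mach-Zehnder interferometer: two 50:50 couplers with tunable phase shifters. The sketch below uses that textbook parameterization, assumed here for illustration, to show how one 2x2 block applies a unitary matrix to a pair of optical amplitudes; meshes of such blocks compose into larger matrix multiplications.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of a Mach-Zehnder interferometer:
    phase shift (phi) -> 50:50 coupler -> phase shift (theta) -> coupler.
    A standard textbook model, not a claim about this chip's layout."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beamsplitter
    p = lambda a: np.diag([np.exp(1j * a), 1.0])     # single-arm phase shifter
    return bs @ p(theta) @ bs @ p(phi)

U = mzi(0.7, 1.3)
# The transfer matrix is unitary, so optical power is conserved:
assert np.allclose(U.conj().T @ U, np.eye(2))

# Acting on a two-mode amplitude vector performs a 2x2 matrix multiply.
amplitudes = np.array([1.0 + 0j, 0.0 + 0j])
print(U @ amplitudes)
```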

The data then pass to programmable NOFUs, which implement nonlinear functions by siphoning off a small amount of light to photodiodes that convert optical signals to electric current. This process, which eliminates the need for an external amplifier, consumes very little energy.
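
That description suggests a simple toy model of a NOFU, sketched below: tap a small fraction of the optical power to a photodiode, and let the photocurrent set the transmission of the light that remains on the waveguide. The tap ratio and transmission curve here are invented for illustration; the actual device response is specified in the paper.

```python
import numpy as np

def nofu(amplitude, tap=0.1, gain=4.0):
    """Toy model of a nonlinear optical function unit (illustrative only).

    A fraction `tap` of the optical power is siphoned to a photodiode;
    the photocurrent then modulates the transmission of the light that
    stays on the waveguide, so the output depends nonlinearly on the
    input intensity."""
    power = np.abs(amplitude) ** 2
    photocurrent = tap * power                         # detected portion
    transmission = 1.0 / (1.0 + gain * photocurrent)   # assumed modulator response
    return np.sqrt(1.0 - tap) * transmission * amplitude

for a in [0.1, 0.5, 1.0, 2.0]:
    print(a, abs(nofu(a)))   # output is not proportional to input: nonlinear
```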

“We stay in the optical domain the whole time, until the end when we want to read out the answer. This enables us to achieve ultra-low latency,” Bandyopadhyay says.
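
Numerically, that end-to-end pipeline looks like alternating mesh multiplications and NOFU-style nonlinearities, with detection only at the final readout. The sketch below mirrors that structure; the three-layer depth follows the article, but the random unitaries and the saturating nonlinearity are stand-ins, not the chip’s actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    """Random n x n unitary (stand-in for a programmable beamsplitter mesh)."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def nonlinearity(a):
    """Stand-in intensity-dependent nonlinearity (not the device's response)."""
    return a / (1.0 + np.abs(a) ** 2)

x = rng.normal(size=6) + 0j        # input encoded as optical amplitudes
for layer in range(3):             # three layers, as in the article
    x = random_unitary(6) @ x      # linear: mesh matrix multiplication
    x = nonlinearity(x)            # nonlinear: NOFU-style activation

readout = np.abs(x) ** 2           # photodetection only at the very end
print(readout.round(4))
```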

Achieving such low latency enabled them to efficiently train a deep neural network on the chip, a process known as in situ training that typically consumes a huge amount of energy in digital hardware.
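
The article doesn’t describe the training algorithm, but one way to picture in situ training is gradient estimation by direct perturbation: nudge a hardware parameter, re-run the physical forward pass, measure the loss, and update. Below is a hypothetical finite-difference version of that loop, with the chip replaced by a toy phase-shifter model.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # toy fixed "mesh"
x = rng.normal(size=3) + 0j          # input amplitudes
target = np.array([1.0, 0.0, 0.0])   # desired detector powers

def measured_loss(phases):
    """Stand-in for a physical forward pass followed by photodetection."""
    y = np.abs(M @ (np.exp(1j * phases) * x)) ** 2
    return float(np.sum((y - target) ** 2))

phases = np.zeros(3)
eps, lr = 1e-4, 0.01
for step in range(200):
    grad = np.zeros_like(phases)
    for i in range(len(phases)):
        d = np.zeros_like(phases)
        d[i] = eps                   # perturb one phase shifter...
        grad[i] = (measured_loss(phases + d)
                   - measured_loss(phases - d)) / (2 * eps)  # ...and re-measure
    phases -= lr * grad              # gradient-descent update
print("final loss:", round(measured_loss(phases), 4))
```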

“This is especially useful for systems where you are doing in-domain processing of optical signals, like navigation or telecommunications, but also in systems that you want to learn in real time,” he says.

The photonic system achieved more than 96 percent accuracy during training tests and more than 92 percent accuracy during inference, which is comparable to traditional hardware. In addition, the chip performs key computations in less than half a nanosecond.     

“This work demonstrates that computing — at its essence, the mapping of inputs to outputs — can be compiled onto new architectures of linear and nonlinear physics that enable a fundamentally different scaling law of computation versus effort needed,” says Englund.

The entire circuit was fabricated using the same infrastructure and foundry processes that produce CMOS computer chips. This could enable the chip to be manufactured at scale, using tried-and-true techniques that introduce very little error into the fabrication process.

Scaling up their device and integrating it with real-world electronics like cameras or telecommunications systems will be a major focus of future work, Bandyopadhyay says. In addition, the researchers want to explore algorithms that can leverage the advantages of optics to train systems faster and with better energy efficiency.

This research was funded, in part, by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research.

Article link: https://news.mit.edu/2024/photonic-processor-could-enable-ultrafast-ai-computations-1202

The Way We Measure Progress in AI is Terrible – MIT Technology Review

Posted by timmreardon on 12/07/2024
Posted in: Uncategorized.

Every time a new AI model is released, it’s typically touted as acing a series of performance benchmarks. OpenAI’s GPT-4o, for example, was launched in May with a compilation of results showing its performance topping every other AI company’s latest model in several tests.

The problem is that these benchmarks are poorly designed, their results are hard to replicate, and the metrics they use are frequently arbitrary, according to new research. That matters because AI models’ scores on these benchmarks will determine the level of scrutiny and regulation they receive. Read the story: https://trib.al/pcJoUxM
