healthcarereimagined

Envisioning healthcare for the 21st century


Algorithms are everywhere – MIT Technology Review

Posted by timmreardon on 06/07/2025
Posted in: Uncategorized.


Three new books warn against turning into the person the algorithm thinks you are.

By Bryan Gardiner

February 27, 2024

Like a lot of Netflix subscribers, I find that my personal feed tends to be hit or miss. Usually more miss. The movies and shows the algorithms recommend often seem less predicated on my viewing history and ratings, and more geared toward promoting whatever’s newly available. Still, when a superhero movie starring one of the world’s most famous actresses appeared in my “Top Picks” list, I dutifully did what 78 million other households did and clicked.

As I watched the movie, something dawned on me: recommendation algorithms like the ones Netflix pioneered weren’t just serving me what they thought I’d like—they were also shaping what gets made. And not in a good way. 

The movie in question wasn’t bad, necessarily. The acting was serviceable, and it had high production values and a discernible plot (at least for a superhero movie). What struck me, though, was a vague sense of déjà vu—as if I’d watched this movie before, even though I hadn’t. When it ended, I promptly forgot all about it. 

That is, until I started reading Kyle Chayka’s recent book, Filterworld: How Algorithms Flattened Culture. A staff writer for the New Yorker, Chayka is an astute observer of the ways the internet and social media affect culture. “Filterworld” is his coinage for “the vast, interlocking … network of algorithms” that influence both our daily lives and the “way culture is distributed and consumed.” 

Music, film, the visual arts, literature, fashion, journalism, food—Chayka argues that algorithmic recommendations have fundamentally altered all these cultural products, not just influencing what gets seen or ignored but creating a kind of self-reinforcing blandness we are all contending with now.

That superhero movie I watched is a prime example. Despite my general ambivalence toward the genre, Netflix’s algorithm placed the film at the very top of my feed, where I was far more likely to click on it. And click I did. That “choice” was then recorded by the algorithms, which probably surmised that I liked the movie and then recommended it to even more viewers. Watch, wince, repeat.  

“Filterworld culture is ultimately homogenous,” writes Chayka, “marked by a pervasive sense of sameness even when its artifacts aren’t literally the same.” We may all see different things in our feeds, he says, but they are increasingly the same kind of different. Through these milquetoast feedback loops, what’s popular becomes more popular, what’s obscure quickly disappears, and the lowest-­common-denominator forms of entertainment inevitably rise to the top again and again. 
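The "what's popular becomes more popular" dynamic Chayka describes is, at bottom, a rich-get-richer feedback loop. A few lines of Python can sketch it; the model and its parameters (`boost`, the click-counting rule) are purely illustrative, not a description of Netflix's or any real recommender:

```python
import random

def simulate_feedback_loop(n_items=50, n_steps=10_000, boost=0.9, seed=42):
    """Toy rich-get-richer model of an algorithmic feed.

    At each step the "feed" surfaces the currently most-clicked item
    with probability `boost`; otherwise it shows a random item. Every
    impression counts as a click, which feeds back into the ranking.
    """
    rng = random.Random(seed)
    clicks = [1] * n_items  # every item starts with one click
    for _ in range(n_steps):
        if rng.random() < boost:
            # recommend whatever is already winning
            item = max(range(n_items), key=clicks.__getitem__)
        else:
            # occasionally surface something at random
            item = rng.randrange(n_items)
        clicks[item] += 1
    return clicks

clicks = simulate_feedback_loop()
print(f"top item's share of all clicks: {max(clicks) / sum(clicks):.0%}")
```

Even with 10% of impressions handed out at random, one item ends up with the overwhelming majority of clicks while the median item stays obscure — the "watch, wince, repeat" loop in miniature.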

This is actually the opposite of the personalization Netflix promises, Chayka notes. Algorithmic recommendations reduce taste—traditionally, a nuanced and evolving opinion we form about aesthetic and artistic matters—into a few easily quantifiable data points. That oversimplification subsequently forces the creators of movies, books, and music to adapt to the logic and pressures of the algorithmic system. Go viral or die. Engage. Appeal to as many people as possible. Be popular.  

A joke posted on X by a Google engineer sums up the problem: “A machine learning algorithm walks into a bar. The bartender asks, ‘What’ll you have?’ The algorithm says, ‘What’s everyone else having?’” “In algorithmic culture, the right choice is always what the majority of other people have already chosen,” writes Chayka. 

One challenge for someone writing a book like Filterworld—or really any book dealing with matters of cultural import—is the danger of (intentionally or not) coming across as a would-be arbiter of taste or, worse, an outright snob. After all, what’s wrong with a little mindless entertainment? (Many asked just that in response to Martin Scorsese’s controversial Harper’s essay in 2021, which decried Marvel movies and the current state of cinema.) 

Chayka addresses these questions head on. He argues that we’ve really only traded one set of gatekeepers (magazine editors, radio DJs, museum curators) for another (Google, Facebook, TikTok, Spotify). Created and controlled by a handful of unfathomably rich and powerful companies (which are usually led by a rich and powerful white man), today’s algorithms don’t even attempt to reward or amplify quality, which of course is subjective and hard to quantify. Instead, they focus on the one metric that has come to dominate all things on the internet: engagement.

There may be nothing inherently wrong (or new) about paint-by-numbers entertainment designed for mass appeal. But what algorithmic recommendations do is supercharge the incentives for creating only that kind of content, to the point that we risk not being exposed to anything else.

“Culture isn’t a toaster that you can rate out of five stars,” writes Chayka, “though the website Goodreads, now owned by Amazon, tries to apply those ratings to books. There are plenty of experiences I like—a plotless novel like Rachel Cusk’s Outline, for example—that others would doubtless give a bad grade. But those are the rules that Filterworld now enforces for everything.”

Chayka argues that cultivating our own personal taste is important, not because one form of culture is demonstrably better than another, but because that slow and deliberate process is part of how we develop our own identity and sense of self. Take that away, and you really do become the person the algorithm thinks you are. 

Algorithmic omnipresence

As Chayka points out in Filterworld, algorithms “can feel like a force that only began to exist … in the era of social networks” when in fact they have “a history and legacy that has slowly formed over centuries, long before the Internet existed.” So how exactly did we arrive at this moment of algorithmic omnipresence? How did these recommendation machines come to dominate and shape nearly every aspect of our online and (increasingly) our offline lives? Even more important, how did we ourselves become the data that fuels them?

These are some of the questions Chris Wiggins and Matthew L. Jones set out to answer in How Data Happened: A History from the Age of Reason to the Age of Algorithms. Wiggins is a professor of applied mathematics and systems biology at Columbia University. He’s also the New York Times’ chief data scientist. Jones is now a professor of history at Princeton. Until recently, they both taught an undergrad course at Columbia, which served as the basis for the book.

They begin their historical investigation at a moment they argue is crucial to understanding our current predicament: the birth of statistics in the late 18th and early 19th century. It was a period of conflict and political upheaval in Europe. It was also a time when nations were beginning to acquire both the means and the motivation to track and measure their populations at an unprecedented scale.

“War required money; money required taxes; taxes required growing bureaucracies; and these bureaucracies needed data,” they write. “Statistics” may have originally described “knowledge of the state and its resources, without any particularly quantitative bent or aspirations at insights,” but that quickly began to change as new mathematical tools for examining and manipulating data emerged.

One of the people wielding these tools was the 19th-century Belgian astronomer Adolphe Quetelet. Famous for, among other things, developing the highly problematic body mass index (BMI), Quetelet had the audacious idea of taking the statistical techniques his fellow astronomers had developed to study the position of stars and using them to better understand society and its people. This new “social physics,” based on data about phenomena like crime and human physical characteristics, could in turn reveal hidden truths about humanity, he argued.

“Quetelet’s flash of genius—whatever its lack of rigor—was to treat averages about human beings as if they were real quantities out there that we were discovering,” write Wiggins and Jones. “He acted as if the average height of a population was a real thing, just like the position of a star.” 

From Quetelet and his “average man” to Francis Galton’s eugenics to Karl Pearson and Charles Spearman’s “general intelligence,” Wiggins and Jones chart a depressing progression of attempts—many of them successful—to use data as a scientific basis for racial and social hierarchies. Data added “a scientific veneer to the creation of an entire apparatus of discrimination and disenfranchisement,” they write. It’s a legacy we’re still contending with today. 

Another misconception that persists? The notion that data about people are somehow objective measures of truth. “Raw data is an oxymoron,” observed the media historian Lisa Gitelman a number of years ago. Indeed, all data collection is the result of human choice, from what to collect to how to classify it to who’s included and excluded. 

Whether it’s poverty, prosperity, intelligence, or creditworthiness, these aren’t real things that can be measured directly, note Wiggins and Jones. To quantify them, you need to choose an easily measured proxy. This “reification” (“literally, making a thing out of an abstraction about real things”) may be necessary in many cases, but such choices are never neutral or unproblematic. “Data is made, not found,” they write, “whether in 1600 or 1780 or 2022.”

Perhaps the most impressive feat Wiggins and Jones pull off in the book as they continue to chart data’s evolution throughout the 20th century and the present day is dismantling the idea that there is something inevitable about the way technology progresses. 

For Quetelet and his ilk, turning to numbers to better understand humans and society was not an obvious choice. Indeed, from the beginning, everyone from artists to anthropologists understood the inherent limitations of data and quantification, making some of the same critiques of statisticians that Chayka makes of today’s algorithmic systems (“Such statisticians ‘see quality not at all, but only quantity’”).

Whether they’re talking about the machine-learning techniques that underpin today’s AI efforts or an internet built to harvest our personal data and sell us stuff, Wiggins and Jones recount many moments in history when things could have just as likely gone a different way.

“The present is not a prison sentence, but merely our current snapshot,” they write. “We don’t have to use unethical or opaque algorithmic decision systems, even in contexts where their use may be technically feasible. Ads based on mass surveillance are not necessary elements of our society. We don’t need to build systems that learn the stratifications of the past and present and reinforce them in the future. Privacy is not dead because of technology; it’s not true that the only way to support journalism or book writing or any craft that matters to you is spying on you to service ads. There are alternatives.” 

A pressing need for regulation

If Wiggins and Jones’s goal was to reveal the intellectual tradition that underlies today’s algorithmic systems, including “the persistent role of data in rearranging power,” Josh Simons is more interested in how algorithmic power is exercised in a democracy and, more specifically, how we might go about regulating the corporations and institutions that wield it.

Currently a research fellow in political theory at Harvard, Simons has a unique background. Not only did he work for four years at Facebook, where he was a founding member of what became the Responsible AI team, but he previously served as a policy advisor for the Labour Party in the UK Parliament. 

In Algorithms for the People: Democracy in the Age of AI, Simons builds on the seminal work of authors like Cathy O’Neil, Safiya Noble, and Shoshana Zuboff to argue that algorithmic prediction is inherently political. “My aim is to explore how to make democracy work in the coming age of machine learning,” he writes. “Our future will be determined not by the nature of machine learning itself—machine learning models simply do what we tell them to do—but by our commitment to regulation that ensures that machine learning strengthens the foundations of democracy.”

Much of the first half of the book is dedicated to revealing all the ways we continue to misunderstand the nature of machine learning, and how its use can profoundly undermine democracy. And what if a “thriving democracy”—a term Simons uses throughout the book but never defines—isn’t always compatible with algorithmic governance? Well, it’s a question he never really addresses. 

Whether these are blind spots or Simons simply believes that algorithmic prediction is, and will remain, an inevitable part of our lives, the lack of clarity doesn’t do the book any favors. While he’s on much firmer ground when explaining how machine learning works and deconstructing the systems behind Google’s PageRank and Facebook’s Feed, there remain omissions that don’t inspire confidence. For instance, it takes an uncomfortably long time for Simons to even acknowledge one of the key motivations behind the design of the PageRank and Feed algorithms: profit. Not something to overlook if you want to develop an effective regulatory framework. 

Much of what’s discussed in the latter half of the book will be familiar to anyone following the news around platform and internet regulation (hint: treat providers more like public utilities). And while Simons has some creative and intelligent ideas, I suspect even the most ardent policy wonks will come away feeling a bit demoralized given the current state of politics in the United States. 

In the end, the most hopeful message these books offer is embedded in the nature of algorithms themselves. In Filterworld, Chayka includes a quote from the late, great anthropologist David Graeber: “The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.” It’s a sentiment echoed in all three books—maybe minus the “easily” bit. 

Algorithms may entrench our biases, homogenize and flatten culture, and exploit and suppress the vulnerable and marginalized. But these aren’t completely inscrutable systems or inevitable outcomes. They can do the opposite, too. Look closely at any machine-learning algorithm and you’ll inevitably find people—people making choices about which data to gather and how to weigh it, choices about design and target variables. And, yes, even choices about whether to use them at all. As long as algorithms are something humans make, we can also choose to make them differently. 

Bryan Gardiner is a writer based in Oakland, California.

Article link: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/02/27/1088164/algorithms-book-reviews-kyle-chayka-chris-wiggins-matthew-l-jones-josh-simons/amp/
