Companies already have the systems in place to evaluate their deeper impacts on the social fabric.
By Nathaniel Lubin and Thomas Krendl Gilbert, August 9, 2022
We all want to be able to speak our minds online—to be heard by our friends and talk (back) to our opponents. At the same time, we don’t want to be exposed to speech that is inappropriate or crosses a line. Technology companies address this conundrum by setting standards for free speech, a practice protected under federal law. They hire in-house moderators to examine individual pieces of content and remove posts that violate the platforms’ predefined rules.
The approach clearly has problems: harassment, misinformation about topics like public health, and false descriptions of legitimate elections run rampant. But even if content moderation were implemented perfectly, it would still miss a whole host of issues that are often portrayed as moderation problems but really are not. To address those non-speech issues, we need a new strategy: treat social media companies as potential polluters of the social fabric, and directly measure and mitigate the effects their choices have on human populations. That means establishing a policy framework, perhaps through something akin to an Environmental Protection Agency or Food and Drug Administration for social media, that can be used to identify and evaluate the societal harms generated by these platforms. If those harms persist, that agency could be empowered to enforce its policies. But to transcend the limitations of content moderation, such regulation would have to be motivated by clear evidence and demonstrably reduce the problems it purports to solve.
Moderation (whether automated or human) can potentially work for what we call “acute” harms: those caused directly by individual pieces of content. But we need this new approach because there are also a host of “structural” problems—issues such as discrimination, reductions in mental health, and declining civic trust—that manifest in broad ways across the product rather than through any individual piece of content. A famous example of this kind of structural issue is Facebook’s 2012 “emotional contagion” experiment, which showed that users’ affect (their mood as measured by their behavior on the platform) shifted measurably depending on which version of the product they were exposed to.
In the blowback that ensued after the results became public, Facebook (now Meta) ended this type of deliberate experimentation. But just because they stopped measuring such effects does not mean product decisions don’t continue to have them.
Structural problems are direct outcomes of product choices. Product managers at technology companies like Facebook, YouTube, and TikTok are incentivized to focus overwhelmingly on maximizing time and engagement on the platforms. And experimentation is still very much alive there: almost every product change is deployed to small test audiences via randomized controlled trials. To assess progress, companies implement rigorous management processes, known as Objectives and Key Results (OKRs), that tie metrics to their central missions, even using these outcomes to determine bonuses and promotions. The responsibility for addressing the consequences of product decisions is often placed on downstream teams that have less authority to address root causes. Those teams are generally capable of responding to acute harms, but often cannot address problems caused by the products themselves.
With attention and focus, this same product development structure could be turned to the question of societal harms. Consider Frances Haugen’s congressional testimony last year, along with media revelations about Facebook’s alleged impact on the mental health of teens. Facebook responded to criticism by explaining that it had studied whether teens felt the product had a negative effect on their mental health, and whether that perception caused them to use the product less, not whether the product actually had a detrimental effect. While that response may have addressed the particular controversy, it illustrated that a study aiming directly at the question of mental health, rather than at its impact on user engagement, would not be a big stretch.
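The statistical machinery involved is already routine. As a purely hypothetical sketch (every name and number below is invented, not drawn from any real experiment), the same analysis a platform runs to compare engagement between two arms of a randomized trial could be pointed at a well-being metric instead:

```python
# Hypothetical sketch of an A/B-test readout. Assumes each user in the
# control and treatment arms of a product experiment logs a self-reported
# well-being score (0-10); the data below is invented for illustration.
import math
import statistics


def two_sample_z(control, treatment):
    """Compare arm means with a normal-approximation (z) test.

    The normal approximation is reasonable for the large samples
    typical of platform experiments; shown here on toy data.
    """
    n1, n2 = len(control), len(treatment)
    m1, m2 = statistics.fmean(control), statistics.fmean(treatment)
    v1, v2 = statistics.variance(control), statistics.variance(treatment)
    se = math.sqrt(v1 / n1 + v2 / n2)      # standard error of the difference
    z = (m2 - m1) / se
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return m2 - m1, z, p


# Invented scores for each experiment arm.
control = [6.1, 5.8, 6.4, 6.0, 5.9, 6.2, 6.3, 5.7, 6.1, 6.0]
treatment = [5.6, 5.9, 5.5, 5.8, 5.4, 5.7, 5.6, 5.9, 5.5, 5.8]

diff, z, p = two_sample_z(control, treatment)
print(f"mean shift in well-being: {diff:+.2f} (z = {z:.2f}, p = {p:.4f})")
```

The point of the sketch is that nothing in the pipeline changes except the outcome column being compared: the randomization, the logging, and the significance test are the same ones already used to evaluate engagement.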
Article link: https://www.technologyreview.com/2022/08/09/1057171/social-media-polluting-society-moderation-alone-wont-fix-the-problem/