Report | February 21, 2018

Dead Reckoning

Navigating Content Moderation After “Fake News”

Robyn Caplan,
Lauren Hanson,
Joan Donovan

New Data & Society report clarifies current uses of “fake news” and analyzes four specific strategies for intervention.

Report Summary

Authors Robyn Caplan, Lauren Hanson, and Joan Donovan first analyze nascent solutions recently proposed by platform corporations, governments, news media industry coalitions, and civil society organizations. They then explicate four potential approaches to containing “fake news”: trust and verification, disrupting economic incentives, de-prioritizing content and banning accounts, and limited regulatory approaches.

This report is intended to inform platforms, news organizations, media makers, and others who no longer ask whether standards for media content should be set, but rather who should set them, who should enforce them, and what entity should hold platforms, the media industry, states, and users accountable. “Fake news” is thus not only about defining what content is problematic or false, but also about what constitutes credible and legitimate news in the social media era.

Among the report’s findings:
  • “Fake news” has become a politicized and controversial term, used both to extend critiques of mainstream media and to refer to the growing spread of propaganda and problematic content online.
  • Definitions that point to the spread of problematic content rely on assessing the intent of producers and sharers of news, separating content into clear and well-defined categories, and/or identifying features that machines or human reviewers can use to detect “fake news” content.
  • Strategies for limiting the spread of “fake news” include trust and verification, disrupting economic incentives, de-prioritizing content and banning accounts, as well as limited regulatory approaches.
  • Content producers learn quickly and adapt to new standards set by platforms, using tactics such as adding satire or parody disclaimers to bypass standards enforced by content moderators and automated systems.
  • Moderating “fake news” well requires understanding the context of an article and its source. Automated technologies and artificial intelligence (AI) are not yet advanced enough to make these judgments, so human-led interventions are required.
  • Third-party fact-checking and media literacy organizations are expected to close the gap between platforms and the public interest, but are currently under-resourced to meet this challenge.

This report is the third in a series of outputs from the Data & Society initiative on Media Manipulation.
