The text below is the Executive Summary of the report.
Research by: Mervi Pantti (University of Helsinki) and Yang Xu (University of Helsinki)
This report stems from research conducted at the University of Helsinki, Media and Communication Studies (Pantti & Pohjonen, 2023; Xu, 2023). The report discusses content moderation from the perspective of powerful platform companies (Big Tech) in the context of increasing demands for accountability. Multinational platform corporations such as Microsoft, Alphabet/Google (YouTube), Meta (Facebook, Instagram, WhatsApp), Twitter (currently X), and ByteDance (TikTok) wield significant political, economic, social, and infrastructural power globally. Recent years have witnessed a public backlash against these platforms for social harms linked to their profit-driven business models, privacy risks, and biased algorithmic systems (e.g., Zuboff, 2019). Policymakers, NGOs, citizens, and former employees have demanded that Big Tech companies demonstrate public accountability for the disinformation and hate speech circulating on their platforms.
The European Union (EU) has taken several steps to limit the spread of online disinformation. The 2022 Strengthened Code of Practice on Disinformation replaced the self-regulatory 2018 Code and was signed by several platform companies, including Meta and TikTok. The Digital Services Act (DSA, 2022) requires large online platforms to remove content that is illegal under European and national laws and to increase their efforts to counter misinformation and disinformation campaigns. In the Nordic context, the Nordic Think Tank for Tech and Democracy has called for swift action to regulate social media platforms (A Nordic approach to democratic debate in the age of Big Tech, norden.org).
In this context, characterized by a growing political will in Europe and the Nordics to hold platforms accountable, this report focuses on how online platforms have responded to increasing criticism and regulatory efforts. The report analyzes corporate texts (blogs and community guidelines) and investigates how social media platforms perceive their responsibilities and how their discursive legitimation processes have evolved. While corporate public communication has a promotional leaning, it also allows researchers to explore how the platforms articulate their legitimacy and frame their activities.
In particular, the report focuses on how Western platform companies responded to the surge of disinformation following Russia’s invasion of Ukraine and how the Chinese social media company TikTok’s community guidelines have evolved concerning harmful content. We chose these two cases to demonstrate how digital platforms respond to accountability demands. After Russia’s invasion of Ukraine, Western digital platforms came under exceptional pressure from the EU to counter Russian disinformation. TikTok, for its part, has faced growing security and disinformation concerns in Europe, which recently led the company to pledge to counter disinformation more effectively in line with the EU Code of Practice on Disinformation (Chee, 2023).