Today, we are sharing insights about our work to protect the integrity of our platform during the recent European Parliament elections, in which 27 national elections took place concurrently from June 6-9.
Key points include:
- Dedicated in-app Election Centres were launched for every EU Member State and were visited more than 7.5 million times in the four weeks leading up to and during the election
- We removed over 2,600 pieces of content for violating our civic and election integrity policies, and over 43,000 pieces of content for violating our misinformation policies in the four weeks leading up to and during the election
- We proactively removed over 96% of violative misinformation content before it was reported to us, and over 80% before receiving a single view in this period.
Protecting our community
Alongside technology, we have over 6,000 people dedicated to moderating content in EU languages, ensuring we take action against content and behaviours that violate our rules. This number includes specialised misinformation moderators who are given enhanced tools and training to detect and remove violative content.
To facilitate even stronger collaboration and action, we also established a dedicated 'Mission Control Centre' space in our Dublin office, bringing together in-person employees from multiple specialist teams within our safety department to maximise the effectiveness of our work in the run-up to, and during, the elections themselves.
In the four weeks leading up to and including the elections, we removed over 2,600 pieces of content for violating our civic and election integrity policies, and over 43,000 pieces of content for violating our misinformation policies. We removed over 96% of violative misinformation content before it was reported to us, and over 80% before it received a single view. The misinformation narratives we observed included themes around migration, climate change, security and defence, and LGBTQ rights.
The rise of AI-generated content (AIGC) more generally has created new challenges for the industry ahead of the many elections taking place this year. For over a year, we've required people to label realistic AIGC and prohibited harmfully misleading AIGC that depicts public figures endorsing a political view.
Ahead of these elections, we further strengthened our approach to AIGC by becoming the first video sharing platform to begin implementing Content Credentials technology, partnering with the Coalition for Content Provenance and Authenticity (C2PA), which enables us to automatically label AIGC that originated on other major platforms.
In February, we were also a founding signatory to the cross-industry AI tech accord to combat the deceptive use of AI in the 2024 elections.
Empowering our community
Another key part of our work focused on connecting people to trusted information and providing them with more context about the content they were viewing.
In March, we launched an individual in-app Election Centre for each EU Member State. Built in partnership with electoral commissions and civil society organisations, these Centres connected people with reliable voting information, including when, where, and how to vote; eligibility requirements for candidates; and, ultimately, the election results themselves. We also sent push notifications to users in each of the 27 countries.
We actively directed people to these Centres through prompts on relevant election content and election-related searches, which were labelled with notice tags and search banners respectively. In the four weeks up to and including the election, these Election Centres were visited more than 7.5 million times, and search banners were viewed more than 63 million times.
Throughout this period, we worked closely with the European Parliament, including by incorporating the Parliament's own media literacy videos into the Centres, along with a 'follow' button linking to the Parliament's TikTok account as a further route to authoritative election-related information.
Partnering with experts
Collaborating with outside experts was another essential component of our strategy. Across Europe, we work with 12 local fact-checking partners, covering at least one official language of every Member State. These partners supported our work through proactive surfacing and flagging of potential misinformation; verification of content and contribution to a repository of fact-checked claims; and in-app interventions. We also collaborated with fact-checkers to create media literacy videos.
These insights provide a snapshot of both our approach and its impact, and we'll share further details in the months ahead as part of our transparency reporting requirements under both the Code of Practice on Disinformation and the Digital Services Act. Our work to keep people on TikTok safe has no finish line, and we continue to refine our policies and processes, informed by feedback from outside experts, so that our approach addresses emerging challenges and threats in this landmark year for elections.