By Caroline Greer, Director of Public Policy and Government Relations, Brussels
Today, we are sharing our fourth report under the EU Code of Practice on Disinformation, covering the first half of 2024. With roughly 3,300 data points, including country-level data, the report highlights how we’re continuing to strengthen our efforts to combat mis- and disinformation and platform manipulation across Europe, and the progress we’re making in this space. Notably, in the first half of 2024, 97% of videos violating our misinformation policies were removed proactively, before any user report.
Since 2020, we’ve actively participated in the Code’s Taskforce and working groups, including co-chairing the working groups on Elections and on Transparency, to stay at the forefront of combating disinformation. Our holistic approach includes preventing the spread of mis- and disinformation, elevating reliable information, and investing in media literacy to build resilience across our community. This approach is detailed in the most recent report, with noteworthy points including:
Protecting Election Integrity
With over half of the global population heading to the polls in 2024, our report details the extensive resources we’ve deployed to protect platform integrity during this time. A key test was the EU elections, the third-largest election worldwide by votes cast, held across 27 EU countries from 6-9 June 2024. We launched 27 in-app Election Centres in 24 languages, providing trusted information that was viewed 7.5 million times in the four weeks leading up to the vote.
To further ensure platform integrity, we established a Mission Control Centre in Dublin to provide 24/7 monitoring of potential election-related issues. The Centre brought together staff from multiple specialist teams within our trust and safety department to maximise the effectiveness of our work in the run-up to, and during, the elections themselves. By leveraging the Code’s Rapid Response System, we streamlined information exchange between civil society organisations, fact-checkers, and platforms. Additionally, we ensured fact-checking coverage in at least one official language of every EU Member State and launched localised media literacy campaigns to help users navigate misinformation.
In the four weeks leading up to and including the elections, our efforts resulted in the removal of 43,000 pieces of content that violated our misinformation policies and 2,600 pieces of content that violated our civic and election integrity policies, with 96% of this content removed before it was reported and over 80% removed before receiving a single view.
Ad Transparency and Political Ads Ban
Ad transparency plays an important part in our work to preserve platform safety and integrity. In the first half of 2024, we removed 1,327 ads across the EEA for violating our policies on dangerous and medical misinformation, conspiracy theories, synthetic and manipulated media, and political content. TikTok’s policies prohibit paid political ads, and we also continue to prevent monetisation of political accounts, helping to ensure TikTok remains an entertainment-focused platform while promoting transparency and user trust.
Managing Edited Media and AI-Generated Content
Earlier this year, we expanded our Edited Media and AI-Generated Content (AIGC) policy to ensure further transparency. This includes mandatory labelling of AIGC and the introduction of Content Credentials technology from the Coalition for Content Provenance and Authenticity (C2PA), enabling automatic recognition and labelling of AIGC. This adds to a tool we developed to make it easy for creators to label their AI-generated content, which has already been used by 37 million creators. Our ongoing commitment to AIGC transparency helps ensure users can easily identify synthetic content and understand the context behind content they view.
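For readers curious how Content Credentials enable automatic recognition, here is a minimal, hypothetical Python sketch. It assumes the C2PA manifest-store JSON has already been extracted from an uploaded file by standard C2PA tooling, and checks whether the active manifest declares the asset AI-generated via the IPTC trainedAlgorithmicMedia digital source type. The field names follow the public C2PA specification; the labelling logic itself is illustrative, not TikTok’s actual pipeline.

```python
import json

# IPTC digital source type that C2PA manifests use to mark fully AI-generated media.
AI_SOURCE_TYPE = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def should_label_as_aigc(manifest_store_json: str) -> bool:
    """Return True if the active C2PA manifest declares the asset AI-generated.

    Expects the manifest-store JSON that C2PA tooling extracts from a file;
    the extraction step and any platform-side handling are assumed, not shown.
    """
    store = json.loads(manifest_store_json)
    active = store.get("manifests", {}).get(store.get("active_manifest", ""), {})

    # The "c2pa.actions" assertion records how the asset was created or edited.
    for assertion in active.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                return True
    return False

# Hypothetical example: a minimal manifest store as a generative-AI tool might emit.
example = json.dumps({
    "active_manifest": "urn:uuid:1234",
    "manifests": {
        "urn:uuid:1234": {
            "assertions": [{
                "label": "c2pa.actions",
                "data": {"actions": [{
                    "action": "c2pa.created",
                    "digitalSourceType": AI_SOURCE_TYPE,
                }]},
            }],
        },
    },
})

print(should_label_as_aigc(example))  # True -> an "AI-generated" label would be applied
```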
Crisis Response: The War in Ukraine and the Israel-Hamas Conflict
During times of crisis, our response to disinformation has been swift and comprehensive. In the reporting period, we removed 7,726 videos and dismantled 3 covert influence networks (93 accounts) attempting to manipulate public opinion about the war in Ukraine. Fact-checkers working in Russian and Ukrainian reviewed content to help us make accurate moderation decisions.
Similarly, throughout the Israel-Hamas conflict, we focused on rapidly removing harmful content and providing access to reliable information. Between January and June 2024, we removed 2 networks (132 accounts in total) found to be related to the conflict.
Empowering Users, Researchers, and Fact-Checkers
Empowering our community is a key pillar of our strategy. During the reporting period, we launched media literacy campaigns across Europe to help users, especially younger audiences, critically assess content, identify misinformation, and access reliable sources.
We also work closely with researchers and fact-checkers to strengthen our defences. Through partnerships with fact-checkers across all official EU languages, we ensure quick verification and response to disinformation trends. Beyond enabling users to report misinformation, we share granular data with academic institutions and civil society organisations, fostering further research into the spread of disinformation.
Our Transparency Centre provides key insights and data to researchers, regulators, and other stakeholders, supporting collaborative efforts to combat misinformation. These initiatives are led by 6,000 safety professionals dedicated to moderating content in EU languages, ensuring harmful content either never reaches the platform or is swiftly removed.
Looking Forward
We remain fully committed to our obligations under the EU Code of Practice on Disinformation, soon to become a DSA Code of Conduct. We will continue collaborating with EU authorities, industry partners, and civil society to combat disinformation and uphold the integrity of our platform.
For more detailed insights and data, visit our Transparency Centre and explore the full report here.