By Caroline Greer, Director of Public Policy and Government Relations, Brussels
Today we are publishing our sixth Transparency Report under the EU Code of Conduct on Disinformation, which has now been incorporated under the Digital Services Act. The report covers 30 EU and EEA countries and the methodology remains unchanged from previous iterations, so trends are clear and comparable.
Elections: earlier, faster enforcement
We protected the integrity of our platform throughout elections in Croatia, Germany, Poland, Portugal and Romania in H1 2025, scaling our work to address associated risks. Continuous improvement to our detection systems drove an increase in removals for fake engagement, including more removals of impersonation accounts and other types of fake accounts. As part of this work, we shared in our Election Integrity Hub that we banned nearly 3,000 accounts impersonating Government, Politician, and Political Party Accounts (GPPPAs). We also took down more harmful election content before anyone saw it, which is especially important during elections given the potential impact on voter sentiment. Zero-view removals for violative content in the Civic and Election Integrity category increased by 15 percentage points to 90%.
Following the annulment of Romania's presidential election in December 2024, we built on the playbook that we've used in more than 200 elections since 2020 with extra precautions. We planned early with the launch of a Mission Control Centre, added resources across teams, and moved fast on risk. We kept investing in our fact-checking efforts too, and in-app election centre page views doubled to more than 2 million. We also participated in an election stress test organised by the European Commission in view of the federal elections in Germany and partnered with European fact-checkers on media literacy campaigns and speaker series events for elections in the region.
AI transparency: clearer labels, fewer takedowns
We expanded C2PA Content Credentials to new features and launched a visible watermark for AI content created with the TikTok camera, which helped drive more consistent labelling. Creator-labelled AI videos grew 36% to more than 8.7 million, while automatically labelled AIGC increased 81% to about 5.5 million. At the same time, policy-violating AIGC removals fell 53% to fewer than 25,000, and views on removed AIGC fell 47%.
Fact-checking at scale
As the number of elections increased in the first half of 2025, videos reviewed by fact-checkers more than doubled to 13,000. Supported by improved detection that surfaced more potentially violative content, removals following fact-check assessments rose 80%, and content that we removed from recommendation in the For You feed rose 123%. Appeals grew where enforcement volumes grew, but appeal success rates stayed broadly stable across policies. The system scaled without lowering the bar.
Integrity of services
We continued to act against inauthentic behaviour. Fake follower removals dropped back in line with historical trends, while we removed more fake likes and prevented large volumes of fake followers. We also continued to disrupt covert influence operations, as detailed in our monthly disclosures.
Tackling hate with better interventions
Our Holocaust education campaign reached more people and drew stronger engagement. Video interventions delivered more impressions and more clicks, while search prompts became more targeted and saw a higher click-through rate. These gains followed the refreshed in-app hub that we launched in January with UNESCO and the World Jewish Congress.
Supporting researchers
More researchers used our tools. Applications and approvals increased for both the Research Tools and the Commercial Content Library. Interest in the Code's Transparency Centre also grew, with more visits and downloads during this period.
Our focus is steady. Protect elections and civic debate. Make AI content clearer with labels and watermarking. Work closely with fact-checkers and researchers. And keep improving the speed and precision of enforcement at scale.
For more details, see our full report in the Transparency Centre.