By: Cormac Keenan, Head of Trust & Safety, TikTok
At TikTok, we strive to foster a fun and inclusive environment where people can create, find community, and be entertained. To maintain that environment, we take action to remove content that violates our Community Guidelines, and limit the reach of content that may not be suitable for a broad audience.
Safety has no finish line, and we share quarterly updates on our progress as part of holding ourselves accountable. Today, we're providing context around our Q4 2022 Community Guidelines Enforcement Report, including updates we've made to improve the accuracy and efficiency of content moderation at TikTok.
Moderating at scale
Our goal is to identify and remove violative content as swiftly as possible, and this is no small feat given TikTok's scale. To focus our efforts, we work to:
- Prioritize the fastest removal of highly egregious content, such as child sexual abuse material (CSAM) and violent extremism
- Minimize overall views of content that violates our Community Guidelines
- Ensure accuracy, consistency, and fairness for creators
To help us achieve this, we deploy a combination of automated technology and skilled human moderators who can make contextual decisions on nuanced topics like misinformation, hate speech, and harassment.
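To make that division of labor concrete, here is a minimal sketch of how such a hybrid setup could route decisions; this is an illustration only, not TikTok's actual system, and the function names, categories, and thresholds are all hypothetical assumptions:

```python
# Illustrative sketch only: the categories, threshold, and function
# are hypothetical, not TikTok's actual moderation system.

# Topics where context matters and a human should decide.
NUANCED_CATEGORIES = {"misinformation", "hate_speech", "harassment"}

# Model confidence above which an automated removal is considered safe.
AUTO_REMOVE_THRESHOLD = 0.98

def route_flagged_video(category: str, model_confidence: float) -> str:
    """Decide whether automation acts alone or a moderator reviews."""
    if category in NUANCED_CATEGORIES:
        # Contextual topics always go to a skilled human moderator.
        return "human_review"
    if model_confidence >= AUTO_REMOVE_THRESHOLD:
        # Clear-cut violations can be removed automatically.
        return "auto_remove"
    # Everything else gets a second look before any action is taken.
    return "human_review"
```

The key design point is that automation handles unambiguous, high-volume cases, while anything contextual or low-confidence is deferred to people.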
Evolving our approach
In the past, we've scaled moderation by casting a wider net for review and working to catch as much violative content as possible. While this generally increased the number of videos we removed, raw removal counts don't capture our overarching safety goals of prioritizing egregious content, minimizing overall views, and ensuring accurate and consistent decisions. As our community has continued to grow and express itself, including through new experiences like longer videos, LIVE, and TikTok Now, our approach to content moderation has evolved as well. We're increasingly focused on preventing overall harm across features while building a fair, accurate experience for our creators.
As a result, in recent months we've started refining our approach to better prioritize accuracy, minimize views of violative content, and remove egregious content quickly. We've upgraded the systems that route content for review so that they better incorporate a video's severity of harm (based on the type of potential violation) and expected reach (based on an account's following) when determining whether to remove it, escalate it for human review, or take a different course of action. We're leveraging measures like age-restricted features, ineligibility for recommendation, and our new Content Levels system more frequently and transparently, in order to reduce harm by limiting who can create or see certain content. And our proactive technology is driving down the amount of content that needs review as it grows more sophisticated at catching things like spam accounts at sign-up or duplicative content.
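As a rough illustration of the routing logic described above, a system like this might combine a severity tier with expected reach to choose among removal, escalation, and reach-limiting measures. The tiers, follower threshold, and action names below are illustrative assumptions, not TikTok's implementation:

```python
# Hypothetical sketch of severity-and-reach routing; all values,
# tiers, and action names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Video:
    violation_severity: int  # e.g. 3 = egregious (CSAM, violent extremism)
    follower_count: int      # proxy for the video's expected reach

def route(video: Video) -> str:
    """Choose an enforcement action from severity and expected reach."""
    if video.violation_severity >= 3:
        # Egregious content is removed immediately, regardless of reach.
        return "remove_immediately"
    if video.violation_severity == 2:
        # Mid-severity content from high-reach accounts is escalated
        # so a human can act before views accumulate.
        if video.follower_count > 100_000:
            return "escalate_for_human_review"
        return "queue_for_review"
    # Low-severity content gets reach-limiting measures instead of
    # removal, e.g. recommendation ineligibility or Content Levels.
    return "limit_reach"
```

The point of weighing reach alongside severity is prioritization: the same borderline video is more urgent when an account's following means many more people are likely to see it.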
Our Q4 2022 Transparency Report
The impact of these changes is already reflected in the data from our Q4 Community Guidelines Enforcement Report. For example, total content removals dipped as we made more low-harm content ineligible for the For You Feed rather than removing it outright. At the same time, the proportion of that content that was accurately removed by automation increased as our systems became more precise. This fluctuation is within our expected range (we consistently remove 1% or less of published content for being violative) and is part of a concerted effort to make our growing community safer and to foster a more consistent experience for our creators.
Such metric fluctuations may continue as we evolve our systems over the coming year. For example, we've already made significant additional improvements this year, such as introducing a new account enforcement system and comprehensively refreshing our Community Guidelines with new policy groupings that future Enforcement Reports will follow. We're also continuing to refine moderation processes behind the scenes, for instance by specializing more content moderation teams around areas of expertise.
As we continue our work to build a safe, inclusive, and authentic home for our global community, we look forward to sharing more about our evolving efforts to prevent harm.