By Cormac Keenan, Head of Trust and Safety, TikTok

Today we're releasing our Q2 Community Guidelines Enforcement Report, which details the volume and nature of violative content and accounts removed from TikTok to protect the safety of our community and the integrity of our platform. We're also sharing updates on our work to protect people from abusive behaviour.

A look at our latest Community Guidelines Enforcement Report

As detailed in our report, 81,518,334 videos were removed globally from April to June for violating our Community Guidelines or Terms of Service, which is less than 1% of all videos uploaded to TikTok. Of those videos, we identified and removed 93.0% within 24 hours of posting and 94.1% before a user reported them. 87.5% of removed content had zero views, an improvement on the 81.8% we reported last quarter.
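
To put those percentages in absolute terms, here is a quick back-of-the-envelope calculation using only the figures quoted above (the results are approximate, since the percentages are rounded):

```python
total_removed = 81_518_334  # videos removed globally, April-June

# Shares quoted in the report
within_24_hours = 0.930    # identified and removed within 24 hours of posting
before_any_report = 0.941  # removed before a user reported them
zero_views = 0.875         # removed before receiving a single view

print(f"Removed within 24 hours: {total_removed * within_24_hours:,.0f}")
print(f"Removed before a report: {total_removed * before_any_report:,.0f}")
print(f"Removed with zero views: {total_removed * zero_views:,.0f}")
```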

We're continuing to make steady progress in our proactive detection of hateful behaviour, bullying, and harassment. For example, 73.3% of harassment and bullying videos were removed before any reports, compared to 66.2% in the first quarter of this year, while 72.9% of hateful behaviour videos were removed before any reports, compared to 67.3% from January to March. This progress is attributable to ongoing improvements to our systems that proactively flag hate symbols, words, and other abuse signals for further review by our safety teams.
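
The report doesn't describe these systems in technical detail, but the general pattern it points to, where automated signals flag content for further human review rather than removing it outright, can be sketched roughly as follows. Everything here (the names, the signal lists, the shape of a video record) is illustrative, not TikTok's actual implementation:

```python
from dataclasses import dataclass, field

# Illustrative signal lists; a production system would rely on trained
# classifiers over video, audio, and text rather than static sets.
HATE_SYMBOLS = {"symbol_a", "symbol_b"}
ABUSE_TERMS = {"term_a", "term_b"}

@dataclass
class Video:
    video_id: str
    caption: str
    detected_symbols: set = field(default_factory=set)

def flag_for_review(video: Video, review_queue: list) -> bool:
    """Queue a video for human review if any abuse signal fires.

    Flagging routes the video to safety teams for contextual judgement
    (e.g. reappropriation vs. a slur); it does not remove the video."""
    caption_terms = set(video.caption.lower().split())
    if (video.detected_symbols & HATE_SYMBOLS) or (caption_terms & ABUSE_TERMS):
        review_queue.append(video.video_id)
        return True
    return False
```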

Harassment as a whole, and hate speech in particular, are highly nuanced and contextual issues that can be challenging to detect and moderate correctly every time. For instance, reappropriation of a term is not a violation of our policies, but using that reappropriated term to attack or abuse another person would violate our hateful behaviour policy. Bullying can be highly personal and can require offline context that isn't always available. To better enforce our policies, we regularly train and guide our team on how to differentiate between, for instance, reappropriation and slurs, or satire and bullying. We've also rolled out unconscious bias training for our moderators and hired policy experts in civil rights, equity, and inclusion. As we make continual improvements to our detection mechanisms, we are striving to get these critical issues right for our community. We encourage people to report accounts or content that may be in violation of our Community Guidelines.

Reaffirming our commitment to combating antisemitism

Today, as participants in the Malmö International Forum on Holocaust Remembrance and Combating Antisemitism, we're proud to reaffirm our commitment to combating antisemitic content on TikTok by continuing to strengthen our policies and enforcement actions. We also want to keep expanding our work with NGOs and civil society groups so they can harness the power of TikTok to share their knowledge with new audiences, and to direct our community to educational resources so people can learn about the Holocaust and modern-day antisemitism.

Building anti-abuse efforts into our product

In addition to removing content, we empower people to customize their experience with a range of tools and resources, including effective ways to filter comments on their content, delete or report multiple comments at once, and block accounts in bulk. 

We've also added prompts that encourage people to consider the impact of their words before posting a potentially unkind or violative comment. These prompts are already having an effect: nearly 4 in 10 people who see one choose to withdraw and edit their comment. Though not everyone chooses to change their comment, we're encouraged by the impact of features like this, and we continue to develop and test new interventions to prevent potential abuse.
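
One way such a nudge could be wired up, purely as an illustration (the scoring model, threshold, and UI callback here are hypothetical; the post doesn't specify TikTok's mechanism):

```python
from enum import Enum, auto

class Choice(Enum):
    POST_ANYWAY = auto()
    EDIT = auto()
    WITHDRAW = auto()

def submit_comment(text: str, unkindness_score: float, ask_user,
                   threshold: float = 0.8):
    """Show a reconsideration prompt for potentially unkind comments
    instead of blocking them outright; the commenter decides."""
    if unkindness_score < threshold:
        return ("posted", text)      # benign comment, no prompt shown
    choice = ask_user(text)          # UI callback that shows the prompt
    if choice is Choice.POST_ANYWAY:
        return ("posted", text)
    if choice is Choice.EDIT:
        return ("editing", text)     # send the commenter back to the composer
    return ("withdrawn", None)
```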

Today we're expanding on these features with improved mute settings for comments and questions during livestreams. Livestreaming on TikTok is an exciting way for creators and viewers to connect, and we're building safety into the experience by design. Now, the host or their trusted helper can temporarily mute an unkind viewer for a few seconds or minutes, or for the duration of the LIVE. If an account is muted for any amount of time, that person's entire comment history will also be removed. Hosts on LIVE can already turn off comments or limit potentially harmful comments using a keyword filter. We hope these new controls further empower hosts and audiences alike to have safe and entertaining livestreams.
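
The mute behaviour described above, a timed or stream-long mute that also clears the muted account's comment history, can be modelled roughly like this (a minimal sketch of the described behaviour, not TikTok's actual code):

```python
import time

class LiveSession:
    """Minimal model of the LIVE mute controls described above."""

    def __init__(self):
        self.comments = []     # (user_id, text) pairs, in posting order
        self.muted_until = {}  # user_id -> mute expiry (epoch seconds)

    def mute(self, user_id, seconds=None):
        """Mute a viewer for `seconds`, or for the rest of the LIVE when
        no duration is given, and clear their comment history either way."""
        expiry = float("inf") if seconds is None else time.time() + seconds
        self.muted_until[user_id] = expiry
        self.comments = [(u, t) for u, t in self.comments if u != user_id]

    def post_comment(self, user_id, text):
        if time.time() < self.muted_until.get(user_id, 0.0):
            return False       # dropped: the viewer is still muted
        self.comments.append((user_id, text))
        return True
```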

For more information on the steps we're taking to protect the safety of our community and the integrity of our platform, we encourage you to read our full Q2 report.