By Eric Han, Head of US Safety, TikTok

TikTok is a diverse, global community fueled by creativity, and we believe people should be able to express themselves creatively and be entertained in a safe and welcoming environment. To maintain that environment, we develop tools and technology to empower creators and counter violations of our Community Guidelines.

Over the last year, we've been trialing and adjusting new systems in different markets that identify and remove violative content and notify people of their violations. Today, we're bringing these systems to the US and Canada as we work to advance the safety of our community and the integrity of our platform.

Evolving content moderation on TikTok

Our US-based Safety team is responsible for developing and enforcing the policies and safety strategies aimed at keeping people across the US and Canada safe. Like most user-generated content platforms, content uploaded to TikTok initially passes through technology that works to identify and flag potential policy violations for further review by a safety team member. If a violation is confirmed, the video will be removed, and the creator will be notified of the removal and the reason for it, and given the opportunity to appeal. If no violation is identified, the video will be posted and others on TikTok will be able to view it.
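The review flow described above can be summarized as a simple decision sketch. This is purely illustrative; the function and outcome names are our own shorthand for this post, not actual system internals.

```python
def review_upload(flagged_by_technology: bool, violation_confirmed: bool) -> str:
    """Illustrative sketch of the upload review flow: technology flags
    potential policy violations, and a safety team member then confirms
    or clears the flag. Names here are hypothetical shorthand."""
    if not flagged_by_technology:
        return "posted"  # no potential violation identified at upload
    if violation_confirmed:
        # Creator is notified of the removal and reason, and may appeal.
        return "removed"
    return "posted"  # flag not confirmed on review; video is posted
```

For example, a video that is flagged at upload but cleared on review is posted like any other, while a confirmed violation is removed with notice and a path to appeal.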

Over the next few weeks, we'll begin using technology to automatically remove some types of violative content identified upon upload, in addition to removals confirmed by our Safety team. Automation will be reserved for content categories where our technology has the highest degree of accuracy, starting with violations of our policies on minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods. While no technology can be completely accurate in moderating content, where decisions often require a high degree of context or nuance, we'll keep improving the precision of our technology to minimize incorrect removals. Creators will be able to appeal their video's removal directly in our app or report potential violations to us for review, as they can today.

In addition to improving the overall experience on TikTok, we hope this update also supports resiliency within our Safety team by reducing the volume of distressing videos moderators view and enabling them to spend more time in highly contextual and nuanced areas, such as bullying and harassment, misinformation, and hateful behavior. Our Safety team will continue to review community reports, appeals, and content flagged by technology, and to remove violations. Note that mass reporting content or accounts does not lead to an automatic removal or to a greater likelihood of removal by our Safety team.

As we've detailed in our Transparency Reports, this technology initially launched in places where additional safety support was needed due to the COVID-19 pandemic. Since then, we've found that the false positive rate for automated removals is 5%, and requests to appeal a video's removal have remained consistent. We hope to continue improving our accuracy over time.

Helping people understand our Community Guidelines

We've also evolved the way we notify people of the Community Guidelines violations they receive, to bring more visibility to our policies and reduce repeat violations. The new system tracks the violations a user accrues and weighs both their severity and their frequency. People will be notified of the consequences of their violations, starting in the Account Updates section of their Inbox, where they can also see a record of their accrued violations.

More frequent violations will accrue more penalties and notifications in different parts of our app.

Here’s how it works:

First violation

  • Send a warning in the app, unless the violation falls under a zero-tolerance policy, in which case the account will be automatically banned.

After the first violation

  • Suspend an account's ability to upload videos, comment, or edit its profile for 24 or 48 hours, depending on the severity of the violation and any previous violations.
  • Or, restrict the account to a view-only experience for 72 hours or up to one week, meaning the account can't post or engage with content.
  • Or, after several violations, notify the user that their account is on the verge of being banned. If the behavior persists, the account will be permanently removed.

Violations of our zero-tolerance policies, such as posting child sexual abuse material, automatically result in an account's removal. We may also block a device to help prevent future accounts from being created.
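The escalation path above can be sketched as a simple set of rules. The thresholds below are assumptions chosen for illustration only; actual consequences depend on the severity and frequency of the violations, as described above.

```python
def penalty(violation_count: int, zero_tolerance: bool) -> str:
    """Illustrative sketch of the escalation path: warning, temporary
    suspension, view-only restriction, then permanent ban. The exact
    counts used here are hypothetical, for illustration only."""
    if zero_tolerance:
        return "permanent_ban"  # zero-tolerance violations remove the account
    if violation_count <= 1:
        return "warning"  # first violation: in-app warning
    if violation_count <= 3:
        return "temporary_suspension"  # 24- or 48-hour suspension
    if violation_count <= 5:
        return "view_only_restriction"  # 72 hours up to one week
    return "permanent_ban"  # persistent behavior after a final warning
```

A first ordinary violation yields a warning, while any zero-tolerance violation results in a ban regardless of history.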

While we strive to be consistent, neither technology nor humans will get moderation decisions correct 100% of the time, which is why it's important that creators can continue to appeal their content's or account's removal directly in our app. If their content or account has been incorrectly removed, it will be reinstated, the penalty will be erased, and it will not impact the account going forward. Accrued violations will expire from a person's record over time.

We developed these systems with input from our US Content Advisory Council, and in testing them in the US and Canada over the last few weeks, over 60% of people who received a first warning for violating our guidelines did not go on to a second violation. The more transparent and accessible our policies are, the less likely people are to violate them, and the more people can create and be entertained on TikTok.

People spend substantial time and energy creating content for TikTok, and it's critical to us that our systems for moderating content be accurate and consistent. We want to hear from our community about their experiences so that we can continue to make improvements as we work to keep the platform safe and inclusive for our global community.