By Michael Beckerman, VP and Head of US Public Policy, and Eric Han, Head of Safety, US

Today we published our global Transparency Report for the second half of 2019 (July 1 - December 31, 2019). This report provides insight into the volume and nature of requests we receive from law enforcement bodies around the world and how we responsibly respond to them. To increase transparency into our content moderation practices and actions, we've expanded this report to include information about our approach to enforcing the policies and practices that keep TikTok a safe and authentic place for creative expression. This means we're sharing for the first time the global volume of videos removed for violating our Community Guidelines or Terms of Service, including the 5 markets with the largest volume of removed videos.

Background

We published our inaugural Transparency Report last December, which covered the legal requests we received and our response to them during the first half of 2019. We're publishing these reports regularly and plan to provide more information in future reports as we continue to invest in our infrastructure, improve our reporting systems, and develop new safety policies, practices, and partnerships.

Our approach to safety  

Around the world, tens of thousands of videos are uploaded to TikTok every minute. With every video comes a greater responsibility to protect the safety and well-being of our users. As a global platform, we have thousands of people across the markets where TikTok operates working to maintain a safe and secure app environment for everyone.

This includes our US-based safety team, which has grown considerably over the last year under the leadership of our Head of Safety for TikTok US, as well as our Trust & Safety hubs in California, Dublin, and Singapore. Our global safety teams comprise experienced industry professionals whose backgrounds span product, policy, compliance, child safety, law, privacy, and NGOs. These dedicated hubs focus on strengthening policies, technologies, and moderation strategies and on ensuring that they complement both local culture and context. Our Trust & Safety leaders collaborate closely with regional regulators, policymakers, and government and law enforcement agencies in pursuit of the highest possible standard of user safety.

To enforce our Community Guidelines, we use a combination of technology and content moderation to identify and remove content and accounts that don’t meet our standards. 

  • Technology: Our systems automatically flag certain types of content that may violate our Community Guidelines, which enables us to take swift action and reduce potential harm. These systems take into account things like patterns or behavioral signals to surface potentially violative content.
  • Content moderation: Technology today isn't advanced enough for us to rely on it alone to enforce our policies. For instance, context can be important when determining whether certain content, like satire, is violative. As such, our team of trained moderators helps to review and remove content. In some cases, this team proactively removes evolving or trending violative content, such as dangerous challenges or harmful misinformation. We also moderate content based on reports we receive from our users. We try to make it easy for users to flag potentially inappropriate content or accounts through our in-app reporting feature, which lets a user choose from a list of reasons why they think something might violate our guidelines (such as violence or harm, harassment, or hate speech). If our moderators determine there's a violation, the content is removed.

We continue to educate our users about the additional options and controls available to them, sharing that information directly through in-app safety and well-being videos and on our Safety Center. On our new Transparency webpage, we also share more information about the steps we're taking to help keep our platform safe, details about our company operations, and resources like our Community Guidelines and Transparency Reports.

How we enforced our Community Guidelines & Terms of Service

At TikTok, we celebrate creative expression, but we also prioritize protecting against harm. Taking action on content that violates our policies is a critical part of fulfilling our responsibility to our users.

In the second half of last year (July 1 - December 31, 2019), we removed 49,247,689 videos globally for violating our Community Guidelines or Terms of Service, which is less than 1% of all the videos our users uploaded. Our systems proactively caught and removed 98.2% of those videos before a user reported them, and 89.4% of the removed videos were taken down before they received any views. Our Transparency Report provides detail on the markets with the largest volumes of removed videos.

At the end of last year, we started to roll out a new content moderation infrastructure that enables us to be more transparent in reporting the reasons that videos are removed from our platform. In this report, we're sharing a breakdown of policy category violations for videos removed in the month of December under that new content moderation infrastructure. Note that when a video violates our Community Guidelines, it's labeled with the policy or policies it violates and is taken down. This means that the same video may be counted across multiple policy categories. We've since transitioned the majority of our content review queues to our new content moderation system, and our subsequent reports will include more detailed data for the full time period of each report.

In addition to removing content, we continue to take meaningful steps to promote a positive platform and provide visibility into our practices. For example, this year we've introduced more sophisticated policies; created our U.S. Content Advisory Council; launched Family Pairing safety features; announced global Transparency Centers in Los Angeles and Washington, D.C.; explained how videos are recommended #ForYou; joined WePROTECT Global Alliance; and announced our support for the Voluntary Principles to counter online child exploitation. 

Legal requests for user information

Like all internet platforms, we receive legal requests for user information from government agencies around the world. Any information request we receive is carefully reviewed for legal sufficiency to determine, for example, whether the requesting entity is authorized to gather evidence in connection with a law enforcement investigation or to investigate an emergency involving imminent harm. In the second half of 2019, we received 500 legal requests for information from 26 countries, and our report details how we responded.

Government requests for content removal   

From time to time we receive requests from government agencies to remove content on our platform, such as requests based on local laws prohibiting obscenity, hate speech, adult content, and more. We review all such material in line with our Community Guidelines, Terms of Service, and applicable law and take the appropriate action. If we believe that a request isn't legally valid or that the reported content doesn't violate our standards, we may not take action on the content. During the second half of 2019, we received 45 requests to remove or restrict content from government bodies in 10 countries. Our report includes more detail.

Takedowns for infringement of intellectual property

Our Community Guidelines and Terms of Service prohibit content that infringes on third-party intellectual property. The creativity of our users is the fuel of TikTok. Our platform enables their self-expression to shine, and we do our best to protect it. In the second half of 2019, we evaluated 1,338 copyright content takedown notices, and our report shows how we responded.

Looking ahead

As our young company continues to grow, we're committed to taking a responsible approach to building our platform and moderating content. We're working every day to be more transparent about the violative content we take down and to offer our users meaningful ways to have more control over their experience, including the option to appeal if we get something wrong. We'll continue to evolve our Transparency Report to provide greater visibility into our practices and actions, and to address the feedback we hear from our users and outside stakeholders.

Additionally, we're on track to open our global Transparency Centers in Los Angeles and Washington, D.C. this year. These centers will provide invited experts and policymakers with an opportunity to view first-hand how our teams address the challenging and critically important work of moderating content on TikTok.

Our ultimate goal is to keep TikTok an inspiring and joyful place for everyone to create.