By Eric Han, Head of Safety, TikTok US

TikTok is a diverse and growing community. Our users include grandparents who show us their dance moves; farmers who show us their fields; nurses who share what life's like on the front lines of the pandemic; and activists who bring us along to marches and protests. Part of the magic of TikTok is how people from all walks of life use the platform to connect through creative, authentic expression. 

As the Head of Safety, my team and I work to protect our users from the things that could interfere with their ability to express themselves safely and have a positive experience on the app. In what can feel like an increasingly divisive world, one of the areas we're especially intent on improving is how we address hateful content and behavior through our policies and enforcement. Our goal is to eliminate hate on TikTok.

Today we want to share five ways we're working diligently to counter the spread of hate on our platform. 

1) Evolving our hate speech policy. We continue to take a proactive approach to stopping the spread of hate and of known hate and violent extremist groups. It starts with our Community Guidelines – the code of conduct for our growing global community – which prohibit any form of hateful speech or ideology, as well as harassment and bullying. We define hate speech as "content that intends to or does attack, threaten, incite violence against, or dehumanize an individual or group of individuals on the basis of protected attributes like race, religion, gender, gender identity, national origin, and more."

We work to proactively detect and remove such content before it reaches our community, and we carefully review content our community reports to us. We regularly consult with experts to help our policies evolve as hateful behavior itself does. For example, this year we expanded these policies to better account for scenarios that make others feel excluded or marginalized. It's our job to remain vigilant for the sake and safety of our community. 

2) Countering hateful speech, behavior, and groups. We do not tolerate hate on TikTok. We remove hateful content – including race-based harassment – from our platform as we become aware of it, and we ban accounts that repeatedly promote it. We have a zero-tolerance stance on organized hate groups and those associated with them, such as accounts that spread or are linked to white supremacy or nationalism, male supremacy, antisemitism, and other hate-based ideologies. We also remove content that denies violent tragedies, such as the Holocaust and slavery. To protect people from harm, we may take off-platform behavior into consideration as we enforce our policies – for example, banning an account that belongs to the leader of a known hate group.

Since the start of 2020, we've removed more than 380,000 videos in the US for violating our hate speech policy. We also banned more than 1,300 accounts for hateful content or behavior, and removed over 64,000 hateful comments. To be clear, these numbers don't reflect a 100% success rate in catching every piece of hateful content or behavior, but they do indicate our commitment to action.

Another strategy is making it more difficult for people to find hateful content and accounts, which creates room for more positive and joyful content. For instance, if someone searches for a hateful ideology or group, such as "heil Hitler" or "groyper," we take various approaches to stop the spread of hate, such as removing related content, refraining from showing results, or redirecting the search to our Community Guidelines to educate our community about our policies against hateful expression. It's not a fail-safe solution, but we work to quickly apply this approach to hate groups as they emerge.

3) Increasing cultural awareness in our content moderation. As a Safety team, it's on us to recognize the many forms hate takes so that we can develop policies and enforcement strategies to combat it effectively. We periodically train our enforcement teams to better detect evolving hateful behavior, symbols, terms, and offensive stereotypes. 

For example, we acknowledge that different communities have different lived experiences, and language previously used to exclude and demean groups of people is now being reclaimed by these communities and used as terms of empowerment and counterspeech. We're working to incorporate the evolution of expression into our policies and are training our moderation teams to better understand more nuanced content like cultural appropriation and slurs. If a member of a disenfranchised group, such as the LGBTQ+, Latinx, Asian American and Pacific Islander, Black, and Indigenous communities, uses a slur as a term of empowerment, we want our moderators to understand the context behind it and not mistakenly take the content down. On the other hand, if a slur is being used hatefully, it doesn't belong on TikTok. Educating our content moderation teams on these important distinctions is ongoing work, and we strive to get this right for our users. 

4) Improving transparency with our community. We want our community to know that we're listening to their feedback, and we're working to increase transparency into the reasons content may be removed. For example, we recently released a feature that notifies users if they duet or react to a video that was removed for violating our Community Guidelines. We built this feature in response to feedback from users who made duets condemning other content; without that clarity, they often felt betrayed when their own video was removed because the original video they duetted with had been taken down.

We're also working to improve our appeals process and to proactively educate our community about our Community Guidelines. As an example, if a user searches for #whitelivesmatter, they'll be reminded of our guidelines and our commitment to fostering a respectful and diverse community.

5) Investing in our teams and partnerships. We continue to invest in our ability to detect hateful or abusive behavior and triage it to our enforcement teams as quickly as possible. We recently added leaders with deep expertise in these areas to our product and engineering teams to focus on enforcement-related efficiency and transparency.

We also actively work to learn and get feedback from experts, like those on our Content Advisory Council and civil society organizations. Our industry hasn't always gotten these decisions right, but we are committed to learning from the mistakes of others – and our own. We expect to be held accountable for both our shortcomings and our progress; by working together, we will continue to improve the policies, processes, and products that keep TikTok a place where everyone feels welcome.

We recognize that completely eliminating hate on TikTok may be an insurmountable challenge – but that won't stop us from trying. Every bit of progress we make brings us that much closer to a more welcoming community experience for people on TikTok and out in the world. These issues are complex and constantly changing – not just for us, but for all internet companies. We are committed to getting it right for our community.