From spoken word artists to breakout singers and footballers, TikTok is home to a diverse and vibrant community where people from all backgrounds come together to find joy, inspire creativity, and express themselves. 

To foster an environment that allows creative expression to flourish, our Trust & Safety teams continually work to protect our community from content that could interfere with their ability to have a positive experience. We're especially focused on improving our policies and actions to address hateful content and behaviour. Here's a look at some of the work being done to ensure TikTok continues to be an inclusive platform for everyone:

Taking a stand against hateful ideologies

Our Community Guidelines are the foundation of our work to help our community feel supported and safe on TikTok. These guidelines reflect our values, and they make clear that hateful ideologies are incompatible with the inclusive and supportive community that our platform provides.

In co-operation with academics and experts from across the globe, we regularly evaluate our enforcement processes to ensure that we are supporting our community and protecting against new risks as they emerge. As part of our efforts to prevent hateful ideologies from taking root, we will stem the spread of coded language and symbols that can normalise hateful speech and behaviour.

While our Trust & Safety teams already work to remove hate speech and hateful ideologies, such as neo-Nazism and white supremacy, we are strengthening our enforcement action to remove neighbouring ideologies, such as white nationalism and white genocide theory, as well as statements that originate in these ideologies, and movements such as Identitarianism and male supremacy.

As many monitoring organisations report that antisemitic sentiment is rising around the world, we're proud to have already taken steps to keep our community safe, for example, by not permitting content that denies the Holocaust and other violent tragedies. We know there's always more we can do, which is why we are taking further action to remove misinformation and hurtful stereotypes about Jewish, Muslim and other communities. This includes misinformation about notable Jewish individuals and families who are used as proxies to spread antisemitism. We're also removing content that is hurtful to the LGBTQ+ community, including content that promotes conversion therapy and the idea that no one is born LGBTQ+.

Increasing cultural awareness in our content moderation

As a Trust & Safety team, it's on us to recognise the many forms hate takes and to develop policies and enforcement strategies to combat it effectively. We regularly train our enforcement teams to better detect evolving hateful behaviour, symbols, terms, and offensive stereotypes. 

For example, we acknowledge that different communities have different lived experiences, and language previously used to exclude and demean groups of people is now being reclaimed by these communities and used as terms of empowerment and counter-speech. We're working to incorporate the evolution of expression into our policies and are training our enforcement teams to better understand more nuanced content like cultural appropriation and slurs. If a member of a disenfranchised group, such as the LGBTQ+, Black, Jewish, Roma and minority ethnic communities, uses a word as a term of empowerment, we want our enforcement teams to understand the context behind it and not mistakenly take the content down. On the other hand, if a slur is being used hatefully, it doesn't belong on TikTok. Educating our enforcement teams on these crucial distinctions is ongoing work, and we strive to get this right for our community. 

Improving transparency with our community

We're working to increase transparency into the reasons content may be removed so that our community members can continue to use our platform while being mindful of how their actions could be hurtful to others. When we remove content, the original creator is notified and they have the option to ask us to review our decision.

Investing in our teams and partnerships

We continue to invest in our ability to detect hateful or abusive behaviour and route it to our enforcement teams as quickly as possible. We have added leaders with deep expertise in these areas to our product and engineering teams to focus on enforcement-related efficiencies and transparency.

The above follows our recent announcement of joining the European Commission's Code of Conduct on Countering Illegal Hate Speech Online and the steps we're taking to support body positivity on TikTok.

Enforcement of these policies will be an ongoing process on our end as we work to identify and respond to evolving content and behaviour, just as we constantly iterate on our protective measures as a whole. We remain committed to serving the needs of our community, which is why we want to be transparent about the additional steps we are taking to keep hate off TikTok.

We will continue to improve our policies, processes, and products to keep TikTok a place where everyone feels welcome. As we've said before, our end goal is to eliminate hate on TikTok, and although this might seem like a tall mountain to climb, we're ready for the challenge.