November 19, 2025 | News

Combating extremism and radicalisation

Today we are sharing more information about how we continue to defend our platform against hate and violent extremism. This builds on our long-standing work to protect our community and foster civility. We believe that being civil and feeling safe are key to a thriving and creative community. That's why our policies against hate and violent extremism are founded on the idea of treating one another with dignity and respect on TikTok.

Our approach to combating violent extremism

TikTok does not allow violent and hateful organisations or individuals on our platform. We also prohibit the promotion or provision of material support to violent or hateful actors. If we become aware of violative content or off-platform behaviour, including attempts to recruit members, we take immediate action to remove the content and terminate the accounts involved.

We maintain strong relationships with national and international law enforcement agencies. We report cases to them as appropriate, including where we believe there is a specific, credible, and imminent threat to human life or serious physical injury.

Our advanced moderation technologies and teams work together to remove this type of content from the platform. In the first half of this year, we removed more than 6.5 million videos for violating our rules against violent and hateful organisations, representing just under 2% of all violative content removed during those six months. Of these videos, 98.9% were taken down before being reported to us, and 94% were removed within 24 hours.


Disrupting extremist networks

Combating groups promoting online hate and violent extremism is a long-standing societal and industry-wide challenge. These groups are deeply motivated. They constantly adapt their tactics to avoid detection. This includes finding new ways to covertly reference hateful ideologies or hiding behind seemingly innocuous identities. As violent extremist methodologies and evasion techniques continue to evolve, so do we. That's why we work with safety and security experts and threat-detection partners around the world to understand trends and regularly evaluate our policies and practices.

Today, we are bringing more transparency to how we are using technology to advance our existing and long-standing moderation efforts in this space. We know that sometimes accounts work together as a network to spread hate and violence and use more subtle techniques to break our rules. When we find these networks, we conduct detailed investigations to identify and disrupt further accounts. Some of the signals we look for include:

  • accounts that try to hide from our detection systems by using coded language, such as a combination of emojis that references certain hateful narratives
  • accounts that systematically remove content and rebrand their profiles to avoid detection
  • accounts that try to direct people to off-platform sites where they can freely share their materials.
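Signals like these can be thought of as inputs to a weighted risk heuristic. The Python sketch below is purely illustrative: TikTok has not disclosed its detection logic, and every field name, weight, and threshold here is a hypothetical assumption, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Simplified account snapshot; all fields are hypothetical."""
    coded_term_hits: int = 0   # matches against a coded-language watchlist
    recent_deletions: int = 0  # videos the account removed itself
    profile_renames: int = 0   # profile rebrands within the review window
    offsite_links: int = 0     # links pointing to unmoderated sites

def network_risk_score(acct: AccountActivity) -> int:
    """Sum weighted signals; a real system would use far richer features
    and human review rather than a single score."""
    score = 0
    score += 3 * acct.coded_term_hits              # coded-language use
    score += 2 * (acct.recent_deletions >= 5)      # systematic deletion
    score += 2 * (acct.profile_renames >= 2)       # repeated rebranding
    score += 1 * (acct.offsite_links > 0)          # off-platform funnels
    return score

# An account combining several signals crosses an (assumed) review threshold.
suspect = AccountActivity(coded_term_hits=2, profile_renames=3)
needs_review = network_risk_score(suspect) >= 5
```

In practice, a single signal in isolation is weak evidence, which is why the scoring combines several; accounts above the threshold would go to investigators, not to automated removal.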

So far this year, we have taken down 17 such networks around the world, comprising more than 920 accounts dedicated to spreading hate. These groups promoted a range of extremist ideologies, with content that celebrated terrorist attacks, supported mass violence, or targeted protected groups with hate.

Once we remove these networks, we continue to track them so we can quickly disrupt their attempts to re-establish a presence on TikTok. This prevents a network from regaining its original size and following. As we learn more about how they operate, we do more to make TikTok the most hostile platform for these bad actors.

Piloting new community resources

In the fight against online hate and violent extremism, we can also use technology to help us reach and educate communities where they are. We already offer our global community access to resources that can help them develop media literacy skills and encourage them to view and create online content responsibly and critically. For example, we've long been directing users to authoritative educational resources about the Holocaust when they search for related words.

Today, we're announcing a new partnership with Violence Prevention Network, an organisation dedicated to stopping violent extremism. Together, we will design new resources aimed at building community resilience against these types of actors. These resources will initially be available for our community in Germany to help them think more critically about the content they see. They will be able to find these resources in-app when they search for words related to violent extremism. We will evaluate the impact of this approach as we consider bringing it to other parts of the world.

Strengthening industry collaboration against violent extremism

Working together is essential to protect our communities from violent extremists online. That's why we're proud to now be a member of the Global Internet Forum to Counter Terrorism (GIFCT). This membership recognises years of TikTok's consistent efforts against hate and violent extremism, including playing an active role in GIFCT's Working Groups on key issues like Transparency and Incident Response.

Working with GIFCT and partners like Tech Against Terrorism helps us with:

  • Practical support for our Trust & Safety teams
  • Better understanding of emerging threats from peers and experts
  • Continuous improvement in our approach, including our policies


Online hate and violent extremism have no place on TikTok. We are resolute in our fight against them, as we continue to provide our global community with a safe and inclusive space where they can discover, connect and share.