Update October 2, 2024 at 5:45pm GMT

As we approach the one-year anniversary of the horrific attack carried out by Hamas in Israel, we're providing another update on our ongoing work to safeguard our platform.

Strengthened policies against hate speech and hateful behavior

Since our last update, we rolled out refreshed Community Guidelines and began enforcing expanded hate speech and hateful behavior policies. These policies aim to better address implicit or indirect hate speech and to create a safer, more civil environment for everyone. They build on our long-standing policies against antisemitism and other hateful ideologies. We also updated our hate speech policy to recognize content that uses "Zionist" as a proxy for a protected attribute when the word is not used to refer to a political ideology but is instead used as a proxy for Jewish or Israeli identity. This policy was implemented early this year after we observed a rise in the word being used in a hateful way.


To enforce our policies, we've continued to invest in both automated moderation technology - which now takes down 80% of the content removed from TikTok - and human moderators. We've continued to update and expand our hate speech policy refreshers, trainings, and course materials, including implicit bias training addressing antisemitism and Islamophobia. We also received additional training from the Anti-Defamation League and the American Jewish Committee to deepen our understanding of new threats facing the Jewish community. We continue to improve our hate speech detection with an expanded audio hash bank that helps detect hateful sounds, as well as updated machine learning models that recognize emerging hateful content. In addition, we increased fact-checking resources and expanded our fact-checking program by partnering with Fatabyyano to fact-check content in the Middle East.
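An audio hash bank works by fingerprinting audio clips that moderators have already confirmed as violating, then checking new uploads against those fingerprints. The sketch below is a hypothetical illustration of the idea, not TikTok's implementation: production systems use perceptual (fuzzy) audio fingerprints that survive re-encoding and trimming, whereas this self-contained example uses an exact SHA-256 digest for simplicity.

```python
import hashlib

def fingerprint(audio_bytes: bytes) -> str:
    # Stand-in fingerprint; a real system would use a perceptual audio hash.
    return hashlib.sha256(audio_bytes).hexdigest()

class AudioHashBank:
    """Hypothetical bank of fingerprints for known violating sounds."""

    def __init__(self):
        self._known = set()

    def add(self, audio_bytes: bytes) -> None:
        # Register a clip that moderators confirmed violates policy.
        self._known.add(fingerprint(audio_bytes))

    def matches(self, audio_bytes: bytes) -> bool:
        # Flag uploads whose audio matches a banked fingerprint.
        return fingerprint(audio_bytes) in self._known

bank = AudioHashBank()
bank.add(b"<bytes of a known violating sound>")
print(bank.matches(b"<bytes of a known violating sound>"))  # True
print(bank.matches(b"<bytes of an unrelated sound>"))       # False
```

The advantage of a hash bank is that once a sound is confirmed violating, every re-upload of it can be removed automatically without a moderator reviewing each copy.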


Over the last year, we removed more than 4.7 million videos and suspended more than 300,000 livestreams in the conflict region for violating our Community Guidelines, including content promoting Hamas, hate speech, and misinformation. With these additional investments, we tripled the number of accounts removed for hate each quarter this year. Globally, in the same time period, we removed more than 100 million pieces of content, as detailed in the chart below.

In addition:

  • We removed 500 million fake accounts globally over the last year, along with the content posted by these accounts.
  • We removed and reported publicly on three networks of accounts that were inauthentically posting content related to the conflict in violation of our covert influence operations policies.

*Data covers October 7, 2023 to September 15, 2024

Ongoing community education and engagement

We continue to engage a wide range of experts, organizations, and creators as we improve our approach to combating hate, misinformation, and violent extremism on an ongoing basis. For example, TikTok participated in the World Jewish Congress's security summit, Building a Safer Future: 30 Years After The AMIA Bombing, and the Symposium to Combat Online Antisemitism hosted by the US State Department, where conversations centered on keeping antisemitism off of social media.

In addition, we continue to invest in supporting reliable content about topics that are important to our community and the broader public. For instance, TikTok supported a delegation of 27 creators from around the world who participated in the International March of the Living in Poland and Hungary. These creators heard from renowned Holocaust history scholars, along with Holocaust survivors, and marched from Auschwitz to Birkenau, where the delegation paid tribute to the victims of the Holocaust. This year's march was led by 55 Holocaust survivors, including seven Israeli Holocaust survivors affected by the events of October 7 in Israel. As part of their journey, TikTok creators produced 76 videos that have been viewed more than 22 million times on our platform, raising awareness about the horrors of the Holocaust and the importance of combating hate.

We remain focused on enforcing our rules and will continue to protect our community through this ongoing conflict.

Update on 6 April at 6pm GMT

As we approach the six-month anniversary of the start of the Israel-Hamas war, we're providing a further update on our work to help maintain the safety of our community and the integrity of our platform.

In the six months since October 7, 2023*, we have removed more than 3.1 million videos and suspended more than 140,000 livestreams in Israel and Palestine for violating our Community Guidelines, including content promoting Hamas, hate speech, violent extremism and misinformation. Globally, in the same time period, we have removed tens of millions of pieces of content, as detailed in the chart below.

We continue to take robust action against deceptive behaviours too. In the six months since the start of the war, we have removed more than 320 million fake accounts globally, along with their content.

As we said when the war first started, we immediately mobilised resources to help us both improve our proactive automated detection and strengthen our moderation teams as we countered these new and evolving risks. Thanks to improvements we made to machine moderation models when the war started, we saw an immediate 234% increase in the violative comments removed in Israel and Palestine by this technology. These efforts help us ensure that we are providing safe, inclusive and welcoming TikTok experiences for our community.

*Data covers October 7, 2023 to March 31, 2024.

Update on December 7 at 8:45pm GMT

We understand this is a difficult, fearful, and polarizing time for many people around the world and on TikTok. As we continue to focus on the safety of our community, we're launching a series of initiatives to #swipeouthate on TikTok. This includes a public service initiative that will roll out on our platform encouraging people to stand together against hateful behavior.

  • We've brought together a new anti-hate and discrimination task force within our trust and safety team that's developing an aggressive plan to further crack down on hateful behavior in response to the recent rise, with a particular focus on antisemitism and Islamophobia. As part of this effort, we're investing more resources to proactively identify new and emerging trends, before they gain visibility, and strengthening and deepening training for moderators, in partnership with experts, to address implicit bias and the unique aspects of hateful ideologies.
  • As we reinforce our content moderation, we're also adding ways to empower creators to manage their experiences. We're starting to roll out Comment Care Mode, which expands our suite of comment controls with new choices to filter unfriendly or unwelcome comments. When Comment Care Mode is turned on, it will filter comments we think are similar to those the creator has previously reported or deleted, or that are inappropriate, offensive, or contain profanity. We want to make sure every creator on TikTok is aware of the tools available to them, so we're starting to prompt new creators after they post their first video and to remind established creators who have yet to use these tools. We're also starting to prompt creators who may be experiencing a spike in unwelcome or unfriendly comments to turn on Comment Care Mode and to use a new feature we're rolling out that filters comments from accounts outside the creator's following or follower lists. Early data has shown that creators using these filters see a 30% decrease in the number of comments they report.
  • We know that part of addressing issues of hate means hearing more directly from those groups most impacted. To ensure the products we're building are serving our community as intended, we're developing a co-design and product beta testing program for creators to provide input on our features and test them to ensure our products meet their needs. We're also expanding our managed creator communities to Jewish and other inter-faith communities as well as API and LGBTQ+ next year.
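The Comment Care Mode behavior described above - holding back comments that resemble ones a creator already reported or deleted, or that contain offensive terms - can be sketched as a simple filter. Everything in this example is a hypothetical simplification: the similarity rule (Jaccard overlap on lowercase tokens), the 0.6 threshold, and the blocklist check are illustrative assumptions, not TikTok's actual model.

```python
def tokens(text: str) -> set:
    # Naive tokenization; a real system would use learned text embeddings.
    return set(text.lower().split())

def similar(a: str, b: str, threshold: float = 0.6) -> bool:
    # Jaccard overlap between token sets (illustrative threshold).
    ta, tb = tokens(a), tokens(b)
    if not ta or not tb:
        return False
    return len(ta & tb) / len(ta | tb) >= threshold

def should_hold(comment: str, previously_reported: list, blocklist: set) -> bool:
    """Hold a comment for review if it contains a blocklisted term or
    resembles a comment the creator previously reported or deleted."""
    if any(word in blocklist for word in tokens(comment)):
        return True
    return any(similar(comment, past) for past in previously_reported)

reported = ["you are terrible at this"]
print(should_hold("you are terrible at this game", reported, set()))  # True
print(should_hold("nice video", reported, set()))                     # False
```

The key design choice is that the filter is personalized: each creator's own reporting and deletion history defines what gets held back, rather than one global rule for everyone.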

In addition, we continue working aggressively to enforce our hate speech, misinformation, and other policies. From October 7 to November 30, we removed more than 1.3 million videos in the conflict region for violating our Community Guidelines, including content promoting Hamas, hate speech, terrorism and misinformation. Globally, in the same time period, we have removed tens of millions of pieces of content, as detailed in the chart below, and have prevented teen accounts from viewing over 1 million videos containing violence or graphic content.

Update on November 23 at 3pm GMT

As the conflict continues, we remain focused on enforcing our rules against hate, harmful misinformation and other violative content. From October 7 to November 17, we removed more than 1,164,000 videos in the conflict region for breaking our rules, including content promoting Hamas, hate speech, terrorism and misinformation. Globally, we've removed millions of pieces of content during the same time period:


We continue to take swift action against an increase in fake engagement and accounts. In the month before the conflict started, we removed 21 million fake accounts globally, compared to 35 million fake accounts removed in the month after the start of the war - a 67% increase. In that month, we also removed 933,000 bot comments posted on content tagged with hashtags related to the conflict.

We recognize that this is a challenging time for many in our community. That's why we continue to pursue opportunities to hear directly from creators about their experience on TikTok and to speak to community groups and other experts, as our teams consider additional changes and tools, such as our new Safety Center resource on how to access support during tragic events.

Update on November 5 at 3pm GMT

Like millions in our community, we are appalled by the reported rise of Islamophobia and antisemitism globally. Hateful ideologies are not and have never been allowed on our platform. We're continuously taking important steps to protect our community and do our part to prevent the spread of hate.

Since October 7, we have removed more than 925,000 videos in the conflict region for violating our policies around violence, hate speech, misinformation, and terrorism, including content promoting Hamas. During the same time period across TikTok globally, we've removed millions of pieces of content.

As the war goes on, our teams are closely monitoring evolving content trends, and collaborating with partners and intelligence firms to remain ahead of emerging themes and potential risks. We have already seen changes in the type of violative content we are removing, with the initial surge in violent and graphic content followed by a rise in content promoting terrorism, and more recently a rise in misinformation, conspiracy theories and the spread of hateful ideologies, including Islamophobia and antisemitism.

We have also seen spikes in fake engagement in the wake of the conflict and have correspondingly removed more than 24 million fake accounts globally since the start of the war. We've also removed more than half a million bot comments on content under hashtags related to the conflict.

We remain agile in considering and implementing changes to both our policies and enforcement strategies. A key part of this is working with external experts, for example engaging with dozens of organisations representing Jewish and Muslim communities to help ensure our actions against antisemitism and Islamophobia are effective. We've also updated our LIVE feature guidelines to better prevent people from misusing monetization features to exploit the ongoing tragedy for personal gain.

Update on October 25 at 7pm BST

We remain focused on quickly and consistently enforcing our policies to protect the TikTok community. Since Oct. 7, we've removed over 775,000 videos and closed over 14,000 livestreams promoting violence, terrorism, hate speech, misinformation, and other violations of our Community Guidelines in the impacted region.

Initial post - October 15

TikTok stands against terrorism. We are shocked and appalled by the horrific acts of terror in Israel last week. We are also deeply saddened by the intensifying humanitarian crisis unfolding in Gaza. Our hearts break for everyone who has been affected.

We immediately mobilised significant resources and personnel to help maintain the safety of our community and integrity of our platform. We're committed to transparency as we work to provide a safe and secure space for our global community. We remain focused on supporting free expression, upholding our commitment to human rights, and protecting our platform during the Israel-Hamas war.

Upholding TikTok's Community Guidelines

As part of our crisis management process, our actions to safeguard our community include:

  • Launching a command centre that brings together key members of our 40,000-strong global team of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in how we take action to respond to this fast-evolving crisis.
  • Evolving our proactive automated detection systems in real-time as we identify new threats; this enables us to automatically detect and remove graphic and violent content so that neither our moderators nor our community members are exposed to it.
  • Adding more moderators who speak Arabic and Hebrew to review content related to these events. As we continue to focus on moderator care, we're deploying additional well-being resources for frontline moderators through this time.
  • Continuing to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that supports the attacks or mocks victims affected by the violence. If content is posted depicting a person who has been taken hostage, we will do everything we can to protect their dignity and remove content that breaks our rules. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organizations and individuals, and those organizations and individuals aren't allowed on our platform. We also block hashtags that promote violence or otherwise break our rules.
  • Adding opt-in screens over content that could be shocking or graphic to help prevent people from unexpectedly viewing it as we continue to make public interest exceptions for some content. We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes.
  • Making temporary adjustments to policies that govern TikTok features in an effort to proactively prevent them from being used for hateful or violent behaviour in the region. For example, we're adding additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation.
  • Cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines which are informed by legal and human rights standards. We are acutely aware of the specific and imminent risks to human life involved in the kidnapping of hostages and are working with law enforcement to ensure the safety of the victims in accordance with our emergency procedures.
  • Engaging with experts across the industry and civil society, such as Tech Against Terrorism and our Advisory Councils, to further safeguard and secure our platform during these difficult times.

Since the brutal attack on October 7, we've continued working diligently to remove content that violates our guidelines. To date, we've removed over 500,000 videos and closed 8,000 livestreams in the impacted region for violating our guidelines.

Preventing the spread of misleading content

Misinformation during times of crisis can make matters worse. That's why we work to identify and remove harmful misinformation. We also remove synthetic media that has been edited, spliced, or combined in a way that could mislead our community about real-world events.

To help us enforce these policies accurately, we work with IFCN-accredited fact-checking organisations who support over 50 languages, including Arabic and Hebrew. Fact-checkers assess content, enabling our moderators to accurately apply our misinformation policies. Out of an abundance of caution, while a video is being fact-checked, we make it ineligible for the For You feed. If fact-checking is inconclusive, we label the content as unverified, don't allow it in For You feeds, and prompt people to reconsider before sharing it.
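The workflow above maps each fact-check outcome to a set of enforcement actions, which can be expressed as a small decision table. The sketch below is a hypothetical rendering of that policy logic as described in this post; the status names and action flags are illustrative, not an actual TikTok API.

```python
from enum import Enum

class FactCheckStatus(Enum):
    PENDING = "pending"              # fact-check still in progress
    CONFIRMED_FALSE = "confirmed_false"
    INCONCLUSIVE = "inconclusive"
    VERIFIED = "verified"

def moderation_actions(status: FactCheckStatus) -> dict:
    """Hypothetical decision table mirroring the policy described above:
    content stays out of the For You feed unless verified; confirmed
    misinformation is removed; inconclusive content is labeled unverified
    and gated behind a share-reconsideration prompt."""
    return {
        "eligible_for_for_you": status is FactCheckStatus.VERIFIED,
        "remove": status is FactCheckStatus.CONFIRMED_FALSE,
        "unverified_label": status is FactCheckStatus.INCONCLUSIVE,
        "share_warning_prompt": status is FactCheckStatus.INCONCLUSIVE,
    }
```

Encoding the policy this way makes the cautious default explicit: a video pending review is treated like inconclusive content for distribution purposes (kept out of the For You feed) even before any determination is made.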

We continue to proactively look for signs of deceptive behaviour on our platform. This includes monitoring for behaviour that would indicate a covert influence operation, which we would disrupt by banning the accounts identified as part of the network.

We will soon be rolling out reminders in Search for certain keywords in Hebrew, Arabic, and English to encourage our community to be aware of potential misinformation, consult authoritative sources, and to remind them of our in-app well-being resources if they need them.

Shaping your TikTok experience

We have a large suite of existing controls and features that we encourage everyone in our community to consider using as they tailor the TikTok experience that best suits their preferences. These include:

  • For You feed controls: People can tap 'Not Interested' on content they want to see less of or choose 'Refresh' if they want to restart their feed. When Restricted Mode is enabled, it helps to limit the appearance of content that may not be appropriate for a general audience and filters content with warning labels.
  • Comment controls: People can choose who can comment on their videos, filter keywords from comments, or review comments before they are published. They can also block, delete and report comments in bulk. We prompt people to reconsider posting unkind comments, too.
  • Screen time controls: We offer a range of tools to help people customise and control their time on our app, such as tools to set a screen time limit, reminders to take a break or log off for bedtime, and more.
  • Family Pairing tools: Through our Family Pairing feature, parents and guardians can link their TikTok account to their teen's account to enable a variety of content settings. For example, they can choose to turn off search, enable Restricted Mode, and customise screen time settings.
  • Reporting: Anyone can report content on TikTok, including comments, accounts and livestreams. Long press on a video, tap report, and select a reason, such as misinformation.

We also provide our community with helpful resources on a number of issues, including how to recognise hate and report it, and how to safely share stories about their mental health and access help if they need it.

We will continue to adapt our safeguards to protect our community.