TikTok stands against terrorism. We are shocked and appalled by the horrific acts of terror in Israel last week. We are also deeply saddened by the intensifying humanitarian crisis unfolding in Gaza. Our hearts break for everyone who has been affected.
We immediately mobilised significant resources and personnel to help maintain the safety of our community and the integrity of our platform. We're committed to transparency as we work to provide a safe and secure space for our global community. We remain focused on supporting free expression, upholding our commitment to human rights, and protecting our platform during the Israel-Hamas war.
Upholding TikTok's Community Guidelines
As part of our crisis management process, our actions to safeguard our community include:
- Launching a command centre that brings together key members of our 40,000-strong global team of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in how we take action to respond to this fast-evolving crisis.
- Evolving our proactive automated detection systems in real time as we identify new threats; this enables us to automatically detect and remove graphic and violent content so that neither our moderators nor our community members are exposed to it.
- Adding more moderators who speak Arabic and Hebrew to review content related to these events. As we continue to focus on moderator care, we're deploying additional well-being resources for frontline moderators through this time.
- Continuing to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that supports the attacks or mocks victims affected by the violence. If content is posted depicting a person who has been taken hostage, we will do everything we can to protect their dignity and remove content that breaks our rules. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals aren't allowed on our platform. We also block hashtags that promote violence or otherwise break our rules.
- Adding opt-in screens over content that could be shocking or graphic to help prevent people from unexpectedly viewing it as we continue to make public interest exceptions for some content. We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes.
- Making temporary adjustments to policies that govern TikTok features in an effort to proactively prevent them from being used for hateful or violent behaviour in the region. For example, we're adding additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation.
- Cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines which are informed by legal and human rights standards. We are acutely aware of the specific and imminent risks to human life involved in the kidnapping of hostages and are working with law enforcement to ensure the safety of the victims in accordance with our emergency procedures.
- Engaging with experts across the industry and civil society, such as Tech Against Terrorism and our Advisory Councils, to further safeguard and secure our platform during these difficult times.
Since the brutal attack on October 7, we've continued working diligently to remove content that violates our guidelines. To date, we've removed over 500,000 videos and closed 8,000 livestreams in the impacted region for violating our guidelines.
Preventing the spread of misleading content
Misinformation during times of crisis can make matters worse. That's why we work to identify and remove harmful misinformation. We also remove synthetic media that has been edited, spliced, or combined in a way that could mislead our community about real-world events.
To help us enforce these policies accurately, we work with IFCN-accredited fact-checking organisations that support over 50 languages, including Arabic and Hebrew. Fact-checkers assess content, enabling our moderators to apply our misinformation policies accurately. Out of an abundance of caution, while a video is being fact-checked, we make it ineligible for the For You feed. If fact-checking is inconclusive, we label the content as unverified, don't allow it in For You feeds, and prompt people to reconsider before sharing it.
We continue to proactively look for signs of deceptive behaviour on our platform. This includes monitoring for behaviour that could indicate a covert influence operation; when we identify one, we disrupt it and ban the accounts that are part of the network.
We will soon be rolling out reminders in Search for certain keywords in Hebrew, Arabic, and English to encourage our community to be aware of potential misinformation and consult authoritative sources, and to remind them of our in-app well-being resources if they need them.
Shaping your TikTok experience
We have a large suite of existing controls and features that we encourage everyone in our community to consider using as they tailor the TikTok experience that best suits their preferences. These include:
- For You feed controls: People can tap 'Not Interested' on content they want to see less of or choose 'Refresh' if they want to restart their feed. When Restricted Mode is enabled, it helps to limit the appearance of content that may not be appropriate for a general audience and filters content with warning labels.
- Comment controls: People can choose who can comment on their videos, filter keywords from comments, or review comments before they are published. They can also block, delete and report comments in bulk. We prompt people to reconsider posting unkind comments, too.
- Screen time controls: We offer a range of tools to help people customise and control their time on our app, such as tools to set a screen time limit, reminders to take a break or log off for bedtime, and more.
- Family Pairing tools: Through our Family Pairing feature, parents and guardians can link their TikTok account to their teen's account to enable a variety of content settings. For example, they can choose to turn off search, enable Restricted Mode, and customise screen time settings.
- Reporting: Anyone can report content on TikTok, including comments, accounts and livestreams. Long press on a video, tap report, and select a reason, such as misinformation.
We also provide our community with helpful resources on a number of issues, including how to recognise hate and report it, and how to safely share stories about their mental health and access help if they need it.
We will continue to adapt our safeguards to protect our community.
Update on October 25 at 8pm CET
We remain focused on quickly and consistently enforcing our policies to protect the TikTok community. Since October 7, we've removed over 775,000 videos and closed over 14,000 livestreams promoting violence, terrorism, hate speech, misinformation, and other violations of our Community Guidelines in the impacted region.
Update on November 5 at 4pm CET
Like millions in our community, we are appalled by the reported rise of Islamophobia and antisemitism globally. Hateful ideologies are not and have never been allowed on our platform. We're continuously taking important steps to protect our community and do our part to prevent the spread of hate.
Since October 7, we have removed more than 925,000 videos in the conflict region for violating our policies around violence, hate speech, misinformation, and terrorism, including content promoting Hamas. During the same time period across TikTok globally, we've removed millions of pieces of content.
As the war goes on, our teams are closely monitoring evolving content trends, and collaborating with partners and intelligence firms to remain ahead of emerging themes and potential risks. We have already seen changes in the type of violative content we are removing, with the initial surge in violent and graphic content followed by a rise in content promoting terrorism, and more recently a rise in misinformation, conspiracy theories and the spread of hateful ideologies, including Islamophobia and antisemitism.
We have also seen spikes in fake engagement in the wake of the conflict and have correspondingly removed more than 24 million fake accounts globally since the start of the war. We've also removed more than half a million bot comments on content under hashtags related to the conflict.
We remain agile in considering and implementing changes to both our policies and enforcement strategies. A key part of this is working with external experts, for example engaging with dozens of organisations representing Jewish and Muslim communities to help ensure our actions against antisemitism and Islamophobia are effective. We've also updated our LIVE feature guidelines to better prevent people from misusing monetisation features to exploit the ongoing tragedy for personal gain.
Update on November 23 at 10am ET
As the conflict continues, we remain focused on enforcing our rules against hate, harmful misinformation and other violative content. From October 7 to November 17, we removed more than 1,164,000 videos in the conflict region for breaking our rules, including content promoting Hamas, hate speech, terrorism and misinformation. Globally, we've removed millions of pieces of content during the same time period.
We continue to take swift action against an increase in fake engagement and accounts. In the month before the conflict started, we removed 21 million fake accounts globally, compared to 35 million fake accounts removed in the month after the start of the war, a 67% increase. In that month, we also removed 933,000 bot comments posted on content tagged with hashtags related to the conflict.
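As a quick sanity check on the figure above, the 67% increase follows from the two monthly totals reported (21 million fake accounts removed in the month before the conflict, 35 million in the month after). A minimal sketch of that arithmetic, with a hypothetical helper name:

```python
def percent_increase(before: float, after: float) -> float:
    """Percentage change from `before` to `after`."""
    return (after - before) / before * 100

# Monthly fake-account removals as reported: 21M before, 35M after.
change = percent_increase(21_000_000, 35_000_000)
print(round(change))  # → 67
```

Rounded to the nearest whole percent, (35M − 21M) / 21M is approximately 66.7%, consistent with the stated 67% increase.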
We recognise that this is a challenging time for many in our community. That's why we continue to pursue opportunities to hear directly from creators about their experience on TikTok and to speak to community groups and other experts, as our teams consider additional changes and tools, such as our new Safety Center resource on how to access support during tragic events.