Update on April 6 at 1:00pm ET
As we approach the six-month anniversary of the start of the Israel-Hamas war, we're providing a further update on our work to help maintain the safety of our community and the integrity of our platform.
In the six months since October 7, 2023*, we have removed more than 3.1 million videos and suspended more than 140,000 livestreams in Israel and Palestine for violating our Community Guidelines, including content promoting Hamas, hate speech, violent extremism and misinformation. Globally, in the same time period, we have removed tens of millions of pieces of content, as detailed in the chart below.
We continue to take robust action against deceptive behaviors too. In the six months since the start of the war, we have removed more than 320 million fake accounts globally, along with their content.
As we said when the war first started, we immediately mobilized resources to both improve our proactive automated detection and strengthen our moderation teams as we countered these new and evolving risks. Thanks to improvements we made to machine moderation models when the war started, we saw an immediate 234% increase in the violative comments removed in Israel and Palestine by this technology.
These efforts help us ensure that we are providing safe, inclusive and welcoming TikTok experiences for our community.
*Data covers October 7, 2023 to March 31, 2024.
Update on January 12 at 1:00pm ET
Since our last update in early December 2023, we have continued to make progress in our work to combat hate on TikTok and protect our platform as the war continues:
- We have launched our #SwipeOutHate campaign in the US, encouraging our community to stand together against hate by reporting it in-app. The videos have already received millions of views.
- Comment Care Mode, a new set of comment filters that we began testing late last year, is now available to everyone in the US, Israel and Palestine, as we continue to test the feature globally.
- We have ramped up our efforts to onboard partners to our Community Partner Channel - a direct avenue for trusted flaggers around the world, including in the conflict region, to report content to us for review, which sits alongside our in-app reporting function. Since December, we have onboarded eight new organizations, including in Australia, Mexico and Denmark, representing communities affected by the war.
We continue to diligently and robustly enforce our Community Guidelines. From the start of the war through the end of last year, we removed more than 1.5 million videos and suspended more than 46,000 livestreams in Israel and Palestine for violating our Community Guidelines, including content promoting Hamas, hate speech, terrorism and misinformation. Globally, in the same time period, we removed tens of millions of pieces of content, as detailed in the chart below, and prevented teen accounts from viewing more than 1.5 million videos containing violence or graphic content.
We remain vigilant against deceptive behaviors, too. From October 7 through the end of last year, we removed more than 169 million fake accounts globally, and we removed about 1.2 million bot comments on content tagged with hashtags related to the conflict.
Update on December 7 at 3:45pm ET
We understand this is a difficult, fearful, and polarizing time for many people around the world and on TikTok. As we continue to focus on the safety of our community, we're launching a series of initiatives to #swipeouthate on TikTok. This includes a public service initiative that will roll out on our platform encouraging people to stand together against hateful behavior.
- We've brought together a new anti-hate and discrimination task force within our trust and safety team that's developing an aggressive plan to further crack down on hateful behavior in response to its recent rise, with a particular focus on antisemitism and Islamophobia. As part of this effort, we're investing more resources to proactively identify new and emerging trends before they gain visibility, and strengthening and deepening training for moderators, in partnership with experts, to address implicit bias and the unique aspects of hateful ideologies.
- As we reinforce our content moderation, we're also adding ways to empower creators to manage their experiences.
- We're starting to roll out Comment Care Mode, which expands our suite of comment controls with new choices to filter unfriendly or unwelcome comments. When Comment Care Mode is turned on, it will filter comments we think are similar to those the creator has previously reported or deleted, or that are inappropriate, offensive, or contain profanity.
- We want to make sure every creator on TikTok is aware of the tools available to them. We're starting to prompt new creators after they post their first video and remind established creators who have yet to use these tools. We're also starting to prompt creators who may be experiencing a spike in unwelcome or unfriendly comments to turn on Comment Care Mode and use a new feature we're rolling out that filters comments made by accounts that are not in the creator's following or follower list. Early data has shown that creators using these filters see a 30% decrease in the number of comments they report.
- We know that part of addressing issues of hate means hearing more directly from those groups most impacted. To ensure the products we're building are serving our community as intended, we're developing a co-design and product beta testing program for creators to provide input on our features and test them to ensure our products meet their needs. We're also expanding our managed creator communities to Jewish and other inter-faith communities as well as API and LGBTQ+ next year.
In addition, we continue working aggressively to enforce our hate speech, misinformation, and other policies. From Oct. 7-Nov. 30, we removed more than 1.3 million videos in the conflict region for violating our Community Guidelines, including content promoting Hamas, hate speech, terrorism and misinformation. Globally, in the same time period, we have removed tens of millions of pieces of content, as detailed in the chart below, and have prevented teen accounts from viewing over 1 million videos containing violence or graphic content.
Update on November 23 at 10am ET
As the conflict continues, we remain focused on enforcing our rules against hate, harmful misinformation and other violative content. From October 7 to November 17, we removed more than 1,164,000 videos in the conflict region for breaking our rules, including content promoting Hamas, hate speech, terrorism and misinformation. Globally, we've removed millions of pieces of content during the same time period:
We continue to take swift action against an increase in fake engagement and accounts. In the month before the conflict started, we removed 21 million fake accounts globally, compared to 35 million fake accounts removed in the month after the start of the war - a 67% increase. In that month, we've also removed 933,000 bot comments posted on content tagged with hashtags related to the conflict.
We recognize that this is a challenging time for many in our community. That's why we continue to pursue opportunities to hear directly from creators about their experience on TikTok and to speak to community groups and other experts, as our teams consider additional changes and tools, such as our new Safety Center resource on how to access support during tragic events.
Update on November 5 at 10am ET
Like millions in our community, we are appalled by the reported rise of Islamophobia and antisemitism globally. Hateful ideologies are not and have never been allowed on our platform. We're continuously taking important steps to protect our community and do our part to prevent the spread of hate.
Since October 7, we have removed more than 925,000 videos in the conflict region for violating our policies around violence, hate speech, misinformation, and terrorism, including content promoting Hamas. During the same time period across TikTok globally, we've removed millions of pieces of content.
As the war goes on, our teams are closely monitoring evolving content trends, and collaborating with partners and intelligence firms to remain ahead of emerging themes and potential risks. We have already seen changes in the type of violative content we are removing, with the initial surge in violent and graphic content followed by a rise in content promoting terrorism, and more recently a rise in misinformation, conspiracy theories and the spread of hateful ideologies, including Islamophobia and antisemitism.
We have also seen spikes in fake engagement in the wake of the conflict and have correspondingly removed more than 24 million fake accounts globally since the start of the war. We've also removed more than half a million bot comments on content under hashtags related to the conflict.
We remain agile in considering and implementing changes to both our policies and enforcement strategies. A key part of this is working with external experts, for example engaging with dozens of organizations representing Jewish and Muslim communities to help ensure our actions against antisemitism and Islamophobia are effective. We've also updated our LIVE feature guidelines to better prevent people from misusing monetization features to exploit the ongoing tragedy for personal gain.
Update on October 25 at 2pm ET
We remain focused on quickly and consistently enforcing our policies to protect the TikTok community. Since Oct. 7, we've removed over 775,000 videos and closed over 14,000 livestreams promoting violence, terrorism, hate speech, misinformation, and other violations of our Community Guidelines in the impacted region.
Initial post - October 14 at 8pm ET
TikTok stands against terrorism. We are shocked and appalled by the horrific acts of terror in Israel last week. We are also deeply saddened by the intensifying humanitarian crisis unfolding in Gaza. Our hearts break for everyone who has been affected.
We immediately mobilized significant resources and personnel to help maintain the safety of our community and integrity of our platform. We're committed to transparency as we work to provide a safe and secure space for our global community. We remain focused on supporting free expression, upholding our commitment to human rights, and protecting our platform during the Israel-Hamas war.
Upholding TikTok's Community Guidelines
As part of our crisis management process, our actions to safeguard our community include:
- Launching a command center that brings together key members of our 40,000-strong global team of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in how we take action to respond to this fast-evolving crisis.
- Evolving our proactive automated detection systems in real-time as we identify new threats; this enables us to automatically detect and remove graphic and violent content so that neither our moderators nor our community members are exposed to it.
- Adding more moderators who speak Arabic and Hebrew to review content related to these events. As we continue to focus on moderator care, we're deploying additional well-being resources for frontline moderators through this time.
- Continuing to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that supports the attacks or mocks victims affected by the violence. If content is posted depicting a person who has been taken hostage, we will do everything we can to protect their dignity and remove content that breaks our rules. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organizations and individuals, and those organizations and individuals aren't allowed on our platform. We also block hashtags that promote violence or otherwise break our rules.
- Adding opt-in screens over content that could be shocking or graphic to help prevent people from unexpectedly viewing it. We recognize that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes.
- Making temporary adjustments to policies that govern TikTok features in an effort to proactively prevent them from being used for hateful or violent behavior in the region. For example, we're adding additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation.
- Cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines which are informed by legal and human rights standards. We are acutely aware of the specific and imminent risks to human life involved in the kidnapping of hostages and are working with law enforcement to ensure the safety of the victims in accordance with our emergency procedures.
- Engaging with experts across the industry and civil society, such as Tech Against Terrorism and our Advisory Councils, to further safeguard and secure our platform during these difficult times.
Since the brutal attack on October 7, we've continued working diligently to remove content that violates our guidelines. To date, we've removed over 500,000 videos and closed 8,000 livestreams in the impacted region for violating our guidelines.
Preventing the spread of misleading content
Misinformation during times of crisis can make matters worse. That's why we work to identify and remove harmful misinformation. We also remove synthetic media that has been edited, spliced, or combined in a way that could mislead our community about real-world events.
To help us enforce these policies accurately, we work with IFCN-accredited fact-checking organizations who support over 50 languages, including Arabic and Hebrew. Fact-checkers assess content, enabling our moderators to accurately apply our misinformation policies. Out of an abundance of caution, while a video is being fact-checked, we make it ineligible for the For You feed. If fact-checking is inconclusive, we label the content as unverified, don't allow it in For You feeds, and prompt people to reconsider before sharing it.
We continue to proactively look for signs of deceptive behavior on our platform. This includes monitoring for behavior that would indicate a covert influence operation; if we identify one, we disrupt it and ban the accounts that are part of the network.
We will soon be rolling out reminders in Search for certain keywords in Hebrew, Arabic, and English, encouraging our community to be aware of potential misinformation and consult authoritative sources, and reminding them of our in-app well-being resources if they need them.
Shaping your TikTok experience
We have a large suite of existing controls and features that we encourage everyone in our community to consider using as they tailor the TikTok experience that best suits their preferences. These include:
- For You feed controls: People can tap 'Not Interested' on content they want to see less of or choose 'Refresh' if they want to restart their feed. When Restricted Mode is enabled, it helps to limit the appearance of content that may not be appropriate for a general audience and filters content with warning labels.
- Comment controls: People can choose who can comment on their videos, filter keywords from comments, or review comments before they are published. They can also block, delete and report comments in bulk. We prompt people to reconsider posting unkind comments, too.
- Screen time controls: We offer a range of tools to help people customize and control their time on our app, such as tools to set a screen time limit, reminders to take a break or log off for bedtime, and more.
- Family Pairing tools: Through our Family Pairing feature, parents and guardians can link their TikTok account to their teen's account to enable a variety of content settings. For example, they can choose to turn off search, enable Restricted Mode, and customize screen time settings.
- Reporting: Anyone can report content on TikTok, including comments, accounts and livestreams. Long press on a video, tap report, and select a reason, such as misinformation.
We also provide our community with helpful resources on a number of issues, including how to recognize hate and report it, and how to safely share stories about their mental health and access help if they need it.
We will continue to adapt our safeguards to protect our community.