The war in Ukraine is devastating, and our hearts break for all those who are suffering. It has also brought pain to our community and our people. And as a platform, this war has challenged us to confront a complex and rapidly changing environment as we look to be a canvas, a window, and a bridge for people across the globe.
With the safety of our people and community as our guiding priority, we've dedicated significant resources to developing and enforcing new protective measures, and we want to share more on some of the steps we're taking.
Piloting our state-controlled media policy
Last year we began working to develop a consistent and comprehensive state media policy, as we recognize that an additional layer of context can be helpful to viewers, especially in times of war and in conflict zones. In response to the war in Ukraine, we're expediting the rollout of our state media policy to give viewers the context they need to evaluate the content they consume on our platform. We'll begin piloting our policy by applying labels to content from some state-controlled media accounts over the coming days.
Over the past several months we've engaged over 50 experts from multidisciplinary backgrounds across 20 countries to inform our definition of state-controlled media – entities for which a government exercises direct or indirect control over their editorial content or decision-making – and our approach to making such designations. As we start labelling content now, we'll continue to gather feedback from experts in parallel to inform the development of our global approach and expansion. We'll share additional details on our broader policy rollout later this year. Our goal is to ensure our community has context on this type of content, and that we have the appropriate processes in place to consistently enforce the policy.
We recognize the heightened risk and impact of misleading information during a time of crisis. We continue to increase our safety and security measures and are working aggressively to help ensure people can express themselves and share their experiences, while we also seek to mitigate the potential for harm. We use a combination of technology and people to protect our platform, and our teams speak more than 60 languages and dialects including Ukrainian and Russian.
Our Community Guidelines prohibit content that contains harmful misinformation, hateful behaviour, or promotion of violence, and our actions to uphold these policies include removing violative content, banning accounts, and suspending access to product features like livestream. We partner with independent fact-checking organizations to help assess the accuracy of content so violations can be removed. Out of an abundance of caution, content that is undergoing fact-checking, as well as reviewed content that can't be substantiated, will be ineligible for recommendation in For You feeds.
We've also evolved our methods in real-time to identify and combat harmful content, such as implementing additional measures to help detect and take action on livestreams that may broadcast unoriginal or misleading content. We remain focused on preventing, detecting, and deterring influence operations on our platform and our systems help us identify, block, and remove inauthentic accounts, engagement, or other associated activities on TikTok.
Supporting digital literacy among our community
On our Discover page, we've added digital literacy tips developed in partnership with the National Association for Media Literacy Education and Mediawise to help our community evaluate and make decisions about the content they view online. This digital literacy hub also helps viewers learn more about the many safety, security, and privacy tools available to them on TikTok, such as 2-step verification. We're also adding opt-in screens and digital literacy reminders that will appear for viewers on some videos and livestreams.
The safety of our community and our people remains our priority, and we are also committing humanitarian aid. Last week we shared that we're making a $1M donation to the United Nations Central Emergency Response Fund, one of the fastest and most effective ways to ensure urgently needed humanitarian assistance reaches people in crises, and donating $4M across UNICEF, the International Committee of the Red Cross, the United Nations High Commissioner for Refugees (UNHCR), and Polish Humanitarian Action – an organization identified by our Polish employees. We are also matching donations made by our employees to the Canadian Red Cross and UNICEF. We will continue to be responsive to events as they unfold, take action on content or behaviour that threatens the safety of our platform, and dedicate resources to protecting our community.
Update on March 6, 2022 at 1:15pm ET
An update on TikTok's services in Russia: TikTok is an outlet for creativity and entertainment that can provide a source of relief and human connection during a time of war when people are facing immense tragedy and isolation. However, our highest priority is the safety of our employees and our users, and in light of Russia's new ‘fake news’ law, we have no choice but to suspend livestreaming and new content to our video service in Russia while we review the safety implications of this law. Our in-app messaging service will not be affected. We will continue to evaluate the evolving circumstances in Russia to determine when we might fully resume our services with safety as our top priority.
Update on April 12, 2022 at 6:00am ET
An update on our actions to protect our community: As we continue our ongoing work to safeguard our platform, we wanted to share additional insights that bring further transparency to our actions. The following data reflects steps taken from February 24 - March 31, 2022.
- Our safety team focused on the Ukraine war has removed 41,191 videos, 87% of which violated our policies against harmful misinformation. The vast majority (78%) were identified proactively.
- Our fact-checking partners have helped assess 13,738 videos globally, and we've added prompts on 5,600 videos informing viewers that the content could not be verified by fact-checkers.
- We've labelled content from 49 Russian state-controlled media accounts as we pilot our new state-controlled media policy.
- We identified and removed 6 networks and 204 accounts globally for coordinated efforts to influence public opinion and mislead users about their identities.
- During this time period we also removed 321,784 fake accounts in Russia and 46,298 fake accounts in Ukraine, resulting in the removal of 343,961 videos. These are ongoing actions we take to protect against fake engagement, and are not specific to accounts or content related to the Ukraine war.
Update on May 20, 2022 at 6:00am ET
An update on our state-controlled media labels: In light of the continuing war in Ukraine, we're expanding the rollout of our labels on content posted by state-controlled media accounts to include Ukraine and Belarus while we continue working on the development of our global policy and approach beyond this pilot.