By Cormac Keenan, Head of Trust and Safety

TikTok empowers people to share their creativity, knowledge, and passion with others as they entertain and bring joy to our community. To help keep our platform welcoming and authentic for everyone, we remove content that violates our policies. We aim to be transparent about those removals with creators and through our Community Guidelines Enforcement Reports. Our latest report, covering April-June and published today, shows improvements in countering misinformation, and we want to provide insight into the work that led to these gains.

Misinformation is not a new problem, but the internet provides a new avenue for an old challenge. We recognize the impact misinformation can have in eroding trust in public health, electoral processes, facts, and science. We are committed to being part of the solution. We treat misinformation with the utmost seriousness and take a multi-pronged approach to stopping it from spreading, while elevating authoritative information and investing in digital literacy education to help get ahead of the problem at scale.

Our policies

Our integrity policies aim to promote a trustworthy, authentic experience. Within those, our harmful misinformation policies prohibit content that could mislead our community about civic processes, public health, or safety. For instance, we do not allow medical misinformation about vaccines or abortion, and we do not allow misinformation about voting. These policies apply to a wide range of content by design: misinformation is constantly changing, often in response to what's happening in the world.

In addition to removing content that is inaccurate and harms our users or community, we remove accounts that seek to mislead people or use TikTok to deceptively sway public opinion. These activities range from inauthentic or fake account creation to more sophisticated efforts to undermine public trust. These actors never stop evolving their tactics, and we continually seek to strengthen our policies as we detect new types of content and behavior.

Enforcing our policies

At TikTok, technology and thousands of safety professionals work together to enforce our Community Guidelines. To do this effectively at scale, we continue to invest in both technology-based flagging and human moderation. We rely on automated moderation when our systems have a high degree of confidence that content is violative, so that we can remove it expeditiously.

However, misinformation is different from other content issues. Context and fact-checking are critical to enforcing our misinformation policies consistently and accurately. So while we use machine learning models to help detect potential misinformation, our approach today is ultimately to have our moderation team assess, confirm, and remove violations. We have specialized misinformation moderators with enhanced training, expertise, and tools, including direct access to our fact-checking partners who help assess the accuracy of content.
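
To make the two preceding paragraphs concrete, here is a minimal sketch in Python of what this kind of routing can look like: high-confidence automation for most policy areas, with misinformation always held for specialized human review. The thresholds, category names, and action labels are illustrative assumptions, not TikTok's actual systems or values.

```python
# Illustrative sketch only: thresholds, categories, and action names are
# assumptions, not TikTok's production values.
AUTO_REMOVE_THRESHOLD = 0.98  # automate removal only at very high confidence
FLAG_THRESHOLD = 0.60         # below this, take no enforcement action

def route_flagged_video(category: str, violation_score: float) -> str:
    """Return an enforcement action for a video flagged by a model."""
    if violation_score < FLAG_THRESHOLD:
        return "no_action"
    if category == "misinformation":
        # Never removed automatically: specialized moderators assess the
        # claim, with direct access to fact-checking partners for context.
        return "specialized_misinfo_queue"
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"        # expedited automated removal
    return "human_review_queue"     # a moderator confirms before any action
```

The design point the sketch captures is that no model score, however high, short-circuits human assessment for misinformation, because accuracy there depends on context and fact-checking rather than pattern recognition alone.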

We have more than a dozen fact-checking partners around the world who review content in over 30 languages. All of them are accredited by the International Fact-Checking Network as verified signatories of its code of principles. Out of an abundance of caution, content becomes ineligible for recommendation into For You feeds while it is being fact-checked, or when it can't be substantiated through fact-checking. If fact-checkers confirm content is false, we may remove the video from our platform or keep it ineligible for recommendation into For You feeds.
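
One simple way to picture the resulting lifecycle is as a mapping from fact-check outcome to feed eligibility. The sketch below uses hypothetical status and action names; the real decision also weighs severity and context.

```python
from enum import Enum

class FactCheckStatus(Enum):
    PENDING = "pending"                  # partner review in progress
    UNSUBSTANTIATED = "unsubstantiated"  # could not be verified either way
    CONFIRMED_FALSE = "confirmed_false"
    CONFIRMED_TRUE = "confirmed_true"

def enforcement_for(status: FactCheckStatus, violates_policy: bool) -> str:
    """Hypothetical mapping from a fact-check outcome to an action."""
    if status is FactCheckStatus.CONFIRMED_TRUE:
        return "eligible_for_for_you"
    if status in (FactCheckStatus.PENDING, FactCheckStatus.UNSUBSTANTIATED):
        # Out of an abundance of caution, unverified content is not
        # recommended into For You feeds.
        return "ineligible_for_for_you"
    # Confirmed false: remove when it violates policy; otherwise keep it
    # out of For You feeds.
    return "remove" if violates_policy else "ineligible_for_for_you"
```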

To continually improve how we detect and remove misinformation, we've made some key investments this year, including:

  • continued investment in machine learning models, and increased capacity to iterate on them rapidly given the fast-changing nature of misinformation.
  • improved detection of known misleading audio and imagery to reduce manipulated content (one way such matching can work is sketched after this list).
  • a database of previously fact-checked claims to help misinformation moderators make swift and accurate decisions.
  • a proactive detection program with our fact-checkers, who flag new and evolving claims they're seeing across the internet. This allows us to look for these claims on our platform and remove violations. Since starting this program last quarter, we have identified 33 new misinformation claims, resulting in the removal of 58,000 videos from the platform.
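
As one illustration of how known misleading imagery can be re-detected after re-uploads or light edits, the sketch below uses a difference hash ("dHash"), a standard perceptual-hashing technique, with the Pillow imaging library. TikTok has not said which matching method it uses, so treat this purely as an example of the general approach.

```python
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Perceptual 'difference hash': stable under re-encoding, resizing,
    and small edits, unlike a cryptographic hash."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def matches_known_misleading(candidate: int, known_hashes: set[int],
                             max_hamming: int = 6) -> bool:
    """Flag a frame whose hash sits within a small Hamming distance of
    any entry in a database of known misleading imagery."""
    return any(bin(candidate ^ known).count("1") <= max_hamming
               for known in known_hashes)
```

Hashing and Hamming-distance comparison are cheap operations, which is what makes this style of matching practical at platform scale.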

While violations of our integrity and authenticity policies make up less than 1% of overall video removals, these continued investments have improved our proactive detection and enforcement of these policies.

Collaborating with others

We firmly believe collaborating with experts is key to tackling this challenge. We regularly engage with our Content Advisory Council, researchers, civil society organizations, and media literacy experts. Not only does this collaboration help strengthen our policies and overall knowledge of trends and issues, it also enables us to elevate authoritative voices in our app. For instance, we created a digital literacy hub in our app aimed at equipping people with skills to evaluate the content they consume, featuring material from experts at MediaWise and the National Association for Media Literacy Education. To promote election integrity, we create Elections Centers to provide access to authoritative information about voting and local elections from organizations like the National Association of Secretaries of State and Ballotpedia. To support public health, people can access information about COVID-19 and monkeypox from the World Health Organization through hashtag PSAs and labels on videos.

We will continue to iterate on how we tackle this challenge, and we know our work will never be finished. To learn more about our efforts to safeguard our platform, visit our Transparency Center.