By Brie Pegum, Global Head of Product, Authenticity & Transparency
As we continue working to safeguard our Romanian community through the presidential elections and beyond, we're sharing more about how we combat deceptive behaviours. Our platform is built on the joy of authentic experiences, and we strictly prohibit attempts to undermine its integrity, mislead people, or manipulate our systems. There's a wide range of deceptive behaviour carried out online across all platforms, from spam to impersonation to covert influence operations, and it can touch on important social issues, including elections. At TikTok, we proactively take action against these inauthentic activities, which violate our policies. Today we're shining a light on how we define, remove, and stay ahead of the actors behind them.
Defining covert influence operations
Covert influence operations are one of the most challenging deceptive behaviours our industry tackles, which is why we have built dedicated teams, policies, and transparency reports focused on them. Our policies define covert influence operations as coordinated, inauthentic behaviour in which networks of accounts work together to mislead people or our systems and influence public discussion on important social issues, including elections.
While influence operations often garner the most public attention, they're relatively rare compared to other deceptive behaviours like spam or fake engagement, which typically operate at much greater scale using more obvious tactics that are easier to spot. Influence operations, however, seek to cause disproportionate societal harm and use particularly sophisticated tactics. That's why we invest heavily in building targeted strategies, policy definitions, and taskforces to root out these specific actors.
Building an influence operations taskforce
We've built international trust & safety teams with specialized expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis.
In 2024 so far, we have disrupted over 40 new covert influence networks, including 3 in Romania, and removed over 70,000 accounts for violating our covert influence policies. That includes tens of thousands of accounts that we detected trying to re-establish a presence on our platform, which we call "recidivist" accounts.
Targeting inauthentic expression
Accounts that engage in influence operations often avoid posting content that would, on its own, violate platform guidelines. That's why we focus on accounts' behaviour and technical linkages when analysing them, specifically looking for evidence that:
- They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or are working together to spread the same narrative.
- They are misleading our systems or users. For example, they are trying to conceal their actual location, or using fake personas to pose as someone they're not.
- They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs, and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.
These criteria are aligned with industry standards and guidance from the experts we regularly consult. They're particularly important in helping us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness of or campaign for a social cause, or to express a shared opinion (including political views). However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions would be prohibited and disrupted.
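To make these criteria concrete, below is a minimal, purely illustrative sketch of how shared technical signals (such as reused devices or registration IPs) could surface candidate account clusters for human investigation. All account records, signal names, and thresholds here are hypothetical; TikTok has not published its actual detection methods.

```python
from collections import defaultdict

# Hypothetical account records mapping each account to the technical
# signals observed for it (device fingerprints and registration IPs).
accounts = {
    "acct_001": {"devices": {"dev_A"}, "ips": {"203.0.113.7"}},
    "acct_002": {"devices": {"dev_A"}, "ips": {"203.0.113.9"}},
    "acct_003": {"devices": {"dev_B"}, "ips": {"203.0.113.7"}},
    "acct_004": {"devices": {"dev_C"}, "ips": {"198.51.100.2"}},
}

def shared_signal_clusters(accounts, min_cluster_size=3):
    """Group accounts that share any device or IP, using union-find."""
    parent = {a: a for a in accounts}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    # Link every account that reused the same device or IP.
    signal_owners = defaultdict(list)
    for acct, signals in accounts.items():
        for sig in signals["devices"] | signals["ips"]:
            signal_owners[sig].append(acct)
    for owners in signal_owners.values():
        for other in owners[1:]:
            union(owners[0], other)

    # Keep only clusters large enough to suggest coordination
    # rather than coincidence.
    clusters = defaultdict(set)
    for acct in accounts:
        clusters[find(acct)].add(acct)
    return [c for c in clusters.values() if len(c) >= min_cluster_size]

print(shared_signal_clusters(accounts))
# -> [{'acct_001', 'acct_002', 'acct_003'}]
```

Clustering like this would only ever be a starting point: as the criteria above make clear, investigators would still need evidence of deception and of intent to influence public debate before treating a cluster as an influence operation.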
Disrupting influence networks
Countering covert influence operations is an evolving challenge for any platform because the adversarial actors behind them continuously change their tactics and how they attempt to conceal their efforts. That's why we continuously evolve our detection systems for on-platform activity, work with threat intelligence vendors for additional signals, and encourage authorities to share any potential leads with us proactively. We also look at off-platform activity, and make use of open-source intelligence to identify any related deceptive behaviour on TikTok. For example, in September we disrupted a network of 42 accounts targeting political discourse in Germany, which were found to coordinate through a messaging platform outside of TikTok.
After we remove networks, we also monitor vigilantly to prevent them from returning to our platform. In September's reporting period, we reported removing 9,743 accounts associated with previously disrupted networks that were attempting to re-establish their presence.
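TikTok doesn't disclose how recidivist accounts are matched to previously disrupted networks, but one plausible approach is to retain a fingerprint of each removed network's technical signals and screen new registrations against it. The sketch below is an assumption-laden illustration: the fingerprint fields, store, and threshold are all invented for this example.

```python
# Fingerprints retained (hypothetically) from previously removed networks.
REMOVED_NETWORK_FINGERPRINTS = [
    {"devices": {"dev_A", "dev_B"}, "ips": {"203.0.113.7"}, "bio_hashes": {"h1"}},
]

def recidivism_score(account, fingerprint):
    """Fraction of the account's signals already seen in a removed network."""
    signals = account["devices"] | account["ips"] | account["bio_hashes"]
    known = fingerprint["devices"] | fingerprint["ips"] | fingerprint["bio_hashes"]
    return len(signals & known) / len(signals) if signals else 0.0

def looks_recidivist(account, threshold=0.5):
    # Flag for review if enough signals overlap with any removed network.
    return any(
        recidivism_score(account, fp) >= threshold
        for fp in REMOVED_NETWORK_FINGERPRINTS
    )

candidate = {"devices": {"dev_B"}, "ips": {"198.51.100.9"}, "bio_hashes": {"h1"}}
print(looks_recidivist(candidate))  # True: 2 of 3 signals match a removed network
```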
Differentiating deceptive behaviours
Sometimes, behaviour or content that violates our other deceptive behaviour policies (such as spam or fake engagement) touches on the same topics that a covert influence operation might. For example, it's common in our industry for financially motivated actors to try to leverage sensitive issues like elections to drive engagement for personal profit. These cases are not classified as covert influence operations unless they meet the criteria outlined above, since they don't share the same strategic goals, technical signals, or deceptive tactics (which you can read more about here). However, they are still strictly prohibited and are removed through the other deceptive behaviour efforts we describe below.
More often than not, inauthentic behaviour that is externally visible on the platform is not part of a covert influence network; such networks go to much greater lengths to hide any obvious linkages and usually require in-depth technical investigations to uncover.
Countering fake engagement
In addition to countering covert influence operations, we prohibit a wide range of other deceptive behaviours that seek to manipulate our platform, often for personal or financial gain. For example:
- We do not allow the use of accounts to engage in platform manipulation, such as the use of automation to register or operate accounts in bulk.
- We do not allow spam, including manipulating engagement signals to amplify the reach of certain content.
- We do not allow impersonation, including accounts that pose as another real person or entity (other than parody accounts that are clearly disclosed as such).
- We do not allow presenting as a fake person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform.
- We do not allow fake engagement, including facilitating the trade or marketing of services that artificially increase engagement, such as selling followers or likes, or providing instructions on how to artificially increase engagement on TikTok.
- We do not allow the indiscriminate dissemination of hacked materials that would pose significant harm.
We continuously invest in advanced technologies to block and prevent these deceptive behaviours, and report on our efforts in our Community Guidelines Enforcement Report. In Q2 2024, over 94% of the videos that violated our fake engagement policies were removed proactively. Through automated moderation globally in the first half of this year alone, we also:
- Prevented over 700M fake accounts from being created, and removed over 940M videos from fake accounts.
- Prevented over 36 billion fake likes, and removed a further 379M+ fake likes.
- Prevented over 15 billion fake follow requests, and removed over 207M fake followers.
In Romania specifically, since September alone we have proactively prevented nearly 45M fake likes and more than 27M fake follow requests, and blocked more than 400,000 spam accounts from being created.
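The enforcement technology behind these numbers isn't public, but a common industry heuristic for fake engagement is velocity filtering: engagement arriving faster than any plausible organic behaviour is dropped before it counts. The sketch below illustrates the idea; the window, ceiling, and event fields are invented for this example.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=1)
MAX_LIKES_PER_WINDOW = 30  # hypothetical ceiling for organic liking behaviour

def filter_like_events(events):
    """Drop likes from any account exceeding the per-window ceiling."""
    accepted, history = [], {}
    for event in sorted(events, key=lambda e: e["ts"]):
        recent = [t for t in history.get(event["account"], [])
                  if event["ts"] - t < WINDOW]
        if len(recent) < MAX_LIKES_PER_WINDOW:
            accepted.append(event)
        # Record the attempt either way, so sustained bursts stay blocked.
        recent.append(event["ts"])
        history[event["account"]] = recent
    return accepted

# A scripted burst: 100 likes from one account in 100 seconds.
start = datetime(2024, 9, 1, 12, 0)
burst = [{"account": "acct_X", "ts": start + timedelta(seconds=i)}
         for i in range(100)]
print(len(filter_like_events(burst)))  # 30; the remaining 70 are rejected
```

In practice, a heuristic like this would typically sit alongside many other signals (account age, device reputation, behavioural patterns) rather than acting alone.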
Safeguarding authentic experiences
This work to counter deceptive behaviours is just one aspect of our expansive approach to safeguarding authentic experiences on TikTok. Just a few of these measures include:
- Prohibiting harmful misinformation, restricting unverified content from For You feeds, and partnering with 20 fact-checking organisations globally to enforce those policies accurately, including Lead Stories in Romania. 98% of the misinformation that violates our rules is removed proactively.
- Labeling unverified content, connecting people to authoritative sources of information in-app, and providing "verified" account badges to signal an account belongs to who it claims to be. TikTok is one of the only remaining platforms where verified badges are earned based purely on authenticity criteria, rather than bought.
- Labeling state-affiliated media accounts, restricting them from advertising to audiences outside their registered country, and making them ineligible for For You feed recommendation if they try to influence foreign audiences on current affairs. We've also removed over 150 accounts associated with Rossiya Segodnya and TV-Novosti for engaging in covert influence operations on TikTok.
Continuing to invest and evolve
In addition to our proactive detection measures, we enable people to easily report content or accounts they're concerned about. In our app, people can report deceptive behaviour, spam, harmful misinformation, and more. We also review reports we receive through our Community Partner Channel and from government agencies and regulators, and remove violations of our policies.
The work to protect our platform's integrity never ends. We'll continue to invest, evolve and report on these efforts to help people access reliable information, discover original content, and share authentic interactions—through the Romanian elections and beyond.