By Cormac Keenan, Trust & Safety, EMEA; Arjun Narayan Bettadapur Manjunath, Trust & Safety, APAC; Jeff Collins, Trust & Safety, Americas

Earlier today we released our global Transparency Report for the first six months of 2020. The report provides insight into the ways we work to keep TikTok safe and uplifting for everyone. To hold ourselves accountable to our community as we build trust through transparency, we regularly share information about the content we remove, including hate speech, misinformation, and other material that violates our Community Guidelines and Terms of Service. Our commitment to keeping our users safe is a broad effort that also includes features like Family Pairing, which we built to give parents closer coordination with their teens; resources and education for our community, such as our Youth Portal and educational safety videos; and industry partnerships, like the WePROTECT Global Alliance, that we forge to collaborate with and learn from others.

Social and content platforms are continually challenged by the posting and cross-posting of harmful content, and this affects all of us: our users, our teams, and the broader community. As content moves from one app to another, platforms can be left playing whack-a-mole when unsafe content first reaches them. Technology can automatically detect and limit much, but not all, of that content, and human moderators and collaborative teams are often on the front lines of these issues.

Each individual effort by a platform to safeguard its users would be made more effective through a formal, collaborative approach to early identification and notification amongst companies. 

Such collaboration is already happening for certain content that most people agree is dangerous or harmful, such as child sexual abuse material (CSAM). But there is also a critical need to work together to protect people from extremely violent, graphic content, including depictions of suicide.

To that end, yesterday TikTok's interim head, Vanessa Pappas, sent a letter to the heads of nine social and content platforms proposing a Memorandum of Understanding (MOU) that would encourage companies to warn one another about such violent, graphic content on their own platforms. By working together to create a hashbank for violent and graphic content, we could significantly reduce the chances of people encountering it and enduring the emotional harm that viewing such content can bring, no matter which app they use.
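
The letter leaves the mechanics of a hashbank open, but the underlying idea is well established in the industry: rather than circulating the harmful media itself, platforms exchange digital fingerprints ("hashes") of known violating content, much as hash-matching is already used to detect CSAM. The sketch below is a minimal illustration of that pattern under our own assumptions, not TikTok's actual design; the HashBank class, its subscriber callbacks, and the use of SHA-256 in place of a perceptual hash are all hypothetical, chosen only to keep the example self-contained.

```python
# Illustrative sketch only: the proposed MOU does not define an implementation.
# Real systems use perceptual hashes so that re-encoded or slightly altered
# copies still match; a cryptographic hash stands in for one here.
import hashlib


class HashBank:
    """A shared registry of fingerprints of known violating media."""

    def __init__(self):
        self._hashes: set[str] = set()
        self._subscribers = []  # callbacks standing in for partner platforms

    @staticmethod
    def fingerprint(media_bytes: bytes) -> str:
        # Stand-in for a perceptual hash of the image or video.
        return hashlib.sha256(media_bytes).hexdigest()

    def subscribe(self, notify_callback):
        # A partner platform registers to be warned of new entries.
        self._subscribers.append(notify_callback)

    def report(self, media_bytes: bytes) -> str:
        # One platform flags content; all partners are notified early.
        digest = self.fingerprint(media_bytes)
        if digest not in self._hashes:
            self._hashes.add(digest)
            for notify in self._subscribers:
                notify(digest)
        return digest

    def is_known(self, media_bytes: bytes) -> bool:
        # Checked at upload time to block re-posts of known content.
        return self.fingerprint(media_bytes) in self._hashes


# Usage: platform A reports a clip; platform B can then block re-uploads.
bank = HashBank()
bank.subscribe(lambda h: print(f"alert partners: new hash {h[:12]}..."))
bank.report(b"<bytes of a flagged video>")
print(bank.is_known(b"<bytes of a flagged video>"))  # True
```

In practice, systems of this kind favor perceptual hashes so that cropped or re-encoded copies of a video still match, and notifications would flow through secure channels between partner Trust and Safety teams rather than in-process callbacks.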

There is nothing more important to us than the safety of our community. Bringing joy and connection to our global community is our ultimate goal, and the well-being of the people in that community is paramount to achieving it. We believe other platforms share this goal. We are committed to working with others across the industry, as well as with experts, academics, and non-profit organizations, as we develop a framework and plan to bring this effort to fruition. Our users deserve it.

+++++++++++++++++++++++++++++++++++++++

The following letter was sent to the heads of nine social and content platforms.

Recently, social and content platforms have once again been challenged by the posting and cross-posting of explicit suicide content, which has affected all of us: our teams, our users, and our broader communities.

Like each of you, we worked diligently to mitigate its proliferation by removing the original content and its many variants, and by curtailing its viewing and sharing by others. However, we believe each of our individual efforts to safeguard our own users and the collective community would be boosted significantly through a formal, collaborative approach amongst industry participants to the early identification and notification of extremely violent, graphic content, including suicide.

To this end, we would like to propose the cooperative development of a Memorandum of Understanding (MOU) that will allow us to quickly notify one another of such content. 

Separately, we are conducting a thorough analysis of the events as they relate to the recent sharing of suicide content, but it is already clear that early identification allows platforms to respond more rapidly and suppress highly objectionable, violent material.

We are mindful of the need for any such negotiated arrangement to be clearly defined with respect to the types of content it could capture, and nimble enough to allow each of us to notify the others quickly of content covered by the MOU. We also appreciate there may be regulatory constraints across regions that warrant further engagement and consideration.

With this in mind, we would like to convene a meeting of our respective Trust and Safety teams to discuss such a mechanism further, which we believe will help us all improve safety for our users.

We look forward to your positive response and working together to help protect our users and the wider community.

Sincerely,

Vanessa Pappas
Head of TikTok