The makers of TikTok, the Chinese video-sharing app with hundreds of millions of users around the world, instructed moderators to suppress posts created by users deemed too ugly, poor, or disabled for the platform, according to internal documents obtained by The Intercept. These same documents show moderators were also told to censor political speech in TikTok livestreams, punishing those who harmed “national honor” or broadcast streams about “state organs such as police” with bans from the platform.
These previously unreported Chinese policy documents, along with conversations with multiple sources directly familiar with TikTok’s censorship activities, provide new details about the company’s efforts to enforce rigid constraints across its reported 800 million or so monthly users while it simultaneously attempts to bolster its image as a global paragon of self-expression and anything-goes creativity. They also show how TikTok controls content on its platform to achieve rapid growth in the mold of a Silicon Valley startup while discouraging political dissent with the sort of heavy hand regularly seen in its home country of China.
On TikTok, livestreamed military movements and natural disasters, video that “defamed civil servants,” and other material that might threaten “national security” have been suppressed alongside videos showing rural poverty, slums, beer bellies, and crooked smiles. One document goes so far as to instruct moderators to scan uploads for cracked walls and “disreputable decorations” in users’ own homes — then to effectively punish these poorer TikTok users by artificially narrowing their audiences.
Today, The Intercept and The Intercept Brasil are publishing two internal TikTok moderation documents, recreated with only minor redactions, below. One lays out bans for ideologically undesirable content in livestreams, and another describes algorithmic punishments for unattractive and impoverished users. The documents appear to have been originally drafted in Chinese and later — at times awkwardly — translated into English for use in TikTok’s global offices. TikTok is owned by ByteDance, a Beijing-headquartered company that operates a suite of popular sites and social apps, a sort of Chinese analog to Facebook. ByteDance, founded in 2012, has come under scrutiny by the U.S. government over its ties to the Chinese Communist Party and numerous reports that the app’s censorship tactics mirror those of Beijing; Sens. Chuck Schumer and Josh Hawley have both worked to limit TikTok’s use by government personnel, arguing that it presents a risk to national security.
TikTok spokesperson Josh Gartner told The Intercept that “most of” the livestream guidelines reviewed by The Intercept “are either no longer in use, or in some cases appear to never have been in place,” but would not provide specifics. Regarding the policy of suppressing videos featuring unattractive, disabled, or poor users, Gartner stated that the rules “represented an early blunt attempt at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them.”
Sources indicated that both sets of policies were in use through at least late 2019, and that the livestream policy document was created that year. Gartner would not explain why a document purportedly aimed at “preventing bullying” makes no mention of bullying, nor why its explicit justification is attracting new users rather than protecting them.
Excluding Undesirable Users From the “For You” Fire Hose
One moderation document, outlining the bodily features and shooting environments deemed too unattractive for the platform, spells out a litany of flaws that could be grounds for invisibly barring a given clip from the “For You” section of the app, where TikTok videos are funneled to a vast audience based on secret criteria. What it takes to earn a spot in the “For You” section remains a mystery, but the document reveals that it took very little to be excluded: uploads from unattractive, poor, or otherwise undesirable users, it states, could “decrease the short-term new user retention rate.” This is of particular importance, the document stresses, for videos in which the user “is basically the only focus of the video … if the character’s appearance or the shooting environment is not good, the video will be much less attractive, not worthing [sic] to be recommended to new users.”
Under this policy, TikTok moderators were explicitly told to suppress uploads from users with flaws both congenital and inevitable. “Abnormal body shape,” “ugly facial looks,” dwarfism, “obvious beer belly,” “too many wrinkles,” “eye disorders,” and many other “low quality” traits were all enough to keep uploads out of the algorithmic fire hose. Videos in which “the shooting environment is shabby and dilapidated,” including but “not limited to … slums, rural fields” and “dilapidated housing,” were also systematically hidden from new users, though “rural beautiful natural scenery could be exempted,” the document notes.
The document, presented in both English and Chinese, advised TikTok’s moderators that for videos shot in someone’s house with “no obvious slummy charactor [sic],” special care should be given to check for slummy features such as a “crack on the wall” or “old and disreputable decorations.” The mere appearance of residential disrepair or crooked teeth in the frame, the document shows, could mean the difference between worldwide distribution and relative invisibility.
The justification here, as with “ugly” uploaders, was again that TikTok should retain an aspirational air to attract and hold onto new users: “This kind of environment is not that suitable for new users for being less fancy and appealing.” Social startups, eager to build on their momentum rather than disappear into the app heap of history, commonly consider growth and user retention to be by far their top priority, but rarely is the public privy to the details of this kind of nakedly aggressive expansion.
It’s unclear how widespread this exclusionary practice has been. Gartner, the TikTok spokesperson, told The Intercept that “the policies mentioned appear to be the same or similar to those published by” the German publication Netzpolitik in December, in a story about how TikTok artificially suppressed the reach of videos created by disabled, overweight, and LGBT users, and that they “represented an early blunt attempt at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them.”
However, the TikTok documents reviewed by The Intercept include a range of policies beyond those reported by Netzpolitik, involving among other things the suppression of content from poor, old, and “ugly” users. Furthermore, these documents contain no mention of any anti-bullying rationale, instead explicitly citing an entirely different justification: the need to retain new users and grow the app.
In stark contrast to its practice of strategically suppressing the unattractive, infirm, and despondent, TikTok also conducted closed-door outreach to its more popular users, one TikTok moderation source told The Intercept. “Operators who spoke directly to influencers and official content creators always make video conferences with groups to pass ‘safety rules,’ thus reducing the chances of creating videos that go against what [ByteDance] think is right,” the source said. According to this source, their office held regular video conferences between operators, a person from the “safety team,” and select TikTok “influencers” to provide advance warning of changes to the app’s content policies, helping ensure that the influencers wouldn’t run afoul of any new rules or prohibitions as those rules made their way across two billion smartphones. Gartner did not comment when asked about this outreach.
Censoring Political Speech
While TikTok’s policies around the “For You” section had to do with suppression (that is, keeping certain content from becoming too popular), a second document obtained by The Intercept is concerned with censorship, laying out rules for the outright removal of content from the company’s video livestreaming feature. The rules go far beyond the usual Beijing bugbears like Tiananmen Square and Falun Gong. Crucially, they could easily be interpreted to proscribe essential components of political speech by classifying them as dangerous or defamatory.
Any number of the document’s rules could be invoked to block discussion of a wide range of topics embarrassing to government authorities: “Defamation … towards civil servants, political or religious leaders,” as well as toward “the families of related leaders,” has been, under the policy, punishable with a terminated stream and a daylong suspension. Any broadcasts deemed by ByteDance’s moderators to be “endangering national security” or even “national honor and interests” were punished with a permanent ban, as was “uglification or distortion of local or other countries’ history,” with the “Tiananmen Square incidents” cited as one of only three real-world examples. A “Personal live broadcast about state organs such as police office, military etc” would knock your stream offline for three days, while documenting military or police activity would get you kicked off for the day (would-be protesters, take note).
Gartner refused to clarify whether the substance and intent of these restrictions are still in effect under different phrasing, for example, whether there is any current rule whatsoever against “harming national honor” or documenting police movements. “Like all platforms, we have policies that protect our users, and protect national security, for example banning any accounts that promote hate speech or terrorism, as outlined in our Community Standards,” Gartner wrote in an emailed statement.
While the document outlines harsh penalties for some political speech, TikTok’s treatment of racist broadcasts was milder. Streams attempting “to shame/degrade individuals or groups on certain attributes such as disability, gender, color, sexual orientation, nationality, ethnics, beliefs” resulted in a user’s suspension for a single month; after that, the user could stream as usual. Users who disparaged TikTok via livestream were to be suspended for three days, while those who promoted TikTok’s competitors could be banned permanently.
Other moderation documents obtained by The Intercept indicate that TikTok has influenced content on its platform not just by censoring videos and disappearing users, but by padding feeds with content from “shadow accounts” operated by company employees posing as regular users. Internal employee guidelines reviewed by The Intercept suggest that ByteDance employees scoured Instagram for popular topics, downloaded the videos, and reshared them on TikTok to maintain a steady spray of appealing content; workers tasked with populating “Nice Looking” videos on the app were encouraged to check out Instagram posts tagged with “#BeachGirl,” for example. Asked about such practices and their reflection in the guidelines, Gartner said, “we did not see that language anywhere in moderation guidelines or Trust and Safety policies after a thorough search of them” and did not respond further.
At the same time as TikTok contractors seem to have been pilfering “beach girl” content, women who didn’t hew to TikTok’s invisible modesty code could have their streams terminated and their accounts banned, the livestream policy document shows. Streams depicting someone wearing a bikini or swimsuit outside of a “beach or swimming occasion” were to be punished with a one-week suspension, while accounts showing the “outline of female nipples” could be closed “forever.” TikTok’s livestream modesty code even applied to the arts: “Singing or playing music pornography contents, sexual cues, etc” is forbidden, as is merely “discussing the topic of sexual reproduction” on stream — acts TikTok classifies as “voice vulgarity.”
“It is correct that for live streaming TikTok is particularly vigilant about keeping sexualized content off the platform,” Gartner wrote in an email.
The content moderation documents obtained by The Intercept Brasil and The Intercept contain indications that the standards enforced on TikTok livestreams originate in China. One document, while in English, contains clunky phrasing suggestive of machine translation, as well as references to a Chinese-language font embedded in the file itself; the second contains large portions of both Chinese and English text. The TikTok livestream policy guide details 64 possible infractions organized into 13 categories, each corresponding to a specific penalty. The categories range from obvious common-sense prohibitions (“Juvenile Improper Behavior”) to the prudish and baffling: TikTok users who “Give the finger on purpose over twice” will have their stream terminated and their account banned for a day, while “disrupting national unity,” left undefined, comes with a permanent suspension.
TikTok’s political rules have proven controversial in the recent past. In September, the Guardian reported on similar content moderation documents showing that TikTok “instructs its moderators to censor videos that mention Tiananmen Square, Tibetan independence, or the banned religious group Falun Gong,” among other authoritarian-friendly censorship rules. ByteDance, sidestepping this confirmation that TikTok has been used to push Chinese foreign policy, acknowledged to the Guardian that “In TikTok’s early days we took a blunt approach to minimizing conflict on the platform, and our moderation guidelines allowed penalties to be given for things like content that promoted conflict, such as between religious sects or ethnic groups, spanning a number of regions around the world. … The old guidelines in question are outdated and no longer in use.”
Public “Guidelines,” Secret Rules
Many TikTok rule breakers will likely never receive a satisfying explanation for their punishment, because the existence and contents of the fine-grained rules have been kept out of public view. TikTok holds its users accountable to secret policies that, as on other digital platforms, attempt to dictate what is impermissible and how offending users are to be punished.
Behind a typical social network’s generic, public-facing “Community Guidelines” are the actual rules: sprawling documents that give moderators nuts-and-bolts instructions on what to delete and what to keep. Precariously employed tech company moderators are themselves typically as excluded from the rule-making process as the users, receiving abrupt policy updates from on high, with little rationale to guide their Sisyphean enforcement gigs. Gartner told The Intercept that TikTok’s moderation staff is “a combination” of staff employees and contractors, and that “Over the past year, we have established Trust and Safety hubs in California, Dublin and Singapore, which oversee development and execution of our moderation policies and are headed by industry experts with extensive experience in these areas.” Gartner did not address why the documents viewed by The Intercept were originally composed in Chinese.
Multiple TikTok sources, who spoke with The Intercept on the condition of anonymity because they feared professional and legal reprisal, emphasized the primacy of ByteDance’s Beijing headquarters over the global TikTok operation. Ever-shifting decisions about what gets censored and what gets boosted, they explained, are dictated by Chinese staff, whose policy declarations are filtered out to TikTok’s 12 global offices and translated into rough English, finally settling into a muddle of Beijing authoritarianism crossed with the usual Silicon Valley prudishness.
TikTok’s livestream feature is still being rolled out to the app’s global user base, who will have only the company’s publicized Community Guidelines to consult on the subject of what’s allowed and what isn’t. This anodyne “guidelines” page is a mix of vague boundaries and boilerplate marketing (“TikTok’s mission is to inspire creativity and bring joy. … We remove all expressions of abuse. … We do not allow sexually explicit or gratifying content on TikTok”) too compressed to entertain nuance. The community guidelines also omit any sign that the content policies obtained by The Intercept — used behind the scenes by the service’s invisible moderator teams — have threatened free political expression and provided for the censorship of large swaths of the world’s population based on genetics, economics, and arbitrary decency standards. The questions of who decides what ugliness means for hundreds of millions of people in cultures around the world, what “disreputable” decor might mean, or how many wrinkles count as “too many wrinkles” remain glaringly open, unaddressed even in the internal moderator documents.
Instead, TikTok fans must continue to rely on the Community Guidelines page to guide their conduct, while the actual rules remain always on the verge of revision, revocation — or disavowal via corporate statement.