B. Coordinated campaigns on social networks
Since 2019, Twitter, Facebook and Google (YouTube) no longer refrain from iden-
tifying coordinated campaigns as originating from China. In August 2019, Twitter
announced that it had deleted about 200,000 accounts (and publicized information about
the 936 most active ones) participating in “a significant state-backed information opera-
tion.” The accounts in question, “originating from within the People’s Republic of China
(PRC),”1203 attacked Hong Kong demonstrators. Twitter made no formal attribution,
but the evidence was clear. The ASPI described it as “a relatively small and hastily assembled
operation rather than a sophisticated information campaign planned well in advance.”1204
In September 2019, Twitter published data from 4,301 additional accounts involved in this
operation.1205 In June 2020, Twitter exposed two sets of accounts, one of 23,750 accounts,
the other of about 150,000 accounts, which published pro-CCP and anti-democracy con-
tent in Chinese about Hong Kong; the platform’s moderators said they “attributed” this
operation to the PRC.1206 In September 2020, Facebook announced that it had detected
and deleted two networks displaying coordinated inauthentic behavior (CIB), one of which
originated in China (155 accounts, 11 pages, 9 groups and 6 Instagram accounts) and tar-
geted the Philippines and Southeast Asia, but also the United States. Facebook established
that this network had “links to individuals in the Fujian province of China.”1207
As part of its “investigation into coordinated influence operations linked to
China,”1208 Google suspended several tens of thousands of YouTube channels in
1203. https://blog.twitter.com/en_us/topics/company/2019/information_operations_directed_at_Hong_Kong.html.
1204. Tom Uren, Elise Thomas, and Jacob Wallis, Tweeting through the Great Firewall: Preliminary analysis of
PRC-linked information operations against the Hong Kong protests, ASPI, Report No. 25 (2019), 3.
1205. https://blog.twitter.com/en_us/topics/company/2019/info-ops-disclosure-data-september-2019.html.
1206. https://blog.twitter.com/en_us/topics/company/2020/information-operations-june-2020.html.
1207. https://about.fb.com/news/2020/09/removing-coordinated-inauthentic-behavior-china-philippines/.
1208. https://blog.google/threat-analysis-group/tag-bulletin-q2-2020/.
2020. The Threat Analysis Group’s bulletins revealed that between April and June 2020,
2,596 channels had been suspended, as well as 3,773 channels between July and September
and another 7,479 channels in October alone.1209 Over the same period, between April and
October 2020, Google suspended only 200 other YouTube channels worldwide, 124 related
to Russian operations. Hence, the overwhelming majority of suspended channels were tied
to China. While these channels mainly published apolitical content (entertainment, music,
lifestyle, cooking, etc.), often referred to as “spam,” a small portion was political in nature
and was published in English and/or Chinese. This technique makes it possible to create
an audience before sharing political content more broadly. Among the political topics these
channels covered, Google cited the U.S. management of the Covid-19 crisis, the protests
for racial justice of the Black Lives Matter movement, the fires that ravaged the United
States, but also the events in Hong Kong. Google did not reveal the names of the channels
in question but specified that the results of its investigations were consistent with those of
Graphika (→ p. 379).1210
As we will see in subsequent pages, most of the information relayed by these fake
accounts was crudely produced, unsophisticated, and easily identifiable – particularly
in the Chinese case, where these operations are often botched. This differs from Russian
operations, in particular, a comparison we address in the conclusion (→ p. 620). However,
it is crucial to understand that their impact depends less on sophistication than on
repetition, which can create an “illusory truth effect.” The more an idea is repeated,
the more familiar it becomes, and the more likely it is to convince the public (whatever its
intrinsic weaknesses).1211 Therefore, actors involved in disinformation campaigns will not
always bother to adopt an appearance of authenticity for the information they propagate. This
is the reason why Chinese actors, in particular, seem to put quantity before quality.
1. A persistent campaign since 2017
In 2019, the ASPI identified a campaign led by Chinese actors targeting the Hong
Kong demonstrations, but one which had already begun targeting critics of the Chinese regime
as early as April 2017 (see the report Tweeting through the Great Firewall). In 2020, they wrote
that this campaign had continued on Twitter and Facebook, that it was therefore
“persistent,” consistently “large-scale,” targeting primarily Hongkongers and, to a lesser
extent, all Chinese abroad. The themes were well-known: first and foremost, Hong Kong
and exiled Chinese billionaire Guo Wengui, but also, to a lesser extent, the Covid-19 pan-
demic and Taiwan (their report Retweeting through the Great Firewall). The attribution to
actors located in China (even though Twitter is technically banned there) was confirmed
by the fact that 90% of messages were posted very routinely Monday to Friday, 8 am to 5
pm Beijing Time, with a clear drop at lunchtime: the accounts did not seem to be used the
rest of the time (early morning, late afternoon, evenings, weekends), which could indicate
a professional rather than personal endeavor.
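The timing heuristic described here lends itself to a short sketch. The code below is illustrative only – the function and variable names are ours, not the ASPI’s: given tweet timestamps, it computes the share posted during Beijing office hours on weekdays, the pattern the report used as an indicator of professional operation.

```python
from datetime import datetime, timedelta, timezone

# Beijing has a fixed UTC+8 offset (no daylight saving time).
BEIJING = timezone(timedelta(hours=8))

def office_hours_share(timestamps_utc):
    """Fraction of posts made Mon-Fri, 8 am-5 pm Beijing time.

    A share close to 1.0 for accounts claiming to be scattered
    across the world is one (weak) indicator of a coordinated,
    professionally operated network.
    """
    if not timestamps_utc:
        return 0.0
    hits = 0
    for ts in timestamps_utc:
        local = ts.astimezone(BEIJING)
        # weekday() < 5 is Monday-Friday; hour in [8, 17) is 8 am-5 pm.
        if local.weekday() < 5 and 8 <= local.hour < 17:
            hits += 1
    return hits / len(timestamps_utc)

# Example: three weekday posts at 02:00 UTC (10:00 in Beijing)
# and one on a Saturday evening.
posts = [
    datetime(2020, 3, 2, 2, 0, tzinfo=timezone.utc),   # Monday
    datetime(2020, 3, 3, 2, 0, tzinfo=timezone.utc),   # Tuesday
    datetime(2020, 3, 4, 2, 0, tzinfo=timezone.utc),   # Wednesday
    datetime(2020, 3, 7, 14, 0, tzinfo=timezone.utc),  # Saturday
]
print(office_hours_share(posts))  # 0.75
```

On its own this is only circumstantial: a single metric like this supports, but cannot establish, attribution.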
The inauthenticity was further confirmed by the fact that 78.5% of the Twitter accounts
examined had no followers at all; recently created accounts with only a few followers
1209. Threat Analysis Group, “TAG Bulletin: Q2 2020”, Google (5 Aug. 2020); “TAG Bulletin: Q3 2020” (15 Sept.
2020); “TAG Bulletin: Q4 2020” (17 Nov. 2020).
1210. Ben Nimmo, Camille François, C. Shawn Eib, and Léa Ronzaud, Return of the (Spamouflage) Dragon: Pro-Chinese
Spam Network Tries Again, Graphika (Apr. 2020).
1211. Lynn Hasher, David Goldstein, and Thomas Toppino, “Frequency and the Conference of Referential
Validity,” Journal of Verbal Learning and Verbal Behavior, 16:1 (1977), 107-112.
were reaching record levels of engagement with hundreds or even thousands of “likes”;
and among the rest of the accounts, some were “potentially purchased, hacked or sto-
len.”1212 The work was usually sloppy: the authors did not bother to cover their tracks
or to make the accounts appear authentic. The ASPI gave the example of an account initially
held by a Frenchman which, in March 2020, suddenly started tweeting only in English and
Chinese. The profile photo had been changed (it now showed a young woman) but the biographi-
cal presentation still referred to the original Facebook page, i.e., that of the Frenchman.
Another French-speaking account tweeted “Test123” in Chinese before launching attacks
against Guo Wengui. Its operators did not even bother to delete the previous French tweets or the
test tweet. The ASPI termed this a “lazy approach” on both Twitter and Facebook: most
of the time, only the profile picture of the acquired accounts was changed, but not
the content nor the previous photos.1213 In other words, concealment was not a priority:
putting quantity before quality, the operators wanted to act fast and cast a wide net. Moreover, there
was some coordination between platforms: different accounts, under different identities
were posting the same thing on different platforms (Twitter and Facebook) simultaneously.
This is another symptom of inauthenticity.
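Detecting this kind of cross-platform synchronization is conceptually straightforward. The hypothetical sketch below (all names and sample data are invented for illustration) groups posts by identical text within a short time window and flags clusters that span more than one platform.

```python
from collections import defaultdict

def coordination_clusters(posts, window_s=60):
    """Group posts sharing identical text within the same time window,
    across accounts and platforms.

    `posts` is a list of (platform, account, text, unix_time) tuples.
    Clusters spanning several platforms are a symptom of coordination.
    Note: fixed windows can split near-simultaneous posts that straddle
    a bucket boundary -- acceptable for a sketch.
    """
    buckets = defaultdict(list)
    for platform, account, text, ts in posts:
        key = (text.strip().lower(), int(ts) // window_s)
        buckets[key].append((platform, account))
    # Keep only clusters involving more than one platform.
    return [v for v in buckets.values()
            if len({p for p, _ in v}) > 1]

posts = [
    ("twitter",  "@user_a", "HK rioters destroy the city!", 1_590_000_010),
    ("facebook", "page_b",  "HK rioters destroy the city!", 1_590_000_030),
    ("twitter",  "@user_c", "Nice sunset today",            1_590_000_020),
]
print(coordination_clusters(posts))
# One cluster: the same text posted on Twitter and Facebook within a minute.
```

A production system would also need fuzzy text matching and sliding windows, but the underlying signal is the same: identical content, different identities, near-identical timing.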
It has likely been the same information operation since 2017. The campaign adapted to
the events (Hong Kong, pandemic, but also the U.S. presidential campaign) and it was
particularly robust since it resisted successive takedowns. The ASPI, which also emphasized
that its results converged with those of Graphika and Bellingcat,1214 noted that this network
could “have sourced, created or activated new accounts within a matter of days.”1215
In September 2021, a study by the U.S. cybersecurity firm Mandiant (FireEye) provided
new evidence. First, the campaign was much larger than previously thought, as it involved
no less than 30 different social network platforms and more than 40 websites and
forums, in numerous languages including Russian, German, Spanish, Korean and
Japanese. Second, the attackers “have actively sought to physically mobilize protestors in
the U.S. in response to the COVID-19 pandemic.”1216 This attempt, which does not appear
to have been successful, is typical of what Russians had also attempted to do in previous
years with greater success (pushing communities to demonstrate and clash in order to
divide a society) and is thus a further manifestation of the Russification of Chinese
operations (→ p. 620).
2. Spamouflage Dragon: an enduring network using fake accounts and fake
AI-generated human faces
Since 2019, Graphika has exposed a pro-Chinese network involved in at least three
operations. The first, the subject of an initial Graphika report,1217 seems to have begun
in the summer of 2019, attacking Hong Kong’s pro-democracy demonstrators in
1212. Jake Wallis et al., Retweeting Through the Great Firewall: A Persistent and Undeterred Threat Actor, ASPI
Policy Brief Report No. 33 (Jun. 2020), 4.
1213. Ibid.
1214. Benjamin Strick, “Uncovering a Pro-Chinese Government Information Operation on Twitter and Facebook:
Analysis of the #MilesGuo Bot Network,” Bellingcat (5 May 2020).
1215. Wallis et al., Retweeting Through the Great Firewall, 52.
1216. Ryan Serabian and Lee Foster, “Pro-PRC Influence Campaign Expands to Dozens of Social Media Platforms,
Websites, and Forums in at Least Seven Languages, Attempted to Physically Mobilize Protesters in the U.S.,” FireEye
Threat Research blog (8 Sept. 2021).
1217. Ben Nimmo, C. Shawn Eib, and L. Tamora, Cross-Platform Spam Network Targeted Hong Kong Protests, Graphika (Sept.
2019).
Chinese and the exiled billionaire and CCP critic Guo Wengui. The network used
hundreds of YouTube, Facebook, and Twitter accounts (those involved were among the
936 accounts identified by Twitter on August 19, 2019 as participating in “a significant
state-backed information operation”; the accounts “originat[ed] from within the PRC”).1218
Most of the accounts were hijacked or reused: despite having Western, Slavic, and
Bangladeshi names and profiles, they published in Chinese. The videos they published
were then amplified (shared, commented on) by groups of fake accounts. Some
political messages (in this case, attacking Hong Kong demonstrators and Guo Wengui)
were interspersed within a mass of harmless photos and videos (landscape, poetry, sport)
– perhaps used as camouflage – hence the name chosen by Graphika: “Spamouflage.”1219
Sometimes “different accounts posted the same content in the same order, suggest-
ing that they were automated.”1220 In September 2019, Twitter, Facebook, and YouTube
thought they had dismantled this network by deleting the accounts and pages involved. In
reality, the network simply downsized and depoliticized its activity to pass “under the radar.”
The network was back for a second operation in early 2020, from which Graphika drew
a second report.1221 This operation’s objective was to defend China, which received much
criticism at the beginning of the Covid-19 pandemic. The operation seems to have been
“galvanized,” in late January, by the decision of U.S. authorities to ban travelers from China.1222
The network reactivated dormant accounts. For example, one of them, the Facebook page
画苑之花 (“Flower of the Garden”), was created in January 2019 under a Bangladeshi name.
The page started by publishing landscape images in English, then participated in the first oper-
ation on Hong Kong and Guo by posting a mixture of political messages and landscapes
(“spamouflage”), before laying low in the last months of 2019, and finally returning at the
very end of January 2020 to defend Beijing against criticisms of its management of the epi-
demic. The Graphika team deduced that the Flower of the Garden page “was a commer-
cial acquisition, created by users unrelated to the network (perhaps in Bangladesh),
obtained by the operation around the time of the first disruption, but then run in
an online variation on ‘stealth mode’ until the operators decided to turn up the vol-
ume.”1223 Another example on Twitter: one of the accounts involved in this second operation,
in March 2020, @kstaceee (Kathryn Stacey), was created in 2009, published in English, then
turned silent in 2013; it republished a few commercial tweets between 2016 and 2019, then
began posting in Chinese in October 2019, participating in the first operation against Hong
Kong demonstrators. Therefore, it could be “an account created by a genuine individual but
abandoned in 2013, hijacked and repurposed by a commercial operator in 2016, and then taken
over by Spamouflage Dragon in late October 2019.”1224 Examples like these are numerous.
As it reactivated old accounts, this network also acquired or created others, some
to disseminate content, others to amplify it. Disseminated and amplified videos,
often taken from Chinese state media, praised the CCP’s handling of the health
crisis. Like the first time, some accounts interspersed them with a mass of innocuous mes-
sages and videos taken from TikTok. On Facebook, the involved pages usually had between
1218. https://blog.twitter.com/en_us/topics/company/2019/information_operations_directed_at_Hong_Kong.html.
1219. Ben Nimmo, Camille François, C. Shawn Eib, and Léa Ronzaud, Spamouflage Goes to America: Pro-Chinese
Inauthentic Network Debuts English-Language Videos, Graphika (Aug. 2020), 2.
1220. Nimmo, François, Eib, and Ronzaud, Return of the (Spamouflage) Dragon, 2.
1221. Ibid.
1222. Ibid., 22.
1223. Ibid., 7-8.
1224. Ibid., 9.
4,000 and 4,900 followers, just below 5,000, without ever reaching the threshold beyond which
the transparency settings automatically show where the page is administered from; this
appears intentional. Moreover, their engagement rate was far below what could have been
expected from a page actually followed by such a high number of (real) people: it could
“indicate a policy of purchasing enough followers to make the assets look authoritative,
without obtaining so many that they triggered the transparency setting.”1225
The effectiveness of this maneuver was limited because, as Graphika observed, it “failed
to break out of its own echo chamber. All the likes, shares, and comments on the network’s
posts […] came from other members of the network.”1226 In other words, Spamouflage
Dragon’s activity was circular. As with the first operation, as soon as the maneuver was
exposed (in April), the platforms deleted the accounts involved.
The network came back in June 2020 for a third operation, which resulted in a third
Graphika report.1227 This time it targeted the United States and the Trump adminis-
tration in the context of the Sino-American “Cold War.” It particularly battered “its for-
eign policy, its handling of the coronavirus outbreak, its racial inequalities, and its moves
against TikTok.”1228 There were two novelties compared to previous campaigns: first,
this one was conducted in English, mainly through videos. The quality was not per-
fect (“the videos were clumsily made, marked by language errors and awkward automated
voice-overs”),1229 but the attackers showed a certain reactivity to current events: after
a speech by an American official, for example, they were able to create and broadcast videos
in English within 36 hours. Second, some of the accounts posting them on YouTube
and Twitter had profile pictures created by generative adversarial networks (GANs,
i.e. AI): they were the fabricated faces of non-existent people. This relatively new
technique had already been observed in another case in 20191230 and on LinkedIn, where
some fake profiles are sometimes illustrated with photos generated in this way, notably by
intelligence services that use LinkedIn as a recruitment method.1231 For example, the nine
faces below, which are profile pictures of YouTube users who have commented on one of
the videos in question, are fake: they were all generated in this way.
Source: Ben Nimmo, Camille François, C. Shawn Eib and Léa Ronzaud,
Spamouflage Goes to America: Pro-Chinese Inauthentic Network Debuts English-Language Videos, Graphika, August 2020, p. 28.
1225. Ibid., 15.
1226. Ibid., 1.
1227. Nimmo, François, Eib, and Ronzaud, Spamouflage Goes to America.
1228. Ibid., 2.
1229. Ibid., 1.
1230. Ben Nimmo et al., #OperationFFS: Fake Face Swarm, A joint report by Graphika & the Atlantic Council’s
Digital Forensics Research Lab (Dec. 2019).
1231. Raphael Satter, “Experts: Spy Used AI-generated Face to Connect with Targets,” AP News (13 Jun. 2019).
The same increasingly common method of creating faces with artificial intel-
ligence has been used in other operations, one of which – named “Naval Gazing” by
Graphika – focused on the Sino-American rivalry in the South China Sea between 2016 and
2020.1232 Fake Facebook accounts whose profile pictures had either been stolen or gener-
ated by artificial intelligence attacked the Taiwanese president and supported the presidents
of the Philippines and Indonesia. Some also posed as Americans supporting different
candidates in the 2020 presidential campaign. In all cases, the interventions were dominated
by maritime security and the defense of Chinese maritime interests. Another operation,
also exposed by Graphika, used images generated by artificial intelligence to create fake
profiles, this time on Twitter, to defend Huawei against the Belgian government’s 2020
plan to limit Chinese companies’ access to its 5G network.1233
Meanwhile, the Spamouflage network continued to grow, with some interesting devel-
opments that prompted Graphika to publish a fourth report.1234 First of all, the network’s
three initial operations, to attack Hong Kong demonstrators, defend China during the
Covid-19 pandemic, and attack the United States, only had a limited impact, despite their
switch to English with the third operation; operators were unable to “break out”
of their own echo chamber, i.e., to develop a sufficient reach outside their network.
However, in late 2020 and early 2021, they were met with some success: the net-
work’s messages were actually amplified by important external accounts, includ-
ing “the Venezuelan Foreign Minister, a Pakistani politician, a senior figure at Huawei
Europe, UK commentator and former member of parliament George Galloway, and four
YouTube channels for Chinese viewers with tens of thousands of followers.”1235 Another
shift identified by Graphika was the development of seemingly authentic accounts,
in the sense that, unlike hundreds of others who did not even bother to conceal their
inauthenticity, they invested in persona development. As a result, the apparent authenticity of
these new accounts drove more engagement with the content. Spamouflage also broad-
ened its focus, which largely overlapped with that of Chinese diplomats, who had themselves
retweeted the network’s accounts several hundred times, leading Graphika to say that
“spamouflage increasingly resembles a state-aligned propaganda network that
boosts, and is boosted by, the Chinese government.”1236 Moreover, the network was
also increasingly aggressive toward the United States. The decline of America became
its main narrative, which carried a broader message: the superiority of the Chinese model
over liberal democracy.
3. More than 10,000 fake Twitter accounts linked to the Chinese government
Between August 2019 and March 2020, ProPublica also identified “more than 10,000
suspected fake Twitter accounts involved in a coordinated influence campaign with
ties to the Chinese government.”1237 Some of these accounts had been hacked: ProPublica
gave examples of the initially authentic accounts of “a professor in North Carolina; a
1232. Ben Nimmo, C. Shawn Eib, and Léa Ronzaud, Operation Naval Gazing: Facebook Takes Down Inauthentic Chinese
Network, Graphika (Sept. 2020).
1233. Fake Cluster Boosts Huawei: Accounts with GAN Faces Attack Belgium Over 5G Restrictions, Graphika (Jan. 2021).
1234. Ben Nimmo, Ira Hubert, and Yang Cheng, Spamouflage Breakout: Chinese Spam Network Finally Starts to Gain
Some Traction, Graphika (Feb. 2021).
1235. Ibid., 3.
1236. Ibid., 4.
1237. Kao and Li, “How China Built a Twitter Propaganda Machine.” All quotes in this section are taken from this article.
graphic artist and a mother in Massachusetts; a web designer in the U.K.; and a business
analyst in Australia,” stolen from their owners and which subsequently posted pro-Beijing
propaganda in Chinese and/or English. Other accounts used vernacular Cantonese with
traditional Chinese characters to impersonate Hongkongers – but errors sometimes
appeared with simplified Chinese characters, revealing the account operator’s mainland ori-
gin. Those who held the accounts at the time were not necessarily the ones who stole them:
they could simply have bought them on a black market where hackers sell existing
accounts, which have the double advantage of already having (sometimes many) fol-
lowers and of appearing credible, since they were initially real accounts with no link to
the causes they are later used to promote. The most credible accounts, because they
were initially real accounts and kept a real profile picture (of a real person), were used to
spread the messages first, which were then amplified (republished, liked, commented on)
by an army of more obviously false accounts.
Here again, a cluster of clues linked this network of false accounts to the CCP:
not only were the timeline and content of the messages aligned with the Chinese polit-
ical agenda (first Hong Kong, then the pandemic), to the point that the messages were
sometimes literally copy-pasted from official CCP communiqués, but these accounts (of
people supposedly located all over the world) were mainly active during Beijing work-
ing hours – an observation also made by the ASPI in another study.1238 Furthermore,
ProPublica linked these accounts and this informational operation to a Beijing-based
digital marketing company, OneSight Technology Ltd, which “bills itself as the
top overseas social marketing company in China. It contracts with domestic com-
panies and government agencies to help them market their brands or goods on social
media seen outside of China.” It offered, among other services, “to post messages en
masse across a number of accounts on overseas social media platforms including Twitter
and Facebook.” Its CEO, Li Lei, was apparently a former employee at Beijing’s Foreign
Propaganda Department. Among his clients: China Daily, CGTN, and the country’s
two main news agencies, Xinhua and China News Service (the latter supervised by the
Overseas Chinese Affairs Office (OCAO) and, since 2018, by the United Front
Work Department (UFWD), which absorbed the OCAO). ProPublica obtained a copy of a contract
worth RMB1,244,880 (€159,136) between OneSight and China News Service to increase
the latter’s Twitter visibility.1239
4. Targeted botnets from Serbia to Xinjiang
A substantial rapprochement between China and Serbia has been taking place
since 2014, and has accelerated since 2017. Beijing is more broadly interested in the Balkans
and countries of Central and Eastern Europe (→ p. 310). And Serbia has a twofold advan-
tage: it is not an EU member (and therefore a priori more receptive to visions, concepts, and
initiatives hostile to the EU), but it is a candidate (and therefore well-positioned to serve as a
Trojan horse when the time comes). The Balkans are a strategic priority for Beijing,
and Serbia is at the “heart” of this strategy.1240
1238. Uren, Thomas, and Wallis, Tweeting Through the Great Firewall.
1239. Kao and Li, “How China Built a Twitter Propaganda Machine.”
1240. Vuk Vuksanovic, “Light Touch, Tight Grip: China’s Influence and the Corrosion of Serbian Diplomacy,”
War on the Rocks (24 Sept. 2019).
China opened a Confucius Institute in Belgrade in 2006 and another in Novi Sad in 2017,
signed a large number of academic partnerships, provided content to the Serbian
media, poured billions of euros into Serbia, built a €45 million cultural center on the site of
the Chinese embassy bombed by NATO in 1999, and carries out very active propaganda
in the country with the help of local relays. One of them is the Center for International
Relations and Sustainable Development. This Serbian think tank was partly funded by the
conglomerate CEFC China Energy (before it went bankrupt in March 2020 → p. 165) and
promoted the BRI and Sino-Serbian friendship.1241 Since 2020, there have also been joint
patrols of Serbian and Chinese police in the streets – the visible part of a cooperation
between security forces that could, in the future if not already, enable Beijing to better mon-
itor and control all those belonging to the vast and diffuse category of “overseas Chinese”
(→ p. 165) in Serbia. The rapprochement is so spectacular that it has been said that “China
[had] overtaken Russia as Serbia’s Great Ally.”1242
Thus, Serbia was logically a Chinese priority during the Covid-19 pandemic, and
the Chinese aid was greatly amplified on social networks. An analysis of 30,000
tweets posted by Serbian accounts between March 9 and April 9, 2020 containing
the words “Kina” (China) and “Srbija” (Serbia) showed that bots produced no less
than 71.9% of them.1243 Some of the accounts used were old (created as early as 2009), but
many (954) were created for the occasion in the first quarter of 2020. More than 85% of
the tweets from these bots were retweets: their objective was, therefore, to amplify existing
content rather than distribute new content. They praised the Sino-Serbian friendship
and the Chinese aid given to Serbia during the pandemic, as well as the Serbian
government for its management of the crisis, while criticizing the inaction and lack
of solidarity of the European Union (even though the EU immediately released
€15 million to assist Serbia). One of the accounts most often mentioned in these
tweets was that of Serbian President Aleksandar Vučić, who stated that he trusted his “friend and
brother” Xi Jinping and that China was the only country that could help Serbia, since “European
solidarity does not exist.”1244 The accounts of the Serbian Prime Minister and the Chinese
ambassador to Serbia were also, but to a lesser extent, regularly mentioned.
In December 2019, the ASPI uncovered another similar network of false accounts
that tried to influence the discussions on Xinjiang.1245 The campaign was launched
during the adoption of a bill by the US House of Representatives calling for sanctions
against Chinese Party-State executives responsible for the mass internment of Uyghurs in
Xinjiang.1246 All of the identified accounts used the profile picture of a celebrity (Emma
Stone, Chris Evans, Lily Collins, Keira Knightley…) and retweeted content from Chinese
media to amplify it, in particular the Global Times and Chinese government sources. They
strove to promote sources in line with the Chinese version of the facts concerning the
camps, which the official narrative describes as education centers.
1241. Ibid.
1242. Vuk Velebit, “China Has Overtaken Russia as Serbia’s Great Ally,” BalkanInsight (8 Jul. 2020).
1243. Digital Forensic Center, “A Bot Network Arrived in Serbia Along with Coronavirus,” Digitalni forenzički centar
(13 Apr. 2020).
1244. Sofija Popović, “‘Steel Friendship’ Between Serbia and China Criticized by European Commentators,”
European Western Balkans (30 Mar. 2020); Jean-Baptiste Chastand, “Serbie: un sas d’entrée vers l’Europe pour Pékin”
(“Serbia: a Gateway to Europe for Beijing”), Le Monde (22 Mar. 2021), 18.
1245. Masha Borak, “New Swarm of pro-China Twitter Bots Spreads Disinformation about Xinjiang,” South China
Morning Post (5 Dec. 2019).
1246. “US House approves Uyghur Act Calling for Sanctions on China’s Senior Officials,” The Guardian (4 Dec.
2019).
#forzaCinaeItalia
A study showed that nearly half (46.3%) of the tweets that quoted the hashtag #forzaCi-
naeItalia (“Go China and Italy”) and more than a third (37.1%) of those quoting the
hashtag #grazieCina (“Thanks China”) between March 11 and 23, 2020, were created
by bots.1247 The Chinese Embassy in Italy had initiated the movement by tweeting an image
entitled “Forza Cina e Italia!” on February 24; the hashtag #forzaCinaeItalia was then used on
March 11. After that date, it was widely reproduced.
C. Discredit, divide, and scare opponents
An essential aspect of these campaigns, especially the one identified by the ASPI and
corroborated by other groups (Graphika, Bellingcat), which has been persistent on Twitter and
Facebook since 2017, is that they do not solely defend China. Promoting the Chinese
model means degrading other models at the same time, especially liberal democ-
racies, something that Russian influence operations have been doing for years
(→ p. 620). At least three tactics are regularly employed to this end.
First, they discredited the adversary’s capability – which is also, correlatively, a way
of praising China’s own resources by comparison. This process was frequently used during
the Covid-19 pandemic. Several tweets were published in a globally coordinated campaign
that targeted different countries (Canada, Finland, Japan, and the United States). They pre-
sented themselves as personal testimonies of Chinese living abroad, with local references,
but they were, in reality, the same text with gaps filled depending on the context:
____ has already lost control of the pandemic. I heard from a friend in a _____ hospital that num-
berless people are trying to get diagnosed every day, but there are no tests, they just get sent back
home. ____ has a large elderly population; lots of them just have to die at home. If you do not get
diagnosed, then you do not count as having got the disease, which is how _____ is keeping its num-
bers so low. It’s so scary. I already reserved my plane tickets home. In critical moments we have to
concentrate efforts to tackle a great challenge!1248
1247. Gabriele Carrere and Francesco Bechis, “Così la Cina fa propaganda in Italia, con i bot. Ecco l’analisi su
Twitter di Alkemy per Formiche,” Formiche (30 Mar. 2020).
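Templated posts of this kind – identical except for the blanks – are easy to flag with a near-duplicate check. A minimal, hypothetical sketch using only Python's standard library (the example texts are invented, not taken from the campaign):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; templated posts score near 1
    even when country and city names differ."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Two hypothetical "personal testimonies" built from the same template,
# with only the country and hospital location swapped.
post_a = ("Canada has already lost control of the pandemic. I heard from "
          "a friend in a Toronto hospital that numberless people are "
          "trying to get diagnosed every day.")
post_b = ("Finland has already lost control of the pandemic. I heard from "
          "a friend in a Helsinki hospital that numberless people are "
          "trying to get diagnosed every day.")
unrelated = "Lovely weather today, going hiking this weekend!"

print(round(similarity(post_a, post_b), 2))    # high (near 1.0)
print(round(similarity(post_a, unrelated), 2))  # low
```

At scale, researchers use faster techniques (shingling, locality-sensitive hashing), but the principle is the same: the template leaves a measurable fingerprint across supposedly independent accounts.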
The attacks against liberal democracies were not limited to their capacity to manage
health crises: they targeted above all the legitimacy of their institutions, and therefore the
democratic model itself. One of the most widespread ideas developed here was that democ-
racies are not only inefficient but also unstable and chaotic. Any image of disorder (demon-
strations, damage to public property, burning cars, crimes, etc.) was amplified to confirm
the superiority of the Chinese authoritarian model. From this point of view, the assault
on the Capitol by supporters of Donald Trump on January 6, 2021 provided Chinese
media and trolls with an almost inexhaustible source of criticism of the dem-
ocratic model – supposedly embodied by Washington – and a powerful example with which to
denounce the alleged “double standards” between the supposedly comparable situations of Hong
Kong and Washington, as illustrated by montages published by the Global Times (see below).
1248. Monaco, Smith, and Studdart, Detecting Digital Fingerprints, 67.
Source: “Chinese netizens jeer riot in US Capitol as ‘Karma,’ say bubbles
of ‘democracy and freedom’ have burst,” Global Times (7 Jan. 2021).
Second, they fan the flames of divisive issues, particularly racism and police violence in the United States. Some messages were purely critical, painting the image of a country fighting a civil war, without making any connection to China – this is the “Russian” approach (→ p. 620). Others drew a parallel between the violence in the United States and in Hong Kong to denounce the supposed hypocrisy (i.e., double standards) of the Americans.
China’s campaigns on social networks that targeted the United States during
the presidential campaign were not clearly partisan. But, following the example of the
Russian Internet Research Agency (IRA) four years earlier, they supported both sides of
a divisive issue to add fuel to the fire. A special effort was dedicated to the racial issue
(because it was particularly divisive). For example, Chinese operators broadcast “messages
in support of both the Black Lives Matter and pro-police Blue Lives Matter movements.
The point was not to take a side but rather to boost divisiveness by amplifying competing,
emotionally charged viewpoints.”1249
Source: https://twitter.com/SpokespersonCHN/status/1266741986096107520.
A report showed that, on Twitter, “CCP followers started focusing on the demonstra-
tions in the US, sparked by the killing of George Floyd on 25 May [2020].” The tweets
within this narrative have several aims: exacerbating domestic tensions over police brutality against Black Americans; video and images illustrating the often violent suppression of protests by police; and comparing the Trump administration’s response to the demonstrations to the protests in Hong Kong. […] Official CCP accounts used the hashtags #BlackLivesMatter, #BLM, and #GeorgeFloyd more than 500 times combined in the weeks after Floyd’s death. Ministry of Foreign Affairs spokesperson Hua Chunying also used Floyd’s dying words as a rebuttal to a U.S. State Department tweet condemning China’s actions in Hong Kong” (see image above).1250
1249. Tatlow, “Exclusive: 600 U.S. Groups Linked to Chinese Communist Party Influence Effort with Ambition Beyond Election.”
1250. Raymond Serrato and Bret Schafer, Reply All: Inauthenticity and Coordinated Replying in pro-Chinese Communist Party Twitter Networks, Institute for Strategic Dialogue and Alliance for Securing Democracy (Jul. 2020), 20.
Source (for the 6 pictures): Wallis et al., Retweeting through the great firewall, 13-16.
On the Bilibili platform, the CYL distinguished itself in June 2020 by broadcasting vid-
eos exploiting the death of George Floyd to “denounce” the racism of the U.S.
government. One of them, published on June 3 and captured below, was entitled:
“In a few words: they obtained freedom in 1862, so why is it that black Americans still can’t
breathe.” “In a few words” is a series broadcast by Guanchazhe (观察者 – “the observer”),
a medium created in 2012 that presents itself as a forum analyzing international issues.
It displays views close to the government’s. In the video mentioned above, the authors
presented African Americans as an oppressed nation within the United States; they also
suggested that the FBI was responsible for the assassination of Martin Luther King Jr. In
addition, the video claimed that white Americans created the superhero Black Panther to
obscure the true history of the Black Panther movement. This claim was evidently false: one only needs to check the chronology of events to see why. The Marvel hero made his first appearance in Fantastic Four in July 1966, whereas the African-American Maoist movement was founded the following October. Marvel therefore could not have created the character to fight the Black Panthers, nor has it been established that Marvel’s hero influenced the choice of the Black Panthers’ name.
George Floyd’s death has thus been exploited in many kinds of media content and in a number of ways, including artistic images such as the one titled To Breath by “wolf warrior artist”
Wuheqilin (also known for creating the piece featuring an Australian soldier slitting the
throat of an Afghan child → p. 223).
Wuheqilin, To Breath (source: Global Times, https://archive.vn/WhBcy).
Anti-Chinese racism – which is sometimes real, there is no denying it – also gives rise
to manipulated information. For example, in April 2021, a video circulated on social net-
works in China and Southeast Asia in which an Asian man, on the ground and bleeding,
was violently beaten by dozens of Spanish-speaking men, armed with sticks. The video
(captured below) was posted with the following message: “In California, USA, blacks and
whites kill Chinese. Forward to the whole of China!”
In fact, this video was shot in February 2021 during a prison riot in Ecuador and was
originally released by the Twitter account of the Ecuadorian Ministry of Justice.1251
1251. Jane Tang, “China’s Information Warfare and Media Influence Spawn Confusion in Thailand,” Radio Free Asia (13 May 2021).
This is not an isolated case: the exploitation of racist anti-Asian acts, whether real, alleged,
or fabricated, is one of the leitmotifs used by Chinese propaganda to denigrate the United
States but also, more generally, “the West” – especially countries with large Asian commu-
nities, such as Australia and Canada.
The United States is not the only country affected by attempts to sow division. In South Korea, too, there have been repeated attempts to intervene in divisive debates to inflame tensions. Suspecting that Chinese agents were invading Korean-
language forums and discussion groups, some Internet users started an experiment in early
2020. They created a fake online debate by posting links pointing to websites banned in
China to trap Chinese Internet users. A significant number of those who clicked on the
links then started posting the same comment, simply saying, in Korean, “I am an individ-
ual” – an incomprehensible phrase that some interpreted as a kind of code to indicate to
the Chinese services monitoring the web that they had found themselves on these banned
websites against their will. This experiment sparked a debate, with some politicians stressing
the need to legislate to prevent this kind of manipulation.1252
Psychological manipulation: gaslighting
Gaslighting is the psychological manipulation of denying proven facts and defending false assertions in order to destabilize the convictions of the target audience and disturb their sense of reality. The term refers to the eponymous movie directed by George Cukor – itself inspired by the play Angel Street written by Patrick Hamilton – in which the husband manages, using various stratagems, to make his wife doubt her own mental health.1253 Beyond the abusive interpersonal relationship, this psychological concept can be
applied on a broader scale, such as between those who govern and those who are governed:
the behavior and words of President Donald Trump, for example, have more than once been
described as gaslighting.1254 The CCP has also engaged in this kind of manipulation. While one cannot scientifically prove intent, one can at least identify many cases where
the Party openly lied, distorted reality, or sought to rewrite history, with the effect
of creating possible confusion among the public. To take a recent example, the Party
sought to deflect attention from its short-sightedness at the beginning of the coronavirus crisis, glorifying doctor Li Wenliang, whom the authorities had previously accused of spreading false rumors, while censoring the first testimonies of the crisis that contradicted the official version of controlled management.1255 This distortion was carried out by downplaying the published numbers of infected people and deaths,1256 and by criticizing foreign countries for not having taken the virus
seriously – although, at the beginning of the crisis, China claimed that the situation was under
control and that foreign countries should not suspend their air links with it.
Through Foreign Ministry spokesperson Zhao Lijian (and other figures), China has also
caused unrest among domestic and foreign audiences by spreading the rumor that the virus was brought to China by U.S. military personnel (→ p. 596). Similarly, Chinese
authorities distorted the reality of the protests in Hong Kong by claiming that the protesters
1252. Tae-jun Kang, “Suspicions Grow in South Korea Over China’s Online Influence Operations,” The Diplomat
(27 Mar. 2020).
1253. G. Alex Sinha, “Lies, Gaslighting and Propaganda,” Buffalo Law Review, 68:4 (Aug. 2020), 1088.
1254. Alfie Eltis, “Trump, and the History of Political Gaslighting,” Varsity (2 Oct. 2020); Nicole Hemmer, “Donald Trump Is Gaslighting America,” United States Studies Center (16 Mar. 2016); Stephanie Sarkis, “Donald Trump is a Classic Gaslighter in an Abusive Relationship with America,” USA Today (10 Mar. 2018); George Hagman, “Gaslighting the Pandemic: Donald Trump, Lies, Manipulation and Power,” International Association for Psychoanalytic Self Psychology (20 Jun. 2020); Jennifer Rubin, “Trump’s Convention is the Ultimate Gaslighting Exercise,” The Washington Post (24 Aug. 2020).
1255. Christoph Koettl, Muyi Xiao, Nilo Tabrizy, and Dmitriy Khavin, “China Is Censoring Coronavirus Stories.
These Citizens Are Fighting Back,” The New York Times (23 Feb. 2020); Jordan Schneider, “All the Early COVID-19
Stories Censored Off Chinese Internet,” Sup China (7 Apr. 2020).
1256. Nick Paton Walsh, “The Wuhan files: Leaked Documents Reveal China’s Mishandling of the Early Stages of
Covid-19,” CNN (1 Dec. 2020).
were a tiny minority or violent troublemakers, or that the protests were fomented by a foreign actor aiming to destabilize the authorities. They used the same techniques in their attempts to erase the Tian’anmen Square events of 1989 from collective memory, and when they claim that the camps in Xinjiang are merely educational centers designed to improve participants’ living conditions in these “vocational education and training institutions.”1257 These competing versions of reality, so zealously defended in the media and on social networks, take effect as soon as the public is so troubled that it no longer knows which version of the facts to believe. Even without securing firm adherence to its own discourse, the gaslighter hits its target as soon as the latter can no longer trust the proven version of the facts, which, at least in the case of China, makes it possible to reduce the extent or vehemence of the criticism it faces on these subjects.
Third, another tactic is to stoke fear and try to create panic. In March 2020, millions of Americans received alarmist text messages that exaggerated the magnitude of the pandemic and announced an imminent lockdown and a suspension of New York City’s public transportation. The messages also encouraged recipients to stockpile food, medicine, etc.,
and asked them to relay this message to their contacts. One of the messages that circulated,
allegedly sent by the Department of Homeland Security, stated that the government would
make its announcement as soon as troops were properly deployed to contain any riots.
Within 48 hours, the rumor had already become so widespread that the National
Security Council publicly denied the information via its Twitter account: “Text message
rumors of a national #quarantine are FAKE. There is no national lockdown.”1258 This was
a text message version of the traditional email chain: “not something that is new, but it is
something that is effective,” said Graham Brookie, director of the DFRLab at the Atlantic
Council.1259 According to U.S. intelligence, Chinese services contributed to this oper-
ation, not necessarily by creating the messages, but by amplifying them on social
networks and messaging apps, including encrypted messaging, a practice that makes identifying and countering misinformation all the more difficult.1260
D. The PLA is also waging war through social networks
While attribution to the Chinese state is generally difficult, mainly because of so-called “nationalist” or “patriotic” trolling that is not necessarily directed or controlled by the state, it is even more challenging to know which agencies or departments within the state are involved. Nathan Beauchamp-Mustafaga and Michael S. Chase estimated
that the accounts deleted by Twitter in August and September 2019 were operated by the
Propaganda Department and/or UFWD.1261 Attention is generally focused on the
UFWD, but the authors rightly believed that “the PLA should be recognized as another
key driver of these Chinese efforts. Recent events in Taiwan as well as writings and pat-
ents filed by Chinese military researchers suggest that the PLA is increasingly interested in
leveraging social media for such political interference in foreign countries, including in the
1257. “Scholars Spreading Rumors about Uyghur Detention Work for US Intel Agency: Spokesperson,” Global
Times (3 Dec. 2019), https://archive.vn/hLbF4.
1258. Tweet of the NSC (@WHNSC) (16 Mar. 2020).
1259. Mihir Zaveri, “Be Wary of Those Texts from a Friend of a Friend’s Aunt,” The New York Times (16 Mar. 2020).
1260. Edward Wong, Matthew Rosenberg, and Julian E. Barnes, “Chinese Agents Helped Spread Messages That
Sowed Virus Panic in U.S., Officials Say,” The New York Times (22 Apr. 2020).
1261. Beauchamp-Mustafaga and Chase, Borrowing a Boat Out to Sea.
United States.”1262 While the main actors manipulating social media are undoubtedly
the PLA, the UFWD, and the MSS, there does not seem to be a clear division of labor
between them, or at least it remains unknown to us.
The PLA uses social networks for “open” influence first, i.e. circulating pro-
paganda, for deterrence and psychological warfare; then, it uses them to conduct
clandestine and hostile operations against foreign targets. On the first point, the PLA
has its own vectors, notably the PLA Daily and the China Military Online website, and it
uses Chinese state media, notably Xinhua, China Daily, and the Global Times. It has also been
managing a large number of accounts on Chinese social networks (Weibo and WeChat)
since 2010 (the first of which was probably the PLA Daily’s Weibo account in March 2010).
It had 700 accounts in February 2017 and probably more today.1263 The PLA does not yet have accounts on Western social networks (Facebook, Twitter, YouTube, Instagram), however; there it is only indirectly present, relaying its messages through other Chinese accounts, especially those of news agencies. That said, it could join these platforms soon.
Beauchamp-Mustafaga and Chase’s report gives valuable insights because it is based
on Chinese military literature (doctrine, articles published in the PLA Daily, the monthly
Military Correspondent, and other military newspapers). It thus provides a better grasp of
what military researchers think, especially when talking about the “war of public opinion
online” (网络舆论战), an effort to which we contributed with the analysis of different
PLA actors, in particular Base 311, in the second part of this report (→ p. 89).
Doctrinal documents insist on the importance of the informational field: a 2013
book on the Science of Military Strategy (战略学) published by the Department of Military Strategy of the Academy of Military Science (军事学学院战略研究部) stated
that “informational dominance is the foundation of the initiative on the battle-
field”1264; and the 2015 Chinese White Paper on Defense spoke of the “informationiza-
tion” of war.1265
As in Russia, the information field is conceived in a broad sense: “the PLA
views cyber, electronic, and psychological warfare as interconnected subcomponents of
informational warfare writ large.”1266 Moreover, Chinese informational warfare, in general,
is probably coordinated by the Central Cyberspace Affairs Commission (中央网络安全和
信息化委员会), chaired by Xi himself1267 – while at the PLA’s level, coordination probably
falls to the SSF (Strategic Support Force).
The PLA first focused on online media, which were quickly seen as power multi-
pliers that could “double the results for half the effort when trying to mislead and confuse
people with information.”1268 Then, in 2009, the PLA became aware of the importance
of social networks in informational warfare, seeing how they were used by Western
powers to encourage the post-election uprisings in Iran. An article in that year’s China
Defense News talked about “cyber-subversion” and noted how, “through Twitter, Facebook,
YouTube and other websites, the US, Britain and Israeli intelligence have spread sensation-
1262. Ibid., viii-ix.
1263. Ibid., 43.
1264. 战略学 (Science of Military Strategy), 军事学学院战略研究部 (Department of Military Strategy of the Academy of Military Science), Beijing, 军事科学出版社 (Academy of Military Science Press) (2013), 130.
1265. http://english.www.gov.cn/archive/white_paper/2015/05/27/content_281475115610833.htm.
1266. Joe McReynolds and John Costello, China’s Strategic Support Force: A Force for a New Era (Washington: National
Defense University, 2018), 5.
1267. Beauchamp-Mustafaga and Chase, Borrowing a Boat Out to Sea, 28.
1268. 刘轶 (Liu Yi), “博客新闻在 信息化战争中的运用” (“The Use of Blog News in Informatized Warfare”),
军事 记者 (Military Correspondent), May 2007.
alist information to poison the Iranian people.”1269 As early as 2003, before the emergence of social media, the PLA had closely monitored how the Americans used mass media to shape public opinion about their intervention in Iraq. The PLA generally follows what
the Americans did in terms of influence, considering that they are “the best in the
world” in this domain.1270
From this point of view, it is crucial to understand that the Chinese posture – like
the Russian one – is primarily defensive: the threat that these methods could be used
to encourage revolts in China increased the Chinese government’s interest in mass media:
“[from] China’s perspective, influence operations are undertaken by all countries, and it is
other countries, especially the United States, that use social media to interfere in the political
processes of countries like Iran and the Middle East. Whatever actions the PLA takes to
counter this perceived subversion are considered ‘defensive’ and necessary to protect and
defend the military and the Party.”1271 It took only a few years, however, for the PLA to perceive their offensive potential.
Thus, in a 2011 article, PLA researchers presented the social networks’ potential as
a “subtle disguise” for psychological warfare or a “sugar-coated pill” as the target’s
psychology can be affected without their knowledge.1272 Here, disinformation (虚假 信
息) is a tool of psychological warfare. A 2013 book of the Academy of Military Sciences
explained that it could be used in various ways: “the deprivation of information, the cre-
ation of informational chaos […], the implantation of erroneous information into
the enemy’s information system, causing the enemy command to make the wrong
decisions.”1273 A 2006 article recommended “misleading” enemy leaders by “mixing
true and false information,” making them indistinguishable.1274
In 2015, the PLA Daily devoted a whole page to “Social Media Warfare,”1275 not only
from the point of view of China’s adversaries, and therefore of the risks it poses to China’s stability, but also considering the opportunities it offers the PLA. This shows how
what was initially perceived as a vulnerability – therefore defensively – quickly became a
weapon as well, with the PLA now assessing its offensive uses.
According to Beauchamp-Mustafaga and Chase, the PLA has three main objectives
on social networks, the first two of which are open, the third clandestine: “First, it
seeks to achieve narrative dominance through the use of official social media accounts
to overtly spread Chinese propaganda and, thereby, shape public perceptions and poli-
cies toward China and its military. Second, the PLA seeks to use official social media
accounts for deterrence purposes to communicate deterrence signals, which specifically
1269. 迟延年 (Chi Yannian), “网络颠覆: 不容 小觑的安全威胁” (“Cyber Subversion: Security Threats that
Must Not be Taken Lightly”), 国防报 (China Defense News) (6 Aug. 2009), 3.
1270. Beauchamp-Mustafaga and Chase, Borrowing a Boat Out to Sea, 34. See, for instance: 朱金平 (Zhu Jinping), “假
新闻:现代战 争中的重要杀手: 以美国21世纪前后发动或主导的4次 战争为例” (“Fake News: The Important
Killer in Modern Warfare: Examples of Four Wars Initiated or Led by the United States before and after the 21st
Century”), 军事记者 (Military Correspondent) (2008), 37-39.
1271. Beauchamp-Mustafaga and Chase, Borrowing a Boat Out to Sea, 4.
1272. 吴银胜 (Wu Yinsheng) and 梅建兵 (Mei Jianbing), “社交媒体的迅猛发展及心理战运用的几点启示”
(“Some Inspirations Drawn from the Application of Booming Social Media in Psychological Warfare”), 国防科技
(National Defense Science & Technology), 3 (2011), 77-80.
1273. 叶征 (Ye Zheng), 信息作战学教 (Lectures on the Science of Information Operations), 军事科学出版社
(Beijing: Academy of Military Science Press) (2013), 105.
1274. 盛沛林 (Sheng Peilin) and 李雪 (Li Xue), “论 ‘舆论斩 首’” (“On ‘Public Opinion Decapitation’”), 南京政
治学院学报 (Journal of the PLA Nanjing Institute of Politics), 5 (2006), 114-117.
1275. 陈航辉 (Chen Hanghui), 芳鹏 (Fang Peng), 杨磊 (Yang Lei), and 夏育仁 (Xia Yuren), “社交媒体战: 信息
时代战争新维度” (“Social Media Warfare: A New Dimension to Warfare in the Information Age”), 解放军报 (PLA
Daily) (25 Sept. 2015).
demonstrate China’s capabilities and credibility while also undermining an enemy’s resolve
through psychological warfare. Third, the PLA seeks to leverage social media for polit-
ical interference in order to degrade the credibility of a foreign political system, under-
mine support for a foreign government and its policies, as well as support China’s preferred
political candidates in an election.”1276
In this vein, several articles published in military journals suggested that “the PLA is
developing technologies to manipulate foreign social media platforms,”1277 includ-
ing “deep fakes” and “public sentiment analysis.” A 2018 article from the PLA’s leading
psychological warfare unit advocated for more research and investment in digital informa-
tion operations, particularly in the use of big data and natural language processing.1278
Further proof of the PLA’s interest in the use of databases and artificial intelligence was the attention aroused by the Cambridge Analytica scandal. In a 2018 article, a professor at the National Defence University argued that “lessons [had to] be learned,”
particularly in terms of China’s ability to “exploit big data analysis, AI processes, bots
and astroturfing, grasp the different personalities of selectors1279 and realize large-
scale guidance of public opinion and changing their [political] orientation.”1280 The
author also emphasized “the value of tailoring messages based on the beliefs, value sys-
tem, political orientation, and targeting of different countries, political parties, and cultural
groups, among others.”1281 All of this suggests that at least some PLA members are inter-
ested in “using social media and next-generation tools to influence voters in foreign
countries.”1282 Finally, a June 2019 article co-authored by a Base 311 researcher “explicitly
suggested that the PLA should use Artificial Intelligence (AI) to manage its network
of social media bots, which would be able to create content based on human advice,
select the appropriate time to publish on social media and coordinate these fake (马甲) accounts.”1283
E. Satire and irony
Chinese information-manipulation operations often use humor, particularly sat-
ire and irony, to amplify the scope of their message or to undermine the credibility of
their opponents.1284 We have provided several examples of this, notably taken from the CYL
during the scandal over George Floyd’s death (→ p. 387). The power of these rhetorical
tools was identified very early on by Greek and Roman orators. Because of its abil-
1276. Beauchamp-Mustafaga and Chase, Borrowing a Boat Out to Sea, 14.
1277. Ibid., 23.
1278. 刘惠燕 (Liu Huiyan), 熊武 (Xiong Wu), 吴显亮 (Wu Xianliang), and 梅顺量 (Mei Shunliang), “全媒体
环境下推进认知域作战装备发展的几点思考” (“Several Thoughts on Promoting the Construction of Cognitive
Domain Operations Equipment in the Whole Environment”) 国防科技 (Defense Technology Review), October 2018.
1279. Selectors are key data necessary to target an individual (i.e. a phone number, credit card number, email and
so on).
1280. 董 涛 (Dong Tao), “推进军事新闻分众化的国际传播” (“Advancing International Communications for
Military News [Toward] Differentiated Audiences”), 军事记者 (Military Correspondent) (26 Sept. 2018).
1281. Beauchamp-Mustafaga and Chase, Borrowing a Boat Out to Sea, 95.
1282. Ibid., 96.
1283. 李弼程 (Li Bicheng), 胡华平 (Hu Huaping), and 熊尧 (Xiong Yao), “网络舆情引导智能 代理模型”
(“Intelligent Agent Model for Online Public Opinion Guidance”), 国防科技 (National Defense Science & Technology)
(Jun. 2019), 73-77; cited in Beauchamp-Mustafaga and Chase, Borrowing a Boat Out to Sea, 22.
1284. Satire is also a weapon cherished by Internet users seeking to circumvent censorship: Séverine Arsène, “La
satire, ou la ringardisation de la censure sur le web chinois” (“Satire, or the Nerding out of Censorship on the Chinese
Internet”), CERI (2010).
ity to quickly turn a hostile opinion around,1285 the joke is almost a magical weapon that
every speaker must know how to handle. A witticism can, in a few seconds, convince an
initially reluctant audience. As Cicero and Quintilian showed, the strength of a joke is that
it allows one to play on the three methods of persuasion: docere, delectare, and mouere. Because
the joke can be the vector of an honest thought, including (and especially) those most difficult to hear, it acts on docere (a translation of the Greek logos, which refers to the factual
and objective dimension of the discourse). It is notably in the form of the witty word that
the joke is the most effective in mobilizing this tool of persuasion. The joke is particularly
useful for the second method, delectare (translation of the Greek ethos), because it can build
complicity with the audience and attract sympathy for the speaker (conciliare). Finally, the
joke makes it possible to set in motion mouere (“to move”), such as throwing opprobrium
on the adversary by discrediting them with a scathing remark. Animalization, or association with unpleasant or ridiculous characters, is also a proven method of moving and engaging the audience.
In April 2020, Xinhua Agency posted a video on YouTube called “Once Upon a
Virus.” This 1-minute-46-second montage was a perfect illustration of the CCP’s
use of humor and satire in its propaganda operations. Featuring Lego characters par-
ticipating in a play, the video was structured around a dialogue between China and the
United States about Covid-19, the former represented by a group of Terracotta Warriors and the latter by the Statue of Liberty. The video explicitly stated that the health situa-
tion in the United States was due to the refusal of the authorities to listen to the warnings
that China issued. The comic tone lay in the contrast between the words of the Statue of
Liberty (i.e., the United States), which insisted that forcing people to wear masks was con-
trary to human rights or accused China of building concentration camps, and the progres-
sive deterioration of its health ostentatiously suggested by the image. While making people smile, the message emphasized the absurdity of Washington’s narratives at a time when Beijing appeared to be simply trying to alert Americans to the risks they were taking with this attitude.
The use of Lego, by association of ideas, reinforced the demonstration of the infantile
character of the U.S. posture.
The CCP’s use of satire also takes the form of hijacking Western fables and folk
tales. Two recent illustrations are very evocative: on May 12, 2021, Xinhua broadcast, via
Western social networks, a criticism of what it presented as the hegemonic posture
of the United States. To do this, Xinhua created an analogy between Washington’s attitude
and that of the stepmother in the Grimm brothers’ fairy tale, Snow White.
1285. Cicero, De Oratore.
There are even websites dedicated to this activity, publishing standardized profiles for
each person reported, with photo, personal information, contact details, etc. – all in a staged
presentation reminding these individuals that they are “targeted” (see below).
Screenshot of the HKLeaks website on 03/14/2021.
The best-known website in this register is HKLeaks, initially registered as hkleaks.org on August 15, 2019, and operating as hkleaks.pk and hkleaks.ml when this report was written. Among the “targets” profiled on this website were “eight teachers that
are seen as supporters of the protests (including the director of the Chinese University of
Hong Kong), 61 journalists and editors of Apple Daily (one of Hong Kong’s biggest news-
papers), 23 individuals who allegedly ‘doxxed’ the Hong Kong police, numerous pro-de-
mocracy lawmakers and opinion leaders, and more than 900 protesters.”1288 These profiles
were then shared on social networks (Facebook, Twitter, Telegram, and Weibo).
Several signs suggest that Chinese authorities were behind this initiative or at least
supported it. Not only did the very professional nature of the website seem to suggest a certain level of financial resources and expertise, but the accounts promoting it on social networks were the same ones that several platforms removed for “coordinated inauthentic behavior linked
to state-backed actors.” Moreover, state media were promoting HKLeaks (CCTV’s Weibo
account featured HKLeaks asking its followers to “act together” and “tear off the masks of
the rioters,” a message later shared by Chinese police and the CYL, among others).1289 Most
importantly, information that only Chinese authorities knew was contained in the profiles
of the targeted individuals, such as a false address that was given to the Chinese police or a
passport photo that was only used on a travel permit to China.1290
Another doxing website, hongkongmob.com, paid users who provided new tar-
gets or new information about existing targets. In January 2020, the website, which
has since been discontinued, reported that it had doxxed 62 protesters and distributed
HKD78,019 (€8,273) in rewards.1291 Insikt Group (Recorded Future) believed that the
actors behind this website “are likely not Hong Kong natives, but are attempting to pass
off as such. Although most of the website is written in traditional Chinese characters […]
1288. Insikt Group, “Chinese Influence Operations,” 15.
1289. Ibid., 19.
1290. Ibid., 15-16.
1291. Ibid., 16.
and the content mirrors written and spoken Cantonese […], some terms and wording used
on the website are not commonly used by Hong Kong-born Cantonese speakers.”1292
Insikt Group also noted that the two websites, hkleaks.pk and hongkongmob.com (the latter no longer online), were hosted in Russia by the same company (DDoS-GUARD), with nearly identical IP addresses (185.178.208.149 and 185.178.208.143), and that they used the same Russian e-mail service (Yandex).1293
G. Language as a clue to the Chinese origin of the manipulations
A priori, Beijing can easily penetrate Chinese-speaking environments such as Hong Kong, Taiwan, and Singapore, and even English-speaking environments such as Singapore, Australia, New Zealand, the United States, and Canada, although local linguistic variations can complicate the task. Countries such as Indonesia, Vietnam, and especially Japan, on the other hand, feel relatively protected by the language barrier, since few Chinese operators have a perfect command of their national languages. That protection seems to be eroding, however. Three years ago, Chinese informational attacks on Japan contained Japanese syntax errors, or characters used only in China, and were therefore easily spotted. But Chinese services have improved in recent years: “APT10’s emails are now very good, their Japanese is excellent, not distinguishable.”1294
In Hong Kong and Taiwan, the difference between the traditional Chinese script in general use and the simplified script of mainland China remains the principal way to detect the origin of a message, but with several limitations. First, it is not easy for Taiwanese readers to spot suspicious simplified characters because, in their daily life, they are constantly exposed to simplified Chinese on television, in the subtitling of TV series, on social networks, and in many cultural products. Over time, they stop noticing it, and when they do notice, it is not necessarily perceived as suspicious.1295 Second, simplified Chinese appears in articles from mainland media shared by Taiwanese users on social networks, without being a symptom of any Chinese clandestine operation. The use of simplified Chinese can also reflect a deliberate strategy of targeting different audiences, in this case the parts of the population favorable to Beijing. Part of that public is aware that these messages come from China, yet the messages still work, because those readers do not care whether the messages are authentic or manipulated as long as they say what they want to hear and confirm their certainties, a well-known psychological phenomenon that helps explain the persistence of information manipulation despite corrections, warnings and other debunkings.
Attackers attempt to conceal their mainland origin by converting simplified Chinese into traditional Chinese. This can be detected in long texts, however, into which a few simplified characters may slip by mistake. In the disinformation about the pandemic, for example, one of the terms that revealed the Chinese origin of the messages was “corpse,” which is written 屍體 in traditional characters (in Taiwan) but 尸体 in simplified characters (in the PRC). However, many messages wrote 尸體, correctly converting the
1292. Ibid., 17.
1293. Ibid., 18.
1294. Interview with the authors, Tokyo (Mar. 2019).
1295. According to a Watchout civilian group executive interviewed by the authors (Apr. 2019), confirmed by the
Taiwan FactCheck Center (Jan. 2020).
second character but not the first, thus revealing their mainland origin.1296 Another clue is that switching input languages in Microsoft software leaves traces (in the way the characters are rendered), so that it is still possible to determine that a text in traditional Chinese came from a computer usually running in simplified Chinese. It is also possible to identify expressions used exclusively in mainland China (such as “Taiwanese authority” rather than “Taiwan”).
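The half-converted 尸體 example suggests a simple automated check: scanning a nominally traditional-Chinese text for characters belonging to the simplified set. The sketch below is purely illustrative; its mapping is a tiny hand-picked sample, and a real system would need a full conversion table plus contextual analysis, since some characters are shared between the two scripts:

```python
# Hand-picked sample of simplified characters with distinct traditional forms.
# Illustrative only: a complete mapping has thousands of entries.
SIMPLIFIED_TO_TRADITIONAL = {
    "尸": "屍",  # corpse (the character left unconverted in the example above)
    "体": "體",  # body
    "国": "國",  # country
    "发": "發",  # to emit / to develop
}

def flag_simplified(text: str) -> list[tuple[int, str]]:
    """Return (position, character) pairs for characters from the simplified set."""
    return [(i, ch) for i, ch in enumerate(text) if ch in SIMPLIFIED_TO_TRADITIONAL]

# The half-converted spelling keeps the simplified first character:
print(flag_simplified("尸體"))  # [(0, '尸')]
print(flag_simplified("屍體"))  # [] (fully traditional, nothing flagged)
```

Libraries such as OpenCC implement this kind of mapping comprehensively; the point here is only that a single stray character is enough to betray an imperfect conversion.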
Aware of these difficulties, attackers have refined their methods and are increas-
ingly using Taiwanese intermediaries: messages are either produced locally or in
China and then sent to relays that “Taiwanize” them before disseminating them.
These relays can be public relations agencies in Taiwan or Taiwanese living in mainland
China. In 2017, a student at the PLA National Defence University created a manual to help the PLA “relocalize” its interventions on Taiwanese social networks and thus better conceal them: “[the] author explains how to alter the sentence structure and vocabulary used by Mandarin speakers […] to sound more like that of Southern Min, the language used in Taiwan, because he is from Fujian, where the local dialect is closest to Taiwanese. He [explains] that sounding local will reduce the emotional distance between the two sides.”1297
“Since 2018,” one minister explained, “we have noticed that simplified characters are becoming rare: China subcontracts to groups located in Taiwan and uses local expressions.”1298 J. Michael Cole corroborated this information: “At first, they were
using Chinese citizens, but the audience quickly realized it because they were using sim-
plified Chinese and phrases. […] Now, he said, the content appears to be produced in
Taiwan.”1299 This method is more effective than the previous ones because it goes beyond simple translation: the Taiwanese intermediary can insert cultural references into the content. It is nothing less than cultural intermediation. If the Chinese sponsor correctly erases its traces, the disseminated message is much harder to attribute.
Language remains a relevant clue, but to a lesser extent, and the foreseeable use of artificial intelligence to generate false news will probably further reduce the importance of the language factor. Another limitation is that the importance of words must be put into perspective: a significant proportion of information manipulation involves images, memes, or messages too short to betray their language of origin.
At least three other clues must be considered. First, timing: the Taiwanese have found that during Golden Week (the annual vacation period in mainland China), there was a decrease in disinformation targeting Taiwan, suggesting that some of it does come from the mainland. Second, recycling: some of the false news that targeted the DPP during the last electoral campaign was obviously repurposed from Hong Kong to Taiwan (producing a Taiwanese version of the same false story → p. 485). Third, themes: some attacks carry cultural markers. The CCP, for example, is characterized by its tendency to think in ethnic terms. Having foreign ancestors, whether true or not (the false rumor that Joshua Wong was Vietnamese, for instance), is not a problem in a multicultural society like Taiwan, so treating it as one reveals a mainland perspective. Attacks on Taiwanese that rely on these representations, such as those targeting the Japanese name of the president’s spokesperson, Kolas Yotaka, thus bear a cultural signature.
1296. Monaco, Smith, and Studdart, Detecting Digital Fingerprints, 66.
1297. Beauchamp-Mustafaga and Chase, Borrowing a Boat Out to Sea, 84-85.
1298. Interview with a Taiwanese minister in Taipei (Apr. 2019).
1299. J. M. Cole, quoted in One Country, One Censor: How China Undermines Media Freedom in Hong Kong and Taiwan, a special report by the Committee to Protect Journalists (Dec. 2019), 26.
X. Other levers
To this non-exhaustive list of levers used in Chinese influence operations, we must also add citizen movements, Chinese tourists, influencers and hostages.