In early August, two professors from Vanderbilt University published an essay analyzing a trove of Chinese documents linked to the private firm GoLaxy. The sources revealed a sophisticated and troubling use of artificial intelligence (AI) not only to generate misleading content for target audiences – such as in Hong Kong and Taiwan – but also to extract information about U.S. lawmakers, creating profiles that might be used for future espionage or influence campaigns. The article received significant coverage, and rightly so.
Yet those findings represent only the tip of the iceberg of an emerging phenomenon. A series of reports, incidents, and takedowns over the summer – from OpenAI, Meta, and Graphika – sheds further light on the latest uses of AI by China-linked actors engaged in foreign propaganda and disinformation. Notably, generative AI tools are now employed not only for content production but also for operational purposes like data collection and drafting internal reports to the party-state apparatus. This evolution marks a new frontier in Beijing’s information warfare tactics, offering insights into what a more AI-dominated future could yield and why urgent attention is needed from social media platforms, software developers, and democratic governments.
A close review of these reports reveals five key dimensions:
1. Using AI for Content Generation
While prior China-linked disinformation campaigns had deployed AI tools to generate false personas or deepfakes, the latest disclosures point to a more concerted effort to leverage such tools for creating entire fake news websites that distribute Beijing-aligned narratives simultaneously in multiple languages. Graphika’s “Falsos Amigos” report, published last month, identified a network of 11 fake websites, established between late December 2024 and March 2025, that used AI-generated pictures as logos or cover images to enhance credibility.
The websites published almost exclusively spin-offs of content from China Global Television Network (CGTN), the international arm of China’s state broadcaster and Chinese Communist Party (CCP) mouthpiece China Central Television (CCTV). Graphika documented that the sites were “systematically publishing AI-generated summaries of CGTN articles in English, French, Spanish, and Vietnamese,” alongside machine translations of full articles. The summaries varied in tone and style to suit diverse audiences, while the websites presented them as original content with CGTN cited only as a reference – an attempt to launder propaganda through a façade of independence.
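This kind of laundering lends itself to simple forensic checks. As a minimal sketch – assuming an analyst has already scraped a suspect article and a candidate CGTN source, and using an arbitrary similarity threshold rather than anything from Graphika’s methodology – derivative text can be flagged with nothing more than Python’s standard library:

```python
# Illustrative sketch (not from the reports): flag an article on a suspect
# site as a likely derivative of a CGTN original using a crude similarity
# ratio. The sample texts and the 0.6 threshold are hypothetical.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two article texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

cgtn_original = (
    "The forum concluded with delegates praising deepening cooperation "
    "between China and Global South partners."
)
suspect_summary = (
    "Delegates closed the forum praising deepening cooperation between "
    "China and Global South partners."
)

score = similarity(suspect_summary, cgtn_original)
if score > 0.6:  # hypothetical threshold; tune on labeled pairs
    print(f"likely derivative content (similarity={score:.2f})")
```

Real investigations rely on richer signals (shared images, publishing cadence, infrastructure), but even coarse text comparison can surface candidate sites for closer review.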
OpenAI’s threat report, published in June, cited similar tactics, noting that now-banned ChatGPT accounts had used prompts (often in Chinese) to generate names and profile pictures for two pages posing as news outlets, as well as for individual persona accounts of U.S. veterans critical of the Trump administration, in a campaign the firm dubbed “Uncle Spam.” These efforts aimed to fuel political polarization in the United States, with AI-crafted logos and profiles amplifying the illusion of authenticity.
Another key strategy involved simulating organic engagement. OpenAI detected China-linked accounts bulk-generating social media posts, with a “main” account posting a comment followed by replies from others to mimic discussion. The “Uncle Spam” operation generated comments from supposed American users both supporting and criticizing U.S. tariffs.
One striking case involved Pakistani activist Mahrang Baloch, who has criticized China’s investments in Pakistan’s restive Balochistan province. Meta documented a TikTok account and Facebook page posting a false video accusing her of appearing in pornography, followed by hundreds of apparently AI-generated comments in English and Urdu to simulate engagement.
In another multi-layered campaign, which OpenAI named “Sneer Review,” Chinese operatives used ChatGPT to generate comments critical of a Taiwanese game in which players work to defeat the CCP. They then disingenuously used the tool to write a “long-form article claiming it had received widespread backlash.”
These examples point to how China-linked information operations are increasingly using generative AI tools to further refine previously deployed tactics like content laundering, covert dissemination of state propaganda, smear campaigns, and development of fake social media personas.
2. Using AI Models for Operational Purposes
Beyond content creation, generative AI tools are beginning to serve as a vehicle to improve operational efficiency in the China-linked disinformation ecosystem. OpenAI disrupted four China-linked operations from March to June 2025 that had used ChatGPT. Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, told National Public Radio (NPR) that the China-linked operations “combined elements of influence operations, social engineering, [and] surveillance.”
This multifaceted approach includes attempts at mass dissemination through coordinated networks. The 11 fake news domains that Graphika identified also had 16 matching social media accounts, active on Facebook, Instagram, Mastodon, Threads, and X. The network displayed similar designs and synchronized posting patterns, matching CGTN English’s own posting schedule almost to the minute. OpenAI also noted cross-platform activity on TikTok, X, Reddit, Facebook, and Bluesky, one of the first such documented examples on that platform, indicating a broadening digital footprint.
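That kind of lockstep timing is itself a detection signal. As a rough sketch – the timestamps and the 120-second threshold below are illustrative assumptions, not values from Graphika’s report – an analyst could flag accounts whose posts consistently land within a minute or two of a reference schedule:

```python
# Illustrative sketch: flag accounts whose posting times track a reference
# schedule (e.g., a state outlet's) within a tight window. All values here
# are hypothetical, chosen only to demonstrate the heuristic.
from datetime import datetime
from statistics import median

def min_offsets(reference: list[datetime], suspect: list[datetime]) -> list[float]:
    """For each suspect post, seconds to the nearest reference post."""
    return [min(abs((s - r).total_seconds()) for r in reference) for s in suspect]

def is_synchronized(reference, suspect, threshold_s: float = 120.0) -> bool:
    """Flag the account if its typical offset from the schedule is tiny."""
    return median(min_offsets(reference, suspect)) <= threshold_s

ref = [datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 13, 0)]
sus = [datetime(2025, 3, 1, 9, 1), datetime(2025, 3, 1, 13, 0, 30)]
print(is_synchronized(ref, sus))  # True: posts land within about a minute
```

Legitimate aggregators can also mirror a source’s schedule, so timing evidence is strongest when combined with shared infrastructure and design, as Graphika’s attribution work illustrates.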
Some of this activity was likely facilitated by China-linked accounts querying AI tools: according to OpenAI, they asked for advice on optimizing posting schedules and content distribution to maximize user engagement.
OpenAI also reported that the China-linked users sought assistance with data collection. They requested code to extract personal data – profiles and follower lists – from X and Bluesky, possibly aiming to analyze audience characteristics for targeted influence. The Vanderbilt essay revealed GoLaxy’s compilation of profiles for 117 U.S. members of Congress and over 2,000 political figures, opening up the potential to tailor propaganda and disinformation to specific individuals.
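For context, little sophistication is needed for the Bluesky half of that request: follower lists there are exposed through the AT Protocol’s public, unauthenticated API, and researchers mapping such networks pull the same data. A minimal sketch, with a hypothetical placeholder handle:

```python
# Minimal sketch of reading a follower list from Bluesky's public,
# unauthenticated AT Protocol endpoint -- the kind of data collection the
# report describes. The handle below is a hypothetical placeholder.
import requests

ENDPOINT = "https://public.api.bsky.app/xrpc/app.bsky.graph.getFollowers"

def get_followers(actor: str, limit: int = 100) -> list[str]:
    """Page through app.bsky.graph.getFollowers and collect handles."""
    handles, cursor = [], None
    while True:
        params = {"actor": actor, "limit": limit}
        if cursor:
            params["cursor"] = cursor
        data = requests.get(ENDPOINT, params=params, timeout=30).json()
        handles += [f["handle"] for f in data.get("followers", [])]
        cursor = data.get("cursor")
        if not cursor:
            return handles

followers = get_followers("example.bsky.social")  # hypothetical account
print(len(followers))
```

The ease of this collection cuts both ways: the same openness lets platform researchers map suspected networks and their audiences.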
AI tools were also found to support internal strategizing and reporting. OpenAI documented ChatGPT being used to draft internal documents, including an essay on embodying Xi Jinping’s teachings for public security agencies and a performance review describing in detail the steps taken to run the operation – steps that matched the behavior OpenAI actually observed. Such blatant self-documentation can help firms like OpenAI disrupt these campaigns; left undetected, however, using AI in this capacity could allow CCP-linked actors to refine tactics in real time, adapt to platform countermeasures, and maximize impact.
3. Wide Range of Targeted Topics and Audiences – Many Unrelated to China
This particular sample of campaigns targeted diverse issues and regions, often unrelated to Beijing’s direct interests or communities systematically persecuted by the CCP. Graphika noted content on U.S. food insecurity, tariffs, and a youth conference promoting “win-win” ties with China, as well as geopolitical topics like Iran-Israel tensions and Ukraine. OpenAI highlighted debates over the closure of USAID.
Notably, many of the detected campaigns sought to meddle in the internal affairs and political debates of foreign countries, especially democracies. OpenAI’s “Uncle Spam” operation sought to amplify U.S. polarization. Meta’s May 2025 report detailed the removal of a network of 157 Facebook and 17 Instagram accounts that targeted Myanmar, Taiwan, and Japan. Posing as local citizens, the accounts (some using profile images likely generated by AI) posted about current events in these countries while criticizing civil resistance to Myanmar’s junta and political leaders in Japan and Taiwan.
Graphika also identified pro-China, anti-West content tailored for young audiences in the Global South. Content on the fake news sites explicitly targeted youth in Africa, the Americas, and Asia, using language pitched at “tech-savvy and socially conscious young professionals across South and Southeast Asia,” alongside hashtags like “independent media” that mask state backing. Graphika’s analysis of the domains’ source code revealed prompts targeting “children aged 8-16,” reflecting an intent to shape long-term perceptions in regions with growing geopolitical significance.
In one example, ActuMeridien, a fake French-language news site from the network Graphika exposed, said at its launch that it would focus on francophone youth who are “connected, curious, and engaged.” It announced in French that the website and its social media accounts would be “highlighting the voices, ideas, and realities of the Global South.” The post garnered 1,400 likes – possibly through artificial amplification.
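Leftover prompts in page source, like those Graphika found, are a cheap forensic tell. As an illustrative sketch – the URL and regex patterns below are hypothetical assumptions, not Graphika’s actual method – a researcher could grep fetched HTML for instruction-style fragments such as the audience-targeting strings described above:

```python
# Illustrative sketch: scan a page's HTML source for leftover LLM prompt
# fragments, the kind of residue Graphika reported (e.g., audience-targeting
# instructions). The URL and patterns are hypothetical examples.
import re
import requests

PROMPT_PATTERNS = [
    r"you are an? [a-z ]*(journalist|editor|assistant)",
    r"target(?:ing)? (?:audience|readers?)[:\s]",
    r"aged \d{1,2}\s*-\s*\d{1,2}",  # e.g., "children aged 8-16"
    r"write (?:in|a) (?:friendly|engaging) tone",
]

def find_prompt_residue(url: str) -> list[str]:
    """Return any prompt-like patterns found in the page source."""
    html = requests.get(url, timeout=30).text.lower()
    return [p for p in PROMPT_PATTERNS if re.search(p, html)]

hits = find_prompt_residue("https://example.com")  # placeholder domain
print(hits or "no prompt residue found")
```

Such residue is fragile evidence – operators can scrub it once noticed – but it offers a rare direct glimpse of the instructions behind the content.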
If successful, such use of generative AI tools to tailor content to local languages and cultural contexts, alongside a focus on youth, could leverage social media’s popularity to deceptively build trust in pro-Beijing sources and influence future leaders in developing regions.
4. Multi-Faceted Hints of Ties to the Chinese Party-State
Although each of the investigative reports is careful to avoid absolute attribution, taken together, the available evidence suggests involvement from the regime’s propaganda and security apparatus.
Graphika found that all 11 fake news domains were registered via Alibaba Cloud in China, with 10 registrants located in Beijing, including one linked to Global International Video Communications, a state-owned enterprise tied to CGTN. Four of the five associated Facebook pages had managers in China, and one ad beneficiary’s name matched a CGTN digital employee’s LinkedIn profile. OpenAI noted that a now-banned ChatGPT user linked to the detected operations claimed affiliation with the CCP propaganda department, though this could not be verified. Lastly, the Vanderbilt essay ties GoLaxy to the state-controlled Chinese Academy of Sciences, with documented collaboration with intelligence, military, and CCP entities.
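Some of this registration evidence is reproducible by anyone: domain registration facts of the kind Graphika cites are queryable over the public RDAP protocol. A minimal sketch, using a placeholder domain rather than any of the 11 Graphika named:

```python
# Illustrative sketch: pull a domain's registration facts via the public
# RDAP protocol (rdap.org redirects to the authoritative registry server).
# The domain below is a placeholder, not one from Graphika's report.
import requests

def entity_name(entity: dict) -> str | None:
    """Pull the display name ("fn") out of an RDAP vCard, if present."""
    for prop in entity.get("vcardArray", [None, []])[1]:
        if prop[0] == "fn":
            return prop[3]
    return None

def rdap_lookup(domain: str) -> dict:
    """Return the registrar name and lifecycle events for a domain."""
    data = requests.get(f"https://rdap.org/domain/{domain}", timeout=30).json()
    registrar = next((entity_name(e) for e in data.get("entities", [])
                      if "registrar" in e.get("roles", [])), None)
    events = {e["eventAction"]: e.get("eventDate") for e in data.get("events", [])}
    return {"registrar": registrar, "events": events}

print(rdap_lookup("example.com"))  # placeholder domain
```

Registrant details are often redacted in RDAP responses, which is partly why investigators triangulate with ad transparency data and employee profiles, as Graphika did.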
5. Resilience and Vulnerabilities on Display
At the moment, the details of these operations and their disruption reveal both resilience and vulnerabilities. Graphika reported limited organic traction, with Facebook pages gaining 3,000-6,000 followers but minimal likes or shares, and almost no followers on Instagram, Mastodon, or Threads. OpenAI’s teams also caught operations early, rating their actual impact at a 2 or 3 on a 1-6 scale (1 being the lowest, 6 the highest).
Yet outliers emerged: OpenAI noted TikTok videos with 25,000 likes and an X account with 10,000 views. Notably, Bluesky’s VeteransforJustice account – whose logo was generated by a ChatGPT user linked to the PRC network – reached over 11,000 followers, a relatively high total for a newer and smaller platform.
Moreover, the resources and willingness to invest in detecting and disrupting such influence operations remain uneven across platforms and tools. While the investigations from Meta and OpenAI demonstrate the potential importance of investing in these defenses, it remains unclear whether China-linked operatives are making similar use of tools like X’s Grok or Proton’s privacy-focused Lumo LLM. At the same time, apps owned by China-based companies like DeepSeek or TikTok have even fewer incentives to disrupt CCP-linked campaigns and face a higher risk of reprisals if they do.
Closing Thoughts
As use of generative AI tools expands globally – for benign, productive, and malicious purposes alike – a close read of these studies makes clear that the Chinese party-state apparatus remains committed to foreign information influence campaigns and to using whatever tools are available to maximize their reach, improve their efficacy, smear CCP critics, and disrupt democratic debate.
From that perspective, any private initiatives or government regulatory action that could level the playing field and create a baseline for transparency and cross-platform collaboration would be a welcome next step. It is one of the many actions needed to enhance resilience to the inevitable future manipulation campaigns that will emanate from Beijing.