OpenAI Uncovers Evidence of A.I.-Powered Chinese Surveillance Tool
The company said a Chinese operation had built the tool to identify anti-Chinese posts on social media services in Western countries.

OpenAI said on Friday that it had uncovered evidence that a Chinese security operation had built an artificial intelligence-powered surveillance tool to gather real-time reports about anti-Chinese posts on social media services in Western countries.
The company’s researchers said they had identified this new campaign, which they called Peer Review, because someone working on the tool used OpenAI’s technologies to debug some of the computer code that underpins it.
Ben Nimmo, a principal investigator for OpenAI, said this was the first time the company had uncovered an A.I.-powered surveillance tool of this kind.
“Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our A.I. models,” Mr. Nimmo said.
There have been growing concerns that A.I. can be used for surveillance, computer hacking, disinformation campaigns and other malicious purposes. Though researchers like Mr. Nimmo say the technology can certainly enable these kinds of activities, they add that A.I. can also help identify and stop such behavior.
Mr. Nimmo and his team believe the Chinese surveillance tool is based on Llama, an A.I. technology that Meta open sourced, meaning it shared the underlying code with software developers across the globe.
In a detailed report on the use of A.I. for malicious and deceptive purposes, OpenAI also said it had uncovered a separate Chinese campaign, called Sponsored Discontent, that used OpenAI’s technologies to generate English-language posts that criticized Chinese dissidents.
The same group, OpenAI said, has used the company’s technologies to translate articles into Spanish before distributing them in Latin America. The articles criticized U.S. society and politics.
Separately, OpenAI researchers identified a campaign, believed to be based in Cambodia, that used the company’s technologies to generate and translate social media comments that helped drive a scam known as “pig butchering,” the report said. The A.I.-generated comments were used to woo men on the internet and entangle them in an investment scheme.
(The New York Times has sued OpenAI and Microsoft for copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.