
OpenAI Bans Accounts Appearing to Work on a Surveillance Tool

The logos of the ChatGPT and OpenAI artificial intelligence apps on a mobile phone, arranged in Riga, Latvia, on Wednesday, Jan. 29, 2025. (Andrey Rudakov/Bloomberg)

(Bloomberg) -- OpenAI recently banned several accounts that had been using ChatGPT to write sales pitches and debug code for a suspected social media surveillance tool that likely originated in China, the company said — part of a broader effort by the AI startup to police malicious uses of its powerful AI models.

According to a report the San Francisco startup released on Friday, the accounts were using ChatGPT to advertise and augment what they claimed was an AI assistant capable of collecting real-time data and reports about anti-China protests in the US, UK and other Western countries. That information would then be relayed to Chinese authorities, the report said.

The findings come at a time of growing concern in the US around Chinese use of American technology to advance its own interests. “This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or US-based AI for non-democratic purposes, according to the materials they were generating themselves,” said Ben Nimmo, OpenAI’s principal investigator on the company’s intelligence and investigations team, during a press call Thursday.

By publishing such cases, Nimmo said, OpenAI aims to shed light on how “authoritarian regimes may try to leverage US-built AI, democratic AI, against the US and allied countries, as well as their own people.”

OpenAI said that the accounts in the network referenced using other AI tools to develop their code, including a version of Llama, the open source model developed by Meta Platforms Inc. In a statement, Meta said that if its service was involved, it was likely one of many such tools available to the users, including AI models made in China. OpenAI noted it does not have visibility into whether this code was deployed. 

The software, called “Qianyue Overseas Public Opinion AI Assistant,” couldn’t be independently verified by OpenAI, though the startup had access to the text of apparent marketing materials. The marketing copy detailed how the purpose of the “social listening” software was to send surveillance reports to Chinese authorities, intelligence agents and staff at Chinese embassies. The software appeared to be specifically focused on identifying online conversations in Western countries about demonstrations related to human rights in China. Descriptions of the software said it pulled from social media conversations on platforms such as X, Facebook and Instagram.

It is against OpenAI’s policies to use its AI for communications surveillance or unauthorized monitoring of individuals, including “on behalf of governments and authoritarian regimes that seek to suppress personal freedoms and rights,” according to the company’s threat report.

In recent months, OpenAI has been warning politicians in the US about what it sees as a growing economic and national security threat from Chinese-built AI, particularly in the wake of the surprisingly competitive AI models from Chinese startup DeepSeek. Some China hawks in the US have criticized Meta for open sourcing its AI tools, saying that doing so empowers Chinese AI companies to make advancements. While OpenAI’s models are currently kept proprietary, the company has recently been considering open sourcing models amid growing competition from DeepSeek and others.

In a statement, Meta pointed to the growing availability of AI models globally, saying that the limited availability of some Western technology may not matter much when it comes to bad actors. “China is already investing more than a trillion dollars to surpass the US technologically, and Chinese tech companies are releasing their own open AI models as fast as companies in the US,” a representative for the company said.

In its report, OpenAI also shared several other examples of accounts that it banned for misusing its tools — including accounts linked to Iranian influence operations using ChatGPT to generate social media posts and articles; others appearing to represent a deceptive employment scheme that mimicked scams linked to North Korea; and a further set of accounts, likely linked to China, that were generating Spanish-language articles critical of the US government.

©2025 Bloomberg L.P.