The era of ChatGPT-powered propaganda is upon us

OpenAI says it’s already shut down influence campaigns that were using its products.

According to the company, China, Iran, Russia, and others are using OpenAI tools for covert influence operations.

In a blog post on Thursday, OpenAI said it’s been quick to react, disrupting five operations in the last three months that had tried to manipulate public opinion and sway political outcomes through deception.

The operations OpenAI shut down harnessed AI to generate comments and articles in different languages, create fake names and bios for social media accounts, debug code, and more.

OpenAI said it thwarted two operations in Russia, one in China, one in Iran, and one by a commercial company in Israel.

The campaigns involved “Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI wrote in its blog.

The company said the campaigns didn’t rely only on AI tools but also used human operators.

Some actors used AI to improve the quality of their campaigns, such as producing text with fewer errors, while others used it to increase their output, such as generating larger volumes of fake comments on social media posts.

For example, OpenAI said the operation in Israel used AI to produce short texts about the war in Gaza, post them on social media, and then manufacture AI-generated replies and comments on those posts from fake accounts.

However, OpenAI noted that none of the campaigns achieved meaningful engagement from real people, and their use of AI did not help them expand their audience or reach.

OpenAI said its own AI helped track down the bad actors. In its blog post, the company said it partnered with businesses and government organizations on the AI-assisted investigations.

OpenAI said the investigations “took days, rather than weeks or months, thanks to our tooling.”

The company said its AI products also have built-in safety defenses that helped limit the extent to which bad actors could misuse them. In multiple cases, OpenAI's tools refused to produce the images and text the actors requested, the company explained.

OpenAI continues to tout its commitment to safety and transparency, but not everyone is convinced. Some, including OpenAI CEO Sam Altman himself, have argued that highly advanced AI could pose an existential threat to humanity.

Stuart Russell, a leading AI researcher and technology pioneer, previously told Business Insider that he thinks Altman is building out the technology before figuring out how to make it safe and called that “completely unacceptable.”

“This is why most of the safety people at OpenAI have left,” Russell said.
