OpenAI stopped 5 campaigns that used its tech for ‘deceptive influence’

Artificial intelligence firm OpenAI revealed it had identified and disrupted several online campaigns that leveraged its technology to manipulate public opinion around the globe.

On May 30, the Sam Altman-founded AI firm stated that it had “terminated accounts linked to covert influence operations.”

“In the last three months, we have disrupted five covert IO [influence operations] that sought to use our models in support of deceptive activity across the internet.”

The bad actors used AI to generate comments for articles, create names and bios for social media accounts, and translate and proofread texts.

The firm behind ChatGPT stated that an operation called “Spamouflage” used its models to research social media and generate multilingual content on platforms like X, Medium, and Blogspot in an attempt to “manipulate public opinion or influence political outcomes.”

It also used AI to debug code and manage databases and websites.

Fake AI-generated post by one of the Spamouflage accounts. Source: OpenAI

Additionally, an operation called “Bad Grammar” targeted Ukraine, Moldova, the Baltic States, and the United States, using OpenAI models to run Telegram bots and generate political comments.

Another group, called “Doppelganger,” used AI models to generate comments in English, French, German, Italian, and Polish that were posted on X and 9GAG, also with the aim of manipulating public opinion, it added.

A comment about Ukraine and a video about Gaza posted to 9GAG’s “Motor Vehicles” channel. Source: OpenAI

OpenAI also mentioned a group called “International Union of Virtual Media” that used its technology to generate long-form articles, headlines, and website content that was published on a website linked to the operation.

OpenAI said it also disrupted a commercial company called STOIC, which used AI to generate articles and comments posted on Instagram, Facebook, X, and websites associated with the operation.

OpenAI explained that the content posted by these various operations focused on a wide range of issues:

“Including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments.”

Related: Sam Altman’s OpenAI reportedly in partnership talks with his other firm, Worldcoin

“Our case studies provide examples from some of the most widely reported and longest-running influence campaigns that are currently active,” Ben Nimmo, a principal investigator for OpenAI who wrote the report, told The New York Times.

The outlet also reported that this was the first time a major AI firm had revealed how its specific tools were used for online deception.

“So far, these operations do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services,” concluded OpenAI.

Magazine: ‘Sic AIs on each other’ to prevent AI apocalypse: David Brin, sci-fi author