
OpenAI Shuts Down Iranian Influence Operation Using ChatGPT to Spread False News


On Friday, OpenAI announced that it had taken down an Iranian influence operation that was using ChatGPT, one of its AI tools, to generate false social media posts and news articles. The pieces were part of a larger campaign to mislead readers in the United States. Fortunately, the operation did not gain significant traction, according to OpenAI, which said its purpose was to sow discord on sensitive topics.

The operation, dubbed "Storm-2035," was found to be part of a broader set of Iranian influence activities that Microsoft had identified and linked to the Iranian government only last week. OpenAI identified accounts tied to the operation that were producing material for five websites posing as legitimate news outlets. These sites addressed divisive topics such as the ongoing Gaza conflict, LGBTQ+ rights, and the U.S. presidential election.

OpenAI also discovered "a dozen accounts on X (formerly Twitter) and one on Instagram" connected to the scheme that were further spreading the fake content. The company reassured the public, though, that the posts did not appear to make much of an impact. "The majority of social media posts that we identified received few or no likes, shares, or comments," OpenAI said in a statement.

According to OpenAI's threat assessment, the operation rated only a Category 2 on the Breakout Scale, a framework published by the Brookings Institution for measuring the impact of influence operations. A Category 2 rating means that although the operation was active on several platforms, there was no evidence of significant public engagement.


The phony information produced by the Iranian operation was disseminated through websites posing as both conservative and progressive news outlets, targeting both ends of the political spectrum. One bogus article claimed that former President Donald Trump was the target of social media censorship and might even proclaim himself "king of the U.S." Another falsely depicted Kamala Harris' choice of Tim Walz as her running mate as a calculated attempt at partisan unity.

The operation's false narratives extended beyond U.S. politics to international issues, including Venezuelan politics, Scotland's independence, and Israel's participation in the Olympics. The group mixed these political subjects with lighter fare such as beauty and fashion to give its work a more authentic feel.

Ben Nimmo, an investigator on OpenAI's Intelligence and Investigations team, told Bloomberg that although the campaign was sophisticated, it failed to engage real people. Despite trying to appeal to both sides of the political divide, he said, "the operation didn't look like it got engagement from either."

This unsuccessful Iranian influence effort follows revelations earlier this week that Iranian hackers were attempting to compromise both Kamala Harris' and Donald Trump's campaigns. Although phishing emails tricked Roger Stone, the FBI has found no evidence that anyone on Harris' campaign was similarly compromised.
