
OpenAI Reveals Iranian Group Used ChatGPT to Attempt U.S. Election Influence

OpenAI disclosed on Friday that an Iranian group had exploited its ChatGPT chatbot to create and disseminate content across websites and social media platforms. The aim appeared to be exacerbating polarization among American voters in the ongoing presidential election.

According to the report, the Iranian-linked content spanned various topics, including the Gaza conflict, the Olympic Games, and the U.S. presidential election. The material generated with ChatGPT was designed to spread misinformation and criticize both presidential candidates. Some of this content appeared on sites recently identified by Microsoft as platforms used by Iran to propagate fake news intended to deepen political divisions in the United States.

In response, OpenAI has banned the ChatGPT accounts involved in this operation. The company noted that the posts generated by these accounts did not gain significant traction among social media users. OpenAI identified "a dozen" accounts on X and one on Instagram connected to the Iranian activities, which were reportedly removed after the company alerted the respective social media platforms.

Ben Nimmo, a principal investigator on OpenAI's intelligence and investigations team, described this as the first instance of the company identifying an operation focused specifically on the U.S. election. He emphasized the need for vigilance but also urged calm, noting that while the operation's impact was minimal, it served as a reminder of ongoing threats.

The OpenAI report complements recent findings from Microsoft and Google, which have documented similar technology-driven efforts by Iranian actors to influence U.S. elections. One flagged website, Teorator, presented itself as a source for exposing hidden truths and featured critical articles about Democratic vice-presidential candidate Tim Walz. Another site, Even Politics, published critical content about Republican candidate Donald Trump and conservative figures such as Elon Musk.

Earlier this year, OpenAI had already reported instances of its AI being used by state actors, including those from Iran, Russia, China, and Israel, to produce multilingual propaganda. According to Nimmo, none of those influence campaigns achieved significant visibility or impact.

As elections take place worldwide, concerns about AI's role in generating large volumes of seemingly authentic propaganda have been raised by democracy advocates, politicians, and AI researchers. However, there has been no widespread evidence to suggest that foreign governments have succeeded in swaying U.S. voters' preferences in a targeted manner.
