Tech Manipulation: OpenAI Discloses Russian and Chinese Use of AI for Propaganda

  • Editor
  • May 31, 2024

In a recent report, OpenAI said it had identified and disrupted multiple covert influence campaigns that leveraged its AI models. These operations, originating from Russia, China, Iran, and Israel, sought to manipulate public opinion and sway political outcomes around the world.

The operations used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines, and debug computer programs to support political campaigns or influence public opinion in geopolitical conflicts.

Ben Nimmo, a principal investigator at OpenAI, explained that the campaigns often used the company’s technology to post political content, though it was difficult to determine whether they were targeting specific elections or simply aiming to rile people up.

“Our case studies provide examples from some of the most widely reported and longest-running influence campaigns that are currently active,” he said.

The campaigns failed to gain much traction, and the AI tools did not appear to have expanded their reach or impact.

“These influence operations still struggle to build an audience,” Mr. Nimmo said.

Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, warned that the online disinformation landscape could shift as generative AI grows more powerful. OpenAI, which makes the ChatGPT chatbot, has already begun training a new flagship AI model with enhanced capabilities.

OpenAI’s tools were used in long-running influence campaigns that researchers have tracked for years, including Russia’s “Doppelganger” and China’s “Spamouflage.” The Doppelganger campaign generated anti-Ukraine comments on X and converted anti-Ukraine news articles into Facebook posts.

The Spamouflage network used OpenAI technology to debug code, analyze social media, and generate posts disparaging critics of the Chinese government.

An Iranian group, the International Union of Virtual Media, used OpenAI tools to produce and translate long-form articles and headlines spreading pro-Iranian, anti-Israeli, and anti-US sentiment. An Israeli firm, identified as STOIC, used OpenAI to create fictional personas and anti-Islamic messages on social media.


The operations had limited success, with poor engagement and frequent errors that revealed the AI-generated nature of the content. The Russian “Bad Grammar” campaign, for example, posted content with telltale AI-generated phrases and broken English.

Despite the advanced capabilities of AI, human operators behind these campaigns frequently made errors, reducing their effectiveness.

OpenAI’s report, the first of its kind from a major AI company, comes amid growing concerns about AI’s impact on upcoming global elections.

The report highlights that while current AI tools have not yet created the flood of convincing disinformation many experts feared, the potential for future sophisticated campaigns remains a significant concern.


Jack Stubbs, the chief intelligence officer of Graphika, which tracks the manipulation of social media services and reviewed OpenAI’s findings, noted that while the worst predictions have not come to pass, the evolving threat requires continuous monitoring and improved detection technologies.

“It suggests that some of our biggest fears about A.I.-enabled influence operations and A.I.-enabled disinformation have not yet materialized,” Mr. Stubbs said.

The report also detailed how AI-generated material was just one of many content types these networks posted, alongside traditional formats such as manually written texts and memes.

The report aims to show how AI technology is reshaping online deception, underscoring the need for vigilance and innovation to counter these threats.


OpenAI stressed that AI also gives defenders new tools to spot and disrupt coordinated attacks, and that understanding threat actors’ evolving toolkits, along with the human limitations that shape their operations and decision-making, remains essential.



Dave Andre

Editor

Digital marketing enthusiast by day, nature wanderer by dusk. Dave Andre blends two decades of AI and SaaS expertise into impactful strategies for SMEs. His weekends? Lost in books on tech trends and rejuvenating on scenic trails.
