Celebrity Deepfake Ad Goes Viral, Warns of AI Voting Manipulation!

  • Dave Andre, Editor
  • November 5, 2024

Key Takeaways:

  • A viral PSA video highlights the danger of AI and deepfakes potentially being used to mislead voters.
  • Hollywood stars appear in the video, with AI-generated likenesses warning of possible election manipulation.
  • Academics and disinformation experts emphasize how AI tools can amplify doubts about truth and sow confusion.
  • Voters are advised to remain cautious, fact-check sources, and rely on verified information before making election decisions.

As Election Day nears, a widely shared public service announcement on YouTube, now with over 6 million views, is cautioning American voters to scrutinize the content they encounter online, particularly as artificial intelligence (AI) becomes a tool of disinformation.

The video, created by the non-partisan anti-corruption group RepresentUs, uses deepfake technology to simulate the appearance of popular celebrities like Rosario Dawson, Amy Schumer, Chris Rock, and Michael Douglas.

The goal: to warn viewers that AI-driven technology could be used to sway voter participation by spreading false or misleading information.

“This election, bad actors are going to use AI to trick you into not voting,” the video states. “Do not fall for it. This threat is very real.”

How the Deepfake PSA Was Created

The “Don’t Let AI Steal Your Vote” campaign leverages deepfakes of familiar faces to drive home its message.

While some celebrities appear in person, others—such as Michael Douglas and Chris Rock—are digitally rendered through AI, emphasizing the deceptive potential of such technology.

RepresentUs CEO Joshua Graham Lynn shared that the campaign had enthusiastic participation from celebrities who wanted to help safeguard democracy.

“The artists who are involved in this were super enthusiastic about doing it,” Lynn noted. “Everybody that you see there either gave us their likeness or performed in person voluntarily. They all were super excited to do it to help get out the vote because they know this is a really important election.”

The Rising Threat of AI in Election Disinformation

The Department of Homeland Security has previously alerted state election officials to the potential of AI tools being used to disrupt election integrity.

According to reports, AI could be deployed in various harmful ways:

  • Fabricating Election Records: False records could be spread to confuse or mislead voters about voting processes.
  • Impersonating Election Officials: AI-generated voices or avatars could mimic officials to access sensitive information or misinform the public.
  • Flooding Voter Support Lines: AI tools could create fake calls, overwhelming voter support centers and hindering real-time assistance.
  • Spreading False Narratives: Misinformation amplified by AI could affect public opinion and lead voters to question legitimate news sources.

Lynn stressed the importance of critical thinking when consuming media, especially when a message discourages voting or seems unusual.

“Be skeptical if you see something telling you not to participate. If you see something about a candidate that you support, question it. Double-check it,” Lynn advised.

Experts Warn of the “Liar’s Dividend” and Deepfake Misuse

With the sophistication of deepfake technology growing, disinformation experts are increasingly concerned about its impact on public trust.

Kaylyn Jackson Schiff, a professor at Purdue University and co-director of the Governance and Responsible AI Lab, explains that awareness of deepfakes can lead people to dismiss real events or information as fake, a phenomenon known as the “liar’s dividend.”

The liar’s dividend, Schiff explained, refers to “being able to credibly claim that real images or videos are fake due to widespread awareness of deepfakes and manipulated media,” a dynamic that could sow further confusion among voters.

Schiff and Ph.D. candidate Christina Walker have been tracking political deepfakes since mid-2023.

Their Political Deepfakes Incidents Database has logged over 500 cases, many of which involve deepfakes created for satirical or entertainment purposes rather than direct manipulation.

However, even humorous or satirical deepfakes can be problematic, especially if taken out of context.

“It’s still a concern that some of these deepfakes that are initially propagated for fun could deceive individuals who don’t know the original context if that post is then shared again later,” Schiff added.

Identifying Deepfakes: Tips for Voters

Though AI-generated videos are becoming more realistic, they often still contain subtle irregularities that can indicate manipulation.

RepresentUs’s PSA features realistic but not flawless renditions of certain celebrities, with signs of visual editing detectable upon close examination.

Christina Walker of Purdue suggests that voters look for clues in deepfake videos, such as:

  • Unusual Finger or Facial Details: Extra fingers, missing parts, or blurred faces are common in manipulated videos.
  • Misaligned Text or Shadows: Oddities in shadows or text that don’t line up can also suggest AI tampering.

As AI advances, spotting these details may become more difficult, but Walker encourages viewers to verify unusual claims through credible channels, especially when election-related content stirs strong emotional reactions.

Practical Voting Tips Amid Deepfake Concerns

Given the potential for AI-driven misinformation to disrupt elections, experts advise voters to be vigilant:

  1. Start with Verified Sources: Official websites, such as state election boards or vote.gov, are reliable starting points for fact-checking.
  2. Cross-Check Claims: Verify any suspicious information by consulting multiple reputable sources.
  3. Examine Content Motivations: Consider why certain content is being shared and by whom, as understanding intent can clarify potential biases or agendas.

“If there’s anything telling you as a voter, ‘Don’t go to the polls. Things have been changed. There’s a disturbance. Things have been delayed. You can come back tomorrow,’ double-check your sources. That’s the most important thing right now,” Schiff advised.

With Election Day approaching and digital tools evolving rapidly, vigilance remains essential.

RepresentUs’s PSA serves as a timely reminder of the stakes and calls on voters to stay informed, skeptical, and proactive when consuming information.

By recognizing the power of AI to influence narratives and remaining attentive to the sources of their information, voters can guard against the risks of manipulation in a new era of digital technology.


