
AI Deepfakes Spark Election Fears in Australia

By Anosha Shariq · April 9, 2025 (Updated)

Key Takeaways

• 77% of Australians report encountering more political deepfakes in recent months

• Only 12% feel confident in detecting AI-generated content online

• 68% have reconsidered a political stance based on online information

• 78% support labeling requirements for AI-generated political content

• Over 80% believe the government is not doing enough to combat deepfake risks


With national elections approaching, a new survey by Adobe reveals that Australians are increasingly worried about the role of AI-generated deepfakes in shaping public opinion and distorting political discourse.

The “Authenticity in the Age of AI 2025” report presents a snapshot of a population navigating a rapidly evolving digital information environment—one where the line between real and fake has become dangerously blurred.

The survey polled 1,010 Australian adults of voting age and uncovered a striking trend: 77% of respondents said they had seen an increase in political deepfakes over the last three months.

Simultaneously, just 12% expressed confidence in their ability to accurately identify such content.


Digital Misinformation Impacting Voter Behavior

The emotional and cognitive toll of misinformation is evident in how voters interact with digital platforms.

A significant 68% of Australians admitted to reconsidering their stance on a political candidate or issue based on online content, highlighting the direct impact manipulated media can have on democratic outcomes.

Additionally, 83% of respondents said they struggle to determine whether digital content is trustworthy. As a result, behavioral changes are emerging:

• 45% of people are ignoring content they suspect to be deepfakes
• Others continue to unknowingly share manipulated media
• Nearly half have scaled back their social media usage due to trust issues, especially on platforms like Facebook (29%) and X (15%)


The study reveals broad public support for stronger oversight. A resounding 78% of participants believe AI-generated political content should be clearly labeled, while over 80% view current governmental efforts as insufficient.

These sentiments reflect a growing demand for both regulatory action and technological safeguards. Commenting on this dynamic, Jennifer Mulveny, Director of Government Relations and Public Policy for Adobe Asia Pacific, stated:

“In the era of generative AI, voters may be acutely aware of the spread of harmful deepfakes. It has the power to influence voter views and more citizens need to be equipped with the digital media literacy and skills to stop, check and verify content.”

Mulveny emphasized the importance of tools that promote content transparency, noting:

“Tools like labelling, tagging, and embedded Content Credentials can empower Australians to more easily track the origins and integrity of the content they encounter. Widespread adoption of these tools is essential to provide the public with verifiable information about what they view online.”


Solutions: Transparency, Context, and Content Credentials

Beyond regulation, Australians are seeking self-directed methods for assessing digital authenticity. The report found that:

• 72% of respondents want additional context—such as location, time of creation, and editing history—to verify content integrity
• 70% believe these details would improve their trust in election-related media

Adobe’s recommendation includes the use of Content Credentials, a form of metadata that tracks a piece of content’s origin and any alterations it has undergone. This can help viewers verify authenticity without needing to rely solely on their own visual judgment.
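To make the idea concrete, here is a minimal sketch of how provenance metadata in the spirit of Content Credentials could be checked against an asset. This is an illustration only: the field names, manifest structure, and helper function are assumptions for the example, not the actual Content Credentials (C2PA) specification or Adobe’s tooling.

```python
# Illustrative sketch only: a simplified, hypothetical stand-in for
# Content Credentials-style provenance metadata. Field names and the
# manifest layout are assumptions, not the real specification.
import hashlib
import json

def verify_asset(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset's hash matches the hash recorded in its manifest."""
    recorded = manifest.get("asset_sha256")
    actual = hashlib.sha256(asset_bytes).hexdigest()
    return recorded == actual

# Hypothetical manifest describing an image's origin and edit history
asset = b"<image bytes>"  # placeholder for the actual file contents
manifest = {
    "title": "campaign_rally.jpg",
    "asset_sha256": hashlib.sha256(asset).hexdigest(),
    "claim_generator": "ExampleCameraApp/2.1",   # tool that produced the asset
    "created": "2025-04-01T09:30:00Z",
    "actions": [                                  # edit history attached to the asset
        {"action": "created", "when": "2025-04-01T09:30:00Z"},
        {"action": "cropped", "when": "2025-04-02T14:05:00Z"},
    ],
}

if __name__ == "__main__":
    print("Integrity check passed:", verify_asset(asset, manifest))
    print("Edit history:", json.dumps(manifest["actions"], indent=2))
```

The point of the sketch is the viewer-facing payoff described in the report: rather than judging an image by eye, a reader (or their platform) can consult the attached record of where the content came from and what has been done to it since.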


The Broader Implications

The findings align with a growing global concern: how democracies can defend themselves against digitally sophisticated disinformation campaigns.

With elections increasingly shaped by viral content, Australia’s situation offers a clear warning of challenges other nations may soon face, if they are not facing them already.

While technologies like generative AI promise innovation, they also introduce new vectors for manipulation and erosion of public trust. As the report shows, the Australian electorate is not only aware of these risks—they’re demanding swift action.


