Key Takeaways
• 77% of Australians report encountering more political deepfakes in recent months
• Only 12% feel confident in detecting AI-generated content online
• 68% have reconsidered a political stance based on online information
• 78% support labeling requirements for AI-generated political content
• Over 80% believe the government is not doing enough to combat deepfake risks
With national elections approaching, a new survey by Adobe reveals that Australians are increasingly worried about the role of AI-generated deepfakes in shaping public opinion and distorting political discourse.
The “Authenticity in the Age of AI 2025” report presents a snapshot of a population navigating a rapidly evolving digital information environment—one where the line between real and fake has become dangerously blurred.
The survey polled 1,010 Australian adults of voting age and uncovered a striking trend: 77% of respondents said they had seen an increase in political deepfakes over the last three months.
Simultaneously, just 12% expressed confidence in their ability to accurately identify such content.
Digital Misinformation Impacting Voter Behavior
The emotional and cognitive toll of misinformation is evident in how voters interact with digital platforms.
A significant 68% of Australians admitted to reconsidering their stance on a political candidate or issue based on online content, highlighting the direct impact manipulated media can have on democratic outcomes.
Additionally, 83% of respondents said they struggle to determine whether digital content is trustworthy. As a result, behavioral changes are emerging:
• 45% of people are ignoring content they suspect to be deepfakes
• Others continue to unknowingly share manipulated media
• Nearly half have scaled back their social media usage due to trust issues, especially on platforms like Facebook (29%) and X (15%)
The study reveals broad public support for stronger oversight. A resounding 78% of participants believe AI-generated political content should be clearly labeled, while over 80% view current governmental efforts as insufficient.
These sentiments reflect a growing demand for both regulatory action and technological safeguards. Commenting on this dynamic, Jennifer Mulveny, Director of Government Relations and Public Policy for Adobe Asia Pacific, emphasized the importance of tools that promote content transparency.
Solutions: Transparency, Context, and Content Credentials
Beyond regulation, Australians are seeking self-directed methods for assessing digital authenticity. The report found that:
• 72% of respondents want additional context—such as location, time of creation, and editing history—to verify content integrity
• 70% believe these details would improve their trust in election-related media
Adobe’s recommendation includes the use of Content Credentials, a form of metadata that tracks a piece of content’s origin and any alterations it has undergone. This can help viewers verify authenticity without needing to rely solely on their own visual judgment.
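Content Credentials are built on the open C2PA provenance standard, in which a signed manifest embedded in a file records the content's origin and each subsequent edit. As a rough illustration only (this is not the real C2PA format, and every name here is hypothetical), the Python sketch below shows the core idea: each history entry references a hash of the previous one, so any undisclosed alteration breaks the chain and becomes detectable.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceEntry:
    """One step in a piece of content's history: creation or a later edit."""
    actor: str       # who made the change, e.g. a camera or editing app
    action: str      # e.g. "created", "cropped", "ai_generated"
    timestamp: str   # when the change happened (ISO 8601)
    prev_hash: str   # hash of the previous entry; "" for the first entry

def entry_hash(entry: ProvenanceEntry) -> str:
    """Hash an entry so any later tampering changes the digest."""
    payload = json.dumps(asdict(entry), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify_chain(entries: list[ProvenanceEntry]) -> bool:
    """Check that each entry correctly references the one before it."""
    expected = ""
    for entry in entries:
        if entry.prev_hash != expected:
            return False  # history was altered or an entry was removed
        expected = entry_hash(entry)
    return True

# Example: a photo that was created, then edited once.
created = ProvenanceEntry("CameraApp 2.1", "created",
                          "2025-03-01T09:00:00Z", "")
edited = ProvenanceEntry("PhotoEditor", "cropped",
                         "2025-03-02T14:30:00Z", entry_hash(created))
print(verify_chain([created, edited]))  # True: history is intact
```

In the actual standard, each manifest entry is also cryptographically signed by the tool that made the change, so viewers can verify who attests to each step, not merely that the chain is unbroken.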
The Broader Implications
The findings align with a growing global concern: how democracies can defend themselves against digitally sophisticated disinformation campaigns.
With elections increasingly shaped by viral content, Australia's experience offers a clear warning to other nations, many of which may soon face similar challenges if they are not facing them already.
While technologies like generative AI promise innovation, they also introduce new vectors for manipulation and erosion of public trust. As the report shows, the Australian electorate is not only aware of these risks—they’re demanding swift action.