
Canada Kicks Off Election With No AI Rules

  • Writer
  • March 26, 2025 (Updated)

Key Takeaways

  • Canada’s 2025 federal election is proceeding without any legal framework governing the use of AI in political campaigns.

  • Major political parties have not disclosed whether they use AI-generated content, raising concerns over transparency and accountability.

  • Chief Electoral Officer Stéphane Perrault and academic experts warn of the risks posed by deepfakes and synthetic media in democratic processes.

  • Proposed legislation to address AI threats in elections was shelved, and no cross-party code of conduct has been established.

  • Experts emphasize that AI transparency—not just its use—is essential to safeguard electoral integrity and public trust.

Canada’s 2025 federal election is unfolding in a landscape reshaped by artificial intelligence—yet no specific rules, oversight mechanisms, or transparency requirements are in place to govern how political parties use AI-generated content in their campaigns.

This regulatory void has drawn pointed criticism from election officials, technology scholars, and democratic governance experts.

Despite growing global consensus around the need for safeguards against synthetic media and deepfakes in political discourse, Canada enters this election cycle without any legal or voluntary framework to address AI’s influence on voters.


No Legal Mechanism to Address AI Use in Campaigns

The Canada Elections Act currently contains no provisions that specifically regulate the use of artificial intelligence in campaign materials.

While the law prohibits misrepresentation of materials as being from official sources (such as Elections Canada), it remains silent on how AI-generated images, audio, or text should be disclosed or labeled.

Chief Electoral Officer Stéphane Perrault acknowledged the risk of synthetic content at a press conference in Ottawa:

“Synthetic materials have been used in elections around the world over the last year to provide misleading content,” Perrault said. “Deepfakes are a serious concern… People tend to be too confident in their ability to detect deepfakes.”

Perrault has recommended that Canadian election law be updated to require transparency markers on AI-generated content, allowing voters to identify when a piece of media has been digitally manipulated or synthetically produced.


Political Parties Offer No Transparency on AI Use

Inquiries to Canada’s major political parties revealed a lack of disclosure and an unwillingness to commit to transparency regarding the use of AI tools in campaign materials.

Earlier this year, the Conservative Party did not respond to questions about whether AI was used in its campaign videos, including two French-language clips that appeared to include synthetically generated backgrounds.

The Liberal Party declined to comment on specific technologies used, stating only that it complies with Elections Canada’s existing regulations.

Only the NDP explicitly stated it does not use AI-generated content for campaign purposes, beyond minor edits:

“We provide clear specifications about head shots and other campaign materials so, aside from touch-ups like blurring out a piece of lint or a fly-away hair, Canadians will see our candidates as they are,” said Lucy Watson, National Director of the New Democratic Party.

Despite repeated follow-ups, none of the three major parties has offered clarity on their policies regarding AI use or indicated willingness to adopt voluntary codes of conduct.


Experts Warn About AI’s Democratic Risks

Elizabeth Dubois, Associate Professor and Research Chair in Politics, Communication and Technology at the University of Ottawa, stressed that the threat lies in opacity, not the use of AI itself:

“Generally speaking, simply using AI tools is not a problem from a democratic integrity perspective, but a lack of transparency around how these tools are integrated into campaigns is where we start seeing real risks,” she said.

Dubois noted that as AI tools become more accessible and more convincing, the ability to distinguish between authentic and manipulated content diminishes—especially in the absence of consistent detection systems.


Legislative Effort Abandoned

In 2024, the Liberal government introduced Bill C-65, which aimed to amend election law to address deepfakes and AI-generated content by mandating disclosure. However, the bill was shelved when the House of Commons was prorogued, leaving Canada without updated electoral safeguards.

Florian Martin-Bariteau, Research Chair in Technology and Society at the University of Ottawa, characterized the bill as a valuable—though imperfect—step toward protecting electoral integrity.


Martin-Bariteau, who co-authored an international report on AI and elections released in February, urged Canada to follow examples set in the European Union.

In the most recent EU elections, parties across the political spectrum agreed on voluntary rules, including labelling AI-generated content and refraining from using misleading synthetic media.

“In the EU at the last election, all of the parties from extreme left to extreme right… agreed on a set of rules,” Martin-Bariteau said. “They included rules on clear labelling and not using AI to produce misleading content.”

In contrast, no Canadian political party has committed to such voluntary measures, despite repeated calls for action.


International Precedent and AI Model Gaps

Martin-Bariteau’s report also highlighted cases of AI misuse in Brazil’s 2024 election, including deepfake pornography used to target five female candidates.

The report further documented discrepancies in AI model performance during the U.S. presidential election, especially in how they handled voting information across different languages and jurisdictions.

“These tests showed discrepancies… between AI companies’ stated commitments to accurate electoral information and the performance of their models,” the report noted.

While Martin-Bariteau acknowledged that AI tools could play a constructive role—for example, by helping voters locate polling places or understand registration deadlines—he cautioned against their use without oversight.

“Chatbots can be useful… But when they aren’t well set up, they can hallucinate answers and deliver misleading information,” he said.


With no regulatory framework, transparency obligations, or voluntary party commitments, Canada is heading into the 2025 federal election underprepared to confront the risks posed by AI in political communication.

The lack of disclosure, absence of legislation, and disregard for international best practices leave Canadian voters vulnerable to synthetic content that could distort political messaging and erode public trust.

Experts agree that it is not the presence of AI tools, but the lack of oversight and openness around their use, that poses the greatest risk to democratic integrity.

As other democracies implement labeling requirements and ethical standards, Canada remains stalled—without rules, transparency, or accountability in the AI age.


I’m Anosha Shariq, a tech-savvy content and news writer with a flair for breaking down complex AI topics into stories that inform and inspire. From writing in-depth features to creating buzz on social media, I help shape conversations around the ever-evolving world of artificial intelligence.

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *