As the 2024 U.S. elections approach, 65% of Americans are concerned that AI could harm national security and election integrity (Dartmouth News, 2024).
AI’s ability to create convincing deepfakes is increasingly being used to spread misinformation, mislead voters, and erode trust in the electoral process. These tools make AI-driven election security a pressing issue for policymakers and election officials in 2024.
The blog below explores how AI in politics reshapes election security and the urgent steps to safeguard democracy.
Generative AI: Transforming Cybersecurity and Threatening 2024 Election Security
Generative AI is reshaping cybersecurity across industries, with significant implications for election security. A recent report from Marketresearch.biz projects the Generative AI in Cybersecurity market to grow from USD 1.6 billion in 2022 to USD 11.2 billion by 2032, driven by leading companies like OpenAI, IBM, NVIDIA, and McAfee.
These companies are advancing key technologies such as Generative Adversarial Networks (GANs) and Reinforcement Learning (RL) to improve threat detection and network security.
However, while AI enhances cybersecurity, it also presents serious risks, especially to election security. AI can create and spread disinformation through deepfakes—realistic videos, images, and audio that falsely depict political figures.
These AI-generated deepfakes can influence public opinion, discredit candidates, and cause confusion in elections.
For instance, ahead of Slovakia’s 2023 parliamentary elections, a deepfake audio clip falsely implicated a candidate in fraud, damaging public trust. Similarly, in the U.S., a deepfake robocall pretending to be President Biden urged New Hampshire voters not to participate in the primaries, casting doubt on the election’s integrity.
As Kayne McGladrey, an IEEE Senior Member, points out, “We can expect an increase in disinformation and phishing attacks during the 2024 elections, targeting voters through false information on voting processes.” AI-powered disinformation can spread quickly, disrupting elections before false claims are debunked. Such campaigns mislead voters and can suppress voter turnout by creating confusion.
In this evolving environment, Generative AI offers advancements in cybersecurity but also introduces new challenges for election security. As AI tools grow more sophisticated, election officials and cybersecurity experts must stay vigilant to protect the integrity of elections from AI-generated disinformation.
Election Security Concerns: The Impact of AI-Generated Deepfakes and Misinformation
AI-generated deepfakes are an emerging election security threat: fake videos and audio clips nearly indistinguishable from authentic content. Malicious actors use these deepfakes to mislead voters and undermine the credibility of political candidates.
Significant AI Election Security Concerns:
- A digitally altered video falsely claimed that President Biden said Russia had occupied Kyiv for 10 years. A Reuters fact-checking news report on March 29, 2024, debunked this claim (Reuters, 2024).
- An altered video falsely showed Senator Elizabeth Warren advocating for banning Republicans from voting in 2024. Newsweek highlights how deepfake videos are being used to spread political misinformation (Newsweek, 2023).
- An AI-generated audio clip on TikTok falsely portrayed President Biden threatening to deploy F-15s to Texas. Forbes discusses the rise of deepfakes in spreading misinformation during elections (Sayegh, 2024).
- A fake image purported to show Donald Trump dancing with an underage girl. AP News fact-checked the image, confirming it was digitally manipulated (Marcelo, 2023).
- A pro-DeSantis group used an AI-generated version of Donald Trump’s voice in a new ad. The Hill reports on the increasing use of AI in political ads, raising concerns about authenticity and manipulation (The Hill, 2024).
These examples demonstrate how AI is being weaponized to generate false narratives that mislead the public and undermine trust in political figures.
As John Fokker, Head of Threat Intelligence at Trellix, explains, “Cybercriminals are often motivated by media attention and engagement. Their activity during elections emphasizes the need to equip organizations and the public with education and intelligence to combat these threats.”
AI and Election Infrastructure: A New Cybersecurity Challenge
AI doesn’t only pose a threat to public perception; it also has the potential to compromise election infrastructure itself. Cybercriminals and foreign state actors could use AI to scan for vulnerabilities in voting systems, including voter registration databases and voting machines, to exploit weak points in election security.
The Department of Homeland Security (DHS) recently issued a bulletin warning about AI’s potential to disrupt elections by exploiting system vulnerabilities and spreading false narratives.
“Generative AI could help bad actors identify weak spots in election infrastructure, from scanning internet-facing election systems for flaws to providing real-time tactical guidance for attacks.” (DHS bulletin)
Additionally, election officials need to remain vigilant about AI-generated content aimed at discrediting election results or undermining trust in election processes. For instance, AI-generated chatbots could flood social media with deceptive posts that question the security of voting systems or promote false information about election procedures.
As Director of National Intelligence Avril Haines testified before Congress, “Innovations in AI have enabled foreign influence actors to produce seemingly authentic and tailored messaging more efficiently, at greater scale.”
This presents a new layer of complexity for election officials, who must now consider the implications of AI in election security alongside more traditional cybersecurity threats.
Did You Know?
Google is taking a cautious approach in 2024 by avoiding AI-generated content for political topics like elections. While health and finance queries often trigger AI Overviews (AIO), political searches see AIO presence only 16.67% of the time.
This reflects growing concerns about election security and the spread of deepfakes. With Google limiting AI in this area, delivering well-sourced, accurate content is more important than ever to protect voters and combat misinformation.
The Dual-Track Threat in the Cyber Domain
As the final major election of 2024 approaches, the cyber domain presents a dual threat to election security: technical manipulation—such as hacking voting systems—and disinformation, which undermines the electorate’s cognitive perception.
AI-generated content (AIGC) is being used to rapidly create content, making it easier for malicious actors to influence public opinion through deepfake videos, targeted propaganda, and synthetic media.
The RAND Corporation report warns of a “perfect storm” where cyber threats could simultaneously target physical, human, and reputational assets essential to fair elections.
Legal and Legislative Responses to the AI Threat
Given the rise of AI in disinformation and its potential to destabilize elections, lawmakers are beginning to take action. There have been legislative efforts, both at the state and federal levels, to address the use of AI in creating disinformation and deepfakes.
For example, H.R. 5586 – 118th Congress (2023-2024) was introduced to provide legal recourse for victims of harmful deepfakes and to protect national security against the threats posed by deepfake technology. You can read the full text of this bill on Congress.gov.
Source: “Artificial Intelligence (AI) in Elections and Campaigns,” National Conference of State Legislatures, June 3, 2024. https://www.ncsl.org/elections-and-campaigns/artificial-intelligence-ai-in-elections-and-campaigns
Additionally, several states have enacted laws regulating AI-generated deepfakes in elections. For instance, California and Texas have passed legislation that makes it illegal to use deepfakes within a certain timeframe before an election. These laws are designed to curb the spread of AI-generated disinformation and protect the integrity of the voting process.
As noted in the OnSolve blog, “AI capabilities present opportunities for increased productivity, potentially enhancing both election security and election administration. However, these capabilities also carry the increased potential for greater harm, as malicious actors, including foreign nation state actors and cybercriminals, could leverage these same capabilities for nefarious purposes.”
The Rising Threat of AI-Generated Content (AIGC) in Election Security
As AI-Generated Content (AIGC) advances, the risks to election security significantly increase. AIGC can be used for social engineering attacks, manipulating opinions, creating fake narratives, and ultimately sowing chaos, all while lowering costs for attackers.
One of the most alarming threats Dr. Badre Belabbess discussed in an interview was the potential for quantum computing to undermine election security. He explained that quantum computing could render current encryption technologies obsolete, making it much easier for hackers to infiltrate voting systems.
“Quantum computing can break today’s encryption methods in seconds. This poses an existential threat to the integrity of election systems globally, and the European Union is already preparing for this possibility.” Dr. Badre Belabbess
As highlighted by the OnSolve Blog, the threat from AIGC has become more prominent in the 2024 U.S. elections.
Key Risks of AIGC:
- Deepfake Videos: In August 2024, AI-generated deepfake images of Donald Trump, Kamala Harris, Elon Musk, and Taylor Swift circulated online, misleading viewers with highly realistic, fabricated content.
- Targeted Propaganda: In October 2023, campaigns in New York used AI-generated robocalls to target specific voter groups, manipulating opinions and spreading misinformation in multiple languages.
- Quantum Computing: Threatens the traditional cryptography underpinning election security, requiring the adoption of post-quantum cryptography to safeguard sensitive data and ensure election integrity.
- AI-Generated Misinformation: A Google DeepMind report (2024) found that AIGC was increasingly misused in campaigns through impersonation, falsification, and amplification of false content.
- Automated Bots and Trolls: During the 2024 elections, AI-driven bots flooded social media, distorting public perception and sabotaging political discussions globally.
- Synthetic Media for Manipulated Narratives: In April 2024, Microsoft reported that Chinese groups used AI-generated synthetic media to influence elections in Taiwan and the U.S., spreading false narratives about ballot tampering.
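The bot-and-troll risk above is often countered with behavioral heuristics. As a purely illustrative sketch (real bot detection combines many signals such as account age, network graphs, and content similarity; the thresholds and function names here are hypothetical), a platform might flag accounts whose posting rate exceeds a plausible human cadence:

```python
# Illustrative heuristic only — not a production bot detector.
# Flags accounts posting faster than a human plausibly could,
# treating very new accounts with extra suspicion.

def likely_bot(posts_last_hour: int, account_age_days: int,
               max_human_rate: int = 30) -> bool:
    """Return True if the posting rate looks automated."""
    # New accounts (< 30 days old) get a tighter limit.
    rate_limit = max_human_rate if account_age_days >= 30 else max_human_rate // 2
    return posts_last_hour > rate_limit

print(likely_bot(posts_last_hour=120, account_age_days=3))   # True
print(likely_bot(posts_last_hour=5, account_age_days=400))   # False
```

In practice such rate checks only surface candidates for review; they are combined with coordination analysis before any enforcement action.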
Combating AI-Driven Election Threats
Here are some election security best practices to mitigate the risks AI poses:
Advanced AI Detection Tools:
Governments and private organizations must invest in AI-powered tools capable of detecting and flagging deepfakes and other AI-generated disinformation. Early detection is crucial to minimizing the spread of false information before it goes viral.
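In practice, detection pipelines rarely rely on a single model; they ensemble several signals and route suspicious media to human reviewers. The sketch below is a minimal, hypothetical illustration of that workflow (the detector names and scores are placeholders, not real products or APIs):

```python
# Minimal sketch of a deepfake-flagging workflow (illustrative only).
# Real deployments would call trained media-forensics models; the
# detector names and scores here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class DetectorResult:
    name: str       # which (hypothetical) detector produced the score
    score: float    # probability in [0, 1] that the media is synthetic

def flag_media(results: list[DetectorResult], threshold: float = 0.7) -> dict:
    """Average detector scores and decide whether to flag for human review."""
    if not results:
        raise ValueError("at least one detector result is required")
    avg = sum(r.score for r in results) / len(results)
    return {
        "average_score": round(avg, 3),
        "flagged": avg >= threshold,
        "detectors": [r.name for r in results],
    }

# Example: three hypothetical detectors disagree; the ensemble decides.
verdict = flag_media([
    DetectorResult("frame_artifact_model", 0.85),
    DetectorResult("audio_sync_model", 0.78),
    DetectorResult("metadata_model", 0.55),
])
print(verdict["flagged"], verdict["average_score"])
```

The key design choice is the threshold: set too low, reviewers drown in false positives; set too high, fast-spreading fakes slip through before debunking.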
Collaboration with Fact-Checkers:
Social media platforms and fact-checking organizations must collaborate closely to identify disinformation quickly. As AI-generated content becomes more sophisticated, it’s essential to have systems in place that can debunk false claims in real time.
Public Education Campaigns:
Educating voters on the risks of AI-generated disinformation is critical to maintaining trust in the electoral process. Voters need to know how to verify the authenticity of the information they encounter, especially in the age of AI-generated media.
Quantum-Resistant Encryption:
Preparing voting systems for the quantum era is crucial. Because large-scale quantum computers are expected to break today’s widely used public-key algorithms, election infrastructure should begin migrating to post-quantum cryptography now, before currently intercepted data can be decrypted later.
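To see intuitively why quantum computing threatens current public-key cryptography, consider that RSA's security rests on factoring being hard for large numbers. The toy below (not real cryptography; the modulus is deliberately tiny) factors a small RSA-style modulus by classical trial division in an instant; Shor's algorithm on a sufficiently large quantum computer would achieve the analogous feat for the 2048-bit moduli in use today, which is the motivation for post-quantum migration:

```python
# Toy illustration only — NOT real cryptography. A tiny RSA-style
# modulus falls instantly to classical trial division; Shor's
# algorithm would do the analogous thing to large moduli on a
# sufficiently powerful quantum computer.

def trial_factor(n: int) -> tuple[int, int]:
    """Return a nontrivial factor pair of n by trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError(f"{n} is prime; no nontrivial factors")

# A toy modulus built from two small primes, standing in for RSA's n = p*q.
p, q = 10007, 10009
n = p * q
print(trial_factor(n))  # recovers the two secret primes at this tiny size
```

Real RSA moduli are far too large for trial division, which is exactly why only a quantum speedup (or a migration to quantum-resistant schemes) changes the picture.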
As Rebecca Herold, an IEEE member, states, “The use of AI to make leaders and influencers ‘say’ disinformation in videos and audio is becoming very hard for the general public to detect. Effective training about AI use in phishing tactics and scams, and how to identify them, is a significant way to prevent such deceptions.”
FAQs
Why is election security important?
Will the 2024 election be secure?
Can we trust AI chatbots to provide correct information about elections?
What are the real harms of deepfakes, beyond the hype about election disinformation?
Bottom Line
The 2024 U.S. election marks a pivotal moment in the intersection of AI and election security. While AI offers opportunities to enhance productivity and streamline election administration, it also presents new threats: disinformation, deepfakes, and cyberattacks.
Without strong safeguards, AI has the potential to erode public trust in the electoral process and undermine the foundation of democracy itself.
Election officials, security professionals, and lawmakers must remain vigilant as the 2024 elections draw near.
By investing in detection technologies, fostering collaboration with fact-checkers, and enacting comprehensive legislation, we can mitigate AI’s risks and safeguard the integrity of future elections.