Report Claims Elon Musk's X Platform is Spreading Misinformation About Elections!

Soon after President Joe Biden announced that he would not seek re-election in the 2024 U.S. presidential race, a wave of misinformation began circulating on X, formerly known as Twitter. Posts falsely claimed that a new candidate could not be added to ballots in nine states, even though the deadlines for ballot submissions had not yet passed. The misinformation quickly gained traction, accumulating millions of views and creating confusion among users.

The Minnesota Secretary of State's office received numerous requests to fact-check these claims, which were incorrect. The source of the misinformation was traced back to Grok, X's AI chatbot, which was giving erroneous answers to questions about whether a new candidate could still be added to the ballot.

The incident revealed a vulnerability in how artificial intelligence tools can be used or misused in the context of elections. It also served as a test case for how election officials and AI companies might handle such situations, particularly given concerns that AI could mislead voters or create distractions. Grok, in particular, was seen as having fewer safeguards than other AI chatbots to prevent the spread of misleading or inflammatory content.

The misinformation prompted swift action from a group of secretaries of state and the National Association of Secretaries of State, who reached out to Grok and its parent company, X, to flag the false claims and request corrective measures. However, the company's initial response was described as indifferent, with Steve Simon, the Minnesota Secretary of State, stating, "And that struck, I think it's fair to say all of us, as really the wrong response."

While the misinformation in this case was relatively low-stakes, since it did not prevent anyone from voting, the secretaries were concerned about what might happen if Grok made a mistake on a more critical issue. Simon and other officials were particularly worried about the potential for Grok to give incorrect answers to questions such as "Can I vote?", "Where do I vote?" or "What are the voting hours?" Such errors could have serious consequences, affecting voter turnout and undermining public trust in the electoral process. The fact that the misinformation was being spread by the platform itself rather than by individual users added to their concern.

To address these issues, the group of secretaries of state made their concerns public. Five of the nine secretaries in the group signed a public letter to X and its owner, Elon Musk, urging the platform to adopt a more responsible approach similar to that of other AI tools, such as ChatGPT. They recommended that Grok direct users who ask election-related questions to a trusted, nonpartisan voting information site like CanIVote.org. The approach proved effective, and Grok now directs users to vote.gov for election-related inquiries.

In response to the public pressure, Wifredo Fernandez, X's head of global government affairs, wrote to the secretaries expressing the company's willingness to keep open lines of communication during the election season and to address any further concerns. For the secretaries, this marked a small victory in the fight against election misinformation and highlighted the importance of addressing such issues promptly. Simon underscored that calling out misinformation early can help amplify corrective messages, lend them more credibility, and force a response. While disappointed in X's initial reaction, he acknowledged the company's eventual decision to "do the right and responsible thing."

Elon Musk has described Grok as an "anti-woke" chatbot that gives "spicy" and often snarky answers. Lucas Hansen, co-founder of CivAI, a nonprofit organization that raises awareness about the dangers of AI, pointed out that Musk's resistance to centralized control contributes to Grok's vulnerability to spreading misinformation. Grok's practice of drawing on trending social media posts to generate its responses further undermines its reliability. Although Grok is a subscription-based service, Hansen emphasized that its integration into a widely used social media platform gives it the potential for broad use, which amplifies concerns that it could contribute to political polarization.

Adding to these concerns, Grok can generate provocative and misleading images. Reports indicate that the chatbot has created outlandish and inflammatory visuals, such as depictions of a Nazi Mickey Mouse, Donald Trump flying a plane into the World Trade Center, or Kamala Harris in a communist uniform. The Center for Countering Digital Hate noted that Grok could produce convincing but false images, including ones showing Harris using drugs or Trump appearing gravely ill in bed. In another investigation, Al Jazeera found that Grok could create lifelike images, such as one showing Harris holding a knife in a grocery store or another depicting Trump shaking hands with white nationalists on the White House lawn. Hansen highlighted the risk these capabilities pose, noting that the chatbot enables any user to create content that is "substantially more inflammatory" than before. That risk, combined with Grok's widespread availability on a major social media platform, makes it a particularly concerning tool when it comes to political content and elections.

The episode has underscored the need for better safeguards and more responsible practices when AI tools are used in sensitive contexts like elections. While X's eventual decision to correct the misinformation was a step in the right direction, the incident revealed the potential dangers of AI-driven platforms and the importance of rigorous oversight to prevent future problems. As the 2024 election approaches, election officials and tech companies will need to work closely together to ensure that AI tools are used responsibly and do not undermine the democratic process.