Report Claims Elon Musk’s X Platform Is Spreading Misinformation About Elections

  • Editor
  • September 13, 2024

Key Takeaways:

  • Grok’s Role in Misinformation: X’s AI chatbot, Grok, played a central role in spreading misinformation about the 2024 U.S. presidential election, causing concerns about the influence of AI on democratic processes.
  • Election Officials’ Swift Response: A coalition of state election officials took immediate action to correct the misinformation, successfully pushing X to direct users to reliable voting information.
  • AI and Political Content: Grok’s design, which favors “spicy” and snarky responses, raises questions about its suitability for handling sensitive political content.
  • Potential Risks and Future Implications: The incident highlights the potential for AI to mislead voters, prompting calls for more robust oversight and guidelines for AI tools used in political contexts.

Soon after President Joe Biden announced that he would not seek re-election in the 2024 U.S. presidential race, a wave of misinformation began circulating on X, formerly known as Twitter.

Posts falsely claimed that a new candidate could not be added to ballots in nine states, even though the deadlines for ballot submissions had not yet passed.


The misinformation quickly gained traction, accumulating millions of views and creating confusion among users.

The Minnesota Secretary of State’s office fielded numerous requests to fact-check the claims, which were indeed false.

The source of this misinformation was traced back to Grok, X’s AI chatbot, which was providing erroneous responses to questions about the potential for adding a new candidate to the ballot.


This incident with Grok revealed a vulnerability in how artificial intelligence tools could be used or misused in the context of elections.

It served as a test case for how election officials and AI companies might handle such situations, particularly given concerns that AI could mislead voters or create distractions. Grok, in particular, was seen as having fewer safeguards to prevent the spread of misleading or inflammatory content.

The misinformation prompted swift action from a group of state secretaries and the National Association of Secretaries of State.


They reached out to Grok and its parent company, X, to flag the misinformation and request corrective measures.

However, the initial response from the company was described as indifferent, with Steve Simon, the Minnesota Secretary of State, stating, “And that struck, I think it’s fair to say all of us, as really the wrong response.”

While the misinformation in this case was relatively low-stakes — it did not prevent anyone from voting — the secretaries were concerned about what might happen if Grok made a mistake on a more critical issue.


Simon and other officials were particularly worried about the potential for Grok to give incorrect answers to questions such as “Can I vote?” “Where do I vote?” or “What are the voting hours?” Such errors could have serious consequences, affecting voter turnout and undermining public trust in the electoral process.

The fact that the misinformation was being spread by the platform itself rather than by individual users added to their concern.


To address these issues, the group of secretaries of state made their concerns public. Five of the nine secretaries in the group signed a public letter to X and its owner, Elon Musk, urging the platform to adopt a more responsible approach similar to that of other AI tools, such as ChatGPT.

They recommended that Grok direct users who asked election-related questions to a trusted, nonpartisan voting information site like CanIVote.org. This approach proved effective, and Grok now directs users to vote.gov for election-related inquiries.


In response to the public pressure, Wifredo Fernandez, X’s head of global government affairs, wrote to the secretaries expressing the company’s willingness to keep open lines of communication during the election season and to address any further concerns.

For the secretaries, this marked a small victory in the fight against election misinformation and highlighted the importance of addressing such issues promptly.


Steve Simon underscored that calling out misinformation early can help amplify corrective messages, lend them more credibility, and force a response.

While disappointed in X’s initial reaction, Simon acknowledged the company’s eventual decision to “do the right and responsible thing.”

Elon Musk has described Grok as an “anti-woke” chatbot that gives “spicy” and often snarky answers.


Lucas Hansen, co-founder of CivAI, a nonprofit organization that raises awareness about the dangers of AI, pointed out that Musk’s resistance to centralized control contributes to Grok’s vulnerability to spreading misinformation.

Grok’s method of drawing from trending social media posts to generate its responses further impacts its reliability.


Although Grok is a subscription-based service, Hansen emphasized that it has the potential for broad usage due to its integration into a widely used social media platform. This broad reach amplifies concerns that Grok could contribute to political polarization.

Adding to these concerns, Grok has the ability to generate provocative and misleading images. Reports indicate that the chatbot has created outlandish and inflammatory visuals, such as depictions of a Nazi Mickey Mouse, Donald Trump flying a plane into the World Trade Center, or Kamala Harris in a communist uniform.


The Center for Countering Digital Hate noted that Grok could produce convincing but false images, including ones showing Harris using drugs or Trump appearing gravely ill in bed.

In another investigation, Al Jazeera found that Grok could create lifelike images, such as one showing Harris holding a knife in a grocery store or another depicting Trump shaking hands with white nationalists on the White House lawn.

Hansen highlighted the risk that Grok’s capabilities pose, noting that the chatbot enables any user to create content that is “substantially more inflammatory” than before.


This risk, combined with its widespread availability on a major social media platform, makes Grok a particularly concerning tool when it comes to political content and elections.

The episode with Grok has underscored the need for better safeguards and more responsible practices when it comes to using AI tools in sensitive contexts like elections.


While X’s eventual decision to correct the misinformation was a step in the right direction, the incident has revealed the potential dangers of AI-driven platforms and the importance of maintaining rigorous oversight to prevent future problems.

As the 2024 election approaches, election officials and tech companies will need to work closely together to ensure that AI tools are used responsibly and do not undermine the democratic process.

 


Dave Andre

Editor

Digital marketing enthusiast by day, nature wanderer by dusk. Dave Andre blends two decades of AI and SaaS expertise into impactful strategies for SMEs. His weekends? Lost in books on tech trends and rejuvenating on scenic trails.
