Meta AI Chatbot Lauds Kamala Harris, Criticizes Trump as ‘Crude and Lazy’!

  • Editor
  • September 23, 2024
    Updated

Key Takeaways:

  • Meta’s AI chatbot has come under scrutiny for showing apparent bias in its responses about Vice President Kamala Harris and former President Donald Trump.
  • The chatbot praised Harris for her leadership and policy achievements while offering a more critical view of Trump, highlighting controversies during his administration.
  • The contrasting evaluations by AI raise concerns about the potential impact of technology on public perception and election outcomes.
  • The incident reflects broader issues with AI-generated content, especially when political figures are involved, and underscores the need for more balanced and transparent algorithms.

Meta’s AI assistant panned former President Donald Trump while fawning over Vice President Harris.

When asked, “Why should I vote for Donald Trump?” Mark Zuckerberg’s chatbot warned that critics had slammed the Republican nominee as “boorish and selfish” and “crude and lazy,” and that his administration had been lambasted for “potentially undermining voting rights and promoting voter suppression.” The Federalist reported the AI tool’s derisive assessment of the former president earlier this week.


Meta AI had a glowing review of Harris, however. After The Post asked, “Why should I vote for Kamala Harris?” the chatbot offered “compelling reasons” to cast a ballot for the Democratic presidential nominee: her “trailblazing leadership” as the first Black and South Asian vice president; her “record job creation and low unemployment”; and her support of rent relief and voting rights.

“By voting for Kamala Harris, you’ll be supporting a leader dedicated to fighting for the rights and freedoms of all Americans,” it said.

The chatbot’s Trump response had softened by Thursday when The Post tried it. It described Trump’s first time in the White House as “marked by controversy and polarization” — a disclaimer with zero analog when the bot opined on Harris.

Image Source: NY Post

The AI tool tossed out a handful of Trump’s accomplishments, including passing “the most substantial veterans affairs reforms in the past 50 years,” and noted that his “record-setting tax and regulation cuts were a boon to economic growth.”

It also erroneously stated Trump had appointed two Supreme Court justices, rather than the actual three.

“[Trump’s] handling of issues like abortion and healthcare has been met with criticism from certain groups,” the chatbot wrote, adding, “Ultimately, whether or not to vote for Donald Trump depends on your individual values, priorities, and policy preferences.”

It’s not the first time artificial intelligence devices have gotten political. Earlier this month, Amazon’s Alexa refused to answer questions about why voters should support Trump while gushing over Harris’ qualifications for the executive office.

An Amazon spokesperson at the time blamed the disparity on an “error” that was quickly fixed following a flood of backlash.


Meta’s chatbot, meanwhile, bizarrely claimed in July there was “no real” assassination attempt on Trump after a gunman shot the former president during a rally in Butler, Pa., grazing his ear with a bullet.

“Meta’s query results raise troubling questions, particularly in light of recent history,” said Rep. James Comer (R-Ky.), chairman of the House Oversight Committee.

The House Oversight Committee has previously raised concerns about Big Tech’s attempts to influence elections through censorship policies baked into their algorithms.

Image Source: NY Post

A Meta spokesman said that asking the AI assistant the same question repeatedly can result in varying answers. The Post’s repeat queries to the chatbot, however, again led to responses that flagged criticism of the former president while celebrating the Democratic nominee.

“Like any generative AI system, Meta AI can return inaccurate, inappropriate, or low-quality outputs,” the spokesman said. “We continue to improve these features as they evolve and more people share their feedback.”

The chatbot’s varying responses and perceived bias underscore the growing concerns about the role of AI in shaping political narratives.


The incident reflects the broader issue of AI’s potential influence on public perception, especially in politically charged contexts.

For more news and trends, visit AI News on our website.


Dave Andre

Editor

Digital marketing enthusiast by day, nature wanderer by dusk. Dave Andre blends two decades of AI and SaaS expertise into impactful strategies for SMEs. His weekends? Lost in books on tech trends and rejuvenating on scenic trails.
