Key Takeaways:
The Los Angeles Times has introduced a controversial AI-powered tool called “Insights,” designed to assign political bias ratings and generate counterarguments to opinion pieces.
The initiative, announced by LA Times owner Patrick Soon-Shiong, is being positioned as a step toward enhancing transparency and ideological diversity.
However, the tool has sparked criticism from journalists, media experts, and the LA Times Guild, who argue that AI-generated analysis—published without human editorial oversight—could undermine the credibility of journalism.
The AI system was developed through partnerships with two AI startups: Particle.News and Perplexity.
While Particle.News provides the political classification labels, Perplexity powers the AI-generated counterpoints that accompany opinion pieces.
The AI tool does not analyze traditional news reports but is now being applied to opinion columns, editorials, news commentary, criticism, and reviews—a significant expansion beyond its original purpose.
The “Insights” feature categorizes opinion pieces into five political labels.
How the AI Tool Works
Once an article is assigned a rating, the AI generates alternative viewpoints that are displayed alongside the article.
These counterpoints are designed to offer perspectives that differ from the original piece’s stance.
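Based on the description above, the workflow appears to be a two-stage pipeline: a classification step that assigns a political label to the piece, followed by a generation step that produces differing viewpoints to display alongside it. The sketch below is purely illustrative; the function names, the placeholder label, and the stubbed logic are hypothetical and are not drawn from the actual LA Times, Particle.News, or Perplexity systems.

```python
# Illustrative sketch only: a minimal "classify, then counter" pipeline
# mirroring the workflow described above. All names, labels, and outputs are
# hypothetical; this is not the LA Times, Particle.News, or Perplexity code.

from dataclasses import dataclass
from typing import List


@dataclass
class Insight:
    label: str                # political classification assigned to the piece
    counterpoints: List[str]  # alternative viewpoints displayed alongside it


def classify_viewpoint(article_text: str) -> str:
    """Stage 1 (hypothetical): assign one of the political labels.

    In the real system this classification is reportedly supplied by
    Particle.News; here it is a placeholder that always returns one label.
    """
    return "Center"


def generate_counterpoints(article_text: str, label: str) -> List[str]:
    """Stage 2 (hypothetical): produce viewpoints that differ from the
    article's stance.

    In the real system this step is reportedly powered by Perplexity;
    here it is stubbed with a canned response for illustration.
    """
    return [f"A perspective that challenges the piece's {label}-labeled framing."]


def build_insights(article_text: str) -> Insight:
    """Run both stages and bundle the result that would sit next to the article."""
    label = classify_viewpoint(article_text)
    return Insight(label=label,
                   counterpoints=generate_counterpoints(article_text, label))


if __name__ == "__main__":
    print(build_insights("Example opinion text."))
```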
No Human Oversight Over AI-Generated Content
A major point of contention is that AI-generated counterpoints are published without being reviewed by LA Times journalists.
The lack of human editorial oversight has raised concerns about accuracy, fairness, and potential biases embedded in AI-generated responses.
“We support efforts to improve media literacy and clearly distinguish our news report from our opinion pages. But we don’t think this approach – AI-generated analysis unvetted by editorial staff – will do much to enhance trust in the media.” — Matt Hamilton, Vice-Chair, LA Times Guild
Hamilton and other journalists argue that an AI model trained on existing political discourse might reflect systemic biases rather than eliminate them.
A Divisive Rollout: AI Counters an Op-Ed Criticizing AI
One of the first controversies involving the AI feature arose when an LA Times op-ed criticizing AI itself was met with an AI-generated counterpoint defending artificial intelligence.
The article by Rachel Antell, Stephanie Jenkins, and Jennifer Petrucelli warned about the dangers of AI-generated content in documentary filmmaking and journalism, arguing that unregulated AI use could erode trust in visual storytelling.
The AI-generated counterpoint displayed alongside the piece countered: “Proponents argue AI’s potential for artistic expression and education outweighs its misuse risks, provided users maintain critical awareness.”
This self-defensive response by AI to criticism of AI has fueled further concerns that the tool could be used to manufacture a false sense of balance in political discourse.
Patrick Soon-Shiong Defends the AI Initiative
Despite the backlash, Patrick Soon-Shiong has stood firmly behind the AI-driven initiative, arguing that it aligns with the LA Times’ mission to provide diverse viewpoints.
“Now the voice and perspective from all sides can be heard, seen, and read – no more echo chamber.” — Patrick Soon-Shiong, LA Times Owner
Soon-Shiong has framed the AI-generated analysis as a response to criticisms of media bias, stating that the tool empowers readers to explore multiple perspectives instead of being exposed to a singular ideological viewpoint.
“The purpose of Insights is to offer readers an instantly accessible way to see a wide range of different AI-enabled perspectives alongside the positions presented in the article.”
However, critics argue that while the concept of diverse viewpoints is important, the execution—allowing AI to generate counterpoints without human oversight—could do more harm than good.
The timing of the AI tool’s launch comes after months of newsroom turmoil, particularly surrounding political endorsements and editorial decisions.
Concerns Over Political Influence
The AI-generated counterpoints have already drawn scrutiny for their perceived political leanings.
Critics argue that if AI-generated counterpoints overwhelmingly favor certain political narratives, they could create the appearance of balance while subtly reinforcing particular ideological perspectives.
The Future of AI in Journalism
The LA Times is not alone in integrating AI into newsroom operations. Other major media organizations, including Reuters, The Associated Press, and Bloomberg, use AI for automated reporting in finance and sports. However, the use of AI to generate political analysis and counterpoints in opinion journalism is largely unprecedented.
As AI technology continues to evolve, news organizations must decide whether these tools enhance or threaten journalistic integrity. The LA Times’ experiment with AI journalism is a significant test case, and its success or failure could influence how AI is integrated into newsrooms globally.
What Comes Next?
The LA Times’ AI-powered “Insights” tool represents a major shift in how opinion journalism is structured. While Patrick Soon-Shiong promotes it as a step toward ideological diversity, journalists and critics argue that it removes human oversight, risks misinformation, and could reinforce pre-programmed biases.
With journalists and union representatives voicing strong opposition, the fate of the AI tool remains uncertain. The LA Times now faces a pivotal decision: will it adjust its approach based on the concerns raised, or will it continue its AI-driven strategy despite the controversy?
One thing is clear: this AI experiment is being closely watched by the broader media industry, and its implications for the future of journalism will be far-reaching.