Key Takeaways
• The Philippines has yet to enact any AI-specific law despite the presence of six pending bills in Congress.
• Agency-level guidelines, such as those from the Commission on Elections, are being issued in place of legislation, raising concerns about regulatory overlap.
• Risks from unregulated AI include job displacement, digital disinformation, and compromised national security.
• Experts emphasize the urgent need for both regulation and investment in AI education, R&D, and skilled local talent.
Despite its early recognition of artificial intelligence (AI) as a critical driver of digital transformation, the Philippine government has yet to pass any comprehensive regulatory framework to govern its development, use, or ethical boundaries.
As AI systems become more integrated into public and private life, experts warn that the absence of clear, enforceable policies could threaten the nation’s economic competitiveness, information integrity, and labor stability.
In 2021, the Department of Trade and Industry launched the National AI Strategy Roadmap (NAISR), marking the country’s first structured approach to AI development.
Its 2024 update expanded on areas such as digital infrastructure, workforce upskilling, and generative AI innovations. However, these strategic plans have not been matched by legal mechanisms to guide implementation or mitigate risks.
Legislative Proposals Still Stalled
Six AI-specific bills are currently awaiting action in the House of Representatives. Among them, House Bill 7913 seeks to establish an “AI Bill of Rights”, guaranteeing:
• The right to protection from algorithmic discrimination
• The right to personal data privacy
• The right to opt out or seek remedies from AI system decisions
But with the May 2025 midterm elections fast approaching and Congress nearing adjournment, the likelihood of any of these bills becoming law is slim. Legislative gridlock continues to delay progress in one of the world’s fastest-developing tech sectors.
Patchwork Policy Through Agency Guidelines
In the absence of national legislation, some government bodies have begun issuing their own policy directives.
• The Commission on Elections (COMELEC) recently banned the use of deepfake content for political disinformation in the lead-up to the elections.
• The Department of Information and Communications Technology (DICT) is drafting a similar directive focused on AI-generated misinformation.
However, experts caution that these agency-led initiatives lack cohesion and legal authority.
Such regulatory fragmentation could create confusion among businesses, developers, and public institutions, and may undermine public trust in government oversight.
Risks Escalating Across Multiple Fronts
1. Disinformation and National Security
One of the most pressing risks is the use of AI in spreading disinformation.
Earlier this year, concerns arose when an AI chatbot developed by a Chinese firm reportedly asserted China’s sovereignty over territories in the West Philippine Sea.
The case highlighted the dangers of deploying foreign-developed AI models without adequate cultural, geopolitical, or ethical safeguards.
• AI-generated disinformation poses risks to election integrity and public trust
• National narratives are vulnerable to foreign-developed AI bias
• Lack of content moderation policies increases exposure to geopolitical influence
2. Economic Vulnerabilities
AI’s integration into automation-heavy sectors like the Business Process Outsourcing (BPO) industry has raised alarms among labor economists. Estimates indicate that up to 300,000 jobs, or roughly 15% of BPO employment, are at risk of automation in the short term.
Beyond job loss, there are concerns that many remaining positions could be downgraded to menial roles such as AI data labeling—limiting long-term career mobility and suppressing wage growth.
Investment Risks and Lost Competitive Edge
A lack of regulatory clarity is also creating hesitancy among investors. As other Southeast Asian countries, particularly Singapore, move forward with well-defined AI governance frameworks, the Philippines risks losing its competitive edge.
• Foreign direct investment may pivot to countries with stronger AI safeguards
• Investors prioritize markets with regulatory predictability
• Ambiguity hampers growth in the digital economy, which contributed 8.4% to GDP in 2023
This concern is echoed in government development strategies, including the Philippine Development Plan 2023–2028, which identifies AI as a critical growth driver.
Systemic Barriers to Policy Execution
Beyond legislative inaction, structural limitations within government agencies further hinder progress. Agencies tasked with AI oversight are often under-resourced and lack the technical expertise necessary for policy design or enforcement.
Delays in related legislation—such as the Konektadong Pinoy Bill (to lower internet service costs) and the e-Governance Bill—illustrate the broader challenge of implementing digital reforms even when declared presidential priorities.
Jose Miguelito Enriquez, Associate Research Fellow at the Centre for Multilateralism Studies, RSIS (Nanyang Technological University), emphasizes that the current impasse is not irreversible.
According to Enriquez, effective regulation should be complemented by long-term investments in public awareness, AI research and development, and local talent cultivation.
With AI increasingly embedded in daily life and business operations, inaction on governance is no longer a neutral stance—it is a risk factor.
The Philippines must move swiftly to enact comprehensive AI legislation, streamline executive oversight, and invest in the future of its digital workforce.
Failure to do so could cost the country not just jobs or investments, but also its sovereignty and standing in the global AI economy.