
EU AI Act Work Delayed by Standards Bodies

  • Updated August 22, 2025

Key Takeaways

• The EU’s technical standards for enforcing the AI Act are delayed and now expected in 2026.

• CEN-CENELEC, the bodies developing the standards, cite complex review and consensus processes as the cause.

• The delay may hamper regulators’ ability to enforce compliance, since Member States must establish national oversight bodies by August 2025.

• Experts warn the absence of clear standards could lead to compliance uncertainty for AI providers across Europe.


The European Union’s effort to regulate high-risk artificial intelligence applications has hit a notable delay.

The two main European standardization bodies, CEN (the European Committee for Standardization) and CENELEC (the European Committee for Electrotechnical Standardization), have confirmed that the technical standards meant to support enforcement of the EU Artificial Intelligence Act will not be ready by the originally targeted August 2025 deadline.

Instead, the standards are now expected to be finalized sometime in 2026, creating challenges for both regulators and businesses trying to ensure early compliance with the legislation.


“Based on the current project plans, the work will extend into 2026.”— CEN-CENELEC


Why These Standards Matter

The technical standards are essential for operationalizing the AI Act, which entered into force in August 2024 and is set for full enforcement by 2027.

These standards will serve as legal benchmarks that help companies demonstrate their AI systems are safe, trustworthy, and compliant with EU law.

They were requested by the European Commission in 2023 and are expected to guide businesses on:


• Risk classification and transparency requirements
• Safety and data governance protocols for high-risk AI systems
• Harmonized metrics for assessing system robustness

Without these guidelines, AI developers may struggle to self-certify compliance, and regulators may face inconsistencies in enforcement.


Reasons for the Delay

According to CEN-CENELEC, the delay stems from the extensive validation process required to ensure the standards are robust, consensus-based, and technologically current. The process includes:


• Multiple rounds of technical editing
• Formal evaluation by the European Commission
• Broad stakeholder consultations and voting procedures


“This is likely to take up much of 2025 and partly 2026 for some deliverables.”— CEN-CENELEC

The organizations also emphasized the importance of achieving pan-European agreement and alignment with the state of the art:


“This will reflect the state of the art and ensure consensus from the European stakeholders.”— CEN-CENELEC

To speed things up, they report working closely with the newly formed AI Office, a dedicated unit within DG Connect, and are implementing “extraordinary measures” to accelerate the timeline.


Regulatory Deadlines Loom

Despite the delays, the AI Act’s implementation milestones remain in place. By August 2025, all EU Member States are required to establish national regulatory authorities to oversee AI enforcement domestically.

These bodies will work alongside the AI Office, but the absence of finalized technical standards could leave them operating without a unified reference framework.

This gap raises concerns about fragmented enforcement and uncertainty for AI vendors deploying systems in multiple EU countries.


Industry Reactions and Expert Insight

The delay in standards has drawn concern from data protection and regulatory experts. Sven Stevenson, a senior official at the Dutch Data Protection Authority, highlighted the potential risks of proceeding without finalized criteria.


“The standards are a way to create certainty for companies, and for them to demonstrate compliance. There is still a lot of work to be done before those standards are ready. And of course, time is starting to run out.”— Sven Stevenson, Dutch Data Protection Authority

Such uncertainty could lead to increased compliance costs, delayed product rollouts, or legal ambiguity for firms deploying AI in critical sectors like healthcare, infrastructure, and public services.


What Comes Next

Drafts of the first technical standards are expected later in 2025. Once released, they will undergo EU Commission review, stakeholder input, and final approval processes.

Until then, businesses are urged to follow existing risk management best practices and monitor developments through regulatory briefings and stakeholder consultations.



Khurram Hanif

Reporter, AI News

Khurram Hanif, AI Reporter at AllAboutAI.com, covers model launches, safety research, regulation, and the real-world impact of AI with fast, accurate, and sourced reporting.

