
EU AI Act Work Delayed by Standards Bodies

  • Anosha Shariq
  • April 16, 2025 (Updated)

Key Takeaways

• The EU’s technical standards for enforcing the AI Act are delayed and now expected in 2026.

• CEN-CENELEC, the bodies developing the standards, cite complex review and consensus processes as the cause.

• The delay may affect regulators’ ability to enforce compliance as national oversight bodies are due by August 2025.

• Experts warn the absence of clear standards could lead to compliance uncertainty for AI providers across Europe.


The European Union’s effort to regulate high-risk artificial intelligence applications has hit a notable delay.

The two main European standardization bodies, CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization), have confirmed that the technical standards meant to support enforcement of the EU Artificial Intelligence Act will not be ready by the originally targeted August 2025 deadline.

Instead, the standards are now expected to be finalized sometime in 2026, introducing challenges for both regulators and businesses trying to ensure early compliance with the legislation.


“Based on the current project plans, the work will extend into 2026.”— CEN-CENELEC


Why These Standards Matter

The technical standards are essential for operationalizing the AI Act, which entered into force in August 2024 and is set for full enforcement by 2027.

These standards will serve as legal benchmarks that help companies demonstrate their AI systems are safe, trustworthy, and compliant with EU law.

They were requested by the European Commission in 2023 and are expected to guide businesses on:


• Risk classification and transparency requirements
• Safety and data governance protocols for high-risk AI systems
• Harmonized metrics for assessing system robustness

Without these guidelines, AI developers may struggle to self-certify compliance, and regulators may face inconsistencies in enforcement.


Reasons for the Delay

According to CEN-CENELEC, the delay stems from the extensive validation process required to ensure the standards are robust, consensus-based, and technologically current. The process includes:


• Multiple rounds of technical editing
• Formal evaluation by the European Commission
• Broad stakeholder consultations and voting procedures


“This is likely to take up much of 2025 and partly 2026 for some deliverables.”— CEN-CENELEC

The organizations also emphasized the importance of achieving pan-European agreement and alignment with the state of the art:


“This will reflect the state of the art and ensure consensus from the European stakeholders.”— CEN-CENELEC

To shorten the delay, the two bodies report working closely with the newly formed AI Office, a dedicated unit within DG Connect, and say they are implementing “extraordinary measures” to accelerate the timeline.


Regulatory Deadlines Loom

Despite the delays, the AI Act’s implementation milestones remain in place. By August 2025, all EU Member States are required to establish national regulatory authorities to oversee AI enforcement domestically.

These bodies will work alongside the AI Office, but the absence of finalized technical standards could leave them operating without a unified reference framework.

This gap raises concerns about fragmented enforcement and uncertainty for AI vendors deploying systems in multiple EU countries.


Industry Reactions and Expert Insight

The delay in standards has drawn concern from data protection and regulatory experts. Sven Stevenson, a senior official at the Dutch Data Protection Authority, highlighted the potential risks of proceeding without finalized criteria.


“The standards are a way to create certainty for companies, and for them to demonstrate compliance. There is still a lot of work to be done before those standards are ready. And of course, time is starting to run out.”— Sven Stevenson, Dutch Data Protection Authority

Such uncertainty could lead to increased compliance costs, delayed product rollouts, or legal ambiguity for firms deploying AI in critical sectors like healthcare, infrastructure, and public services.


What Comes Next

Drafts of the first technical standards are expected later in 2025. Once released, they will undergo EU Commission review, stakeholder input, and final approval processes.

Until then, businesses are urged to follow existing risk management best practices and monitor developments through regulatory briefings and stakeholder consultations.


