Tech Giants Lobby to Weaken Europe’s AI Act Regulations

  • Editor
  • September 23, 2024

Key Takeaways:

  • The EU AI Act represents the world’s first comprehensive regulatory framework for artificial intelligence, aiming to balance innovation with the need for transparency and accountability.
  • Major tech companies, including Meta, Google, and Amazon, are lobbying for a more lenient approach to the AI Act, fearing that strict regulations could lead to substantial fines and stifle innovation.
  • The code of practice accompanying the AI Act, set to take effect in 2025, will not be legally binding but will serve as a critical checklist for companies to demonstrate compliance.
  • The tension between regulation and innovation is central to the ongoing debate, with stakeholders pushing for clarity, fairness, and a balanced approach to AI governance in Europe.

The world’s biggest technology companies have launched a final push to persuade the European Union (EU) to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines.

EU lawmakers agreed in May on the AI Act, the world’s first comprehensive set of technology rules, following months of intense negotiations between different political groups.


However, until the law’s accompanying codes of practice are finalized, it remains unclear how strictly rules around “general purpose” AI (GPAI) systems, such as OpenAI’s ChatGPT, will be enforced and how many copyright lawsuits and multi-billion dollar fines companies may face.

The EU has invited companies, academics, and other stakeholders to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number, according to a source familiar with the matter who requested anonymity because they were not authorized to speak publicly.


The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate compliance.

A company claiming to follow the law while ignoring the code could face a legal challenge.

“The code of practice is crucial. If we get it right, we will be able to continue innovating,” said Boniface de Champris, a senior policy manager at trade organization CCIA Europe, whose members include Amazon, Google, and Meta.

“If it’s too narrow or too specific, that will become very difficult,” he added.

Companies such as Stability AI and OpenAI have faced questions over whether training their models on bestselling books or photo archives without permission from the works’ creators is a breach of copyright.

Under the AI Act, companies will be obliged to provide “detailed summaries” of the data used to train their models.


In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts.

Some business leaders have argued that the required summaries should contain minimal details in order to protect trade secrets, while others assert that copyright holders have a right to know if their content has been used without permission.


OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter.

Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to “contribute our expertise and ensure the code of practice succeeds.”

Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organization behind the Firefox web browser, expressed concern that companies are “going out of their way to avoid transparency.”

“The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box,” he said.

Some in business have criticized the EU for prioritizing tech regulation over innovation, so those tasked with drafting the text of the code of practice will strive for a compromise.

Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better-coordinated industrial policy, faster decision-making, and massive investment to keep pace with China and the United States.


Thierry Breton, a vocal champion of EU regulation and critic of non-compliant tech companies, quit his role as European Commissioner for the Internal Market this week after clashing with Ursula von der Leyen, the president of the bloc’s executive arm.

Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping for carve-outs to be introduced in the AI Act to benefit up-and-coming European firms.

“We’ve insisted these obligations need to be manageable and, if possible, adapted to startups,” said Maxime Ricard, policy manager at Allied for Startups, a network of trade organizations representing smaller tech companies.

Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it.


Non-profit organizations, including Access Now, the Future of Life Institute, and Mozilla, have also applied to help draft the code.

Gahntz said, “As we enter the stage where many of the AI Act’s obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates.”

As the debate over the AI Act progresses, the future of AI regulation in Europe remains at a crossroads.



Dave Andre

Editor

Digital marketing enthusiast by day, nature wanderer by dusk. Dave Andre blends two decades of AI and SaaS expertise into impactful strategies for SMEs. His weekends? Lost in books on tech trends and rejuvenating on scenic trails.

