Should the EU Pause its AI Act? Why not Cancel it and Present a Better Regulation?
By: Fredrik Erixon
Subjects: Digital Economy European Union

In mid-September, the Swiss newspaper Neue Zürcher Zeitung published a remarkable interview with Gabriele Mazzini, a scholar at the MIT Media Lab, about the EU’s new AI Act. It isn’t remarkable for its content: what he says, I have heard people say ever since the discussions about an AI Act heated up some 4-5 years ago. Indeed, I have heard myself voicing the same concerns.
Mazzini makes a few observations about structural problems and awkward approaches in the EU’s landmark AI regulation, especially those parts that specifically concern General Purpose AI (GPAI). He says that many in the EU freaked out over ChatGPT after its launch in late 2022, and lawmakers decided to rush new sections and content into a regulation that, broadly speaking, had already been constructed. No one seems to have had time to review what these new additions would do, says Mazzini. It is these sections, especially, that make the AI Act problematic. The best thing to do now? Halt the implementation and make a better regulation, if one is needed at all.
What’s remarkable about the interview is that Mazzini was the “architect and lead author” of the AI Act. He has since left the European Commission, but I recall him from the early discussions on AI regulation as a thoughtful official who had a good grasp not just of development pathways for AI but also of their implications for policy. I do not agree with the implicit charge that the problem with the AI Act is all about the GPAI provisions: there are big problems around basic regulatory and legal concepts in the AI Act, and around how various forms of rights can be operationalised within a technology-specific regulation – in a field that is evolutionary and prone to substantial technological variation.
This is also why I think the growing calls for “pausing” the AI Act, or for “stopping the clock” on its implementation, are awkward. Many have added their support to this view – Mario Draghi recently did so. The Danish government came out with a similar call just as it was taking over the rotating EU presidency, adding to statements by several other EU governments. The Commission may be about to propose something to this effect. But the logic of the argument is superficial: pausing the AI Act does not mean improving it, it just means delaying its implementation. And if there is something the AI Act needs, it is improvement.
There are two reasonable arguments for pausing the AI Act. The first is that many of those who will be covered by it do not really understand what it means for their operations, their products and services, and regulatory compliance. If they don’t know whether their AI development will pass muster in the EU regulatory environment, they may not invest in development in the first place. Much remains to be done in developing clear concepts and standards for the principles and obligations included in the Act. The tortuous process of developing a code of practice for GPAI illustrated some of these problems. More time could allow various actors in the regulatory system to provide clearer guidance and make the regulation easier for companies to work with.
The second argument is that Europe obviously has a problem with AI diffusion and with fostering growth in AI investment in the real economy. The EU now needs to play catch-up with frontier economies that are clearly ahead of it. If economic history teaches anything, it is that economic catch-up rarely happens in countries with muddled and overly restrictive regulations that, additionally, may be enforced somewhat differently across the region. A pause may help Europe as it tries to communicate to business, investors, and users that the region is a good place to develop AI, do business, and use AI.
However, for these two results to materialise, a short pause – of, say, one to two years – will not be enough. Estimates of Europe’s distance from the frontier in AI investment and development suggest a far bigger challenge. Moreover, improving the policy environment for AI development in Europe is not just about delaying the AI Act. Indeed, it is not even just about improving the AI Act.
The GDPR, for instance, imposes clear restrictions on data collection and use that impact AI model development and training. Similarly, the Data Act and the Digital Markets Act have already been cited as reasons for avoiding the EU market with new AI services. It may be difficult to gauge exactly how each and every regulation restricts developers or causes unpredictability for them – and how that impacts the economy. But regulations, like technologies themselves, form an ecology of activities that integrate with and compound each other. Various forms of AI output are already heavily regulated in Europe through standard market, risk, and consumer regulations. Hence, the operation and effects of the AI Act will not just be a result of the regulation itself: they will be the result of many different bodies of regulations, standards, and norms. I recall Mazzini trying to make these points in discussions about the AI Act, but it was equally notable that the underlying analyses for the AI Act proposal had very little to say on the matter.
The problem is that, in the last five years, the EU has rushed to introduce an enormous amount of data regulation without bothering to understand how it collectively operates. What the EU needs is a review of the full body of regulation impacting AI and the broader data-based economy. My hunch is that delaying or changing the AI Act alone won’t have much of an impact on technology and economic outcomes. Improving the entire ecology of regulation, however, could help unleash a new wave of economic development in Europe.