Key Takeaways
- The EU AI Act was built for narrower, more stable versions of AI and now struggles to keep up with fast‑evolving, general‑purpose systems.
- Rigid technical thresholds, unclear accountability for open‑weight models, and incomplete enforcement infrastructure widen the gap between law and reality.
- As AI systems become more distributed and modular, the AI Act struggles to clearly assign responsibility, making enforcement and accountability harder in practice.
- Because capabilities emerge after deployment, one‑time classification and certification are no longer meaningful; oversight needs to be continuous, not just a pre‑market checkpoint.
- Future frameworks should be more adaptive: capability‑based triggers, dynamic thresholds, machine‑readable rules, sunset clauses, and real feedback loops from sandboxes and incidents.
Imagine you spent three years writing a very detailed rulebook for chess, figuring out who can move what piece, under what conditions, and what happens if someone breaks the rules. You go through everything carefully, trying to cover all the edge cases. Then, right when you’re about to publish it, someone invents Go. And suddenly, that’s the game everyone’s playing.
That’s roughly where the EU AI Act finds itself today. While it wasn’t designed for the wrong game, at least not in the way our chess–Go analogy suggests, it no longer fully captures how AI actually works today. The rules still make sense within their own logic, but the context around them has shifted significantly. That’s not because the effort was misguided. It’s just what happens when regulation moves step by step while technology moves not only a lot faster but also in ways that are hard to predict.
To be fair, the EU wasn’t wrong to try. The ambition behind the AI Act is genuinely commendable. But ambition and timing are two different things. And when it comes to AI, poorly timed regulation can quickly become misaligned.
The AI Europe Thought It Was Regulating
When the European Commission first proposed the AI Act in April 2021, the dominant idea of AI looked quite different from what we see today. Most systems were built for a specific purpose, used in a specific context, and designed to do one thing well — like credit scoring models or medical classifiers. In other words, back then, AI was narrow, identifiable, and, most importantly, relatively stable.
So, the whole framework of the EU AI Act was built around a fairly structured idea: classify AI systems based on risk, define where and how they were used, and then apply rules depending on how much harm they could cause. On paper, that approach made sense because it was structured, easy to follow, and quite practical.
The problem was that the people designing the EU AI Act were thinking of AI as a tool. A complex tool, yes, but still a tool built to solve a specific problem and behave in a predictable way, a tool that could be tested, audited, certified, and eventually labeled and released.
At that time, issues like biased datasets, misuse of facial recognition, or automation risks in clearly defined areas reflected that way of thinking. They were important issues, no doubt, but also very much tied to a version of AI that was still, in many ways, contained.
But AI didn’t stay contained. It didn’t just evolve either. Instead, it switched categories, moving from being just a tool to something closer to infrastructure.
So, by the time the Act came into force in August 2024, the conversation had already moved on. We were no longer dealing with narrow systems making biased decisions. Instead, we were dealing with general-purpose models that could be adapted across domains and combined with other systems, and that could, in some cases, act with a degree of autonomy. Not full autonomy, but enough to raise new questions and, occasionally, new problems.
This is where things started to feel slightly out of sync. While the Act spent years banning certain hypothetical uses of AI, many of those uses never became the real risks everyone expected them to be. Meanwhile, new risks showed up and most of them didn’t even fit into the original framework.
The Act, to its credit, does try to address general-purpose AI. But it does so a bit like hosting a dinner you carefully planned for six people, and then one more shows up unannounced. You can find an extra chair and set another plate, but the question remains whether the food will actually be enough.
The Real Mismatch: Deployment vs Emergence
When it comes to advanced AI systems, capabilities don’t just get deployed; they emerge. This means that a model can be released with a certain set of expected behaviours, and then, through fine-tuning, tool integration, or just creative use, it starts doing things no one explicitly planned for. Not because anyone broke the rules, but because the system itself is flexible enough to evolve through interaction and capability expansion.
For instance, an AI model that was originally assessed as a text summariser might, after fine-tuning, start behaving like an autonomous agent. Similarly, a customer service chatbot, once connected to a few APIs, can end up handling complex, multi-step workflows that were never part of the original design. These two examples show how the label an AI model receives at certification can quickly stop reflecting what it actually does in practice.
At this point, classification starts to blur and new questions emerge: Is it still the same system? Does it fall under the same risk category? And more importantly, who is responsible for what it turns into over time?
The AI Act doesn’t have an answer to these questions because it wasn’t designed for systems that keep changing after deployment.
And this isn’t just hypothetical. General-purpose AI provisions were added relatively late in the process, by which point GPT-4 class models were already released and widely deployed. So instead of getting ahead of the curve, the Act ended up trying to catch up with something that had already moved on.
Then there are the technical details, which make things even more awkward for the framework.
- 10²⁵ FLOPs threshold: The Act presumes systemic risk for general-purpose models trained with more than 10²⁵ FLOPs of cumulative compute, and that fixed boundary is already being questioned. More efficient models can now reach equivalent or superior performance with far less compute, which makes a compute-based line an increasingly unreliable proxy for risk.
- Mixture-of-Experts Architectures: Architectures like Mixture-of-Experts (MoE) complicate things further. These systems can outperform traditional dense models while activating only a fraction of their parameters per task, which makes it less clear how capability should be measured. The Act relies in part on training compute, measured in FLOPs, as a proxy for systemic risk, but training compute doesn't always reflect how a model behaves in practice, and MoE models make this visible; the sketch after this list makes the contrast concrete.
- Open-Weight Models: Once model weights are publicly available, who exactly is the “provider” you regulate? The original developer? Every person who fine-tunes it? Or everyone who deploys a derivative? At that point, the enforcement chain starts to look less like a clear process and more like a guessing game.
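To make that mismatch concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the common rule of thumb that training compute is roughly 6 × parameters × training tokens (an approximation, not anything defined in the Act), and all the model sizes are hypothetical, chosen only to show how a dense model and a Mixture-of-Experts model can land on opposite sides of the 10²⁵ FLOPs line.

```python
# Back-of-the-envelope comparison of training compute against the AI Act's
# 10^25 FLOPs presumption of systemic risk. The 6 * N * D rule of thumb and
# the model sizes below are illustrative assumptions, not figures from the
# Act or from any specific model.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute as ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical dense model: every parameter is active for every token.
dense = training_flops(params=1e12, tokens=2e12)  # ~1.2e25 FLOPs

# Hypothetical Mixture-of-Experts model: a larger total parameter count,
# but only a fraction of parameters is active per token, so the compute
# actually spent during training is far lower.
moe_active_params = 2e11  # roughly 1/6 of the parameters active per token
moe = training_flops(params=moe_active_params, tokens=2e12)  # ~2.4e24 FLOPs

for name, flops in [("dense", dense), ("mixture-of-experts", moe)]:
    flag = "above" if flops > SYSTEMIC_RISK_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({flag} the 10^25 threshold)")
```

Under these assumed numbers, the two models could deliver comparable capability, yet only one crosses the compute line that triggers the systemic-risk presumption, which is exactly the gap between measured compute and actual capability described above.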
What sits underneath all of this is a deeper shift. AI isn’t a stable system you deploy and leave as is. Most AI systems today are combined, adapted, and extended in ways that can be difficult to track and regulate cleanly.
And here’s the real problem: not only was the Act built around a more static picture, but it also still assumes systems can be clearly defined, classified, and assessed against set criteria before they reach the market. While it does include post-market monitoring, the core logic still centres on the moment of deployment: classify the system, meet the requirements, and release it. That works well enough for narrow, stable systems, but it fits less neatly when capabilities continue to evolve after deployment.
The Enforcement Gap: A Regulation That's Still Being Born
Here’s a detail that does not get talked about much: many EU member states missed the August 2025 deadline to designate the national authorities responsible for enforcing the EU AI Act.
That might sound like an administrative delay, but it has real consequences. Until those authorities are in place, conformity assessment bodies can’t be formally appointed. And without those bodies, high-risk AI systems can’t go through certification. So, in quite a few states across Europe, the AI Act remains partly theoretical, even though it’s already partially operational.
Which leads to a paradoxical situation: we have a regulation meant to govern an advanced technology that has already gone through several iterations, written for versions of it that no longer quite exist, and relying on enforcement structures that are, in many cases, still being set up. Meanwhile, the technology itself is moving faster than all of this.
To deal with some of these gaps and make implementation easier, a “simplification” package has been proposed. But that has not exactly settled the debate. If anything, it has made the fault lines clearer.
On one side, consumer advocates argue that proposed adjustments to the framework are leaning too much toward deregulation, weakening protections and accountability. On the other, industry voices argue that the framework is already out of date and too rigid to reflect how AI systems actually work today. Interestingly, both sides seem to agree on one thing: the current thresholds and assumptions don’t quite match reality anymore. They just disagree on what to do about it.
So, in a way, a quiet shift in the regulatory approach already seems to be happening. The real question is what kind of shift this is. Is it a strategic pause to rethink and adjust, or the beginning of something that slowly starts to come apart?
The Core Paradox: The Act Is Too Slow, Too Early, and Too Late
The EU AI Act is in a slightly strange position: it’s too slow, too early, and too late — all at once. It feels too slow because it’s still grounded in how AI looked in 2021, while being applied to a reality that, by 2026, looks very different. At the same time, it’s too early because much of the enforcement infrastructure it depends on is still not fully in place across the EU. And somehow, it’s also too late because the technology it was meant to govern has already moved on.
Although we’ve already dedicated an entire section to the last point, it’s worth sitting with it a little bit more. The Act was designed to be future-proof, meaning it should have been broad enough to handle what was coming, not just what already existed. But in this case, future-proofing assumed the future would simply be an extension of the present. With AI, however, that assumption doesn’t really hold.
Because trying to lock down fixed definitions in a space where capabilities keep evolving and recombining isn’t really future-proofing. If anything, it’s closer to trying to pin down something that refuses to hold still.
When capabilities evolve quickly and in unexpected ways, rules built around a 2021 version of AI don't stretch to cover what we see in 2026. All that happens is that the rules become less and less relevant over time.
But the Act isn’t exactly obsolete, as that would suggest it failed outright. What we’re seeing is more subtle: the Act is misaligned. What this means is that it’s trying to apply today’s regulatory tools to yesterday’s problems, while those problems are already shifting and tomorrow’s ones are piling up.
Which raises a question nobody really wants to sit with: can regulation, at least in its current form, keep up with exponential systems like AI? This is a critical question because AI won’t be the last case where this comes up.
Think about quantum computing where the implications touch everything from cryptography to financial systems and national security. Will we go through the same cycle? Build rules based on today’s capabilities, spend years debating them, and then end up applying them to a landscape that has already changed?
So, the question isn’t just whether the AI Act got things right or wrong. It’s whether we’re, without realising it, creating a template for a mistake we’re about to repeat across multiple domains.
The Global Contrast: Other Approaches to the Same Problem
It also helps to zoom out for a moment. Most European takes on the AI Act stay entirely internal, which misses something important: there are other regulatory models worth examining, not as better alternatives, but as different ways of dealing with the same problem. Let’s go through them one by one.
- In the US, the approach is mostly deregulation by default, with sector-specific guidance from agencies like the FTC, FDA, and NIST. There is no single, overarching AI law. Instead, it’s a mix of standards, recommendations, and the occasional executive order. That makes the system more flexible and faster to react, but also quite fragmented, with some obvious gaps in accountability.
- China has taken a very different route. Regulation comes in targeted waves, with specific rules for generative AI, recommendation algorithms, and deepfakes. Each is relatively narrow and rolled out quickly. The advantage is speed and clarity in defined areas. The trade-off is less transparency and a stronger link between regulation and political priorities.
- Singapore sits somewhere else entirely. The focus there is on guidance rather than strict enforcement, with tools like regulatory sandboxes and voluntary governance frameworks. This creates space for experimentation without locking things down too early. While this works well in a smaller, high-trust environment, this model is harder to scale across something like the EU, where institutional differences are much wider.
None of these approaches is clearly “right”, as each comes with its own trade-offs. That said, the EU’s model has real strengths — it’s comprehensive, it takes rights seriously, and, at least in theory, it aims to provide legal certainty. At the same time, the comparison highlights something important: regulatory responsiveness is a design choice. Any legal framework can be built to adapt faster, incorporate feedback, and evolve alongside the technology. However, most legal systems were never designed for that, simply because they didn’t have to be until now.
The Constructive Part: What Version 2.0 Could Look Like
The Act isn’t going away, and frankly, it shouldn’t. Why? Because the alternative — an AI regulatory vacuum — would be much worse. So, the real question isn’t whether to regulate but how to do it without building something that becomes outdated as soon as it’s applied.
Right now, most laws behave like static documents. They’re written, debated, finalised, and then updated over time. But AI doesn’t evolve on that timeline. So instead of trying to lock it into a fixed framework, it makes more sense to design legal frameworks that can adjust continuously.
A more adaptive approach could look something like this:
- Dynamic thresholds – Instead of fixing numbers like compute limits, the law should define how the thresholds get updated. So, it would govern the process instead of the number itself.
- Machine-readable rules – Compliance requirements should be published as structured data (e.g. via regulatory APIs), not just as text, so that companies can track and respond to changes in near real time (see the sketch after this list).
- Capability-based triggers – The Act should focus less on how a system is used and more on what it can do. In practice, that means oversight should be based on a system’s capabilities, not just how it’s described or intended to be used.
- Mandatory sunset clauses for technical definitions – Any technical category should expire unless it’s actively renewed. That forces regular review instead of slow drift.
- Real regulatory sandboxes – Sandboxes should produce feedback that actually feeds back into the rules rather than becoming isolated experiments.
- Ex-post learning and incident-based updates – Major incidents or deployment milestones should trigger formal reviews of relevant categories and thresholds. This would make the framework genuinely adaptive, rather than only theoretically flexible.
- Cross-border alignment and dialogue – The framework should include mechanisms to periodically align thresholds and categories with other major jurisdictions (US, UK, China, Japan, etc.). This would help ensure interoperability and avoid fragmentation as AI systems operate across borders.
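To give a rough sense of how several of these ideas could fit together, here is a minimal sketch in Python, assuming a hypothetical regulatory feed that publishes a threshold rule as structured data with a version and an expiry date, plus a simple function that assigns an oversight tier from measured capabilities rather than from a declared purpose. None of the field names, capability labels, or values below come from the Act; they are invented for illustration.

```python
import datetime as dt

# Hypothetical machine-readable rule, as a regulator's API might publish it.
# Every field is illustrative; nothing here is taken from the AI Act itself.
THRESHOLD_RULE = {
    "id": "gpai-systemic-risk",
    "version": "2026-01",
    "training_flops_limit": 1e25,   # dynamic threshold: updated by process, not statute
    "capability_triggers": ["autonomous-tool-use", "long-horizon-planning", "code-execution"],
    "expires": "2027-01-01",        # sunset clause: lapses unless actively renewed
}

def oversight_tier(measured_capabilities: set[str], training_flops: float,
                   rule: dict, today: dt.date) -> str:
    """Assign an oversight tier from what the system can do, not how it is described."""
    if today >= dt.date.fromisoformat(rule["expires"]):
        return "rule expired: mandatory review before classification"
    if measured_capabilities & set(rule["capability_triggers"]):
        return "enhanced oversight (capability trigger)"
    if training_flops >= rule["training_flops_limit"]:
        return "enhanced oversight (compute threshold)"
    return "baseline obligations"

# Example: a model below the compute line but showing an emergent capability.
print(oversight_tier({"autonomous-tool-use"}, 3e24, THRESHOLD_RULE, dt.date(2026, 6, 1)))
```

The point is the shape, not the numbers: because the threshold and the trigger list live in data rather than in statute text, compliance tooling can poll for updates, and an expired rule forces a review instead of quietly drifting out of date.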
Thus, the deeper shift should be conceptual: fewer rigid thresholds, more ongoing evaluation, and rules that can interact with AI systems as they evolve.
The fact that the EU AI Act is outpaced isn’t a controversial claim anymore. But outpaced doesn’t mean useless, and it doesn’t mean unfixable. What it means is that the current version has done something valuable just by existing: it established that AI requires governance, forced companies to think about risk classification, and created an institutional foundation on which a better version could be built.
What’s needed now is Version 2.0. Not a rewrite, but a different architecture. A framework that watches, learns, and updates rather than one that keeps refining the rules of chess while everyone else has already started playing Go. In other words, what we need right now is regulation that behaves a bit more like software.
So, the question isn’t whether we can regulate AI. It’s whether we can design a governance framework that’s genuinely responsive to exponential change. That problem is hard, underexplored, and arguably the most important design challenge in policy right now. And we need to get started on it, preferably before the next technology that needs regulating is already three generations ahead of the people writing the rules.
Why This Actually Matters
If you’re not a policymaker or a lawyer, it’s easy to read all of this and think: so what? AI seems to be working fine. Products are shipping, tools are improving, and nobody’s asked you to fill out a compliance form. So why does it matter whether the EU AI Act is misaligned or not?
Well, regulation isn’t just about stopping bad things; it’s also about deciding who is responsible when they happen. Right now, if an AI system makes a decision that affects your job, your credit, your medical care, or your legal situation, the question of who is accountable is genuinely unclear. Not unclear in a theoretical way. Unclear in a “the company points to the algorithm, the algorithm has no address” kind of way. That gap isn’t abstract. It lands on real people in real situations, and without a functioning framework, it mostly lands on them alone.
The other reason to care is simpler: the rules being written now will shape what gets built next. Regulation doesn’t just constrain technology; it also signals what’s acceptable, what’s investable, and what’s worth building at all. A framework that’s misaligned with how AI actually works doesn’t just fail to protect people; it also fails to guide the people building these systems toward anything better. Getting this right isn’t just a governance problem. It’s a question of what kind of technologies we’re going to end up with.
Extra Sources and Further Reading
- What Is FLOPS (Floating Point Performance) and How Is It Used in Supercomputing? – BizTech
https://biztechmagazine.com/article/2023/08/what-flops-and-how-does-it-help-supercomputer-performance-perfcon
In this article, the author explains what FLOPS (floating-point operations per second) are and how they are used to measure the computational performance of supercomputers, particularly in tasks that require large-scale numerical processing.
- What Is an Open-weights Model?
https://www.ai21.com/glossary/foundational-llm/open-weights-model/
This article describes what open-weight models are and how making their trained parameters available allows developers to customise and build on existing AI systems.
- From FLOPs to Footprints: The Resource Cost of Artificial Intelligence – Cornell University
https://arxiv.org/html/2512.04142v1
This paper explores how the computational demands of AI translate into real-world resource use, showing the material and environmental costs of training large-scale models.
- What Is Mixture of Experts? – IBM
https://www.ibm.com/think/topics/mixture-of-experts
This article outlines what Mixture-of-Experts models are and how splitting an AI system into specialised components allows it to handle tasks more efficiently.

