AI is currently advancing faster than we can respond to its impact. Debate is growing: some industry leaders are raising the alarm over the dangers of unregulated AI, while others argue that strict regulation would be premature at this stage. Yet there is still no clear path to global consensus. This raises a crucial question: how can we create laws that transcend borders and benefit all of humanity without holding back innovation?
A World Divided: Why Global AI Rules Are Hard to Agree On
Regulating AI at a national level is already a complex endeavour. Now imagine the challenge multiplied across diverse economic priorities, legal systems, and political interests worldwide. Although there have been some notable global agreements in other areas, such as the Internet Protocol Suite, maritime law, and the widespread adoption of SI units, AI legislation remains a distant and thorny prospect.
Why? Because, unlike those agreements, AI regulation is expected to impact technological ecosystems, economic competitiveness, national sovereignty, and other crucial areas that countries often seek to protect. Additionally, views on AI legislation can vary widely between countries and regions. For instance, while the European Union moves ahead with its AI Act to set strict standards, countries like China take a more permissive approach, actively funding and guiding AI development as a strategic advantage. This divergence risks fragmenting global AI governance into incompatible regimes.
At the same time, some experts warn that overly strict regulation could stifle innovation, while too little oversight might drive a “race to the bottom”, where looser rules encourage investment but also increase risk. Meanwhile, the rapid pace of AI advancement is outpacing legislative cycles, making consensus even harder to achieve.
The Solution? Ethics as a Common Ground
Despite political differences, the world shares more ethical values than we often acknowledge. These values tend to converge on principles like fairness, respect for human rights, and concern for the vulnerable. Together, these principles could form the backbone of a shared AI ethics framework.
Given the difficulty of political and legislative alignment, a globally recognised ethical baseline could provide a foundation for cooperation. The UN has seen some success in this area by drawing on universal human rights principles to shape international norms. In a similar way, AI governance might begin with establishing universally accepted ethical standards that countries voluntarily adopt, offering guidance for national laws and corporate practices.
How Can We Get There?
To understand the deeper challenge of AI governance, it’s worth looking at an ancient idea from the Greek historian Polybius, who suggested that civilisations cycle through different forms of government, from monarchy to aristocracy, then democracy, and eventually declining into corruption or oligarchy, before the cycle begins anew.
While Polybius didn’t anticipate systems like today’s technocracy or cyberocracy, the core insight that political systems evolve, decay, and repeat remains strikingly relevant to how different nations currently approach AI regulation: democratic systems offer legitimacy but often move too slowly to keep up with AI’s pace; technocracies can act quickly but often lack broad consensus; and oligarchies may make sudden moves influenced by shifting internal power dynamics.
This mismatch can lead to what some call the “coordination paradox”: countries at different stages of governance struggle to align, which in turn could open the door to regulatory loopholes that may enable unethical, even unlawful AI development and use. As AI advancement accelerates, governments may be forced to adapt in ways that strain their institutions. With this in mind, international cooperation may need to shift towards more flexible “meta-governance” frameworks, which enable countries to collaborate effectively on global issues, like AI regulation, despite their differing legislative approaches.
While achieving global coordination on AI legislation is undeniably difficult, a few small but practical steps can help lay the groundwork for wider international cooperation in this area.
- Achieving Consensus on Ethics – A realistic path forward starts with agreeing on core ethical principles for AI development and use, including transparency, accountability, fairness, and privacy. International bodies like UNESCO and the OECD have begun this work, producing guidelines that, while non-binding, build a foundation for trust and shared understanding.
- Building on Existing Frameworks and Alliances – Instead of reinventing the wheel, policymakers could draw lessons from other successful international collaborations when shaping AI legislation. For example, the International Telecommunication Union (ITU) develops global standards for communication technologies, balancing technical progress with national interests. A similar coalition could work alongside the UN and other institutions to harmonise AI standards, certification, and governance.
- Flexible, Adaptive Regulation – Given AI’s fast pace of development, rigid treaties are unlikely to work. Instead, regulatory frameworks must be dynamic, allowing rules to evolve alongside technological advances. Review bodies and regular international conferences could help keep standards up to date, as well as promote best practices while tracking emerging risks.
- Public-Private Partnerships and Multi-Stakeholder Dialogue – Governments alone cannot effectively regulate AI. Collaboration with industry leaders, academics, civil society, and even users is crucial. By involving diverse stakeholders globally, we will not only build legitimacy for any regulation agreed upon but also ensure inclusive innovation and proactive risk management.
- Pilot Programmes and Regional Coalitions – While reaching global consensus will take time, regional agreements could help pave the way. The EU’s AI Act, for instance, offers a practical example of how coordinated regional AI governance can work in practice and may serve as a model for like-minded countries or regions. Elsewhere, nations could develop their own tailored frameworks, gradually aligning through ongoing dialogue and mutual recognition.
Balancing Innovation and Responsibility
As we noted in a previous blog post, concerns about AI overregulation stifling innovation are completely valid. However, leaving AI unregulated poses even greater risks, including systemic bias, privacy violations, disinformation, and threats to democratic institutions. Furthermore, a world without clear rules is one where AI could worsen inequality, strengthen monopolies, and erode public trust.
When it comes to AI governance, the key lies in finding the right balance. While building a cooperative approach to AI legislation is undeniably complex, it can succeed when shared ethical values and common goals, like safety, fairness, and prosperity, are supported by inclusive dialogue, flexible frameworks, and clear enforcement mechanisms. Ultimately, that’s precisely what effective AI governance demands.