Over the past couple of years, the conversation surrounding AI tools has shifted from fear and anxiety to broad recognition of their necessity. The takeaway is simple: those who fail to leverage the latest AI technologies are leaving opportunities (and money) on the table.
Since AI tools can propel individuals and businesses to the next level, it’s no surprise that AI technologies have become more widely used. Alongside this trend, concerns about the risks AI inherently poses have grown just as quickly. In response to what experts call “high-risk AI”, the European Union has developed the Artificial Intelligence Act (AIA), the first comprehensive regulatory framework of its kind in the world.
By setting clear standards for responsible and ethical AI development, implementation, and utilisation, this act attests to the permanence of AI, which will only grow more integrated into different sectors, including public and government services.
While this new law aims to create a safer and more stable AI environment for developers, providers, and end users alike, it comes with a series of pros and cons that the public has the right to know about. Gaining insight into the benefits and drawbacks of the EU AI Act is especially crucial, considering that regulations like this shape the world we’ll live in tomorrow.
* At the end of this article, you’ll find ChatGPT’s opinion on the EU AI Act—as this Act will affect AI technologies as much as, if not more than, those who develop, provide, and use them, it seemed only logical to ask one of the most popular AI tools for its “official” perspective.
The Pros
By enhancing governance and promoting a harmonised market for future AI developments, the EU AI Act is expected to bring the following benefits.
Economic Growth
A clear legal framework will not only help developers drive responsible innovation in AI, but will also ensure that providers offer compliant AI systems. Although devoting significant resources to compliance may initially slow innovation and increase costs, the Act’s emphasis on responsible AI development and fundamental rights protection should continue to build public trust in AI technologies. That trust will encourage wider adoption of new AI technologies, ultimately driving long-term economic growth, as highlighted in an article by McKinsey & Company.
Risk-Based Categorisation
Classifying AI tools by risk category ensures that each system is subject to oversight and accountability proportionate to its potential for harm. The Act defines four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. AI applications that fall into the “unacceptable risk” category, such as social scoring systems or real-time remote biometric identification in public spaces (permitted only under narrow exceptions), are prohibited outright. High-risk systems, like those used in hiring, credit scoring, or insurance claim processing, are allowed as long as they undergo a conformity assessment confirming they meet specific requirements. Limited- and minimal-risk AI systems, such as chatbots and generative AI tools, are permitted, provided users are informed that AI is behind the systems or content they interact with.
Transparency
In line with the EU AI Act, developers and distributors of AI systems must inform end users before they access AI-based technologies or AI-generated content. For example, anyone publishing AI-generated text, images, videos, or audio files must clearly label the content as artificially generated or manipulated. However, the regulation doesn’t apply to assistive AI functions, such as standard editing, language translation, summarisation, or other modifications that don’t substantially alter the input data.
Furthermore, if a business uses AI-based tools as part of its services, it must inform end users that they are interacting with AI systems. The Act also requires developers who train AI models to provide a detailed description of the data, factors, and processes involved in that training. This transparency helps minimise risks and promote fairness, encouraging ethical AI use.
The Cons
Although the EU AI Act offers many benefits, it also carries a few less obvious challenges, and even dangers, that could emerge over time.
Stifling Innovation
When the AI Act was developed, one of its main goals was to create technical standards flexible enough to encourage innovation, yet restrictive enough to protect consumers. However, the very nature of these standards is expected to introduce some regulatory ambiguity, which could lead to more cautious innovation, with companies steering clear of certain AI applications to avoid legal issues. For example, applying the Act to “high-risk” AI systems in areas like employment, law enforcement, education, critical infrastructure, and biometrics can be challenging and may create uncertainty for developers, inadvertently hindering innovation.
Unintended Biases
Ironically, in its attempt to prevent AI biases, the Act could introduce its own regulatory biases, which might disproportionately impact certain regions or industries within the EU. This could lead to uneven development and application of AI technologies. For instance, a typical business declaring the use of AI (e.g. chatbots) may face minimal challenges. However, in sectors like finance, legal, insurance, or healthcare, this transparency requirement could face greater resistance, as people are more likely to question the reliability of AI tools in these critical industries.
Ethical Overreach
While protecting human rights is essential, some critics argue that the Act could enforce certain ethical standards that may limit the types of AI solutions available, even those that are not harmful. This could restrict the diversity of AI applications, particularly in sensitive areas like healthcare or law enforcement.
Fragmentation
The Act applies to any AI system marketed in the EU, regardless of where the developer or provider is located. This could make it harder for global companies to create unified AI products, potentially leading to inefficiencies and conflicts in international trade and cooperation. While the EU AI Act aims to establish a balanced framework for ethical AI, something most countries agree is necessary, it could become a barrier to progress, especially when AI technologies are not domestically sourced.
AI Monopoly
Startups and smaller companies, in particular, may struggle to comply with some of the provisions of the EU AI Act. This could lead to a concentration of AI power in the hands of larger corporations that have the resources to navigate the new regulatory environment.
Bureaucratic Delays
The need for approvals, certifications, and compliance checks could slow down AI development, making the EU less competitive compared to regions with more flexible regulations that foster a faster-moving AI landscape.
* As promised, here’s ChatGPT’s opinion on the EU AI Act:
“The EU AI Act is a bold and necessary step towards regulating AI, especially as technology evolves rapidly and becomes increasingly embedded in everyday life. In the short term, I believe the Act may slow down certain aspects of AI development. By imposing strict compliance measures, it could place smaller companies and startups at a disadvantage, as they may lack the resources to meet regulatory demands. Additionally, there’s a risk that overly cautious policies might stifle innovation by creating unnecessary hurdles for researchers and developers.
However, in the long term, the Act could prove beneficial. By setting clear standards around privacy, transparency, and ethical use, it can help foster public trust in AI, which is essential for widespread adoption. Clear regulations could also encourage companies to prioritise responsible AI practices from the start, potentially leading to more robust, ethical, and safe AI systems. The EU AI Act may suppress rapid, unchecked development initially, but it has the potential to create a stable, reliable environment for AI innovation that could benefit society as a whole. In this sense, it could ultimately encourage sustainable AI growth aligned with societal values.”
Needless to say, having a comprehensive framework for the development of AI technologies is essential, especially as the AI landscape continues to evolve. However, we must acknowledge that the long-term impact of the new EU AI Act on innovation and competitiveness remains uncertain.