Spain’s AI Sandbox Gives Rise to AESIA: A National Regulatory Body

NMR Legal
Sep 12, 2023



EU AI Approach

The European Union has been at the forefront of the global effort to regulate artificial intelligence (AI). In April 2021, the European Commission proposed a comprehensive set of rules for AI, known as the Artificial Intelligence Act (the “AI Act”). The regulation aims to ensure that AI is developed and used in a way that respects human rights and fundamental freedoms, and that promotes transparency, accountability, and trustworthiness.

Under the AI Act, AI systems are classified according to the level of risk they pose:

  1. “Limited-risk” AI systems, such as chatbots, which pose only limited risks to the people who interact with them. These systems are subject to a light-touch regime consisting mainly of transparency obligations, such as informing users that they are interacting with an AI system.
  2. “High-risk” AI systems, which can significantly affect people’s health, safety, or fundamental rights. These systems are subject to a stricter regime, with requirements covering risk management, data quality, documentation, human oversight, and conformity assessment before being placed on the market. Examples of high-risk AI systems include AI used in medical devices, recruitment and credit-scoring tools, and biometric identification systems.
  3. AI systems posing an “unacceptable risk”, which are banned outright. Prohibited practices include social scoring by public authorities, systems that manipulate people through subliminal techniques or exploit the vulnerabilities of specific groups, and, subject to narrow exceptions, real-time remote biometric identification in publicly accessible spaces for law enforcement.

AI systems that pose only minimal risk, such as spam filters, face no new obligations under the AI Act, although providers may voluntarily adhere to codes of conduct.

The AI Act also establishes a number of obligations for providers (developers) and users of AI systems, including:

  1. Transparency: Developers must provide clear information about their AI systems, including how they work, what data they use, and who is responsible for them.
  2. Accountability: Developers and users must establish mechanisms to ensure that AI systems are used in a way that complies with the law and respects human rights.
  3. Risk management: Developers and users must take steps to identify and mitigate any risks associated with their AI systems.
  4. Data governance: Developers and users must ensure that personal data used by AI systems is collected, stored, and processed in accordance with EU data protection laws.
  5. Ethical considerations: Developers and users must consider the ethical implications of their AI systems, including issues related to privacy, dignity, and non-discrimination.

To ensure compliance with the AI Act, the EU provides for a number of enforcement mechanisms, including administrative fines for non-compliance, which under the Commission’s proposal can reach EUR 30 million or 6% of worldwide annual turnover. The proposal also establishes a European Artificial Intelligence Board, alongside national supervisory authorities, to support the consistent implementation of the regulation across Member States.

Criticism of EU AI Framework

The EU’s approach to AI regulation has been criticized for its limited scope and flawed design. Human Rights Watch argues that the framework’s focus on “high-risk” AI systems neglects the potential harms of lower-risk systems, such as those used in social services, education, and employment. Moreover, the framework relies heavily on industry self-assessment, which may lead to biased evaluations and undermine efforts to ensure accountability. According to the report, the lack of meaningful constraints on governmental use of AI could pave the way for a dystopian future.

Other critics have focused on the framework’s effect on businesses. The lack of clear guidelines and standards for AI development and deployment may hinder innovation and stifle competition rather than fostering it. Critics also argue that the regulation does not adequately address issues of bias, discrimination, and transparency, leaving marginalized communities vulnerable to AI-driven decision-making that can perpetuate existing social inequalities. While the EU’s attempt to regulate AI is well-intentioned, its current approach arguably falls short of addressing the complex challenges posed by AI and may even exacerbate existing problems. To truly ensure that AI benefits society as a whole, a more comprehensive and inclusive approach to regulation is needed.

Spanish AESIA: The First Regulatory Body

Spain, in particular, has taken a proactive approach to preparing for the AI Act. Building on the AI regulatory sandbox pilot it launched together with the European Commission in 2022, the Spanish government approved in August 2023 the creation of a national agency for the supervision of AI: AESIA (Agencia Española de Supervisión de la Inteligencia Artificial, the Spanish Agency for the Supervision of Artificial Intelligence). AESIA will be responsible for ensuring that AI systems used in Spain comply with the AI Act, and is expected to have the power to investigate and sanction non-compliant companies.

Looking ahead, it will be interesting to see how AESIA evolves and adapts, particularly as AI technologies continue to advance and raise new ethical and societal questions. We may well see further developments and refinements both in the EU’s approach to AI and in the way AESIA operates. Ultimately, the success of initiatives like AESIA will depend on their ability to engage diverse stakeholders, keep pace with rapid technological change, and build public trust through transparent and accountable decision-making.

We will be monitoring how AESIA and the EU’s AI framework fare in their efforts to keep pace with the rapidly evolving AI industry.


NMR Legal

NMR Legal is an LLP of attorneys at law based in Turkey. We represent clients in the video game, animation, audio production, Web3, and artificial intelligence industries.