AI Regulation: prohibited practices and associated risks

By CECA MAGÁN Abogados
30 Jan 2025

1. Prohibited AI practices

The Artificial Intelligence Regulation (Regulation (EU) 2024/1689, the "RIA" by its Spanish acronym) represents a major step forward in the regulation of artificial intelligence, anticipating potential risks and balancing the protection of fundamental rights with the defence and promotion of innovation.

Until now, the AI Regulation has served mainly as a warning of what was to come, but 2 February 2025 marks the first date of application of its provisions, specifically Chapters I and II, initiating a process of progressive implementation. The next phase of implementation will not arrive until 2 August of this year.

But what do Chapters I and II regulate? Chapter I contains the general provisions of the Regulation, such as its subject matter and scope of application, while Chapter II sets out the prohibited AI practices.

It makes sense that the first provisions to apply are the prohibitions on the practices that entail the most risk. However, in order to identify these risks and know our rights and obligations, both as users of these systems and as potential buyers looking to incorporate them into our companies, we must know what the prohibited AI practices are and what types exist.

Article 5 of the AI Regulation (RIA) does not provide an explicit definition of what counts as prohibited AI, but it does identify the prohibited practices by their purposes. On that basis, we can define prohibited AI as those systems that, by their design, use or purpose, pose such a high risk to the fundamental rights of individuals that they cannot simply be regulated or made subject to mitigation measures, but must be excluded entirely from the market and from use in the European Union.

There is no doubt that, apart from its many beneficial uses, AI can also be misused and provide tools for manipulation, exploitation and social control. Such practices must be prohibited, as they run counter to the Union's values of respect for human dignity, freedom, equality, democracy and the rule of law, and to the fundamental rights enshrined in the Charter, such as the rights to non-discrimination, data protection and privacy, and the rights of the child.
 

2. Types of AI banned under the Regulation

As we have already discussed, Article 5 does not define prohibited AI as such, but rather sets out a list of the purposes and practices for which the use of AI is prohibited.

As we will see below, some of these prohibitions are absolute, while many others admit exceptions:

  • Manipulative or deceptive AI: Systems that employ subliminal, manipulative or deceptive techniques to alter people's behaviour without their informed consent, generating a considerable risk of harm. An example is algorithms that use imperceptible visual or sound stimuli to influence purchasing decisions or voting behaviour.
  • AI used to exploit vulnerabilities: Systems that take advantage of factors such as the age, disability, or social and economic situation of a person or group (usually a vulnerable one) to modify their behaviour or cause them harm. For example: chatbots used to emotionally manipulate elderly or vulnerable people into contracting unnecessary services.
  • Social scoring AI: Systems that evaluate and classify people based on their social behaviour or personal characteristics, generating discrimination or exclusion in contexts unrelated to the one in which the data was originally collected, or in a way that is disproportionate to the severity of the behaviour. Examples of this type of practice include algorithms that deny social benefits or access to banking services based on a person's history of online interactions.
  • Predictive AI to assess criminal risk: Systems designed to predict the probability that a person will commit a crime based solely on their profile or personality traits. The prohibition does not apply, however, to systems that merely support a human assessment based on objective, verifiable facts.

    In addition, this prohibition has another exception: systems that analyse risks without evaluating specific individuals, such as those used to detect financial fraud in companies or to predict the location of illegal goods through traffic patterns.

  • AI for massive facial recognition: Systems that create or expand facial recognition databases from images scraped indiscriminately from the internet or from video surveillance footage, as is the case with platforms that collect public photos from social networks to identify people without their consent.
  • AI for emotion recognition in the workplace and education: Systems that detect emotions at work or in schools, such as those that monitor employees' facial expressions to assess stress levels and adjust their tasks accordingly. These systems have well-known limitations:
    • Limited reliability
    • Emotions can be misinterpreted
    • They don't work the same in all contexts

    However, these artificial intelligence systems may be used for medical or safety reasons (such as therapeutic systems).

  • AI for biometric categorization: Systems that classify people based on biometric data to infer sensitive information such as race, religion, sexual orientation, trade union membership, or political opinions (such as algorithms that classify users by ethnicity by analysing their facial image).
  • Real-time remote biometric identification AI in public spaces: Facial recognition systems used by the authorities for law enforcement purposes (such as street cameras that automatically identify any passer-by without their consent).

    This last prohibition also has exceptions, since such use is allowed in specific cases, such as: the search for missing persons; the prevention of terrorist attacks; or the identification of suspects of serious crimes.

    Even where the use of this artificial intelligence system is allowed in these cases, it must be authorized in advance by an independent judicial or administrative authority. In cases of urgency, use may begin without prior authorization, but authorization must then be requested within a maximum of 24 hours.

In which sectors can they have the most impact?

The systems prohibited by the AI Regulation may have a greater impact in certain sectors where, due to specific factors, the use of AI systems entails more risk.

One example is the advertising, marketing and e-commerce sector, where there is a greater risk of systems being used to manipulate behaviour and influence the decisions of vulnerable consumers or users.

Another example is the human resources sector, where there is a risk of scoring systems being used to evaluate, rank and, therefore, discard candidates based on factors such as their socioeconomic background without objective justification. Something similar happens in the financial sector, where there is a risk of customer assessments based on social behaviour or personal characteristics, such as a bank that uses AI to analyse customers' social media activity and denies them loans or modifies their conditions on that basis.

In the employment context there are also significant risks, such as the use of AI to detect employees' emotions and assess their level of stress or engagement, with the results affecting promotion or dismissal decisions.

3. What should I take into account?

As the provisions of the AI Regulation begin to be progressively implemented, it is time to pay particular attention to the obligations we will undertake, whether as buyers of AI systems, users or in any other role within the chain of actors in the lifecycle of an AI system.

Regardless of our position, it is essential not only to avoid implementing such systems, but also to actively monitor their use and report any irregularities, as the penalties for non-compliance can be high (depending, of course, on the severity of the violation): fines of up to €35 million or 7% of global annual turnover, whichever is higher, can be imposed.
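
To make the order of magnitude concrete, here is a minimal Python sketch of how that ceiling works; the function name and the example turnover figure are our own illustrative assumptions, not anything prescribed by the Regulation:

    # Minimal sketch: ceiling on fines for prohibited-practice infringements
    # (EUR 35 million or 7% of total worldwide annual turnover, whichever is higher).
    def max_fine_ceiling(global_annual_turnover_eur: float) -> float:
        FIXED_CAP_EUR = 35_000_000
        TURNOVER_CAP_RATE = 0.07
        return max(FIXED_CAP_EUR, TURNOVER_CAP_RATE * global_annual_turnover_eur)

    # Hypothetical example: a company with EUR 1 billion in worldwide turnover
    print(max_fine_ceiling(1_000_000_000))  # 70000000.0 -> the 7% cap applies

For a hypothetical company with €1 billion in worldwide turnover, the 7% cap (€70 million) would therefore apply rather than the €35 million fixed amount.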

For the imposition of these sanctions, the authorities will take into account different factors, such as:

  • Severity and duration of the infringement and the number of people affected.
  • Recidivism, that is, whether the offender has already been sanctioned for the same practice.
  • Company size, turnover, and market share.
  • Economic benefit obtained from the infringement or losses avoided.
  • Collaboration with the authorities to correct the problem.
  • Measures taken to mitigate the harm caused to those affected.
  • Whether the violation was intentional or negligent.

In any case, companies and organizations must implement internal mechanisms to:

  • Identify and prevent the use of prohibited AI in their processes (a minimal screening sketch follows this list).
  • Train and raise awareness among employees about risks and regulations.
  • Immediately report any AI misuse they detect to the authorities.
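
Purely by way of illustration of the first point, the following Python sketch shows how an internal AI inventory might be screened against the Article 5 categories. The category labels, data structure and function names are our own assumptions for the example, not anything prescribed by the Regulation:

    from dataclasses import dataclass

    # Illustrative labels for the prohibited-practice categories of Article 5
    PROHIBITED_PURPOSES = {
        "subliminal_manipulation",
        "exploitation_of_vulnerabilities",
        "social_scoring",
        "predictive_criminal_risk_profiling",
        "untargeted_facial_image_scraping",
        "emotion_recognition_work_or_education",
        "sensitive_biometric_categorisation",
        "realtime_remote_biometric_id_public",
    }

    @dataclass
    class AISystemRecord:
        name: str
        declared_purposes: set  # purpose tags assigned during procurement review

    def screen_inventory(inventory):
        """Return the systems whose declared purposes fall into a prohibited
        category and therefore need escalation to legal review."""
        return [s.name for s in inventory if s.declared_purposes & PROHIBITED_PURPOSES]

    # Hypothetical usage with two invented system records
    inventory = [
        AISystemRecord("cv-ranker", {"candidate_scoring"}),
        AISystemRecord("hr-mood-monitor", {"emotion_recognition_work_or_education"}),
    ]
    print(screen_inventory(inventory))  # ['hr-mood-monitor']

A check like this is only a first filter: anything it flags still needs human legal analysis, since the Article 5 prohibitions turn on purpose and context rather than on simple labels.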

Our lawyers are experts in the legal issues involved in the use of artificial intelligence. To resolve any questions about complying with the AI Regulation, you can contact them here.

Data protection and digital law department
