
The oversight mechanisms that Brussels imposes on AI tools are not sufficient to ensure their respect for fundamental rights. That is the conclusion of the European Union Agency for Fundamental Rights (FRA), which has analyzed the impact of artificial intelligence in five areas considered high-risk: asylum, education, employment, law enforcement and social benefits.
The report, published by the rights watchdog on Thursday, represents a setback for the Commission, which just two weeks ago gave technology companies an additional 16 months to adapt their “high-risk” AI tools to European regulations. According to this independent EU agency, it is still not clear under the European rules how high-risk systems should be handled. What is more, the companies that develop them do not know how to prevent them from potentially violating basic rights either.
The cases reviewed include the use of AI in job applications, specifically in tools that filter, weigh and rank résumés. The agency also analyzes automated mechanisms for determining whether someone qualifies for disability assistance, as well as tools for monitoring exams or measuring children’s reading ability. “These systems must be reliable, as they can shape important decisions that affect people’s daily lives,” the report states.
The agency concludes that the providers of these systems are aware of the problems they can cause in terms of data protection or gender discrimination, but do not consider that they can also violate fundamental rights. For example, tools that assess children’s reading ability do not take into account the extent to which their assessments affect a minor’s right to education.
The European AI Regulation has been in force since August 2024, although it will not be fully applicable until two years later. It classifies AI systems according to the risks their use poses to citizens and sets different obligations and requirements for each category. For example, “zero-risk” tools, such as email spam filters, face no restrictions at all, while those posing an “unacceptable risk” (those that act beyond a person’s awareness, exploit their vulnerabilities, or infer emotions, race or political opinions) are outright prohibited.
One step below the prohibited systems are “high-risk” applications, which require constant supervision. This category includes remote biometric identification systems, biometric categorization systems, systems affecting the security of critical infrastructure, and those related to education, employment, the provision of essential public services, law enforcement or immigration management.
Confusion in the industry
The Commission decided two weeks ago to postpone the start of supervision of these systems by 16 months. Brussels’ argument: before compliance with the rule can be enforced, standards defining what is and is not acceptable for these tools should have been published, and that has not yet happened.
The FRA report, based on interviews with suppliers, operators and experts, reveals that “many of those who develop, sell or use high-risk AI systems do not know how to systematically assess or mitigate the risks” these tools entail in relation to fundamental rights.
The agency argues that self-regulation, the assessments AI providers can carry out on their own to gauge the potential risks of their systems, is important, but “does not systematically take fundamental rights into account.”