Five questions to Winston Maxwell

Article | Intellectual Property, Media, and Art Law | 22/03/23 | 3 min.

Tech & Digital

Does the draft AI Act adequately cover fundamental rights?

Everyone’s talking about the proposed AI Act, but the European Court of Justice’s decision of June 21, 2022 is just as important. In La Ligue des Droits Humains, the Court had to evaluate whether the PNR Directive, which authorizes police and intelligence authorities to use algorithms to analyze passenger data, is compatible with the EU Charter of Fundamental Rights. The Court had to determine whether these surveillance measures were proportionate and necessary in a democratic society. It found that they were, but set forth a number of conditions. These conditions will apply, with more or less intensity, to any algorithmic processing that has significant impacts on fundamental rights. That’s why I think the Court’s June 21, 2022 decision is at least as important as the AI Act when it comes to AI use cases that create a high risk for fundamental rights.

What are the conditions imposed by the Court?

One of the most radical conditions is to exclude machine learning entirely from algorithmic detection of terrorist risks. Only predetermined human-made rules can be used.

Why did the Court exclude machine learning?

Lack of explainability. Because a human reviewer examining a machine-learning-generated risk score would not be able to understand the “reason” for the score, the human reviewer’s role would become ineffective. Likewise, a citizen who later wants to challenge a decision based on the risk score would lack an effective remedy, because the reasons for the score would remain hidden.
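
To make the contrast concrete, here is a minimal, purely hypothetical sketch in Python of what a predetermined human-made rule looks like next to a machine-learning risk score. The field names, thresholds and the model call are invented for illustration only; they are not drawn from the PNR system.

# Hypothetical sketch: a predetermined, human-written rule versus an opaque ML score.
# All field names and thresholds are invented for illustration.

def predetermined_rule(passenger: dict) -> bool:
    """Every criterion is explicit, so a reviewer can see exactly why an alert fired."""
    return (
        passenger["paid_cash"]
        and passenger["one_way_ticket"]
        and passenger["hours_booked_before_departure"] < 24
    )

alert = predetermined_rule(
    {"paid_cash": True, "one_way_ticket": True, "hours_booked_before_departure": 6}
)
print("Alert raised:", alert)  # the "reason" is readable directly from the rule

# A trained model, by contrast, would only return a number, e.g.
#   risk_score = model.predict_proba(features)[0, 1]
# and neither the reviewer nor the passenger could point to the "reason" behind it.

In the rule-based version, the human reviewer and, later, a court can trace every alert back to stated criteria; a learned score offers no such trace, which is exactly the remedy problem the Court identifies.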

What other conditions did the Court impose?

The Court said that the models used to identify terrorist risks should take account of both “guilt” and “innocence” factors. I found this fascinating, because the Court is trying to insert some fair trial features into the algorithm itself!

The Court said that each algorithmic alert must be reviewed by a human, both to detect false positives (false alerts) and to detect discrimination, though I’m not sure how review of an individual alert can possibly detect discrimination. The Court emphasized that the algorithmic system must not lead to indirect discrimination (“disparate impact” in US parlance) against protected groups.

The Court also required that the system be proven effective in detecting terrorism risks, but did not specify what “effective” means. Measuring effectiveness is a big issue in criminal detection because of the class imbalance problem (the base rate fallacy), which inevitably leads to a high proportion of false positives.
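
To see why the class imbalance problem makes a high proportion of false positives almost unavoidable, here is a back-of-the-envelope sketch in Python. The base rate, sensitivity and false-positive rate are invented figures chosen only to show the arithmetic; they are not statistics from the PNR system.

# Illustration of the base rate fallacy in mass screening.
# All figures are hypothetical.

passengers = 10_000_000           # passengers screened in a year
actual_threats = 10               # assumed base rate: 10 real threats among them

sensitivity = 0.95                # assumed chance the system flags a real threat
false_positive_rate = 0.001       # assumed chance it flags an innocent passenger

true_alerts = actual_threats * sensitivity                          # ~9.5
false_alerts = (passengers - actual_threats) * false_positive_rate  # ~10,000

precision = true_alerts / (true_alerts + false_alerts)
print(f"True alerts:  {true_alerts:.1f}")
print(f"False alerts: {false_alerts:,.0f}")
print(f"Share of alerts pointing to a real threat: {precision:.2%}")

Even with a very accurate system, almost every alert concerns an innocent passenger, simply because real threats are so rare; that is why “effectiveness” is so hard to pin down.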

Does the AI Act cover any of these points?

Not currently. The draft AI Act refers to “human oversight”, to “bias” and to risks to fundamental rights, but it does not yet provide a human rights framework for analyzing the proportionality of AI risks. The Court’s June 21, 2022 decision does that, albeit for a very sensitive use case: anti-terrorism algorithms. Other use cases will be less sensitive than the PNR case, but the Court’s step-by-step analysis of proportionality should provide a template for evaluating any AI application that creates risks for fundamental rights, even private sector measures.
