AI: 4 questions to Kai Zenner

Article | IT and Data Protection | 28/07/23 | 7 min.

4 questions to Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament. Mr. Zenner is involved in the political negotiations on and the drafting of the AI Act.
 

  1. The proposed regulation on artificial intelligence is currently at the trilogue stage. What are the most difficult points of negotiation?

Before answering your question, it is important to underline that the political groups had huge problems agreeing on a common line. The European Parliament is very much divided on the question of AI. The political compromise that was eventually found in June is therefore very, very fragile. This does not make the negotiations with the Member States easy, as the co-rapporteurs have hardly any room to manoeuvre. With that in mind, the key topics in the coming months should be the following:

Firstly, prohibited techniques of artificial intelligence (AI), and in particular biometric identification. The Council of the EU is demanding significant exemptions here, in particular for law enforcement agencies. For the Member States it is crucial that, for example, the French police forces are able to use facial recognition AI systems during the Paris 2024 Summer Olympics. The Parliament takes the opposite position: a majority of political groups stated that remote biometric identification in all publicly accessible spaces should be banned, with no exemptions granted.

Secondly, enforcement and governance. While some political actors would favour an AI agency or at least a very powerful EU body (a centralised approach), others have pleaded in favour of a rather powerless EU coordination mechanism, with most competences remaining with the Member States. What also makes this topic tricky is that the Parliament and the Council are divided internally, with varying positions among the political groups as well as among the Member States.

Thirdly, generative AI systems like ChatGPT and their underlying foundation models such as GPT-4. The European Commission originally did not include them in its proposal for an AI Act. The technical advances over the past 24 months have, however, convinced most people within the EU institutions that this decision needs to be reconsidered.

Finally, the fundamental rights impact assessment and all related issues. The European Parliament wants many safeguard mechanisms to be in place before an AI system can be deployed. Individuals should also have the possibility to challenge certain decisions made by an AI system, or to seek redress in court for harms caused, for instance by filing class action proceedings. The Member States are rather reserved on all of these demands.

 

  2. How does the draft text envisage the regulation of generative AI?

The European Parliament did a lot of research on this topic and intensively discussed the different policy options. It also observed closely what happened after the release of ChatGPT in November 2022. Eventually, we put forward a two-level approach in Articles 28 and 28b, trying to address the challenges posed by generative AI effectively along the whole AI value chain.

The first level is determined by Article 28b. The European Parliament wants to make sure that foundation models, which we see as the building blocks of thousands of downstream AI systems, including those that fall under the term 'generative AI', are in principle well understood and safe to use. Since foundation models do not yet feature an intended purpose, they are unable to fulfil the concrete obligations for high-risk AI systems in Articles 9-15 of the AI Act. Therefore, parliamentarians developed certain minimum standards that developers like OpenAI need to fulfil before their foundation models enter the AI value chain.

This is the moment when the second level, represented by Article 28 of the AI Act, kicks in. It is based on the realisation that a contemporary AI system is highly complex and involves so many different market actors that compliance with the AI Act should be spread across many shoulders. The European Parliament wanted to avoid a situation in which only the European ‘downstream’ providers of the AI system placed on the market need to care about compliance with the AI Act. Those companies should of course carry most of the regulatory burden, but other market actors should at least assist them and share all the information needed to become compliant.

 

  3. Some people are concerned that this regulation could hinder innovation. What do you think of this?

I think they have a point, which is why I, in principle, also share some of the concerns recently voiced by President Macron. It is a pity that the European Union did not stick to its early ideas on how to become a global leader in AI. When the European Commission presented the AI White Paper in 2020, it proposed the establishment of an ecosystem of trust (a regulatory framework to make AI safe) and an ecosystem of excellence (measures to promote innovation and the deployment of AI). The document was clear that we need both if we want to achieve our ambitions in AI.

The AI Act is, however, very one-sided: of its 85 articles, two or three concern innovation, and the rest address the risks posed by AI and how to tackle them. There are of course significant risks posed by AI, some of which have already materialised. But AI could also help us overcome important challenges like climate change and starvation. Those positive use cases hardly play a role. Moreover, not much is said about how to boost innovation and make it worthwhile to invest in AI. To be fair, together with the AI Act the Commission did propose a communication, the AI Coordinated Plan, which tries to push for an ecosystem of excellence and lists ways to become more innovative. The problem is that the Coordinated Plan was not binding, which is why it led to 27 different national AI strategies, causing even more fragmentation and legal uncertainty within the EU.
 

  4. Can the AI Act, like the GDPR, establish global standards, or are joint efforts with other states and regions of the world (OECD, etc.) required to that end?

At this moment, no one can answer this question with certainty. What increases the chances of a second Brussels effect is that the AI Act is based on internationally agreed concepts like transparency, human oversight and fairness. It is important to underline in this regard that the AI Act is not a purely European invention: it draws heavily on the excellent preparatory work of organisations such as the OECD, which have helped to establish a kind of common understanding across the globe over the past two decades. If the AI Act manages to translate those internationally agreed principles into concrete law for the first time, especially if it does so in a meaningful and balanced way, many non-European countries will indeed follow and design very similar AI laws.

The issue with this positive outlook is that the EU has recently moved quite a bit away from what has been discussed internationally. The Council and the Parliament have reshaped the AI Act and added items that reflect very specific EU requirements. This development makes it more and more difficult for other countries to understand our AI rules, let alone integrate them into their legal systems.

In the end, the most likely scenario is that, as with the GDPR, other countries will cherry-pick certain parts of the EU AI Act that suit them while ignoring the rest. Maybe they will even take some time to closely monitor which parts are working and which are failing in the EU, trying not to repeat the same mistakes. We saw something similar happen with the GDPR: take California, for instance, which has a GDPR-like data protection law but only took on board those parts that make sense for that US state.
 
