Insights

In a nutshell

Artificial intelligence (AI) will officially turn 70 in 2026! Even so, the general public has only recently begun paying attention. Some are still discovering that they are free to use ChatGPT, Bing Chat or, soon, Bard in France. Others started asking questions with the call for a moratorium on AI research made in March 2023[1] by upwards of 1,000 experts, including business leaders like Elon Musk and researchers like Yoshua Bengio, one of the leading pioneers of AI.[2] The fact that Geoffrey Hinton, another of the “Godfathers of AI”,[3] joined the ranks of the “concerned” in early May has done little to calm the debate.[4]

We wanted to do our bit to further the understanding of generative AI systems, whose strong potential, and the legal issues it raises, are a cause for concern. Where is this phenomenon taking us? What does it mean for organizations baking this type of AI into their production processes? To discuss these questions, Philippe Limantour, Chief Technology Officer at Microsoft France, joined Emmanuelle Mignon and Marc Mossé, lawyers at August Debouzy, on May 10th.

Six months ago: the tipping point

"I've been working on AI for 37 years, and I've been waiting for this moment.” For Philippe Limantour, the real AI revolution came a few months ago, with the emergence of generative AI. It took decades of research before major breakthroughs were made.[5]

It all began at the Dartmouth conference in 1956, when a group of researchers set out to imitate human cognitive functions using the computing power of machines. At the time, they thought it would not take long.

But the first significant applications of AI, driven by machine learning, only appeared in the 1990s. The hyperspecialization of trained AI systems was an obstacle, however: for each domain, use case, language, and so on, a model had to be trained by a large number of specialists.


Then, in the 2000s, a clear evolution took place with deep learning,[6] as models grew in size and the volume of data used to train them increased.

Today, the order of magnitude has changed entirely. “GPT-3 has 175 billion parameters”, explains Philippe Limantour. “You can ask it questions, ask it to retrieve information, generate a video or an image...”. Its cross-functional performance far exceeds that of the juxtaposed specialist models that preceded it. Finally, and this is a major innovation, these systems are now driven in natural language. In other words, a layperson can use generative AI to work in any field, as it opens up areas of knowledge previously reserved for specialists. As soon as the general public was able to converse with ChatGPT, in November 2022, all kinds of tests were carried out. With the Bing search engine, uses multiplied, and we witnessed rapid, massive uptake: the sign of a disruptive innovation that is here to stay, according to Philippe Limantour. But it also raises questions about the risks it inevitably entails.
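To make this concrete, here is a minimal sketch of what “driving” a system in natural language looks like in practice. It uses the OpenAI Python client purely as an illustration; the model name and prompts are placeholders, not a recommendation of any particular tool.

```python
# A minimal sketch: querying a generative model in plain language.
# Assumes the `openai` Python package (>= 1.0) is installed and an API
# key is available in the OPENAI_API_KEY environment variable.
# The model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": "Explain, in two sentences, what a large language model is."},
    ],
)
print(response.choices[0].message.content)
```

The notable point is the interface itself: the “program” is an instruction written in ordinary language, which is precisely what puts these systems within a layperson’s reach.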

A few key figures

$11.1 billion

<span style="font-size:10.0pt">is the amount the IA market is expected to reach by 2024, compared to </span><span style="font-size:10.0pt">$</span><span style="font-size:10.0pt">200 million in 2015*</span>

1,550 AI start-ups in 70 countries

<p style="text-align:justify"><span style="font-size:12pt">Average fundraising: $22 million per company*</span></p>

Global productivity

<p style="text-align:justify"><span style="font-size:12pt">By 2035, IA could help boost global productivity by <strong>40%</strong></span></p>

The risks in connection with generative AI

During the learning phase of AI systems, massive amounts of content are injected into them: protected data is "sucked in" alongside unprotected data. Does this "sucking in" of content by AIs infringe copyright? Will the European Union impose rules similar to those governing data mining, under which text and data mining is likely to fall within a copyright exception? But if authors have the right to opt out,[7] how will platforms implement this possibility?
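By way of illustration only, here is a minimal sketch of how a crawler might check a machine-readable opt-out signal before ingesting a page for training. It uses robots.txt, via Python’s standard library, as a familiar stand-in: actual text-and-data-mining reservations may be expressed through other channels (terms of service, metadata), and the bot name is hypothetical.

```python
# A minimal sketch of honoring an opt-out signal before mining a page.
# robots.txt stands in for a TDM reservation; "ExampleTDMBot" is a
# hypothetical crawler name, and the URL is a placeholder.
from urllib import robotparser
from urllib.parse import urlsplit

def may_mine(url: str, user_agent: str = "ExampleTDMBot") -> bool:
    """Return True only if the site's robots.txt allows user_agent to fetch url."""
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetches and parses the site's robots.txt
    return rp.can_fetch(user_agent, url)

if may_mine("https://example.com/articles/1"):
    ...  # fetch the page and add it to the training corpus
else:
    ...  # the publisher has opted out: skip this document
```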

Then, during the content generation phase, the question arises as to who will incur liability for the use of AI for infringement purposes: the company behind the models, the company baking them into its production processes, or the end user? What obligations should economic players assume in terms of monitoring, control and certification of tools?

And what about the risk of generative AI systems being exposed to cyber-attacks, through the injection of biased content during their training phase, for example?

Another topic of discussion is the difficulty of differentiating between AI-generated content and human-produced content. Are there reliable watermarking-based techniques?
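One published line of research, the “green list” statistical watermark described by Kirchenbauer et al. in 2023, gives a flavor of how detection might work: the generator secretly biases its sampling toward a pseudo-random subset of tokens, and a detector later tests whether that subset is over-represented. The toy sketch below is loosely modeled on that idea; the hashing scheme and numbers are illustrative, not any vendor’s actual method.

```python
# A toy sketch of statistical watermark detection, loosely following the
# "green list" idea of Kirchenbauer et al. (2023). Illustrative only.
import hashlib
import math

GREEN_RATIO = 0.5  # fraction of tokens deemed "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the
    previous token, so the split is reproducible at detection time."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_RATIO

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the null
    hypothesis that the text is unwatermarked (rate == GREEN_RATIO)."""
    n = len(tokens) - 1  # number of (previous, current) token pairs
    if n <= 0:
        return 0.0
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    expected = GREEN_RATIO * n
    stddev = math.sqrt(n * GREEN_RATIO * (1 - GREEN_RATIO))
    return (greens - expected) / stddev

# Ordinary text should score near 0; text from a generator that nudges
# sampling toward green tokens should score well above, say, 4.
print(watermark_z_score("the cat sat on the mat".split()))
```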

 

The regulation envisaged by the European Union

The European Union is currently discussing a proposed regulation on AI (the AI Act). With a view to facilitating innovation while protecting society, it has opted for a graduated, risk-based regulatory approach: AI applications are classified according to their level of risk, and the legal rules vary according to this hierarchy,[8] which will evolve over time.

On the liability issues raised by AI models, Marc Mossé told us that a proposed directive is also in the works, addressing issues such as the presumption of causality and access to evidence, to facilitate compensation for victims. The categories established in the proposed regulation have been carried over into the wording of the directive.

 

Other issues were raised during our talk, such as AI systems and their true cognitive abilities, the security controls and measures some companies bake into their tools, the models that are poised to stand out on the market, and the new technological disruptions we are about to live through.

[1] This call was widely reported in the press, in articles such as https://www.courrierinternational.com/article/gpt-4-un-millier-d-experts-de-la-tech-demandent-un-moratoire-sur-la-recherche-en-ia or https://www.liberation.fr/economie/economie-numerique/lintelligence-artificielle-risque-majeur-pour-lhumanite-une-petition-mondiale-reclame-un-moratoire-de-six-mois-20230330_FCER5AORZBATBFQPZAMNNGCVD4/

[2] Yoshua Bengio, Slowing down development of AI systems passing the Turing test, April 2023, https://yoshuabengio.org/2023/04/05/slowing-down-development-of-ai-systems-passing-the-turing-test/

[3] The third being the Frenchman Yann LeCun. All three received the 2018 A.M. Turing Award, often referred to as the "Nobel Prize of Computing", for their work on AI. Yann LeCun is one of the optimists: https://www.lemonde.fr/idees/article/2023/05/04/sur-l-intelligence-artificielle-l-opposition-entre-les-pessimistes-et-les-optimistes-est-simpliste-voire-dangereuse_6171999_3232.html

[4] Cf. Alexandre Piquard’s column, “Sur l’intelligence artificielle, l’opposition entre les pessimistes et les optimistes est simpliste, voire dangereuse” (On artificial intelligence, the opposition between pessimists and optimists is simplistic, if not dangerous), Le Monde, May 2023, https://www.lemonde.fr/idees/article/2023/05/04/sur-l-intelligence-artificielle-l-opposition-entre-les-pessimistes-et-les-optimistes-est-simpliste-voire-dangereuse_6171999_3232.html

[5] Cf. video posted on Les Echos, “L'histoire de l'intelligence artificielle en 7 dates clefs” (The history of artificial intelligence in 7 key dates), https://www.lesechos.fr/tech-medias/intelligence-artificielle/video-lhistoire-de-lintelligence-artificielle-en-7-dates-clefs-1941688

[6] “Comment le ‘deep learning’ révolutionne l'intelligence artificielle” (How ‘deep learning’ is revolutionizing artificial intelligence), Le Monde, July 2015, https://www.lemonde.fr/pixels/article/2015/07/24/comment-le-deep-learning-revolutionne-l-intelligence-artificielle_4695929_4408996.html

[7] The right not to consent to the scraping of their data for model training purposes, by analogy with data mining.

[8] Brunessen Bertrand, “La régulation européenne de l’intelligence artificielle” (European regulation of artificial intelligence), March 2023, https://www.inshs.cnrs.fr/fr/cnrsinfo/la-regulation-europeenne-de-lintelligence-artificielle