France salvages what it can

The AI Act, the European project to regulate artificial intelligence (AI), was finally approved by all member states on Friday, after a final upheaval caused by pushback from France, Germany and Italy. The text sparked heated debates right up to the last days of its adoption. To understand why, a quick look back is needed.

The AI Act saw the light of day in April 2021, well before the media buzz generated by ChatGPT. Generative artificial intelligence was therefore not yet on anyone's mind: the aim was to legislate against the most dystopian uses of AI, such as Chinese-style social credit, the use of emotion-recognition algorithms on surveillance cameras, or predictive policing systems.

Defending European champions of generative AI

The emergence of generative AI changed the game and forced adaptations to the text. One of the key contributions of the AI Act is to classify artificial intelligence systems into several categories according to their level of risk, with constraints ranging from none at all to an outright ban for the riskiest systems.

To deal with the specific case of generative AI, a separate category was added, covering "foundation models", the large models capable of generating text or images, such as ChatGPT or MidJourney. In particular, it introduces transparency obligations regarding the data used to train these models, as well as respect for copyright, which gave rise to heated debates.

Generative AI has, however, quickly spawned technological colossi, mostly American (including OpenAI, which has reinforced Microsoft's dominance, and the very promising start-up Anthropic), but also European gems, such as France's Mistral AI and Germany's Aleph Alpha.

A Franco-German blocking minority thus feared nipping innovation in the bud by regulating generative AI too hard and too early, ruining Europe's chances of producing big names in the field, even as American and Chinese players benefit, for the moment, from a certain laissez-faire.

A regulation broadly favorable to the French position

Now that the text has been voted on, did the maneuver succeed, as the French government claims? Overall, yes, according to Eric Le Quellenec, a lawyer specializing in the digital economy at Simmons & Simmons.

France notably obtained that the computing-power threshold above which the most powerful models are deemed to present systemic risks, and are therefore subject to very strict constraints, particularly in terms of documentation and evaluation, "is so high that in practice it only concerns ChatGPT", notes the lawyer.

For these models, the law also introduces safeguards from the design stage (privacy by design), which inevitably requires legal engineering: "This can slow down young projects by requiring the involvement of several lawyers to ensure that the rules are respected. Enough to delay a market launch by a month or two, which could be crucial in the face of fierce international competition."

However, France could not win on everything and had to accept a compromise. "France aimed to eliminate any additional regulation for generative AI, which it did not achieve," recalls the lawyer. Paris also wanted to give itself more time before these new provisions take effect, but on the contrary things will move very quickly: the rules on high-risk AI should apply from the first quarter of 2025, and those on generative AI from the second quarter.

The thorny question of copyright

Overall, it's a balancing act, according to Jean-Baptiste Bouzige, a French expert in data science and AI, president and founder of Acmetrix. "The overall philosophy of the text remains: a proactive law that proposes to regulate a priori rather than a posteriori, that takes into account the risks inherent in AI, and that intends to leverage European normative power."

Whereas for a time the French opposition raised fears that the project would at best be drained of its substance, or at worst consigned to oblivion, that ultimately did not happen.

Generative AI companies, for example, will be required to be transparent about the data they use for their models, even though Paris had "business secrets" written into the law at the last minute. This dimension is particularly important for the cultural industries, with artists denouncing the use of their works by start-ups to train generative AI algorithms without their consent. Conversely, transparency paves the way for their remuneration.

"When the algorithm works like a black box, it is difficult to allocate a share of the revenue to rights holders. The law should therefore make things easier," notes Jean-Baptiste Bouzige. A way to anticipate problems that can benefit everyone, including tech players. "As the first lawsuits over copyright and generative AI are already appearing in the United States, proactive legislation in this area, such as the AI Act, should help technology companies avoid this type of legal headache upstream."