
Digital giants sign pact against misleading use of AI in elections

On Friday, February 16, nearly twenty major digital companies signed a three-page text laying out key principles and means of action. Among them are Google, Meta, OpenAI, Microsoft, Amazon, X, TikTok, Adobe, Snap, and Stability AI, which promise, through this agreement, “to help prevent deceptive AI-generated content from interfering with this year’s elections around the world,” as the joint press release puts it.

The text was signed on the sidelines of the Munich Security Conference, an event that brought together numerous ministers and heads of government, including American Vice President Kamala Harris, European Commission President Ursula von der Leyen, German Chancellor Olaf Scholz, and Ukrainian President Volodymyr Zelensky.

With half of the world’s voting-age population expected to go to the polls in 2024 (in elections both free and predetermined), pressure is mounting on big digital companies, whose tools and platforms are routinely used for political manipulation. This year, generative artificial intelligence technology, which makes it possible to create images from scratch or manipulate video and sound, is the focus of concern.

Among the signatories are companies at the forefront of developing these tools, such as OpenAI and Google; others whose platforms are used to distribute this content, such as TikTok; and sometimes both at once, as with Meta, the owner of Facebook, Instagram, and WhatsApp.

Content traceability

In this text, the twenty companies target “deceptive AI-generated audio, video, or images that dishonestly imitate or alter the appearance, voice, or actions of candidates, election officials, or other key stakeholders, or that provide voters with false information about when, where, and how to vote.”

They pledge to work on developing common tools, for example to “mark” AI-generated content so that it can be identified, ideally at the moment of its creation. Each AI-generated image would then carry a kind of digital watermark, readable by machines. Work is already underway, notably around the C2PA standard.
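The article does not detail how such marking works, and the C2PA standard itself relies on cryptographically signed manifests rather than a simple text tag. As a rough illustration of the underlying idea only (machine-readable provenance metadata embedded in the image file itself), here is a minimal Python sketch that builds a tiny PNG in memory and writes and reads a custom provenance note in its metadata chunks. The chunk layout follows the PNG specification; the “Provenance” keyword and the note format are invented for this example and are not part of C2PA.

```python
import struct
import zlib

def _chunk(tag: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, tag, data, CRC-32 of tag+data."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

def make_png(width: int = 1, height: int = 1) -> bytes:
    """Build a minimal valid 8-bit grayscale PNG in memory."""
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    # Each scanline starts with a filter byte (0 = no filter).
    raw = b"".join(b"\x00" + b"\x00" * width for _ in range(height))
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw))
            + _chunk(b"IEND", b""))

def tag_provenance(png: bytes, note: str) -> bytes:
    """Insert a tEXt chunk carrying a provenance note after IHDR."""
    # IHDR is always first: 8-byte signature + 25-byte IHDR chunk.
    head, tail = png[:33], png[33:]
    text = _chunk(b"tEXt", b"Provenance\x00" + note.encode("latin-1"))
    return head + text + tail

def read_provenance(png: bytes):
    """Scan the chunks and return the provenance note, if any."""
    pos = 8  # skip the PNG signature
    while pos + 8 <= len(png):
        length, tag = struct.unpack(">I4s", png[pos:pos + 8])
        data = png[pos + 8:pos + 8 + length]
        if tag == b"tEXt" and data.startswith(b"Provenance\x00"):
            return data.split(b"\x00", 1)[1].decode("latin-1")
        pos += 12 + length  # 4 length + 4 tag + data + 4 CRC
    return None

tagged = tag_provenance(make_png(), "generated-by: example-model")
print(read_provenance(tagged))  # prints: generated-by: example-model
```

A plain metadata tag like this is trivially strippable, which is precisely why the signatories point to standards such as C2PA, where the provenance record is cryptographically signed and tampering is detectable.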

These companies also affirm their intent to detect this misleading political content on their platforms, and mention several possible approaches: using detection technologies, making it easy for creators to indicate that they used AI, or allowing users to report suspicious content.

But once such content is found, how do the platforms plan to act? The text remains relatively vague, evoking “swift and proportionate responses.” “This may include – but is not limited to – adopting and publishing policies, and working to provide contextual information” when such material is detected. Vague, but these companies are trying to reconcile the fight against misleading content with freedom of expression – they make clear that they will “pay attention to context, and preserve academic, documentary, artistic, satirical, and political expression.” The signatories also pledge to be transparent about the policies they implement and to help inform the general public about existing threats.

Still rare cases

The non-binding agreement serves above all as a broad declaration of principle on a sensitive topic where much is expected of these companies. It contains no sweeping new measures and is consistent with what some of these groups have already announced. In July, Google, Meta, OpenAI, and Microsoft, for example, committed to the White House to develop ways of telling their users when content was generated by AI. Meta reiterated this commitment in early February in a blog post, stating its intent to label “in the coming months” any AI-generated image.


These companies are under pressure from various governments, which are threatening to legislate on artificial intelligence. With this agreement, they hope to reassure legislators about their capacity for self-regulation. The European Union, for its part, has already acted: the AI Act, approved by the twenty-seven member states on February 2, should enter into force in 2025. The text imposes, among other things, the labeling of deepfakes.


The twenty signatories were, however, careful to recall that AI-generated content does not represent the only risk, and that traditional manipulations (“cheap fakes”) can be used for the same purposes. Last year, a video edited to make it appear that Joe Biden was repeatedly touching his granddaughter inappropriately circulated widely. The signatories also stressed that AI is a powerful tool for protecting against manipulation attempts, citing the rapid detection of malicious campaigns and the ability to work across many languages and at scale.

The companies also stressed that the fight is not their responsibility alone: “We are committed to doing our part as technology companies, while acknowledging that the deceptive use of AI is not just a technical challenge, but a political, social, and ethical problem, and we hope that the rest of society will likewise commit to doing its part.”

So far, few examples of misleading political content have emerged. New Hampshire residents recently received robocalls reproducing Joe Biden’s voice, aimed at discouraging them from voting in the January primary. That is enough to fuel fears of a surge ahead of the coming elections, even if actual cases remain rare. For the moment, most harmful AI-generated content mainly targets women, depicted in pornographic scenes, as the American star Taylor Swift recently experienced.
