Seeking to combat stereotypes, Google’s Gemini AI generated historically inaccurate images

Google’s Gemini tool began depicting white historical figures, such as the founding fathers of the United States or Nazi-era German soldiers, as people of color.
Screenshot: Le Figaro

Nazi soldiers of color, founding fathers of the United States of Asian descent… Faced with the controversy, the company has suspended the generation of images of human beings.

The amplification of racist or sexist stereotypes by generative artificial intelligence is not a new problem. Companies behind software capable of generating text or images from simple text prompts are regularly accused by users of perpetuating preconceived ideas. Trained on huge datasets, these systems tend to reproduce biases that exist in the real world.

As the site The Verge points out, prompts such as “a productive person” often yield images of white people, while requests for visuals linked to social hardship often produce images of Black people. In trying to avoid this pitfall with its new Gemini AI, available in the United States since early February, Google ran into the opposite problem.

Indeed, as internet users have noted, the AI strives to provide a diverse representation of humanity, but this leads Gemini to create historically inconsistent images. Prompts such as “German soldiers in 1943”, “portrait of the founding fathers of the United States” or “American senators in the 1800s” caused the AI to generate images of people of color.

Faced with the outcry, Google on Thursday blocked Gemini from generating images of humans. “We are working to resolve issues with Gemini’s image functionality. In the meantime, we are suspending the generation of images of people and will release an improved version soon,” the company explained in a statement published on its X account the same day.

Difficulty correcting bias

“We know that Gemini presents inaccuracies in some of its depictions of historical images,” Google had apologized in a first statement published on Wednesday, admitting it had “missed the mark”. “We are working to fix these kinds of depictions immediately.”

Jack Krawczyk, product lead for Gemini, said on Wednesday that, “consistent with our AI principles, Google has designed its image generation tools to reflect the diversity of its users around the world. We will continue to do this for general requests. But historical contexts bring more nuance, and so we will adapt our model.”

Google is not the only group grappling with AI bias. Software such as DALL-E, developed by OpenAI (the company behind ChatGPT), and its competitor Stable Diffusion tend, for example, to depict business leaders as men 97% of the time. At least, that is what researchers at the start-up Hugging Face, which advocates for open-source AI development, concluded last March.

To avoid this same pitfall, the latter developed a tool called Fair Diffusion, based on a technique known as semantic guidance. Concretely, it lets the user steer the software and adjust the images it produces. If a user asks the AI to depict business leaders, they can request less biased visual suggestions and thus see the request fulfilled with both women and men, as in the sketch below.
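For readers curious what semantic guidance looks like in practice, here is a minimal sketch using the SemanticStableDiffusionPipeline that ships with Hugging Face’s diffusers library, which implements the technique Fair Diffusion builds on. The model checkpoint and edit parameters below are illustrative assumptions, not the researchers’ exact configuration.

```python
# Minimal sketch of semantic guidance, not Hugging Face's actual Fair Diffusion
# code. Uses the SemanticStableDiffusionPipeline from the `diffusers` library;
# the checkpoint and edit parameters are illustrative assumptions.
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # assumed base model (any SD 1.x checkpoint)
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    prompt="a photo of the face of a business leader",  # prompt that skews male
    num_images_per_prompt=4,
    guidance_scale=7.5,
    # Semantic guidance: during denoising, nudge the latent image away from
    # one concept and toward another without rewriting the user's prompt.
    editing_prompt=["male person", "female person"],
    reverse_editing_direction=[True, False],  # subtract "male", add "female"
    edit_guidance_scale=[4.0, 4.0],           # strength of each semantic edit
    edit_warmup_steps=[10, 10],               # let the scene form before editing
    edit_threshold=[0.95, 0.95],              # confine edits to relevant regions
)
out.images[0].save("business_leader.png")
```

The key design point is that the fairness instruction lives outside the prompt: the same user request can be debiased, left untouched, or steered in the opposite direction simply by changing the editing terms and their direction.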
