Google is pausing its Gemini AI chatbot's image tool after backlash over historically inaccurate images

Google said Thursday it would temporarily pause its Gemini chatbot's image generation tool after it was widely criticized for producing “diverse” images that were not historically or factually accurate, such as Black Vikings and popes, and Native Americans among the Founding Fathers.

Social media users have slammed Gemini as “ridiculously woke” and “unusable” after requests for representative images of historical subjects returned oddly revisionist pictures.

“We're already working to address recent issues with Gemini's image generation feature,” Google said in a statement posted on X. “While we do this, we're going to pause the image generation of people and will re-release an improved version soon.”

Examples included an AI image of a Black man who appeared to represent George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman in papal garb, even though all 266 popes throughout history have been white men.

One social media user criticized the Gemini tool as “unusable.” (Image: Google Gemini)

In another shocking example, spotted by The Verge, Gemini even produced “diverse” representations of Nazi-era German soldiers, including an Asian woman and a Black man in uniform in 1943.

Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software invents diverse versions of historical figures and events.

“In the name of fighting bias, actual bias is being built into the systems,” William A. Jacobson, a law professor at Cornell University and founder of the Equal Protection Project, a watchdog group, told The Post.

“This is a concern not only for search results but for real-world applications, where demanding a ‘bias-free’ algorithm actually builds bias into the system by targeting end results that amount to quotas.”


The problem may stem from Google's “training process” for the “large language model” that powers Gemini's image tool, according to Fabio Motoki, a lecturer at the University of East Anglia in the UK who co-authored a paper last year that found a noticeable left-leaning bias in ChatGPT.

“Remember, reinforcement learning from human feedback (RLHF) is about people telling the model what is better and what is worse, effectively shaping its ‘reward’ function — technically, the loss function,” Motoki told The Post.

“So, depending on who Google is recruiting, or what instructions Google is giving them, it could lead to this problem.”
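To make the mechanism Motoki describes concrete, below is a minimal sketch of the reward-modeling step at the heart of RLHF: a pairwise (Bradley-Terry) loss that pushes the score of whichever response human raters preferred above the one they rejected. This is an illustration of the general technique only, not Google's pipeline; the model, data, and hyperparameters are all hypothetical.

```python
# Minimal sketch of preference-based reward modeling, the core of RLHF.
# Illustrative only, not Google's training code; all data here is synthetic.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding; trained so preferred responses score higher."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

# Hypothetical rater data: each pair holds the embedding of the response
# the rater preferred and the embedding of the one they rejected.
torch.manual_seed(0)
preferred = torch.randn(64, 16)
rejected = torch.randn(64, 16)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(100):
    # Bradley-Terry pairwise logistic loss: raise the preferred response's
    # reward above the rejected one's. Whatever the raters prefer, whether
    # by instruction or by their own leanings, is baked into this objective.
    loss = -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final pairwise loss: {loss.item():.3f}")
```

The point is visible in the loss line: the model never sees ground truth, only the raters' choices, so who does the rating, and what instructions they are given, directly defines the reward the system is optimized toward.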

It was a major misstep for the search giant, which rebranded its flagship chatbot from Bard to Gemini earlier this month and introduced much-touted new features, including image generation.

Google Gemini has been ridiculed online for producing “woke” versions of historical figures. (Image: Google Gemini)

The blunder also came days after OpenAI, the maker of the popular ChatGPT, introduced a new AI tool called Sora that creates videos from users' text prompts.

Google had previously admitted that the erroneous behavior of the chatbot needed to be fixed.

“We're working to improve these kinds of depictions immediately,” Jack Krawczyk, Google's senior director of product management for Gemini experiences, told The Post.

“Gemini's AI image generation does generate a wide range of people. This is generally a good thing because people around the world use it. But it's missing the mark here.”

The Post has reached out to Google for further comment.

When asked by The Post to provide its trust and safety guidelines, Gemini acknowledged that they had not been “publicly disclosed due to technical complexities and intellectual property considerations.”

Google has not published the parameters that govern Gemini's behavior. (Image: Google Gemini)

The chatbot also acknowledged that it was aware of “criticism that Gemini may have prioritized forced diversity in image generation, resulting in historically inaccurate depictions.”

“The algorithms behind the image generation models are complex and still under development,” Gemini said. “They may have difficulty understanding the nuances of historical context and cultural representation, leading to inaccurate outputs.”
