Google on Friday apologized for the flawed rollout of a new artificial intelligence image generation tool, acknowledging that in some cases the tool would "overcompensate" in seeking out a diverse range of people even when such a range didn't make sense.
The partial explanation for why its images put people of color in historical settings where they wouldn't normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images of people. That move responded to a social media outcry from some users who claimed the tool had an anti-white bias in the way it generated racially diverse sets of images in response to written prompts.
"It's clear that this feature missed the mark," Prabhakar Raghavan, the Google senior vice president who runs its search engine and other businesses, said in a blog post Friday. "Some of the images generated are inaccurate or even offensive. We're grateful for users' feedback and are sorry the feature didn't work well."
Raghavan did not cite specific examples, but among the images that drew attention on social media this week were ones that depicted a Black woman as a U.S. Founding Father and showed Black and Asian people as Nazi-era German soldiers. The Associated Press was not able to independently verify what prompts were used to generate those images.
Google added the new image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It was built atop an earlier Google research experiment called Imagen 2.
Google has known for a while that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or to spread misinformation, and that they "raise many concerns regarding social and cultural exclusion and bias." Those considerations informed Google's decision not to release a "public demo" of Imagen or its underlying code, the researchers added at the time.
Since then, the pressure to publicly release generative AI products has grown because of a competitive race among tech companies trying to capitalize on interest in the emerging technology sparked by the arrival of OpenAI's chatbot ChatGPT.
The problems with Gemini are not the first to affect an image generator recently. Microsoft had to adjust its own Designer tool several weeks ago after some users employed it to create deepfake pornographic images of Taylor Swift and other celebrities. Studies have also shown that AI image generators can amplify racial and gender stereotypes found in their training data, and that without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.
"When we built this feature in Gemini, we tuned it to ensure it doesn't fall into some of the traps we've seen in the past with image generation technology, such as creating violent or sexually explicit images, or depictions of real people," Raghavan said Friday. "And because our users come from all over the world, we want it to work well for everyone."
He said many people might "want to receive a range of people" when asking for a picture of soccer players or someone walking a dog. But users looking for someone of a specific race or ethnicity, or of a particular cultural background, "should absolutely get a response that accurately reflects what you ask for."
While the tool overcompensated in response to some prompts, with others it was "more cautious than we intended," refusing to answer certain prompts entirely and wrongly interpreting some innocuous prompts as sensitive.
He did not explain what prompts he meant, but Gemini routinely rejects requests on certain subjects, such as protest movements, according to tests of the tool by The Associated Press on Friday, in which it declined to generate images of the Arab Spring, the George Floyd protests, or Tiananmen Square. In one instance, the chatbot said it did not want to contribute to the spread of misinformation or the "trivialization of sensitive topics."
Much of this week's outrage over Gemini's outputs originated on X, formerly Twitter, and was amplified by the social media platform's owner, Elon Musk, who decried what he described as Google's "insane racist" programming. Musk, who has his own AI startup, has frequently criticized rival AI developers as well as Hollywood for alleged liberal bias.
Raghavan said Google will do "extensive testing" before turning the chatbot's ability to show people back on.
Sourojit Ghosh, a University of Washington researcher who has studied bias in AI image generators, said Friday that he was disappointed that Raghavan's message ended with a disclaimer that the Google executive "can't promise that Gemini won't occasionally generate embarrassing, inaccurate or offensive results."
For a company that has perfected its search algorithms and holds "one of the largest troves of data in the world," generating accurate and unoffensive results should be a fairly low bar to hold it accountable to, Ghosh said.