The Deeper Problem With Google’s Racially Diverse Nazis

Generative AI is not built to honestly mirror reality, no matter what its creators say.
Illustration by Paul Spella / The Atlantic; Source: Keystone-France / Getty
Is there a right way for Google’s generative AI to create fake images of Nazis? Apparently so, according to the company. Gemini, Google’s answer to ChatGPT, was shown last week to generate an absurd range of racially and gender-diverse German soldiers styled in Wehrmacht garb. It was, understandably, ridiculed for not generating any images of Nazis who were actually white. Prodded further, it seemed to actively resist generating images of white people altogether. The company ultimately apologized for “inaccuracies in some historical image generation depictions” and paused Gemini’s ability to generate images featuring people.
The situation was played for laughs on the cover of the New York Post and elsewhere, and Google, which did not respond to a request for comment, said it was endeavoring to fix the problem. Google Senior Vice President Prabhakar Raghavan explained in a blog post that the company had intentionally designed its software to produce more diverse representations of people, which backfired. He added, “I can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results—but I can promise that we will continue to take action whenever we identify an issue,” which is really the whole situation in a nutshell.