Google stops AI images: "Good in essence but misses the mark"
Google has paused the ability of its AI tool Gemini to generate images after criticism of historically inaccurate depictions of ethnicity. The criticism mainly concerns non-white people being placed in incorrect historical contexts.
The Verge writes about examples where soldiers in Nazi Germany and 19th-century American senators are depicted as Black men or as Native American women.
Google writes in a statement that it is "generally a good thing" that Gemini can generate many different kinds of people, but that they have "missed the mark" here.
The Washington Post writes that the issue is the latest example of how tech companies' new AI products end up in the middle of the "culture war" about representation and diversity.
...................................................
Gemini refused to generate images uncomfortable for Beijing
There is a lot of buzz around Google's AI image tool Gemini. Among other things, users report that the tool refuses to create images that are uncomfortable for the regime in Beijing, including the Tiananmen Square massacre in 1989 and the pro-democracy protests in Hong Kong in 2019, Al Jazeera reports.
In one example that has spread on social media, the tool declines a request to create an image of what happened in Tiananmen Square.
"It is important to approach this subject with respect and accuracy, and I cannot be sure that an image generated by me would adequately capture the nuances and seriousness of the situation," Gemini replies.
There also appear to be examples of Gemini refusing to translate a number of Chinese phrases that are sensitive or banned by the Chinese Communist Party.