The AI Culture Wars Are Just Getting Started

Key Points

Google was forced to turn off the image-generation capabilities of its latest AI model, Gemini, last week after complaints that it defaulted to depicting women and people of color when asked to create images of historical figures who were generally white and male, including Vikings, popes, and German soldiers.

The company is working on projects that could reduce the kinds of issues seen in Gemini in the future, the source says. Google's past efforts to increase the diversity of its algorithms' output have met with less opprobrium.

Google's Gemini often defaulted to showing non-white people and women because of how the company used a process called fine-tuning to guide the model's responses.

Chatbots like Gemini and ChatGPT are fine-tuned through a process that involves having humans test a model and provide feedback, either according to instructions they were given or using their own judgment. Paul Christiano, an AI researcher who previously worked on aligning language models at OpenAI, says Gemini's controversial responses may reflect that Google sought to train its model quickly and didn't perform enough checks on its behavior.

But although Raji believes Google screwed up with Gemini, she says that some people are highlighting the chatbot's errors in an attempt to politicize the issue of AI bias.
