Categories: Technology

The culture war over AI values has just begun

Last week, Google was forced to turn off the image-generation capabilities of its latest artificial intelligence model, Gemini, following complaints that it depicted women and people of color by default when asked to create images of figures such as founding fathers and German soldiers. The company publicly apologized and said it would do better. Sundar Pichai, CEO of Alphabet, sent a mea culpa memo to staff on Wednesday. “I know some responses have offended our users and showed bias,” the memo said. “To be clear, this is completely unacceptable and we were wrong.”

However, Google’s detractors were not silenced. In recent days, conservative voices on social media have circulated text replies from Gemini that they claim reveal political bias. On Sunday, Elon Musk posted screenshots of some of these exchanges. “Google Gemini is super racist and sexist,” Musk wrote.

The price Google Gemini paid for diversity

A source familiar with the situation notes that some at Google believe the furor reflects that the norms for what artificial intelligence (AI) models should and should not produce are still taking shape. The company is working on projects that will mitigate Gemini’s problems in the future, the source said.

Google’s previous attempts to increase the diversity of its algorithms’ results have been less than successful. The company previously modified its search engine to display a greater variety of images, so that searches show more women and ethnically diverse CEOs, for example, even though this does not necessarily reflect the actual makeup of the business world.

Google’s Gemini model often favored people of color and women by default because the company used a process called fine-tuning to steer the model’s responses. The company was attempting to compensate for the bias that commonly appears in image generators, which stems from harmful cultural stereotypes in the images used to train them; many of those images come from the Internet and over-represent white and Western people. Without such adjustments, AI image generators reveal this bias by generating mostly images of white people when asked to depict doctors or lawyers, or by disproportionately showing Black people when asked to create images of criminals. It appears that Google ended up overcompensating, or failed to properly test the effects of the adjustments it made to correct for the bias.

Why did this happen? Perhaps simply because Google rushed Gemini out the door. The company has clearly struggled to find the right pace for its AI launches. At one point it took a more cautious approach to its technology, deciding not to release a powerful chatbot for ethical reasons. After OpenAI’s ChatGPT took the world by storm, Google shifted gears, and the rush may have affected its quality control.
