The culture war over AI values has just begun

Last week, Google was forced to switch off the image-generation capabilities of its latest artificial intelligence model, Gemini, following complaints that it defaulted to depicting women and people of color when asked to create images of historically white and male figures, such as popes and German soldiers. The company publicly apologized and said it would do better. Sundar Pichai, CEO of Alphabet, sent a mea culpa memo to staff on Wednesday. “I know that some of its responses have offended our users and shown bias,” the memo said. “To be clear, that’s completely unacceptable and we got it wrong.”

However, Google’s detractors were not silenced. In recent days, conservative voices on social media have circulated text responses from Gemini that they claim reveal liberal bias. On Sunday, Elon Musk posted screenshots of one such exchange. “Google Gemini is super racist and sexist,” Musk wrote.

The price Google Gemini paid for diversity

A source familiar with the situation says some at Google believe the furor reflects the fact that norms for what is appropriate behavior from artificial intelligence (AI) models are still in flux. The company is working on projects that should mitigate Gemini’s problems in the future, the source said.

Google’s previous attempts to increase the diversity of its algorithms’ output drew less criticism. The company earlier modified its search engine to surface a greater variety of images, so that a search for CEOs, for example, returns more women and people of color, even if that does not reflect the actual makeup of corporate leadership.

Google’s Gemini model often depicted women and people of color by default because the company used a process called fine-tuning to steer the model’s responses. Google was attempting to compensate for the biases that commonly occur in image generators because of the harmful cultural stereotypes embedded in their training images, many of which come from the web and overrepresent white, Western people. Without such adjustments, AI image generators exhibit bias by mostly generating images of white people when asked to depict doctors or lawyers, or by disproportionately showing Black people when asked to create images of criminals. It appears that Google ended up overcompensating, or failed to properly test the effects of the adjustments it made to correct for bias.
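The article attributes Gemini’s behavior to fine-tuning, whose details Google has not published. Purely as an illustration of how this kind of overcorrection can arise, here is a minimal, hypothetical Python sketch of a naive prompt rewriter that injects demographic attributes into any prompt mentioning people; every function name, keyword list, and attribute in it is invented for this example and is not Google’s actual implementation.

    import random

    # Illustrative attribute list a naive rewriter might sample from (hypothetical).
    DIVERSITY_ATTRIBUTES = ["female", "Black", "East Asian", "South Asian", "Hispanic"]

    # Crude signal that a prompt depicts people (hypothetical heuristic).
    PEOPLE_KEYWORDS = {"person", "people", "doctor", "lawyer", "ceo", "soldier"}

    def naive_diversify(prompt: str) -> str:
        """Unconditionally prepend a demographic attribute to any prompt
        that mentions people. The failure mode: the rewrite ignores
        historical or factual context entirely."""
        if set(prompt.lower().split()) & PEOPLE_KEYWORDS:
            return f"{random.choice(DIVERSITY_ATTRIBUTES)} {prompt}"
        return prompt

    # Arguably useful on a stereotype-prone prompt...
    print(naive_diversify("portrait of a doctor"))
    # ...but historically wrong where context matters.
    print(naive_diversify("German soldier in 1943"))

A context-blind rule like this produces exactly the pattern users complained about: it corrects for skewed training data on generic prompts while mangling prompts where demographics are part of the factual content, which is why testing such adjustments across prompt categories matters.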

Why did this happen? Perhaps simply because Google rushed Gemini out. The company has clearly struggled to find the right pace for its AI releases. It was once more cautious with the technology, declining to release a powerful chatbot over ethical concerns. After OpenAI’s ChatGPT took the world by storm, Google shifted course, and the rush may have undermined its quality control.
