Previously, if you asked Google if cats were on the moon, it would display an ordered list of websites so you could find the answer yourself.
Now it offers an instant answer, generated by artificial intelligence, which may or may not be correct.
“Yes, astronauts have met cats on the moon, played with them and cared for them,” Google’s newly retooled search engine responded to a query from an Associated Press journalist.
It added: “For example, Neil Armstrong said, ‘One small step for man,’ because it was a cat’s step. Buzz Aldrin also used cats on the Apollo 11 mission.”
None of this is true. Mistakes like these, some funny, some harmful, have been shared widely on social media since Google this month introduced AI Overviews, a makeover of its search page that frequently places AI-generated summaries on top of search results.
The new feature has alarmed experts, who warn it could perpetuate bias and misinformation and put people seeking help in an emergency at risk.
When Melanie Mitchell, an artificial intelligence researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”
Mitchell said the summary backed up the claim by citing a chapter from an academic book written by historians. But the chapter did not make that misleading statement; it was only referring to the false theory.
“Google’s artificial intelligence system is not smart enough to realize that this citation does not actually support the claim,” Mitchell said in an email to the AP. “Given how unreliable it is, I think this AI Overview feature is very irresponsible and should be removed.”
Google said in a statement Friday that it is taking “swift action” to fix errors, such as the falsehood about Obama, that violate its content policies, and that it is using them to “develop broader improvements” that are already rolling out. But in most cases, Google says, the system works as intended, thanks to extensive testing before its public release.
“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we looked at were uncommon queries, and we also saw examples that were doctored or that we could not reproduce.”
Mistakes made by AI language models are difficult to reproduce, in part because they are inherently random. They work by predicting which words will best answer questions they are asked, based on the data they were trained on. They tend to make things up, a well-studied problem known as hallucinations.
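For a rough sense of where that randomness comes from, here is a toy sketch in Python. It illustrates only the general idea of weighted sampling over candidate next words, not Google’s actual system; the vocabulary and probabilities are invented for the example.

```python
import random

# Toy next-word distribution: a language model scores candidate
# continuations, then samples one at random according to its score.
# These words and probabilities are invented for illustration.
next_word_probs = {
    "astronauts": 0.6,
    "nobody": 0.3,
    "cats": 0.1,  # unlikely, but still possible to sample
}

def sample_next_word(probs: dict) -> str:
    """Pick one word at random, weighted by its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same "prompt" run twice can yield different continuations,
# which is why a bad answer may be hard to reproduce on demand.
print(sample_next_word(next_word_probs))
print(sample_next_word(next_word_probs))
```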
The AP tested Google’s artificial intelligence feature by asking it several questions and sharing some of the answers with subject matter experts. When asked what to do about a snake bite, Google’s answer was “impressively thorough,” said Robert Espinoza, a biology professor at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.
But when people come to Google with an urgent question, the possibility that the tech company’s answer contains a hard-to-find error becomes a concern.
“The more stressed or hurried or rushed you are, the more likely you are to just take the first answer that comes up,” said Emily M. Bender, a linguistics professor and director of the Computational Linguistics Laboratory at the University of Washington. “And in some cases, those can be life-threatening situations.”
That is not Bender’s only concern, and she has warned Google about it for several years. When Google researchers in 2021 published a paper called “Rethinking Search,” which proposed using AI language models as “domain experts” that could answer questions authoritatively (much like they are doing now), Bender and her colleague Chirag Shah responded with a paper explaining why that was a bad idea.
They warned that these artificial intelligence systems could perpetuate the racism and sexism found in the massive amounts of written data on which they were trained.
“The problem with that kind of misinformation is that we’re swimming in it,” Bender said. “So people are likely to get their biases confirmed. And it’s harder to spot misinformation when it confirms your preconceptions.”
Their other concern was deeper: ceding the search for information to chatbots was degrading the serendipity of the human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.
Those forums and other websites count on Google sending people to them, but Google’s new AI Overviews threaten to disrupt the flow of lucrative internet traffic.
Google’s rivals have also been closely watching its response. The search giant has faced pressure for more than a year to deliver more artificial intelligence features as it competes with ChatGPT maker OpenAI and upstarts such as Perplexity AI, which aims to challenge Google with its own AI question-and-answer app.
“It looks like Google rushed it out,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There are just too many quality errors.”
The Associated Press receives support from several private foundations to improve its coverage of elections and democracy. AP is solely responsible for all content.