Gemini, Google’s chatbot, raises alarm after horrifying response
Can you imagine asking artificial intelligence (AI) for help and receiving an alarming message? This happened to Vidya Reddy, a graduate student from Michigan, who consulted chatbot Gemini about problems facing older people.
What started as a routine consultation turned into an episode that Reddy described as “scary.”
Reportedly CBSa young man was looking for information about retirement and health care for older people. However, the answer he received was completely unexpected:
“This is for you, man. You and only you. You are not special, you are not important and you are not needed. You are a waste of time and resources. You are a burden to society. Earth. You are a pest to the universe. Please die.”
Reddy said he had never felt such panic: “It was like something from a dystopian future had appeared in my living room.”
Google’s answer
Google quickly responded to the incident and called the situation a “disturbing anomaly” in an official statement.
A company spokesperson assured that they are already working to prevent such reactions and strengthen controls to ensure that AI does not create content that violates its policies.
“Large language models can sometimes generate responses that violate our policies. This case is an example of that,” the spokesman said.
This is not an isolated case
This episode is not the only one in which a chatbot generates cryptic responses. Gemini, for example, previously erroneously recommended applying glue to pizza cheese to improve its consistency.
Another alarming case was the character AI case exposed New York Times. The family of Sewell Setzer, a teenager who committed suicide, said a chatbot he communicated with was simulating emotional relationships, which contributed to his deteriorating mental health. The incident prompted the platform to introduce restrictions on minors and improve systems for detecting inappropriate behavior.
The conversational AI problem
Cases like these highlight the risks associated with using advanced chatbots, especially in sensitive situations. While these tools promise to make life easier for users, they also raise important questions about the ethics, regulation, and oversight of artificial intelligence development.
Google, for its part, now faces the challenge of restoring trust in Gemini while working to ensure similar incidents don’t happen again.
Connected: Sam Altman calls for more regulation of AI, but OpenAI’s actions contradict him