AI thinks you should change your turn signal fluid
Artificial intelligence has once again become the butt of a joke on social networks thanks to one of its answers.
Gone are the days when you could look up information online and get a relatively reliable answer. Artificial intelligence has changed the way we ask questions; some experts even say that tools like ChatGPT can replace traditional search engines.
Google doesn’t want to be left behind and introduced new artificial intelligence capabilities for its applications at the Google I/O developer conference. The company is set to usher in a new era and Android Auto will be one of its top bets with an improved mobile app experience.
Google's AI Overview tool has sparked controversy on social media, with some users openly mocking it. The X (formerly Twitter) user known as @daltoneverett decided to test it with a simple query.
He typed "the flasher does not make a sound" into the Google search bar. Instead of pointing to the usual opinion forums or the list of SEO-optimized articles at the top of the results, Google now leads with an AI-generated answer.
The answer surprised him: Google recommended the following steps: "replace the bulb, replace the fuse, adjust the connections, refill the turn signal fluid, and check the control panel for faults."
The AI had fallen for the old joke "have you checked the fluid in your turn signals?", long used among car enthusiasts to tease novices, and applied it in the most literal sense.
He shared his experience on X, where his post has already received over 27,000 views. AI Overview aims to make searching easier by offering a summary of key sources of information, but sometimes it gets things wrong.
Why does AI make these mistakes?
Criticism was quick to arrive, with many users claiming that this technology is not ready for the market. According to Jalopnik, AI Overview will need a trial period of several months before companies can start relying on the tool.
One of the tasks of AI is to grasp the context of the information it receives and provide an answer. Today’s technology is still far from this, so it can cause some confusion when handling humor or irony.
The term "artificial intelligence hallucinations" is gaining currency. It describes cases in which an AI system presents false or fabricated information as if it were reliable.
AI often fails to understand the context surrounding the data it processes, so it assumes the text it was trained on is correct. In other cases, the information it produces is completely fictitious, which leads to misunderstandings and comical situations.
Google is one of the companies most affected by this phenomenon. A report analyzed by ComputerToday shows that Google's artificial intelligence hallucinates 27.2% of the time, while the GPT and Llama models perform best, at 3% and 6%, respectively.