When OpenAI was founded in 2015, Google was already well ahead in machine learning.
In an environment of increasingly powerful and widespread computing resources, the company had already acquired Demis Hassabis' DeepMind and trained its entire staff in these technologies. However, as Clayton Christensen's excellent book The Innovator's Dilemma would predict, this advantage did it little good: being the leading search company made Google endlessly resistant to introducing the technology into its products, even as it declared it was literally "rethinking them all" to include it. That resistance led to what we now know has happened: a much younger company, OpenAI, together with a far more experienced one that had said little on the subject, Microsoft, has overtaken Google on the right.
The causes? The first and most obvious: faced with a nascent technology that was not yet fully under control and which, for statistical reasons, tended to produce artifacts from barely significant correlations, Google was exposed to a trust risk far greater than anything its competitors faced. What would happen if the search engine you use for just about everything suddenly lost its mind and suggested you put glue on your pizza, or recommended running with scissors in your hand as good exercise? Instead of simply laughing and brushing off the answer (assuming you have enough critical common sense to brush it off, and you're not one of those people who always does what Google says), it's very possible you would conclude that Google had gone crazy, and lose your trust not just in those results, but in the search engine as a whole.
Ever since Google announced its AI Overviews, the inclusion of answers generated by its algorithm on its search pages, this is exactly what has been happening: answers that treat memes and jokes as reliable information, that fail to catch irony (Sheldon Cooper, are you on the development team?), or that simply fabricate results from false correlations, constantly calling the search engine's reliability into question. And, to make matters worse, exposing Google to possible liability for answers it didn't merely compile from another source, but made up itself.
If you launch a feature and the first thing you discover is that users are installing plugins specifically designed to eliminate it because they find it annoying, that doesn't seem like a very good sign. Google now faces a major challenge: taking a technology it knows well, but whose limitations it is equally well aware of, and making it compatible not only with users' trust in its core product, but also with its business model and its cost structure. And none of those tasks is easy at all.
This article is also available in English on my Medium page: How long can Google continue to advance artificial intelligence in the future?