Two years of ChatGPT: from total dazzlement to the “trough of disillusionment”

“This is an amazing innovation; I was amazed too.” “It sounds much more natural than most similar programs.” “It has intuitively learned how to carry on a conversation on almost any topic.” These are some of the first opinions of artificial intelligence (AI) experts on ChatGPT published in this newspaper. The tool took only a few days to dazzle professionals and laypeople alike, who shared snippets of their conversations with the bot on social media. Suddenly, anyone with an Internet connection could converse with a machine that offered coherent, well-written, if not always truthful, answers. Many people felt they were talking to someone rather than something. This Saturday marks the two-year anniversary of the launch of ChatGPT, which unveiled to the public generative artificial intelligence: systems that produce supposedly original content from human instructions.

What moment is this technology going through? The initial fascination gave way to a business battle to bring this type of tool to market. Microsoft rushed to sign a collaboration agreement with OpenAI, the developer of ChatGPT and Dall-E, and Google took two months to announce the launch of its own model. Today we are already talking about what the consulting firm Gartner calls the trough of disillusionment: the initial euphoria created such high expectations that, when they were not immediately met, interest declined. This is a phase of the natural hype cycle of technological developments, and after some time (less than two years, according to Gartner) the expectation curve usually rises again, albeit more moderately than the first time.

“Two years later, artificial brains are still stochastic know-it-alls: they speak with great authority and seem to know everything, but what they say is not the result of real knowledge, rather of an intuitively acquired ability to appear wise,” summarizes Julio Gonzalo, professor of Languages and Computer Systems at UNED and deputy vice-rector for research. Andrej Karpathy himself, one of the creators of the GPT model (who left OpenAI in February), said a few weeks ago that he sees signs of exhaustion in generative AI: since the first versions of ChatGPT were already trained on almost all the text available on the Internet, new versions will not be able to use much more data than their predecessors already digested. This means the models cannot improve much further. “For the big leap to happen, innovation in algorithmic architecture will be required, as the development of transformers in 2017 was (a type of neural network that plays a key role in large language models),” says Alvaro Barbero, head of data analytics at the Institute for Knowledge Engineering.
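To make the distinction concrete: what Gonzalo calls “appearing wise” is, mechanically, next-token prediction driven by attention. Below is a minimal sketch of the scaled dot-product self-attention at the heart of the 2017 transformer, in plain Python with NumPy; the sizes and weights are illustrative toys, not those of any real model.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention, the core
    operation of the transformer architecture mentioned above."""
    Q = X @ Wq   # queries: what each token is looking for
    K = X @ Wk   # keys: what each token offers
    V = X @ Wv   # values: the information that gets mixed
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V   # each token becomes a weighted mix of all tokens

# Toy run: 4 tokens, embedding size 8, head size 4, random weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 4)
```

Everything a large language model “says” is produced by stacking many such layers and repeatedly predicting a plausible next token, which is why fluency and authority come more easily to it than truth.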

There are concerns on the business front as well. Investors have yet to see a way to make generative AI profitable. In October, OpenAI secured some $10 billion in fresh financing, on top of Microsoft’s $13 billion commitment in 2023, but that may not be enough. GPT-5, originally announced for late 2023, still hasn’t arrived, and analysts are starting to think it won’t be as revolutionary as company CEO Sam Altman made it out to be.

According to OpenAI’s own forecasts, the company will not turn a profit until 2029, and in the meantime it is spending about $500 million a month. According to calculations by the specialized outlet The Information, which estimates the cost of training its models at $7 billion in 2024, OpenAI could run out of money next summer. “In 12 months, the AI bubble will burst,” AI expert Gary Marcus said last July. “The numbers don’t add up, the current approach has stalled, no killer application has been developed, hallucinations (where the system makes something up) still exist, stupid mistakes persist, no one has an insurmountable advantage over the competition, and people are becoming aware of all of the above.”
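The projection is essentially arithmetic. Here is a crude check using only the figures cited above (reported estimates, not audited accounts; it ignores revenue, existing cash and future cost growth):

```python
# Back-of-the-envelope check of the figures cited in this article.
# These are reported numbers, not OpenAI's actual books.
monthly_burn = 0.5e9     # ~$500 million spent per month
new_financing = 10e9     # October financing reported above
training_2024 = 7e9      # The Information's training-cost estimate

annual_burn = 12 * monthly_burn
print(f"annual burn: ${annual_burn / 1e9:.0f}B")                     # $6B a year
print(f"runway on new money: {new_financing / monthly_burn:.0f} months")  # ~20
print(f"2024 training vs. financing: {training_2024 / new_financing:.0%}")  # 70%
```

On these rough numbers, 2024 training costs alone would consume most of the October financing, which is the kind of arithmetic behind The Information’s projection.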

Artificial Intelligence Revolution

Financial considerations aside, there is no doubt that the tool launched on November 30, 2022 has proven momentous. “From my point of view, the advent of ChatGPT was absolutely revolutionary,” says Carlos Gómez Rodríguez, professor of computer science and artificial intelligence at the University of A Coruña and an expert in natural language processing, the branch of AI that seeks to understand and generate text. “For the first time, the same system could do everything without task-specific training. Previously, you could create a Spanish-English translator, but you had to develop it specifically for that purpose. It turned out that by scaling these systems up, a single model was able to do many things. It changed everything in my field of research.”

“Generative AI has provided interesting applications, such as summarizing, writing letters in other languages or extracting information from documents, as well as flawed ones, such as using these models to search for information when what they do is predict rather than search, or to make inferences when they do not reason,” explains Ricardo Baeza-Yates, director of research at the Institute for Experiential AI at Northeastern University (Boston) and professor at Pompeu Fabra University in Barcelona. Generative AI, hand in hand with image and video generators, has also helped blur the line between reality and falsehood through so-called deepfakes, and has enabled more sophisticated and cheaper forms of cyberattack.

Images created with an artificial intelligence tool show a fictitious confrontation between former President Donald Trump and New York police officers. J. David Eick (AP)

Just three months after the launch of ChatGPT, OpenAI introduced the GPT-4 model, a quantum leap over the first version of the tool. In the almost two years since then, however, there has been no comparable advance. “It appears that with GPT-4 we reached the limits of what AI can do by simply imitating our intuition. It was also confirmed that the ability to reason does not magically appear just by enlarging the brain,” Gonzalo illustrates.

Where we are and what remains to be seen

The latest developments in generative artificial intelligence are multimodal systems, which can combine different media (text, image and audio). For example, you can show the latest version of ChatGPT or Gemini a photo of the inside of your refrigerator and have it tell you what to cook for dinner. But it achieves these results intuitively, not by reasoning. “The next thing will be to see whether large language models can act as agents, that is, work autonomously and interact with each other on our behalf. They could book plane tickets and a hotel for us according to the instructions we give them,” Gómez Rodríguez describes.
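For readers wondering what “acting as an agent” means mechanically, here is a minimal sketch of the observe-decide-act loop most agent frameworks implement. The `call_llm` function and the two booking tools are hypothetical stand-ins, not any real API:

```python
# Minimal sketch of an LLM agent loop. `call_llm`, `search_flights`
# and `book_hotel` are hypothetical stubs, not a real service.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-model API here")

TOOLS = {
    "search_flights": lambda arg: f"cheapest flight for {arg}: FL123",
    "book_hotel": lambda arg: f"hotel booked near {arg}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model decides the next action based on everything so far.
        decision = call_llm("\n".join(history) +
                            "\nNext action as 'tool: argument', or 'done: answer'")
        name, _, arg = decision.partition(": ")
        if name == "done":
            return arg
        result = TOOLS.get(name, lambda a: f"unknown tool: {name}")(arg)
        history.append(f"{decision} -> {result}")  # feed the observation back in
    return "step limit reached"
```

The key design point is the loop itself: the model’s own text chooses the next tool, the tool’s result is appended to the context, and the cycle repeats until the model declares itself done or hits the step limit.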

“I think generative AI models are reaching their limit, and other elements will need to be added, such as true knowledge (Perplexity and others already cite the sources they use), deductive logic (classical AI) and, in the long term, common sense, the least common of the senses. Only then can we begin to talk about true reasoning,” says Baeza-Yates.
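The “cite the sources they use” approach Baeza-Yates mentions is commonly implemented as retrieval-augmented generation: search first, then let the model write only from what was found. A minimal sketch follows, with a toy two-document corpus, a word-overlap ranker standing in for real vector search, and a hypothetical `call_llm` stub:

```python
# Minimal sketch of retrieval-augmented generation (RAG), the pattern
# behind tools that cite their sources. `call_llm` is a hypothetical stub.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-model API here")

# Toy corpus; a real system would index millions of documents.
CORPUS = {
    "doc1": "ChatGPT launched on November 30, 2022.",
    "doc2": "Transformers were introduced in 2017.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by words shared with the query (toy relevance;
    real systems use vector search over embeddings)."""
    words = set(query.lower().split())
    score = lambda text: len(words & set(text.lower().split()))
    return sorted(CORPUS.items(), key=lambda kv: score(kv[1]), reverse=True)[:k]

def answer_with_citations(query: str) -> str:
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    prompt = (f"Answer using ONLY the sources below and cite their ids.\n"
              f"{context}\nQuestion: {query}")
    return call_llm(prompt)   # the model now searches first, predicts second
```

The design choice is the constraint in the prompt: the model is asked to predict text grounded in retrieved passages, trading some fluency for verifiability.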

That is what Altman has promised for next year: so-called artificial general intelligence, equal to or superior to human capabilities. It seems clear that achieving something like this will take much longer and that, as Baeza-Yates says, it will require more than just generative AI. “Large multimodal models will be a fundamental part of the overall solution to developing general AI, but I don’t think they will be enough on their own: we will need several other big breakthroughs,” said Demis Hassabis, head of AI research at Google and Nobel laureate in chemistry, last week at a meeting with journalists in which EL PAÍS took part.

“Not only does generative AI not bring us any closer to the big scientific questions of AI, such as whether there can be intelligence in something that is not organic, but it distracts us from them. These systems are not capable of reasoning; for that we would have to resort to symbolic AI (the kind based on mathematical logic),” reflects Ramon Lopez de Mantaras, founder of the CSIC Artificial Intelligence Research Institute and one of the pioneers in Spain of a discipline he has cultivated for more than 40 years. AlphaFold, the tool Hassabis’ team developed to predict the structure of 200 million proteins, and for which they won the Nobel Prize, combines 32 different artificial intelligence techniques, generative AI being just one of them. “I think this type of hybrid system is the future,” says López de Mantaras.
