Is artificial general intelligence (AGI) already here? » Enrique Dans

IMAGE: DALL·E by OpenAI, via ChatGPT

It is provocative to think about defining artificial general intelligence (AGI), also called strong artificial intelligence, as a type of artificial intelligence equal to or superior to average human intelligence.

We're talking about a controversial concept, to the point that there is no shortage of attempts to redefine it in a more systematic or formal way, but intuitively it refers to the point at which an algorithm can be considered more intelligent than most of the population on Earth. Algorithms like ChatGPT and the like have already been considered capable of passing the "imitation game," or Turing test, and thus of fooling a human into being unable to tell they are talking to a machine (just watch, for example, the video of the GPT-4o presentation from a few weeks ago). The question, then, is the point at which a large language model surpasses the ability of most of humanity to deliver results across a wide range of tasks and fields of knowledge, a point which, for many, has already been passed.

What are we talking about? Exam answers generated by algorithms like ChatGPT consistently outperform those of the vast majority of college students in accuracy. If we compare the writing of ChatGPT and similar algorithms across a wide range of topics with that of what is undoubtedly a very large percentage of the population, we find that these algorithms write better: more correctly, more clearly, and with greater depth of knowledge. In fact, this is already so widely accepted that a great many people use them precisely to improve their written work, asking them to edit and correct their texts, or even to generate them outright.

And if artificial general intelligence is already among us... where does that leave the companies able to develop it? We are not talking about artificial intelligence with a purpose of its own; at this point, its purpose is only whatever we give it. But it is an enormous power, however much it runs with brutal inefficiency and requires a vast amount of energy and data to feed itself, something the human brain does far better.

What happens when we apply general artificial intelligence to more vertical, more specific problems, not trying to "build another ChatGPT," but genuinely understanding processes whose improvement can deliver important benefits? How does the growing availability of these kinds of algorithms affect a company, a country, or humanity as a whole?

No, it's not that the toaster becomes so smart that it decides to kill us all by firing toast at our eyes, or electrocuting us when we try to fish it out... but that its algorithm is so good it always toasts bread in the optimal way, appropriately for each context (obviously, replace the "toasting bread" problem with whatever makes the most sense in each case). How much could we improve the world with general artificial intelligence like this, applied to problems where it can actually make a difference?


This article is also available in English on my Medium page: “Is Artificial General Intelligence Here Yet?”
