Yann LeCun, Meta’s chief artificial intelligence scientist: “It will take a long time to create human-level artificial intelligence”

The extraordinary potential and enormous risks of the generative artificial intelligence (AI) revolution were a central topic at the World Economic Forum’s annual meeting in Davos. Nick Clegg, Meta’s president of global affairs, and Yann LeCun, the company’s chief AI scientist, shared their views in a meeting with journalists from five international media outlets, including EL PAÍS.

Meta, the parent company of Facebook, is one of the leading players in this revolution, both because of its outstanding capabilities in the field and because of the enormous power that comes with controlling its giant social platforms, whose management has drawn serious criticism and accusations in recent years, among other things for its impact on democracy.

In the conversation, LeCun stresses that, “contrary to what you may hear from some people, there is no system that can reach the level of human intelligence.” The expert believes that “asking for regulation out of fear of superhuman intelligence is like asking for the regulation of transatlantic flights at close to the speed of sound in 1925.” That is not close: “It’s going to take a long time, with systems we don’t yet know,” he says, which is why he considers it premature to legislate against the risk of such systems escaping human control. In December, the EU agreed on the world’s first artificial intelligence law, and other countries, such as the US and the UK, are also working on specific rules to control the technology.

Clegg, for his part, is calling on lawmakers around the world who deal with the issue to regulate products, not research and development. “The only reason you might think it would be useful to regulate research and development is because you buy into this fantasy that artificial intelligence systems can take over the world, or that they are inherently dangerous,” says Clegg, a former deputy prime minister of the United Kingdom and former leader of the country’s Liberal Democrats.

They are pleased that, after the initial frenzy that followed the arrival of ChatGPT, the public debate has moved away from apocalyptic hypotheses and toward more concrete, present-day issues such as disinformation, copyright and access to the technology.

The state of the technology

“These systems are intelligent in the relatively narrow domain in which they have been trained. They handle language, and that makes us think they are smart, but in reality they are not that smart,” explains LeCun. “And we cannot simply scale them up, with more data and bigger computers, and reach human intelligence that way. It will not happen. What will happen is that we will have to discover new technologies, new architectures for these systems,” the scientist adds.

The expert explains that new types of AI systems will have to be developed “that allow these systems, first, to understand the physical world, which they cannot do at the moment; to remember, which they cannot do at the moment either; and to reason and plan, which they also cannot do. And when we learn how to build machines that understand the world, remember, plan and reason, we will have a path toward human intelligence,” continues LeCun, who was born in France. Many debates and speeches at Davos noted the paradox that Europe has highly visible human capital in this sector but no globally leading companies.

“It is not around the corner,” insists LeCun. The scientist believes that this path “will take a long time: years, if not decades. It will require new scientific advances that we are not yet aware of. So it’s worth asking why people who are not scientists are saying otherwise, since they are not the ones trying to make it work.” The expert notes that we now have systems that can pass the bar exam, but not systems that can clear a table and take out the trash. “It’s not because we can’t build the robot. It’s because we can’t make it smart enough. So it is clear that we are missing something big before we can achieve the kind of intelligence we see not only in humans but also in animals. I would be happy if, by the end of my career (he is 63), we had systems as smart as a cat, or something like that,” he notes.

The state of regulation

The debate over how to regulate this technology in its current state and in light of near-term development opportunities was one of the key issues at the annual forum in Davos. Legislation being introduced in the EU, innovative in many respects, has been one of the main areas of focus.

When asked about this, Clegg, who was a member of the European Parliament and is a staunch pro-European, avoids making definitive statements on the matter, but he does throw some barbs at the Union. “The work is still ongoing. It’s a very classic EU thing: there is fanfare, they say something has been agreed, but in fact it is still a work in progress. We will review it carefully once it is finished and published. I think the devil will really be in the details,” says Meta’s president of global affairs.

“For example, when it comes to transparency about the data used in these models, everyone agrees,” Clegg continues. “But what level of transparency? Of the datasets? Of personal data? Or take copyright, for example. Copyright legislation already exists in the EU. Will they limit themselves to that, or will a new, specific layer finally be added on top? When these models are trained, a huge amount of data is consumed. Labeling every piece of data for intellectual-property purposes is extremely difficult. So I think the devil is in the details. We will look into it.”

This is where the criticism comes in. “Personally, as a passionate European, it sometimes frustrates me a little that in Brussels they seem to pride themselves on being the first to legislate rather than on whether the legislation is good or not. Remember that this EU AI law was originally proposed by the European Commission three and a half years ago, before the whole generative AI phenomenon (like ChatGPT) blew up. And then they tried to adapt it through a series of amendments and provisions to try to capture the latest evolution of the technology. It’s a pretty clunky way of legislating for something as important as generative AI.”

The debate between establishing safeguards and avoiding obstacles to development creates strong tensions within politics, and between politicians and the private sector. Along this fine line that lawmakers must walk, incalculable value is at stake: productivity, jobs and opportunities that will shape the geopolitical balance of power.

Clegg touches that nerve. “I know that France and Germany, and I believe Italy as well, have been reasonable in asking MEPs and the European Commission to be very careful not to include in the legislation anything that could actually hinder European competitiveness. Of the ten largest companies in the world, not one is European.” On the other hand, a group of experts, in an open letter published by EL PAÍS, called on the EU to adopt even stronger legislation “to protect the rights of citizens and innovation.”

Optimism and prudence

Behind this enormous power struggle lies the development of a technology that, although far from fully reaching human or superhuman levels, has already entered our lives with extraordinary force.

“AI enhances the capabilities of human intelligence. There is a future in which all of our interactions with the digital world will be carried out through an artificial intelligence system,” says LeCun. “That means that at some point these AI systems will be smarter than us in certain areas (in fact, they already are in some), and perhaps at some point smarter than us in every area. It means we will always have assistants with us who are smarter than we are. Should we feel threatened by that? Or should we feel empowered? I think we should feel empowered.”

Throughout the interview, LeCun strikes several notes of cautious optimism. “If you think about the impact this could have on society in the long term, it could have an effect similar to that of the invention of the printing press. Bringing about a new Renaissance in which people can become smarter is inherently good. Now, of course, there are risks, and the technology must be used responsibly to maximize the benefits and mitigate or minimize the risks.”
