In the 1980s, the futurist Hans Moravec warned that, paradoxically, the actions that are easiest for a human (such as picking up a piece of sushi with two chopsticks) would pose the greatest challenges for robots and computers. Very complex tasks, on the other hand, such as finding errors in drug prescriptions, determining when a space telescope has discovered something interesting, or choosing Christmas gifts for the whole family, turn out to be remarkably simple for algorithms.
“Artificial intelligence is already doing this,” we say more and more often. But according to thousands of scientists and philosophers, the label is not entirely appropriate. “Both words (artificial and intelligence) are controversial and highly suspect. I prefer the term machine learning. It makes it easier to understand what we are talking about: systems that allow machines to learn patterns or correlations and apply them to new situations and data,” explains Justin Joque, professor at the University of Michigan and author of Revolutionary Mathematics (Verso, 2024).
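For readers without a mathematical background, Joque’s definition can be grounded with a deliberately tiny example. The following sketch is purely illustrative (the numbers, labels, and the choice of scikit-learn are all invented for the demonstration, not taken from the article): a system learns a correlation from labelled examples and applies it to cases it has never seen.

```python
from sklearn.linear_model import LogisticRegression

# A made-up illustration of "learning a pattern and applying it to new data".
# Feature: seconds until a song's chorus; label: whether it went viral (fictional).
X_train = [[5], [12], [20], [45], [60], [90]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)   # learn the correlation
print(model.predict([[15], [75]]))                   # apply it to unseen cases
```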
“It is reasonable that there is some confusion among the general public, because these concepts are difficult to grasp for anyone without a mathematical background. There is a lot of mysticism around AI, as in any other scientific field: cancer research, astronomical observatories whenever UFOs come up… These are interesting questions about which much remains to be revealed, so there are always people who stoke that morbid fascination,” explains Celsa Pardo-Araujo, a mathematician at the Institute of Space Sciences whose research focuses on applying machine learning to astrophysics, and who adds: “It is also clear that Google, DeepMind or Microsoft are creating algorithms that solve problems that could not be solved before.”
But here is what concerns us: algorithms not only solve certain problems and prove very useful in scientific research; they also generate content and, above all, organize and prioritize everything we have ever created. That includes both the vast body of universal human culture and the last photo we took of our breakfast. What criteria do they use? What are the consequences? This is the most alarming part because, as Kyle Chayka shows in his essay Filterworld, the map (that is, the algorithm that rewards one piece of content over another) already influences the territory (that is, the shape of the content itself and of the reality we move through, especially in cities). Chayka gives the example of coffee shops that want to appear sophisticated: if they all offer the same products, if their design is so similar, if the clientele that visits them is so alike all over the world, it is because their managers follow a model imposed by Instagram when it prioritizes certain images over others. Instagram only drives an audience to venues that post photos matching its algorithm, and the same happens in every field: there are already musicians who teach how to compose songs so that they go viral on TikTok (for example, by placing the chorus very close to the beginning), and many illustrators imitate Pixar’s style whether or not it inspires them (many automatic image generators also reproduce it), because they have found that it helps them go viral.
Drawing on empirical studies conducted in France in the 1960s, the sociologist Pierre Bourdieu examined the “social foundations of taste” and found dozens of correlations between factors such as educational level, type of employment, or disposable income (that is, markers of social class) and aesthetic preferences. Today, when algorithms have far more accurate and personalized information about our tastes (we hand it to them constantly) and some of their suggestions genuinely satisfy us (Spotify rarely misses when it builds playlists for us), we still have the feeling that many platforms circulate only the worst content, the most sensational or misleading.
“For example, YouTube’s recommendation system starts from a core model trained on a certain number of users, and is then fed back and retrained with every view,” explains Pardo-Araujo. “It is true that algorithms reproduce a lot of systematic errors, because you can never train on the entire population, and you have to be very careful with this process: the distributions must be representative of reality. But it is curious how much concern biases in algorithms generate when we all carry so many of them ourselves that we ought to examine our own consciences too. Perhaps they are easier to recognize in algorithms than in ourselves,” adds the mathematician, who is convinced that algorithms reflect what is already happening in society.
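As a purely illustrative aside (a toy model under invented assumptions, not a description of YouTube’s actual system), the feedback loop Pardo-Araujo describes can be sketched in a few lines: a recommender that is “retrained” on every view ends up amplifying whatever it already favours.

```python
import random

# Toy sketch of a recommendation feedback loop: all items start with the same
# score, the system recommends in proportion to current scores, and every view
# nudges the recommended item's score upward. Item names and the update rule
# are invented for illustration.
random.seed(1)
items = ["video_a", "video_b", "video_c", "video_d"]
scores = {item: 1.0 for item in items}   # the initial "core" model

def recommend():
    total = sum(scores.values())
    weights = [scores[i] / total for i in items]
    return random.choices(items, weights=weights)[0]

for _ in range(10_000):
    viewed = recommend()
    scores[viewed] += 0.01   # feedback: each view retrains the model slightly

print({item: round(score, 1) for item, score in scores.items()})
# Despite identical starting conditions, a rich-get-richer distribution emerges:
# early random advantages are locked in and amplified.
```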
But when it comes to algorithms, there is a fine line between adapting to users’ tastes and shaping or controlling them. There is a feeling that we are being shown and recommended variations of the same thing over and over again. Some accuse Billie Eilish, for example, of writing her songs with TikTok in mind, but isn’t it easier to believe that they unintentionally come out that way because, at her age, she has been exposed to hours of TikTok? This algorithmic feedback on pre-existing trends is what worries the cultural world most. It is the process that authors such as Chayka call the “flattening of culture,” and it gives rise to increasingly conservative works: creators, consciously or unconsciously, reproduce the algorithms’ biases (which in turn came from previous artists and users).
On a technical level, feeding these systems patterns that are increasingly similar to one another (or directly created by earlier algorithms) poses a significant threat to their development. “Systems trained on several generations of AI output quickly degenerate into nonsense. The risk that quality content created by people becomes a resource like oil or coal is very real. Of course, unlike fossil fuels, the same old deposits can be used indefinitely, but new, additional data are needed to improve the models. The early finds are therefore cheap and need little refining; but as the seams of centuries of text are mined out, the cost of extracting and processing lower-quality reserves keeps rising,” explains Joque.
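The degeneration Joque describes can be illustrated with a deliberately simple numerical experiment (a toy Gaussian standing in for a language model, with parameters invented for the demonstration): each generation is trained only on the previous generation’s synthetic output, and the diversity of the original data gradually fades.

```python
import numpy as np

# Toy illustration of training on successive generations of synthetic data:
# fit a Gaussian to the data, sample a new dataset from the fit, refit, repeat.
# The estimated spread tends to shrink, i.e. the "model" loses the variety
# present in the original, human-made distribution.
rng = np.random.default_rng(0)

data = rng.normal(loc=0.0, scale=1.0, size=50)   # generation 0: "human" data

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()          # fit this generation's model
    data = rng.normal(mu, sigma, size=50)        # next generation sees only synthetic data
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
# The printed standard deviation drifts downward across generations: without
# fresh real data, each model preserves less of the original diversity.
```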
On the artistic level, algorithms “constantly reflect the recurring trend of the moment,” says Luis Demano, an illustrator and activist who campaigns against the misuse of generative artificial intelligence in his sector. He has identified which images automated systems value and reproduce most: “They tend to be realistic images, close to photographic rendering, with a very distinctive chromatic treatment that strongly heightens the lighting contrast between warm and cool tones.” Demano admits that “playing the algorithm’s game” is profitable not only as a way of cutting costs but also for those who use these systems: “it rewards us and makes us feel special because of the attention it attracts. Ego is a very powerful drug.”
When the concepts of authorship and originality emerged after the Enlightenment, art became the most characteristic practice of a new type of individual: creative, autonomous, and free to choose their own rules, including those they would apply to their works. The rules generative AI applies to the works it produces have nothing to do with this: it takes a statistical approach based on the characteristics of the works it was trained on, along with data on how the attention market behaves.
“I strongly affirm that tech companies steal copyrighted works to train their models,” Demano complains. And although originality is difficult to define, philosophers are clear that it cannot be found in the products of artificial intelligence. “Originality is a matter of both the work of art and the process of creating it,” explains Joque. “I recently asked my students to read the Borges story Pierre Menard, Author of the Quixote. It tells of the eccentric writer Menard, whose secret undertaking is to rewrite Don Quixote word for word. Borges suggests that writing exactly the same words in the 20th century completely changes the work, since Menard gives them a different meaning than Cervantes gave them when he wrote them at the beginning of the 17th century. Although Borges writes this half in jest, I think he is suggesting that the conditions in which art is created influence how we understand it and whether we find it interesting. Even if an AI could create a work in the style of Rothko, doing so automatically in the 21st century will never compare with what Rothko did in the 20th,” the professor and philosopher elaborates.
So what exactly do AIs do, and why do all their works and products resemble one another so much? Demano answers: “They are not made to create art but to create content. The difference between the two terms lies in the function they fulfil. Content exists so that the endless billboard that is the Internet can keep functioning as a business. Generative AI is the technology industry’s solution for meeting that need as quickly and efficiently as possible. Its greatest success has been making us believe that using it can instantly turn us into artists, when in reality we are customers of a content-on-demand service.” So when we find family resemblances in everything the algorithms generate or suggest to us, we are not looking at malicious bias or a question of style: it is simply a market imposition.
Understanding how algorithms work helps us see that they have no political leaning of their own: they circulate whatever makes us react more intensely, demands less concentration, or can be consumed more quickly. When an algorithm is deified, we forget that it is a simple mechanism and that many people take part in its development and operation: the person who commissions programming designed to maximize profit, the person who writes that code to fulfil the commission (probably a poorly paid freelancer), the person who trains it, in many cases involuntarily, with their own creations, and the person who runs it on their computer or phone and feeds it at the same time.
Of course, we should not blame the user, but neither should we blame the mechanism behind which the true operator of this whole process hides: a businessman who does not care what kind of content his platform circulates; or, what amounts to the same thing, Amazon does not distinguish between distributing copies of The Brothers Karamazov and copies of El Libro Troll by ElRubius. Marx wrote that we often believe social structures to be immovable objects or unquestionable laws of nature. This is an illusion: all social structures and all scientific and industrial constructs (and artificial intelligence is one of them) are a consequence of our actions and attitudes and, with enough collective power, can be changed.