
Do you know what a baby peacock looks like? Artificial intelligence doesn’t, and that is a serious problem for the internet.

Peacocks are among the animals that most fascinate humans. They were a symbol of beauty in India, where they originated, and were later depicted in Persian, Greek and Roman temples and artwork. Today they are among the most photographed animals, thanks above all to the iridescent plumage of the male and the fanning of his tail in the mating dance. The appearance of their chicks was far less well known until, in recent days, they became an internet phenomenon, with hundreds of images and videos of “baby peacocks”. There’s just one problem: they’re all fake.

These are artificial intelligence hallucinations. The algorithms, whose training data contain little information about what real peachicks look like, mix the traits of adult males with those of the chicks, which are actually brown and quail-like. The result is unrealistic hybrids that quickly caught users’ attention. Countless accounts dedicated to monetizing content began sharing these creations with captions like “the peacock will be your ‘aww’ moment of the day” or “the little peacock born to shine.”

Some publications warned that the images were made with AI; most did not. None of them pointed out that real baby peacocks look nothing like these creations. While many users noted that they were generative AI, many others fell into the trap and asked where they could adopt one, as a single TikTok post garnered nearly 30,000 comments. But the worst was yet to come.

[Embedded TikTok from @ku_13js: “Rare White Peacock ~ Good Luck 🍀”]

The explosion in popularity of the fake peacocks has moved beyond social media to Google, whose image search engine has been prioritizing these artificial intelligence hallucinations over real photos. “I was curious, so I Googled ‘peacock’ to see what it looked like, and half the results were AI. We’re screwed, right?”, one user replied to one of the posts with fake images.

“I just had to search for ‘peacock’ to prove to someone on Facebook that what they posted was artificial intelligence, and about 60% of the images are made using artificial intelligence. Google is dying. DuckDuckGo is only slightly better. AI is making the internet worse and making people dumber,” another commented. This trend is more pronounced in English searches than in Spanish ones.

[Embedded post: https://twitter.com/X/status/1842550658102079556]

This kind of algorithmically generated avalanche of junk content already has a name: it has been dubbed “slop”, roughly “swill”, and may soon become the twin brother of the term “spam”. Its only goal is to grab people’s attention with realistic fakes or impossible creations. It is so easy and cheap to produce that it is widely used to drive engagement and feed a gimmicky AI economy. In many cases the process is fully automated, with the algorithms themselves detecting what goes viral, creating hundreds of copies and flooding digital spaces with false or meaningless content.

Just as spam can fill an inbox with unwanted emails, making it much harder to find the messages that matter, slop can do the same to social media and search engines. What happened around Hurricanes Helene and Milton, which devastated parts of the United States in recent days, was the first evidence of what lies ahead.


“I’ve been working in disasters for almost 20 years and I can’t think of any other major disaster that there was this much misinformation about,” an emergency management professor at the Massachusetts Maritime Academy told The New York Times. The problem for emergency services is that calls from citizens reporting false situations they have seen online eat into time needed for rescue efforts. The Red Cross warned that these hoaxes are discouraging survivors from seeking help, because they believe neither the organization nor the authorities have actually reached the area.

One of the main debates in the country has been how opportunists exploited the enormous attention these disasters attracted to spread vast quantities of false images for economic and political gain. Prominent far-right influencers, close allies of Donald Trump, and even Republican members of Congress spread a fake image of a hurricane victim cuddling a puppy. The image is very low quality and clearly artificial, but when other users pointed this out, those accounts refused to take it down.

These kinds of disasters have always spawned hoaxes on the internet, but the media landscape has changed since Milton and Helene. Not only has AI made it easier and faster than ever to create misinformation, but large numbers of people have shared this artificial content knowing it was false, or have refused to take it down on the grounds that it is not, strictly speaking, manipulation. “I don’t know where this photo came from and honestly it doesn’t even matter. It is forever etched in my memory,” argued Republican official Amy Kremer. “There are people going through much worse things than what is shown in this photo. So I’m leaving it up, because it symbolizes the trauma and pain people are experiencing right now,” she said.

Images of the girl supposedly affected by the hurricanes also appeared on Google with no indication that they were created by AI. The multinational did not respond to questions from elDiario.es on this matter.

Other slop operations simply latch onto breaking news to boost their visibility. One documented case is a page called Coastal species, which, before the hurricanes arrived, shared AI-generated images of beaches and the northern lights. With the information explosion around Helene and Milton, it became a breeding ground for misinformation, with fake photos passed off as real.

“One of the most sinister and disgusting uses of AI I have come across is fake photos of the Appalachian floods in North Carolina and Tennessee,” lamented the X user who exposed it. “These posts are designed to draw attention to the owners’ pages and generate advertising money at the expense of all the immeasurable human suffering happening in and around Asheville. There is no updated information about the situation on these pages, no links to donations or lists of missing people,” he continued.

Slop is a recent phenomenon and its impact on users has yet to be measured. However, the first analyses of the impact it may have on artificial intelligence technology itself have already been published. These systems learn from data scraped from the internet, but according to a recent study, the growing presence of junk content generated by other AIs could be their downfall.

Training on synthetic or junk content causes AI models to lose variety, to repeat the same elements and phrases more and more, and to become dramatically worse at handling new or unexpected situations during learning. “When they are trained on contaminated data, they misperceive reality,” the researchers warn. A funnel of false content that can keep getting bigger.
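This degradation can be illustrated with a toy experiment. The sketch below is a minimal, hypothetical illustration of the idea, not the methodology of the study cited above: each “generation” of a very simple model is fitted only to samples produced by the previous generation, and the variety of what it produces tends to shrink.

```python
import numpy as np

# Toy illustration of the "model collapse" idea described above: each new
# generation of a very simple model (here, just a Gaussian) is fitted only to
# synthetic samples produced by the previous generation, never to real data.
# This is a hypothetical sketch, not the setup of the study cited in the text.

rng = np.random.default_rng(seed=0)

# Generation 0: "real" data with plenty of variety.
real_data = rng.normal(loc=0.0, scale=1.0, size=200)
mean, std = real_data.mean(), real_data.std()

for generation in range(1, 31):
    # The next model is trained exclusively on the previous model's output.
    synthetic = rng.normal(loc=mean, scale=std, size=200)
    mean, std = synthetic.mean(), synthetic.std()
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread of outputs = {std:.3f}")

# The spread (standard deviation) tends to drift downward across generations:
# the model's picture of the world becomes narrower and more repetitive.
```

On most runs the spread narrows noticeably within a few dozen generations, a crude analogue of the loss of variety the researchers describe in large models trained on AI-generated content.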

—-

This article was updated on October 12 to include a response from Google sent after the original publication.
