The bots that undress people using AI

It was 2019. Generative AI was still in its infancy, but terms like neural networks (usually for the better, for their potential) and deepfakes (usually for the worse) were already starting to make noise. One of the biggest scandals of that year was DeepNude, a site that allowed anyone to “undress” any woman simply by uploading a photo of her. Behind it was a neural network trained on more than 10,000 photographs of naked women.

The website had been around for months, but it lasted only a few hours once it was discovered. The developer, who said his name was Alberto and that he lived in Estonia, shut down the platform, saying that “the likelihood of people abusing it” was “too high” (something impossible to know) and that “the world is not ready yet” for DeepNude. That was in 2019.

Today, in 2024, this technology has evolved into what it is now: a technological marvel whose scale is matched only by the difficulty of stopping its misuse. AI has countless positive uses, but it can also be put to far less ethical and moral ends, such as undressing people through a Telegram bot. Similar to DeepNude, but simpler and within anyone's reach. Because in 2024 the world is still not ready for this challenge.

Four million users. That is how many, according to a WIRED investigation, at least 50 Telegram bots attract each month whose sole purpose is to generate nude images or videos of real people. Two of them, according to the magazine, have 400,000 monthly users each; another 14 exceed 100,000.

We're talking about thousands of people who have (potentially) created nude images of other people without their consent. This is an obvious violation of data protection and privacy, as well as of personal honor and image. And far from being harmless, it is a practice that can (and does) have a real impact on people's lives. According to Home Security Heroes' State of Deepfakes study, the volume of deepfake porn content grew by 464% between 2022 and 2023. 99% of that content features women.


How do they work. As WIRED details, these bots are promoted with messages such as “I can do whatever you want with the face or clothes in the photo you give me” and, for the most part, require the user to buy tokens with real money or cryptocurrency. Whether they deliver the promised results or are simply scams is another story. Some of these bots let users upload photos of a person, which they claim will train the AI and generate more accurate images. Others do not advertise themselves as undressing bots, but link to bots that can do it.

Main problem. The issue is not just that such bots can be found and used on Telegram (that too), but how difficult it is to stop this content. As for Telegram, which is something of a deep web in itself, the messaging app has periodically been at the center of controversy over precisely this kind of thing.

The most recent case is the arrest of its founder. Pavel Durov was arrested in France on suspicion of facilitating crimes committed on Telegram due to a lack of moderation. Telegram defended itself by arguing that “it is absurd to claim that a platform or its owner are responsible for abuse of that platform.” After the arrest, Durov said he would make moderation one of the service's priorities.

The main problem is how difficult it is to stop the creation and distribution of this type of content.

That said, according to the WIRED article, Telegram has removed the channels and bots reported by the magazine. Those are the ones that were flagged, but of course they are not all that exist. Telegram, as we pointed out, is something of a deep web in itself, yet it gives users all the tools they need to find content: its built-in search engine, without going any further.

Hard to combat. The fight against deepfakes is “essentially hopeless.” Those were the words of Scarlett Johansson back in 2019. The actress was one of the first victims of pornographic deepfakes (not the only one, of course), and today, in 2024, the situation remains more or less the same. Major tech companies have taken some steps, but the reality is that deepfakes are still rampant.

An example of a fake image created by AI during the devastation of Hurricane Helene | Original tweet on X

Moreover, modern tools have made it even easier. Want a photo of Bill Gates with a gun? Taylor Swift in lingerie, or supporting Donald Trump? You can do it directly in Grok, X's AI. Although some platforms such as Midjourney or DALL-E block controversial prompts, anyone with a simple internet search, free time, a pile of images and a bad idea can train their own AI to do who knows what.

Examples. We can find as many as we want. The most recent come from the United States: deepfakes created in the wake of the devastation of Hurricane Helene. In South Korea, the problem of deepfake porn has reached such a level that it has become a matter of national concern. So much so that a few days ago a series of laws were passed imposing prison sentences and fines for creating, and even viewing, this synthetic content. “Anyone who possesses, purchases, stores or views illegal synthetic sexual material will face up to three years in prison or a fine of up to 30 million won (around 20,000 euros at the current exchange rate),” the BBC reported. Telegram, in fact, has also played a major role in the spread of synthetic pornographic content in South Korea.

AI has advanced so far that deepfakes are no longer the only problem: we no longer even trust real photographs.

What has been tried. One industry approach is to label AI-generated content with invisible watermarks. At the moment, whether content gets flagged as synthetic largely depends on the person who uploads it (Instagram and TikTok offer tools for this, for example), but a watermark embedded at generation time would prevent, or at least reduce, the spread of false content and fake news. It would also allow for earlier detection.

However, the reality is that global implementation is far from the norm. And when it comes to synthetic pornographic content, the problem is much more serious: it is not only a matter of platform moderation, but also of early detection and harm prevention. A watermark by itself does not solve the problem; it actually has to be implemented.

Watermark for AI-generated content proposed by OpenAI | Image: OpenAI

To be effective, watermarking would have to be implemented in every model and every synthetic content creation tool, not only commercial ones but also those users can run locally. That way, all AI-generated content would be tagged at the source and would be easier for platform systems to detect. But saying it is one thing; doing it is quite another.
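To make the idea of “tagging at the source” concrete, here is a minimal, purely illustrative sketch of an invisible watermark: it hides a short identifier in the least significant bits of an image's pixels so that a platform could check for it on upload. This is not the scheme OpenAI or any real platform uses (production watermarks have to survive compression, cropping and screenshots, which this one does not), and the `SYNTHETIC_TAG` marker and function names are invented for the example.

```python
# Illustrative LSB watermark sketch; NOT a production technique.
import numpy as np

SYNTHETIC_TAG = b"AI-GENERATED"  # hypothetical marker, chosen only for this example


def embed_tag(pixels: np.ndarray, tag: bytes = SYNTHETIC_TAG) -> np.ndarray:
    """Hide `tag` in the least significant bits of the first pixel values."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = pixels.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small to hold the tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)


def has_tag(pixels: np.ndarray, tag: bytes = SYNTHETIC_TAG) -> bool:
    """Check whether the first pixels' least significant bits spell out `tag`."""
    n_bits = len(tag) * 8
    flat = pixels.flatten()
    if n_bits > flat.size:
        return False
    return np.packbits(flat[:n_bits] & 1).tobytes() == tag


if __name__ == "__main__":
    # Stand-in for a freshly generated image (RGB, 64x64 pixels).
    generated = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    tagged = embed_tag(generated)

    print(has_tag(generated))  # False: an untagged image passes unnoticed
    print(has_tag(tagged))     # True: a platform-side check could flag this upload
```

The sketch is about the workflow rather than the technique: the generator marks every output, and the platform only needs a cheap check at upload time. The hard part, as noted above, is getting every model, including the ones people run locally, to cooperate.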

Image | Wikimedia Commons and Pixabay, edited by Xataka

In Xataka | We have a huge problem with AI-generated images. Google thinks it has a solution
