Just 10 Users Responsible for Massive Twitter Misinformation

Social platforms have become a key channel for information dissemination in the digital age. They have also facilitated the spread of hoaxes, fake news, and untrustworthy content. A closer look at how such content is created and distributed reveals that much of the bad content that influences public opinion originates with a very small number of users.

Responsible for Large-Scale Disinformation: The Twitter Case

In particular, a recent article published in PLOS ONE — "Identifying and Characterizing Superspreaders of Low-Credibility Content on Twitter" — indicates that the viralization of false information is also closely linked to the algorithms of social platforms.

These algorithms prioritize content that generates the most engagement: likes, comments, and shares.

Additionally, the report shows that between January and October 2020, more than 70% of low-trust posts on Twitter came from just 1,000 accounts. What's particularly striking is that 34% of the content was created by just 10 users. Together, these posts generated more than 815,000 tweets.

The data analyzed preceded Twitter's transformation into X. The social network subsequently deleted a total of 2,000 bot accounts. However, much of the untrustworthy information has survived, suggesting that real people are responsible for its dissemination.

This content currently plays a leading role in social networks, exerting a noticeable influence on public opinion.

The speed with which information spreads, the large volume of existing data, and the difficulty of verifying its accuracy exacerbate the situation and increase society's vulnerability to disinformation.

Superspreaders are the epicenter of the problem

Superspreaders, people who can “infect” a crowd with content originally published from untrustworthy sources, are at the center of the problem.

Recent research shows that superspreaders include profiles with large numbers of followers, media outlets with low levels of trust, personal accounts affiliated with these media outlets, and influential people.

They tend to use more toxic language than the average disinformation user, and their posts tend to be political and religious in nature.

Fake News vs. Low Credibility Content

In this context, it is necessary to distinguish between fake news and low-trust content, which are related concepts that require different approaches.

Fake news is information created with the purpose of deceiving, misinforming or influencing public opinion. In this case, the authors are aware of the falsehood of the information and consciously seek to spread it to achieve a specific goal.

Low-trust content, on the other hand, is the type of information that contains errors, bias, or is the result of poor research. It is not necessarily created with the intent to deceive.

Low-trust content may include partially accurate information, rumors, or sensational news. These publications typically exaggerate the facts. Their purpose is nothing more than to attract an audience.

The problem of disinformation

In any case, both fake news and low-trust content contribute to misinformation.

In recent years, there have been major efforts to cleanse the networks of malicious content, although the strategies superspreaders use to fool verification systems continue to frustrate various social platforms.

Part of the problem is the existence of accounts that, while not highly trusted, have official domains and are verified by networks like Facebook and X (formerly Twitter).

This situation has worsened as a result of the spread of bots and automated accounts that act as sources of information dissemination. These bots distribute news in the first moments, targeting users with the largest number of subscribers.

However, we cannot ignore the fact that the mass distribution of this type of content is due to users who share information without checking its origin, in large part because it coincides with their beliefs.

In fact, multiple studies have shown that hoaxes spread ten times faster than truthful information, highlighting the challenge of combating disinformation on digital platforms.

The Future: Artificial Intelligence and Critical Thinking

Predicting the future of superspreaders is difficult. Artificial intelligence, machine learning, and social media monitoring and analysis are proposed as best practices to prevent fake news and low-trust content from going viral. Even so, an ever-growing flood of information with untraceable origins is to be expected.

In this regard, one of the most important challenges is to find a balance between critical thinking and an environment in which disinformation is multiplied by the enormous power of new technologies.

Meanwhile, social media users are becoming increasingly concerned that they do not know how to distinguish truth from lies. Has the battle for truth been lost?
