Newspaper TODAY | ChatGPT, ten risks and ten opportunities for security

ChatGPT is a form of machine learning, based on large language models, capable of generating new content in many formats (text, image, video, voice, code). Advances of this kind are not new: a few weeks ago, Microsoft presented its AI-powered version of Bing, whose potential, in fact, is much greater than that of ChatGPT; and the launch of Bard, Google's solution, is expected shortly. Alongside them, a multitude of proposals are proliferating that are not limited to offering textual results based on language models, but extend to the generation and combination of any type of content.

Taking these trends into account, Prosegur Research, Prosegur's Insight&trends center, has analyzed the implications of ChatGPT from a security perspective and identified the main risks and opportunities that its application to different areas opens up.

The main ten security risks derived from ChatGPT, according to the Prosegur Research analysis, are the following:

1- Social polarization: Generative AIs, given their ability to produce multimedia content, can be used to spread messages of hate or discrimination, as well as messages of a radical or extremist nature.

2- Phishing: the automated generation of realistic-looking emails designed to trick users into handing over confidential information or access to computer systems. It should be borne in mind that generative AIs write with high quality, which removes the poor spelling and grammar that often give low-quality phishing away.

3- Disinformation: the generation of false news aims to influence public opinion, damage social cohesion or interfere with electoral processes. Disinformation is clearly a national-security issue, as it undermines social cohesion and democratic principles.

4- Doxing: disinformation can also target companies and organizations, through the dissemination of hoaxes, biased information, the creation of false job profiles, or the manipulation of documents to damage an organization's credibility. Its purpose can range from parody to attacking a reputation or influencing the markets.

5- Information leakage and data theft: companies such as Amazon and Google have warned their employees about the risks of sharing company information with ChatGPT and similar applications, since it could later surface in the responses offered to other users.

6- Fraud and scams: criminal typologies that have been growing in recent years. Traditional frauds, present in every economic sector, are amplified by the Internet, social networks and new technologies. Generative AI can help design much higher-quality frauds and target victims more precisely.

7- Generation of malicious chatbots with criminal objectives: these can interact with individuals to extract sensitive information or for illicit economic gain.

8- Identity theft: through the use of so-called deepfakes and the ability of AI to generate text, images and video, and even to simulate voice. Avatars that integrate all these elements further increase the credibility of the false identity.

9- Generation of malicious code, such as viruses, trojans, malware, ransomware or spyware: the objective is to commit cybercrimes of various kinds.

10- Geopolitical and geoeconomic power struggle: in a context of diffuse and fragmented power, leadership is no longer measured solely by economic, diplomatic or military capacity. As early as 2017, Vladimir Putin observed that whoever mastered AI would rule the world. Geopolitics and geoeconomics present new risks, but also opportunities for the states and companies capable of reading the future. Data, together with technology, sits at the center of the configuration of power, generating an asymmetry between those who have it and those who do not.

However, technology, as a great driver of change in our societies, is generally not created for malicious use. For this reason, Prosegur Research has also analyzed the opportunities that ChatGPT can generate in the field of security, among which the following stand out.

1- Automation of routine tasks in security functions: by eliminating repetitive and tedious tasks, it could enhance distinctly human skills and improve employee well-being.

2- Generation of attractive chatbots: with a friendlier and more human profile, to improve interaction with customers and other people.

3- Access to huge amounts of security-relevant information, structured through the use of natural language: open-source intelligence (OSINT) capabilities are enhanced, while remaining mindful of how critical it is to evaluate the reliability of sources and the credibility of the information.

4- In risk analysis: it can support the detection and cataloging of risks for organizations.

5- Pattern recognition within the broad set of data and information that these applications handle: the value lies not only in the pattern but in the anomaly, that is, whatever falls outside the ordinary and can generate a weak signal or an early warning.
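The anomaly-as-early-warning idea in point 5 can be sketched in a few lines of standard-library Python. The two-standard-deviation threshold and the failed-login scenario are illustrative assumptions, not part of the Prosegur Research analysis:

```python
from statistics import mean, stdev

def weak_signals(series, threshold=2.0):
    """Flag values that deviate from the series mean by more than
    `threshold` standard deviations -- a simple stand-in for the
    'anomaly as early warning' idea."""
    mu = mean(series)
    sigma = stdev(series)
    return [(i, x) for i, x in enumerate(series)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# Hypothetical daily counts of failed login attempts; day 6 stands out.
daily_events = [12, 14, 11, 13, 12, 15, 90, 13, 12, 14]
print(weak_signals(daily_events))  # → [(6, 90)]
```

In practice such thresholds would be tuned per data source; the point is only that the anomaly, not the routine pattern, carries the signal.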

6- In intelligence analysis: it can contribute to the generation of hypotheses, the identification of trends and the construction of security scenarios. Although AI cannot replace human creativity, it can be an interesting complement for thinking outside the box.

7- Structuring security recommendations: from how to defend against a cyberattack to what security measures to adopt before or during a trip. It is in no way a substitute for the work of an international security analyst, but it does support some of those tasks.

8- Predictive analytics: it can provide certain predictions, with their associated probabilities, based on the huge amount of data on which it was trained.

9- In cybersecurity: it can support the detection of phishing, test and debug code, identify vulnerabilities, generate secure passwords, and simulate conversations with adversary actors, or even with potential targets, in order to anticipate their actions.
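Of the cybersecurity tasks listed in point 9, secure password generation is the most directly codeable. A minimal sketch using Python's standard `secrets` module; the default length and character set are assumptions chosen for illustration:

```python
import secrets
import string

def secure_password(length=16):
    """Generate a random password using the cryptographically secure
    `secrets` module rather than the predictable `random` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(secure_password())  # a different 16-character password every run
```

Real policies would also enforce character-class requirements; the essential design choice is sourcing randomness from `secrets` instead of `random`, which is not suitable for security purposes.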

10- Learning: generative AI can be a first port of call for learning about security issues, technologies and risks. These tools will be especially valuable to the extent that they cite their sources and those sources become increasingly reliable.
