How Chinese and Russian cybercriminals used ChatGPT to carry out cyberattacks
Microsoft and OpenAI have acknowledged that various cybercriminal groups are turning to tools such as the ChatGPT chatbot to enhance their cyberattacks. Specifically, these are criminals apparently associated with the governments of China, Russia, North Korea, and Iran, who reportedly use the technology to research their targets, improve the quality of their attacks, and develop new social engineering methods to deceive victims.
“Cybercriminal groups, state-backed threat actors, and other adversaries are exploring and testing various artificial intelligence (AI) technologies as they emerge, in an effort to understand the potential value to their operations and the security controls they may need to circumvent. On the defenders’ side, it is critical to strengthen those same security measures against attacks and implement equally sophisticated monitoring that anticipates and blocks malicious activity,” Microsoft said in a post on its official blog.
Among the groups abusing the artificial intelligence technology created by Microsoft and OpenAI is one of Russian origin: Strontium, also known by names such as Fancy Bear, which is associated with that country’s military intelligence. The companies said the group reportedly used tools like ChatGPT to research “various satellite and radar technologies that may be relevant to conventional military operations in Ukraine, as well as conduct general research aimed at supporting their cyber operations.”
Meanwhile, the North Korean group Thallium reportedly turned to artificial intelligence technology to improve the quality of its scams, tricking victims into sharing their details without realizing they are being defrauded. The group also used generative artificial intelligence to better understand how security holes found in various Internet platforms work.
Curium, an Iranian group associated with the Islamic Revolutionary Guard Corps, likewise used the technology to refine its scams and to develop malicious code capable of evading its victims’ antivirus software. Groups associated with China, Chromium and Sodium, also turned to artificial intelligence to improve their work and to translate their fraud attempts efficiently.
It’s worth remembering that the potential of generative AI platforms in malware campaigns has worried cybersecurity experts for years. The fear has only intensified with the arrival of ChatGPT and the progressive democratization of the technology, which is now just an internet search away for anyone.
However, Microsoft notes that, at this time, no “serious attacks” involving the use of platforms such as ChatGPT have been detected. The company also notes that the examples above are not unique: other organized crime groups have tried to use this technology.
And while the situation may get worse in the future, that doesn’t mean you need to panic: “While attackers will remain interested in AI and will probe the technology’s current capabilities and security controls, it is important to keep these risks in context.” In particular, both Microsoft and OpenAI recommend that users and companies enable tools like multi-factor authentication whenever possible and distrust any message that seems even slightly suspicious.
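To illustrate the multi-factor authentication the two companies recommend, here is a minimal sketch of a time-based one-time password (TOTP) generator, the mechanism behind most authenticator apps, following RFC 6238 with HMAC-SHA1. This is not tied to any Microsoft or OpenAI product; the secret shown is a well-known demo value, not a real credential, and the code is a simplified illustration rather than production-ready MFA.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    # Shared secrets are conventionally exchanged as Base32 strings.
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of elapsed time steps since the Unix epoch.
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # "JBSWY3DPEHPK3PXP" is a widely used demo secret, not a real credential.
    # A verifying server would compute the same code from its copy of the
    # secret and compare, usually allowing one time step of clock drift.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and derives from a secret the attacker never sees, a phished password alone is not enough to log in, which is why the report singles out MFA as a first line of defense.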