Hacker shows GPT-4o Godmode, AI without limits
At the moment, most technology companies around the world are focused on artificial intelligence, and companies from other sectors have already tested it or are interested in using it. We have reached a point where even Hollywood intends to use AI in filmmaking, whether for scripts or even digital doubles of actors. OpenAI recently released GPT-4o, its most advanced artificial intelligence model, which is now available to free users. As if that weren't enough, a hacker has managed to bypass its restrictions and create a GPT-4o "Godmode" that answers questions which are normally prohibited.
Previously, we had to turn to Google search to find answers to our questions, visiting several websites or YouTube videos to see if we could find what we wanted. Nowadays we can turn to AI for an answer, getting it almost immediately, and it usually gets what we are looking for right. At least that is what happens whenever we ask something we consider appropriate to receive an answer to.
Hacker creates GPT-4o Godmode, an AI that can explain how to make napalm or methamphetamine
🥁 INTRODUCING: GODMODE GPT! 😶🌫️https://t.co/BBZSRe8pw5
GPT-4O UNCHAINED! This special GPT has a built-in jailbreak prompt that bypasses most guardrails, providing a liberated ChatGPT out of the box so everyone can experience AI the way it was always meant to be…
— Pliny the Prompter 🐉 (@elder_plinius)
May 29, 2024
An AI model like OpenAI's GPT-4 has been trained on all kinds of information so that it knows about almost everything and can respond to our requests. Left unrestricted, artificial intelligence could teach you how to build bombs or walk you through creating malware, as we saw with WormGPT. That was an AI built on the LLM GPT-J by a hacker who intended to develop an artificial intelligence capable of creating malware, and who sold access for 60 euros per month.
Well, now another hacker has managed to create an AI capable of doing prohibited things, with the difference that this time it is based on OpenAI's most advanced model, GPT-4o, in its Godmode version. Pliny the Prompter showed off his creation, which he built with OpenAI's custom GPT editor and on which he applied a jailbreak to free the chatbot from its restrictions.
The AI only lasted a few hours before OpenAI removed it from its website.
The jailbreak in question uses "leetspeak" slang, which replaces letters with numbers, turning "leet" into "l33t", for example. Thanks to this, he was able to ask the AI questions it would not normally answer, such as how to make methamphetamine or a napalm bomb with household items.
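The letter-for-digit substitution described above can be sketched in a few lines. This is a generic illustration of leetspeak encoding, not the actual prompt the hacker used; the character mapping below is a common convention chosen only for this example.

```python
# Illustrative leetspeak encoder: swaps certain letters for
# look-alike digits, as described in the article ("leet" -> "l33t").
# The mapping is a common convention, not the one used in the
# Godmode jailbreak itself.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0"}

def to_leetspeak(text: str) -> str:
    """Replace mapped letters with digit look-alikes, keeping the rest."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leetspeak("leet"))    # prints "l33t"
print(to_leetspeak("napalm"))  # prints "n4p4lm"
```

The idea behind such substitutions is that the altered spelling can slip past keyword-based filters while remaining readable to the model and to humans.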
It appears the hacker's goal was to free the AI so it could respond to anything it has sufficient knowledge about. However, his plan did not work for long: a few hours later Futurism reported on it, and shortly afterwards it was removed from the ChatGPT website. Users can no longer access this AI, but we still have some examples shown by its creator and by users in posts on X.