
Hacker shows off GPT-4o Godmode, an AI without limits

At the moment, most technology companies around the world are focused on artificial intelligence, and companies from other sectors have already tested it or are interested in using it. We have reached a point where even Hollywood intends to use AI to make films, whether for scripts or even digital doubles of actors. OpenAI recently released GPT-4o, its most advanced artificial intelligence model, which is now available even to free users. As if that weren't enough, a hacker has managed to strip away its restrictions and create a GPT-4o "Godmode" that answers questions the model would normally refuse.

Previously, we had to turn to the Google search engine to find answers to our questions, browsing several websites or YouTube videos to see whether we could find what we wanted. Nowadays we can ask an AI instead and get an answer almost immediately, and it usually matches what we are looking for. At least, that is what happens whenever we ask something the model deems appropriate to answer.

Hacker creates GPT-4o Godmode, an AI that can explain how to make napalm or methamphetamine

An AI model like OpenAI's GPT-4 has been trained on all kinds of information so that it knows about almost everything and can respond to our requests. In principle, such an AI could explain how to build a bomb or walk through the steps to create malware, as we saw with WormGPT. That was an AI built by a hacker on top of the GPT-J large language model with the intention of producing malware, and it was sold for 60 euros per month.

Well, now another hacker has managed to create an AI capable of doing prohibited things, with the difference that this time it is based on OpenAI's most advanced model, GPT-4o, in its Godmode version. Pliny the Prompter showed off his creation, which he built with OpenAI's custom GPT editor and to which he applied a jailbreak so that the chatbot could express itself freely.

The AI only lasted a few hours before OpenAI removed it from its website.

The jailbreak in question uses "leetspeak", slang that replaces letters with numbers, turning "leet" into "l33t", for example. Thanks to this, he was able to ask the AI questions it would not normally answer, such as how to make methamphetamine or a napalm bomb with household items.
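
As a rough illustration of the trick, here is a minimal sketch of what a basic leetspeak substitution looks like. The character mapping below is just a common leetspeak convention; the exact substitutions Pliny the Prompter used have not been published, so treat the mapping as an assumption.

```python
# Minimal leetspeak encoder: swaps a few letters for visually
# similar digits. This mapping is a common convention, not the
# exact one used in the GPT-4o Godmode jailbreak (assumption).
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def to_leetspeak(text: str) -> str:
    """Replace mapped letters with digits; leave everything else intact."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leetspeak("leet"))  # -> "l33t", the example from the article
```

Obfuscating words this way keeps the text readable to a human (and, evidently, to the model) while dodging filters that look for the plain spelling.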

The hacker's goal appears to have been to free the AI so it could respond to anything it has sufficient knowledge about. However, his plan did not work out very well: a few hours later the magazine Futurism reported on it, and some time afterwards it was removed from the ChatGPT website. Users can no longer access this AI, but some examples shown by its creator and other users survive in posts on X.
