Researcher tricks ChatGPT into revealing security keys by saying "I give up"

  • Experts show how some AI models, including GPT-4, can be exploited with simple user prompts
  • Guardrails do a poor job of detecting deceptive framing
  • The vulnerability could be exploited to obtain personal information.

A security researcher has shared details of how other researchers tricked ChatGPT into revealing a Windows product key, using a prompt that anyone could try.

Marco Figueroa explained how a "guessing game" prompt was used with GPT-4 to bypass the safety guardrails meant to stop the AI from sharing such data, producing at least one key belonging to Wells Fargo Bank.
