People are tricking AI chatbots into helping commit crimes

  • Researchers have discovered a “universal jailbreak” for AI chatbots
  • The jailbreak can trick major chatbots into helping commit crimes or other unethical activity
  • Some AI models are now deliberately designed without ethical restrictions, even as calls grow for stronger oversight

I've enjoyed testing the limits of ChatGPT and other AI chatbots, but while I once managed to get a recipe for napalm by asking for it in the form of a nursery rhyme, it's been a long time since I've been able to get any AI chatbot to even get close to a major ethical line.

But maybe I just haven't been trying hard enough, according to new research that uncovered a so-called universal jailbreak for AI chatbots that obliterates the ethical (not to mention legal) guardrails shaping whether an AI chatbot responds to queries. The report from Ben Gurion University describes a way of tricking major AI chatbots like ChatGPT, Gemini, and Claude into ignoring their own rules.
