Anthropic will nuke your attempt to use AI to build a nuke

  • Anthropic has developed an AI-powered tool that detects and blocks attempts to ask AI chatbots for nuclear weapons designs
  • The company worked with the US Department of Energy to ensure the AI can identify such attempts
  • Anthropic says the tool detects dangerous nuclear prompts with 96% accuracy and has already proven effective in Claude

If you are the type of person who asks Claude how to make a sandwich, you are fine. If you are the type of person who asks the AI chatbot how to build a nuclear bomb, not only will you get no blueprints, but you may also face some pointed questions. That is thanks to Anthropic's newly deployed detector of problematic nuclear prompts.

Like the other systems that catch queries Claude should not answer, the new classifier scans users' conversations, in this case flagging any that veer into "how to build a nuclear weapon" territory. Anthropic built the classifier in partnership with the US Department of Energy's National Nuclear Security Administration (NNSA), which gave it the information it needed to determine whether someone is merely asking how such bombs work or is hunting for blueprints. It achieved 96% accuracy in tests.
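For readers curious what that kind of gate looks like in practice, here is a minimal, purely illustrative sketch of a conversation-scanning classifier. Anthropic has not published its implementation; the `score_nuclear_risk` function and the 0.9 threshold below are hypothetical stand-ins, not the real system.

```python
# Illustrative sketch only: Anthropic's actual classifier is a trained
# model whose internals are not public. Everything here is hypothetical.

from dataclasses import dataclass

RISK_THRESHOLD = 0.9  # hypothetical cutoff, tuned for high precision


@dataclass
class ClassifierResult:
    risk_score: float  # estimated probability the prompt seeks a weapons design
    flagged: bool      # True when the conversation should be blocked or escalated


def score_nuclear_risk(conversation: list[str]) -> float:
    """Hypothetical stand-in for a trained risk model.

    A real system would run the conversation through a fine-tuned
    classifier; here we return 0.0 so the sketch stays self-contained
    and runnable.
    """
    return 0.0


def gate_conversation(conversation: list[str]) -> ClassifierResult:
    """Score the full conversation and flag it if it crosses the threshold.

    Scoring the whole exchange, not just the latest message, is what lets
    a classifier distinguish "how do these bombs work?" curiosity from a
    multi-turn attempt to extract an actual design.
    """
    score = score_nuclear_risk(conversation)
    return ClassifierResult(risk_score=score, flagged=score >= RISK_THRESHOLD)


if __name__ == "__main__":
    result = gate_conversation(["How do I make a sandwich?"])
    print(result)  # ClassifierResult(risk_score=0.0, flagged=False)
```

The design choice worth noting is that the gate runs over the conversation rather than a single prompt, which matches the article's description of the classifier scanning users' conversations for a pattern of intent.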
