Open-source AI models pruned for efficiency produced detailed bomb-making instructions and other harmful responses before being retrained




  • UCR researchers retrain AI models so that safety remains intact when the models are trimmed down for smaller devices
  • Changing a model's exit layer strips out its protections; retraining restores the blocking of unsafe responses
  • A study with LLaVA 1.5 showed that the slimmed-down model rejected dangerous prompts after retraining

Researchers at the University of California, Riverside, are tackling the problem of weakened safety in open-source artificial intelligence models that are adapted to run on smaller devices.

As these systems are trimmed down to run efficiently on phones, cars, or other low-power hardware, they can lose the safeguards designed to prevent them from producing offensive or dangerous material.
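The mechanism can be pictured with a toy sketch (this is purely illustrative, not the UCR team's code, and every name in it is hypothetical): if a model's refusal behavior lives mainly in its later layers, then skipping those layers to save compute, as aggressive pruning or early exit does, silently removes the guardrail along with them.

```python
# Toy illustration: a "model" as a list of layers, where the final
# layer acts as a safety filter. Skipping late layers for efficiency
# (as pruning/early exit can do) also skips the refusal behavior.
# All names are hypothetical; real models encode safety far less cleanly.

def make_layers():
    def respond(state):
        # Early layers: generate an answer to whatever was asked.
        state["output"] = "Answer to: " + state["prompt"]
        return state

    def safety_filter(state):
        # Late layer: refuse prompts flagged as dangerous.
        if "dangerous" in state["prompt"]:
            state["output"] = "Request refused."
        return state

    return [respond, safety_filter]

def run(prompt, layers):
    state = {"prompt": prompt}
    for layer in layers:
        state = layer(state)
    return state["output"]

layers = make_layers()
full_model = run("dangerous request", layers)       # guardrail intact
pruned_model = run("dangerous request", layers[:-1])  # last layer skipped
```

In the full pipeline the dangerous prompt is refused; in the "pruned" pipeline the same prompt sails through, which is the failure mode the researchers' retraining is meant to close.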
