Hackers trick autonomous vehicles and drones using fake road signs, turning simple text into dangerous instructions that anyone can exploit


  • Printed words can override sensors and context within autonomous decision systems
  • Vision language models treat public text as commands without checking intent
  • Traffic signs become attack vectors when AI reads language too literally

Autonomous vehicles and drones rely on vision systems that combine image recognition with language processing to interpret their environment, helping them read signs, labels and road markings as contextual information that supports navigation and identification.

Researchers at the University of California, Santa Cruz, and Johns Hopkins set out to test whether that assumption holds when written language is deliberately manipulated.



Leave a Comment

Your email address will not be published. Required fields are marked *