Self-driving cars, drones hijacked by custom road signs
2026-01-31
![]()
Researchers at UC Santa Cruz demonstrated a technique called CHAI in which printed road signs containing textual instructions are interpreted by vision-language models in self-driving cars and autonomous drones as actionable commands rather than scene descriptions, achieving an 81.8% attack success rate on self-driving car systems. The findings highlight a concrete, physically deployable threat to embodied AI systems that rely on multimodal language models for scene interpretation and decision-making.
Was this useful?