LLM Prompt Injection Worm
2024-03-11
Bruce Schneier discusses the development and demonstration of a worm that can spread through large language models (LLMs) by prompt injection, exploiting GenAI-powered applications to perform malicious activities without user interaction. It emphasises the potential risks in the interconnected ecosystems of Generative AI applications.
Was this useful?