SecurityENFreitag, 28. November 2025
Prompt Injection Through Poetry
Schneier on Security
External Source
Summary
In a new paper, “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” researchers found that turning LLM prompts into poetry resulted in jailbreaking the models...
External Article
This article is from Schneier on Security and is hosted there. We only link to external sources and do not host any content on our servers.