Security · Friday, November 28, 2025
Prompt Injection Through Poetry
Schneier on Security
External Source
Summary
In a new paper, “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” researchers found that turning LLM prompts into poetry resulted in jailbreaking the models...
External Article
This article comes from Schneier on Security and is hosted there. We link only to external sources.