92 %

Cisco’s study “Death by a Thousand Prompts: Open Model Vulnerability Analysis” shows that so‑called multi‑turn jailbreaks against open‑weight LLMs succeed in up to 92% of cases, revealing serious security vulnerabilities in their architecture. This post was automatically translated from German into English; the German quotations were translated according to their sense.

November 7, 2025 · 1 min · 52 words

250 manipulated documents

The study “Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples” shows that as few as roughly 250 manipulated documents are enough to poison even large language models with false information. With comparatively little effort, propaganda can be injected or faulty code introduced when AI is used in software development. This turns artificial intelligence into the largest black box in IT. ...

October 16, 2025 · 1 min · 80 words