R23 — Prompt Injection / Jailbreaking

Stress-test Prompt Injection / Jailbreaking in the AI startup scenario Series-B, AI-first, six new AI risks in your register.
Start playing →
R23 AI Severity 8 · Major Residual offset +10

Adversarial prompts that bypass guardrails. Detection + prevention layer (AI firewall) dominate; awareness reduces social-engineered injection. Residual exposure because new attack patterns emerge constantly.

What is Prompt Injection / Jailbreaking?

Adversarial prompts that bypass guardrails. Detection + prevention layer (AI firewall) dominate; awareness reduces social-engineered injection. Residual exposure because new attack patterns emerge constantly. CISO Game tracks this as R23 in the live risk register, severity 8 (Major), category AI.

How does CISO Game model Prompt Injection / Jailbreaking?

Exposure for R23 runs from 0 to 100, recomputed live as you buy, cancel, or reassign products. How the exposure model works →

Real-world parallel

Prompt injection is the OWASP Top 10 of LLM applications — direct (user-supplied) and indirect (tool-fetched content) injection both exfiltrate data, abuse tools, or bypass policy. Mitigations are still maturing: input/output filtering, system-prompt isolation, tool-permission scoping, and red-team validation. The state of the art is moving fast enough that any tool you buy today needs a refresh path.

How do security teams mitigate Prompt Injection / Jailbreaking?

The dominant subscore levers for this risk are:

Residual offset: +10 exposure points are structural — no product fully removes them. Real-world parallels: zero-day windows, vendor monoculture, regulator unpredictability.

Gated: only active when AI focus is enabled in Setup.

Which investments mitigate Prompt Injection / Jailbreaking?

Products in CISO Game that reduce exposure to R23:

Which related risks should you also watch?

Risks with similar dominant subscores or shared category — addressing one often helps the others:

Why does Prompt Injection / Jailbreaking matter to a CISO?

AI risk is the newest category in the register. Prompt Injection / Jailbreaking requires controls that are still maturing — model cards, AI red-teaming, AI-SPM, prompt-injection detection. CISO Game's AI focus toggle activates these.

How can you test your mitigation strategy?

Click Play CISO Game free to see R23 appear live in your risk register and watch each purchase move the exposure number in real time. No signup required.

Stress-test Prompt Injection / Jailbreaking in the AI startup scenario →