R23 — Prompt Injection / Jailbreaking
Adversarial prompts that bypass guardrails. Detection + prevention layer (AI firewall) dominate; awareness reduces social-engineered injection. Residual exposure because new attack patterns emerge constantly.
What is Prompt Injection / Jailbreaking?
Adversarial prompts that bypass guardrails. Detection + prevention layer (AI firewall) dominate; awareness reduces social-engineered injection. Residual exposure because new attack patterns emerge constantly. CISO Game tracks this as R23 in the live risk register, severity 8 (Major), category AI.
How does CISO Game model Prompt Injection / Jailbreaking?
Exposure for R23 runs from 0 to 100, recomputed live as you buy, cancel, or reassign products. How the exposure model works →
Real-world parallel
Prompt injection is the OWASP Top 10 of LLM applications — direct (user-supplied) and indirect (tool-fetched content) injection both exfiltrate data, abuse tools, or bypass policy. Mitigations are still maturing: input/output filtering, system-prompt isolation, tool-permission scoping, and red-team validation. The state of the art is moving fast enough that any tool you buy today needs a refresh path.
How do security teams mitigate Prompt Injection / Jailbreaking?
The dominant subscore levers for this risk are:
- Detection subscore — weight 30%
- Prevention subscore — weight 30%
- Awareness subscore — weight 20%
- Response subscore — weight 10%
Residual offset: +10 exposure points are structural — no product fully removes them. Real-world parallels: zero-day windows, vendor monoculture, regulator unpredictability.
Gated: only active when AI focus is enabled in Setup.
Which investments mitigate Prompt Injection / Jailbreaking?
Products in CISO Game that reduce exposure to R23:
- AI Firewall (LLM I/O guardrails)AI Security
- AI Security Posture Management (AI-SPM)AI Security
- AI Red Team engagement (annual)AI Security
- AI Prompt-DLP (LLM I/O classification)AI Security
- Hire Security EngineerHeadcount
Which related risks should you also watch?
Risks with similar dominant subscores or shared category — addressing one often helps the others:
- R24 Training Data PoisoningAI · severity 9
- R27 Shadow AI / Unsanctioned LLM UseAI · severity 7
- R31 Autonomous AI Agent MisuseAI · severity 8
- R43 Insider AI MisuseAI · severity 7
Why does Prompt Injection / Jailbreaking matter to a CISO?
AI risk is the newest category in the register. Prompt Injection / Jailbreaking requires controls that are still maturing — model cards, AI red-teaming, AI-SPM, prompt-injection detection. CISO Game's AI focus toggle activates these.
How can you test your mitigation strategy?
Click Play CISO Game free to see R23 appear live in your risk register and watch each purchase move the exposure number in real time. No signup required.
Stress-test Prompt Injection / Jailbreaking in the AI startup scenario →