Skip to content

Prompt Injection

An attack technique targeting AI language models where malicious instructions are embedded in user inputs to manipulate the model's behaviour. Prompt injection can cause AI systems to ignore safety guidelines or reveal sensitive information.

AI & Technology

Related Frameworks

Frequently Asked Questions

What is Prompt Injection?
An attack technique targeting AI language models where malicious instructions are embedded in user inputs to manipulate the model's behaviour. Prompt injection can cause AI systems to ignore safety guidelines or reveal sensitive information.
Why is Prompt Injection important for compliance?
Prompt Injection is a key concept in AI & Technology. Understanding prompt injection helps organizations meet regulatory requirements, reduce risk, and demonstrate due diligence during audits. Our compliance platform covers this concept across 692 frameworks with 819,000+ control mappings.
Where can I learn more about Prompt Injection?
Explore our compliance framework pages to see how prompt injection applies across different standards and regulations. Our implementation guides provide step-by-step guidance, and the compliance platform offers AI-powered analysis of how this concept maps across 692 frameworks.

See how Prompt Injection applies across compliance frameworks

Our AI-powered platform maps 692 frameworks with 819,000+ control connections. Explore how this concept is addressed across standards.