EU AI Act: Compliance Guide for AI System Providers
The EU AI Act is the world's first comprehensive AI regulation, establishing a risk-based framework for AI systems in the European Union. This guide covers risk classification, obligations for high-risk AI, prohibited practices, and the conformity assessment process for providers.
The EU AI Act at a Glance
The EU Artificial Intelligence Act entered into force on 1 August 2024, establishing the world's first comprehensive legal framework for artificial intelligence. The Act takes a risk-based approach, categorising AI systems by their potential for harm and imposing proportionate obligations. It applies to providers placing AI systems on the EU market, deployers within the EU, and providers outside the EU whose systems produce outputs used within the EU.
Risk Classification Framework
The EU AI Act classifies AI systems into four risk categories:
- Unacceptable Risk (Prohibited): AI systems posing clear threats to safety, livelihoods, or rights are banned. Prohibitions took effect February 2025.
- High Risk: AI systems in sensitive areas such as biometrics, critical infrastructure, education, employment, and law enforcement face extensive obligations.
- Limited Risk: Systems with specific transparency obligations, such as chatbots disclosing they are AI.
- Minimal Risk: All other systems, deployable without additional obligations.
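The four tiers above can be sketched as a small classification helper. This is a minimal illustration, not legal advice: the category names, domain strings, and `classify` function are hypothetical, and a real determination requires legal analysis of Article 5 and Annex III of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative modelling)."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # extensive provider obligations
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no additional obligations

# Illustrative domain lists drawn from the examples in this guide;
# the authoritative list of high-risk areas is Annex III of the Act.
HIGH_RISK_DOMAINS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "law_enforcement",
}
TRANSPARENCY_DOMAINS = {"chatbot", "deepfake_generation"}

def classify(domain: str, is_prohibited_practice: bool = False) -> RiskTier:
    """Map a use-case domain to a risk tier (sketch only)."""
    if is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if domain in TRANSPARENCY_DOMAINS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment").value)   # high
print(classify("chatbot").value)      # limited
```

A screening helper like this is useful for triaging an inventory, but every "high" or "unacceptable" result should go to legal review rather than being treated as a final classification.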
Prohibited AI Practices
The Act bans:
- Social scoring systems leading to detrimental treatment
- Real-time remote biometric identification in public spaces for law enforcement, with narrow exceptions
- AI exploiting vulnerabilities of specific groups
- AI using subliminal or manipulative techniques causing significant harm
- Emotion recognition in workplaces and educational institutions, with limited exceptions
- Untargeted scraping of facial images to build recognition databases
Obligations for High-Risk AI Providers
Providers must comply with:
- Risk Management System: Continuous risk management covering the entire AI lifecycle
- Data Governance: Training, validation, and testing datasets must be relevant and representative
- Technical Documentation: Comprehensive documentation demonstrating compliance
- Record-Keeping: Automatic logging of events relevant to risk identification
- Transparency: Clear instructions for deployers on intended purpose and limitations
- Human Oversight: Systems must allow effective human intervention
- Accuracy, Robustness, and Cybersecurity: Appropriate levels throughout the lifecycle
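The record-keeping obligation above calls for automatic logging of events relevant to risk identification. A minimal sketch of such a log is shown below; the field names, event types, and JSON Lines export format are assumptions for illustration, not requirements of the Act.

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class AuditEvent:
    """One automatically recorded event (field names are illustrative)."""
    timestamp: float
    system_id: str
    event_type: str   # e.g. "inference", "override", "anomaly"
    detail: dict = field(default_factory=dict)

class AuditLog:
    """Append-only event log supporting later export for review."""

    def __init__(self) -> None:
        self._events: list[AuditEvent] = []

    def record(self, system_id: str, event_type: str, **detail) -> None:
        self._events.append(AuditEvent(time.time(), system_id, event_type, detail))

    def export(self) -> str:
        # One JSON object per line, suitable for retention and audit.
        return "\n".join(json.dumps(asdict(e)) for e in self._events)

log = AuditLog()
log.record("cv-screening-v2", "inference", applicant_id="A-17", score=0.82)
log.record("cv-screening-v2", "override", reviewer="hr-ops", reason="manual check")
print(log.export())
```

In practice the log would also need retention policies and tamper resistance; the point here is simply that logging is structured, automatic, and attributable to a specific system.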
Conformity Assessment
Before placing a high-risk AI system on the market, providers must undergo conformity assessment. Most systems can use internal assessment based on Annex VI. Certain systems, including biometric identification, require third-party assessment. Upon success, providers draw up an EU declaration of conformity, affix CE marking, and register in the EU database.
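The routing decision described above can be summarised as a simplified decision function. This is a sketch only: the Act's actual decision tree has more branches (for example, the role of harmonised standards for biometric systems), and the function name and parameters are hypothetical.

```python
def assessment_route(is_high_risk: bool,
                     uses_biometric_identification: bool,
                     applied_harmonised_standards: bool = True) -> str:
    """Pick a conformity-assessment route (simplified sketch)."""
    if not is_high_risk:
        return "no conformity assessment required"
    if uses_biometric_identification and not applied_harmonised_standards:
        # Biometric systems without harmonised standards need a notified body.
        return "third-party assessment by a notified body (Annex VII)"
    return "internal control (Annex VI)"

print(assessment_route(is_high_risk=True, uses_biometric_identification=False))
```

Whatever the route, the outcome is the same closing sequence: EU declaration of conformity, CE marking, and registration in the EU database.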
Implementation Timeline
- 2 February 2025: Prohibitions on unacceptable-risk practices apply
- 2 August 2025: Obligations for general-purpose AI models apply
- 2 August 2026: Most high-risk AI requirements become applicable
- 2 August 2027: Obligations for high-risk AI in regulated products apply
Practical Compliance Steps
- Inventory all AI systems your organisation develops or deploys
- Classify each system according to the risk categories
- Assess whether any systems fall under prohibited practices
- For high-risk systems, conduct a gap analysis against provider obligations
- Implement lifecycle risk management
- Establish data governance for training and testing data
- Prepare technical documentation and usage instructions
- Plan for conformity assessment and CE marking
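The inventory and gap-analysis steps above can be sketched as a simple tracker. The obligation keys mirror the provider obligations listed earlier in this guide, but the data structures, system names, and `gaps` method are hypothetical illustrations.

```python
from dataclasses import dataclass, field

# Keys mirroring the high-risk provider obligations in this guide.
OBLIGATIONS = [
    "risk_management", "data_governance", "technical_documentation",
    "record_keeping", "transparency", "human_oversight",
    "accuracy_robustness_cybersecurity",
]

@dataclass
class AISystem:
    name: str
    risk_tier: str                           # "unacceptable"|"high"|"limited"|"minimal"
    controls_in_place: set[str] = field(default_factory=set)

    def gaps(self) -> list[str]:
        """Obligations not yet evidenced; only high-risk systems have gaps here."""
        if self.risk_tier != "high":
            return []
        return [o for o in OBLIGATIONS if o not in self.controls_in_place]

# Example inventory (system names are invented).
inventory = [
    AISystem("spam-filter", "minimal"),
    AISystem("cv-screening-v2", "high", {"risk_management", "record_keeping"}),
]
for system in inventory:
    print(system.name, "->", system.gaps())
```

Running the gap analysis per system gives a concrete remediation backlog for the August 2026 deadline, rather than a vague sense of "partial compliance".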
The EU AI Act sets a global precedent. Organisations that invest early in compliance will be well positioned as other jurisdictions adopt similar approaches.