EU AI Act Timeline: What You Need to Comply With and When
The EU AI Act entered into force in August 2024, but its requirements phase in over three years. Here's a practical timeline of what's prohibited now, what's required for high-risk AI systems, and the key compliance dates.
A Phased Approach to AI Regulation
The EU AI Act is the world's first comprehensive AI regulation. It entered into force on 1 August 2024, but compliance requirements are staggered over a three-year implementation period. Understanding the timeline is critical for any organisation developing, deploying, or using AI systems in the European market.
February 2025: Prohibited AI Practices
The first requirements took effect six months after entry into force. From February 2025, the following AI practices are prohibited:
- Social scoring systems by public authorities
- Real-time remote biometric identification in public spaces (with limited law enforcement exceptions)
- AI that exploits vulnerabilities of specific groups (age, disability, or social or economic situation)
- AI that uses subliminal techniques to distort behaviour in ways that cause harm
- AI that infers emotions in workplaces and educational institutions (with limited exceptions)
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
August 2025: Governance and General-Purpose AI
Twelve months after entry into force, governance requirements and rules for general-purpose AI (GPAI) models take effect:
- National competent authorities must be designated
- Rules for GPAI models apply, including transparency requirements and technical documentation
- Providers of GPAI models with systemic risk face additional obligations including model evaluations and incident reporting
August 2026: High-Risk AI Systems
The most substantive requirements, those for high-risk AI systems, take effect two years after entry into force:
- Mandatory conformity assessments for high-risk AI systems
- Risk management systems, data governance, technical documentation, and record-keeping
- Transparency obligations and human oversight requirements
- Registration in the EU database for high-risk AI systems
High-risk categories include AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
What Organisations Should Do Now
Don't wait for the deadlines. Start with an AI inventory: catalogue every AI system your organisation develops or deploys. Classify each system against the Act's risk categories. For high-risk systems, begin building the required risk management and quality management systems now. The two-year runway for high-risk compliance will pass quickly, especially for organisations with complex AI portfolios.
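A first-pass triage of an AI inventory can be expressed as a simple data model. The sketch below is illustrative only: the tier names, domain keywords, and `classify` logic are simplified assumptions, not the Act's legal tests, and any real classification needs legal review.

```python
from dataclasses import dataclass
from enum import Enum

# Simplified risk tiers loosely mirroring the EU AI Act's structure
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    MINIMAL_RISK = "minimal_risk"

# Illustrative high-risk domains drawn from the categories above
# (not an exhaustive or authoritative mapping to Annex III)
HIGH_RISK_DOMAINS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    domain: str
    uses_social_scoring: bool = False  # example prohibited-practice flag

def classify(system: AISystem) -> RiskTier:
    """Rough first-pass triage for inventory planning, not legal advice."""
    if system.uses_social_scoring:
        return RiskTier.PROHIBITED
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH_RISK
    return RiskTier.MINIMAL_RISK

# Walk a hypothetical inventory and tag each system
inventory = [
    AISystem("cv-screening", domain="employment"),
    AISystem("support-chatbot", domain="customer_support"),
]
for system in inventory:
    print(f"{system.name}: {classify(system).value}")
```

Even a rough triage like this helps prioritise: systems landing in the high-risk bucket are the ones that need conformity assessment and documentation work started well before August 2026.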
Explore this topic on our compliance platform
Our platform covers 692 compliance frameworks with 819,000+ cross-framework control mappings. Start free, no credit card required.
Try the Platform Free →