EU AI Act Risk Classification Explained
The EU AI Act uses a risk-based approach to regulate artificial intelligence, categorizing AI systems into four tiers based on their potential for harm. The category your AI system falls into determines which compliance obligations apply.
Key Takeaways
| Point | Summary |
|---|---|
| Risk tiers | Unacceptable, high-risk, limited risk, and minimal risk |
| Unacceptable risk | Prohibited AI practices that threaten fundamental rights or safety |
| High-risk | Systems subject to extensive requirements before market placement |
| Limited risk | Systems requiring transparency obligations (e.g., disclosure of AI use) |
| Minimal risk | No specific requirements, but codes of conduct encouraged |
Quick Answer: The EU AI Act classifies AI systems into four risk categories. Most AI falls into minimal risk with no requirements. High-risk systems face extensive compliance obligations, while unacceptable-risk AI is prohibited entirely. Your classification depends on the AI system's use case and potential impact.
The Four Risk Categories
1. Unacceptable Risk (Prohibited)
Certain AI applications are banned outright because they pose clear threats to people's safety, livelihoods, or rights.
Prohibited practices include:
| Practice | Description |
|---|---|
| Social scoring | Evaluating or classifying individuals based on social behavior or personal characteristics, with the score leading to detrimental or unfavorable treatment |
| Subliminal manipulation | Techniques that deploy subliminal elements to materially distort behavior in harmful ways |
| Exploitation of vulnerabilities | Systems that exploit vulnerabilities of specific groups (age, disability, economic situation) |
| Real-time biometric identification | Remote identification in public spaces for law enforcement (with narrow exceptions) |
| Emotion recognition | Recognition of emotions in workplaces and educational institutions (with exceptions) |
| Biometric categorization | Categorizing individuals based on biometric data to infer sensitive attributes |
| Untargeted facial scraping | Creating facial recognition databases through untargeted scraping |
| Predictive policing | Making predictions about criminal offense risk based solely on profiling |
Exceptions exist for:
- Real-time biometric identification may be permitted for specific law enforcement purposes (missing children, imminent threats, serious crimes) with prior authorization by a judicial or independent administrative authority
- Emotion recognition for medical or safety purposes may be allowed
2. High-Risk AI Systems
High-risk AI systems are permitted but subject to extensive requirements. A system is classified as high-risk if it falls into one of two categories:
Category 1: Safety Components
AI systems that meet both of the following conditions:
- They are safety components of products covered by EU harmonization legislation (e.g., medical devices, machinery, toys), or are such products themselves
- The product is required to undergo third-party conformity assessment under that legislation
Category 2: Annex III Listed Uses
AI systems used in specific high-risk areas:
| Area | Examples |
|---|---|
| Biometrics | Remote biometric identification, biometric categorization, emotion recognition |
| Critical infrastructure | Management of road traffic, water, gas, heating, electricity |
| Education | Determining access to education, evaluating learning outcomes, detecting cheating |
| Employment | CV screening, interview analysis, promotion decisions, task allocation, termination |
| Essential services | Credit scoring, life/health insurance pricing, emergency services dispatching |
| Law enforcement | Evidence reliability assessment, polygraphs, crime analytics, profiling |
| Migration and asylum | Risk assessments, document authenticity verification |
| Justice | Assisting judicial authorities in researching and interpreting facts and law |
Important exception: An AI system listed in Annex III is NOT considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing decision-making outcomes.
This exception requires the provider to document the assessment and register it before placing the system on the market.
3. Limited Risk AI Systems
Limited risk systems have transparency obligations but fewer technical requirements than high-risk systems.
| System Type | Transparency Requirement |
|---|---|
| AI interacting with humans | Users must be informed they are interacting with an AI system |
| Emotion recognition systems | Users must be informed that such a system is in use |
| Biometric categorization | Users must be informed of the categorization |
| AI-generated content | Must be marked as artificially generated or manipulated (deep fakes) |
4. Minimal Risk AI Systems
The majority of AI systems fall into this category and face no specific legal requirements under the EU AI Act.
Examples of minimal risk AI:
- Spam filters
- AI-enabled video games
- Recommendation algorithms for entertainment
- AI-assisted writing tools
- Inventory management systems
While not legally required, the regulation encourages providers of minimal risk systems to voluntarily adopt codes of conduct that apply some or all of the requirements for high-risk systems.
How to Classify Your AI System
Follow this decision process to determine your AI system's risk classification:
Step 1: Check for Prohibited Practices
Does your AI system involve any of the explicitly prohibited uses?
- If yes: The system cannot be placed on the EU market
- If no: Continue to Step 2
Step 2: Check for Safety Component Status
Is your AI system a safety component of a product covered by EU product safety legislation, or is it such a product itself?
- If yes and third-party conformity assessment is required: High-risk
- If no: Continue to Step 3
Step 3: Check Annex III Categories
Is your AI system used for any purpose listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)?
- If yes: Likely high-risk, unless the exception applies
- If no: Continue to Step 4
Step 4: Check for General-Purpose AI
Is your AI system a general-purpose AI model (GPAI)?
- If yes: Specific GPAI rules apply (separate framework)
- If no: Continue to Step 5
Step 5: Check Transparency Obligations
Does your system interact directly with natural persons, generate synthetic content, or perform emotion recognition?
- If yes: Limited risk with transparency obligations
- If no: Minimal risk with no specific requirements
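As a rough illustration of how the five steps above fit together, the sketch below encodes them as a simple triage function. The type names and boolean inputs are assumptions made for this example; the real assessment turns on legal interpretation rather than flags, so treat this as a thinking aid, not a compliance tool.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (prohibited)"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"


@dataclass
class AISystemProfile:
    """Simplified, hypothetical description of an AI system for triage purposes."""
    uses_prohibited_practice: bool       # Step 1: any prohibited practice
    is_regulated_safety_component: bool  # Step 2: safety component needing third-party conformity assessment
    annex_iii_use_case: bool             # Step 3: intended purpose listed in Annex III
    annex_iii_exception_applies: bool    # documented "not high-risk" exception
    is_gpai_model: bool                  # Step 4: general-purpose AI model (separate framework)
    triggers_transparency: bool          # Step 5: interacts with people, generates content, etc.


def classify(profile: AISystemProfile) -> RiskTier:
    """Mirror the five-step triage described above; illustrative, not legal advice."""
    if profile.uses_prohibited_practice:
        return RiskTier.PROHIBITED          # Step 1: cannot be placed on the EU market
    if profile.is_regulated_safety_component:
        return RiskTier.HIGH                # Step 2: product safety route
    if profile.annex_iii_use_case and not profile.annex_iii_exception_applies:
        return RiskTier.HIGH                # Step 3: Annex III use case
    # Step 4: GPAI models carry their own obligations in addition to the tier returned here.
    if profile.triggers_transparency:
        return RiskTier.LIMITED             # Step 5: disclosure duties apply
    return RiskTier.MINIMAL
```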
High-Risk AI: The Exception That May Apply
Even if your AI system falls under Annex III categories, you may be able to demonstrate it is not high-risk if:
- The system performs a narrow procedural task
- The system improves the result of a previously completed human activity
- The system detects decision-making patterns without replacing human assessment
- The system performs a preparatory task for an assessment
Documentation requirements for claiming the exception:
- Written assessment explaining why the system does not pose significant risks
- Risk assessment methodology used
- Evidence supporting the determination
- Registration in the EU database before market placement
This exception is narrow and should be applied carefully; note that Annex III systems that perform profiling of natural persons are always considered high-risk. When in doubt, assume the high-risk classification applies.
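One way to keep this documentation consistent across systems is a structured record mirroring the bullet points above. The sketch below is a minimal example; the field names are assumptions for illustration, not terms defined by the Act.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AnnexIIIExceptionRecord:
    """Illustrative structure for documenting a 'not high-risk' assessment; field names are assumptions."""
    system_name: str
    annex_iii_area: str                       # e.g. "Employment"
    condition_relied_on: str                  # e.g. "narrow procedural task"
    rationale: str                            # why no significant risk to health, safety, or fundamental rights
    methodology: str                          # risk assessment methodology used
    supporting_evidence: list[str] = field(default_factory=list)
    assessed_on: date = field(default_factory=date.today)
    registered_in_eu_database: bool = False   # must be completed before market placement
```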
General-Purpose AI Models (GPAI)
The EU AI Act includes specific rules for general-purpose AI models, recognizing that these powerful foundation models can be used for many purposes:
| GPAI Category | Requirements |
|---|---|
| All GPAI models | Technical documentation, information for downstream providers, a copyright compliance policy, and a training-content summary; providers established outside the EU must appoint an authorized representative |
| GPAI with systemic risk | Additional obligations including model evaluations, adversarial testing, cybersecurity measures, incident reporting |
A GPAI model is classified as posing systemic risk if either:
- It has high-impact capabilities, presumed when cumulative training compute exceeds 10^25 FLOPs. The presumption is rebuttable; providers may contest it with evidence that the model does not pose systemic risk.
- The European Commission designates it as such based on specific criteria
Note: The 10^25 FLOP threshold may be amended by the Commission via delegated acts to reflect technological developments. Providers must notify the Commission within two weeks of reasonably foreseeing or reaching this threshold, even before placing the model on the market.
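Because the presumption hinges on a single compute figure, a first-pass monitoring check is straightforward to sketch. The threshold constant below reflects the 10^25 FLOP figure cited above and would need updating if the Commission amends it.

```python
# Presumption threshold for high-impact capabilities (may be amended by delegated act).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Return True when the 10^25 FLOP presumption is met.

    The presumption is rebuttable, and the Commission can also designate models
    on other criteria, so this is only a first-pass trigger for the two-week
    notification duty described above.
    """
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# A hypothetical training run of 3.2e25 FLOPs meets the presumption; 8e24 does not.
assert presumed_systemic_risk(3.2e25)
assert not presumed_systemic_risk(8e24)
```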
Practical Examples
| AI System | Classification | Reasoning |
|---|---|---|
| Resume screening tool | High-risk | Employment use case in Annex III |
| Customer service chatbot | Limited risk | Requires disclosure of AI interaction |
| Email spam filter | Minimal risk | No significant harm potential |
| Credit scoring algorithm | High-risk | Essential services use case in Annex III |
| AI-generated marketing images | Limited risk | Synthetic content requires marking |
| Predictive maintenance | Generally minimal | Unless critical infrastructure safety component |
| Medical diagnosis assistant | High-risk | Medical device safety component |
| Recommendation engine (retail) | Minimal risk | Entertainment/commerce, no significant harm |
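Applied to this table, the earlier classify sketch would map the first and third rows roughly as follows (inputs are illustrative simplifications of each use case).

```python
resume_screener = AISystemProfile(
    uses_prohibited_practice=False,
    is_regulated_safety_component=False,
    annex_iii_use_case=True,           # Annex III: employment (CV screening)
    annex_iii_exception_applies=False,
    is_gpai_model=False,
    triggers_transparency=False,
)
spam_filter = AISystemProfile(False, False, False, False, False, False)

print(classify(resume_screener).value)  # high-risk
print(classify(spam_filter).value)      # minimal risk
```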
What Happens If You Misclassify?
Misclassifying an AI system to avoid requirements carries significant risks:
- Penalties of up to 15 million EUR or 3% of global annual turnover (whichever is higher) for non-compliance with the Act's obligations, and up to 7.5 million EUR or 1% for supplying incorrect information to authorities
- Market access denied if the misclassification is discovered during market surveillance
- Liability exposure if harms occur from an AI system that should have had safeguards
- Reputational damage from regulatory enforcement actions
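Since the fines are framed as a fixed cap or a percentage of worldwide annual turnover, whichever is higher, the maximum exposure for a given tier is simple to compute. The sketch below assumes the 15 million EUR / 3% tier cited above; verify the figures against the current text of the Act.

```python
def max_fine(global_annual_turnover_eur: float,
             fixed_cap_eur: float = 15_000_000,
             turnover_pct: float = 0.03) -> float:
    """Upper bound of an administrative fine: the fixed cap or the percentage of
    worldwide annual turnover, whichever is higher (defaults assume the 15M EUR / 3% tier)."""
    return max(fixed_cap_eur, turnover_pct * global_annual_turnover_eur)


# A provider with 2 billion EUR in global turnover: 3% (60M EUR) exceeds the 15M EUR cap.
print(f"{max_fine(2_000_000_000):,.0f} EUR")  # 60,000,000 EUR
```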
How Bastion Helps
Bastion helps organizations navigate AI Act classification:
- AI system inventory. We help identify and catalog all AI systems across your organization.
- Classification assessment. We evaluate each system against EU AI Act criteria and document the risk classification.
- Gap analysis. For high-risk systems, we identify what compliance measures are needed.
- Documentation support. We help prepare the assessments and documentation required for claiming exceptions where applicable.
- Ongoing monitoring. As guidance evolves and the EU issues clarifications, we help ensure classifications remain accurate.
Ready to classify your AI systems? Talk to our team
Sources
- EU AI Act Annex I and III (EUR-Lex) - Official list of high-risk AI system categories
- European Commission AI Risk Classification - Overview of the risk-based approach
