
What is the EU AI Act? A Complete Guide

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework regulating artificial intelligence. It establishes a risk-based approach to AI governance, setting requirements for developers and deployers of AI systems based on the potential harm those systems could cause.

Key Takeaways

  • What it is: EU regulation establishing requirements for AI systems based on their risk level
  • Effective date: entered into force August 1, 2024, with phased implementation through 2027
  • Who it applies to: providers, deployers, importers, and distributors of AI systems in the EU market
  • Maximum penalty: up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices
  • Key approach: risk-based classification into unacceptable, high-risk, limited-risk, and minimal-risk categories

Quick Answer: The EU AI Act is the first comprehensive AI regulation globally. It classifies AI systems by risk level and imposes requirements accordingly. Organizations placing AI systems on the EU market or using AI to affect EU residents must comply, regardless of where they are headquartered.

Why the EU AI Act Matters

The EU AI Act represents a significant shift in how AI is governed. Until now, AI development has been largely self-regulated, with organizations setting their own standards for safety, transparency, and accountability. The EU AI Act changes this by establishing legally binding requirements.

Key reasons organizations should pay attention:

  • Market access. Non-compliant AI systems cannot be placed on the EU market or used to provide services to EU residents.
  • Enterprise sales. B2B customers are increasingly asking about AI governance as part of vendor assessments.
  • Regulatory precedent. The EU AI Act is likely to influence AI regulation in other jurisdictions, similar to how GDPR shaped global privacy law.
  • Liability exposure. Non-compliance can result in significant penalties and may affect liability in case of AI-related harms.
  • Competitive advantage. Early compliance demonstrates responsible AI practices, building trust with customers and partners.

The Risk-Based Approach

The EU AI Act categorizes AI systems into four risk levels, with requirements scaled to potential harm:

  • Unacceptable: AI practices that pose clear threats to safety, livelihoods, or rights. Prohibited outright.
  • High-risk: AI systems with significant potential to cause harm. Subject to extensive compliance requirements.
  • Limited risk: AI systems with specific transparency concerns. Subject to transparency obligations.
  • Minimal risk: all other AI systems. No specific requirements, though codes of conduct are encouraged.

Most AI systems fall into the minimal risk category and face no specific obligations under the regulation. However, organizations should carefully assess whether their AI systems might qualify as high-risk, as the requirements for this category are substantial.
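
As a rough illustration only, the sketch below encodes the four tiers in Python. The practice and application-area flags are hypothetical simplifications; the Act's actual classification rules (set out in its annexes) are far more detailed, so treat this as a sketch of the decision structure, not a legal test.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "extensive compliance requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific requirements"

# Hypothetical, heavily simplified flags for illustration.
PROHIBITED_PRACTICES = {"social_scoring", "untargeted_face_scraping"}
HIGH_RISK_AREAS = {"employment", "credit_scoring", "education", "law_enforcement"}
TRANSPARENCY_TRIGGERS = {"chatbot", "deepfake_generation"}

def classify(practice: str, application_area: str) -> RiskTier:
    """Toy triage mirroring the Act's four tiers; not a legal assessment."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if application_area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if practice in TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("chatbot", "customer_support"))  # RiskTier.LIMITED
```

Note how the checks run in order of severity: a prohibited practice ends the analysis regardless of application area, which mirrors how the Act's tiers take precedence over one another.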

For a detailed breakdown, see our guide on EU AI Act risk classification.

Who Does the EU AI Act Apply To?

The regulation defines several roles with distinct obligations:

  • Provider: develops an AI system or places it on the market under their own name. Bears primary compliance responsibility, including conformity assessments and technical documentation.
  • Deployer: uses an AI system under their authority (other than for personal, non-professional use). Responsible for human oversight, data governance, and incident reporting.
  • Importer: brings an AI system from outside the EU into the EU market. Must verify provider compliance and maintain documentation.
  • Distributor: makes an AI system available on the EU market without being the provider or importer. Must verify compliance markings and ensure proper storage and transport conditions.

Important: The regulation applies based on where AI systems have effects, not just where organizations are located. A company headquartered in the United States that provides AI systems to EU customers or uses AI to make decisions affecting EU residents falls within scope.

For more details, see our guide on who needs to comply with the EU AI Act.

What Counts as an AI System?

The EU AI Act defines an AI system as:

A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This definition is intentionally broad and covers:

  • Machine learning systems (supervised, unsupervised, reinforcement learning)
  • Deep learning and neural networks
  • Statistical approaches and Bayesian estimation
  • Search and optimization methods
  • Expert systems and knowledge-based reasoning

Traditional software that operates on fixed, predetermined rules without learning or inference capabilities generally falls outside the scope.
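
The practical dividing line is whether the system infers how to generate outputs rather than merely executing rules a human wrote down. Here is a minimal Python sketch of that contrast; both functions are hypothetical examples, not a test prescribed by the regulation.

```python
# Fixed, predetermined rules: every outcome is hand-written by a human,
# so logic like this generally falls outside the Act's definition.
def rule_based_discount(order_total: float) -> float:
    return 0.10 if order_total > 100 else 0.0

# Inference from data: the threshold is *learned* from examples rather
# than hand-written, which is the behaviour the definition targets.
def fit_threshold(totals: list[float], converted: list[bool]) -> float:
    """Pick the order-total cutoff that best separates converting customers."""
    def accuracy(t: float) -> float:
        return sum((x > t) == y for x, y in zip(totals, converted)) / len(totals)
    return max(sorted(set(totals)), key=accuracy)
```

Even a simple statistical fit like the second function involves inference from inputs, which is why the definition covers statistical approaches and not just deep learning.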

Key Requirements for High-Risk AI Systems

Organizations providing or deploying high-risk AI systems face the most significant compliance obligations:

  • Risk management: implement and maintain a risk management system throughout the AI system lifecycle
  • Data governance: ensure training, validation, and testing data meets quality criteria
  • Record-keeping: enable automatic logging of events throughout the system's operation (see the sketch after this list)
  • Technical documentation: maintain comprehensive documentation demonstrating compliance
  • Transparency: provide clear information to deployers about the system's capabilities and limitations
  • Human oversight: design systems to allow effective human oversight
  • Accuracy and robustness: ensure appropriate levels of accuracy, robustness, and cybersecurity
  • Conformity assessment: complete the required conformity assessment procedures before market placement
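
Of these requirements, record-keeping translates most directly into code. Below is a minimal sketch, assuming a generic `model.predict` interface, of how a provider might wrap inference so every call automatically emits a structured audit event; the field names are illustrative, not a format prescribed by the Act.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

def logged_predict(model, features: dict) -> object:
    """Run inference and automatically record an audit event for each call."""
    event_id = str(uuid.uuid4())
    start = time.time()
    prediction = model.predict(features)  # assumed model interface
    log.info(json.dumps({
        "event_id": event_id,
        "timestamp": start,
        "model_version": getattr(model, "version", "unknown"),
        "input_fields": sorted(features),  # field names only, no raw values
        "prediction": str(prediction),
        "latency_ms": round((time.time() - start) * 1000, 2),
    }))
    return prediction
```

Logging input field names rather than raw values is a deliberate choice in this sketch: audit logs can otherwise become a secondary store of personal data and pull the logging pipeline itself into GDPR scope.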

Prohibited AI Practices

Certain AI applications are prohibited entirely due to unacceptable risks:

  • Subliminal, manipulative, or deceptive techniques that materially distort behaviour and cause significant harm
  • Social scoring, by public or private actors, that leads to detrimental or unfavourable treatment
  • Exploitation of vulnerabilities of specific groups (age, disability, social or economic situation)
  • Real-time biometric identification in public spaces for law enforcement (with limited exceptions)
  • Emotion recognition in workplaces and educational institutions (with limited exceptions)
  • Biometric categorization inferring sensitive characteristics (race, political opinions, religious beliefs)
  • Facial recognition databases built through untargeted scraping of facial images
  • Predictive policing based solely on profiling or personality traits

How the EU AI Act Relates to Other Frameworks

The EU AI Act intersects with several existing frameworks:

  • GDPR: both apply when AI processes personal data; GDPR focuses on data protection, while the AI Act addresses AI-specific risks.
  • ISO 42001: a voluntary standard for AI management systems that can help demonstrate AI Act compliance.
  • ISO 27001: information security controls that support the AI Act's cybersecurity requirements.
  • NIS 2: organizations in scope for NIS 2 may have additional cybersecurity obligations for their AI systems.

Implementation Timeline

The EU AI Act entered into force on August 1, 2024, with a phased implementation:

  • August 2024: regulation enters into force
  • February 2025: prohibitions on unacceptable AI practices apply
  • August 2025: rules on general-purpose AI models apply
  • August 2026: full application of high-risk AI requirements
  • August 2027: end of the extended transition for high-risk AI systems that are safety components of products covered by EU harmonisation legislation

For complete timeline details and penalty information, see our guide on EU AI Act timeline and enforcement.
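
As a quick self-check, the sketch below reports which milestones are already in effect on a given date. The specific day-of-month values (most obligations apply from the 2nd of the month) are an assumption based on the commonly cited application dates; verify them against the regulation itself.

```python
from datetime import date

# Milestone dates following the timeline above (days assumed, see note).
MILESTONES = {
    date(2024, 8, 1): "Regulation enters into force",
    date(2025, 2, 2): "Prohibitions on unacceptable AI practices apply",
    date(2025, 8, 2): "Rules on general-purpose AI models apply",
    date(2026, 8, 2): "Full application of high-risk AI requirements",
    date(2027, 8, 2): "Extended transition ends for high-risk safety components",
}

def applicable_milestones(today: date) -> list[str]:
    """Return every milestone already in effect on the given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

for label in applicable_milestones(date.today()):
    print(label)
```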

How Bastion Helps with EU AI Act Compliance

Bastion provides comprehensive support for organizations navigating EU AI Act compliance:

  • AI inventory assessment. We help identify all AI systems in your organization and classify them according to the regulation's risk categories.
  • Gap analysis. We evaluate your current AI governance practices against EU AI Act requirements and identify areas needing attention.
  • Documentation support. We help develop the technical documentation, risk management processes, and conformity procedures required for high-risk systems.
  • Integration with existing frameworks. If you already have ISO 27001 or SOC 2 certifications, we help extend those frameworks to cover AI-specific requirements.
  • Ongoing compliance. We provide continuous monitoring as the regulatory landscape evolves and new guidance is issued.

Ready to assess your EU AI Act compliance needs? Talk to our team

