Key Takeaways
| Point | Summary |
| --- | --- |
| Start with inventory | You cannot assess compliance without knowing what AI you have |
| Classification is critical | Risk classification determines your obligations |
| Leverage existing frameworks | ISO 27001, SOC 2, and GDPR programs provide foundations |
| Documentation matters | Many requirements center on demonstrable compliance through documentation |
| Plan for ongoing compliance | AI Act compliance is continuous, not one-time |
Quick Answer: Prepare for the EU AI Act by completing an AI inventory, classifying each system by risk level, conducting gap analysis against applicable requirements, prioritizing remediation based on risk and timeline, and establishing ongoing governance. Organizations with existing compliance programs can leverage those foundations.
Step 1: Build Your AI Inventory
Before you can assess compliance, you need visibility into what AI systems your organization uses and provides.
What to Capture
| Data Point | Why It Matters |
| --- | --- |
| System name and description | Basic identification |
| AI technology type | Helps determine whether it meets the AI system definition |
| Provider | Internal development vs. third-party |
| Use case | Critical for risk classification |
| Data inputs | Understand what data the AI processes |
| Output type | Predictions, recommendations, decisions, content |
| Users | Who interacts with the system |
| Affected individuals | Who is impacted by AI outputs |
| Geographic scope | Does it affect EU residents? |
| Business criticality | Helps prioritize compliance efforts |
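As a concrete starting point, the sketch below shows one way these data points might be captured as an inventory record. It is a minimal illustration, not a prescribed schema; every field name here is an assumption you would adapt to your own tooling.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI inventory; fields mirror the table above."""
    name: str                    # System name
    description: str             # Basic identification
    technology_type: str         # e.g. "ML model", "LLM", "rules engine"
    provider: str                # Internal team or third-party vendor
    use_case: str                # Critical for risk classification
    data_inputs: list[str] = field(default_factory=list)
    output_type: str = ""        # Predictions, recommendations, decisions, content
    users: list[str] = field(default_factory=list)
    affected_individuals: list[str] = field(default_factory=list)
    affects_eu_residents: bool = False     # Geographic scope
    business_criticality: str = "unknown"  # Helps prioritize compliance efforts

# Example entry
inventory = [
    AISystemRecord(
        name="Support Chatbot",
        description="Customer-facing assistant on the help portal",
        technology_type="LLM",
        provider="Third-party SaaS",
        use_case="Customer service",
        data_inputs=["chat messages", "account metadata"],
        output_type="Generated text responses",
        users=["customers", "support agents"],
        affected_individuals=["customers"],
        affects_eu_residents=True,
        business_criticality="medium",
    ),
]
```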
Where to Look
AI systems may exist across your organization in expected and unexpected places:
| Area | Common AI Systems |
| --- | --- |
| Product/engineering | AI features in products, ML models, recommendation engines |
| Sales and marketing | Lead scoring, personalization, content generation |
| Customer service | Chatbots, ticket routing, sentiment analysis |
| HR | Resume screening, interview analysis, performance tools |
| Finance | Fraud detection, forecasting, risk assessment |
| Operations | Predictive maintenance, demand forecasting |
| IT/Security | Threat detection, anomaly detection, access management |
| Third-party tools | AI features in SaaS products you use |
Inventory Challenges
| Challenge | Approach |
| --- | --- |
| Shadow AI | Survey business units; review software procurement |
| Embedded AI | Review vendor documentation for AI capabilities |
| Unclear boundaries | Apply the AI system definition consistently |
| Rapid change | Establish an ongoing inventory maintenance process |
Step 2: Classify by Risk
For each AI system in your inventory, determine its risk classification by working through the checks below in order; a short code sketch of the decision flow follows the list.
Classification Workflow
1. Check prohibited list. Is this use case banned under the AI Act?
2. Check safety components. Is this AI a safety component of a regulated product?
3. Check Annex III. Is the use case listed in the high-risk categories?
4. Apply exception. If Annex III applies, does the "not significant risk" exception apply?
5. Check transparency. Does the system interact with humans or generate content?
6. Default to minimal. If none of the above, it is minimal risk.
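Here is a minimal Python sketch of that decision flow, assuming each check has already been answered by the underlying legal analysis. The boolean fields stand in for those judgments; the structure captures only the order of the checks, not the analysis itself.

```python
from dataclasses import dataclass

@dataclass
class ClassificationInputs:
    """Answers to the workflow questions; each boolean is a legal judgment."""
    prohibited_use: bool         # Step 1: banned practice?
    safety_component: bool       # Step 2: safety component of a regulated product?
    annex_iii_use_case: bool     # Step 3: listed in Annex III?
    exception_applies: bool      # Step 4: "not significant risk" exception
    interacts_with_humans: bool  # Step 5: human interaction?
    generates_content: bool      # Step 5: content generation?

def classify(c: ClassificationInputs) -> str:
    """Apply the checks in the order given in the workflow above."""
    if c.prohibited_use:
        return "prohibited"
    if c.safety_component:
        return "high-risk"
    if c.annex_iii_use_case and not c.exception_applies:
        return "high-risk"
    if c.interacts_with_humans or c.generates_content:
        return "limited-risk (transparency obligations)"
    return "minimal-risk"  # Step 6: default

# Example: a customer-service chatbot
print(classify(ClassificationInputs(
    prohibited_use=False, safety_component=False,
    annex_iii_use_case=False, exception_applies=False,
    interacts_with_humans=True, generates_content=True,
)))  # -> limited-risk (transparency obligations)
```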
Document Your Classification
For each AI system, record:
| Element | Documentation |
| --- | --- |
| Classification decision | Which risk level and why |
| Regulation references | Specific articles or annexes that apply |
| Assessment rationale | Why this classification is appropriate |
| Exception analysis | If claiming an exception, detailed justification |
| Review date | When the classification should be reassessed |
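To make these elements concrete, here is one illustrative way to structure a classification record. The field names and the regulation references in the example are assumptions for illustration, not legal advice.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationRecord:
    """Documented classification decision; fields mirror the table above."""
    system: str
    decision: str               # Which risk level
    regulation_refs: list[str]  # Specific articles or annexes that apply
    rationale: str              # Why this classification is appropriate
    exception_analysis: str     # Detailed justification if claiming an exception
    review_date: date           # When the classification should be reassessed

record = ClassificationRecord(
    system="Resume Screener",
    decision="high-risk",
    regulation_refs=["Annex III, point 4", "Article 6(2)"],  # illustrative
    rationale="Used to screen and filter job applications",
    exception_analysis="No exception claimed",
    review_date=date(2026, 1, 1),
)
```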
Step 3: Conduct Gap Analysis
Compare your current state against requirements for each risk level:
High-Risk AI Gap Assessment
| Requirement Area | Assessment Questions |
| --- | --- |
| Risk management | Do you have a documented risk management process for this AI? Is it maintained throughout the lifecycle? |
| Data governance | Is training data documented? Have you assessed data quality, relevance, and representativeness? |
| Technical documentation | Do you have comprehensive documentation of design, development, and capabilities? |
| Record-keeping | Does the system generate and retain appropriate logs? |
| Transparency | Do users receive clear information about capabilities and limitations? |
| Human oversight | Can humans meaningfully oversee and intervene in AI operations? |
| Accuracy and robustness | Have you tested and documented accuracy? Are cybersecurity measures in place? |
| Quality management | Is there a quality management system covering this AI? |
| Conformity assessment | What assessment procedure applies? Have you completed it? |
Limited-Risk AI Gap Assessment
| Requirement | Assessment Questions |
| --- | --- |
| Interaction disclosure | Do users know they are interacting with AI? |
| Content labeling | Is AI-generated content appropriately marked? |
| Emotion recognition disclosure | Are users informed when emotion recognition is used? |
Deployer Gap Assessment
| Requirement | Assessment Questions |
| --- | --- |
| Provider documentation | Do you have instructions for use and technical documentation? |
| Human oversight | Have you assigned competent personnel to oversee AI operation? |
| Input data | Are you providing appropriate input data? |
| Monitoring | Are you actively monitoring for issues? |
| Record retention | Are you retaining logs as required? |
| Employee notification | Have you informed workers subject to AI decisions? |
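One lightweight way to capture the answers to these assessment questions is a gap register. The sketch below is illustrative; the status values, fields, and example entries are assumptions, not a required format.

```python
from dataclasses import dataclass

@dataclass
class GapItem:
    """One row in a gap register (illustrative, not a required format)."""
    system: str       # AI system from the inventory
    requirement: str  # e.g. "Risk management", "Human oversight"
    status: str       # "met" / "partial" / "gap"
    evidence: str     # Where compliance is (or would be) demonstrated
    owner: str        # Who is responsible for remediation

register = [
    GapItem("Resume Screener", "Risk management", "partial",
            "Draft process exists; not yet maintained over lifecycle", "CISO"),
    GapItem("Resume Screener", "Human oversight", "gap",
            "No defined intervention mechanism", "HR lead"),
    GapItem("Support Chatbot", "Interaction disclosure", "met",
            "Banner discloses AI to users", "Product"),
]

# Summarize open gaps for remediation planning
for g in [g for g in register if g.status != "met"]:
    print(f"{g.system}: {g.requirement} ({g.status}) -> {g.owner}")
```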
Step 4: Prioritize and Remediate
Not all gaps carry equal weight. Prioritize based on the following factors (a scoring sketch follows the table):
| Factor | Consideration |
| --- | --- |
| Timeline | When do requirements become enforceable? |
| Risk level | Higher-risk systems warrant more urgent attention |
| Gap severity | Some gaps are more significant than others |
| Effort required | Some remediations take longer |
| Dependencies | Some changes require others to be completed first |
| Business impact | Operational disruption from changes |
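To make the trade-offs explicit, these factors can be folded into a rough priority score. The weights and 1-to-5 scales below are invented for illustration; any real weighting should reflect your own deadlines and risk appetite.

```python
# Illustrative weighted prioritization of remediation work.
# Weights and the 1-5 urgency scales are assumptions, not prescribed values.
WEIGHTS = {
    "timeline": 0.25,         # Sooner enforcement deadline -> higher score
    "risk_level": 0.25,       # Higher-risk system -> higher score
    "severity": 0.20,         # More significant gap -> higher score
    "effort": 0.10,           # Score low-effort fixes higher to bank quick wins
    "dependencies": 0.10,     # Fewer blockers -> higher score
    "business_impact": 0.10,  # Less operational disruption -> higher score
}

def priority_score(scores: dict[str, int]) -> float:
    """Combine per-factor scores (1 = low urgency, 5 = high) into one number."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

example_gap = {"timeline": 5, "risk_level": 4, "severity": 4,
               "effort": 2, "dependencies": 3, "business_impact": 3}
print(round(priority_score(example_gap), 2))  # -> 3.85
```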
Remediation Categories
| Category | Examples |
| --- | --- |
| Policy and process | Risk management processes, quality management systems, incident response |
| Documentation | Technical documentation, instructions for use, training data records |
| Technical measures | Logging capabilities, human oversight interfaces, accuracy testing |
| Organizational | Roles and responsibilities, training, competency development |
| Vendor management | Obtaining documentation, contractual updates, ongoing monitoring |
| Governance | Oversight structures, review processes, continuous improvement |
Step 5: Implement Governance
Sustainable compliance requires ongoing governance:
Roles and Responsibilities
| Role | Responsibilities |
| --- | --- |
| AI compliance lead | Overall coordination of AI Act compliance |
| Business unit owners | Compliance for AI systems in their areas |
| Technical teams | Implementing technical requirements |
| Legal/compliance | Regulatory interpretation and documentation |
| Procurement | Vendor assessment and contractual requirements |
| HR | Employee-related AI and training |
Governance Processes
| Process | Purpose |
| --- | --- |
| New AI review | Assess classification before deploying new AI |
| Change assessment | Evaluate whether changes affect classification |
| Periodic review | Regular reassessment of AI inventory and classifications |
| Incident management | Process for handling AI-related incidents |
| Vendor monitoring | Ongoing assessment of AI providers |
| Regulatory tracking | Monitor new guidance and enforcement |
Documentation Management
| Document Type | Maintenance Requirement |
| --- | --- |
| AI inventory | Update when systems are added, modified, or removed |
| Risk assessments | Review periodically and when significant changes occur |
| Technical documentation | Maintain throughout the AI system lifecycle |
| Training records | Document AI literacy training completion |
| Incident logs | Retain for required periods |
| Conformity evidence | Preserve for regulatory review |
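A small script can help enforce these maintenance requirements by flagging documents whose review is overdue. The register format and the review intervals below are illustrative assumptions; set them from your own policy.

```python
from datetime import date

# Illustrative register: (document, last review date, review interval in days).
register = [
    ("AI inventory", date(2025, 1, 15), 90),
    ("Risk assessment: Resume Screener", date(2024, 11, 1), 180),
    ("Technical documentation: Support Chatbot", date(2025, 3, 1), 365),
]

def overdue(register, today=None):
    """Return documents whose next review date has passed."""
    today = today or date.today()
    return [(doc, last) for doc, last, interval in register
            if (today - last).days > interval]

for doc, last in overdue(register):
    print(f"Review overdue: {doc} (last reviewed {last})")
```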
Leveraging Existing Frameworks
Organizations with existing compliance programs have advantages:
ISO 27001 Alignment
| ISO 27001 Element | AI Act Contribution |
| --- | --- |
| Risk assessment process | Foundation for AI risk management |
| Documentation practices | Templates and culture for AI documentation |
| Internal audit | Approach applicable to AI compliance verification |
| Management review | Governance structure for AI oversight |
| Incident management | Process adaptable for AI incidents |
| Supplier management | Framework for AI vendor assessment |
See our detailed guide on EU AI Act and ISO 27001/SOC 2.
SOC 2 Alignment
| SOC 2 Element | AI Act Contribution |
| --- | --- |
| Control environment | Foundation for AI governance |
| Risk assessment | Approach for AI risk identification |
| Monitoring activities | Basis for AI post-market monitoring |
| Change management | Process for assessing AI changes |
| Vendor management | Framework for AI provider oversight |
GDPR Alignment
| GDPR Element | AI Act Contribution |
| --- | --- |
| Data inventory | Starting point for AI inventory |
| DPIA process | Skills transferable to AI risk assessment |
| Documentation habits | Culture supports AI documentation needs |
| Vendor due diligence | Process applicable to AI providers |
| Incident response | Framework for AI incident handling |
See our guide on EU AI Act vs GDPR.
Common Pitfalls to Avoid
| Pitfall | Why It Matters |
| --- | --- |
| Underestimating scope | AI is more pervasive than many organizations realize |
| Classification errors | Misclassification leads to inadequate or excessive measures |
| Documentation gaps | Compliance is difficult to demonstrate without records |
| Ignoring third parties | Vendor AI creates obligations for your organization |
| One-time approach | Compliance requires ongoing maintenance |
| Siloed implementation | AI governance needs cross-functional coordination |
| Waiting too long | High-risk requirements take time to implement |
Quick Wins
Some compliance activities can be started immediately with relatively low effort:
| Quick Win | Impact |
| --- | --- |
| Start the inventory | Foundation for everything else |
| AI literacy training | Required by February 2025 |
| Add AI disclosures | Transparency for chatbots and AI content |
| Review prohibited practices | Identify any issues before February 2025 |
| Gather vendor documentation | Understand what your AI providers offer |
| Designate responsibility | Assign someone to coordinate AI compliance |
How Bastion Helps
Bastion provides comprehensive support for EU AI Act preparation:
- AI discovery. We help identify AI systems across your organization using structured approaches.
- Classification support. We apply the regulation's criteria consistently to classify your AI portfolio.
- Gap analysis. We assess current state against requirements and prioritize remediation.
- Documentation templates. We provide practical templates for required documentation.
- Framework integration. We help leverage existing ISO 27001, SOC 2, or GDPR programs.
- Governance design. We help establish sustainable AI governance structures.
- Implementation support. We guide remediation efforts through completion.
- Ongoing partnership. We provide continued support as regulations and your AI portfolio evolve.
Ready to start your EU AI Act preparation? Talk to our team.