
EU AI Act Compliance Requirements for SaaS Companies

SaaS companies using or providing AI face specific obligations under the EU AI Act. Whether you integrate AI features into your product, use AI for internal operations, or build AI-powered solutions, understanding these requirements is essential for serving EU customers.

Key Takeaways

| Point | Summary |
| --- | --- |
| Provider vs. deployer | Most SaaS companies are providers of their own AI features and deployers of third-party AI services |
| Common high-risk areas | HR tools, credit decisioning, and customer service AI may qualify as high-risk |
| Key requirements | Risk management, documentation, human oversight, and transparency |
| Customer obligations | Even when you provide the AI, your enterprise customers may have their own deployer obligations |
| Practical steps | AI inventory, risk classification, gap analysis, documentation, and monitoring |

Quick Answer: SaaS companies typically act as providers when offering AI features to customers and as deployers when using third-party AI internally. High-risk classifications commonly apply to HR tech, fintech, and customer decision-making use cases. Requirements include risk management, technical documentation, transparency, and human oversight mechanisms.

SaaS-Specific Role Analysis

Most SaaS companies hold multiple roles under the EU AI Act depending on their AI usage:

| Scenario | Your Role | Key Obligations |
| --- | --- | --- |
| You build AI features for your product | Provider | Full provider obligations for your AI |
| You integrate third-party AI into your product | Provider (of the integrated system) | Provider obligations for the combined system |
| You use AI for internal operations | Deployer | Follow provider instructions, oversight, monitoring |
| You resell AI solutions under your brand | Provider | You take on provider obligations |
| You use foundation models in your product | Provider (downstream) | Obligations depend on how you fine-tune and deploy |

Important distinction: When you integrate a third-party AI service (like an LLM API) into your SaaS product, you typically become the provider of the integrated AI system, even though you did not build the underlying model. This is because you determine how the AI is used within your product.
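
To make the distinction concrete, here is a minimal TypeScript sketch of the integrator-as-provider pattern: the model lives behind a third-party API, but the SaaS fixes the task, the constraints, and the downstream use. `callModelApi`, `summarizeSupportTicket`, and the prompt wording are hypothetical stand-ins, not any vendor's actual SDK.

```typescript
// Minimal sketch of the "integrator as provider" pattern: the model is a
// third-party service, but the SaaS defines the task, the constraints, and
// the downstream use. `callModelApi` is a hypothetical stand-in for a vendor SDK.

type ModelRequest = { system: string; input: string };

async function callModelApi(req: ModelRequest): Promise<string> {
  // Placeholder for the real vendor call (e.g., an HTTPS request to the model API).
  return `summary of: ${req.input.slice(0, 40)}...`;
}

// The SaaS, not the model vendor, defines the intended purpose of this feature.
export async function summarizeSupportTicket(ticketText: string): Promise<string> {
  const req: ModelRequest = {
    system:
      "Summarize this customer support ticket in two sentences. " +
      "Do not infer or mention protected characteristics.",
    input: ticketText,
  };
  return callModelApi(req);
}
```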

High-Risk Areas for SaaS

Several common SaaS categories may involve high-risk AI systems under Annex III:

HR Technology

| AI Use Case | Risk Level | Reasoning |
| --- | --- | --- |
| Resume screening | High-risk | Employment decisions, Annex III |
| Interview analysis | High-risk | Employment decisions, Annex III |
| Performance evaluation | High-risk | Promotion/termination decisions |
| Employee monitoring | High-risk | If used for evaluation purposes |
| Shift scheduling | Likely minimal | Unless based on performance profiling |

Fintech and Financial Services

| AI Use Case | Risk Level | Reasoning |
| --- | --- | --- |
| Credit scoring | High-risk | Access to essential services, Annex III |
| Loan decisioning | High-risk | Financial services access |
| Insurance pricing (life/health) | High-risk | Annex III specifically lists this |
| Fraud detection | Generally minimal | Unless used for significant decisions about individuals |
| Investment advice | Depends | May be high-risk depending on implementation |

Customer Service and Operations

| AI Use Case | Risk Level | Reasoning |
| --- | --- | --- |
| Chatbots | Limited risk | Transparency required (disclose AI interaction) |
| Sentiment analysis | Generally minimal | Unless used for decisions about individuals |
| Customer churn prediction | Generally minimal | Internal operational use |
| Lead scoring | Generally minimal | Commercial decisions, not protected categories |

Provider Requirements for SaaS

If you provide AI features in your SaaS product, your obligations depend on the risk classification:

For High-Risk AI Features

| Requirement | What It Means for SaaS |
| --- | --- |
| Risk management system | Document risks throughout the AI lifecycle, mitigation measures, and residual risks |
| Data governance | Ensure training data is relevant, representative, and as error-free as possible; document data sources and preparation |
| Technical documentation | Maintain comprehensive docs covering design, development, capabilities, and limitations |
| Record-keeping | Build automatic logging capabilities and retain logs appropriately (a minimal sketch follows this table) |
| Transparency to users | Provide clear documentation about AI capabilities, limitations, and proper use |
| Human oversight | Design systems to allow meaningful human oversight and intervention |
| Accuracy and robustness | Test and document accuracy levels; implement cybersecurity measures |
| Conformity assessment | Complete the required assessment before placing the system on the EU market |
| Quality management | Establish a QMS covering AI development and deployment |
| Post-market monitoring | Monitor system performance and collect feedback from deployers |
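
Of these, record-keeping is the easiest to bolt on early. A minimal sketch of automatic decision logging, assuming an append-only store; the field names are illustrative, not prescribed verbatim by the Act:

```typescript
// Minimal record-keeping sketch for a high-risk AI feature, assuming an
// append-only store. Field names are illustrative, not prescribed by the Act.

interface AiDecisionRecord {
  timestamp: string;      // when the decision was produced
  featureId: string;      // which AI feature produced it
  modelVersion: string;   // exact model and version, for traceability
  inputRef: string;       // reference/hash of the input (avoid logging raw personal data)
  output: string;         // the output as shown to the user
  humanReviewer?: string; // who reviewed or overrode it, if anyone
}

const auditLog: AiDecisionRecord[] = []; // stand-in for durable, append-only storage

export function recordDecision(rec: Omit<AiDecisionRecord, "timestamp">): void {
  auditLog.push({ timestamp: new Date().toISOString(), ...rec });
}
```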

For Limited-Risk AI Features

| Requirement | What It Means for SaaS |
| --- | --- |
| Disclose AI interaction | Inform users when they interact with chatbots or AI-generated content |
| Label synthetic content | Mark AI-generated images, audio, or video as artificially created (both duties are sketched below) |
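
In practice, both duties reduce to two small product hooks. A minimal sketch; the wording and markup are illustrative, since the Act requires the disclosure itself, not any particular implementation:

```typescript
// Minimal sketch of the two limited-risk transparency hooks. The wording and
// markup are illustrative; the Act requires the disclosure, not this exact form.

export function chatbotGreeting(): string {
  // Disclose the AI interaction at the start of the conversation.
  return "You are chatting with an AI assistant. You can ask for a human agent at any time.";
}

export function labelSyntheticContent(html: string): string {
  // Attach a machine- and human-readable marker to AI-generated content.
  return `<div data-ai-generated="true">${html}<p><em>Generated by AI</em></p></div>`;
}
```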

For Minimal-Risk AI Features

No specific requirements apply, but consider voluntarily adopting the high-risk controls above as a competitive differentiator.

Deployer Requirements for SaaS

When you use third-party AI services in your SaaS operations, you have deployer obligations:

| Requirement | Practical Implementation |
| --- | --- |
| Follow instructions | Use AI services according to provider documentation |
| Human oversight | Assign qualified staff to oversee AI operations |
| Input data quality | Ensure data you feed to the AI is relevant and appropriate |
| Monitor for risks | Watch for unexpected behaviors or outputs (see the sketch below) |
| Keep records | Retain automatically generated logs under your control (for high-risk systems, at least six months) |
| Report incidents | Notify providers and authorities of serious incidents |
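
For the monitoring and incident duties, a lightweight sampling hook is often enough to start with. A minimal sketch, where the anomaly heuristic and the `escalate` hook are assumptions to be replaced with checks that match your feature's actual risks:

```typescript
// Minimal deployer-side monitoring sketch: sample AI outputs, flag anomalies,
// and escalate. The heuristic and escalation hook are illustrative assumptions.

interface OutputSample {
  featureId: string;
  output: string;
  flagged: boolean;
}

const recentSamples: OutputSample[] = []; // stand-in for durable storage

export function monitorOutput(featureId: string, output: string): void {
  // Naive anomaly check; replace with checks matched to your feature's risks.
  const flagged = output.trim() === "" || output.length > 10_000;
  recentSamples.push({ featureId, output, flagged });
  if (flagged) escalate(featureId);
}

function escalate(featureId: string): void {
  // In production: open a ticket, notify the AI provider, and assess whether
  // the event meets the Act's bar for reporting a serious incident.
  console.warn(`AI output flagged for review: feature=${featureId}`);
}
```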

Contractual Considerations

What to Include in Customer Contracts

If you provide AI-powered SaaS to enterprise customers, consider:

| Element | Purpose |
| --- | --- |
| AI disclosure | Clearly identify which features use AI |
| Intended use | Define the permitted use cases for AI features |
| Limitations | Document what the AI is not designed to do |
| Customer responsibilities | Specify deployer obligations they must fulfill |
| Human oversight requirements | Clarify when human review is required |
| Data handling | Explain how input data is processed and stored |
| Updates and changes | How you will communicate material AI changes |

What to Request from AI Vendors

When procuring AI services for your SaaS:

| Element | Purpose |
| --- | --- |
| Risk classification | Vendor's assessment of the AI system's risk level |
| Technical documentation | Access to documentation required for compliance |
| Instructions for use | Clear guidance on proper deployment |
| Conformity evidence | Proof of completed conformity assessment (if high-risk) |
| Support commitments | Vendor assistance with your compliance obligations |
| Incident notification | Commitment to notify you of security or safety issues |

Building Compliance into SaaS Products

Design Considerations

| Principle | Implementation |
| --- | --- |
| Human-in-the-loop | Build review and approval workflows for high-stakes decisions (sketched below) |
| Explainability | Provide users with understandable explanations of AI outputs |
| Override capabilities | Allow users to override or correct AI decisions |
| Audit trails | Log AI decisions with context for later review |
| Graceful degradation | Ensure functionality when AI is unavailable or overridden |
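
A minimal sketch of the first three principles combined: AI outputs are held as proposals until a person approves or overrides them, and the override is recorded for the audit trail. The statuses, fields, and threshold are illustrative assumptions; for genuinely high-stakes decisions you would route everything to review.

```typescript
// Minimal human-in-the-loop sketch: AI outputs are proposals until a person
// approves or overrides them, and the override is recorded for the audit trail.
// Statuses, fields, and the threshold are illustrative assumptions.

type ProposalStatus = "pending" | "approved" | "overridden" | "rejected";

interface AiProposal {
  id: string;
  aiOutput: string;
  confidence: number; // model-reported confidence in [0, 1]
  status: ProposalStatus;
  finalOutput?: string; // what was actually applied
  reviewer?: string;
}

export function routeProposal(p: AiProposal, autoApproveThreshold = 0.95): AiProposal {
  // For genuinely high-stakes decisions, set the threshold above 1 so that
  // every output lands in the human review queue.
  if (p.confidence >= autoApproveThreshold) {
    return { ...p, status: "approved", finalOutput: p.aiOutput };
  }
  return { ...p, status: "pending" }; // surfaced to a reviewer
}

export function overrideProposal(p: AiProposal, reviewer: string, corrected: string): AiProposal {
  // Override capability: the human decision replaces the AI output and is attributed.
  return { ...p, status: "overridden", reviewer, finalOutput: corrected };
}
```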

Documentation Practices

| Document Type | Contents |
| --- | --- |
| System design | Architecture, data flows, model descriptions |
| Training data | Sources, preparation, representativeness analysis |
| Testing results | Accuracy metrics, bias assessments, robustness tests |
| Known limitations | Documented failure modes and edge cases |
| Deployment guide | Instructions for proper deployment and use |
| Monitoring plan | How you track performance and collect feedback |
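
One lightweight way to keep the artifacts above from silently going stale is to track them as a typed record per AI feature, so gaps surface in code review. A sketch, with assumed field names:

```typescript
// Sketch: track documentation completeness per AI feature as a typed record so
// gaps are visible in code review. Field names are assumptions, not Act terms.

interface TechnicalDocs {
  systemDesign: string;      // architecture and data-flow docs
  trainingData?: string;     // sources and representativeness analysis
  testingResults?: string;   // accuracy, bias, and robustness reports
  knownLimitations?: string; // failure modes and edge cases
  deploymentGuide?: string;  // instructions for proper use
  monitoringPlan?: string;   // performance tracking and feedback collection
}

const optionalSections = [
  "trainingData",
  "testingResults",
  "knownLimitations",
  "deploymentGuide",
  "monitoringPlan",
] as const;

export function missingDocs(docs: TechnicalDocs): string[] {
  // Report absent sections as open documentation gaps.
  return optionalSections.filter((k) => !docs[k]);
}
```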

User-Facing Features

| Feature | Purpose |
| --- | --- |
| AI disclosure banners | Clear indication when users interact with AI |
| Confidence indicators | Show certainty levels where appropriate |
| Feedback mechanisms | Allow users to report incorrect outputs |
| Settings and controls | Let users adjust AI behavior where appropriate |
| Audit exports | Allow customers to export AI decision logs |

Common Challenges for SaaS Companies

Foundation Model Integration

Many SaaS companies use foundation models (GPT, Claude, etc.) via APIs. Key considerations:

  • You are the downstream provider for how you integrate and deploy the model
  • Provider documentation should help you meet your obligations
  • Fine-tuning may create new obligations depending on what you change
  • Intended use restrictions from the model provider may affect your use cases (see the sketch after this list)
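
A minimal sketch of enforcing intended-use boundaries downstream of a foundation model API. The blocked categories here are purely illustrative; in practice they come from your own risk classification and the model provider's usage terms:

```typescript
// Minimal sketch of enforcing intended-use boundaries downstream of a
// foundation model API. The blocked categories are purely illustrative.

const blockedUseCases = ["employment screening", "credit scoring"]; // illustrative

export function assertIntendedUse(featureDescription: string): void {
  const hit = blockedUseCases.find((u) =>
    featureDescription.toLowerCase().includes(u)
  );
  if (hit) {
    // Fail closed: this would push the integration into a use case it was
    // never assessed (or licensed) for.
    throw new Error(`Use case "${hit}" is outside the assessed intended purpose.`);
  }
}
```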

Multi-Tenant Considerations

| Challenge | Approach |
| --- | --- |
| Different customer use cases | Some customers may use features in high-risk ways that others do not |
| Customer deployer obligations | Your customers may have their own compliance requirements |
| Data isolation | Ensure training data from one tenant does not affect another |
| Regional variations | Some customers may be subject to different national implementations |

Rapid Feature Development

| Challenge | Approach |
| --- | --- |
| New AI features | Establish a classification process for new features |
| Model updates | Assess whether updates require re-classification |
| A/B testing | Consider compliance implications of testing on EU users (see the sketch below) |
| Beta programs | Ensure beta features meet minimum requirements |
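
A minimal sketch of region-aware gating, so that beta or A/B-test AI variants that have not yet cleared your classification process are never exposed to EU users. The flag shape and region check are assumptions:

```typescript
// Minimal sketch of region-aware gating: experimental AI variants that have
// not cleared classification are never exposed to EU users. The flag shape
// and region source are assumptions.

interface AiFeatureFlag {
  name: string;
  classified: boolean; // has the feature been through risk classification?
  euEligible: boolean; // cleared for exposure to EU users?
}

export function isFeatureEnabled(flag: AiFeatureFlag, userRegion: string): boolean {
  const inEu = userRegion === "EU"; // in practice, derive from billing address or geolocation
  if (inEu && (!flag.classified || !flag.euEligible)) {
    return false; // fail closed for EU traffic until classification is done
  }
  return true;
}
```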

How Bastion Helps

Bastion provides comprehensive EU AI Act compliance support for SaaS companies:

  • AI inventory and classification. We help catalog AI features and third-party AI services, classifying each by risk level.
  • Role assessment. We determine where you are provider, deployer, or both across your AI portfolio.
  • Gap analysis. We evaluate current practices against EU AI Act requirements and prioritize remediation.
  • Documentation templates. We provide SaaS-specific templates for technical documentation and risk assessments.
  • Contract review. We help ensure customer agreements and vendor contracts address AI Act requirements.
  • Ongoing compliance. We monitor regulatory developments and help you adapt as your product evolves.

Ready to assess your SaaS AI compliance? Talk to our team
