EU AI Act Compliance · April 13, 2026 · 15 min read

EU AI Act: Articles 9, 13, 14, and 52, Explained for Developers

The EU AI Act is the world's first comprehensive AI regulation. If you deploy AI systems used by anyone in Europe, these four articles will define your compliance roadmap for the next two years.

Risk Management · Logging · Human Oversight · Transparency
Enforcement Timeline
Feb 2025: Prohibited AI practices enforceable
Aug 2025: Rules for general-purpose AI models apply
Aug 2026: Full enforcement of all provisions
Aug 2027: Existing high-risk systems must comply
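A small sketch of checking the timeline above programmatically, e.g. in a compliance dashboard. The day-of-month is an assumption (the Act's milestones are commonly cited as applying on the 2nd of each month); verify against the official text before relying on it.

```python
from datetime import date

# Milestone dates from the enforcement timeline above
# (day-of-month assumed; confirm against the regulation)
MILESTONES = [
    (date(2025, 2, 2), "Prohibited AI practices enforceable"),
    (date(2025, 8, 2), "Rules for general-purpose AI models apply"),
    (date(2026, 8, 2), "Full enforcement of all provisions"),
    (date(2027, 8, 2), "Existing high-risk systems must comply"),
]

def obligations_in_force(today: date) -> list:
    """Return the milestones already applicable on a given date."""
    return [label for d, label in MILESTONES if today >= d]
```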
Penalty Structure
Prohibited practices: up to €35M or 7% of global annual turnover, whichever is higher
High-risk non-compliance: up to €15M or 3% of global annual turnover, whichever is higher
Supplying incorrect information to authorities: up to €7.5M or 1% of global annual turnover, whichever is higher
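Note that each cap is "whichever is higher", not lower, so for large companies the turnover percentage dominates. A one-line sketch of the calculation:

```python
def penalty_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Maximum fine: the higher of the fixed amount or the
    percentage of worldwide annual turnover."""
    return max(fixed_eur, turnover_eur * pct)

# High-risk non-compliance for a company with €2B turnover:
# max(€15M, €2B * 3%) = €60M
cap = penalty_cap(2_000_000_000, 15_000_000, 0.03)
```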

Article 9: Risk Management System

Continuous risk identification and mitigation

Requirements
1. Identify and analyze all known and foreseeable risks
2. Estimate and evaluate risks that may emerge during intended use
3. Adopt risk management measures based on the state of the art
4. Document the entire risk management process
5. Test the system against the identified risks
Risk Assessment Configuration
# risk-assessment.yaml
system:
  name: "Customer Support AI"
  version: "2.1.0"
  risk_classification: "high-risk"

risks:
  - id: RISK-001
    category: "bias"
    description: "Model may produce biased responses"
    likelihood: medium
    impact: high
    mitigation: "Bias testing pipeline + human review"
    
  - id: RISK-002
    category: "data_leak"
    description: "PII exposure in LLM responses"
    likelihood: medium
    impact: critical
    mitigation: "PII filter + output validation"

testing:
  schedule: "monthly"
  bias_benchmarks: ["winogender", "bbq"]
  adversarial_tests: ["prompt_injection", "jailbreak"]
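A register like this is only useful if it stays complete, so it is worth gating in CI. A minimal sketch of such a check, assuming the field names used in the hypothetical risk-assessment.yaml above (parse the file with `yaml.safe_load`; the validation itself only needs the resulting dict):

```python
def validate_risk_register(register: dict) -> list:
    """Return a list of problems found in a parsed risk register,
    e.g. the output of yaml.safe_load on risk-assessment.yaml."""
    problems = []
    testing = register.get("testing", {})
    for risk in register.get("risks", []):
        rid = risk.get("id", "<missing id>")
        # Art. 9 requires documented mitigation measures per risk
        if not risk.get("mitigation"):
            problems.append(f"{rid}: no mitigation documented")
        # High-impact risks must be covered by a test schedule
        if risk.get("impact") in ("high", "critical") and not testing.get("schedule"):
            problems.append(f"{rid}: high-impact risk but no test schedule")
    return problems
```

An empty return value means the register passes; anything else fails the build.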

Article 13: Transparency & Logging

Record-keeping for the full AI lifecycle

Requirements
1. Log all AI system operations with timestamps
2. Record input data, model version, and output
3. Maintain logs for the duration required by regulation
4. Ensure logs are accessible to authorities upon request
5. Enable traceability of AI-generated decisions
AI Operation Logging
import hashlib
from datetime import datetime, timezone

class AIAuditLogger:
    def __init__(self, audit_store):
        self.audit_store = audit_store  # append-only log sink

    def summarize(self, response, max_len=200):
        # Store a truncated summary, never the full model output
        return response[:max_len]

    def log_interaction(self, request_id, user_id,
                        model_version, prompt, response,
                        confidence, decision_type):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "request_id": request_id,
            "user_id": user_id,
            "model_version": model_version,
            # Hash the prompt so no PII lands in the logs
            "input_hash": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_summary": self.summarize(response),
            "confidence_score": confidence,
            "decision_type": decision_type,
            "retention_days": 730,  # 2 years
        }
        self.audit_store.append(entry)
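Traceability (requirement 5 above) then reduces to a query over the audit store. A minimal sketch, assuming the in-memory list used in the logger example; in production this would be a query against your log backend:

```python
def trace_decision(audit_store: list, request_id: str) -> list:
    """Return every audit entry for one request, oldest first,
    so a regulator can reconstruct how a decision was made."""
    return sorted(
        (e for e in audit_store if e["request_id"] == request_id),
        key=lambda e: e["timestamp"],  # ISO timestamps sort lexicographically
    )
```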

Article 14: Human Oversight

Humans must be able to understand and override

Requirements
1. Provide clear documentation of system capabilities and limitations
2. Enable human operators to understand AI outputs
3. Allow human override or interruption at any time
4. Build interfaces that surface confidence levels and uncertainty
5. Ensure human operators can shut down the system
Human Override Interface
interface AIDecision {
  recommendation: string;
  confidence: number;
  reasoning: string[];
  alternatives: string[];
  overridable: true;  // Always true per Art. 14
}

function renderDecision(decision: AIDecision) {
  return (
    <DecisionPanel>
      <Recommendation>{decision.recommendation}</Recommendation>
      <ConfidenceBar value={decision.confidence} />
      <ReasoningChain steps={decision.reasoning} />
      <OverrideButton onClick={handleHumanOverride} />
      <EmergencyStop onClick={shutdownAISystem} />
    </DecisionPanel>
  );
}
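On the backend, the override itself should be logged, so that oversight leaves the same audit trail as the AI's own decisions. A hypothetical handler sketch (function and field names are illustrative, not from the Act):

```python
from datetime import datetime, timezone

def record_override(audit_store: list, request_id: str,
                    operator_id: str, ai_recommendation: str,
                    final_decision: str) -> dict:
    """Append an auditable record of a human operator overriding
    the AI's recommendation (Art. 14 oversight trail)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "human_override",
        "request_id": request_id,
        "operator_id": operator_id,
        "ai_recommendation": ai_recommendation,
        "final_decision": final_decision,
    }
    audit_store.append(event)
    return event
```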

Article 52: Transparency Obligations

Users must know they're interacting with AI

Requirements
1. Clearly disclose AI-generated content to users
2. Label deepfakes and synthetic media
3. Inform users when interacting with AI chatbots
4. Mark AI-generated text used in public communications
5. Provide contact information for human escalation
AI Disclosure Component
function AIDisclosure({ systemType, humanContact }) {
  return (
    <div className="ai-disclosure" role="alert">
      <p>
        This {systemType} is powered by artificial
        intelligence. Responses are generated by an AI
        model and may contain errors.
      </p>
      <p>
        For human assistance, contact{" "}
        <a href={humanContact.url}>{humanContact.name}</a>
      </p>
      <AISyntheticContentLabel />
    </div>
  );
}

What This Means for Your Team

If you ship AI features — chatbots, recommendation engines, automated decisions, content generation — you need a systematic approach to compliance. That doesn't mean word-for-word legal interpretation; it means engineering systems that satisfy the spirit and letter of each article.

Automate EU AI Act Compliance

Custodia's GRC tier automatically maps your codebase against EU AI Act requirements — risk management, logging, human oversight, and transparency obligations.

Start Compliance Scan →