EU AI Act · March 26, 2026 · 11 min read

EU AI Act Compliance Checker:
What Your Code Must Do
Under Articles 9, 13, 14 & 52

The EU AI Act is in force, with its obligations phasing in through 2027. Most developers still don't know which technical obligations apply to their codebase — or that violations carry fines of up to €35 million. This guide maps each key article to specific code changes.

Bottom Line Up Front

If your application calls an LLM API and has EU users, you face at minimum Article 52 transparency obligations — you must disclose AI interaction. If your system makes decisions about hiring, credit, or other high-stakes domains, you face the full Articles 9–15 compliance stack: documented risk management, human oversight gates in code, robustness testing, and audit trails. A single custodia scan . run maps your codebase against every applicable article and generates an auditor-ready PDF report.

First: Which Risk Tier Is Your AI System?

The EU AI Act uses a four-tier risk classification. Your compliance obligations depend entirely on which tier your system falls into. Most developer tools, security scanners, and productivity SaaS land in Limited-Risk — meaning the only hard obligation is Article 52 disclosure.

Risk Tier | Examples | Your Obligation
Prohibited | Social scoring, subliminal manipulation, real-time biometric surveillance in public spaces | Banned — no path to compliance
High-Risk | Hiring tools, credit scoring, critical infrastructure, law enforcement, education assessment, medical devices | Full Articles 9–15 compliance required
Limited-Risk | Chatbots, AI content generators, emotion recognition | Article 52 transparency (disclose AI interaction)
Minimal-Risk | Spam filters, AI-powered search, recommendation engines | Voluntary codes of conduct only

Developer shortcut: If your AI system makes or materially influences decisions about people in employment, education, healthcare, credit, or critical infrastructure — you're high-risk. Everything else is almost certainly limited or minimal risk.
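The shortcut above can be sketched as a small classification helper; the domain list, tier names, and function shape are illustrative, not taken from the Act's text:

```typescript
// Sketch of the "developer shortcut" — illustrative only, not legal advice.
type RiskTier = 'high-risk' | 'limited-risk' | 'minimal-risk';

const HIGH_RISK_DOMAINS = [
  'employment', 'education', 'healthcare', 'credit', 'critical-infrastructure',
];

function classifyRiskTier(opts: {
  decisionDomain?: string;     // domain the system decides about, if any
  interactsWithUsers: boolean; // chatbot, copilot, assistant, etc.
}): RiskTier {
  // Decisions about people in a regulated domain → full Articles 9–15 stack
  if (opts.decisionDomain && HIGH_RISK_DOMAINS.includes(opts.decisionDomain)) {
    return 'high-risk';
  }
  // Otherwise, user-facing AI still owes Article 52 disclosure
  return opts.interactsWithUsers ? 'limited-risk' : 'minimal-risk';
}
```

A real classification depends on legal analysis of your use case; a helper like this is only useful as a first-pass triage in CI.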

The Technical Requirements — Article by Article

What each article means in terms of code, documentation, and architecture.

Art. 9

Risk Management System

Legal obligation: Establish and maintain a risk management system across the entire lifecycle of high-risk AI.

Code-level implementation: Documented risk register, version-linked to each model update. Custodia generates this automatically from scan findings.
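A minimal sketch of what a version-linked risk register entry could look like in code; the field names and example values are assumptions, not a format prescribed by Article 9 or by Custodia:

```typescript
// Illustrative shape for a risk register entry tied to a model release.
interface RiskRegisterEntry {
  id: string;
  modelVersion: string; // links the entry to a specific model update
  risk: string;
  severity: 'low' | 'medium' | 'high';
  mitigation: string;
  lastReviewed: string; // ISO date of the latest review
}

const riskRegister: RiskRegisterEntry[] = [
  {
    id: 'RR-001',
    modelVersion: 'hiring-ranker@2.3.0',
    risk: 'Ranking model underweights non-traditional career paths',
    severity: 'high',
    mitigation: 'Bias test suite gates deploys; low-confidence cases go to human review',
    lastReviewed: '2026-03-01',
  },
];
```

Keeping the register in the repo, keyed by model version, makes it diffable in code review and easy to hand to an auditor.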

Art. 10

Training Data Governance

Legal obligation: Training datasets must be relevant, representative, free of errors, and have appropriate statistical properties.

Code-level implementation: Data pipeline validation gates, schema checks, and bias testing before model training or fine-tuning.
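One way to sketch a pre-training validation gate, assuming tabular records with a group attribute; the thresholds and field names are illustrative:

```typescript
// Illustrative pre-training gate: schema check plus a crude
// representativeness check. Thresholds are placeholders.
interface Row { age: number | null; outcome: 'hired' | 'rejected'; group: string; }

function validateDataset(rows: Row[]): { ok: boolean; errors: string[] } {
  const errors: string[] = [];

  // Error check: too many missing required fields fails the gate
  const missing = rows.filter((r) => r.age === null).length;
  if (missing / rows.length > 0.01) errors.push('>1% rows missing `age`');

  // Representativeness check: every group must hold a minimum share
  const groups = new Map<string, number>();
  for (const r of rows) groups.set(r.group, (groups.get(r.group) ?? 0) + 1);
  for (const [g, n] of groups) {
    if (n / rows.length < 0.05) errors.push(`group "${g}" under 5% of sample`);
  }

  return { ok: errors.length === 0, errors };
}
```

A gate like this runs in CI before any training or fine-tuning job is allowed to start; a failed check blocks the pipeline rather than logging a warning.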

Art. 13

Transparency & Documentation

Legal obligation: High-risk AI systems must provide enough information for deployers to understand and operate the system correctly.

Code-level implementation: System card in repo, model version pinning, documented capability boundaries, inference logging.
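A self-contained sketch of version pinning plus inference logging; fakeModel stands in for a real inference client, and the record fields are assumptions:

```typescript
// Pin the model version explicitly — never "latest" — so logs are reproducible.
const MODEL_VERSION = 'summarizer@1.4.2';

interface InferenceRecord {
  modelVersion: string;
  inputLength: number; // log a size, not the raw input, if inputs are sensitive
  timestamp: string;
}

const inferenceLog: InferenceRecord[] = [];

// Hypothetical stand-in for a real inference client.
const fakeModel = {
  async infer(_version: string, prompt: string): Promise<string> {
    return `summary of ${prompt.length} chars`;
  },
};

async function loggedInference(prompt: string): Promise<string> {
  const output = await fakeModel.infer(MODEL_VERSION, prompt);
  inferenceLog.push({
    modelVersion: MODEL_VERSION,
    inputLength: prompt.length,
    timestamp: new Date().toISOString(),
  });
  return output;
}
```

In production the log would go to an append-only sink rather than an in-memory array, but the shape — every inference tagged with the pinned model version — is the point.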

Art. 14

Human Oversight

Legal obligation: AI systems must be designed to allow natural persons to oversee, intervene, halt, or override outputs.

Code-level implementation: Human-in-the-loop gates before high-consequence actions; kill switches; override APIs with audit trails.

Art. 15

Accuracy, Robustness & Security

Legal obligation: High-risk AI systems must achieve appropriate levels of accuracy, be robust to errors, and be secure against adversarial attacks.

Code-level implementation: Adversarial input testing, output confidence bounds, fallback error handling, OWASP LLM Top 10 scanning.
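A minimal sketch of output confidence bounds with a fallback path; the threshold and the Prediction shape are illustrative:

```typescript
// Illustrative robustness wrapper: validate confidence, degrade safely.
interface Prediction { label: string; confidence: number; }

function guardedOutput(
  pred: Prediction,
  minConfidence = 0.7,
): { label: string; degraded: boolean } {
  // Robustness: reject out-of-range confidences from a misbehaving model
  if (!Number.isFinite(pred.confidence) || pred.confidence < 0 || pred.confidence > 1) {
    return { label: 'error:invalid-confidence', degraded: true };
  }
  // Fallback: below threshold, defer instead of acting on the output
  if (pred.confidence < minConfidence) {
    return { label: 'deferred:low-confidence', degraded: true };
  }
  return { label: pred.label, degraded: false };
}
```

The same wrapper is a natural place to hook adversarial-input test cases: feed it NaN, negative, and >1 confidences in your test suite and assert the degraded path fires every time.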

Art. 52

Transparency Obligations (AI Interaction)

Legal obligation: AI systems designed to interact with natural persons must disclose that the user is interacting with an AI — unless obvious from context.

Code-level implementation: Chatbots, copilots, and voice assistants need UI disclosure. AI-generated content must be labelled. Missing disclosure = violation.

Article 52 Deep Dive: The Obligation Most Developers Are Missing

Article 52 applies to any AI system that interacts with natural persons — chatbots, copilots, AI assistants, customer support bots. The obligation: clearly inform users they are interacting with an AI. This applies regardless of your risk tier.

❌ Art. 52 Violation
// Chat interface with no AI disclosure
export function ChatInterface() {
  return (
    <div>
      <h2>Support Chat</h2>
      {/* No mention that this is AI-powered */}
      <ChatBubble />
    </div>
  );
}
✅ Art. 52 Compliant
// Explicit AI disclosure in UI
export function ChatInterface() {
  return (
    <div>
      <h2>Support Chat</h2>
      <p className="ai-disclosure">
        You are chatting with an AI assistant.
      </p>
      <ChatBubble />
    </div>
  );
}

Article 52(3): AI-Generated Content Labels

Any AI-generated text, images, audio, or video that could be mistaken for real human-produced content must be labelled as AI-generated. This directly affects AI content writers, AI image generators, AI-voiced audio, and deepfakes.

✅ Art. 52(3) Compliant — watermarking via metadata
// Attach AI generation metadata to content
const content = await generateContent(prompt);

return {
  text: content,
  metadata: {
    aiGenerated: true,
    model: 'claude-3-5-sonnet',
    generatedAt: new Date().toISOString(),
    // Machine-readable for automated detection
  },
  // UI label required: "Generated by AI"
};

Article 14: Human Oversight in Code

For high-risk systems, Article 14 requires that natural persons can monitor, intervene, and override the AI. This is not merely a process requirement; it must be implemented in code, and auditors will ask to see it.

✅ Art. 14 Implementation pattern
// Human oversight gate for high-risk AI decisions
async function makeHighRiskDecision(input: AIInput) {
  const aiDecision = await model.inference(input);

  // Confidence threshold check
  if (aiDecision.confidence < 0.85) {
    return await queueForHumanReview({
      input,
      aiDecision,
      reason: 'Below confidence threshold',
    });
  }

  // High-consequence action: require human sign-off
  if (REQUIRES_OVERSIGHT.includes(aiDecision.action)) {
    const approval = await requestHumanApproval(aiDecision);
    if (!approval.granted) return { status: 'rejected', by: 'human' };
  }

  // Audit trail required by Art. 12
  await logDecision({ input, aiDecision, timestamp: Date.now() });
  return aiDecision;
}
AUTOMATED COMPLIANCE

One Scan.
Full EU AI Act Report.

Custodia maps every finding to the relevant EU AI Act article. The PDF output is formatted for regulatory submissions and investor due diligence.

$ custodia scan . --framework eu-ai-act

  ┌──────────────────────────────────────────────────────┐
  │  CUSTODIA.DEV  //  EU AI ACT COMPLIANCE ANALYSIS     │
  └──────────────────────────────────────────────────────┘

  [CLASSIFY] Risk tier assessed: LIMITED-RISK
  [CLASSIFY] LLM interaction detected → Art. 52 applies

  ── EU AI ACT FINDINGS ────────────────────────────────

  [Art. 52] HIGH    AI Interaction Not Disclosed
           src/components/ChatWidget.tsx:12
           No AI disclosure text found in chat interface

  [Art. 52] MEDIUM  AI-Generated Content Unlabelled
           src/api/generate/route.ts:44
           Response object missing aiGenerated metadata

  [Art.  9] INFO    Risk Register Not Found
           No RISK_REGISTER.md or equivalent in repo root
           Recommended: add before audit

  ───────────────────────────────────────────────────────
  EU AI ACT SCORE:   71/100
  ARTICLES CHECKED:  Art. 9, 10, 13, 14, 15, 52
  FINDINGS:          3  (1 high · 1 medium · 1 info)
  PDF REPORT:        custodia.dev/reports/euAiAct_7tR3x
Run EU AI Act Scan Free · View Demo Report

Frequently Asked Questions

Who does the EU AI Act apply to?

The EU AI Act applies to any organization that places an AI system on the EU market or has AI outputs affect EU residents — regardless of where the company is headquartered. If your SaaS product uses an LLM and has EU users, the Act applies to you.

What is the penalty for EU AI Act non-compliance?

Fines run up to €35 million or 7% of global annual turnover for prohibited AI violations, up to €15 million or 3% of turnover for high-risk system violations, and up to €7.5 million or 1% for providing incorrect information to regulators.

What are the technical requirements under Article 9?

Article 9 requires a documented risk management system maintained across the entire lifecycle of the AI system. This means: a risk register that is version-linked to each model update, documented test results, and evidence of ongoing monitoring. Custodia auto-generates a risk register stub from scan findings.

Does Custodia generate EU AI Act compliance documentation?

Yes. The Custodia scan produces a structured report mapping findings to EU AI Act articles. Each finding includes the specific obligation, the at-risk code location, and remediation guidance. The PDF report is formatted for auditor review.

What is the difference between GPAI models and high-risk AI systems?

General-purpose AI (GPAI) models like GPT-4 or Claude face model-level obligations (transparency, documentation, copyright policy). High-risk AI systems are applications built on top of these models operating in regulated domains: hiring, credit scoring, critical infrastructure, law enforcement, education. Most developer SaaS in productivity or security falls under limited-risk, facing primarily Article 52 transparency obligations.

Related Articles

Prompt Injection Prevention
Stop LLM01 before it ships — direct vs indirect injection, code defenses, and automated detection.
SOC 2 for AI Companies
What auditors actually check in your code — CC6, CC7, and AI-specific trust criteria with code examples.
Are You Compliant?

Get Your
EU AI Act Score

Free scan covers Article 52 detection. Pro plan ($39/mo) unlocks full Articles 9–15 mapping with auditor-ready PDF output.

Scan My Code Free · View Demo Report →