If your application calls an LLM API and has EU users, you face at minimum Article 52 transparency obligations: you must disclose that users are interacting with an AI. If your system makes decisions about hiring, credit, or other high-stakes domains, you face the full Articles 9–15 compliance stack: documented risk management, human oversight gates in code, robustness testing, and audit trails. `custodia scan .` maps your codebase against every applicable article and generates an auditor-ready PDF report.
The EU AI Act uses a four-tier risk classification. Your compliance obligations depend entirely on which tier your system falls into. Most developer tools, security scanners, and productivity SaaS land in Limited-Risk — meaning the only hard obligation is Article 52 disclosure.
| Risk Tier | Examples | Your Obligation |
|---|---|---|
| Prohibited | Social scoring, subliminal manipulation, real-time biometric surveillance in public spaces | Ban — no path to compliance |
| High-Risk | Hiring tools, credit scoring, critical infrastructure, law enforcement, education assessment, medical devices | Full Articles 9–15 compliance required |
| Limited-Risk | Chatbots, AI content generators, emotion recognition | Article 52 transparency (disclose AI interaction) |
| Minimal-Risk | Spam filters, AI-powered search, recommendation engines | Voluntary codes of conduct only |
**Developer shortcut:** If your AI system makes or materially influences decisions about people in employment, education, healthcare, credit, or critical infrastructure, you're high-risk. Everything else is almost certainly limited or minimal risk.
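The shortcut above can be sketched as a decision rule. This is an illustrative helper, not legal logic: the `Tier` type, domain names, and the `classifyTier` function are all assumptions for the sake of the example.

```typescript
// Hypothetical sketch of the risk-tier decision rule described above.
type Tier = 'prohibited' | 'high' | 'limited' | 'minimal';

// Annex III-style high-risk domains (illustrative subset).
const HIGH_RISK_DOMAINS = new Set([
  'employment', 'education', 'healthcare', 'credit', 'critical-infrastructure',
]);

function classifyTier(opts: {
  domain: string;
  interactsWithHumans: boolean;
  socialScoring?: boolean;
}): Tier {
  if (opts.socialScoring) return 'prohibited';           // banned outright
  if (HIGH_RISK_DOMAINS.has(opts.domain)) return 'high'; // full Arts. 9–15 stack
  if (opts.interactsWithHumans) return 'limited';        // Art. 52 disclosure
  return 'minimal';                                      // voluntary codes only
}
```

A chatbot for a productivity SaaS would land in `'limited'`; the same model wired into a hiring pipeline lands in `'high'`.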
What each article means in terms of code, documentation, and architecture.
**Article 9 — Risk management**
**Legal obligation:** Establish and maintain a risk management system across the entire lifecycle of high-risk AI.
**Code-level implementation:** Documented risk register, version-linked to each model update. Custodia generates this automatically from scan findings.
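A version-linked risk register can be as simple as an append-only log committed alongside the model. The entry shape and `registerRisk` helper below are illustrative assumptions, not an official schema.

```typescript
// Illustrative shape for a version-linked risk register entry.
interface RiskRegisterEntry {
  id: string;
  modelVersion: string;        // pin to the exact model release being assessed
  hazard: string;              // e.g. "biased ranking of candidates"
  severity: 'low' | 'medium' | 'high';
  mitigation: string;
  lastReviewed: string;        // ISO 8601 date
}

// Append rather than mutate: each model update adds entries, preserving
// the lifecycle-long audit trail a risk management system needs.
function registerRisk(
  register: RiskRegisterEntry[],
  entry: RiskRegisterEntry,
): RiskRegisterEntry[] {
  return [...register, entry];
}
```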
**Article 10 — Data and data governance**
**Legal obligation:** Training datasets must be relevant, representative, free of errors, and have appropriate statistical properties.
**Code-level implementation:** Data pipeline validation gates, schema checks, and bias testing before model training or fine-tuning.
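A minimal pre-training gate might combine a schema check with a crude representativeness test. The row shape, field names, and the 80% imbalance threshold here are all assumptions chosen for illustration.

```typescript
// Sketch of a data validation gate run before training or fine-tuning.
interface TrainingRow { label: string; group: string }

function validateDataset(rows: TrainingRow[]): { ok: boolean; reasons: string[] } {
  const reasons: string[] = [];

  // Schema check: every row must carry both fields.
  if (rows.some(r => !r.label || !r.group)) {
    reasons.push('schema: row missing label or group');
  }

  // Crude representativeness check: no group may exceed 80% of the data.
  const counts = new Map<string, number>();
  for (const r of rows) counts.set(r.group, (counts.get(r.group) ?? 0) + 1);
  for (const [group, n] of counts) {
    if (n / rows.length > 0.8) {
      reasons.push(`imbalance: group "${group}" is ${Math.round((100 * n) / rows.length)}% of data`);
    }
  }

  return { ok: reasons.length === 0, reasons };
}
```

In a real pipeline this gate would fail the CI job (or block the training run) when `ok` is false, and the `reasons` array would feed the risk register.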
**Article 13 — Transparency and provision of information to deployers**
**Legal obligation:** High-risk AI systems must provide enough information for deployers to understand and operate the system correctly.
**Code-level implementation:** System card in repo, model version pinning, documented capability boundaries, inference logging.
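In practice a system card can live as a typed object (or JSON file) committed next to the code. The field names below are illustrative assumptions, not a mandated format.

```typescript
// Hypothetical system card committed alongside the application code.
const systemCard = {
  system: 'support-triage-bot',
  model: { name: 'claude-3-5-sonnet', version: '20241022', pinned: true },
  intendedUse: 'Routing customer support tickets; no automated account actions.',
  knownLimitations: ['May misclassify non-English tickets'],
  logging: { inferenceLogs: true, retentionDays: 180 },
} as const;
```

Pinning the model version in the card (and in the deployment config) is what lets the risk register, test results, and inference logs all point at the same artifact.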
**Article 14 — Human oversight**
**Legal obligation:** AI systems must be designed to allow natural persons to oversee, intervene, halt, or override outputs.
**Code-level implementation:** Human-in-the-loop gates before high-consequence actions; kill switches; override APIs with audit trails.
**Article 15 — Accuracy, robustness, and cybersecurity**
**Legal obligation:** High-risk AI systems must achieve appropriate levels of accuracy, be robust to errors, and be secure against adversarial attacks.
**Code-level implementation:** Adversarial input testing, output confidence bounds, fallback error handling, OWASP LLM Top 10 scanning.
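A small output guard illustrates two of these controls together: an adversarial-input check and a confidence bound with a fallback. The regex pattern, 0.6 threshold, and function names are assumptions for the sketch, not a complete defense.

```typescript
// Sketch of an output guard combining adversarial-input screening
// with a confidence-bound fallback.
interface ModelOutput { answer: string; confidence: number }

// Toy prompt-injection pattern; real scanners use far richer rule sets.
const PROMPT_INJECTION_PATTERNS = [/ignore (all )?previous instructions/i];

function guardedAnswer(input: string, output: ModelOutput): string {
  if (PROMPT_INJECTION_PATTERNS.some(p => p.test(input))) {
    return 'Request refused: suspected prompt injection.';
  }
  if (output.confidence < 0.6) {
    return 'Low confidence: escalating to a human agent.';
  }
  return output.answer;
}
```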
**Article 52 — Transparency (disclosure to users)**
**Legal obligation:** AI systems designed to interact with natural persons must disclose that the user is interacting with an AI — unless obvious from context.
**Code-level implementation:** Chatbots, copilots, and voice assistants need UI disclosure. AI-generated content must be labelled. Missing disclosure = violation.
Article 52 applies to any AI system that interacts with natural persons — chatbots, copilots, AI assistants, customer support bots. The obligation: clearly inform users they are interacting with an AI. This applies regardless of your risk tier.
```tsx
// Chat interface with no AI disclosure
export function ChatInterface() {
  return (
    <div>
      <h2>Support Chat</h2>
      {/* No mention that this is AI-powered */}
      <ChatBubble />
    </div>
  );
}
```

```tsx
// Explicit AI disclosure in UI
export function ChatInterface() {
  return (
    <div>
      <h2>Support Chat</h2>
      <p className="ai-disclosure">
        You are chatting with an AI assistant.
      </p>
      <ChatBubble />
    </div>
  );
}
```

Any AI-generated text, images, audio, or video that could be mistaken for real human-produced content must be labelled as AI-generated. This directly affects AI content writers, AI image generators, AI-voiced audio, and missed deepfake detections.
```typescript
// Attach AI generation metadata to content
const content = await generateContent(prompt);
return {
  text: content,
  metadata: {
    aiGenerated: true,
    model: 'claude-3-5-sonnet',
    generatedAt: new Date().toISOString(),
    // Machine-readable for automated detection
  },
  // UI label required: "Generated by AI"
};
```

For high-risk systems, Article 14 requires that natural persons can monitor, intervene, and override the AI. This isn't just a process requirement: it requires code-level implementation, and auditors will ask to see it.
```typescript
// Human oversight gate for high-risk AI decisions
async function makeHighRiskDecision(input: AIInput) {
  const aiDecision = await model.inference(input);

  // Confidence threshold check
  if (aiDecision.confidence < 0.85) {
    return await queueForHumanReview({
      input,
      aiDecision,
      reason: 'Below confidence threshold',
    });
  }

  // High-consequence action: require human sign-off
  if (REQUIRES_OVERSIGHT.includes(aiDecision.action)) {
    const approval = await requestHumanApproval(aiDecision);
    if (!approval.granted) return { status: 'rejected', by: 'human' };
  }

  // Audit trail required by Art. 12
  await logDecision({ input, aiDecision, timestamp: Date.now() });
  return aiDecision;
}
```

The EU AI Act applies to any organization that places an AI system on the EU market or whose AI outputs affect EU residents, regardless of where the company is headquartered. If your SaaS product uses an LLM and has EU users, the Act applies to you.
Fines reach up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited-AI violations; up to €15 million or 3% of turnover for high-risk system violations; and up to €7.5 million or 1.5% for supplying incorrect information to regulators.
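The "whichever is higher" rule means the turnover percentage dominates for large companies. A quick worked example (the helper function is purely illustrative):

```typescript
// Fine ceiling = max(fixed cap, percentage of worldwide annual turnover).
function maxFineEur(fixedCapEur: number, pct: number, turnoverEur: number): number {
  return Math.max(fixedCapEur, pct * turnoverEur);
}

// Prohibited-AI violation for a company with €1B turnover:
// 7% of €1B = €70M, which exceeds the €35M fixed cap.
const bigCo = maxFineEur(35_000_000, 0.07, 1_000_000_000);

// For a €100M-turnover company, the €35M fixed cap is the higher figure.
const smallCo = maxFineEur(35_000_000, 0.07, 100_000_000);
```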
Article 9 requires a documented risk management system maintained across the entire lifecycle of the AI system. This means: a risk register that is version-linked to each model update, documented test results, and evidence of ongoing monitoring. Custodia auto-generates a risk register stub from scan findings.
The Custodia scan produces a structured report mapping findings to EU AI Act articles. Each finding includes the specific obligation, the at-risk code location, and remediation guidance. The PDF report is formatted for auditor review.
General-purpose AI (GPAI) models like GPT-4 or Claude face model-level obligations (transparency, documentation, copyright policy). High-risk AI systems are applications built on top of these models operating in regulated domains: hiring, credit scoring, critical infrastructure, law enforcement, education. Most developer SaaS in productivity or security falls under limited-risk, facing primarily Article 52 transparency obligations.
Free scan covers Article 52 detection. Pro plan ($39/mo) unlocks full Articles 9–15 mapping with auditor-ready PDF output.