## Continuous risk identification and mitigation
```yaml
# risk-assessment.yaml
system:
  name: "Customer Support AI"
  version: "2.1.0"
  risk_classification: "high-risk"

risks:
  - id: RISK-001
    category: "bias"
    description: "Model may produce biased responses"
    likelihood: medium
    impact: high
    mitigation: "Bias testing pipeline + human review"
  - id: RISK-002
    category: "data_leak"
    description: "PII exposure in LLM responses"
    likelihood: medium
    impact: critical
    mitigation: "PII filter + output validation"

testing:
  schedule: "monthly"
  bias_benchmarks: ["winogender", "bbq"]
  adversarial_tests: ["prompt_injection", "jailbreak"]
```

## Record-keeping for the full AI lifecycle
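Records should never contain raw prompts, which may include PII. Hashing inputs before they are persisted keeps the audit trail verifiable without storing the data itself. A minimal sketch (the helper name is our own illustration, not anything prescribed by the Act):

```python
import hashlib

def hash_input(prompt: str) -> str:
    """Return a stable SHA-256 fingerprint of the prompt.

    The raw text is never persisted; the hash still lets auditors
    confirm that two log entries refer to the same input.
    """
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

# The card number below never reaches the audit store, only its digest
fingerprint = hash_input("My card number is 4111 1111 1111 1111")
```

Because the digest is deterministic, auditors can correlate repeated inputs across log entries without ever seeing the original text.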
```python
import hashlib
from datetime import datetime, timezone

class AIAuditLogger:
    def __init__(self, audit_store):
        self.audit_store = audit_store  # any append-only store

    def summarize(self, response, max_len=200):
        # Placeholder: truncate; a real system may also redact PII here
        return response[:max_len]

    def log_interaction(self, request_id, user_id,
                        model_version, prompt, response,
                        confidence, decision_type):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "request_id": request_id,
            "user_id": user_id,
            "model_version": model_version,
            # Hash the prompt so PII never reaches the audit store
            "input_hash": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
            "output_summary": self.summarize(response),
            "confidence_score": confidence,
            "decision_type": decision_type,
            "retention_days": 730,  # 2 years
        }
        self.audit_store.append(entry)
```

## Humans must be able to understand and override
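Oversight implies an audit trail for the oversight itself: when a human overrides the AI, that event should be recorded alongside the decision log. A minimal sketch (the event schema is our own illustration, not prescribed by the Act):

```python
from datetime import datetime, timezone

def record_override(store, request_id, operator_id,
                    ai_recommendation, human_decision, reason):
    """Append a human-override event to an append-only store."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "human_override",
        "request_id": request_id,
        "operator_id": operator_id,
        "ai_recommendation": ai_recommendation,
        "human_decision": human_decision,
        "reason": reason,  # free-text justification from the reviewer
    }
    store.append(event)
    return event

log = []
record_override(log, "req-42", "op-7", "approve", "reject", "risk flags raised")
```

Keeping overrides in the same store as model decisions makes it straightforward to report how often, and why, humans disagree with the system.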
```tsx
interface AIDecision {
  recommendation: string;
  confidence: number;
  reasoning: string[];
  alternatives: string[];
  overridable: true; // Always true per Art. 14
}

function renderDecision(decision: AIDecision) {
  return (
    <DecisionPanel>
      <Recommendation>{decision.recommendation}</Recommendation>
      <ConfidenceBar value={decision.confidence} />
      <ReasoningChain steps={decision.reasoning} />
      <OverrideButton onClick={handleHumanOverride} />
      <EmergencyStop onClick={shutdownAISystem} />
    </DecisionPanel>
  );
}
```

## Users must know they're interacting with AI
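Disclosure can be enforced server-side as well as in the UI: attaching machine-readable provenance to every AI-generated response means downstream clients cannot silently drop the label. A minimal sketch (the field names are our own illustration):

```python
def label_ai_output(text, model_version):
    """Wrap model output with machine-readable provenance metadata."""
    return {
        "content": text,
        "generated_by_ai": True,   # clients must render a disclosure
        "model_version": model_version,
        "synthetic_content": True, # supports synthetic-content labeling
    }

msg = label_ai_output("Your refund was approved.", "2.1.0")
```

A frontend component can then render its disclosure banner whenever `generated_by_ai` is set, instead of relying on each screen to remember to add one.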
```jsx
function AIDisclosure({ systemType, humanContact }) {
  return (
    <div className="ai-disclosure" role="alert">
      <p>
        This {systemType} is powered by artificial intelligence.
        Responses are generated by an AI model and may contain errors.
      </p>
      <p>
        For human assistance, contact{" "}
        <a href={humanContact.url}>{humanContact.name}</a>.
      </p>
      <AISyntheticContentLabel />
    </div>
  );
}
```

If you ship AI features (chatbots, recommendation engines, automated decisions, content generation), you need a systematic approach to compliance: not word-for-word legal interpretation, but engineering systems that satisfy both the spirit and the letter of each article.
Custodia's GRC tier automatically maps your codebase against EU AI Act requirements — risk management, logging, human oversight, and transparency obligations.
Start Compliance Scan →