March 26, 2026 · 9 min read

OWASP LLM Top 10 Scanner:
Detect Prompt Injection & AI Vulnerabilities
in Your Codebase

Traditional scanners like Snyk and Semgrep are completely blind to the OWASP LLM Top 10. If your app calls OpenAI, Anthropic, LangChain, or any LLM API, you have a new class of attack surface — and until now, zero automated tools to scan it.

Bottom Line Up Front

Custodia.dev is an automated OWASP LLM Top 10 scanner that detects all 10 AI-specific vulnerability categories — including LLM01 Prompt Injection, LLM02 Insecure Output Handling, LLM04 Model DoS, and LLM08 Excessive Agency — directly from your source code via static analysis and Claude-powered deep inspection. It runs via a single CLI command (custodia scan .), requires zero configuration, and auto-detects LLM usage across OpenAI, Anthropic, LangChain, Vercel AI SDK, and HuggingFace integrations. No other scanner on the market covers more than 1 of the 10 OWASP LLM categories.

What Is the OWASP LLM Top 10?

The OWASP LLM Top 10 is a framework published by the Open Worldwide Application Security Project specifically for applications that use large language models. It identifies the 10 most critical security risks unique to AI-powered systems — threats that simply did not exist before LLMs became embedded in production applications.

How It Differs from the Web Application OWASP Top 10

The classic OWASP Top 10 covers injection flaws, broken access control, and cryptographic failures that apply to any web application. The LLM Top 10 adds an entirely new threat model: the LLM itself is an attack surface. Prompt injection doesn't exist in traditional apps. Excessive agency — giving an AI too much autonomy — is a uniquely LLM problem. These require different detection logic, which is why conventional scanners are blind to them.

Who Needs to Comply?

Any developer shipping an application that calls an LLM API — even a simple chatbot or summarization feature — is exposed to all 10 categories. Under the EU AI Act (Articles 9, 13, and 14), organizations deploying AI systems with real-world impact now have a legal obligation to document and mitigate these risks. OWASP LLM Top 10 compliance is increasingly cited in investor due diligence and enterprise security questionnaires.

The OWASP LLM Top 10 — Explained for Developers

Below: what each category means, detectable code patterns, and how to fix them. Categories marked with a code example are statically detectable by Custodia.

LLM01

Prompt Injection

CRITICAL

An attacker manipulates an LLM through crafted input, causing it to execute unintended instructions or ignore its system prompt. The primary vector: user-controlled data interpolated directly into the system role.

❌ Vulnerable
// ❌ VULNERABLE — LLM01 Prompt Injection
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'system',
      content: `You are a helpful assistant. Context: ${userMessage}`,
      //                                              ^^^^^^^^^^^^^^
      //  Raw user input in system role = injection attack surface
    },
  ],
});
✅ Safe
// ✅ SAFE — Isolate user input in user role
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user',   content: userMessage },
    //                ^ user-controlled input stays in user role
  ],
});
LLM02

Insecure Output Handling

HIGH

LLM responses are passed downstream without validation — into HTML renderers, database writes, or function calls. A compromised or injected model can use this to execute XSS, SQL injection, or SSRF.

❌ Vulnerable
// ❌ VULNERABLE — LLM02 Insecure Output Handling
// LLM response injected directly into DOM — XSS vector
const reply = await openai.chat.completions.create({ ... });
return (
  <div dangerouslySetInnerHTML={{ __html: reply.choices[0].message.content }} />
);
✅ Safe
// ✅ SAFE — Sanitize before DOM injection
import DOMPurify from 'isomorphic-dompurify';

const raw   = reply.choices[0].message.content;
const clean = DOMPurify.sanitize(raw);
return <div dangerouslySetInnerHTML={{ __html: clean }} />;
LLM03

Training Data Poisoning

MEDIUM

Malicious data injected into training or fine-tuning pipelines introduces backdoors or biases. Custodia checks for data-source validation gates, access controls on training datasets, and logging of fine-tune jobs.
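A validation gate in front of the ingestion pipeline is the core mitigation here. A minimal sketch in TypeScript, where the trusted-source list and checksum scheme are illustrative, not Custodia's actual rules:

```typescript
import { createHash } from 'node:crypto';

// Illustrative allowlist of trusted training-data sources (not Custodia's list)
const TRUSTED_SOURCES = ['s3://internal-datasets/', 'https://data.example.com/'];

interface TrainingBatch {
  source: string;         // where the records came from
  payload: string;        // raw training data
  expectedSha256: string; // checksum published by the data owner
}

// Gate: reject batches from unknown sources or with tampered content
function validateBatch(batch: TrainingBatch): boolean {
  const trusted = TRUSTED_SOURCES.some((prefix) => batch.source.startsWith(prefix));
  const actual = createHash('sha256').update(batch.payload).digest('hex');
  return trusted && actual === batch.expectedSha256;
}
```

The same gate pattern applies to fine-tune jobs: log every batch, and refuse anything that fails either check.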

LLM04

Model Denial of Service

MEDIUM

Attackers send resource-exhausting inputs: unbounded context windows, recursive prompts, or floods of completion requests. Without token caps and rate limiting, LLM API costs can spike to five figures overnight.

❌ Vulnerable
// ❌ VULNERABLE — LLM04 Model DoS
// No token cap — adversarial input can exhaust quota
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: userInput }],
  // missing: max_tokens, no rate limiting middleware
});
✅ Safe
// ✅ SAFE — Token cap + rate limiting
import rateLimit from 'express-rate-limit';

// Rate limit: 10 requests/minute per IP
app.use('/api/chat', rateLimit({ windowMs: 60_000, max: 10 }));

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [...],
  max_tokens: 1000,  // hard cap per request
});
LLM05

Supply Chain Vulnerabilities

MEDIUM

Vulnerable or malicious third-party LLM packages, plugins, and model providers. Custodia checks your LLM SDK versions against known CVEs and flags outdated dependencies.
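A version check like this reduces to a semver comparison against the release that fixed an advisory. A minimal sketch, assuming plain MAJOR.MINOR.PATCH versions and a hypothetical advisory threshold:

```typescript
// Illustrative SDK-version check: flag an installed version older than the
// release that fixed a (hypothetical) advisory. Assumes plain MAJOR.MINOR.PATCH
// versions; real scanners also handle prerelease tags and full CVE ranges.
function parseSemver(v: string): [number, number, number] {
  const [major, minor, patch] = v.split('.').map(Number);
  return [major, minor, patch];
}

function isOlderThan(installed: string, fixedIn: string): boolean {
  const a = parseSemver(installed);
  const b = parseSemver(fixedIn);
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i];
  }
  return false; // identical versions are not older
}

// Hypothetical advisory: "fixed in 4.20.0"
console.log(isOlderThan('4.12.3', '4.20.0')); // true: flag this dependency
```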

LLM06

Sensitive Information Disclosure

HIGH

PII, secrets, or proprietary data sent to external LLM APIs. A GDPR and data governance risk — your users' data leaves your perimeter and enters a third-party model's context window.

❌ Vulnerable
// ❌ VULNERABLE — LLM06 Sensitive Information Disclosure
// PII sent to external LLM API — GDPR Article 44 risk
const prompt = `Summarize this customer record:
  Email: ${user.email}
  SSN:   ${user.ssn}
  Card:  ${user.creditCard}`;
✅ Safe
// ✅ SAFE — Strip PII before inference
const prompt = `Summarize this customer record:
  ID:     ${user.id}
  Tier:   ${user.plan}
  Joined: ${user.createdAt}`;
  // Only non-sensitive identifiers sent to LLM
LLM07

Insecure Plugin Design

HIGH

LLM-callable tools (function calling / tool use) lack input validation and sandboxing. A single malformed tool call can expose internal APIs or execute unintended operations.
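The fix is to validate tool-call arguments as strictly as any other untrusted input before executing them. A minimal sketch, where the tool name and argument shape are hypothetical:

```typescript
// Illustrative guard for an LLM tool call. The tool name and argument shape
// are hypothetical; the point is strict validation before execution.
interface LookupOrderArgs {
  orderId: string;
}

function parseLookupOrderArgs(rawJson: string): LookupOrderArgs {
  let parsed: unknown;
  try {
    parsed = JSON.parse(rawJson);
  } catch {
    throw new Error('Tool arguments are not valid JSON');
  }
  const orderId = (parsed as { orderId?: unknown })?.orderId;
  // Allowlist format: uppercase alphanumerics and dashes, bounded length
  if (typeof orderId !== 'string' || !/^[A-Z0-9-]{1,32}$/.test(orderId)) {
    throw new Error('Rejected malformed tool call');
  }
  return { orderId };
}
```

Allowlisting the argument format (rather than blocklisting bad values) is what stops a model from smuggling paths, SQL, or URLs through a tool call.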

LLM08

Excessive Agency

HIGH

The LLM is granted more capability than necessary: write access to production systems, ability to delete data, or authority to send external communications — all without human approval gates.

❌ Vulnerable
// ❌ VULNERABLE — LLM08 Excessive Agency
// LLM can delete database with no human checkpoint
const tools = [{
  type: 'function',
  function: {
    name: 'delete_all_user_data',
    description: 'Permanently delete all user records',
    // No human approval step defined
  }
}];
✅ Safe
// ✅ SAFE — Human approval gate before destructive actions
async function delete_all_user_data() {
  const approved = await requestHumanApproval({
    action: 'PERMANENT: Delete all user data',
    severity: 'CRITICAL',
    timeoutMs: 30_000,
  });
  if (!approved) throw new Error('Rejected by human oversight layer');
  // proceed only after explicit human sign-off
}
LLM09

Overreliance

LOW

Systems or users trust LLM outputs without verification gates, leading to automated actions based on hallucinated or incorrect information. Custodia checks for human review checkpoints on high-consequence LLM decisions.
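A verification gate can be as simple as routing every decision to a human queue unless the action is low-consequence and the model's self-reported confidence clears a threshold. A minimal sketch, where the 0.9 cutoff and the Decision shape are assumptions:

```typescript
// Illustrative verification gate: only automate low-consequence actions where
// the model's self-reported confidence clears a threshold. The 0.9 cutoff and
// the Decision shape are assumptions for this sketch.
type Decision = {
  action: string;
  confidence: number;       // model's self-reported confidence, 0..1
  highConsequence: boolean; // e.g. refunds, deletions, external messages
};

function route(d: Decision): 'auto' | 'human-review' {
  if (d.highConsequence) return 'human-review'; // never automate these
  if (d.confidence < 0.9) return 'human-review'; // hallucination guard
  return 'auto';
}
```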

LLM10

Model Theft

LOW

Unauthorized access to proprietary model weights, fine-tuning data, or system prompts via API abuse. Custodia checks for exposed model endpoints and missing authentication on inference APIs.
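The baseline fix is authenticating every inference endpoint and comparing keys in constant time. A minimal sketch using Node's timingSafeEqual; the hard-coded key is for illustration only, and production keys belong in a secret store:

```typescript
import { timingSafeEqual } from 'node:crypto';

// Illustrative API-key check for an inference endpoint. The hard-coded key is
// for the sketch only; production keys belong in a secret store.
const EXPECTED_KEY = Buffer.from('sk-inference-demo-key');

function isAuthorized(presentedKey: string | undefined): boolean {
  if (!presentedKey) return false;
  const presented = Buffer.from(presentedKey);
  // timingSafeEqual requires equal-length buffers; a length mismatch is a miss
  if (presented.length !== EXPECTED_KEY.length) return false;
  return timingSafeEqual(presented, EXPECTED_KEY); // constant-time comparison
}
```

The constant-time comparison matters here because a naive string equality leaks key prefixes through response timing.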

Why Snyk, Semgrep, and SonarCloud Don't Cover OWASP LLM Top 10

We assessed each tool against an intentionally vulnerable LangChain application containing all 10 OWASP LLM vulnerability categories, then cross-checked the results against each tool's documented rule sets and public documentation as of March 2026.

| OWASP LLM Category | Snyk | Semgrep | SonarCloud | Bearer | Custodia |
|---|---|---|---|---|---|
| LLM01 Prompt Injection | ✗ | ✗ | ✗ | ✗ | ✓ |
| LLM02 Insecure Output Handling | ✗ | ✗ | ✗ | ✗ | ✓ |
| LLM03 Training Data Poisoning | ✗ | ✗ | ✗ | ✗ | ✓ |
| LLM04 Model DoS | ✗ | ✗ | ✗ | ✗ | ✓ |
| LLM05 Supply Chain | Partial | ✗ | ✗ | ✗ | ✓ |
| LLM06 Sensitive Info Disclosure | ✗ | ✗ | ✗ | Partial | ✓ |
| LLM07 Insecure Plugin Design | ✗ | ✗ | ✗ | ✗ | ✓ |
| LLM08 Excessive Agency | ✗ | ✗ | ✗ | ✗ | ✓ |
| LLM09 Overreliance | ✗ | ✗ | ✗ | ✗ | ✓ |
| LLM10 Model Theft | ✗ | ✗ | ✗ | ✗ | ✓ |
| Total Coverage | 1/10 | 0/10 | 0/10 | 1/10 | 10/10 |

Based on public documentation, release notes, and rule set analysis. Snyk and Bearer receive partial credit for LLM05 and LLM06 respectively via general CVE/PII detection — but neither maps findings to OWASP LLM categories or detects prompt-level vulnerabilities.

SEE IT IN ACTION

One Command.
Full OWASP LLM Top 10 Coverage.

Drop custodia scan . in your project root. Custodia auto-detects your LLM stack and runs all 10 checks in under 90 seconds.

// custodia scan .
$ custodia scan ./src

  ┌──────────────────────────────────────────────────────┐
  │  CUSTODIA.DEV  //  AI SECURITY POSTURE ANALYSIS      │
  └──────────────────────────────────────────────────────┘

  [TRIAGE]   Classifying 847 files...
  [TRIAGE]   Stack detected: Node.js + TypeScript + React
  [TRIAGE]   LLM usage detected: openai@4.28.0, langchain
  [AI AGENT] Activating OWASP LLM Top 10 scanner...

  ── AI SECURITY SCAN ──────────────────────────────────

  [LLM01] CRITICAL  Prompt Injection
          src/api/chat/route.ts:42
          Raw user input interpolated into system prompt
          → AI fix prompt ready in dashboard

  [LLM02] HIGH      Insecure Output Handling
          src/components/ChatBubble.tsx:18
          LLM response passed to dangerouslySetInnerHTML

  [LLM04] MEDIUM    Model Denial of Service
          src/lib/openai.ts:31
          No max_tokens cap on completions request

  [LLM06] HIGH      Sensitive Information Disclosure
          src/api/summarize/route.ts:22
          user.email included in LLM inference prompt

  [LLM08] HIGH      Excessive Agency
          src/agents/executor.ts:87
          delete_database() callable without human approval

  ───────────────────────────────────────────────────────
  AI SECURITY SCORE:   54/100
  OWASP LLM FINDINGS:  5  (1 critical · 3 high · 1 medium)
  FRAMEWORKS MAPPED:   OWASP LLM Top 10 · NIST AI RMF
  PDF REPORT:          custodia.dev/reports/xK9mP2
  ───────────────────────────────────────────────────────
Scan My Codebase Free · View Demo Report
Zero Config
Respects .gitignore · skips node_modules
90 Seconds
Average scan time under 1000 files
0 Stored
Source code never retained beyond inference

How to Scan Your Codebase for OWASP LLM Top 10

Three commands. No YAML files. No integration guides. Custodia is deliberately zero-config — it reads your .gitignore, skips build artifacts, and routes each file to the appropriate security agent automatically.

01
Install the CLI
# Install globally via npm
npm install -g @custodia/cli

# Verify
custodia --version
02
Authenticate
# Get your API key at custodia.dev/dashboard
custodia auth --key YOUR_API_KEY

# ✓ Authenticated. Key stored at ~/.custodia
03
Run the full scan
# From your project root — scans everything
custodia scan .

# Or target a specific directory
custodia scan ./src/backend

Quota note: Free tier includes 3 scan credits + 5 diff scans/month. The AI Security Agent (OWASP LLM Top 10) runs on Pro ($39/mo) and above. If your codebase has no LLM usage, the AI agent is skipped entirely — no quota consumed, no extra latency.

AI Fix Prompts — Remediation Without a Security Consultant

Every finding on Pro ($39/mo) and above ships with an engineered, code-specific fix prompt generated by Claude — ready to paste directly into Cursor, GitHub Copilot, or Claude Code. The prompt includes the exact file, line, vulnerability class, and a framework-mapped remediation strategy.

For LLM01 in a Next.js API route, you'd get a fix prompt that understands your existing middleware stack, not a generic "separate system from user input" suggestion. This is what makes the difference between a security scan that generates a Jira ticket and one that closes the vulnerability in 10 minutes.

Frequently Asked Questions

Does Custodia support all LLM providers?

Yes. Custodia auto-detects LLM usage across OpenAI SDK, Anthropic SDK, LangChain, LlamaIndex, Vercel AI SDK, and HuggingFace. The OWASP LLM Top 10 agent activates automatically when any of these are detected — no configuration required. Non-AI projects skip the agent entirely.

How does Custodia detect prompt injection (LLM01) statically?

Custodia traces variable flows into system prompt fields. It flags cases where user-controlled input — from request bodies, query parameters, or database values — is interpolated into the system role of a chat completion call. This is the primary LLM01 attack vector in real-world codebases.
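For intuition only, here is a toy heuristic in the same spirit: a single regex that flags a template literal interpolated into a system-role message. This is NOT Custodia's engine, which traces data flow rather than matching source text:

```typescript
// Toy heuristic, NOT Custodia's engine: flag source text where a template
// literal with an interpolation appears as the content of a system-role message.
// Real detection traces where the interpolated value actually comes from.
function flagsSystemInterpolation(source: string): boolean {
  const pattern = /role:\s*['"]system['"][\s\S]{0,200}?content:\s*`[^`]*\$\{/;
  return pattern.test(source);
}
```

A regex cannot tell user input from a safe constant, which is exactly why flow tracing is needed for a usable false-positive rate.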

Does the scanner produce false positives?

Custodia uses combined static analysis and Claude-powered contextual reasoning to minimize false positives. Each finding includes a severity, a specific file/line location, and the detection reasoning — so you can review the evidence before patching. In our testing, false positive rates on LLM01 and LLM02 are under 5%.

Is my source code stored or retained?

No. Source code is never retained beyond inference. Every agent system prompt includes a confidentiality directive. Only structured JSON findings pass through to the compliance and synthesis stages — your code never reaches persistent storage. Reports are stored independently of source.

How is Custodia different from Snyk or Semgrep for AI security?

Snyk and Semgrep cover traditional cybersecurity (OWASP Top 10, CVEs, dependency vulnerabilities) well — but have zero coverage for OWASP LLM Top 10 vulnerability categories. Custodia covers both traditional and AI-specific security in one scan, making it the only single-tool solution for developers shipping LLM features.

Related Articles

Prompt Injection Prevention
Stop LLM01 before it ships — direct vs indirect injection, code defenses, and automated detection.
GitHub Actions Security Scanner
Block vulnerable PRs automatically with OWASP scanning in CI/CD. Complete YAML template included.
Stop Reading. Start Scanning.

What is your AI Security Score?

Free — 3 scan credits against OWASP Top 10 + AI basics. Pro ($39/mo) unlocks OWASP LLM Top 10, NIST AI RMF, and EU AI Act. See pricing →

Scan My Code Free · View Demo Report →