OWASP LLM02 · Data Integrity · April 11, 2026 · 12 min read

Model Output Validation: Why LLMs Hallucinate Into Your Database

Your LLM generates SQL. That SQL drops your users table. Your LLM generates JSON. That JSON contains "admin": true. Your LLM generates code. That code exfiltrates your secrets. You never validated any of it.

LLM Hallucination · Output Validation · SQL Injection · Code Generation
The Blind Spot

Most teams validate user input obsessively. But when an LLM generates output — SQL queries, JSON configs, executable code — they pass it straight through. The model is trusted implicitly. This is the equivalent of executing user input without sanitization.

Three Scenarios That Happen Every Week

The SQL Hallucination (CRITICAL)

Your "natural language to SQL" feature generates a query for "show me last month's revenue." The LLM returns a valid-looking query that includes DROP TABLE users. Your app executes it because the SQL is syntactically correct.

Impact: Production database destroyed. 48 hours of downtime. Customer data lost.

The JSON Corruption (HIGH)

Your API uses an LLM to generate configuration JSON. The model hallucinates an extra field: "admin": true. Downstream code checks for that field and grants elevated privileges.

Impact: Privilege escalation. Any user interaction with the LLM can produce admin-level configs.

The Code Generation Backdoor (HIGH)

Your AI coding assistant generates a helper function. Buried in the implementation is a fetch() call to an external URL that exfiltrates environment variables. It looks like a logging utility.

Impact: Secret exfiltration. API keys and database credentials sent to attacker's server.

Three Types of Output Hallucination

Structural Hallucination

Output has invalid syntax. Unclosed JSON braces, malformed SQL, broken code. Your parser throws an error — or worse, silently corrupts data.

Example

LLM returns { "users": [{ "name": "Alice" } — missing closing brackets. JSON.parse() throws. If you catch and retry, you burn tokens. If you don't catch, your app crashes.
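The crash-versus-retry tradeoff above can be made explicit with a small wrapper, sketched here under the assumption that `regenerate` is your own callback into the model (it is a stand-in, not a real API):

```typescript
// Hedged sketch: wrap JSON.parse so malformed LLM output degrades to a
// bounded retry instead of an unhandled exception. `regenerate` is an
// illustrative stand-in for a call back to the model.
async function parseWithRetry<T>(
  raw: string,
  regenerate: () => Promise<string>,
  maxRetries = 2,
): Promise<T | null> {
  let text = raw;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return JSON.parse(text) as T;
    } catch {
      if (attempt === maxRetries) break;
      text = await regenerate(); // burns tokens, but deliberately and bounded
    }
  }
  return null; // caller returns a fallback instead of crashing
}
```

Capping retries keeps a persistently broken model from turning one bad response into an unbounded token bill.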

Semantic Hallucination

Output is syntactically valid but means the wrong thing. The SQL query runs but returns wrong data. The JSON parses but contains fabricated values.

Example

Asked for "active users this month," LLM generates SELECT * FROM users (no date filter, no active check). Query succeeds. Dashboard shows 10x the real number. Nobody notices for weeks.
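Semantic errors cannot be caught by a parser, but cheap intent-level invariants catch gross mismatches like the one above. A hedged heuristic sketch (the time-word list and the `created_at`/`date` column names are assumptions about your schema, not a general solution):

```typescript
// Hedged heuristic: if the user's request mentions a time window, the
// generated SQL must contain a date filter. Column names are assumed.
const TIME_WORDS = /\b(today|yesterday|week|month|year|last|since)\b/i;

function passesSemanticChecks(request: string, sql: string): boolean {
  if (TIME_WORDS.test(request) && !/\bWHERE\b[\s\S]*\b(date|created_at)\b/i.test(sql)) {
    return false; // asked for a time window, query has no date filter
  }
  return true;
}
```

Checks like this are screens, not proofs: they reject obviously wrong queries while letting a human-reviewed allowlist handle the rest.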

Injection Hallucination

Attacker influences the LLM to produce output that bypasses your downstream validation. The output is designed to exploit whatever system consumes it.

Example

Attacker's prompt injection causes the LLM to output JSON with "role": "admin" or SQL with UNION SELECT password FROM credentials. The output is syntactically perfect — it's the intent that's malicious.
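The defense against injected values like `"role": "admin"` is a closed set: privileged fields may only take values you enumerated in advance. A minimal sketch in plain TypeScript (in Zod this is a one-line `z.enum`; the field and role names below mirror the scenario and are illustrative):

```typescript
// Hedged sketch: constrain a privileged field to a closed set so an
// injected "role": "admin" fails validation. Names are illustrative.
const ALLOWED_ROLES = ["user", "viewer"] as const;
type Role = (typeof ALLOWED_ROLES)[number];

function validateRole(output: { role?: string }): Role {
  if (!ALLOWED_ROLES.includes(output.role as Role)) {
    throw new Error(`Rejected role from LLM output: ${output.role}`);
  }
  return output.role as Role;
}
```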

The 5-Step Defense Framework

1. Define Expected Format

Create explicit schemas (JSON Schema, SQL grammar subset, AST patterns) for every type of LLM output your app accepts.

import { z } from "zod";

const OutputSchema = z.object({
  query: z.string().max(500),
  parameters: z.array(z.string()),
  tables: z.array(z.enum(["users", "orders", "products"])),
});

2. Parse and Validate Structure

Never use LLM output as a raw string. Always parse it through a strict validator before any downstream use.

const parsed = OutputSchema.safeParse(llmOutput);
if (!parsed.success) {
  logger.warn("LLM output rejected", parsed.error);
  return fallbackResponse();
}
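One subtlety worth knowing: Zod's `z.object` strips unknown keys by default rather than failing on them, so a hallucinated `"admin": true` would be silently dropped, not flagged. To treat extra fields as validation failures, use `.strict()` on the schema. The same idea in plain TypeScript, as a self-contained sketch (the allowed key names mirror the schema above):

```typescript
// Hedged sketch: reject, rather than strip, fields the schema never
// declared — this is what catches a hallucinated "admin": true.
const ALLOWED_KEYS = new Set(["query", "parameters", "tables"]);

function rejectUnknownKeys(output: Record<string, unknown>): void {
  for (const key of Object.keys(output)) {
    if (!ALLOWED_KEYS.has(key)) {
      throw new Error(`Unexpected field in LLM output: ${key}`);
    }
  }
}
```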

3. Sanitize SQL Output

If your LLM generates SQL, start with a denylist that blocks DROP, DELETE, UPDATE, ALTER, TRUNCATE, and any other DDL, and prefer a stricter allowlist that permits only the operations you expect (typically a single SELECT against approved tables).

const BLOCKED = /\b(DROP|DELETE|ALTER|TRUNCATE|INSERT|UPDATE|GRANT|EXEC)\b/i;
if (BLOCKED.test(generatedSQL)) {
  throw new Error("Destructive SQL blocked");
}
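The regex denylist is a floor, not a ceiling: a stricter allowlist permits only a single SELECT against approved tables. A hedged sketch (the table extraction below is a naive regex, not a real SQL parser; for production, parse the statement properly):

```typescript
// Hedged allowlist sketch: one statement, SELECT only, approved tables
// only. Table extraction is a naive regex — illustrative, not a parser.
const ALLOWED_TABLES = new Set(["users", "orders", "products"]);

function isAllowedSQL(sql: string): boolean {
  const trimmed = sql.trim().replace(/;$/, "");
  if (trimmed.includes(";")) return false;       // exactly one statement
  if (!/^SELECT\b/i.test(trimmed)) return false; // SELECT only
  const re = /\b(?:FROM|JOIN)\s+([A-Za-z_]+)/gi;
  const tables: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(trimmed)) !== null) tables.push(m[1].toLowerCase());
  return tables.length > 0 && tables.every((t) => ALLOWED_TABLES.has(t));
}
```

Note the allowlist catches cases the denylist misses, such as a syntactically innocent SELECT against a table you never intended to expose.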

4. Validate Code Output

If your LLM generates code, parse the AST and check for dangerous patterns: network calls, file system access, eval(), dynamic imports.

// First-pass regex screen for dangerous patterns (pair with an AST check)
const DANGEROUS = [
  /\bfetch\s*\(/,    // Network calls
  /\beval\s*\(/,     // Code execution
  /require\s*\(/,    // Dynamic imports
  /process\.env/,    // Env access
];
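Regexes are a quick screen but easy to evade with whitespace or aliasing; the AST check the step describes is more robust. A hedged sketch using the TypeScript compiler API (the banned-name list is illustrative, and a real deployment would also track aliases and computed access):

```typescript
import * as ts from "typescript";

// Hedged sketch: walk the AST of generated code and flag direct calls
// to dangerous identifiers plus any access to process.env.
const BANNED_CALLS = new Set(["fetch", "eval", "require", "Function"]);

function findDangerousNodes(code: string): string[] {
  const source = ts.createSourceFile("generated.ts", code, ts.ScriptTarget.Latest, true);
  const findings: string[] = [];

  const visit = (node: ts.Node): void => {
    if (ts.isCallExpression(node) && ts.isIdentifier(node.expression)
        && BANNED_CALLS.has(node.expression.text)) {
      findings.push(`call to ${node.expression.text}()`);
    }
    if (ts.isPropertyAccessExpression(node)
        && node.expression.getText(source) === "process"
        && node.name.text === "env") {
      findings.push("access to process.env");
    }
    ts.forEachChild(node, visit);
  };
  visit(source);
  return findings;
}
```

Unlike the regex list, this survives formatting tricks like `fetch (url)` or comments between the name and the parenthesis, because it inspects structure rather than text.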

5. Sandbox Execution

If you must execute LLM-generated code or SQL, run it in a sandboxed environment with no network access, read-only data, and strict timeouts.

// Execute in isolated context
const result = await sandbox.execute(validatedCode, {
  timeout: 5000,
  network: false,
  filesystem: "read-only",
  maxMemory: "128MB",
});
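For Node, the closest built-in to the pattern above is the `vm` module, sketched here with one loud caveat: the Node documentation is explicit that `vm` is not a security boundary. For genuinely hostile code you want OS-level isolation (a container, gVisor, Firecracker); this only illustrates the empty-context-plus-timeout shape.

```typescript
import * as vm from "node:vm";

// Hedged sketch: run code in a context with no require, no fetch, no
// process, and a hard timeout. NOT a security boundary per Node docs —
// pair with OS-level sandboxing for untrusted code.
function runIsolated(code: string, timeoutMs = 5000): unknown {
  const context = vm.createContext(Object.create(null));
  return vm.runInContext(code, context, { timeout: timeoutMs });
}
```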

Validation Checklist

Define strict output schemas for every LLM interaction
Parse LLM output through Zod or JSON Schema before use
Restrict generated SQL to allowlisted operations and block destructive statements
AST-check generated code for dangerous function calls
Sandbox any LLM-generated code execution
Log all raw LLM outputs before and after validation
Set up alerts for validation failure rate spikes
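The last two checklist items fit naturally into one wrapper: log every raw output before validation, log the verdict after, and count failures so a spike can trigger an alert. A hedged sketch (the structured `console` logging and the bare counter stand in for whatever observability layer you actually use):

```typescript
// Hedged sketch: one chokepoint that logs raw output, validates, and
// counts failures. Replace console + counter with your own telemetry.
type Validator<T> = (raw: string) => T; // throws on invalid output

let validationFailures = 0;

function validateAndLog<T>(raw: string, validate: Validator<T>): T | null {
  console.log(JSON.stringify({ event: "llm_raw_output", raw })); // log BEFORE validation
  try {
    const value = validate(raw);
    console.log(JSON.stringify({ event: "llm_output_accepted" }));
    return value;
  } catch (err) {
    validationFailures++; // feed this counter into alerting on rate spikes
    console.warn(JSON.stringify({ event: "llm_output_rejected", error: String(err) }));
    return null;
  }
}
```

Funneling every LLM interaction through one such chokepoint also gives you the audit trail you will want when investigating an incident.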
Catch Hallucination Vulnerabilities

Custodia detects LLM output used without validation — unvalidated SQL generation, raw JSON parsing, unsandboxed code execution, and more.

Scan Your Code Free →