AI Code Security · April 14, 2026 · 9 min read

Is AI-Generated Code Secure?
What We Found Scanning Real Projects

AI editors write working code. They don't write audited code. After scanning AI-generated projects, the same four vulnerability classes appear repeatedly — not because the AI is careless, but because security and correctness are different problems.

Short answer: AI-generated code is not inherently insecure — but it is systematically undertested for security before it ships. The vulnerability classes it produces are predictable, detectable, and fixable. The problem is that most developers assume AI editors are doing security analysis. They are not.

What "AI Generated" Actually Means for Security

AI code generation models are trained on billions of lines of code from public repositories, documentation, and tutorials. They learn to predict statistically likely continuations of code patterns. This makes them extraordinarily good at producing syntactically correct, functionally plausible code quickly.

The security problem is in the training distribution. Public repositories are not audited codebases. Stack Overflow answers optimize for code that works, not code that is safe. Tutorial repositories hardcode credentials because it makes examples simpler. Open source projects have broken access control because authorization is hard and the maintainer was focused on shipping a feature.

When an AI model completes your code, it is completing it toward the distribution of code it was trained on — which is a distribution that contains these vulnerabilities at scale. This is not a flaw in the model. It is the natural outcome of learning from code as it actually exists in the world.

What Custodia Finds in AI-Generated Projects
  • 68% contain hardcoded secrets
  • 61% have broken access control
  • 89% with LLM features have prompt injection
  • 74% with LLM features render output unsafely

Based on Custodia scan results across AI-generated codebases. Projects identified as AI-generated based on commit history, README attribution, and scan metadata.

The 4 Findings — What They Look Like and Where to Find Them

Every finding below includes the detection signal — what to look for in your own codebase if you want to check manually. Or run the scan and skip to the remediation.

#1

Hardcoded Secrets

A02: Cryptographic Failures
Found in ~68% of scanned AI projects

The most consistent finding. AI models complete API client instantiation with credential patterns that match their training data. Stripe keys, OpenAI keys, database connection strings, JWT secrets — all hardcoded in the initial scaffold and committed before any production-readiness review happens.

Detection Signal

If you asked an AI editor to "set up OpenAI" or "add Stripe" and didn't immediately move the key to an env file yourself, check `src/lib/` and any API client initialization files.

// Real pattern from scanned projects:
const stripe = new Stripe('sk_live_4xKy8m...', { apiVersion: '2024-11-20' });
const openai = new OpenAI({ apiKey: 'sk-proj-K9m2...' });

// Both found in git history even after "fixing" — keys live in prior commits.
// Rotation required, not just deletion.
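The remediation is mechanical: read secrets from the environment and fail fast when they are missing, so a misconfigured deploy errors at startup instead of silently shipping a hardcoded key. A minimal sketch — the helper name and env var names are illustrative, not part of any SDK:

```typescript
// requireEnv: fetch a secret from the environment, or fail loudly at startup.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

// Client construction then references env vars instead of literals, e.g.:
//   const stripe = new Stripe(requireEnv('STRIPE_SECRET_KEY'), { apiVersion: '2024-11-20' });
//   const openai = new OpenAI({ apiKey: requireEnv('OPENAI_API_KEY') });
```

Remember that moving the key to an env file fixes future commits only — any key that was ever committed still needs rotation.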
#2

Broken Access Control

A01: Broken Access Control
Found in ~61% of scanned AI projects

AI editors build features, not permission systems. A route that deletes a record, modifies user data, or returns sensitive information will be generated correctly in terms of its happy path — but the authorization check is either missing entirely or only checks authentication (is the user logged in?) without checking authorization (is this user allowed to do this?).

Detection Signal

Check every API route that accesses a database with a user-provided ID parameter. If the only auth logic is `const { userId } = await auth()` with no subsequent check that `userId === params.id` or an admin role check, it's likely vulnerable.

// Common pattern — auth present, authorization absent:
export async function GET(req: Request, { params }) {
  const { userId } = await auth(); // ✓ authenticated
  if (!userId) return new Response('Unauthorized', { status: 401 });

  // ❌ Returns ANY user's data based on URL param — should validate userId === params.id
  const user = await db.query.users.findFirst({
    where: eq(users.id, params.id)
  });
  return Response.json(user);
}
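The fix is one comparison before the query runs. A sketch of that ownership check as a standalone guard — the `Session` shape and `canAccessUser` name are illustrative, not from any framework:

```typescript
// Ownership guard: authenticated is not the same as authorized.
// Returns true only when the requester owns the resource or is an admin.
type Session = { userId: string; role?: 'admin' | 'user' };

function canAccessUser(session: Session | null, targetUserId: string): boolean {
  if (!session) return false;                 // not authenticated → 401
  if (session.role === 'admin') return true;  // admins may access any record
  return session.userId === targetUserId;     // everyone else: own record only
}
```

In a route like the one above, call the guard with the session and `params.id` before touching the database, and return a 403 when it fails.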
#3

Prompt Injection (LLM01)

OWASP LLM01: Prompt Injection
Found in ~89% of projects with LLM features

The most alarming finding in projects with AI features. When AI editors scaffold chat interfaces, AI assistants, or copilot features, they almost always concatenate user input directly into the system prompt. This is the textbook LLM01 attack vector — an attacker can override the system prompt entirely by typing the right message.

Detection Signal

Search your codebase for any ChatCompletion or similar call where the `system` message content includes a template literal with user-provided variables. Any interpolation of `${message}`, `${userInput}`, or similar into the system role is vulnerable.

// What AI editors generate for "add a chatbot":
messages: [{
  role: 'system',
  content: `You are a helpful assistant. Context: ${userContext}. User asked: ${message}`
  // ❌ User controls "message" — can override the entire system prompt
}]

// What it should be:
messages: [
  { role: 'system', content: 'You are a helpful assistant.' }, // static
  { role: 'user',   content: message }  // isolated — no system prompt access
]
#4

Insecure Output Handling (LLM02)

OWASP LLM02: Insecure Output Handling
Found in ~74% of projects with LLM features

When AI editors add UI components to display LLM responses, they render the output directly. This creates an XSS attack surface if the LLM output is ever influenced by external content (which it will be in any RAG or agentic system). It also creates a code injection risk in backend contexts where LLM output is executed.

Detection Signal

Search for `dangerouslySetInnerHTML` where the `__html` value comes from an LLM API response. Also check for `eval()`, `exec()`, or `new Function()` called with LLM output in backend handlers.

// Direct render of LLM output — XSS if output is influenced by attacker
<div dangerouslySetInnerHTML={{ __html: chatResponse.content }} />

// Correct approach:
import DOMPurify from 'dompurify';
const safe = DOMPurify.sanitize(chatResponse.content);
<div dangerouslySetInnerHTML={{ __html: safe }} />

// Or render as plain text if HTML is not needed:
<p>{chatResponse.content}</p>

Why AI Editors Will Never Fully Solve This

It is tempting to assume that as AI editors get smarter, they will start catching these vulnerabilities themselves. Some already have basic guards — GitHub Copilot flags obvious SQL injection in simple contexts, for example. But there is a structural reason why AI completion cannot replace security analysis:

Security analysis requires reasoning about your entire application — tracing how a piece of user-controlled data flows from an HTTP request through your middleware, into a database query, back through an API response, and into a UI component. It requires knowing that this specific route is reachable by unauthenticated users even though most routes are protected. That kind of whole-codebase reasoning is outside what next-token prediction does well.

Static analysis tools were built exactly for this. They traverse your codebase as a graph, trace data flows, and apply pattern matching to find vulnerability classes across the entire application — not just the file currently open in the editor.

Scan Your AI-Generated Codebase

One command. OWASP Top 10 + OWASP LLM Top 10. Free — 3 scan credits, no credit card.

# Install and run — no config needed
npx custodia-cli scan

# Or scan only files changed in the last PR (faster for CI):
npx custodia-cli scan --diff
  ┌──────────────────────────────────────────────────────┐
  │  CUSTODIA.DEV  //  AI CODE SECURITY SCAN            │
  └──────────────────────────────────────────────────────┘

  [DETECT] OpenAI SDK found → LLM agent activated
  [DETECT] LangChain found  → RAG security checks enabled
  [SCAN]   Scanning 412 files...

  ── FINDINGS ──────────────────────────────────────────

  [A02]    CRITICAL  Hardcoded Stripe Secret Key
             src/lib/stripe.ts:3  →  move to STRIPE_SECRET_KEY env var

  [LLM01]  CRITICAL  Prompt Injection — System/User Context Mixed
             src/app/api/assistant/route.ts:22

  [A01]    HIGH     Broken Access Control — No Ownership Check
             src/app/api/documents/[id]/route.ts:18

  [LLM02]  HIGH     Insecure Output — Raw LLM HTML Render
             src/components/ChatMessage.tsx:31

  ─────────────────────────────────────────────────────
  SCORE: 38/100  ·  2 critical  ·  2 high  ·  1 medium
  REPORT: custodia.dev/reports/ai_scan_9pK4n
Scan My Code Free · View Sample Report

What To Do About It — The Practical Workflow

You do not need to stop using AI editors. You need to add a security gate to your pre-launch workflow. Here is the minimal version:

01
Run a full scan before your first deploy
Before you flip a new project from private to public or ship to real users, run `npx custodia-cli scan`. Review Critical and High findings. Use the included fix prompts to remediate with your AI editor — the same tool that created the issue can fix it with the right prompt.
02
Add diff mode to your CI pipeline
After initial remediation, add `npx custodia-cli scan --diff` to your GitHub Actions workflow. It runs on every PR, scans only changed files, and takes under 60 seconds. It catches regressions before they merge.
03
Re-scan after major AI-generated additions
If you ship a major new feature with an AI editor — a new LLM integration, a new admin panel, a payment flow — run a full scan again. The same vulnerability classes will appear in the new code.
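Step 02 above can be sketched as a small GitHub Actions workflow. This assumes only the `npx custodia-cli scan --diff` command shown earlier in this article; the filename, action versions, and checkout depth are assumptions, not Custodia-documented requirements:

```yaml
# .github/workflows/security-scan.yml — run the Custodia diff scan on every PR
name: security-scan
on: pull_request

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so --diff can compare against the base branch
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Custodia diff scan
        run: npx custodia-cli scan --diff
```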
Key Takeaways
  • AI-generated code is not inherently insecure — but it is systematically undertested for security before it ships.
  • The 4 most common vulnerability classes (hardcoded secrets, broken access control, prompt injection, insecure output handling) are predictable because they exist at scale in the training data.
  • AI editors are completion engines, not security analyzers. These are fundamentally different tools with different purposes.
  • The fix is a pre-launch scan, not a new workflow. Keep using AI editors. Add one gate before deploy.
  • AI-generated LLM features have an additional attack surface (OWASP LLM Top 10) that traditional scanners like Snyk and Semgrep miss entirely.

Frequently Asked Questions

Is AI-generated code secure?

AI-generated code is functional but not security-audited. AI models like GPT-4, Claude, and Gemini are trained to predict correct-looking code — not to reason about threat models, trace data flows, or verify authorization logic. The result is working code that consistently contains predictable vulnerability classes: hardcoded secrets, missing authorization, prompt injection in LLM features, and insecure output rendering. These are not random — they are the natural output of applying pattern completion to code that was written without security in mind.

What security vulnerabilities does ChatGPT or Copilot code typically contain?

The four most common vulnerability classes in AI-generated code are: (1) Hardcoded API keys and secrets — AI models complete credential patterns from training data that often look like real keys. (2) Broken access control — AI generates routes and handlers that work for authenticated users but don't check authorization, allowing any user to access any resource. (3) Prompt injection — when scaffolding LLM features, AI editors interpolate user input directly into system prompts, creating LLM01 vulnerabilities. (4) Insecure output handling — LLM responses rendered directly as HTML expose XSS attack surfaces.

Can I trust AI-generated code in production?

You can use AI-generated code in production, but you should not deploy it without a security scan. AI editors dramatically accelerate development and produce high-quality, working code. They are not a replacement for security analysis. Run a static analysis scan (like Custodia) against AI-generated code before deploying — the same way you run a test suite. The overhead is one command and 5 minutes of review.

Does GitHub Copilot check for security vulnerabilities in the code it generates?

GitHub Copilot has some basic vulnerability detection that can flag obvious patterns like SQL injection in simple contexts. However, it does not perform full static analysis, does not trace authorization flows across your application, and has no coverage for AI/LLM-specific vulnerabilities like prompt injection (OWASP LLM01) or insecure output handling (LLM02). Copilot is a code completion tool — it is not a security scanner.

How do I check if my AI generated code is secure?

Run a static analysis scan against your full codebase. For AI-generated code specifically, you need a tool that covers both traditional OWASP Top 10 (injection, broken access control, XSS) and OWASP LLM Top 10 (prompt injection, insecure output handling, excessive agency). Custodia covers both in one scan. Run `npx custodia-cli scan` in your project root — it auto-detects LLM usage and runs the appropriate checks. Results in 3–8 minutes.

Related Articles

Vibe Coding Security Risks
What Cursor and Claude can't catch — and the 4 vulnerability classes every vibe-coded project ships.
OWASP LLM Top 10 Scanner
All 10 AI vulnerability categories explained — and why traditional scanners are blind to every one of them.
Find Out What's In Your Code

Is Your AI Code Secure?

Free — 3 scan credits. OWASP Top 10 + OWASP LLM Top 10. Results in 5 minutes. See pricing →

Scan My Code Free · View Demo Report →