We Scanned Supabase and It Scored 100/100 — Here's What Perfect Security Looks Like
Supabase powers over 1 million databases. We ran a full Custodia scan across 628 source files — auth, data protection, secrets, injection, logging, and AI governance — and found zero vulnerabilities. Here's exactly what they did right and how to apply every practice to your own codebase.
All Six Domains: Perfect
Custodia evaluates six security domains against 40+ individual checks spanning OWASP Top 10, CWE, NIST CSF, and OWASP LLM Top 10. Supabase scored 100 on every one. To put that in context: in our scans of Fastify (97/100), Hono (92/100), Elysia (88/100), and Rollup (85/100), each had at least one HIGH finding. A clean sweep across all six domains is rare.
The 5 Practices That Drove a Perfect Score
A 100/100 score is not magic — it is the result of specific, learnable engineering decisions applied consistently across a large codebase. Here is exactly what Supabase does in each domain and what the equivalent pattern looks like in your own code.
Zod + TypeScript: A Two-Layer Validation Wall
Most applications validate at one layer and trust the rest. Supabase validates twice — TypeScript catches type mismatches at compile time, Zod catches them at runtime. This matters because TypeScript types disappear at runtime. A POST body typed as `{ email: string }` in TypeScript is still `unknown` when it arrives from the network. Zod's `schema.parse()` turns that unknown into a verified shape before it touches any business logic. Combined, these two layers mean no raw request data ever reaches a database query or a downstream function without being explicitly validated.
// Pattern Supabase uses throughout its API routes
import { z } from 'zod'
const CreateProjectSchema = z.object({
  name: z.string().min(1).max(64),
  region: z.enum(['us-east-1', 'eu-west-1', 'ap-southeast-1']),
  plan: z.enum(['free', 'pro', 'enterprise']),
  organization_id: z.string().uuid(),
})
// At the API boundary — before ANY business logic
const result = CreateProjectSchema.safeParse(req.body)
if (!result.success) {
  return res.status(400).json({ error: result.error.flatten() })
}
// result.data is now fully typed AND runtime-verified
const { name, region, plan, organization_id } = result.data
Row Level Security: Authorization at the Database Layer
The most common authorization mistake is doing it only in application code. If an attacker bypasses the app layer — via a misconfigured proxy, a forgotten debug route, or an SSRF flaw — they hit the database directly. Row Level Security (RLS) in PostgreSQL means the database itself enforces access control. Every query runs as the authenticated user, not as a superuser. A query that should return rows for user A will return zero rows if run as user B, even if issued directly against the database without going through the application. This is defense-in-depth: access control that cannot be bypassed by finding a hole elsewhere in the stack.
-- RLS policy pattern from Supabase's own infrastructure
-- Users can only read/modify their own project data
CREATE POLICY "Users can only access their own projects"
  ON projects
  FOR ALL
  USING (auth.uid() = owner_id);
-- Service-level operations use a separate, tightly-scoped role
CREATE POLICY "Service role can manage all projects"
  ON projects
  FOR ALL
  TO service_role
  USING (true);
-- Policies only take effect once RLS is enabled on the table:
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
-- RLS is enforced at the Postgres level — not bypassable
-- from application code or direct DB connections
Secrets in Environment Variables, Zero Exceptions
Custodia scanned 628 files — including configuration files, test helpers, and example code — and found zero hardcoded secrets. This sounds like a low bar, but it is surprisingly rare in large codebases. The pattern that trips teams up is "temporary" secrets in test files, example configurations with real-looking placeholder values, or fallback values in code for when the env var is missing. Supabase's codebase has none of these. Every secret reference consistently reads from `process.env`, with no in-code fallback defaults that could ship to production accidentally.
// What Supabase's pattern looks like — clean env var access
// No fallback values, no hardcoded strings
const supabaseUrl = process.env.SUPABASE_URL!
const supabaseAnonKey = process.env.SUPABASE_ANON_KEY!
const jwtSecret = process.env.JWT_SECRET!
// ❌ The pattern that fails scans in most codebases:
// const jwtSecret = process.env.JWT_SECRET || 'dev-fallback-secret'
// ^^^^^^^^^^^^^^^^^^^^
// This fallback ships to production when env var is missing
// ✓ Instead, fail loudly if config is missing:
if (!supabaseUrl || !supabaseAnonKey || !jwtSecret) {
  throw new Error('Missing required environment variables')
}
Structured Logging Without Leaking Sensitive Data
Logging is a double-edged sword. Insufficient logging means you cannot detect or investigate incidents. Excessive logging means you are writing user data, tokens, and PII to log files that eventually end up in a SIEM, a third-party logging service, or a developer's screen. Supabase threads this needle by logging structured events through Sentry — enough information to reconstruct what happened, but with sensitive fields explicitly omitted. No JWT payloads, no raw request bodies, no API key values in log lines.
// Supabase-style structured logging — events, not raw data
import * as Sentry from '@sentry/node'
// ✓ Log the event with safe metadata
Sentry.captureEvent({
  message: 'auth.sign_in',
  level: 'info',
  extra: {
    user_id: userId, // ✓ safe — internal ID
    provider: 'email', // ✓ safe — enum value
    // ❌ Never log:
    // password, token, session, request.body
  },
})
// ✓ Log errors with context, not raw caught values
try {
  await processRequest(data)
} catch (err) {
  Sentry.captureException(err, {
    extra: { operation: 'processRequest', userId },
    // ❌ Not: { rawInput: data } — could contain PII
  })
  throw err
}
AI Governance: OpenAI SDK With Guardrails
Supabase uses the OpenAI SDK for AI-powered features. The Custodia scan specifically checks for AI security controls (OWASP LLM01 through LLM10) — prompt injection vectors, insecure output handling, missing token limits, and lack of input sanitization before LLM calls. Supabase scored 100 across all AI checks. The SDK is called through a centralized wrapper rather than directly from routes, allowing consistent governance: token limits enforced, user inputs sanitized before being incorporated into prompts, and model outputs validated before being returned to clients.
// AI governance pattern — centralized LLM wrapper
// Never call OpenAI directly from route handlers
// Assumes an `openai` client instance (new OpenAI()) created at module scope
async function callAI(
  systemPrompt: string,
  userInput: string,
  opts: { maxTokens?: number } = {}
): Promise<string> {
  // ✓ Sanitize user input before incorporating into prompt
  const sanitized = userInput
    .replace(/<[^>]*>/g, '') // strip HTML
    .slice(0, 2000) // enforce input length limit
  // Add prompt injection defenses specific to your use case
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    max_tokens: opts.maxTokens ?? 1024, // ✓ always set token limit
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: sanitized },
    ],
  })
  const output = response.choices[0]?.message?.content ?? ''
  // ✓ Validate output before returning to client
  // (OutputSchema: a Zod schema for the expected response, defined alongside this wrapper)
  return OutputSchema.parse(output)
}
Why a Perfect Score Is Harder Than It Sounds
Most codebases fail on one or two domains that get deprioritized because they're "not the core product." Logging and monitoring is the most common failure — teams ship without structured logging, then add it reactively after an incident. Supabase has Sentry integration from day one, with clear separation between operational events (logged) and sensitive data (never logged).
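The "logged vs. never logged" separation can be enforced mechanically rather than by convention. Here is a minimal sketch of that idea — the `SENSITIVE_KEYS` list and `scrubExtra` helper are illustrative names, not Supabase's actual code:

```typescript
// Illustrative scrubber: redact known-sensitive keys from event metadata
// before it is handed to any logging or monitoring backend.
const SENSITIVE_KEYS = new Set([
  'password', 'token', 'session', 'authorization',
  'apikey', 'api_key', 'secret', 'cookie',
])

function scrubExtra(extra: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(extra)) {
    if (SENSITIVE_KEYS.has(key.toLowerCase())) {
      // Redact by key name
      clean[key] = '[REDACTED]'
    } else if (typeof value === 'object' && value !== null) {
      // Nested objects are redacted wholesale to be safe
      clean[key] = '[REDACTED:object]'
    } else {
      clean[key] = value
    }
  }
  return clean
}

// With @sentry/node, this plugs into the beforeSend hook so every
// event is scrubbed in one place, not at each call site:
// Sentry.init({
//   dsn: process.env.SENTRY_DSN,
//   beforeSend(event) {
//     if (event.extra) event.extra = scrubExtra(event.extra)
//     return event
//   },
// })
```

Wiring the scrubber into `beforeSend` means a developer who forgets the logging rules still cannot leak a token: the redaction happens centrally, on every event, before it leaves the process.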
The second most common failure is secrets hygiene in test and example files. A developer writes a quick test with a placeholder JWT secret that looks real enough to get flagged — and it ships because nobody added test fixtures to the scanner ignore list. Supabase's .env.example files use clearly fake placeholder values throughout, and no file in the 628-file scan contained a string pattern consistent with a real API key or secret.
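To make the "clearly fake placeholder" convention concrete, a committed example file might look like this — the variable names mirror the earlier snippet; the values are illustrative, not Supabase's actual file:

```shell
# .env.example — committed to the repo; real values live only in an untracked .env
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key-here
JWT_SECRET=replace-with-a-long-random-string
```

Values like `your-anon-key-here` cannot be mistaken for live credentials by a reviewer, and they do not match the string patterns a secret scanner flags as real keys.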
The third common failure is the AI governance domain, which is new. Most codebases that added LLM features in 2024-2025 did it quickly, calling the OpenAI API directly from route handlers with no token limits, no input length constraints, and no output validation. Supabase centralized its AI calls through a wrapper that enforces all three. This is the right architecture — one place to update when you need stricter controls, one place where governance policy lives.
The combination of all five practices applied consistently across a large team and a large codebase is what separates a 100/100 from an 85/100. Individual brilliant engineers can make individual great decisions. What Supabase has done is make secure patterns the default path — the pattern you fall into when you write a new API route or a new database query or a new AI feature. That is the real achievement.
You do not need Supabase's headcount or funding to achieve this score. The five practices above are all implementable in a weekend: add Zod to your existing API routes, move any hardcoded secrets to environment variables, set up Sentry with structured events, wrap your OpenAI calls in a centralized helper, and run RLS policies in your database. Start with the domain where you scored lowest — and use Custodia to tell you what that is.
Find Out Where You Stand Against Supabase
One command against your own repo. You'll get a domain-by-domain breakdown, a score, and specific findings with fix guidance — all from your terminal.
npx @custodia/cli scan --repo your-org/your-repo