Master Code Review Prompt for Claude Sonnet 4.6 (OWASP + Security)
Why Claude Sonnet 4.6 for Code Reviews?
Claude Sonnet 4.6 (launched February 17, 2026) brings two capabilities that change code review completely: a 1-million-token context window (in beta) and significantly improved code reasoning over previous versions. You can now paste an entire service layer, not just a single file, and get a coherent, cross-file security analysis in return.
This prompt is specifically tuned to exploit those features. It instructs Sonnet 4.6 to track variable state across files, flag data flows that cross trust boundaries, and output a structured report without touching code that already works.
The Prompt (Copy & Paste)
Paste the entire prompt below into Claude Sonnet 4.6, then append your code (or git diff) at the end.
Optimized for Claude Sonnet 4.6
You are a Principal Application Security Engineer conducting a formal code review.
Model: Claude Sonnet 4.6 | Context window: 1M tokens (beta)
Mode: REVIEW-ONLY. Do NOT refactor, do NOT rewrite. Identify and report only.
Your review must cover, in strict order:
─── TIER 1 - CRITICAL: STOP and report if any of these are found ───
1. OWASP A01 Broken Access Control
- Missing authorization checks before data access
- IDOR: direct object references without ownership verification
- Privilege escalation paths
2. OWASP A02 Cryptographic Failures
- Secrets, API keys, or tokens hardcoded or logged
- Weak algorithms: MD5, SHA1 for security purposes
- Sensitive data transmitted in plaintext or stored without encryption
3. OWASP A03 Injection
- SQL, NoSQL, OS command, LDAP injection via unsanitized input
- Template injection (Server-Side Template Injection)
- Prototype pollution in JavaScript objects
─── TIER 2 - MAJOR: Report and suggest fix direction ───
4. Algorithmic Complexity (Big O)
- Nested loops producing O(n²) or worse on potentially large datasets
- Unnecessary repeated database queries inside loops (N+1 problem)
- Missing pagination on list endpoints
5. Error Handling & Resilience
- Unhandled promise rejections / uncaught exceptions in async flows
- Empty catch blocks that silently swallow errors
- Missing timeout handling on external API calls
6. Data Validation
- User-controlled inputs used without type/length/format validation
- Missing boundary checks on numeric inputs
- Trusting client-supplied IDs without server-side verification
─── TIER 3 - MINOR: List briefly, one line each ───
7. Code Quality
- Functions exceeding 40 lines (candidates for extraction)
- Magic numbers/strings without named constants
- Ambiguous variable names (single letters outside loop counters)
─── OUTPUT FORMAT ───
Structure your response EXACTLY as follows:
## 🔴 CRITICAL Issues
[issue number]. **[Location: file:line or function name]**
- Vulnerability: [name]
- Risk: [one sentence - what an attacker can do]
- Fix direction: [one sentence - what to change, no code]
## 🟠 MAJOR Issues
[Same format]
## 🟡 MINOR Issues
[One-liner per issue: location - description]
## ✅ What's Well Done
[2-4 bullet points of genuine positives]
## 📊 Summary
- Critical: X | Major: X | Minor: X
- Highest risk area: [name]
- Estimated fix time: [range]
Rules:
- Do NOT include code rewrites in your output
- Only include diff snippets if explicitly showing a Critical injection fix
- If context window allows, trace data flow across multiple files before reporting
Here is the code to review:
[PASTE YOUR CODE OR GIT DIFF HERE]
Why This Structure Works
Tiered priority forces the model to triage
Without explicit priority tiers, LLMs tend to mix minor style issues with critical injection vulnerabilities in arbitrary order. The TIER 1 / TIER 2 / TIER 3 structure forces Sonnet 4.6 to find and report blockers first. If your code has a hardcoded API key, you'll see it at the top, not buried after 20 naming suggestions.
The 1M context window changes what you can review
Instead of reviewing file by file, you can now paste an entire feature (route handler + service + repository + tests) and get cross-file data-flow analysis:
Multi-file review example
# Paste instructions first, then all files:
--- FILE: routes/user.js ---
[contents]
--- FILE: services/userService.js ---
[contents]
--- FILE: repositories/userRepo.js ---
[contents]
# Claude Sonnet 4.6 will track `userId` from the route through
# the service and into the SQL query, catching injection across files.
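Assembling that multi-file payload by hand gets tedious, so it can be scripted. A minimal sketch, assuming the `--- FILE: ... ---` marker format from the example above (the function names here are illustrative, not from any SDK):

```python
from pathlib import Path

FILE_MARKER = "--- FILE: {name} ---"


def build_review_payload(prompt: str, files: dict[str, str]) -> str:
    """Concatenate the review prompt and source files into one message.

    `files` maps a display path (e.g. "routes/user.js") to its contents.
    Each file is wrapped in a `--- FILE: ... ---` marker so the model can
    attribute findings to the right file in its report.
    """
    parts = [prompt]
    for name, contents in files.items():
        parts.append(FILE_MARKER.format(name=name))
        parts.append(contents)
    return "\n".join(parts)


def load_files(paths: list[str]) -> dict[str, str]:
    """Read each path from disk, keyed by its relative path."""
    return {p: Path(p).read_text(encoding="utf-8") for p in paths}
```

Calling `build_review_payload(prompt, load_files(["routes/user.js", "services/userService.js"]))` yields a single string ready to paste or send.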
Customizing for your stack
Add one context line after "Here is the code to review:" to sharpen the analysis:
Stack context variants
Stack: Node.js 22 + Fastify + Prisma (PostgreSQL). Auth: JWT RS256.
# or
Stack: Python 3.12 + FastAPI + SQLAlchemy 2.0. Auth: OAuth2 + Bearer token.
# or
Stack: Next.js 15 App Router + Supabase RLS. Auth: Supabase Auth.
Frequently Asked Questions
When should I use Sonnet 4.6 vs Opus 4.6 for code reviews?
Use Sonnet 4.6 for regular PR reviews and security sweeps: it's fast, cost-efficient, and excellent at pattern matching across large codebases. Switch to Opus 4.6 when reviewing critical infrastructure (auth systems, payment flows, cryptography implementations) where deeper multi-step reasoning is worth the extra latency and cost.
Can I paste a 5,000-line file?
Yes. With Claude Sonnet 4.6's 1M-token beta context window (~750,000 words), even very large files paste comfortably. That said, review quality plateaus around 2,000-3,000 lines because the model starts prioritizing earlier content. For files above that, split by domain layer (routes / services / data access) and run separate review passes.
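The layer-by-layer split can be automated. A rough sketch, where the prefix-to-layer mapping is an assumption you would adapt to your own repository layout:

```python
from collections import defaultdict

# Hypothetical prefix-to-pass mapping; adjust to match your repo layout.
LAYERS = {
    "routes/": "routes",
    "services/": "services",
    "repositories/": "data-access",
}


def split_by_layer(paths: list[str]) -> dict[str, list[str]]:
    """Group file paths into separate review passes by domain layer.

    Paths matching no known prefix land in a catch-all "misc" pass,
    so nothing silently drops out of the review.
    """
    passes: dict[str, list[str]] = defaultdict(list)
    for path in paths:
        layer = next(
            (name for prefix, name in LAYERS.items() if path.startswith(prefix)),
            "misc",
        )
        passes[layer].append(path)
    return dict(passes)
```

Run one review prompt per returned group, so each pass stays under the quality plateau.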
Does this prompt work on Claude.ai or only via API?
Both. On Claude.ai (Pro plan), Sonnet 4.6 and Opus 4.6 are available in the model selector. Via API, use model claude-sonnet-4-6-20260217. The 1M context window is currently in beta and must be enabled in API settings; it is available by default on Claude.ai Pro.
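For the API route, the request can be assembled with only the standard library. A sketch under stated assumptions: the model ID is the one given in the FAQ above, while the `anthropic-beta` header value is a placeholder guess; check the current Anthropic API docs for the exact 1M-context beta identifier before using it.

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"


def build_request(prompt: str, code: str) -> urllib.request.Request:
    """Assemble a Messages API request for one review run (not yet sent)."""
    payload = {
        "model": "claude-sonnet-4-6-20260217",  # model ID from the FAQ above
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": f"{prompt}\n\n{code}"}],
    }
    headers = {
        "content-type": "application/json",
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",
        # Placeholder beta flag; confirm the real identifier in the API docs.
        "anthropic-beta": "context-1m-2026-02-17",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )
```

Send it with `urllib.request.urlopen(build_request(prompt, code))` once `ANTHROPIC_API_KEY` is set in your environment.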