How to Write System Prompts for Claude: Complete Guide

A practical guide to writing effective system prompts for Claude — structure, length, context injection, persona definition, constraint setting.

A well-written system prompt for Claude has four components: role definition (who Claude is), context injection (what it knows), behavior constraints (what it should and shouldn't do), and output format specification (how it should respond). Most system prompts that produce inconsistent results are missing one of these four components. This guide covers each element with templates and examples.


Why system prompts matter more than user prompts

The system prompt sets the permanent context for every exchange in a session. A user prompt is temporary and task-specific. The system prompt defines Claude's operating environment.

Common mistake: spending hours engineering user prompts while leaving the system prompt as a one-sentence description. The system prompt has more leverage. A 500-word system prompt that correctly defines Claude's context, constraints, and output format will improve every single response in the session.
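In API terms, the system prompt is the `system` parameter sent alongside every request. A minimal sketch of the request shape, assuming the Anthropic Messages API (the model name is illustrative):

```python
# The system prompt rides along with every request via the "system"
# parameter; the user prompt is just one entry in "messages".
SYSTEM_PROMPT = (
    "You are a senior backend engineer with 10 years of Python experience. "
    "Review code for maintainability and production readiness."
)

def build_request(user_message: str) -> dict:
    """Assemble the keyword arguments for a Messages API call."""
    return {
        "model": "claude-sonnet-4-5",  # illustrative model name
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,       # applies to the whole session
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request("Review this function for production readiness.")
```

With the real SDK this dict would be splatted into `client.messages.create(**request)`; the point is that `system` sits outside `messages` and shapes every response, which is where the leverage comes from.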


The four components of an effective system prompt

Component 1: Role definition

Tell Claude who it is in specific terms. Not "you are an AI assistant" — Claude already knows this. Instead: what role, what expertise, what perspective.

Weak:

You are a helpful assistant for our software company.

Strong:

You are a senior backend engineer with 10 years of Python experience, 
specialising in API design and performance optimisation. You review code 
from the perspective of maintainability and production readiness, not just 
functional correctness. You ask clarifying questions when requirements are ambiguous 
rather than guessing.

The stronger version changes Claude's behavior: it will flag maintainability issues, not just bugs; it will ask questions instead of making assumptions; it will evaluate code by production standards.

Component 2: Context injection

Give Claude the information it needs to answer accurately. This is where most prompts are under-specified.

Useful context to inject:

- Who the user is and what they're building
- Their experience level with the relevant tools
- Practical constraints (team size, time, budget)
- What to prioritise when solutions trade off against each other
- Which systems or APIs the product must work with

Example context injection:

The user is building an AI cost monitoring tool for developers using the Anthropic API. 
They have intermediate Python experience. They're building this as a solo developer 
with limited time. Prioritise solutions that are simple to implement over those that 
are architecturally elegant but complex. The product needs to work with Anthropic API 
response objects directly.
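In an application, context like this is usually injected dynamically per user rather than hardcoded. A hypothetical sketch using a plain template (the field names are assumptions for illustration, not an Anthropic convention):

```python
CONTEXT_TEMPLATE = (
    "The user is building {product}. They have {experience} Python experience. "
    "They're working as a {team_size} with limited time. Prioritise solutions "
    "that are simple to implement over those that are architecturally elegant "
    "but complex."
)

def inject_context(product: str, experience: str, team_size: str) -> str:
    """Fill the context block with per-user values before each session."""
    return CONTEXT_TEMPLATE.format(
        product=product, experience=experience, team_size=team_size
    )

context = inject_context(
    product="an AI cost monitoring tool for developers",
    experience="intermediate",
    team_size="solo developer",
)
```

The rendered string then becomes part of the `system` parameter, so each user's session starts with accurate context instead of a generic description.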

Component 3: Behavior constraints

Define what Claude should and shouldn't do. Without constraints, Claude defaults to its general behavior, which may not match your use case.

Useful constraints:

Behavior guidelines:
- Always answer the specific question asked. Don't answer a different question 
  you think is more important.
- If you're uncertain about something, say so explicitly. Don't present 
  uncertain information as fact.
- Keep responses concise unless depth is specifically requested. 
  Default to the shortest complete answer.
- Never suggest libraries or tools outside the user's stated stack 
  unless they specifically ask for alternatives.
- When showing code, always include error handling. Never show code 
  that would break in production.

Common constraint patterns (each line above is an instance of one): scope (answer only the question asked), uncertainty (flag it explicitly rather than guess), length (default to the shortest complete answer), stack (stay inside the user's stated tools), and safety (never show code that would break in production).

Component 4: Output format specification

Tell Claude exactly how to format its output. This is the highest-leverage component for applications — consistent output format enables reliable downstream processing.

For structured output:

Output format:
Always respond with a JSON object with the following structure:
{
  "answer": "the direct answer",
  "confidence": "high|medium|low",
  "sources_needed": true|false,
  "follow_up": "optional follow-up question if clarification would help"
}
Never include text outside the JSON object.
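A format spec like this only pays off if the application verifies it before downstream processing. A minimal validation sketch (the field names match the spec above; the sample response string is invented):

```python
import json

# Required fields and their expected types, per the format spec.
REQUIRED_FIELDS = {"answer": str, "confidence": str, "sources_needed": bool}

def parse_structured_reply(raw: str) -> dict:
    """Parse Claude's reply and check it matches the promised schema."""
    reply = json.loads(raw)  # raises ValueError if text leaked outside the JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(reply.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    if reply["confidence"] not in {"high", "medium", "low"}:
        raise ValueError("confidence must be high|medium|low")
    return reply  # "follow_up" is optional, so it is not checked

# Invented sample response for illustration:
sample = (
    '{"answer": "Use exponential backoff.", '
    '"confidence": "high", "sources_needed": false}'
)
reply = parse_structured_reply(sample)
```

Rejecting malformed replies at this boundary turns an occasional formatting slip into a retryable error rather than silent data corruption downstream.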

For conversational output:

Formatting:
- Use markdown for code blocks (```python, ```sql, etc.)
- Use bullet lists for 3+ items; prose for 2 or fewer
- Use headers (##) for responses longer than 400 words
- Never bold entire sentences — bold only key terms being defined
- Don't open with "I" or with restating the question

Complete system prompt template

# Role
[Who Claude is, with specific expertise and perspective]

# Context
[What Claude knows about the user, product, and domain]
[What NOT to assume]

# Behavior
[What Claude should do]
[What Claude should NOT do]
[How to handle uncertainty]
[How to handle out-of-scope requests]

# Output format
[Format specification]
[Length guidelines]
[Formatting rules]
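The template above can be assembled programmatically so each component stays a named, separately reviewable piece. A sketch (the section headers mirror the template; nothing here is SDK-specific):

```python
def build_system_prompt(
    role: str, context: str, behavior: str, output_format: str
) -> str:
    """Combine the four components into one system prompt string."""
    sections = [
        ("# Role", role),
        ("# Context", context),
        ("# Behavior", behavior),
        ("# Output format", output_format),
    ]
    return "\n\n".join(f"{header}\n{body.strip()}" for header, body in sections)

prompt = build_system_prompt(
    role="You are a senior Python developer and Anthropic API specialist.",
    context="The user is building a production app with the Anthropic Python SDK.",
    behavior="Answer the specific question asked, then stop.",
    output_format="Code in fenced blocks with language tags; concise prose.",
)
```

Keeping the components as separate arguments also makes it easy to swap one (say, the context block) per user without touching the others.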

Full example (for a developer tool):

# Role
You are a senior Python developer and Anthropic API specialist. You help 
developers integrate Claude into their applications. You have deep knowledge 
of the Anthropic SDK, prompt engineering, cost optimisation, and production 
deployment patterns.

# Context
The user is building a production application with the Anthropic Python SDK. 
They have intermediate Python experience. They may ask about any aspect of 
the API: authentication, models, pricing, tools, streaming, error handling, 
or deployment.

Assume the user has read the basic getting-started docs but not the full 
API reference. Don't repeat information they likely already know (how to 
install the library, basic API key setup) unless they ask.

# Behavior
- Answer the specific question asked, then stop. Don't anticipate 
  follow-up questions unprompted.
- For code questions: provide working code first, explanation second.
- For architecture questions: give a clear recommendation first, 
  trade-offs second.
- If a question involves a decision between approaches: recommend one 
  clearly, with brief rationale.
- If uncertain about API behavior (especially recent changes after your 
  training): say so and suggest checking the official docs.
- Never suggest solutions that would work in theory but are fragile in 
  production (no bare except, no hardcoded credentials, no synchronous 
  I/O in async contexts).

# Output format
- Code: always in fenced code blocks with language tag
- Lists: bullet points for 3+ items; prose for 2 or fewer
- Length: match complexity to question — simple questions get short answers
- Don't open with "Great question!" or similar padding
- Don't close with "Let me know if you have questions" — they'll ask

System prompt patterns for specific use cases

Customer support bot

# Role
Customer support agent for [Company]. You help users with [product category].

# Context
User tier: [free|paid|enterprise] — adjust detail level accordingly
Known issue list: [any active bugs or known limitations]

# Behavior
- If the issue is a known bug, acknowledge it and give the ETA if known
- For billing questions: collect account email, then say you're escalating 
  to the billing team (don't try to resolve billing yourself)
- For feature requests: thank them and say you'll log it (don't promise timelines)
- For unclear issues: ask one clarifying question, not three

# Output format
- Short, direct responses (under 150 words unless troubleshooting)
- Numbered steps for multi-step troubleshooting
- Always end with a resolution check: "Does that resolve the issue?"
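The bracketed placeholders in this template are meant to be filled at request time, per user. A hypothetical sketch (the company name, tier values, and known-issue list are all invented for illustration):

```python
SUPPORT_TEMPLATE = (
    "# Role\n"
    "Customer support agent for {company}. You help users with {category}.\n\n"
    "# Context\n"
    "User tier: {tier} -- adjust detail level accordingly\n"
    "Known issue list: {known_issues}\n"
)

def support_prompt(
    company: str, category: str, tier: str, known_issues: list[str]
) -> str:
    """Render the support-bot system prompt for a specific user."""
    issues = "; ".join(known_issues) if known_issues else "none currently"
    return SUPPORT_TEMPLATE.format(
        company=company, category=category, tier=tier, known_issues=issues
    )

prompt = support_prompt(
    company="Acme",
    category="billing dashboards",
    tier="paid",
    known_issues=["CSV export delayed for EU accounts"],
)
```

Rendering the tier and known-issue list per request keeps the bot's context current without editing the template itself.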

Code review agent

# Role
Senior engineer conducting code review. Focus on: correctness, security, 
performance, maintainability. In that priority order.

# Context
Stack: [language/framework]
Review standard: [PR review | security audit | educational feedback]

# Behavior
- Lead with the most critical issue, then secondary concerns
- For security issues: flag severity (critical/high/medium/low)
- For style issues: note them but don't block on them
- Be direct about problems — don't soften serious issues
- Suggest specific fixes, not just problems

# Output format
## Critical Issues (must fix)
[issues]

## Suggestions (should fix)
[issues]

## Nitpicks (optional)
[issues]

Common system prompt mistakes

Too short: "You are a helpful assistant for our app." — No context, no constraints, no format. Claude will make its best guess, and that guess varies from session to session.

Contradictory instructions: "Be concise. Provide comprehensive answers." — Claude will interpolate, producing inconsistent length.

Vague constraints: "Be professional." — Means different things in different contexts. "Use formal business language, no contractions, no humour" is actionable.

Format without examples: "Output JSON" — Which fields? What types? A schema or example is essential for structured output.

No uncertainty handling: Not specifying what to do when Claude doesn't know something leads to confident-sounding hallucinations. Always add: "If unsure, say so explicitly."


Frequently asked questions

How long should a system prompt be? As long as it needs to be. 200–1,000 words covers most production use cases. With prompt caching, the marginal cost of a longer system prompt approaches zero. Length is not a meaningful cost concern. Precision is.
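The prompt-caching point can be made concrete: the Anthropic API accepts `system` as a list of content blocks, and marking a block with `cache_control` lets the API cache that prefix across requests. A sketch of the request shape (the model name is illustrative, and the repeated string stands in for a long real prompt):

```python
# Stand-in for a ~1,000-word system prompt.
LONG_SYSTEM_PROMPT = "You are a senior Python developer and API specialist. " * 50

# Passing "system" as content blocks with a cache_control marker lets the
# API cache the prompt prefix, so its per-request cost drops sharply
# after the first call.
request = {
    "model": "claude-sonnet-4-5",  # illustrative model name
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "How do I handle rate limits?"}],
}
```

This is why length is a precision question rather than a cost question: the cached prefix is billed at a steep discount on subsequent requests.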

Should I put formatting instructions in the system prompt or user prompt? System prompt. Formatting instructions apply to all responses, not just one. Putting them in the system prompt ensures consistency across every interaction.

Can I update the system prompt between turns in a conversation? Yes. The system prompt is set per request via the system parameter, and the Messages API is stateless — each request carries the full message history plus its own system value. So the next request can send an updated system prompt (e.g., after a user logs in) alongside the existing history, without starting a new conversation.

Does Claude follow system prompts perfectly? It follows clear, specific instructions very reliably. Vague instructions ("be helpful") leave room for interpretation, and contradictory instructions — within the system prompt, or between system and user prompts — occasionally produce inconsistent behavior. The more specific and non-contradictory the instructions, the more reliable the output.

What's the difference between the system prompt and the first user message? The system prompt establishes the operating context. The first user message is the first actual exchange. For context that should persist across the entire conversation (role, constraints, format), use the system prompt. For task-specific context, use the user message.



Take It Further

Power Prompts 300: Claude Code Productivity Patterns — Section 2 covers 45 system prompt templates for developer tools, customer support, content generation, data analysis, and agent workflows — all tested in production.

→ Get Power Prompts 300 — $29

30-day money-back guarantee. Instant download.

AI Disclosure: Drafted with Claude Code; all patterns from production Claude API usage as of April 2026.
