Claude API Email Generation Templates
To generate personalized emails with the Claude API, pass your recipient data as variables into a structured system prompt that defines tone, intent, and constraints, then render each message in the user turn. Claude returns ready-to-send copy — no post-processing required. For cold outreach, use claude-haiku-4-5 at roughly $0.0008 per email; for high-value sequences where tone precision matters, switch to claude-sonnet-4-6 at about $0.0030. A single Batch API job can generate 1,000 personalized emails overnight at half the synchronous cost.
Prompt Patterns for Cold, Warm, and Transactional Emails
Cold Email
Cold email prompts need a firm word-count ceiling, a clear value hook, and an explicit instruction to avoid spam triggers (more on that in Deliverability Considerations).
```python
import anthropic

client = anthropic.Anthropic()

COLD_EMAIL_SYSTEM = """You are an expert B2B copywriter.
Write a cold outreach email following these rules exactly:
- Subject line: max 50 characters, no all-caps, no exclamation marks
- Body: 80-120 words, plain conversational tone
- One specific pain point tied to the recipient's industry
- One concrete offer or question as the call to action
- No generic openers ("I hope this finds you well", "My name is...")
- No spam trigger words: free, guaranteed, act now, limited time, winner
Return format:
Subject: [subject line]
[email body]"""

def generate_cold_email(recipient: dict) -> str:
    prompt = (
        f"Recipient: {recipient['name']}, {recipient['title']} at {recipient['company']}.\n"
        f"Industry: {recipient['industry']}.\n"
        f"Our product: {recipient['our_product']}.\n"
        f"Known pain point: {recipient['pain_point']}."
    )
    response = client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=400,
        system=COLD_EMAIL_SYSTEM,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```
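Before spending tokens, it can help to eyeball the rendered user turn on its own. A small sketch (`build_cold_prompt` is a hypothetical helper that mirrors the f-string above, with invented sample data):

```python
def build_cold_prompt(recipient: dict) -> str:
    # Mirrors the prompt construction in generate_cold_email so you can
    # inspect the rendered text before making a paid API call.
    return (
        f"Recipient: {recipient['name']}, {recipient['title']} at {recipient['company']}.\n"
        f"Industry: {recipient['industry']}.\n"
        f"Our product: {recipient['our_product']}.\n"
        f"Known pain point: {recipient['pain_point']}."
    )

sample = {
    "name": "Dana Reyes",
    "title": "VP Operations",
    "company": "Northwind Logistics",
    "industry": "Freight",
    "our_product": "route-optimization SaaS",
    "pain_point": "manual dispatch planning",
}
print(build_cold_prompt(sample))
```

Rendering the prompt separately also makes it trivial to unit-test your variable interpolation against CRM exports.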
Warm Email (Existing Lead)
Warm emails can reference past interactions. Include a last_touchpoint variable so Claude anchors the opening to something real.
```python
WARM_EMAIL_SYSTEM = """You are a helpful sales advisor writing a follow-up email.
Rules:
- Reference the last touchpoint naturally in the first sentence
- Body: 60-90 words
- Offer one concrete next step (demo, resource, or question)
- Tone: friendly but professional, no pressure language
Return Subject + body only."""

def generate_warm_email(lead: dict) -> str:
    prompt = (
        f"Name: {lead['name']}. Last touchpoint: {lead['last_touchpoint']}.\n"
        f"Their interest: {lead['interest']}. Next step to offer: {lead['next_step']}."
    )
    response = client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=350,
        system=WARM_EMAIL_SYSTEM,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```
Transactional Email
Transactional emails (order confirmations, password resets, receipts) are low-creativity, high-accuracy tasks — ideal for Haiku with a strict JSON output contract.
```python
import json

TRANSACTIONAL_SYSTEM = """You write concise transactional emails.
Return a JSON object: {"subject": "...", "body": "..."}
Rules:
- Subject: factual, max 45 chars
- Body: under 60 words, no marketing language
- Include all provided data fields verbatim (order ID, amounts, dates)
- Plain text only — no markdown, no HTML"""

def generate_order_confirmation(order: dict) -> dict:
    prompt = json.dumps(order)
    response = client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=300,
        system=TRANSACTIONAL_SYSTEM,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.content[0].text)
```
Personalization Variables
Effective personalization goes beyond {first_name}. The variables below consistently lift open and reply rates:
| Variable | Example value | Impact |
|---|---|---|
| `company` | Acme Corp | Signals research |
| `title` | Head of Engineering | Role-specific framing |
| `industry` | FinTech | Pain-point relevance |
| `pain_point` | Manual compliance reporting | Hooks attention |
| `last_touchpoint` | Downloaded our SOC 2 guide | Continuity signal |
| `next_step` | 15-min demo on Friday | Reduces friction |
| `company_size` | 120 employees | Right-sizes the pitch |
Pass all available variables into the prompt. Claude uses them selectively — you do not need to engineer which ones appear in the output. For data-sparse recipients (a missing `pain_point`, for example), add a fallback instruction: "If a field is empty, omit that element rather than inventing a placeholder."
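You can also enforce that sparseness at the data layer by rendering only populated fields, so the fallback instruction has less to clean up. `build_personalized_prompt` below is a hypothetical helper, not part of any SDK:

```python
def build_personalized_prompt(recipient: dict) -> str:
    """Render only the fields that are actually populated,
    skipping keys that are missing or empty."""
    labels = {
        "name": "Recipient",
        "title": "Title",
        "company": "Company",
        "industry": "Industry",
        "pain_point": "Known pain point",
        "last_touchpoint": "Last touchpoint",
        "company_size": "Company size",
    }
    lines = [
        f"{label}: {recipient[key]}."
        for key, label in labels.items()
        if recipient.get(key)  # drop missing or empty values
    ]
    return "\n".join(lines)
```

A recipient with no `pain_point` then simply produces a shorter prompt, and the model never sees an empty field to hallucinate around.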
A/B Testing Email Variants
Generate two variants per recipient in a single call by asking for a JSON array. Run the split in your CRM or email tool.
```python
import json

AB_SYSTEM = """You are a conversion copywriter.
Generate TWO cold email variants for A/B testing.
Variant A: leads with a pain-point question.
Variant B: leads with a bold proof statement (stat or outcome).
Return JSON: [{"variant": "A", "subject": "...", "body": "..."}, {"variant": "B", ...}]
Each body: 80-110 words. Same recipient data, different angle."""

def generate_ab_variants(recipient: dict) -> list[dict]:
    response = client.messages.create(
        model="claude-sonnet-4-6",  # Sonnet for sharper variant differentiation
        max_tokens=700,
        system=AB_SYSTEM,
        messages=[{"role": "user", "content": json.dumps(recipient)}],
    )
    return json.loads(response.content[0].text)
```
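One caveat: `json.loads` raises if the model ever wraps its reply in a markdown code fence despite the "Return JSON" instruction, which happens occasionally. A defensive parse helper (hypothetical, and only lightly hardened) is cheap insurance:

```python
import json
import re

def parse_json_reply(text: str):
    """Strip a markdown code fence if the model added one,
    then parse the JSON payload."""
    cleaned = text.strip()
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", cleaned, re.DOTALL)
    if fence:
        cleaned = fence.group(1)
    return json.loads(cleaned)
```

Swap it in for the bare `json.loads` calls if you see intermittent parse failures in production logs.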
Use Sonnet for A/B variant generation — the quality difference between Haiku and Sonnet is most visible when you need two meaningfully distinct angles rather than surface-level rewrites. See Claude Haiku vs Sonnet vs Opus — Which Model for the full decision framework.
Batch Generation with Batch API
The Batch API runs asynchronous jobs at a 50% discount off synchronous pricing. For 1,000-email campaigns, this cuts cost from $0.80 (Haiku sync) to $0.40 per batch run.
```python
import anthropic
import json
import time

client = anthropic.Anthropic()

def build_batch_requests(recipients: list[dict]) -> list[dict]:
    """Build a Batch API request list from a recipient list."""
    requests = []
    for i, r in enumerate(recipients):
        prompt = (
            f"Recipient: {r['name']}, {r['title']} at {r['company']}.\n"
            f"Industry: {r['industry']}. Pain point: {r['pain_point']}."
        )
        requests.append({
            # custom_id allows only letters, digits, hyphens, and underscores,
            # so use the list index rather than embedding the raw email address;
            # map the index back to the recipient via the input list.
            "custom_id": f"email_{i}",
            "params": {
                "model": "claude-haiku-4-5",
                "max_tokens": 400,
                "system": COLD_EMAIL_SYSTEM,
                "messages": [{"role": "user", "content": prompt}],
            },
        })
    return requests

def submit_email_batch(recipients: list[dict]) -> str:
    """Submit a batch job and return the batch ID."""
    requests = build_batch_requests(recipients)
    batch = client.messages.batches.create(requests=requests)
    print(f"Batch submitted: {batch.id} — {len(requests)} emails queued")
    return batch.id

def collect_batch_results(batch_id: str) -> dict[str, str | None]:
    """Poll until complete, then return {custom_id: email_text}."""
    while True:
        batch = client.messages.batches.retrieve(batch_id)
        if batch.processing_status == "ended":
            break
        print(f"Status: {batch.processing_status} — waiting 30s")
        time.sleep(30)
    results = {}
    for result in client.messages.batches.results(batch_id):
        if result.result.type == "succeeded":
            results[result.custom_id] = result.result.message.content[0].text
        else:
            results[result.custom_id] = None  # log failures separately
    return results

# --- Usage ---
# recipients = load_from_crm(limit=1000)
# batch_id = submit_email_batch(recipients)
# emails = collect_batch_results(batch_id)
```
For concurrent synchronous sends (when you need results in under a minute), see Claude API Concurrent Requests for an asyncio-based approach.
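As a minimal sketch of that pattern, a semaphore-bounded `asyncio.gather` keeps bursts under your rate limit. `gather_bounded` is a hypothetical scaffold, and the commented usage assumes the SDK's `AsyncAnthropic` client plus the `COLD_EMAIL_SYSTEM` prompt from earlier:

```python
import asyncio
from typing import Awaitable, Callable

async def gather_bounded(factories: list[Callable[[], Awaitable]],
                         limit: int = 10) -> list:
    """Run coroutine factories with at most `limit` in flight at once,
    preserving input order in the results."""
    sem = asyncio.Semaphore(limit)

    async def run(factory):
        async with sem:
            return await factory()

    return await asyncio.gather(*(run(f) for f in factories))

# Plugging in the API (untested sketch, names assumed from earlier sections):
#
# aclient = anthropic.AsyncAnthropic()
# factories = [
#     lambda r=r: aclient.messages.create(
#         model="claude-haiku-4-5", max_tokens=400,
#         system=COLD_EMAIL_SYSTEM,
#         messages=[{"role": "user", "content": f"Recipient: {r['name']}"}],
#     )
#     for r in recipients
# ]
# responses = asyncio.run(gather_bounded(factories, limit=20))
```

Tune `limit` to your tier's requests-per-minute ceiling rather than firing the whole list at once.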
Agent SDK Cookbook — Email Automation Recipes
The cookbook includes complete email agent blueprints: multi-step outreach sequences, reply-detection loops, CRM sync patterns, and Haiku/Sonnet routing logic for cost-optimized campaign generation. All recipes ship with prompt caching enabled and Batch API integration out of the box.
→ Get the Agent SDK Cookbook — $49
Instant download. 30-day money-back guarantee.
Deliverability Considerations: Avoiding Spammy Patterns
AI-generated email fails deliverability for two distinct reasons: content triggers (spam-filter keywords) and structural red flags (identical copy sent to many recipients). Address both in the prompt and in your sending infrastructure.
Prompt-level controls:
```python
DELIVERABILITY_RULES = """
Banned words and phrases (never use):
- free, guaranteed, no risk, act now, limited time offer, click here
- winner, congratulations, you have been selected
- $$, 100%, earn money, make money fast
- Excessive punctuation: !!!, ???
Structural rules:
- Vary sentence length — avoid uniform 15-word sentences throughout
- No ALL CAPS words except established acronyms (API, SaaS, CRM)
- Include one genuine question to invite a reply
- Sign off with a real name and role, not a generic "The Team"
"""
```
Infrastructure controls (outside Claude):
- Send from a warmed domain (at least 30 days old with gradual volume ramp)
- Limit daily send volume to under 200 per domain until reputation is established
- Use a unique `custom_id` per recipient and deduplicate before sending
- Add a plain-text MIME part alongside HTML — spam filters penalize HTML-only emails
- Honor unsubscribe immediately; a bounce rate above 2% triggers ISP throttling
Even with perfect copy, high send volume from a cold domain will land in spam. The Claude prompt controls content quality; your ESP configuration controls inbox placement.
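For the plain-text MIME rule above, Python's standard library can build the multipart structure directly. A minimal sketch with invented addresses:

```python
from email.message import EmailMessage

def build_mime(subject: str, text_body: str, html_body: str,
               sender: str, to: str) -> EmailMessage:
    """Plain-text part plus an HTML alternative, yielding the
    multipart/alternative structure spam filters expect."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = to
    msg.set_content(text_body)                      # text/plain part
    msg.add_alternative(html_body, subtype="html")  # text/html part
    return msg
```

Hand the resulting message to your ESP or `smtplib` as usual; the text part is what filters read first.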
Template Versioning Pattern
As your prompts evolve, version them explicitly. This lets you A/B test prompt versions across campaigns and roll back if quality drops.
```python
from dataclasses import dataclass

@dataclass
class EmailTemplate:
    version: str
    model: str
    system: str
    max_tokens: int
    created: str

TEMPLATES: dict[str, EmailTemplate] = {
    "cold_v1": EmailTemplate(
        version="cold_v1",
        model="claude-haiku-4-5",
        system=COLD_EMAIL_SYSTEM,
        max_tokens=400,
        created="2026-04-01",
    ),
    "cold_v2": EmailTemplate(
        version="cold_v2",
        model="claude-haiku-4-5",
        system=COLD_EMAIL_SYSTEM + "\n- Open with a specific industry stat if possible.",
        max_tokens=450,
        created="2026-04-30",
    ),
}

def generate_with_template(template_key: str, recipient: dict) -> dict:
    tpl = TEMPLATES[template_key]
    prompt = (
        f"Recipient: {recipient['name']}, {recipient['title']} at {recipient['company']}.\n"
        f"Industry: {recipient['industry']}. Pain point: {recipient['pain_point']}."
    )
    response = client.messages.create(
        model=tpl.model,
        max_tokens=tpl.max_tokens,
        system=tpl.system,
        messages=[{"role": "user", "content": prompt}],
    )
    return {
        "template_version": tpl.version,
        "model": tpl.model,
        "email": response.content[0].text,
        "input_tokens": response.usage.input_tokens,
        "output_tokens": response.usage.output_tokens,
    }
```
Log template_version and token counts per send. This data feeds directly into the break-even analysis below.
Cost Analysis: Per-Email Cost and Break-Even
Per-email cost (April 2026 list pricing)
Typical cold email: ~250 input tokens (system + recipient data) + ~150 output tokens.
| Model | Input cost | Output cost | Per-email total |
|---|---|---|---|
| Haiku | 250 × $0.80/M = $0.000200 | 150 × $4.00/M = $0.000600 | ~$0.0008 |
| Sonnet | 250 × $3.00/M = $0.000750 | 150 × $15.00/M = $0.002250 | ~$0.0030 |
| Haiku + Batch API (−50%) | — | — | ~$0.0004 |
| Sonnet + Batch API (−50%) | — | — | ~$0.0015 |
For a 1,000-email campaign: Haiku sync ≈ $0.80, Haiku batch ≈ $0.40, Sonnet sync ≈ $3.00.
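The table's arithmetic is easy to wrap in a small helper for campaign planning. `per_email_cost` is a hypothetical function that simply applies the list prices above:

```python
def per_email_cost(input_tokens: int, output_tokens: int,
                   input_per_m: float, output_per_m: float,
                   batch: bool = False) -> float:
    """Per-email cost in dollars; the Batch API discount halves
    both rates for asynchronous jobs."""
    cost = (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000
    return cost * (0.5 if batch else 1.0)

# Haiku list pricing from the table: $0.80/M input, $4.00/M output
haiku_sync = per_email_cost(250, 150, 0.80, 4.00)               # $0.0008
haiku_batch = per_email_cost(250, 150, 0.80, 4.00, batch=True)  # $0.0004
```

Multiply by campaign size to budget a run before submitting it.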
Break-even vs. human copywriter
A mid-level freelance copywriter charges $5–$15 per personalized cold email. At ~$0.0008 per email with Haiku, there is no break-even volume to reach — Claude is cheaper from the very first email. The real comparison is Claude + human review vs. fully human:
| Volume | Human only ($8/email) | Claude Haiku + human QA (20 min per 100 emails at $35/hr) |
|---|---|---|
| 100 emails | $800 | $0.08 + $11.67 = $11.75 |
| 1,000 emails | $8,000 | $0.80 + $116.67 = $117.47 |
| 10,000 emails | $80,000 | $8.00 + $1,166.67 = $1,174.67 |
For prompt caching strategies that reduce input token costs by up to 90% on repeated system prompts, see Claude API Cost & Prompt Caching Break-Even.
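Per Anthropic's prompt-caching docs, the `system` parameter accepts a list of content blocks, and a `cache_control` marker makes a block cacheable across calls. A sketch that builds cacheable request kwargs (`cached_request` is a hypothetical builder, not an SDK function):

```python
def cached_request(system_text: str, user_prompt: str) -> dict:
    """Build kwargs for client.messages.create with the system prompt
    marked cacheable; repeat calls within the cache TTL pay the reduced
    cached-input rate on those tokens."""
    return {
        "model": "claude-haiku-4-5",
        "max_tokens": 400,
        "system": [
            {"type": "text", "text": system_text,
             "cache_control": {"type": "ephemeral"}},
        ],
        "messages": [{"role": "user", "content": user_prompt}],
    }

# response = client.messages.create(**cached_request(COLD_EMAIL_SYSTEM, prompt))
```

Note that caching only pays off when the system prompt clears the model's minimum cacheable length, so it suits long rule-heavy prompts like the deliverability-augmented templates.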
Frequently Asked Questions
How do I generate personalized emails at scale without hitting rate limits?
Use the Batch API for non-time-sensitive campaigns (results available within minutes to hours). For real-time personalization at scale, run concurrent async requests with asyncio — see Claude API Concurrent Requests. Haiku's rate limits are significantly higher than Sonnet's, making it the right choice for bulk sends.
Will AI-generated emails get flagged as spam?
Claude-generated copy avoids spam triggers when you include explicit banned-word rules in the system prompt (see the Deliverability section above). The bigger deliverability risk is not content quality but sending infrastructure: domain age, DKIM/DMARC setup, and gradual volume ramp. Spam filters evaluate the envelope (sender reputation) before they evaluate the body. Clean copy from a cold domain will still land in spam.
What is the best model for cold email — Haiku or Sonnet?
Haiku is the right default for cold email at scale. The quality gap between Haiku and Sonnet is small for structured short-copy tasks like 100-word emails, and Haiku is ~3.75x cheaper per email. Use Sonnet when you need sharper A/B variant differentiation, nuanced executive-level tone, or when you are writing fewer than 100 emails and cost is not a constraint. See Claude Haiku vs Sonnet vs Opus — Which Model.
Can Claude generate full multi-step email sequences?
Yes. Structure the sequence as a loop: pass the previous email's text as context in the next call so Claude maintains tone and avoids repeating the same hooks. For a 5-email drip sequence, generate all five in a single session with a rolling context window rather than five independent calls — this produces more coherent sequences and reduces redundancy.
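The rolling-context loop can be sketched as follows. `generate_sequence` is a hypothetical helper that takes the client and system prompt as parameters; the follow-up instruction wording is an assumption, not a fixed recipe:

```python
def generate_sequence(client, system: str, recipient_prompt: str,
                      steps: int = 5) -> list[str]:
    """Generate a drip sequence in one rolling conversation so each
    email can build on, and avoid repeating, the ones before it."""
    messages = [{"role": "user",
                 "content": recipient_prompt + "\nWrite email 1 of the sequence."}]
    emails: list[str] = []
    for i in range(steps):
        response = client.messages.create(
            model="claude-haiku-4-5",
            max_tokens=400,
            system=system,
            messages=messages,
        )
        email = response.content[0].text
        emails.append(email)
        # Feed the draft back as assistant context, then ask for the next step.
        messages.append({"role": "assistant", "content": email})
        messages.append({"role": "user",
                         "content": f"Write email {i + 2}, escalating gently "
                                    "without repeating earlier hooks."})
    return emails
```

Because the whole sequence shares one conversation, input tokens grow with each step; prompt caching on the system turn keeps the overhead modest.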
How do I version and roll back email prompt templates?
Use the EmailTemplate dataclass pattern above. Store templates in a config file or database keyed by version string. Log the template_version and token counts for every send. When a campaign underperforms, compare reply rates by version to identify which prompt change caused the regression. Never overwrite a live template in place — always create a new version key.
Agent SDK Cookbook — Full Email Automation Agent
Go beyond single-email generation: the cookbook's email chapter covers a full agentic pipeline — CRM data ingestion, multi-step sequence generation, reply detection, and automated follow-up scheduling. Built on the Anthropic Agent SDK with Batch API integration, prompt caching, and template versioning baked in.
→ Get the Agent SDK Cookbook — $49
Instant download. 30-day money-back guarantee.
Sources
- Anthropic — Claude model pricing — April 2026
- Anthropic — Message Batches API — April 2026
- Anthropic — Rate limits — April 2026