How Anthropic Employees Use Claude (Patterns Worth Stealing)
Across publicly shared Anthropic posts, podcasts, and engineer interviews, ten patterns appear repeatedly: writing CLAUDE.md as the source of truth, using plan mode for anything multi-file, deferring to subagents for parallel research, encoding repetitive workflows as skills, treating Claude as a code reviewer (not just a writer), aggressive prompt caching, structured output for data tasks, scheduled tasks for monitoring, the "ask Claude to code-review your prompt" loop, and the discipline of small, well-scoped tasks. None of these are secret techniques — they're public-facing best practices, but they appear so consistently across Anthropic engineering content that they're clearly the dominant internal style.
Sources for this article
This article is compiled exclusively from public sources. No internal Anthropic information is included. Public sources include:
- The Anthropic engineering blog
- Public talks at conferences (LangChain, AI Engineer Summit, etc.)
- Podcast appearances (Lenny's Newsletter, Latent Space, No Priors)
- Engineer interviews (TechCrunch, The Information, Wired)
- Anthropic Cookbook and example repositories on GitHub
- Posts on the Anthropic Discord that engineers have explicitly made public
Where a specific person is referenced, they're someone who has spoken publicly under their own name about their workflows.
Pattern 1: CLAUDE.md as the project's source of truth
The single most consistent pattern across Anthropic-shared workflows is that every active codebase has a well-maintained CLAUDE.md file. It's not an afterthought — it's the first file new contributors read.
What's in a typical Anthropic CLAUDE.md (based on public templates and shared examples):
- Project context: what does this codebase do, in 3–5 sentences
- Development setup: install commands, environment variables, common gotchas
- Code style: framework conventions, naming patterns, testing approach
- Workflows: how to run tests, deploy, debug common issues
- Don't-do list: anti-patterns specific to this project
- File map: where the important things live
Why this matters: When Claude reads CLAUDE.md before any task, it inherits the project's context for free. Without CLAUDE.md, Claude infers context from file structure — which is fine for small tasks but degrades on multi-file work.
Pattern to steal: maintain CLAUDE.md as actively as you maintain your README. Update it when conventions change. Treat an outdated CLAUDE.md as a bug.
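A minimal skeleton along those lines. The project specifics below are placeholders for illustration, not from any real Anthropic template:

```markdown
# CLAUDE.md

## Project context
Payments service: ingests webhook events, reconciles ledger entries,
exposes a read API. (Keep this to 3–5 sentences.)

## Development setup
- `make install` then copy `.env.example` to `.env`
- Gotcha: tests require a local Postgres on port 5433

## Code style
- FastAPI routers per domain; snake_case modules; pytest with fixtures

## Workflows
- Tests: `make test` · Deploy: `make deploy ENV=staging`

## Don't-do list
- Never write to the ledger table outside `ledger/writer.py`

## File map
- `api/` routes · `ledger/` core domain logic · `tests/` mirrors `api/`
```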
Pattern 2: Plan mode for anything that touches >2 files
In multiple talks, Anthropic engineers have described plan mode as a "default on" for non-trivial work. The pattern:
- Frame the task as a goal, not a step list
- Engage plan mode (Shift+Tab twice in Claude Code)
- Let Claude propose a plan with file-by-file changes
- Edit the plan, especially for tradeoffs
- Approve the plan to execute
Why this works: plan mode separates "what to do" from "how to do it". The "what" benefits from human judgment; the "how" can be delegated to Claude.
The pattern is so prevalent inside Anthropic that engineers describe their workflow as "Claude makes the plan, I review the plan, Claude executes the plan, I review the diff" — four distinct review touchpoints, each fast.
Pattern to steal: don't skip plan mode just because the task seems small. The cost is 30 seconds; the catch rate on subtle issues is high.
Pattern 3: Subagents for parallel research
Anthropic engineers consistently delegate independent research tasks to subagents in parallel. The pattern:
Use the Agent tool to launch three subagents in parallel:
1. Find all references to AuthMiddleware
2. Find all places where session tokens are read
3. Find all tests that exercise the auth flow
Return concise reports from each.
This compresses what would be 3–5 sequential searches into a single round-trip. The orchestrator then synthesises.
Why this matters: parallel context isolation prevents one search from polluting another's context. Each subagent gets a clean ~200K token budget for its task.
Pattern to steal: when you'd otherwise run three or four searches sequentially, ask Claude to spawn parallel subagents instead. The pattern is described in Anthropic's building-effective-agents post.
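The fan-out shape generalises beyond Claude Code. A minimal sketch in plain Python, with stub coroutines standing in for the real subagent searches (in Claude Code itself, the Agent tool does this dispatch):

```python
import asyncio

# Stand-in "subagents": each coroutine simulates one isolated research task.
async def find_references(symbol: str) -> str:
    return f"report: references to {symbol}"

async def find_token_reads() -> str:
    return "report: session token read sites"

async def find_auth_tests() -> str:
    return "report: tests exercising the auth flow"

async def orchestrate() -> list:
    # Launch all three in parallel; none sees the others' context.
    reports = await asyncio.gather(
        find_references("AuthMiddleware"),
        find_token_reads(),
        find_auth_tests(),
    )
    # The orchestrator synthesises the concise reports afterwards.
    return list(reports)

reports = asyncio.run(orchestrate())
print(len(reports))  # → 3
```

The point of the structure is the single round-trip: three searches start together, and only their short reports return to the orchestrator's context.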
Pattern 4: Skills for repetitive workflows
The skill system in Claude Code lets you encode a complete workflow — instructions, examples, tools — as a reusable artifact. Anthropic engineers use this aggressively for any workflow they run more than five times.
Examples of skill patterns Anthropic has publicly shared:
- A skill for "create a new internal RFC document" with template + commit conventions
- A skill for "run the SDK release process" with version bumping, changelog, and publish steps
- A skill for "investigate a regression" with reproduction, bisect, and root cause workflow
The skills don't replace Claude's reasoning. They constrain it: this is the procedure we follow, here are the templates, here are the validation steps.
Pattern to steal: maintain a ~/.claude/skills/ directory. Every workflow you do more than five times in a quarter becomes a skill.
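A sketch of what one of those skill files might look like, assuming the SKILL.md-with-frontmatter layout from the Claude Code docs; the workflow content is illustrative:

```markdown
---
name: investigate-regression
description: Reproduce, bisect, and root-cause a reported regression.
---

# Investigate a regression

1. Reproduce the failure with a minimal command; record it.
2. `git bisect` between the last known-good tag and HEAD.
3. Summarise the root cause and link the offending commit.
4. Validate: the reproduction command passes on a revert.
```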
Pattern 5: Claude as code reviewer, not just writer
A consistent meta-pattern across Anthropic engineering content: don't only ask Claude to write code; ask Claude to review code, including code that Claude wrote.
Common review patterns:
- "Review this diff for security issues"
- "Review this change for performance regressions"
- "Review this PR for test coverage gaps"
- "Review this code for adherence to the patterns in CLAUDE.md"
The interesting case is asking Claude to review its own output. This catches a meaningful fraction of issues that single-pass generation misses, especially around edge cases and error handling.
Pattern to steal: every implementation task ends with a review pass. The review is run as a fresh subagent or fresh prompt, not in the same context as the implementation.
Pattern 6: Aggressive prompt caching for stable system prompts
Anthropic engineers building agent systems consistently emphasise prompt caching. The pattern:
- Identify the parts of your prompt that don't change between calls (system prompt, role instructions, tool definitions, few-shot examples)
- Mark them as cached using cache_control: { "type": "ephemeral" }
- Pay 1.25x the input price on the first call to populate the cache
- Pay 0.1x the input price on subsequent cache hits
For agent workloads with stable 50K+ token system prompts, this collapses inference costs by 60–90%.
Pattern to steal: for every production agent, audit your system prompt for stable vs dynamic parts. Cache aggressively. The cache TTL is 5 minutes by default; for chat-driven agents, that's plenty for sub-conversation reuse.
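The arithmetic behind that 60–90% claim can be sanity-checked locally. A sketch assuming the Messages API's system-blocks shape for cache_control; the model name and token counts are illustrative, and the payload is just built as a dict rather than sent anywhere:

```python
# Stable prefix: system prompt, tool definitions, few-shot examples.
STABLE_SYSTEM = "You are a release assistant. <tool defs, examples...>"

payload = {
    "model": "claude-sonnet-4-5",  # illustrative model name
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": STABLE_SYSTEM,
            # Everything up to and including this block is cached.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Bump the SDK to 1.4.2"}],
}

def cached_cost(stable_tokens: int, calls: int) -> float:
    """Relative input cost with caching: 1.25x to write, 0.1x per hit."""
    return stable_tokens * (1.25 + 0.1 * (calls - 1))

def uncached_cost(stable_tokens: int, calls: int) -> float:
    return float(stable_tokens * calls)

# A stable 50K-token prompt reused across 20 calls within the TTL:
saving = 1 - cached_cost(50_000, 20) / uncached_cost(50_000, 20)
print(f"{saving:.0%}")  # → 84%
```

The saving grows with the number of cache hits per TTL window, which is why the pattern pays off most for chatty agent loops over a large stable prefix.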
Pattern 7: Structured output for any data task
When Claude is generating data (not prose), Anthropic engineers use the structured output mode aggressively. The pattern:
- Define a strict JSON schema for the output
- Set response_format: { type: "json_schema", schema: ... }
- Validate the output against the schema before consuming
- On schema failure, retry once with the validation error in context
Why this matters: structured output cuts post-processing code by ~80% and turns "Claude sometimes returns malformed JSON" from a real problem into a non-issue.
Pattern to steal: any time you're parsing Claude's output, use structured output. The boilerplate is small. The reliability gain is large.
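The validate-then-retry-once loop needs no schema library for simple cases. In this sketch, call_model is a stand-in stub (it deliberately returns a malformed response on the first attempt), and the required-fields check is a minimal substitute for full JSON Schema validation:

```python
import json

SCHEMA_REQUIRED = {"invoice_id": str, "total": float}  # illustrative schema

def validate(raw: str):
    """Return (parsed, None) on success or (None, error_message) on failure."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return None, f"invalid JSON: {e}"
    for key, typ in SCHEMA_REQUIRED.items():
        if not isinstance(data.get(key), typ):
            return None, f"field {key!r} missing or not {typ.__name__}"
    return data, None

def call_model(prompt: str) -> str:
    # Stand-in for the real API call; fails validation on the first attempt.
    call_model.attempts += 1
    if call_model.attempts == 1:
        return '{"invoice_id": "INV-7"}'  # missing "total"
    return '{"invoice_id": "INV-7", "total": 41.5}'
call_model.attempts = 0

def extract(prompt: str) -> dict:
    raw = call_model(prompt)
    data, err = validate(raw)
    if err:  # retry once, feeding the validation error back into context
        raw = call_model(f"{prompt}\n\nPrevious output failed validation: {err}")
        data, err = validate(raw)
    if err:
        raise ValueError(err)
    return data

result = extract("Extract invoice fields as JSON.")
print(result["total"])  # → 41.5
```

The single retry with the validation error in context is the part people skip; it converts most residual malformed outputs into successes without an open-ended loop.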
Pattern 8: Scheduled tasks for monitoring and recurring work
Claude Code supports scheduled tasks (via the mcp__scheduled-tasks__create_scheduled_task tool family). Anthropic engineers use this for:
- Weekly content quality audits
- Daily monitoring of key metrics with anomaly callouts
- Monthly architecture review sweeps
- Quarterly tech debt audits
The pattern is to define the task once with clear instructions and acceptance criteria, schedule it on a cron, and review the output. The scheduled run does 90% of the legwork; the human reviews and acts.
Pattern to steal: identify any recurring task you do manually. If you can write down what good looks like, it's a candidate for a scheduled task.
Pattern 9: "Code-review your prompt" loop
Several Anthropic engineers have described a meta-pattern: when a prompt isn't producing good results, ask Claude to review the prompt itself.
Here's the prompt I'm using to extract structured data from invoices:
{{prompt}}
Here's a sample output that's wrong:
{{actual_output}}
Here's what I expected:
{{expected_output}}
Review this prompt. What's likely causing the failure? What changes would improve it?
This is a fast loop: minutes per iteration. Instead of guessing what to change, you get a model-level read on what the prompt is signaling vs what you intended.
Pattern to steal: when prompt engineering isn't working, treat the prompt as code under review. Claude is reasonably good at diagnosing its own prompt comprehension failures.
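A tiny helper makes the loop mechanical to run repeatedly; the function name and template wording here are illustrative:

```python
REVIEW_TEMPLATE = """\
Here's the prompt I'm using to extract structured data from invoices:

{prompt}

Here's a sample output that's wrong:

{actual_output}

Here's what I expected:

{expected_output}

Review this prompt. What's likely causing the failure? \
What changes would improve it?"""

def build_review_prompt(prompt: str, actual_output: str, expected_output: str) -> str:
    # Fill the three slots; the result is sent back to the model for review.
    return REVIEW_TEMPLATE.format(
        prompt=prompt,
        actual_output=actual_output,
        expected_output=expected_output,
    )

meta = build_review_prompt(
    "Extract line items as JSON.",
    '{"items": []}',
    '{"items": [{"sku": "A-1", "qty": 2}]}',
)
```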
Pattern 10: Small, well-scoped tasks > big, vague tasks
The most consistently emphasised pattern across Anthropic content is: don't ask Claude to do too much in one shot.
Bad pattern: "Refactor the auth system, add SSO, and update all the docs"
Good pattern: a sequence of:
- "List all files involved in the auth system" (subagent)
- "Plan the refactor as a sequence of independent steps" (plan mode)
- "Execute step 1 of the plan" (focused task)
- "Review the diff for correctness" (fresh review pass)
- "Execute step 2..." (continue)
The pattern is mechanical: split big tasks into many small tasks, with explicit context resets between them. This keeps each task within Claude's reliability range and produces auditable diff sequences.
Pattern to steal: when a task feels big, your first action is to decompose it, not to start it.
Pattern 11 (bonus): Treat Claude's failures as data, not noise
Mike Krieger and Anthropic engineers have publicly described internal "Claude failure mode" investigations: when Claude produces a wrong or surprising output, treat the case as a bug worth understanding, not a glitch to ignore.
The investigation looks like:
- What was the prompt?
- What was the expected output?
- What was the actual output?
- Why did the model produce that output?
- Is the prompt structurally setting up the failure, or is this a model limitation?
Why this matters: most Claude failures are prompt-shaped, not model-shaped. Investigating them upgrades your prompts. Ignoring them keeps you at the same baseline.
Pattern to steal: maintain a failure log for production agents. Review weekly. Most entries become prompt improvements.
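A failure log needs nothing fancier than append-only JSONL. A sketch with field names mirroring the investigation questions above; the paths and example entry are illustrative:

```python
import json
import tempfile
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class FailureEntry:
    """One row in the failure log, one field per investigation question."""
    prompt: str
    expected: str
    actual: str
    hypothesis: str  # prompt-shaped or model-shaped?

def log_failure(log_path: Path, entry: FailureEntry) -> None:
    # Append-only JSONL: easy to grep, easy to review weekly.
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

log_path = Path(tempfile.mkdtemp()) / "claude_failures.jsonl"
log_failure(log_path, FailureEntry(
    prompt="Summarise the incident report",
    expected="Three-bullet summary",
    actual="Full rewrite of the report",
    hypothesis="prompt-shaped: 'summarise' left length unconstrained",
))
entries = [json.loads(line) for line in log_path.read_text().splitlines()]
print(len(entries))  # → 1
```

During the weekly review, entries tagged prompt-shaped become prompt edits; entries tagged model-shaped become workarounds or escalations.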
What Anthropic engineers don't do (publicly)
A few non-patterns are also informative:
They don't ask for "best" practices generically. Specific tasks with specific context produce better outputs than generic "what's the best way to X" queries.
They don't accept long, generic system prompts. Public examples consistently show focused, task-specific system prompts under 1,000 words for most use cases. Bloated prompts are an anti-pattern.
They don't trust the first answer on hard problems. Important questions get cross-checked: a second prompt, a different framing, a verification pass.
They don't manually re-run the same workflow. If something runs more than five times, it becomes a skill, a script, or a scheduled task.
They don't keep stale CLAUDE.md or stale tests. Both are described in public talks as "lying documents" that mislead Claude (and humans).
Putting it together: a typical workflow
Combining the patterns above, a typical Anthropic-style workflow for "add a new feature" looks like:
- Read CLAUDE.md — Claude inherits project context
- Frame the task as a goal, not a step list
- Plan mode — Claude proposes a plan
- Review and edit the plan — human judgment on tradeoffs
- Execute step-by-step — small, focused tasks
- Review each diff — fresh review pass on Claude's output
- Add tests — structured task with test patterns from CLAUDE.md
- Documentation — update CLAUDE.md if conventions changed
- Failure log — record any Claude failures encountered for future improvement
This is not a 10-minute workflow. A non-trivial feature might take 1–4 hours of guided collaboration. The point isn't speed of any single step — it's reliability of the end-to-end output.
Frequently asked questions
Where can I read more public Anthropic engineering content? The Anthropic engineering blog is the primary source. Recent talks from Anthropic engineers at AI Engineer Summit and similar conferences are available on YouTube. Mike Krieger's podcast appearances cover product-engineering process specifically.
Are these patterns specific to Anthropic, or do they generalise? They generalise. Most are good practices for any LLM-based development workflow. They're emphasised more inside Anthropic because Anthropic engineers eat their own dog food, but the underlying patterns work with GPT-5, Gemini 2, and open models too.
What's the single most underrated pattern? Pattern 9 (code-review your prompt). It's underrated because it's recursive — using the model to debug your use of the model — and most people don't think to try it.
How long does it take to internalise these patterns? Most developers report a 4–8 week ramp from "writing prompts naively" to "running this workflow by default". The plan mode + subagent + skill triad takes the longest because it requires a mental model shift away from "Claude is autocomplete" toward "Claude is a junior engineer".
Is there a community where these patterns are discussed? The Anthropic Discord has active channels for Claude Code, Agent SDK, and prompt engineering. The r/ClaudeAI subreddit is also active. The patterns spread fastest in those venues, often weeks before they appear in formal blog posts.
Take It Further
Claude Code Mastery: Skills, Workflows, Productivity — The patterns above, distilled into a practical implementation guide: CLAUDE.md templates, plan mode workflows, the skill library to copy-paste, and the productivity loops that compound over time.
→ Get Claude Code Mastery — $29
30-day money-back guarantee. Instant download.
Sources
- Anthropic Engineering Blog
- Building Effective Agents — Anthropic
- Mike Krieger on Lenny's Podcast (2025)
- Anthropic Cookbook (GitHub)
- Claude Code documentation
- AI Engineer Summit 2025 talks (Anthropic engineers)