Production-tested prompts you can copy, paste, and ship today — organized by debugging, refactoring, testing, architecture, documentation, and security.

Reproduce-then-fix
Reproduce the bug in {file} with a minimal failing test, then fix it. Don't change unrelated code. Show the failing test before the fix and the passing test after.
When: Best for non-trivial bugs where regression risk is real.
Here is a stack trace: {paste}. Find the root cause file and line, explain what's wrong in 2 sentences, then propose the smallest fix that makes the failing test pass. Don't speculate about causes you can't verify.
When: Production crash investigation, especially for unfamiliar codebases.
I broke {feature} sometime in the last 10 commits. Read `git log -p HEAD~10..HEAD`, propose 3 hypotheses ranked by likelihood, then run git bisect with the verification command `{cmd}`.
When: Regression introduced recently — narrow the search faster than manual bisect.
Add temporary structured logging around {function} to capture the variables that drive its behavior. Run the test 10 times. If it intermittently fails, identify the timing/race condition. Remove the logging when done.
When: Flaky tests, race conditions, timing-sensitive bugs.
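A minimal sketch of what the temporary instrumentation can look like in TypeScript. The target `computeTotal` and the `traced` wrapper are illustrative; delete both once the flake is isolated:

```typescript
// Each call records a timestamp, the arguments, and the result,
// so entries from passing and failing runs can be diffed.
type LogEntry = { ts: number; args: unknown[]; result: unknown };
const trace: LogEntry[] = [];

// Hypothetical function under investigation.
function computeTotal(items: number[]): number {
  return items.reduce((sum, n) => sum + n, 0);
}

// Temporary wrapper: capture the variables that drive the function's behavior.
function traced<A extends unknown[], R>(fn: (...args: A) => R) {
  return (...args: A): R => {
    const result = fn(...args);
    trace.push({ ts: Date.now(), args, result });
    return result;
  };
}

const computeTotalTraced = traced(computeTotal);
```

Run the test repeatedly against the traced version, then compare `trace` entries between runs — a timing-sensitive input usually shows up as the only field that differs.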
Run `bun run typecheck` and fix all errors in {dir}. Don't use `any` or `@ts-ignore`. If you must widen a type, justify it in a comment.
When: After a TypeScript upgrade or strict mode migration.
Extract {large_function} into smaller pure functions. Each new function has one responsibility. Don't change observable behavior. Run the existing tests after each extraction to confirm.
When: Long functions that are hard to read or test in isolation.
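A toy before/after of the extraction, assuming a hypothetical `orderTotal` that used to filter, multiply, and sum in one body:

```typescript
type Item = { price: number; qty: number; active: boolean };

// Each extracted helper is pure and does exactly one thing.
const activeItems = (items: Item[]): Item[] => items.filter((i) => i.active);

const lineTotal = (i: Item): number => i.price * i.qty;

// The original entry point keeps its signature, so observable behavior
// (and every caller) is unchanged.
function orderTotal(items: Item[]): number {
  return activeItems(items).map(lineTotal).reduce((a, b) => a + b, 0);
}
```

Each helper is now testable in isolation, and the entry point reads as a pipeline.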
Find the 3 most-duplicated patterns in {dir} (use grep + manual reading). Propose an abstraction for each, then implement the one with the highest leverage. Show before/after line counts.
When: Codebase grew quickly and copy-paste accumulated.
Find abstractions in {dir} that have only 1 caller. For each, propose: keep, inline, or delete. Apply the deletes/inlines that don't lose meaning.
When: Codebases where premature abstraction made things harder, not easier.
Identify functions in {file} that take I/O or mutate global state but could be pure. Refactor them to accept dependencies as arguments. Don't break callers.
When: Hard-to-test code with hidden dependencies.
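A minimal sketch of the refactor, using a hypothetical `isExpired` that previously called `Date.now()` directly. The default parameter keeps existing callers compiling unchanged while tests supply a fake clock:

```typescript
type Clock = () => number;

// Before: `return Date.now() >= expiresAt;` — untestable without mocking globals.
// After: the clock is an injected dependency with the real one as the default.
function isExpired(expiresAt: number, now: Clock = Date.now): boolean {
  return now() >= expiresAt;
}
```

Existing callers still write `isExpired(deadline)`; tests write `isExpired(deadline, () => fakeTime)`.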
Audit the public API of {module}. For each export, decide: stays public, becomes internal (`_` prefix or unexported), or moves to a separate internal module. Apply the changes.
When: Modules with leaky public surface that's hard to evolve.
Run coverage. For files under 80%, write tests targeting the uncovered branches. Use the existing test framework and patterns. Don't test implementation details — test behavior.
When: Pre-release coverage push, or after a feature lands.
Write a property-based test for {function} using fast-check (or hypothesis for Python). Identify 3 properties that must always hold (idempotence, commutativity, bounds), then generate 1000 random inputs.
When: Pure functions with edge cases that example-based tests miss.
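A hand-rolled sketch of the idea — fast-check's `fc.assert`/`fc.property` add shrinking and proper generators, but the shape is the same. Only `normalize` and the idempotence property are assumed here:

```typescript
// Hypothetical function under test.
const normalize = (s: string): string => s.trim().toLowerCase();

// Property: normalize is idempotent — applying it twice equals applying it once.
for (let i = 0; i < 1000; i++) {
  // Random printable-ASCII string (fast-check's arbitraries replace this).
  const len = Math.floor(Math.random() * 20);
  let s = "";
  for (let j = 0; j < len; j++) {
    s += String.fromCharCode(32 + Math.floor(Math.random() * 95));
  }
  const once = normalize(s);
  if (normalize(once) !== once) {
    throw new Error(`not idempotent for ${JSON.stringify(s)}`);
  }
}
```

With fast-check the loop collapses to `fc.assert(fc.property(fc.string(), (s) => normalize(normalize(s)) === normalize(s)))`, and failing inputs are shrunk to a minimal counterexample.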
Add a snapshot test for {component} that captures only its semantic output (rendered HTML stripped of whitespace), not styling or implementation. Reject the snapshot if it's >100 lines — refactor first.
When: UI components where structure matters more than pixel-perfect output.
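A sketch of the "semantic output only" normalization, assuming whitespace and inline `style` attributes are the noise to strip — real components may need more rules:

```typescript
// Reduce rendered HTML to its semantic structure before snapshotting.
function semanticSnapshot(html: string): string {
  return html
    .replace(/\sstyle="[^"]*"/g, "") // styling is not part of the contract
    .replace(/>\s+</g, "><")         // drop whitespace between tags
    .replace(/\s+/g, " ")            // collapse remaining runs
    .trim();
}
```

Snapshots produced this way survive a CSS refactor but still fail when the rendered structure changes.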
Replace 5 unit tests for {module} with one integration test that exercises the full public API. Goal: same coverage, less brittle when refactoring.
When: Test suites that break on every refactor without catching real bugs.
Add tests for the failure cases of {function}: invalid input, network failure, timeout, partial result. For each, assert the exact error type/code, not just 'throws'.
When: Code that handles external dependencies (DB, API, filesystem).
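A sketch of the pattern such tests rely on: failures surfaced as typed errors with stable codes, so the assertion names the exact failure. `InvalidInputError` and `parsePort` are illustrative:

```typescript
// A typed error with a stable machine-readable code.
class InvalidInputError extends Error {
  readonly code = "EINVAL";
}

function parsePort(raw: string): number {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    throw new InvalidInputError(`not a port: ${raw}`);
  }
  return n;
}
```

The test then asserts `err instanceof InvalidInputError && err.code === "EINVAL"` — a refactor that starts throwing a different failure breaks the test, where a bare `expect(...).toThrow()` would not.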
I want to add {feature}. Read the relevant files in {dir}, then propose 2-3 implementation approaches with trade-offs (complexity, perf, future flexibility). Don't write code yet — wait for me to choose.
When: Before any non-trivial feature. Cheaper than rewriting.
Map the import graph of {dir} (use ripgrep on `import` statements). Find any cycles. For each cycle, propose how to break it (extract shared module, invert dependency, delete one direction).
When: Codebases where 'where do I put this?' is unclear.
Define the contract for {module} as a TypeScript interface (or Python protocol). List every assumption it makes about its callers and every guarantee it makes to them. Update the implementation if it breaks the contract.
When: Module that multiple teams or services depend on.
Before designing {new_thing}, read these 5 files: {list}. Summarize their patterns. Then propose a design that's consistent with what already works in this codebase — not a 'best practice' from elsewhere.
When: Joining a codebase or proposing a non-trivial change.
Before I add {feature}, estimate: (a) lines changed, (b) test surface to update, (c) docs to write, (d) likely follow-up bugs in the next 30 days based on similar past changes. If any number is surprising, suggest a smaller scope.
When: Sizing work before committing to a sprint estimate.
Read {dir}. Generate a README that covers: what this does, why it exists, how to run it locally, common gotchas, where the entry points are. No marketing fluff.
When: Inherited or undocumented codebase you need to ramp others on.
Add comments to {file} only where the WHY isn't obvious from the code. Don't comment 'increment counter'. Do comment 'we re-fetch here because X server caches for 5min'.
When: Code that's correct but mysterious to anyone but the author.
We changed {API} from version X to Y. Read the diff between {tag1} and {tag2}, then write a migration guide: each breaking change, before/after code, why we changed it.
When: Released a major version and want to reduce support load.
Write a 'first 30 minutes' guide for a new engineer joining {service}. Tasks they should complete to verify their environment works, with expected output for each step.
When: Recurring 'help me set up' Slack messages.
Generate API documentation from the TypeScript types in {file}. Include each function's purpose, parameters, return value, error cases, and one usage example.
When: Library or shared module without external docs.
Scan {dir} for code paths that take user input. For each, verify validation/sanitization. Flag any that trust the input or pass it directly to SQL, shell, eval, or filesystem ops.
When: Pre-launch security pass or after a security advisory.
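A sketch of the before/after the scan should distinguish for the SQL case, with a hypothetical `userByName`. The unsafe variant splices input into the query string; the safe one passes it as a parameter the driver escapes:

```typescript
type Query = { text: string; params: unknown[] };

// FLAG: `SELECT * FROM users WHERE name = '${name}'` — input reaches SQL directly.
// SAFE: a placeholder plus a params array; the input never touches the query text.
function userByName(name: string): Query {
  return { text: "SELECT * FROM users WHERE name = $1", params: [name] };
}
```

The same shape applies to shell (`execFile` with an args array, never string concatenation into `exec`) and filesystem paths (resolve, then verify the result stays inside the allowed root).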
Scan the repo for accidentally committed secrets (API keys, JWTs, private keys). Use regex patterns. List findings without printing the secret value. Recommend rotation steps.
When: Quarterly or before open-sourcing a repo.
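A sketch of what such a scanner can look like. The patterns are illustrative, not exhaustive, and the report deliberately carries only the location and kind of the match, never the matched value:

```typescript
// Illustrative patterns for common secret shapes.
const SECRET_PATTERNS: Record<string, RegExp> = {
  "AWS access key": /AKIA[0-9A-Z]{16}/,
  "private key block": /-----BEGIN [A-Z ]*PRIVATE KEY-----/,
  "JWT": /eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+\./,
};

// Returns findings as "file:line: possible <kind>" — the secret itself is never printed.
function scanLine(file: string, lineNo: number, line: string): string[] {
  const findings: string[] = [];
  for (const [kind, re] of Object.entries(SECRET_PATTERNS)) {
    if (re.test(line)) findings.push(`${file}:${lineNo}: possible ${kind}`);
  }
  return findings;
}
```

Dedicated tools (gitleaks, trufflehog) ship far larger pattern sets and scan git history, not just the working tree; this only shows the report-without-revealing shape.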
Run `bun audit` (or `npm audit`/`pip-audit`). For each vulnerability, classify: false positive (with reason), low priority, fix now. Open issues for 'fix now' with reproduction.
When: Weekly maintenance or after a CVE announcement.
List every API endpoint in {dir} and the auth check applied to each. Flag endpoints with no auth, weak auth (e.g., user-supplied user_id), or unclear auth.
When: After introducing new endpoints or before a security review.
Audit access patterns in {module}. For each role/permission, list what it can do and read. Flag any 'admin' role doing more than necessary, or non-admin paths that bypass checks.
When: Multi-tenant or role-based systems before major releases.