
Claude Code Pull Request Review Automation: Full Setup Guide (2026)

Automate PR code reviews with Claude Code — GitHub Actions workflow, diff-aware prompts, comment posting, and cost control. Working setup in 20 minutes.


Claude Code can automatically review pull requests by fetching the diff, analyzing changed files, and posting structured comments — triggered on every PR open or update via GitHub Actions. A working setup takes under 20 minutes and costs roughly $0.02-$0.08 per review with Claude Haiku. This guide walks through the complete pipeline: the Actions workflow, prompt engineering for useful reviews, posting comments via the GitHub API, and controlling costs at scale.


Why Automate PR Reviews with Claude Code?

Manual code review is a bottleneck. In one study of engineering teams, roughly 35% of the time developers spent blocked on PRs was spent waiting for reviewer availability, not on the review itself.

Claude Code PR automation delivers:

- Instant first-pass feedback the moment a PR opens, instead of hours waiting for a human reviewer
- Consistent checks on every PR: bugs, security, error handling, test coverage
- A cheap pre-filter so human reviewers can focus on design and intent

Cost benchmark: 100 PRs/month with average 200-line diffs costs approximately $2-8/month using Claude Haiku.


Architecture Overview

PR opened/updated
      ↓
GitHub Actions triggered
      ↓
Fetch PR diff (GitHub API)
      ↓
Send diff + review prompt → Claude API
      ↓
Parse Claude's response
      ↓
Post as PR comment (GitHub API)

Step 1: The GitHub Actions Workflow

# .github/workflows/claude-pr-review.yml
name: Claude PR Review

on:
  pull_request:
    types: [opened, synchronize]
    paths-ignore:
      - "*.md"
      - "docs/**"

permissions:
  pull-requests: write
  contents: read

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get PR diff
        id: diff
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr diff ${{ github.event.pull_request.number }} \
            --patch > pr_diff.txt
          echo "diff_lines=$(wc -l < pr_diff.txt)" >> $GITHUB_OUTPUT

      - name: Review with Claude
        id: review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          python3 .github/scripts/claude_review.py \
            --diff pr_diff.txt \
            --output review.md

      - name: Post review comment
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr comment ${{ github.event.pull_request.number }} \
            --body-file review.md
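If a contributor pushes several times in quick succession, each synchronize event triggers a fresh review and a fresh API call. A standard Actions concurrency block at the top level of the workflow file cancels superseded runs so you only pay for the latest push (the group name here is arbitrary):

```yaml
concurrency:
  group: claude-review-${{ github.event.pull_request.number }}
  cancel-in-progress: true
```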

Step 2: The Review Script

# .github/scripts/claude_review.py
import anthropic
import argparse

def review_pr(diff_path: str, output_path: str):
    with open(diff_path, "r") as f:
        diff = f.read()

    # Truncate very large diffs to stay within context limits
    max_diff_chars = 50_000
    if len(diff) > max_diff_chars:
        diff = diff[:max_diff_chars] + "\n\n[... diff truncated for length ...]"

    client = anthropic.Anthropic()

    prompt = f"""You are a senior software engineer reviewing a pull request.
Analyze the following git diff and provide a structured code review.

Focus on:
1. Bugs or logic errors
2. Security vulnerabilities
3. Performance issues
4. Missing error handling
5. Test coverage gaps
6. Code clarity and maintainability

Format your response as:

## Summary
[1-2 sentence overview of what this PR does]

## Issues Found
[List each issue with severity: 🔴 Critical | 🟡 Warning | 🔵 Suggestion]

## Positive Observations
[What was done well]

## Recommended Changes
[Specific actionable suggestions]

Git diff:

{diff}

"""

    response = client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}]
    )

    review_text = response.content[0].text
    
    # Add metadata footer. Rates below assume Haiku 4.5 pricing
    # ($1/MTok input, $5/MTok output); verify against current pricing.
    input_tokens = response.usage.input_tokens
    output_tokens = response.usage.output_tokens
    cost_usd = (input_tokens * 1e-6) + (output_tokens * 5e-6)
    
    footer = f"\n\n---\n*Reviewed by Claude Haiku · {input_tokens} input / {output_tokens} output tokens · ~${cost_usd:.4f}*"

    with open(output_path, "w") as f:
        f.write(review_text + footer)
    
    print(f"Review written to {output_path}")
    print(f"Cost: ${cost_usd:.4f}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--diff", required=True)
    parser.add_argument("--output", required=True)
    args = parser.parse_args()
    review_pr(args.diff, args.output)

Install the dependency in CI:

- name: Install Anthropic SDK
  run: pip install anthropic

50 tested prompts for code review, PR automation, and CI/CD

Power Prompts ($29) includes battle-tested review prompts that catch real bugs — not just style nits.

Get Power Prompts — $29


Step 3: Prompt Engineering for Useful Reviews

The default review prompt above covers the basics. Customize it for your codebase:

Language-Specific Prompts

# Detect language from the diff and customize the review prompt
if ".py" in diff:
    language_rules = """
    Additional Python-specific checks:
    - Type hints on all function signatures
    - No bare `except:` clauses
    - f-strings instead of % formatting
    - Docstrings on public functions
    """
elif ".ts" in diff or ".tsx" in diff:
    language_rules = """
    Additional TypeScript-specific checks:
    - No `any` types without justification
    - Proper async/await error handling
    - React hooks rules compliance
    - No console.log left in code
    """
else:
    language_rules = ""

# Append language_rules to the review prompt before calling the API

Severity Filtering

If you only want critical issues posted as blocking reviews:

# Parse Claude's response for critical issues
import subprocess  # pr_number comes from the workflow, e.g. os.environ["PR_NUMBER"]

if "🔴 Critical" in review_text:
    # Post as a blocking review request
    subprocess.run([
        "gh", "pr", "review", pr_number,
        "--request-changes",
        "--body-file", output_path
    ], check=True)
else:
    # Post as informational comment
    subprocess.run([
        "gh", "pr", "comment", pr_number,
        "--body-file", output_path
    ], check=True)
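A plain substring check works, but counting each marker gives finer control — for example, blocking only when several criticals appear. A minimal sketch, assuming Claude follows the severity markers from the review prompt:

```python
def severity_counts(review_text: str) -> dict[str, int]:
    """Count the severity markers Claude was instructed to emit."""
    markers = {
        "critical": "🔴 Critical",
        "warning": "🟡 Warning",
        "suggestion": "🔵 Suggestion",
    }
    return {name: review_text.count(marker) for name, marker in markers.items()}

review = "🔴 Critical: SQL injection\n🟡 Warning: no timeout\n🟡 Warning: broad except"
print(severity_counts(review))  # {'critical': 1, 'warning': 2, 'suggestion': 0}
```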

Step 4: Skip Conditions

Not every PR needs a full review. Add skip logic to control costs:

- name: Check if review needed
  id: check
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    # Count changed files that are not Markdown docs
    # (grep exits non-zero when nothing matches, so guard with || true)
    NON_DOCS=$(gh pr diff ${{ github.event.pull_request.number }} \
      --name-only | grep -vc "\.md$" || true)
    
    # Skip if only docs changed
    if [ "$NON_DOCS" -eq "0" ]; then
      echo "skip=true" >> $GITHUB_OUTPUT
    else
      echo "skip=false" >> $GITHUB_OUTPUT
    fi

- name: Review with Claude
  if: steps.check.outputs.skip != 'true'
  # ... rest of review step

Skip conditions to consider:

- Docs-only changes (*.md, docs/**)
- Draft PRs (github.event.pull_request.draft)
- Bot-authored PRs (Dependabot, Renovate)
- Very large diffs better handled by a human
- PRs carrying an explicit skip-review label
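Draft PRs are a common skip condition, and they can be handled without any script logic at all — a job-level `if:` condition in the workflow (plain GitHub Actions syntax):

```yaml
jobs:
  review:
    if: github.event.pull_request.draft == false
    runs-on: ubuntu-latest
```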

Cost Control at Scale

| Model | Cost per 200-line diff | Quality |
|---|---|---|
| Claude Haiku 3.5 | ~$0.02 | Good for style/bugs |
| Claude Sonnet 4.5 | ~$0.15 | Better for logic/security |
| Claude Opus 4.5 | ~$0.60 | Best for complex systems |

Recommended strategy: run Haiku on every PR and escalate to Sonnet only when Haiku flags critical issues. This two-tier approach cuts costs by roughly 85% compared to running Sonnet on everything, while preserving quality for the PRs that matter.

For detailed model pricing, see Claude Haiku vs Sonnet vs Opus: Which Model to Use.


Advanced: Inline Code Comments

Instead of a single PR comment, post inline comments on specific lines:

import requests

def post_inline_comment(token, repo, pr_number, commit_sha, path, line, body):
    """Post an inline review comment on a specific file and line."""
    url = f"https://api.github.com/repos/{repo}/pulls/{pr_number}/comments"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github.v3+json"
    }
    data = {
        "body": body,
        "commit_id": commit_sha,
        "path": path,
        "line": line,
        "side": "RIGHT"
    }
    response = requests.post(url, json=data, headers=headers)
    return response.status_code == 201

To use this, update your Claude prompt to return structured JSON instead of markdown:

prompt = """
Review this diff and return JSON:
{
  "summary": "...",
  "comments": [
    {
      "file": "src/main.py",
      "line": 42,
      "severity": "warning",
      "message": "This could raise KeyError if key is missing"
    }
  ]
}
"""

Parse the JSON and post each comment to its corresponding line. The result closely mirrors the experience of receiving an inline GitHub review from a human reviewer.
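Claude sometimes wraps JSON output in a code fence, so the parsing step should tolerate that. A sketch (`parse_review_json` is an illustrative helper, not part of the API function above):

```python
import json

def parse_review_json(raw: str) -> list[dict]:
    """Extract the comments array, tolerating a ```json fence around the payload."""
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.removeprefix("```json").removeprefix("```")
        cleaned = cleaned.removesuffix("```").strip()
    data = json.loads(cleaned)
    return data.get("comments", [])

raw = '{"summary": "Adds retry logic", "comments": [{"file": "src/main.py", "line": 42, "severity": "warning", "message": "Possible KeyError"}]}'
print(len(parse_review_json(raw)))  # 1
```

Each parsed comment can then be passed to post_inline_comment along with the PR's head commit SHA.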

For full multi-step agent patterns including inline review workflows, see Claude Agent SDK Guide.


Security Considerations

- Fork PRs: the pull_request trigger does not expose repository secrets to workflows triggered from forks, so fork PRs will simply not get a review. Use pull_request_target only with extreme care.
- Prompt injection: the diff is untrusted input, and a malicious PR could include text that tries to steer the review. Treat Claude's output as advisory — never as an auto-merge gate.
- Least privilege: the workflow above requests only pull-requests: write and contents: read.
- Data exposure: the diff is sent to the Anthropic API. Ensure secrets are never committed, and consider blocking the review step if a secret scanner flags the diff.

Frequently Asked Questions

How much does automated PR review with Claude cost per month?

For a team merging 100 PRs/month with average 200-line diffs, Claude Haiku costs approximately $2-8/month. Sonnet costs $12-18/month for the same volume. The exact cost depends on diff size — use response.usage.input_tokens logging to track your actual spend.

Can Claude Code review PRs in private repositories?

Yes. The workflow uses the GitHub API to fetch the diff, then sends it to the Anthropic API. Your code is not stored by Anthropic between calls. Check Anthropic's data usage policy if your organization has strict data residency requirements.

How do I prevent Claude from posting redundant comments on re-reviews?

Before posting, update your bot's previous comment instead of creating a new one: gh pr comment <number> --edit-last --body-file review.md edits the most recent comment made by the authenticated account. Alternatively, list the PR's issue comments via the GitHub REST API, delete any whose body contains your bot's footer, then post fresh.

What's the best prompt for catching security vulnerabilities?

Explicitly list OWASP Top 10 in your prompt: "Check for: SQL injection, XSS, insecure deserialization, broken authentication, sensitive data exposure, missing access controls, security misconfiguration, using components with known vulnerabilities, insufficient logging." Specific checklists consistently outperform generic "check for security issues" prompts.

Can I use Claude Code PR review on GitLab or Bitbucket?

Yes, with modifications. GitLab CI/CD uses the same Python script — replace gh pr diff with git diff origin/main...HEAD and GitLab's merge request API for posting comments. The Anthropic API call is identical regardless of your Git platform.


Battle-tested review prompts for real-world codebases

Power Prompts ($29) includes specialized review prompts for Python, TypeScript, Go, and SQL — tuned to catch the bugs that matter.

Get Power Prompts — $29
