
Automated Code Review with Claude: Setup and Workflow

How to set up automated code review with Claude — GitHub Actions integration, PR review automation, inline comment generation, and the prompts that catch real bugs.


Automated code review with Claude runs on every PR, flags issues before human review, and posts specific inline comments on the diff. The setup: a GitHub Action that triggers on pull requests, extracts the diff, sends it to Claude with a focused review prompt, and posts the response as a PR comment. This takes about 30 minutes to set up and catches real bugs — particularly security issues, missing error handling, and edge cases that manual review often misses.


GitHub Actions workflow

Create .github/workflows/code-review.yml:

name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  pull-requests: write
  contents: read

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for better diff

      - name: Get diff
        id: diff
        run: |
          git diff origin/${{ github.base_ref }}...HEAD -- \
            '*.py' '*.ts' '*.tsx' '*.js' '*.go' \
            > diff.txt
          # Truncate if too large (50KB limit to manage costs)
          head -c 50000 diff.txt > diff_truncated.txt

      - name: Run Claude review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          REPO: ${{ github.repository }}
        run: |
          pip install anthropic requests
          python .github/scripts/claude_review.py

The review script

Create .github/scripts/claude_review.py:

import anthropic
import requests
import os

REVIEW_PROMPT = """Review this pull request diff for:

1. **Security issues** (SQL injection, XSS, exposed credentials, unsafe deserialization)
2. **Bugs** (null pointer dereferences, off-by-one errors, race conditions, wrong comparisons)
3. **Missing error handling** (unhandled exceptions, missing null checks, no timeout)
4. **Performance problems** (N+1 queries, missing indexes, unnecessary loops in critical paths)
5. **Code quality** (dead code, naming confusion, overly complex logic)

Format your response as:

## Summary
One paragraph summary of the overall quality and key concerns.

## Issues Found

### [CRITICAL/HIGH/MEDIUM/LOW] Issue Title
**File**: filename.py (if determinable from diff)
**Problem**: Specific description of the issue
**Fix**: Concrete fix recommendation

[Repeat for each issue found]

## Approved
If no significant issues: write "No critical or high severity issues found."

Focus on real bugs and security issues. Don't flag style issues or suggest refactors 
unless the current code is likely to cause bugs.

Diff:
"""

def get_diff() -> str:
    try:
        with open("diff_truncated.txt") as f:
            return f.read()
    except FileNotFoundError:
        return ""

def post_review_comment(review_text: str, pr_number: int, repo: str, token: str):
    url = f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments"
    headers = {
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json",
    }
    body = f"## Claude Code Review\n\n{review_text}\n\n---\n*Automated review by Claude API*"
    
    response = requests.post(url, headers=headers, json={"body": body})
    response.raise_for_status()

def main():
    diff = get_diff()
    
    if not diff.strip():
        print("No diff found. Skipping review.")
        return
    
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    
    # Route based on diff size: Haiku for small, Sonnet for large
    model = "claude-haiku-4-5" if len(diff) < 10_000 else "claude-sonnet-4-5"
    
    response = client.messages.create(
        model=model,
        max_tokens=4096,
        messages=[{"role": "user", "content": REVIEW_PROMPT + diff}]
    )
    
    review_text = response.content[0].text
    
    post_review_comment(
        review_text=review_text,
        pr_number=int(os.environ["PR_NUMBER"]),
        repo=os.environ["REPO"],
        token=os.environ["GITHUB_TOKEN"],
    )

if __name__ == "__main__":
    main()
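
As written, any API error fails the workflow step, which shows up as a red check on the PR. If you would rather retry transient failures before giving up, a minimal sketch; the attempt count and backoff are illustrative, not tuned:

import time

import anthropic  # already imported in claude_review.py

def review_with_retries(client, model: str, prompt: str, attempts: int = 3) -> str:
    """Call the Messages API, backing off briefly after transient failures."""
    for attempt in range(attempts):
        try:
            response = client.messages.create(
                model=model,
                max_tokens=4096,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.content[0].text
        except anthropic.APIError:
            if attempt == attempts - 1:
                raise  # out of retries: let the workflow step fail
            time.sleep(2 ** attempt)  # wait 1s, then 2s, between attempts

Note that the Anthropic SDK also retries some failures on its own (the client constructor accepts a max_retries argument), so a wrapper like this is mainly useful when you want custom backoff behaviour.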

Add the API key secret

In your GitHub repository:

  1. Settings → Secrets and variables → Actions
  2. New repository secret: ANTHROPIC_API_KEY = your API key
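
Before wiring this into CI, you can smoke-test the prompt locally against a branch. A minimal sketch, assuming you run it from the repo root with ANTHROPIC_API_KEY exported; it reproduces the diff extraction and truncation from the workflow:

import os
import subprocess

import anthropic

REVIEW_PROMPT = "..."  # paste the prompt from claude_review.py

# Produce the same diff the workflow step would (here against local main)
# and apply the same 50KB truncation.
diff = subprocess.run(
    ["git", "diff", "main...HEAD", "--", "*.py", "*.ts", "*.tsx", "*.js", "*.go"],
    capture_output=True, text=True, check=True,
).stdout[:50_000]

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
response = client.messages.create(
    model="claude-haiku-4-5",
    max_tokens=4096,
    messages=[{"role": "user", "content": REVIEW_PROMPT + diff}],
)
print(response.content[0].text)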

Customising the review for your stack

The default prompt catches general issues. Customise for your tech stack:

DJANGO_REVIEW_PROMPT = """Review this Django PR diff for:

1. SQL injection risks (raw queries, .extra(), annotate() with user input)
2. Missing @login_required or permission checks on views
3. N+1 query patterns (loops calling .get() or unoptimised querysets)
4. Missing select_related/prefetch_related on ForeignKey traversals
5. Celery task issues (missing error handling, missing bind=True for retries)
6. Missing database transactions for multi-step writes
"""

TYPESCRIPT_REACT_PROMPT = """Review this TypeScript React PR diff for:

1. Type safety issues (any types, missing null checks, incorrect type assertions)
2. React anti-patterns (state mutation, missing dependencies in useEffect/useMemo)
3. Memory leaks (event listeners not cleaned up, subscriptions in useEffect)
4. Accessibility issues (missing ARIA labels, keyboard navigation problems)
5. Security issues (XSS risks from unsanitised HTML rendering, exposed API keys)
6. Performance issues (expensive operations in render, missing memoisation)
"""

Cost management

Each PR review of a ~10KB diff sends roughly 2,500–3,000 input tokens (about 4 bytes of diff per token) plus the review prompt, and returns at most the 4,096 output tokens set in the script.

For a team with 50 PRs/week: ~$2/week using Haiku for small diffs. Negligible relative to the time saved in human review.
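
To budget for your own PR volume, a back-of-envelope estimator. The token maths assumes roughly 4 bytes of diff per input token; the prices are deliberately left as parameters because per-model rates change, so substitute current figures from Anthropic's pricing page:

def estimate_weekly_cost(
    prs_per_week: int,
    avg_diff_bytes: int,
    input_price_per_mtok: float,   # current $/million input tokens
    output_price_per_mtok: float,  # current $/million output tokens
    avg_output_tokens: int = 1_000,
) -> float:
    """Rough weekly spend on automated reviews."""
    input_tokens = avg_diff_bytes / 4 + 500  # ~500 tokens for the review prompt
    cost_per_pr = (
        input_tokens / 1e6 * input_price_per_mtok
        + avg_output_tokens / 1e6 * output_price_per_mtok
    )
    return prs_per_week * cost_per_pr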


Advanced: creating a GitHub review with inline comments

For GitHub's "review" format (rather than a plain issue comment), with optional inline comments attached to lines of the diff:

def create_github_review(
    review_text: str,
    pr_number: int,
    repo: str,
    token: str,
    commit_sha: str,
    inline_comments: list[dict] | None = None,
):
    """Create a GitHub review. With no inline_comments it shows as a single
    'commented' entry in the PR timeline; with them, each comment is anchored
    to a specific line of the diff."""
    url = f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews"
    headers = {
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json",
    }

    review_payload = {
        "commit_id": commit_sha,
        "body": review_text,
        "event": "COMMENT",  # Use REQUEST_CHANGES for blocking reviews
    }
    if inline_comments:
        # Each entry: {"path": "app/views.py", "line": 42, "side": "RIGHT", "body": "..."}
        review_payload["comments"] = inline_comments

    response = requests.post(url, headers=headers, json=review_payload)
    response.raise_for_status()
    return response.json()["html_url"]
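
One way to populate inline_comments is to ask Claude for machine-readable output alongside the prose review. A sketch, assuming you extend the review prompt so the model also emits a JSON array of issues with file, line, severity, and problem fields (that prompt extension is hypothetical, not part of the prompt above):

import json

def parse_inline_comments(review_json: str) -> list[dict]:
    """Map Claude-reported issues onto GitHub's review comment shape.
    GitHub rejects the review if a line isn't part of the diff, so skip
    any issue the model couldn't pin to a file and line."""
    comments = []
    for issue in json.loads(review_json):
        if issue.get("file") and issue.get("line"):
            comments.append({
                "path": issue["file"],
                "line": issue["line"],
                "side": "RIGHT",  # comment on the new version of the file
                "body": f"**{issue['severity']}**: {issue['problem']}",
            })
    return comments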

Frequently asked questions

Will Claude review every PR or only ones it's triggered on? The GitHub Actions workflow triggers on every pull_request event. Add branches: [main, staging] under the trigger to restrict to specific target branches.

Can Claude approve PRs automatically? Technically yes, but not recommended. Use COMMENT mode and let humans make the approval decision. Automated blocking reviews (REQUEST_CHANGES) can be useful for critical security findings.
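
If you do want blocking behaviour for the worst findings, you can key the review event off the severity tags the review prompt already asks for. A minimal sketch, assuming the model follows the [CRITICAL/HIGH/MEDIUM/LOW] format:

def review_event(review_text: str) -> str:
    """Request changes only when Claude tagged at least one issue [CRITICAL]."""
    return "REQUEST_CHANGES" if "[CRITICAL]" in review_text else "COMMENT"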

How do I prevent the review from running on draft PRs? Change the trigger to types: [opened, synchronize, ready_for_review] and add a condition: if: github.event.pull_request.draft == false.

What if the diff is very large? The workflow truncates to 50KB. For very large PRs (500KB+), split the review by file type or focus only on changed files in specific directories.
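
A sketch of the per-file split: it cuts the unified diff at each file header, so each chunk can be truncated and reviewed independently, or filtered to the directories you care about:

def split_diff_by_file(diff: str) -> dict[str, str]:
    """Split a unified diff into per-file chunks keyed by their headers."""
    chunks: dict[str, str] = {}
    header = None
    for line in diff.splitlines(keepends=True):
        if line.startswith("diff --git"):
            header = line.strip()
            chunks[header] = ""
        if header is not None:
            chunks[header] += line
    return chunks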

Does this replace human code review? No — it augments it. Claude catches mechanical issues (null checks, security patterns, error handling) so human reviewers focus on architecture, business logic, and design decisions.



Take It Further

Claude Agent SDK Cookbook: 40 Production Patterns — Pattern 38 covers the full CI/CD Integration: automated code review, test generation from diffs, security scanning, and the PR workflow that blocks deploys on critical findings.

→ Get the Agent SDK Cookbook — $49

30-day money-back guarantee. Instant download.

AI Disclosure: Drafted with Claude Code; all GitHub Actions patterns verified as of April 2026.
