Claude Code Database Schema Migration: Safe Strategy with Examples (2026)
Claude Code can plan, write, and review database schema migrations when you provide the current schema, target schema, and constraints (zero-downtime, existing data, foreign key rules). The safest workflow is: describe the change → Claude generates the migration script → you review → run on a copy of production data first. This guide covers Alembic (Python/SQLAlchemy), Prisma (Node.js), and raw SQL migrations with rollback and zero-downtime patterns.
Why Database Migrations Are High-Risk
Migrations are among the most dangerous operations in a software project:
- A bad migration can cause data loss that is irreversible
- Schema changes on live tables can lock rows for minutes, causing outage
- Rollback is often harder than forward — some changes (column drops, type narrowing) cannot be undone without a restore
Benchmark: In a PostgreSQL table with 50M rows, adding a column with a DEFAULT value the naive way (ALTER TABLE ... ADD COLUMN ... DEFAULT ...) locks the table for 4–8 minutes on PostgreSQL < 11. The correct pattern (add nullable column, backfill in batches, add default, add NOT NULL constraint) takes 30 minutes but causes zero downtime. Claude Code knows the difference — if you describe your requirements correctly.
The Core Prompt Pattern for Migrations
Every migration prompt to Claude should include:
- Current schema — the DDL or ORM model as it exists today
- Target state — what you want the schema to look like
- Database version — e.g., PostgreSQL 15, MySQL 8.0
- Table size — approximate row count (affects strategy choice)
- Downtime tolerance — zero-downtime vs. maintenance window OK
Template:
Current schema:
[paste CREATE TABLE or ORM model]
Target: I need to [describe the change — add column, rename, change type, etc.]
Constraints:
- Database: PostgreSQL 15
- Table has ~2M rows
- Zero-downtime required (service is live 24/7)
- Existing data must be preserved
Generate a migration that achieves the target safely.
Also generate the rollback migration.
Alembic Migrations (Python / SQLAlchemy)
Scenario 1: Add a NOT NULL column to a large table
Current SQLAlchemy model:
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    email = Column(String, nullable=False)
    created_at = Column(DateTime, default=func.now())
I need to add: subscription_tier = Column(String, nullable=False, default='free')
Table has 3M rows. Zero-downtime required. PostgreSQL 14.
Generate the Alembic migration (upgrade and downgrade).
Claude's output should use a three-phase approach:
# migrations/versions/add_subscription_tier.py
from alembic import op
import sqlalchemy as sa

def upgrade():
    # Phase 1: Add nullable column (instant, no lock)
    op.add_column('users',
        sa.Column('subscription_tier', sa.String(), nullable=True)
    )
    # Phase 2: Backfill in batches (keeps each UPDATE statement short)
    op.execute("""
        DO $$
        DECLARE
            batch_size INT := 10000;
        BEGIN
            LOOP
                UPDATE users
                SET subscription_tier = 'free'
                WHERE id IN (
                    SELECT id FROM users
                    WHERE subscription_tier IS NULL
                    ORDER BY id
                    LIMIT batch_size
                );
                EXIT WHEN NOT FOUND;
                PERFORM pg_sleep(0.01); -- throttle
            END LOOP;
        END $$;
    """)
    # Phase 3: Add NOT NULL constraint + server default
    op.alter_column('users', 'subscription_tier',
        nullable=False,
        server_default='free'
    )

def downgrade():
    op.drop_column('users', 'subscription_tier')
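One caveat worth knowing: a DO $$ block executes inside the migration's single transaction, so the batching keeps each UPDATE statement short but does not release row locks between batches. For very large tables it is often safer to drive the backfill from application code and commit after each batch. A minimal sketch of that loop, using the stdlib sqlite3 driver as a stand-in (against PostgreSQL you would use psycopg2 and %s placeholders; the table and column names match the scenario above):

```python
import sqlite3

def backfill_in_batches(conn, batch_size=10_000):
    """Backfill subscription_tier = 'free', committing after each batch."""
    total = 0
    while True:
        cur = conn.execute(
            """
            UPDATE users SET subscription_tier = 'free'
            WHERE id IN (
                SELECT id FROM users
                WHERE subscription_tier IS NULL
                ORDER BY id
                LIMIT ?
            )
            """,
            (batch_size,),
        )
        conn.commit()  # row locks taken by this batch are released here
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total
```

Because each commit releases the locks taken by that batch, concurrent writes to users are only ever blocked briefly, no matter how large the table is.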
Scenario 2: Rename a column
Renaming a column requires a coordinated deploy — you cannot rename atomically while the app is reading the old name.
I need to rename column 'username' to 'display_name' in the users table.
App code uses the old column name. How do I do this with zero downtime?
Expected response pattern from Claude: expand-then-contract
# Step 1 migration: Add new column, keep old
def upgrade():
    op.add_column('users', sa.Column('display_name', sa.String()))
    # Copy data
    op.execute("UPDATE users SET display_name = username")
    # Index on new column
    op.create_index('ix_users_display_name', 'users', ['display_name'])

def downgrade():
    op.drop_index('ix_users_display_name', table_name='users')
    op.drop_column('users', 'display_name')

# Step 2 (after deploying code that writes to both columns):

# Step 3 migration: Drop old column once all reads use new column
def upgrade():
    op.drop_column('users', 'username')
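Between Step 1 and Step 3, the application has to tolerate both columns at once. One way to sketch that transitional "write both, read new then old" behavior (this is a hypothetical plain class for illustration, not the actual SQLAlchemy model):

```python
class TransitionalUser:
    """Transitional shim during the username -> display_name rename."""

    def __init__(self, username=None, display_name=None):
        self._username = username
        self._display_name = display_name

    @property
    def display_name(self):
        # Read the new column first, fall back to the old one for rows
        # written before the dual-write deploy.
        if self._display_name is not None:
            return self._display_name
        return self._username

    @display_name.setter
    def display_name(self, value):
        # Dual-write: keep both columns in sync until username is dropped.
        self._display_name = value
        self._username = value
```

Once every row has been written through the dual-write path (or backfilled by the Step 1 migration), the fallback read and the old column can both be removed.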
50 database migration prompts that work in production
Power Prompts ($29) includes tested migration prompt templates for PostgreSQL, MySQL, and SQLite — add column, rename, type change, index creation, and foreign key patterns.
Prisma Migrations (Node.js / TypeScript)
Prisma manages migrations differently — you edit the schema file and Prisma generates the SQL. Claude Code is most useful for designing the schema change and writing the corresponding data migration.
Scenario: Add a relation between two existing tables
Current Prisma schema:
model User {
  id    Int    @id @default(autoincrement())
  email String @unique
}

model Order {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
}
I need to link Orders to Users. Each order belongs to one user.
Existing orders should be assigned to a default system user (id=1).
Generate:
1. The updated Prisma schema
2. The SQL data migration to populate userId on existing rows
Claude output:
// schema.prisma (updated)
model User {
  id     Int     @id @default(autoincrement())
  email  String  @unique
  orders Order[]
}

model Order {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
  // Start optional so the generated migration applies cleanly to existing rows;
  // switch to required (Int / User) in a follow-up migration after the backfill
  userId    Int?
  user      User?    @relation(fields: [userId], references: [id])
}

-- Data migration: assign existing orders to system user
-- Run before the follow-up migration that makes user_id NOT NULL
UPDATE orders SET user_id = 1 WHERE user_id IS NULL;
Prisma workflow with Claude:
# 1. Edit schema.prisma with Claude's suggested changes
# 2. Create migration (Prisma generates SQL)
npx prisma migrate dev --name add_order_user_relation
# 3. Apply data migration separately if needed
npx prisma db execute --file migrations/data/backfill_order_users.sql --schema prisma/schema.prisma
Raw SQL Migrations
For teams not using an ORM, or for DBA-controlled environments:
I need to split the 'full_name' column in the contacts table into
'first_name' and 'last_name'. PostgreSQL 15. Table has 500K rows.
Generate:
1. DDL to add the new columns
2. UPDATE statement to populate from existing full_name
3. Verification query to check results
4. Rollback DDL
Claude output:
-- Step 1: Add columns (nullable first)
ALTER TABLE contacts ADD COLUMN first_name VARCHAR(255);
ALTER TABLE contacts ADD COLUMN last_name VARCHAR(255);
-- Step 2: Populate from existing data
-- (single-word names get first_name only; POSITION returns 0 in that case,
-- so without the CASE they would get the whole name copied into last_name too)
UPDATE contacts
SET
    first_name = SPLIT_PART(full_name, ' ', 1),
    last_name = CASE
        WHEN POSITION(' ' IN full_name) = 0 THEN NULL
        ELSE NULLIF(TRIM(SUBSTRING(full_name FROM POSITION(' ' IN full_name) + 1)), '')
    END
WHERE full_name IS NOT NULL AND full_name != '';
-- Step 3: Verify before adding NOT NULL
SELECT
COUNT(*) FILTER (WHERE first_name IS NULL) AS missing_first,
COUNT(*) FILTER (WHERE last_name IS NULL) AS missing_last,
COUNT(*) as total
FROM contacts;
-- Step 4: Add NOT NULL after verification (run only if Step 3 shows missing_first = 0)
ALTER TABLE contacts ALTER COLUMN first_name SET NOT NULL;
-- last_name stays nullable: single-word names have no last name to split out
-- Rollback:
-- ALTER TABLE contacts DROP COLUMN first_name;
-- ALTER TABLE contacts DROP COLUMN last_name;
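Before running the UPDATE at scale, it can help to prototype the split logic in Python and check it against a sample export of real names. This sketch implements the intended split (single-word names get a NULL last_name), which also makes it easy to sanity-check the SQL's edge cases:

```python
def split_full_name(full_name):
    """Intended split: first token -> first_name, remainder -> last_name (or None)."""
    if not full_name:
        return None, None
    first, _, rest = full_name.partition(" ")
    rest = rest.strip()
    return first, (rest or None)
```

Running this over a few thousand sampled rows surfaces the awkward cases (middle names, single-word names, trailing spaces) before the migration touches production data.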
Pre-Migration Checklist Prompt
Before executing any migration, ask Claude to review it:
Review this migration script for:
1. Lock risks on PostgreSQL (what operations will lock the table?)
2. Missing rollback
3. Data loss potential
4. Index impact (will existing indexes be invalidated?)
5. Constraint violations on existing data
Migration script:
[paste script]
This review prompt consistently catches mistakes that cause production incidents. The most common catches: missing index on the new foreign key column, and ALTER TABLE operations that lock on older PostgreSQL versions.
Zero-Downtime Migration Checklist
For every zero-downtime migration, verify:
- All ALTER TABLE operations take only a brief lock (no table rewrite) on your PostgreSQL version
- New columns are added as nullable before any NOT NULL constraint
- Backfills use batched updates, not a single UPDATE on the whole table
- The app can handle both old and new schema simultaneously during deploy
- Rollback migration is written and tested before apply
- Indexes are created with CREATE INDEX CONCURRENTLY (non-locking)
- Foreign keys are added NOT VALID, then checked separately with VALIDATE CONSTRAINT
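Parts of this checklist can also be enforced mechanically in CI. A rough sketch of a lint script (the regex patterns are illustrative assumptions, not an exhaustive or authoritative parser; real DDL varies too much for regexes to catch everything):

```python
import re

# Each pattern flags one checklist violation; [^;] keeps matches within one statement.
RISKY_PATTERNS = [
    (r"ADD\s+COLUMN\s[^;]*\bNOT\s+NULL\b",
     "column added NOT NULL in one step; add nullable, backfill, then constrain"),
    (r"CREATE\s+(?:UNIQUE\s+)?INDEX\s+(?!CONCURRENTLY\b)",
     "index created without CONCURRENTLY"),
    (r"ADD\s+CONSTRAINT\s[^;]*FOREIGN\s+KEY(?![^;]*NOT\s+VALID)",
     "foreign key added without NOT VALID"),
]

def lint_migration(sql):
    """Return a list of checklist violations found in a migration script."""
    findings = []
    for pattern, message in RISKY_PATTERNS:
        if re.search(pattern, sql, re.IGNORECASE):
            findings.append(message)
    return findings
```

A non-empty result can fail the CI job and force a human review of the migration before it merges.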
For integrating migration checks into your CI pipeline, see Claude Code CI/CD Integration and Claude Code Complete Guide.
Frequently Asked Questions
Can Claude Code write migration scripts for any database?
Claude Code generates migrations for PostgreSQL, MySQL, SQLite, SQL Server, and ORMs including SQLAlchemy/Alembic, Prisma, Sequelize, TypeORM, and Django ORM. Specify the database type and version explicitly — the safe strategy for adding a NOT NULL column differs between PostgreSQL 10 and PostgreSQL 11+.
How do I prevent Claude from generating a migration that locks the table?
Include the constraint explicitly in your prompt: "Zero-downtime required — avoid any operation that locks the table for more than milliseconds." Also tell Claude the row count. For tables over 1M rows, Claude should automatically suggest the batched backfill pattern instead of a single-statement approach.
Should I review Claude's migration before running it?
Always. Treat Claude's migration output as a starting point, not a final product. Run EXPLAIN on any UPDATE or SELECT involved, verify the rollback works on a copy of the schema, and test on a staging database with production-scale data before applying to production.
How do I handle migrations that change column data types?
Type changes are among the most dangerous operations. Tell Claude the current type, target type, and what data is in the column. For example: "I need to change user_id from VARCHAR to INTEGER. Current data contains only numeric strings." Claude will generate a migration that adds a new column, casts data, and swaps — rather than a destructive ALTER TYPE.
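A quick way to test that "only numeric strings" claim before migrating is a pre-flight check over an export of the column. This hypothetical helper flags values that would fail a VARCHAR → INTEGER cast:

```python
def numeric_cast_preflight(values):
    """Return the values that would fail a VARCHAR -> INTEGER cast."""
    failures = []
    for v in values:
        if v is None:
            continue  # NULLs cast cleanly
        try:
            int(str(v).strip())
        except ValueError:
            failures.append(v)
    return failures
```

If the returned list is non-empty, those rows need cleanup (or a decision about them) before the cast-and-swap migration runs.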
What is the safest way to drop a column with Claude Code?
Follow the expand-then-contract pattern: first deploy code that stops reading/writing the column, then in a later release, run the DROP COLUMN migration. Never drop a column in the same deployment where you remove the code that uses it — if the deploy fails, you cannot roll back the schema while the old code is still trying to read the column.
50 tested migration prompts for PostgreSQL, MySQL, and ORMs
Power Prompts ($29) includes zero-downtime migration patterns, rollback templates, and data backfill strategies for the most common schema change scenarios.