database-migrations by affaan-m/everything-claude-code
npx skills add https://github.com/affaan-m/everything-claude-code --skill database-migrations
Safe, reversible database schema changes for production systems.
Before applying any migration, know which operations lock or rewrite the table. Adding columns:

```sql
-- GOOD: Nullable column, no lock
ALTER TABLE users ADD COLUMN avatar_url TEXT;

-- GOOD: Column with a default (instant on Postgres 11+, no table rewrite)
ALTER TABLE users ADD COLUMN is_active BOOLEAN NOT NULL DEFAULT true;

-- BAD: NOT NULL without a default on an existing table
ALTER TABLE users ADD COLUMN role TEXT NOT NULL;
-- Postgres rejects this outright if the table has rows; engines that
-- accept it rewrite every row under a lock
```
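When an existing table does need `NOT NULL`, the safe sequence is the add-nullable, backfill, then constrain dance from the anti-pattern table. A sketch (the `'member'` default value is illustrative):

```sql
-- 1. Add the column nullable (instant)
ALTER TABLE users ADD COLUMN role TEXT;

-- 2. Backfill (batched on large tables; see the backfill pattern)
UPDATE users SET role = 'member' WHERE role IS NULL;

-- 3. Add the constraint. Postgres scans the table to validate under a lock;
--    on Postgres 12+ you can avoid the scan by first adding a
--    CHECK (role IS NOT NULL) NOT VALID constraint and validating it
ALTER TABLE users ALTER COLUMN role SET NOT NULL;
```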
Creating indexes:

```sql
-- BAD: Blocks writes on large tables
CREATE INDEX idx_users_email ON users (email);

-- GOOD: Non-blocking, allows concurrent writes
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

-- Note: CONCURRENTLY cannot run inside a transaction block,
-- so most migration tools need special handling for it
```
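For example, Django wraps each migration in a transaction by default. A sketch of opting out so a concurrent index can build, using `atomic = False` with `AddIndexConcurrently` from `django.contrib.postgres` (app, model, and dependency names are illustrative):

```python
from django.contrib.postgres.operations import AddIndexConcurrently
from django.db import migrations, models


class Migration(migrations.Migration):
    atomic = False  # CONCURRENTLY cannot run inside a transaction block

    dependencies = [("accounts", "0016_add_email_column")]  # illustrative

    operations = [
        AddIndexConcurrently(
            model_name="user",
            index=models.Index(fields=["email"], name="idx_users_email"),
        ),
    ]
```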
Never rename directly in production. Use the expand-contract pattern:

```sql
-- Step 1: Add the new column (migration 001)
ALTER TABLE users ADD COLUMN display_name TEXT;

-- Step 2: Backfill data (migration 002, a data migration)
UPDATE users SET display_name = username WHERE display_name IS NULL;

-- Step 3: Update application code to read/write both columns,
-- then deploy the application change

-- Step 4: Stop writing to the old column and drop it (migration 003)
ALTER TABLE users DROP COLUMN username;
```
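The step 3 application change can be sketched in pure Python, with a dict standing in for a table row (no ORM assumed): write both columns, prefer the new one on read.

```python
def save_display_name(row, value):
    """Write both columns until migration 003 drops the old one."""
    row["display_name"] = value
    row["username"] = value  # keep the old column in sync


def read_display_name(row):
    """Fall back to the old column for rows the backfill has not reached."""
    return row["display_name"] if row.get("display_name") is not None else row["username"]


row = {"username": "ada", "display_name": None}  # not yet backfilled
assert read_display_name(row) == "ada"
save_display_name(row, "Ada Lovelace")
assert row == {"username": "Ada Lovelace", "display_name": "Ada Lovelace"}
```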
Dropping a column follows the same deploy-before-drop ordering:

```sql
-- Step 1: Remove all application references to the column
-- Step 2: Deploy the application without the column reference
-- Step 3: Drop the column in the next migration
ALTER TABLE orders DROP COLUMN legacy_status;

-- For Django: use SeparateDatabaseAndState to remove the field from the
-- model without generating DROP COLUMN (then drop it in a later migration)
```
Backfilling data:

```sql
-- BAD: Updates all rows in one transaction (locks table)
UPDATE users SET normalized_email = LOWER(email);

-- GOOD: Batch update with progress (COMMIT inside a DO block requires
-- Postgres 11+ and must run outside an explicit transaction)
DO $$
DECLARE
  batch_size INT := 10000;
  rows_updated INT;
BEGIN
  LOOP
    UPDATE users
    SET normalized_email = LOWER(email)
    WHERE id IN (
      SELECT id FROM users
      WHERE normalized_email IS NULL  -- assumes email is NOT NULL; a NULL
                                      -- email would be re-selected forever
      LIMIT batch_size
      FOR UPDATE SKIP LOCKED
    );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    RAISE NOTICE 'Updated % rows', rows_updated;
    EXIT WHEN rows_updated = 0;
    COMMIT;
  END LOOP;
END $$;
```
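The same loop-until-zero pattern can be driven from application code when a raw DO block is awkward. A minimal pure-Python sketch, with a list of dicts standing in for the table (no database involved):

```python
def backfill_in_batches(rows, batch_size):
    """Normalize rows batch by batch; stop when a pass touches nothing."""
    total = 0
    while True:
        # Select up to batch_size rows that have not been processed yet
        batch = [r for r in rows if r["normalized_email"] is None][:batch_size]
        if not batch:
            return total
        for r in batch:
            r["normalized_email"] = r["email"].lower()
        total += len(batch)


rows = [{"email": f"User{i}@Example.com", "normalized_email": None} for i in range(7)]
assert backfill_in_batches(rows, batch_size=3) == 7  # passes of 3, 3, 1, then 0
assert all(r["normalized_email"] == r["email"].lower() for r in rows)
```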
Prisma:

```bash
# Create a migration from schema changes
npx prisma migrate dev --name add_user_avatar

# Apply pending migrations in production
npx prisma migrate deploy

# Reset the database (dev only)
npx prisma migrate reset

# Generate the client after schema changes
npx prisma generate
```
```prisma
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  name      String?
  avatarUrl String?  @map("avatar_url")
  createdAt DateTime @default(now()) @map("created_at")
  updatedAt DateTime @updatedAt @map("updated_at")
  orders    Order[]

  @@map("users")
  @@index([email])
}
```
For operations Prisma cannot express (concurrent indexes, data backfills):

```bash
# Create an empty migration, then edit the SQL manually
npx prisma migrate dev --create-only --name add_email_index
```

```sql
-- migrations/20240115_add_email_index/migration.sql
-- Prisma cannot generate CONCURRENTLY, so we write it by hand
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users (email);
```
Drizzle:

```bash
# Generate a migration from schema changes
npx drizzle-kit generate

# Apply migrations
npx drizzle-kit migrate

# Push the schema directly (dev only, no migration file)
npx drizzle-kit push
```

```ts
import { pgTable, text, timestamp, uuid, boolean } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: uuid("id").primaryKey().defaultRandom(),
  email: text("email").notNull().unique(),
  name: text("name"),
  isActive: boolean("is_active").notNull().default(true),
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});
```
Django:

```bash
# Generate migrations from model changes
python manage.py makemigrations

# Apply migrations
python manage.py migrate

# Show migration status
python manage.py showmigrations

# Generate an empty migration for custom SQL
python manage.py makemigrations --empty app_name -n description
```
```python
from django.db import migrations


def backfill_display_names(apps, schema_editor):
    User = apps.get_model("accounts", "User")
    batch_size = 5000
    users = User.objects.filter(display_name="")
    while users.exists():  # the lazy queryset re-evaluates on each pass
        batch = list(users[:batch_size])
        for user in batch:
            user.display_name = user.username
        User.objects.bulk_update(batch, ["display_name"], batch_size=batch_size)


def reverse_backfill(apps, schema_editor):
    pass  # data migration, nothing to reverse


class Migration(migrations.Migration):
    # Without this the whole backfill runs in one transaction on Postgres
    # and batching gains nothing (at the cost of partial application on failure)
    atomic = False

    dependencies = [("accounts", "0015_add_display_name")]
    operations = [
        migrations.RunPython(backfill_display_names, reverse_backfill),
    ]
```
Remove a column from the Django model without dropping it from the database immediately:

```python
from django.db import migrations


class Migration(migrations.Migration):
    operations = [
        migrations.SeparateDatabaseAndState(
            state_operations=[
                migrations.RemoveField(model_name="user", name="legacy_field"),
            ],
            database_operations=[],  # don't touch the DB yet
        ),
    ]
```
golang-migrate:

```bash
# Create a migration pair
migrate create -ext sql -dir migrations -seq add_user_avatar

# Apply all pending migrations
migrate -path migrations -database "$DATABASE_URL" up

# Roll back the last migration
migrate -path migrations -database "$DATABASE_URL" down 1

# Force version (fix dirty state)
migrate -path migrations -database "$DATABASE_URL" force VERSION
```
```sql
-- migrations/000003_add_user_avatar.up.sql
ALTER TABLE users ADD COLUMN avatar_url TEXT;

-- Caution: if the driver sends this file as one implicit transaction,
-- the CONCURRENTLY statement fails; putting it in its own migration
-- file is safer
CREATE INDEX CONCURRENTLY idx_users_avatar ON users (avatar_url)
  WHERE avatar_url IS NOT NULL;
```

```sql
-- migrations/000003_add_user_avatar.down.sql
DROP INDEX IF EXISTS idx_users_avatar;
ALTER TABLE users DROP COLUMN IF EXISTS avatar_url;
```
For critical production changes, follow the expand-contract pattern:

Phase 1: EXPAND
- Add the new column/table (nullable or with a default)
- Deploy: app writes to BOTH old and new
- Backfill existing data

Phase 2: MIGRATE
- Deploy: app reads from NEW, writes to BOTH
- Verify data consistency

Phase 3: CONTRACT
- Deploy: app only uses NEW
- Drop the old column/table in a separate migration

Example timeline for replacing a `status` column:

Day 1: Migration adds new_status column (nullable)
Day 1: Deploy app v2 — writes to both status and new_status
Day 2: Run backfill migration for existing rows
Day 3: Deploy app v3 — reads from new_status only
Day 7: Migration drops old status column
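Phase 2's "verify data consistency" can be as simple as diffing the two columns. A minimal pure-Python sketch, with dicts standing in for table rows (column names match the timeline above):

```python
def find_mismatches(rows, old, new):
    """Return rows where the new column disagrees with (or lags) the old one."""
    return [r for r in rows if r[new] != r[old]]


rows = [
    {"status": "paid", "new_status": "paid"},
    {"status": "open", "new_status": None},  # missed by the backfill
]
assert find_mismatches(rows, "status", "new_status") == [
    {"status": "open", "new_status": None}
]
```

In practice this would run as a sampled query against the table before the Day 3 deploy; an empty result is the green light to cut reads over.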
| Anti-Pattern | Why It Fails | Better Approach |
|---|---|---|
| Manual SQL in production | No audit trail, unrepeatable | Always use migration files |
| Editing deployed migrations | Causes drift between environments | Create new migration instead |
| NOT NULL without default | Locks table, rewrites all rows | Add nullable, backfill, then add constraint |
| Inline index on large table | Blocks writes during build | CREATE INDEX CONCURRENTLY |
| Schema + data in one migration | Hard to rollback, long transactions | Separate migrations |
| Dropping column before removing code | Application errors on missing column | Remove code first, drop column next deploy |
Weekly Installs: 660 · GitHub Stars: 69.1K · First Seen: Feb 13, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Pass
Installed on: codex (586), opencode (583), gemini-cli (567), github-copilot (557), kimi-cli (531), amp (528)