postgresql-best-practices by mindrally/skills
npx skills add https://github.com/mindrally/skills --skill postgresql-best-practices
Use appropriate native types: UUID, JSONB, ARRAY, INET, CIDR
Prefer TIMESTAMPTZ over TIMESTAMP for timezone-aware applications
Use TEXT instead of VARCHAR when no length limit is needed
Consider NUMERIC for precise decimal calculations (financial data)
Use SERIAL or BIGSERIAL for auto-incrementing IDs, or UUID for distributed systems
CREATE TABLE orders (
    order_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    customer_id UUID NOT NULL REFERENCES customers(customer_id),
    order_data JSONB NOT NULL DEFAULT '{}',
    tags TEXT[] DEFAULT '{}',
    total_amount NUMERIC(12, 2) NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
Always define primary keys
Use foreign keys with appropriate ON DELETE/UPDATE actions
Add NOT NULL constraints where appropriate
Use CHECK constraints for data validation
Consider partitioning for large tables
CREATE TABLE products (
    product_id SERIAL PRIMARY KEY,
    sku VARCHAR(50) NOT NULL UNIQUE,
    name TEXT NOT NULL,
    price NUMERIC(10, 2) NOT NULL CHECK (price >= 0),
    status VARCHAR(20) NOT NULL DEFAULT 'active'
        CHECK (status IN ('active', 'inactive', 'discontinued')),
    metadata JSONB DEFAULT '{}'
);
Use declarative partitioning for large tables (millions of rows)
Choose appropriate partition strategy: RANGE, LIST, or HASH
Create indexes on partitioned tables after partitioning
CREATE TABLE events (
    event_id BIGSERIAL,
    event_type VARCHAR(50) NOT NULL,
    payload JSONB,
    created_at TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (created_at);
CREATE TABLE events_2024_q1 PARTITION OF events FOR VALUES FROM ('2024-01-01') TO ('2024-04-01');
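Since PostgreSQL 11, an index created on the partitioned parent cascades automatically to existing and future partitions. A minimal sketch for the events table above (the index names are illustrative):

```sql
-- Creating the index on the parent propagates it to every partition
CREATE INDEX idx_events_created_at ON events (created_at);

-- GIN index on the JSONB payload for containment queries
CREATE INDEX idx_events_payload ON events USING GIN (payload);
```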
Use B-tree indexes (default) for equality and range queries
Use GIN indexes for JSONB, arrays, and full-text search
Use GiST indexes for geometric data and range types
Use BRIN indexes for large, naturally ordered data
Consider partial indexes for filtered queries
-- B-tree index for common lookups
CREATE INDEX idx_orders_customer ON orders(customer_id);

-- GIN index for JSONB queries
CREATE INDEX idx_orders_data ON orders USING GIN (order_data);

-- Partial index for active records only
CREATE INDEX idx_active_products ON products(name) WHERE status = 'active';

-- Covering index to avoid table lookup
CREATE INDEX idx_orders_covering ON orders(customer_id) INCLUDE (order_date, total_amount);
Regularly run ANALYZE to update statistics
Use REINDEX for bloated indexes
Monitor index usage with pg_stat_user_indexes
Remove unused indexes to reduce write overhead
-- Check index usage
SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC;
Always analyze query plans for slow queries
Look for sequential scans on large tables
Identify missing indexes from query plans
Watch for large discrepancies between estimated and actual row counts
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT c.name, COUNT(o.order_id)
FROM customers c
LEFT JOIN orders o ON c.customer_id = o.customer_id
WHERE c.created_at > '2024-01-01'
GROUP BY c.customer_id, c.name;
Use CTEs for complex query organization
Note: CTEs are optimization fences in older PostgreSQL versions
Use MATERIALIZED/NOT MATERIALIZED hints in PostgreSQL 12+
WITH recent_orders AS MATERIALIZED (
    SELECT customer_id,
           COUNT(*) AS order_count,
           SUM(total_amount) AS total_spent
    FROM orders
    WHERE order_date > CURRENT_DATE - INTERVAL '30 days'
    GROUP BY customer_id
)
SELECT c.name, ro.order_count, ro.total_spent
FROM customers c
JOIN recent_orders ro ON c.customer_id = ro.customer_id
WHERE ro.total_spent > 1000;
Use window functions for analytics queries
Leverage PARTITION BY and ORDER BY for complex calculations
SELECT order_id, customer_id, total_amount,
       SUM(total_amount) OVER (PARTITION BY customer_id ORDER BY order_date) AS running_total,
       ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS order_rank
FROM orders;
Use JSONB over JSON for better performance and indexing
Create GIN indexes for JSONB columns you query
Use containment operators (@>, <@) for efficient queries
Extract frequently queried fields to regular columns
-- Efficient JSONB query with GIN index
SELECT * FROM products
WHERE metadata @> '{"category": "electronics"}';

-- Extract specific fields
SELECT product_id,
       metadata->>'brand' AS brand,
       (metadata->>'rating')::numeric AS rating
FROM products
WHERE metadata ? 'rating';
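One way to promote a hot JSONB field to a regular column is a stored generated column (PostgreSQL 12+); the `brand` field here is an illustrative choice, not a requirement:

```sql
-- Promote a frequently queried JSONB field to a plain, indexable column
ALTER TABLE products
    ADD COLUMN brand TEXT GENERATED ALWAYS AS (metadata->>'brand') STORED;

CREATE INDEX idx_products_brand ON products (brand);
```

The generated column stays in sync automatically on every write, at the cost of some extra storage.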
-- Recommended session settings
SET statement_timeout = '30s';
SET lock_timeout = '10s';
SET idle_in_transaction_session_timeout = '60s';
Use appropriate transaction isolation levels
Keep transactions short to reduce lock contention
Use advisory locks for application-level locking
Monitor and resolve lock conflicts
-- Use advisory locks for application coordination
SELECT pg_advisory_lock(hashtext('resource_name'));
-- Do work
SELECT pg_advisory_unlock(hashtext('resource_name'));

-- Check for blocking queries
SELECT blocked_locks.pid AS blocked_pid,
       blocking_locks.pid AS blocking_pid,
       blocked_activity.query AS blocked_query
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_locks blocking_locks
    ON blocking_locks.locktype = blocked_locks.locktype
    AND blocking_locks.relation = blocked_locks.relation
    AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocked_activity
    ON blocked_activity.pid = blocked_locks.pid
WHERE NOT blocked_locks.granted;
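The isolation-level advice above can be sketched as a serializable money transfer; the accounts table and amounts are illustrative, and the client must be prepared to retry the whole transaction:

```sql
-- SERIALIZABLE is the strictest level; on conflict PostgreSQL aborts
-- one transaction with serialization_failure (SQLSTATE 40001),
-- which the application should catch and retry
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
COMMIT;
```

For read-heavy reporting, the default READ COMMITTED (or REPEATABLE READ for consistent snapshots) is usually sufficient and cheaper.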
Enable autovacuum and tune for your workload
Run manual VACUUM ANALYZE after bulk operations
Monitor table bloat
-- Check table bloat
SELECT schemaname, relname, n_live_tup, n_dead_tup,
       round(n_dead_tup * 100.0 / nullif(n_live_tup + n_dead_tup, 0), 2) AS dead_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC;
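Autovacuum can also be tuned per table for write-heavy workloads; the thresholds below are illustrative starting points, not universal values:

```sql
-- Vacuum and analyze the hot table more aggressively than the
-- global defaults (scale factors 0.2 and 0.1 respectively)
ALTER TABLE orders SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_analyze_scale_factor = 0.02
);
```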
Use SSL/TLS for connections
Implement row-level security (RLS) for multi-tenant applications
Use roles and GRANT/REVOKE for access control
Audit sensitive operations with pgAudit extension
-- Enable row-level security
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY documents_tenant_policy ON documents
    FOR ALL
    USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- Grant minimal privileges
GRANT SELECT, INSERT, UPDATE ON orders TO app_user;
GRANT USAGE ON SEQUENCE orders_order_id_seq TO app_user;
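The RLS policy reads app.tenant_id from the session, so the application must set it on each connection before querying; the UUID below is a hypothetical placeholder:

```sql
-- Session-wide setting (custom GUCs with a dot are allowed)
SET app.tenant_id = '00000000-0000-0000-0000-000000000000';

-- Or transaction-scoped, which resets automatically on COMMIT/ROLLBACK
SELECT set_config('app.tenant_id', '00000000-0000-0000-0000-000000000000', true);
```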
Monitor with pg_stat_statements extension
Track slow queries and optimize regularly
Set up alerts for replication lag, connection count, and disk usage
Use pg_stat_activity to monitor active queries
-- Enable pg_stat_statements
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Find slow queries
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
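pg_stat_activity can be queried directly for currently running statements; the five-minute threshold here is an illustrative choice:

```sql
-- Active queries running longer than five minutes
SELECT pid, now() - query_start AS duration, state, query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
ORDER BY duration DESC;
```

A query surfaced here can be cancelled with pg_cancel_backend(pid), or as a last resort terminated with pg_terminate_backend(pid).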
Weekly Installs: 312
GitHub Stars: 43
First Seen: Jan 25, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: opencode (269), gemini-cli (261), codex (254), github-copilot (243), cursor (218), kimi-cli (205)