```shell
npx skills add https://github.com/patricio0312rev/skills --skill rfc-generator
```
Create comprehensive technical proposals with RFCs.
# RFC-042: Implement Read Replicas for Analytics
**Status:** In Review
**Author:** Alice (alice@example.com)
**Reviewers:** Bob, Charlie, David
**Created:** 2024-01-15
**Updated:** 2024-01-20
**Target Date:** Q1 2024
## Summary
Add PostgreSQL read replicas to separate analytical queries from transactional workload, improving database performance and enabling new analytics features.
## Problem Statement
### Current Situation
Our PostgreSQL database serves both transactional (OLTP) and analytical (OLAP) workloads:
- 1000 writes/min (checkout, orders, inventory)
- 5000 reads/min (user browsing, search)
- 500 analytics queries/min (dashboards, reports)
### Issues
1. **Performance degradation**: Analytics queries slow down transactions
2. **Resource contention**: Complex reports consume CPU/memory
3. **Blocking features**: Can't add more dashboards without impacting users
4. **Peak hour problems**: Analytics scheduled during business hours
### Impact
- Checkout p95 latency: 800ms (target: <300ms)
- Database CPU: 75% average, 95% peak
- Customer complaints about slow pages
- Product team blocked on analytics features
### Success Criteria
- Checkout latency <300ms p95
- Database CPU <50%
- Support 2x more analytics queries
- Zero impact on transactional performance
## Proposed Solution
### High-Level Design
```
┌──────────────┐
│   Primary    │
│   (Write)    │
└──────┬───────┘
       │ streaming replication
       ├──────────────────┐
       ▼                  ▼
┌──────────────┐   ┌──────────────┐
│  Replica 1   │   │  Replica 2   │
│   (Read)     │   │ (Analytics)  │
└──────────────┘   └──────────────┘
```
### Architecture
1. **Primary database**: Handles all writes and critical reads
2. **Read Replica 1**: Serves user-facing read queries
3. **Read Replica 2**: Dedicated to analytics/reporting
### Routing Strategy
```typescript
const db = {
  primary: primaryConnection,
  read: replicaConnection,
  analytics: analyticsConnection,
};

// Writes always go to the primary
await db.primary.users.create(data);

// Critical reads (always fresh)
await db.primary.users.findById(id);

// Non-critical reads (can be slightly stale)
await db.read.products.search(query);

// Analytics
await db.analytics.orders.aggregate(pipeline);
```
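One wrinkle the routing table above glosses over is read-your-writes consistency: a user who just checked out should not see a pre-checkout view served from a lagging replica. A minimal sketch of one common fix, pinning a user's reads to the primary for a short window after each write (all names here are illustrative, not from any particular library):

```typescript
// Illustrative read-your-writes router: after a user writes, route that
// user's reads to the primary for `staleWindowMs` so they never observe
// replica lag. The clock is injectable for testing.
type Target = "primary" | "read";

class ReadYourWritesRouter {
  private lastWriteAt = new Map<string, number>();

  constructor(
    private staleWindowMs: number,
    private now: () => number = Date.now,
  ) {}

  recordWrite(userId: string): void {
    this.lastWriteAt.set(userId, this.now());
  }

  // Primary while the user's last write is inside the window; replica after.
  targetFor(userId: string): Target {
    const t = this.lastWriteAt.get(userId);
    if (t !== undefined && this.now() - t < this.staleWindowMs) {
      return "primary";
    }
    return "read";
  }
}
```

The window should sit comfortably above observed replication lag; with sub-second lag, a few seconds is typical.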
### Instance Configuration

```yaml
# Primary
max_connections: 200
shared_buffers: 4GB
work_mem: 16MB

# Read Replica
max_connections: 100
shared_buffers: 8GB
work_mem: 32MB

# Analytics Replica
max_connections: 50
shared_buffers: 16GB
work_mem: 64MB
```
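A quick sanity check on these numbers (a rough sketch only; a real PostgreSQL query can allocate `work_mem` once per sort or hash node, so the product below is a planning heuristic, not a hard ceiling):

```typescript
// Planning heuristic: connections × work_mem bounds the "one allocation per
// connection" case. Values mirror the config above; each tier trades
// connection count against per-query memory and lands on the same budget.
const instances = [
  { name: "primary", maxConnections: 200, workMemMB: 16 },
  { name: "read", maxConnections: 100, workMemMB: 32 },
  { name: "analytics", maxConnections: 50, workMemMB: 64 },
];

function worstCaseWorkMemGB(maxConnections: number, workMemMB: number): number {
  return (maxConnections * workMemMB) / 1024;
}

for (const i of instances) {
  // Each tier works out to the same 3.125 GB work_mem budget.
  console.log(`${i.name}: ~${worstCaseWorkMemGB(i.maxConnections, i.workMemMB)} GB`);
}
```

Fewer, fatter connections on the analytics replica give each report more sort/hash memory without growing the total budget.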
### Connection Pooling

```typescript
const pools = {
  primary: new Pool({ max: 20, min: 5 }),
  read: new Pool({ max: 50, min: 10 }),
  analytics: new Pool({ max: 10, min: 2 }),
};

enum QueryType {
  WRITE = "primary",
  CRITICAL_READ = "primary",
  READ = "read",
  ANALYTICS = "analytics",
}

function route(queryType: QueryType) {
  return pools[queryType];
}
```
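The `route` function above assumes every pool is healthy. One possible extension, sketched below under the assumption that health checks exist elsewhere (the names `PoolName` and `routeWithFallback` are illustrative, not part of the proposal), is a degradation policy where analytics falls back to the read replica and never to the primary, preserving the "zero impact on transactional performance" success criterion:

```typescript
// Illustrative fallback routing: analytics traffic must never land on the
// primary, even when its dedicated replica is down.
type PoolName = "primary" | "read" | "analytics";

function routeWithFallback(
  wanted: PoolName,
  healthy: Record<PoolName, boolean>,
): PoolName {
  if (healthy[wanted]) return wanted;
  // Analytics degrades to the read replica; never to the primary.
  if (wanted === "analytics" && healthy.read) return "read";
  // User-facing reads degrade to the primary as a last resort.
  if (wanted === "read" && healthy.primary) return "primary";
  throw new Error(`no healthy pool for ${wanted}`);
}
```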
## Alternatives Considered

- **Approach:** Upgrade to a larger database instance
- **Approach:** Copy data to a dedicated analytics DB (e.g., ClickHouse)
- **Approach:** Pre-compute analytics results
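For the pre-compute alternative, the heart of such a job could look like the sketch below (the `Order` and `DailyRollup` shapes are illustrative, not from this proposal): raw orders are rolled up into daily summaries offline, so dashboards scan tiny aggregates instead of raw rows.

```typescript
// Illustrative offline rollup: one summary row per day instead of one row
// per order.
interface Order {
  placedAt: string; // ISO-8601 timestamp
  totalCents: number;
}

interface DailyRollup {
  day: string; // YYYY-MM-DD
  orders: number;
  revenueCents: number;
}

function rollupByDay(orders: Order[]): DailyRollup[] {
  const byDay = new Map<string, DailyRollup>();
  for (const o of orders) {
    const day = o.placedAt.slice(0, 10);
    const r = byDay.get(day) ?? { day, orders: 0, revenueCents: 0 };
    r.orders += 1;
    r.revenueCents += o.totalCents;
    byDay.set(day, r);
  }
  return [...byDay.values()].sort((a, b) => a.day.localeCompare(b.day));
}
```

The trade-off is freshness and flexibility: pre-computed rollups answer only the questions chosen in advance, which is part of why this RFC prefers a live analytics replica.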
## Risks

| Impact | Probability | Mitigation |
|---|---|---|
| Analytics sees stale data | Medium | |
| Routing errors, performance issues | Low | |
| Budget exceeded | Low | |
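For the stale-data risk, one common mitigation is to monitor replica lag and fail loudly when it exceeds a budget. On PostgreSQL a replica's lag in seconds can be read with `SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp())`; in the sketch below that query is abstracted behind an injected function (`queryLagSeconds` is a stand-in, not a real client API):

```typescript
// Illustrative lag guard: refuse to serve from a replica whose replication
// lag exceeds the caller's freshness budget.
async function assertFreshEnough(
  queryLagSeconds: () => Promise<number>,
  maxLagSeconds: number,
): Promise<number> {
  const lag = await queryLagSeconds();
  if (lag > maxLagSeconds) {
    throw new Error(`replica lag ${lag}s exceeds budget ${maxLagSeconds}s`);
  }
  return lag;
}
```

Callers with strict freshness needs can catch the error and retry against the primary; dashboards can simply display the lag.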
## Cost Analysis

| Component | Current | Proposed | Delta |
|---|---|---|---|
| Primary DB | $500/mo | $500/mo | $0 |
| Read Replica | - | $500/mo | +$500 |
| Analytics Replica | - | $300/mo | +$300 |
| Total | $500/mo | $1,300/mo | +$800/mo |
**ROI:** Better performance enables revenue growth; analytics unlocks product insights.
## Changelog

- 2024-01-15: Initial draft (Alice)
- 2024-01-17: Added cost analysis (Bob)
- 2024-01-20: Addressed review comments
---

**Weekly Installs:** 64 · **GitHub Stars:** 20 · **First Seen:** Jan 24, 2026
**Security Audits:** Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Pass
**Installed on:** opencode (55), codex (55), gemini-cli (54), github-copilot (53), claude-code (48), cursor (48)