Performance Optimizer by daffy0208/ai-dev-standards
npx skills add https://github.com/daffy0208/ai-dev-standards --skill 'Performance Optimizer'
Make applications fast, scalable, and cost-efficient.
Measure first, optimize second. Don't guess at bottlenecks—profile, measure, then fix the slowest parts.
Core Web Vitals:
Largest Contentful Paint (LCP): < 2.5s # Main content visible
Interaction to Next Paint (INP): < 200ms # Interaction responsiveness (replaced FID as a Core Web Vital in 2024)
Cumulative Layout Shift (CLS): < 0.1 # Visual stability
Additional Metrics:
First Contentful Paint (FCP): < 1.8s # First content rendered
Time to Interactive (TTI): < 3.8s # Fully interactive
Total Blocking Time (TBT): < 200ms # Main thread blocked
Speed Index: < 3.4s # Visual progress
Backend Metrics:
API Response Time (P95): < 500ms
Database Query Time (P95): < 100ms
Server Response Time (TTFB): < 600ms
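The P95 targets above are percentiles: 95% of samples fall at or below the threshold. A minimal sketch of computing one from raw response times (nearest-rank method; the function name is illustrative, not a library API):

```javascript
// Compute the p-th percentile (0-100) of numeric samples
// using the nearest-rank method.
function percentile(samples, p) {
  if (samples.length === 0) return NaN
  const sorted = [...samples].sort((a, b) => a - b) // copy: don't mutate input
  const rank = Math.ceil((p / 100) * sorted.length)
  return sorted[Math.max(rank - 1, 0)]
}

// Example: response times in ms
const times = [120, 80, 95, 400, 150, 90, 110, 85, 700, 100]
console.log(`P95: ${percentile(times, 95)}ms`) // → P95: 700ms
```

A single slow outlier dominates P95/P99 long before it moves the average, which is why the targets above are expressed as percentiles rather than means.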
Goal: Identify actual bottlenecks, not perceived ones
Chrome DevTools:
// 1. Performance tab → Record → Reload page
// 2. Analyze:
// - Main thread activity
// - Network waterfall
// - JavaScript execution time
// - Rendering time
// 3. Lighthouse audit
// Run: the Lighthouse panel in Chrome DevTools, or `npm i -g lighthouse`
lighthouse https://yoursite.com --view
React DevTools Profiler:
// Wrap component to profile
import { Profiler } from 'react'
function onRenderCallback(id, phase, actualDuration) {
console.log(`${id} (${phase}) took ${actualDuration}ms`)
}
;<Profiler id="ExpensiveComponent" onRender={onRenderCallback}>
<ExpensiveComponent />
</Profiler>
Node.js Profiling:
# Generate CPU profile
node --prof app.js
# Process profile
node --prof-process isolate-0x*.log > processed.txt
# Flame graphs (better visualization)
npm i -g 0x
0x app.js
Python Profiling:
import cProfile
import pstats
# Profile function
cProfile.run('slow_function()', 'output.prof')
# Analyze
p = pstats.Stats('output.prof')
p.sort_stats('cumulative').print_stats(20)
PostgreSQL:
-- Enable query logging
ALTER DATABASE yourdb SET log_min_duration_statement = 100; -- Log queries >100ms
-- Analyze query
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM users WHERE email = 'test@example.com';
-- Find slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
MongoDB:
// Enable profiling
db.setProfilingLevel(1, { slowms: 100 })
// View slow queries
db.system.profile.find({ millis: { $gt: 100 } }).sort({ ts: -1 })
// Explain query
db.collection.find({ email: 'test@example.com' }).explain('executionStats')
-- Before: Table scan (slow)
SELECT * FROM users WHERE email = 'user@example.com';
-- Execution time: 2000ms on 1M rows
-- After: Index scan (fast)
CREATE INDEX idx_users_email ON users(email);
SELECT * FROM users WHERE email = 'user@example.com';
-- Execution time: 5ms
-- Composite index for multi-column queries
CREATE INDEX idx_posts_user_date ON posts(user_id, created_at DESC);
SELECT * FROM posts WHERE user_id = 123 ORDER BY created_at DESC;
-- Partial index for filtered queries
CREATE INDEX idx_active_users ON users(created_at) WHERE is_active = true;
// ❌ Bad: N+1 query problem (101 database queries)
const users = await User.findAll() // 1 query
for (const user of users) {
user.posts = await Post.findAll({ where: { userId: user.id } }) // N queries
}
// ✅ Good: Eager loading (2 queries)
const users = await User.findAll({
include: [{ model: Post }]
})
// ✅ Better: DataLoader (batching + caching)
const userLoader = new DataLoader(async userIds => {
const users = await User.findAll({ where: { id: userIds } })
return userIds.map(id => users.find(u => u.id === id))
})
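DataLoader's trick is collecting every load(id) made in the same tick and handing the keys to one batch function. A hand-rolled sketch of that batching idea (no caching, illustrative names — not the real DataLoader API):

```javascript
// Minimal batching loader: load() calls made in the same tick
// are queued and resolved together by one batch function call.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn
    this.queue = [] // pending { key, resolve } pairs
  }
  load(key) {
    return new Promise(resolve => {
      this.queue.push({ key, resolve })
      // First pending key schedules a single flush after the current tick
      if (this.queue.length === 1) queueMicrotask(() => this.flush())
    })
  }
  async flush() {
    const pending = this.queue
    this.queue = []
    const results = await this.batchFn(pending.map(p => p.key))
    pending.forEach((p, i) => p.resolve(results[i]))
  }
}

// Usage: three separate load() calls become one batch call
const batches = []
const loader = new TinyLoader(async ids => {
  batches.push(ids)
  return ids.map(id => ({ id, name: `user ${id}` }))
})
Promise.all([loader.load(1), loader.load(2), loader.load(3)]).then(() => {
  console.log(batches) // → [ [ 1, 2, 3 ] ]: one query for three loads
})
```

The real DataLoader also memoizes keys per request, so repeated loads of the same id within one request hit the cache instead of the batch.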
-- Avoid SELECT *
-- ❌ Bad
SELECT * FROM users WHERE id = 1;
-- ✅ Good
SELECT id, name, email FROM users WHERE id = 1;
-- Use LIMIT
-- ❌ Bad
SELECT * FROM posts ORDER BY created_at DESC;
-- ✅ Good
SELECT * FROM posts ORDER BY created_at DESC LIMIT 20;
-- Avoid functions in WHERE clause
-- ❌ Bad (can't use index)
SELECT * FROM users WHERE LOWER(email) = 'user@example.com';
-- ✅ Good (can use index)
SELECT * FROM users WHERE email = 'user@example.com';
-- Store email as lowercase, or use generated column + index
// PostgreSQL connection pool
import { Pool } from 'pg'
const pool = new Pool({
max: 20, // Maximum connections
min: 5, // Minimum connections
idleTimeoutMillis: 30000, // Close idle connections after 30s
connectionTimeoutMillis: 2000 // Error if can't connect in 2s
})
// Always release connections
const client = await pool.connect()
try {
const result = await client.query('SELECT * FROM users')
return result.rows
} finally {
client.release()
}
Browser Cache (HTTP headers)
↓
CDN Cache (Cloudflare, CloudFront)
↓
Application Cache (Redis, Memcached)
↓
Database Query Cache
↓
Database
import Redis from 'ioredis'
const redis = new Redis({
maxRetriesPerRequest: 3,
enableReadyCheck: true
})
async function getUser(id: string): Promise<User> {
const cacheKey = `user:${id}`
// 1. Check cache
const cached = await redis.get(cacheKey)
if (cached) {
return JSON.parse(cached)
}
// 2. Cache miss - fetch from database
const user = await db.users.findById(id)
  // 3. Store in cache (expire in 1 hour); don't cache a missing user
  if (user) await redis.setex(cacheKey, 3600, JSON.stringify(user))
return user
}
// Cache invalidation
async function updateUser(id: string, data: Partial<User>) {
await db.users.update(id, data)
await redis.del(`user:${id}`) // Invalidate cache
}
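The same cache-aside shape works with any store. A process-local sketch with TTL (illustrative helper, no Redis dependency — real deployments want a shared store so all instances see the same cache):

```javascript
// Process-local cache-aside with TTL. Entries stored as { value, expiresAt }.
const cache = new Map()

async function cacheAside(key, ttlMs, fetchFn) {
  const hit = cache.get(key)
  if (hit && hit.expiresAt > Date.now()) return hit.value // hit: skip the fetch
  const value = await fetchFn() // miss: fetch and repopulate
  cache.set(key, { value, expiresAt: Date.now() + ttlMs })
  return value
}

// Usage: the second call inside the TTL never reaches fetchFn
let fetches = 0
const fetchUser = async () => { fetches++; return { id: 1, name: 'Ada' } }
cacheAside('user:1', 60_000, fetchUser)
  .then(() => cacheAside('user:1', 60_000, fetchUser))
  .then(user => console.log(user.name, fetches)) // → Ada 1
```

The TTL check on read doubles as expiry; a production version would also evict stale entries to bound memory.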
// Express middleware
app.use((req, res, next) => {
// Static assets: cache for 1 year
if (req.url.match(/\.(js|css|png|jpg|jpeg|gif|svg|woff|woff2)$/)) {
res.setHeader('Cache-Control', 'public, max-age=31536000, immutable')
}
// HTML: no cache (always revalidate)
if (req.url.endsWith('.html') || req.url === '/') {
res.setHeader('Cache-Control', 'no-cache, must-revalidate')
}
// API responses: cache for 5 minutes
if (req.url.startsWith('/api/')) {
res.setHeader('Cache-Control', 'public, max-age=300')
res.setHeader('ETag', generateETag(req.url))
}
next()
})
Static Assets to CDN:
- Images: /images/**
- JavaScript: /js/**
- CSS: /css/**
- Fonts: /fonts/**
CDN Settings:
- Cache duration: 1 year (with versioned URLs)
- Gzip/Brotli compression: enabled
- Image optimization: WebP conversion
- Purge on deploy: yes (via API)
Recommended CDNs:
- Cloudflare (free tier excellent)
- CloudFront (AWS integration)
- Fastly (enterprise, very fast)
// React lazy loading
import { lazy, Suspense } from 'react'
// ❌ Bad: Load everything upfront
import Dashboard from './Dashboard'
import AdminPanel from './AdminPanel'
// ✅ Good: Lazy load routes
const Dashboard = lazy(() => import('./Dashboard'))
const AdminPanel = lazy(() => import('./AdminPanel'))
function App() {
return (
<Suspense fallback={<LoadingSpinner />}>
<Routes>
<Route path="/dashboard" element={<Dashboard />} />
<Route path="/admin" element={<AdminPanel />} />
</Routes>
</Suspense>
)
}
// Next.js dynamic imports
import dynamic from 'next/dynamic'
const HeavyComponent = dynamic(() => import('./HeavyComponent'), {
loading: () => <LoadingSpinner />,
ssr: false // Skip SSR for this component
})
// Next.js Image component (automatic optimization)
import Image from 'next/image'
<Image
src="/photo.jpg"
width={800}
height={600}
alt="Description"
loading="lazy" // Lazy load off-screen images
placeholder="blur" // Blur placeholder while loading
quality={75} // 75% quality (good balance)
/>
// WebP format with fallback
<picture>
<source srcset="image.webp" type="image/webp" />
<source srcset="image.jpg" type="image/jpeg" />
<img src="image.jpg" alt="Description" loading="lazy" />
</picture>
// Responsive images
<img
srcset="
small.jpg 480w,
medium.jpg 768w,
large.jpg 1200w
"
sizes="(max-width: 480px) 480px, (max-width: 768px) 768px, 1200px"
src="medium.jpg"
alt="Description"
/>
# Analyze bundle
npm run build -- --analyze
# Reduce bundle size:
# 1. Remove unused dependencies
npm uninstall unused-package
# 2. Use tree-shaking compatible imports
# ❌ Bad
import _ from 'lodash'
# ✅ Good
import debounce from 'lodash/debounce'
# 3. Dynamic imports for large libraries
const moment = await import('moment')
# 4. Minification (automatic in production builds)
# Vite/Next.js handle this automatically
// 1. Memoize expensive calculations
import { useMemo } from 'react'
function DataTable({ data }) {
const sortedData = useMemo(
() => [...data].sort((a, b) => a.name.localeCompare(b.name)), // copy first: sort() mutates in place
[data]
)
return <Table data={sortedData} />
}
// 2. Memoize components
import { memo } from 'react'
const ExpensiveComponent = memo(function ExpensiveComponent({ data }) {
// Only re-renders if data changes
return <div>{/* expensive rendering */}</div>
})
// 3. useCallback for stable function references
import { useCallback } from 'react'
function Parent() {
const handleClick = useCallback(() => {
console.log('Clicked')
}, [])
return <ExpensiveChild onClick={handleClick} />
}
// 4. Virtualize long lists
import { FixedSizeList } from 'react-window'
<FixedSizeList
height={600}
itemCount={10000}
itemSize={50}
>
{({ index, style }) => (
<div style={style}>Row {index}</div>
)}
</FixedSizeList>
// ❌ Bad: Synchronous (slow response)
app.post('/send-email', async (req, res) => {
await sendEmail(req.body) // 3 seconds
res.json({ success: true })
})
// ✅ Good: Queue job (fast response)
import Bull from 'bull'
const emailQueue = new Bull('emails', 'redis://localhost:6379')
app.post('/send-email', async (req, res) => {
await emailQueue.add('send', req.body)
res.json({ success: true, message: 'Email queued' })
})
// Process jobs in background worker
emailQueue.process('send', async job => {
await sendEmail(job.data)
})
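Bull adds Redis-backed persistence and retries; the core decoupling — respond now, work later — can be sketched in-process (illustrative only: an in-memory queue loses jobs if the process dies):

```javascript
// Minimal in-process job queue: add() returns immediately,
// a single drain loop processes jobs after the current tick.
function createQueue(handler) {
  const jobs = []
  let draining = false
  async function drain() {
    if (draining) return // only one drain loop at a time
    draining = true
    while (jobs.length > 0) await handler(jobs.shift())
    draining = false
  }
  return {
    add(job) {
      jobs.push(job)
      queueMicrotask(drain) // defer the work; the caller responds first
    }
  }
}

// Usage: the "request handler" finishes before any email is sent
const sent = []
const emailQueue = createQueue(async job => { sent.push(job.to) })
emailQueue.add({ to: 'a@example.com' })
emailQueue.add({ to: 'b@example.com' })
console.log(sent.length) // → 0: nothing processed yet in this tick
```

This shows why the queued endpoint responds in milliseconds: the 3-second send happens after the response, not before it.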
// 1. Compression
import compression from 'compression'
app.use(compression()) // Gzip responses
// 2. Pagination
app.get('/api/posts', async (req, res) => {
const page = parseInt(req.query.page) || 1
const limit = parseInt(req.query.limit) || 20
const posts = await db.posts.findAll({
offset: (page - 1) * limit,
limit: limit
})
res.json({
data: posts,
pagination: {
page,
limit,
total: await db.posts.count()
}
})
})
// 3. Field filtering (GraphQL-style)
app.get('/api/users/:id', async (req, res) => {
const fields = req.query.fields?.split(',') || ['id', 'name', 'email']
const user = await db.users.findById(req.params.id, {
attributes: fields
})
res.json(user)
})
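If the data layer can't project columns, the same filtering can be applied to the fetched object. A sketch with an explicit whitelist (hypothetical pick helper) so a crafted ?fields= value can't expose sensitive columns:

```javascript
// Keep only requested fields that are also on the whitelist;
// anything else (unknown or sensitive) is silently dropped.
function pick(obj, fields, allowed = ['id', 'name', 'email']) {
  const out = {}
  for (const f of fields) {
    if (allowed.includes(f) && f in obj) out[f] = obj[f]
  }
  return out
}

const user = { id: 1, name: 'Ada', email: 'ada@example.com', passwordHash: 'x' }
console.log(pick(user, ['name', 'passwordHash'])) // → { name: 'Ada' }: the hash never leaks
```

The route above passes user input straight to the ORM's attributes option; whichever layer does the projection, the whitelist is the important part.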
import rateLimit from 'express-rate-limit'
// General API rate limit
const apiLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // 100 requests per window
message: 'Too many requests, please try again later'
})
app.use('/api/', apiLimiter)
// Stricter limit for expensive endpoints
const authLimiter = rateLimit({
windowMs: 60 * 60 * 1000, // 1 hour
max: 5, // 5 requests per hour
skipSuccessfulRequests: true
})
app.post('/api/auth/login', authLimiter, loginHandler)
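express-rate-limit's default strategy is a fixed-window counter; the core idea in isolation (illustrative and per-process only — multiple instances need a shared store such as Redis for the limit to hold across servers):

```javascript
// Fixed-window rate limiter: allow up to `max` hits per `windowMs` per key.
function createLimiter({ windowMs, max }) {
  const windows = new Map() // key -> { start, count }
  return function allow(key, now = Date.now()) {
    const w = windows.get(key)
    if (!w || now - w.start >= windowMs) {
      windows.set(key, { start: now, count: 1 }) // new window
      return true
    }
    w.count++
    return w.count <= max
  }
}

// Usage: third request inside the window is rejected
const allow = createLimiter({ windowMs: 60_000, max: 2 })
console.log(allow('1.2.3.4', 0), allow('1.2.3.4', 1), allow('1.2.3.4', 2)) // → true true false
console.log(allow('1.2.3.4', 60_000)) // → true: a fresh window resets the count
```

Fixed windows allow up to 2× the limit in a burst straddling a window boundary; sliding-window or token-bucket variants smooth that out at the cost of more bookkeeping.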
Tools:
Custom Monitoring:
// Track response times
app.use((req, res, next) => {
const start = Date.now()
res.on('finish', () => {
const duration = Date.now() - start
// Log to monitoring service
metrics.recordResponseTime(req.path, duration)
// Alert on slow requests
if (duration > 1000) {
logger.warn(`Slow request: ${req.path} took ${duration}ms`)
}
})
next()
})
// Track database query times
db.on('query', (query, duration) => {
if (duration > 100) {
logger.warn(`Slow query: ${query} took ${duration}ms`)
}
})
Key Metrics to Track:
- Response time (P50, P95, P99)
- Throughput (requests/second)
- Error rate (%)
- Database query times
- Cache hit ratio
- Memory usage
- CPU usage
Alerting Thresholds:
- P95 response time > 1s
- Error rate > 1%
- Cache hit ratio < 80%
- Memory usage > 80%
Related Skills:
- deployment-advisor - For infrastructure optimization
- frontend-builder - For React performance patterns
- api-designer - For API optimization
Related Patterns:
- META/DECISION-FRAMEWORK.md - Scaling decisions
- STANDARDS/architecture-patterns/caching-patterns.md - Caching strategies (when created)
Related Playbooks:
- PLAYBOOKS/optimize-database-performance.md - DB optimization steps (when created)
- PLAYBOOKS/frontend-performance-audit.md - Frontend audit procedure (when created)