nodejs-profiling by claude-dev-suite/claude-dev-suite
npx skills add https://github.com/claude-dev-suite/claude-dev-suite --skill nodejs-profiling
Related skills: java-profiling for JFR, jcmd, and GC tuning; python-profiling for cProfile and memory_profiler.
Deep Knowledge: Use mcp__documentation__fetch_docs with technology: nodejs for comprehensive profiling guides, V8 flags, and optimization techniques.
# CPU profile (generates .cpuprofile)
node --cpu-prof --cpu-prof-dir=./profiles app.js
# V8 profile (generates .log)
node --prof app.js
node --prof-process isolate-*.log > processed.txt
# Heap snapshot on signal
node --heapsnapshot-signal=SIGUSR2 app.js
kill -USR2 <pid>
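Heap snapshots can also be taken programmatically instead of via a signal; a minimal sketch using `v8.writeHeapSnapshot` (available since Node 11.13):

```javascript
import v8 from 'v8';

// Writes Heap.<date>.<pid>...heapsnapshot to the working directory
// and returns the generated filename.
const file = v8.writeHeapSnapshot();
console.log(`heap snapshot written to ${file}`);
```

Open the resulting file in Chrome DevTools (Memory tab), exactly as you would a snapshot triggered by --heapsnapshot-signal.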
import { Session } from 'inspector';
import { writeFileSync } from 'fs';
const session = new Session();
session.connect();
// Start CPU profiling
session.post('Profiler.enable');
session.post('Profiler.start');
// Your code here...
// Stop and get profile
session.post('Profiler.stop', (err, data) => {
  if (err) throw err;
  writeFileSync('profile.cpuprofile', JSON.stringify(data.profile));
});
import v8 from 'v8';
const heapStats = v8.getHeapStatistics();
console.log({
heapUsed: heapStats.used_heap_size,
heapTotal: heapStats.total_heap_size,
heapLimit: heapStats.heap_size_limit,
external: heapStats.external_memory,
});
// Detailed heap space info
const heapSpaces = v8.getHeapSpaceStatistics();
heapSpaces.forEach(space => {
console.log(`${space.space_name}: ${space.space_used_size}`);
});
// Track memory at intervals
const memoryTracker = setInterval(() => {
  const usage = process.memoryUsage();
  console.log({
    rss: usage.rss, // Resident Set Size: total memory mapped for the process
    heapTotal: usage.heapTotal,
    heapUsed: usage.heapUsed,
    external: usage.external,
    arrayBuffers: usage.arrayBuffers,
  });
}, 1000);
// Clear on shutdown: clearInterval(memoryTracker);
import { performance, PerformanceObserver } from 'perf_hooks';
// Mark start/end
performance.mark('operation-start');
await someOperation();
performance.mark('operation-end');
// Measure duration
performance.measure('operation', 'operation-start', 'operation-end');
// Observer for async measurements
const obs = new PerformanceObserver((list) => {
const entries = list.getEntries();
entries.forEach(entry => {
console.log(`${entry.name}: ${entry.duration}ms`);
});
});
obs.observe({ entryTypes: ['measure', 'function'] });
// Cleanup
performance.clearMarks();
performance.clearMeasures();
import { AsyncLocalStorage } from 'async_hooks';
import { performance } from 'perf_hooks';
const storage = new AsyncLocalStorage<{ requestId: string }>();
// Track request timing across async operations
function trackRequest(requestId: string) {
storage.run({ requestId }, async () => {
const start = performance.now();
await handleRequest();
const duration = performance.now() - start;
console.log(`Request ${requestId}: ${duration}ms`);
});
}
// ❌ Bad: Blocking the event loop
function processLargeArray(arr: number[]): number {
return arr.reduce((sum, n) => sum + expensiveComputation(n), 0);
}
// ✅ Good: Use worker threads
import { Worker, isMainThread, parentPort, workerData } from 'worker_threads';
if (isMainThread) {
const worker = new Worker(new URL(import.meta.url), { workerData: largeArray });
worker.on('message', (result) => console.log(result));
} else {
const result = workerData.reduce((sum, n) => sum + expensiveComputation(n), 0);
parentPort?.postMessage(result);
}
// ❌ Bad: Sequential I/O
import { promises as fs } from 'fs';
for (const file of files) {
await fs.readFile(file); // One at a time
}
// ✅ Good: Parallel I/O with concurrency limit
import pLimit from 'p-limit';
const limit = pLimit(10);
await Promise.all(
files.map(file => limit(() => fs.readFile(file)))
);
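p-limit is a third-party package; if you would rather avoid the dependency, a concurrency limiter can be sketched in a few lines (here `mapLimit` is a hypothetical helper playing the role of `pLimit`):

```javascript
// Minimal sketch of a concurrency limiter: at most `limit` tasks in flight.
async function mapLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function runNext() {
    while (next < items.length) {
      const i = next++; // claim an index; safe because JS is single-threaded
      results[i] = await fn(items[i], i);
    }
  }
  const workers = Array.from({ length: Math.min(limit, items.length) }, runNext);
  await Promise.all(workers);
  return results;
}

// Example: at most 2 "reads" in flight at once
const doubled = await mapLimit([1, 2, 3, 4], 2, async (n) => n * 2);
console.log(doubled); // [2, 4, 6, 8]
```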
// ❌ Bad: Unbounded cache
const cache = new Map();
function getUser(id: string) {
if (!cache.has(id)) {
cache.set(id, fetchUser(id)); // Never cleaned up
}
return cache.get(id);
}
// ✅ Good: LRU cache with max size
import { LRUCache } from 'lru-cache';
const cache = new LRUCache<string, User>({
max: 1000,
ttl: 1000 * 60 * 5, // 5 minutes
});
// ❌ Bad: Event listener leak
element.addEventListener('click', handler); // Never removed
// ✅ Good: Cleanup listeners
const abortController = new AbortController();
element.addEventListener('click', handler, { signal: abortController.signal });
// Later: abortController.abort();
// ❌ Bad: Creating many temporary objects
function process(items: Item[]) {
return items.map(item => ({
...item,
computed: compute(item),
}));
}
// ✅ Good: Mutate in place when safe
function process(items: Item[]) {
for (const item of items) {
item.computed = compute(item);
}
return items;
}
// ✅ Good: Object pooling
class ObjectPool<T> {
  private pool: T[] = [];
  constructor(
    private create: () => T,
    private reset: (obj: T) => void,
  ) {}
  acquire(): T {
    return this.pool.pop() ?? this.create();
  }
  release(obj: T) {
    this.reset(obj);
    this.pool.push(obj);
  }
}
// ❌ Bad: Many small allocations
const chunks: Buffer[] = [];
for await (const data of stream) {
chunks.push(Buffer.from(data));
}
const result = Buffer.concat(chunks);
// ✅ Good: Pre-allocate when size known
const buffer = Buffer.allocUnsafe(totalSize); // Faster, uninitialized
let offset = 0;
for await (const data of stream) {
offset += data.copy(buffer, offset);
}
// ❌ Bad: Loading entire file in memory
const data = await fs.readFile('large-file.json');
const parsed = JSON.parse(data);
// ✅ Good: Stream processing
import { createReadStream } from 'fs';
import { parser } from 'stream-json';
import { streamArray } from 'stream-json/streamers/StreamArray';
const pipeline = createReadStream('large-file.json')
.pipe(parser())
.pipe(streamArray());
for await (const { value } of pipeline) {
await processItem(value);
}
// Encourage V8 to optimize a hot function: warm it up with stable argument types
function criticalFunction(x: number): number {
// Called many times with same types
return x * 2;
}
// Warm up
for (let i = 0; i < 10000; i++) criticalFunction(i);
// Avoid deoptimization patterns:
// - Don't change object shapes after creation
// - Don't use delete on object properties
// - Don't use arguments object, use rest parameters
// - Don't use with statement
// - Keep function polymorphism low
| Check | Tool | Command |
|---|---|---|
| CPU hotspots | CPU profile | node --cpu-prof app.js |
| Memory usage | Heap stats | v8.getHeapStatistics() |
| Memory leaks | Heap snapshot | --heapsnapshot-signal |
| Event loop lag | perf_hooks | monitorEventLoopDelay() |
| Async operations | Async hooks | async_hooks module |
| Function timing | perf_hooks | performance.measure() |
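The event-loop-lag row above refers to `perf_hooks.monitorEventLoopDelay`; a minimal sketch of how it is typically wired up:

```javascript
import { monitorEventLoopDelay } from 'perf_hooks';

// Samples event-loop delay on a timer; resolution is the sample interval in ms.
const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

// ...later, e.g. on a reporting interval or at shutdown:
setTimeout(() => {
  histogram.disable();
  // Histogram values are reported in nanoseconds.
  console.log({
    meanMs: histogram.mean / 1e6,
    p99Ms: histogram.percentile(99) / 1e6,
    maxMs: histogram.max / 1e6,
  });
}, 200);
```

A sustained p99 well above the sample resolution is a signal that something synchronous is hogging the loop.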
# Increase heap size
node --max-old-space-size=4096 app.js
# GC logging
node --trace-gc app.js
# Expose GC for manual control
node --expose-gc app.js
# In code: global.gc();
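From inside the process you can check what those flags gave you; a small sketch (`heap_size_limit` reflects --max-old-space-size, and `global.gc` exists only under --expose-gc):

```javascript
import v8 from 'v8';

// Report the effective old-space heap limit in MiB.
const limitMiB = v8.getHeapStatistics().heap_size_limit / (1024 * 1024);
console.log(`heap limit: ${limitMiB.toFixed(0)} MiB`);

// Guard manual GC: global.gc is only defined when run with --expose-gc.
if (typeof globalThis.gc === 'function') {
  globalThis.gc();
} else {
  console.log('start with --expose-gc to enable manual GC');
}
```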
| Anti-Pattern | Why It's Wrong | Correct Approach |
|---|---|---|
| Using setImmediate() for CPU work | Callback still blocks the event loop when it runs | Use worker threads for CPU-intensive tasks |
| Synchronous file operations | Blocks entire process | Use async fs.promises API |
| Large synchronous JSON parsing | Freezes event loop | Stream large JSON or use worker threads |
| Callback hell | Hard to profile, error-prone | Use async/await for cleaner async code |
| Not using connection pooling | Creates too many connections | Use connection pools (pg, mysql2) |
| console.log() in production | Slow, blocks event loop | Use structured logging (pino, winston) |
| Loading entire file into memory | Memory exhaustion | Use streams for large files |
| Manual cache without TTL/limits | Memory leaks | Use LRU cache with size/time limits |
| Not monitoring event loop lag | Undetected performance degradation | Use perf_hooks.monitorEventLoopDelay() |
| delete on object properties | Deoptimizes objects | Set to undefined or use Map |
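The last row can be illustrated directly: assigning undefined keeps the object's hidden class stable, while a Map is the right structure when keys genuinely come and go.

```javascript
// ❌ delete user.sessionToken would change the object's shape (hidden class),
//    deoptimizing code paths that access it.
// ✅ Assign undefined instead, so the shape stays stable:
const user = { id: 1, name: 'ada', sessionToken: 'secret' };
user.sessionToken = undefined; // property still exists, shape unchanged

// ✅ Or use a Map when keys are dynamic; Map.delete is cheap and shape-safe:
const attrs = new Map([['sessionToken', 'secret']]);
attrs.delete('sessionToken');

console.log('sessionToken' in user, attrs.has('sessionToken')); // true false
```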
| Issue | Diagnosis | Solution |
|---|---|---|
| High CPU usage | Tight loops, inefficient algorithms | Profile with --cpu-prof, optimize hot paths |
| Memory growing continuously | Memory leak (unbounded cache, listeners) | Take heap snapshots, compare over time |
| Event loop lag | Long synchronous operations | Use worker threads or break into async chunks |
| GC pauses causing latency spikes | Heap too large or fragmented | Reduce heap size, optimize object creation |
| Slow startup time | Too many synchronous requires | Lazy load modules, use dynamic imports |
| FATAL ERROR: CALL_AND_RETRY_LAST | Out of memory | Increase --max-old-space-size or fix memory leak |
| High memory usage | Large buffers, string operations | Use streams, avoid string concatenation |
| Unhandled promise rejections | Async errors not caught | Add .catch() or use try/catch with async/await |
| Function not optimized by V8 | Contains deopt triggers | Check with --trace-deopt, avoid problematic patterns |
| Slow JSON operations | Large payloads | Stream JSON or use faster parsers (simdjson) |
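For the unhandled-rejection row, a process-level hook gives last-resort visibility; a sketch (the right policy, log-and-continue versus crash-fast, depends on your application):

```javascript
// Last-resort logging for promises rejected without a .catch handler.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
  // Optionally: process.exitCode = 1, or rethrow to crash fast.
});
```

Per-call handling with .catch() or try/catch around await remains the primary fix; this hook only catches what slipped through.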