accelint-ts-performance by gohypergiant/agent-skills
npx skills add https://github.com/gohypergiant/agent-skills --skill accelint-ts-performance
Systematic performance optimization for JavaScript/TypeScript codebases. Combines audit workflow with expert-level optimization patterns for runtime performance.
Note: For general best practices (type safety with any/enum, avoiding null, not mutating parameters), use the accelint-ts-best-practices skill instead. This section focuses exclusively on performance-specific anti-patterns.
NEVER assume code is cold path - Utility functions, formatters, parsers, and validators appear simple but are frequently called in loops, rendering pipelines, or real-time systems. Always audit ALL code for performance anti-patterns. Do not make assumptions about usage frequency or skip auditing based on perceived simplicity.
NEVER apply all optimizations blindly - Performance patterns have trade-offs. Balance optimization gains against code complexity. When conducting audits, identify ALL anti-patterns through systematic analysis and report them with expected gains. Let users decide which optimizations to apply based on their specific context.
NEVER ignore algorithmic complexity - Optimizing O(n²) code with micro-optimizations is futile. For n=1000, an algorithmic fix (O(n²) → O(n)) yields a 1000x speedup; micro-optimizations yield 1.1-2x at best. Fix the algorithm first: use Maps/Sets for O(1) lookups, eliminate nested iterations, choose appropriate data structures.
NEVER sacrifice correctness for speed - Performance bugs are still bugs. Optimizations frequently break edge cases: off-by-one errors in manual loops, wrong behavior for empty arrays, null handling issues. Verify behavior matches before and after. Add comprehensive tests covering edge cases before optimizing—catching bugs in production costs far more than any performance gain.
NEVER optimize code you don't own - Optimizing shared utilities, library internals, or code actively developed by others creates merge conflicts, duplicates effort, and confuses ownership. Performance changes affect all callers; coordinate with owners or defer optimization until the code stabilizes.
NEVER ignore memory vs CPU trade-offs - Caching trades memory for speed. Unbounded memoization causes memory leaks in long-running applications. A 2x CPU speedup that increases memory 10x can trigger OOM crashes or frequent GC pauses (worse than original slowness). Profile memory usage alongside CPU; set cache size limits; use WeakMap for lifecycle-bound caches.
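A minimal sketch of the lifecycle-bound caching pattern mentioned above. The `computeStats` function and its `Stats` shape are illustrative, not part of the skill's references; the point is that a `WeakMap` keyed on the source object lets cache entries be garbage-collected with their keys, avoiding unbounded growth:

```typescript
type Stats = { sum: number };

// Cache entries live exactly as long as their source arrays:
// when a keyed array becomes unreachable, its cached Stats entry
// is eligible for garbage collection along with it.
const statsCache = new WeakMap<number[], Stats>();

function computeStats(data: number[]): Stats {
  const cached = statsCache.get(data);
  if (cached) return cached;
  const stats: Stats = { sum: data.reduce((a, b) => a + b, 0) };
  statsCache.set(data, stats);
  return stats;
}
```

Repeated calls with the same array reuse the cached object instead of recomputing.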
NEVER assume performance across environments - V8 optimizations differ between Node.js versions (v18 vs v20), browsers (Chrome vs Safari), and architectures (x64 vs ARM). An optimization yielding 3x speedup in Chrome may regress 1.5x in Safari. Profile in ALL target environments before shipping; maintain fallback implementations for environment-specific optimizations.
NEVER chain array methods (.filter().map().reduce()) - Each method creates intermediate arrays and iterates separately. For arrays with 10k items, .filter().map() allocates 10k + 5k items (if 50% pass filter) and iterates twice. Use single reduce pass to iterate once with zero intermediate allocations, yielding 2-5x speedup in hot paths.
NEVER use Array.includes() for repeated lookups - Array.includes() is O(n) linear search. Checking 1000 items against an array of 100 is O(n×m) = 100k operations. Use Set.has() instead: O(1) lookup via hash table, reducing 100k operations to 1000 for ~100x speedup. Build the Set once upfront; the amortized cost is negligible.
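A sketch of the Set replacement, with illustrative data (`allowedIds` and `items` are assumptions, not from the skill):

```typescript
const allowedIds = ["a1", "b2", "c3"];
const items = [{ id: "a1" }, { id: "x9" }, { id: "c3" }];

// O(n×m): .includes() re-scans allowedIds for every item.
const slow = items.filter(item => allowedIds.includes(item.id));

// Build the Set once; each .has() is an O(1) hash lookup.
const allowed = new Set(allowedIds);
const fast = items.filter(item => allowed.has(item.id));
```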
NEVER await before checking if you need the result - await suspends execution immediately, even if the value isn't needed. Move await into conditional branches that actually use the result. Example: const data = await fetch(url); if (condition) { use(data); } wastes I/O time when condition is false. Better: if (condition) { const data = await fetch(url); use(data); } skips fetch entirely when unneeded.
NEVER recompute constants inside loops - Recomputing invariants wastes CPU in every iteration. For 10k iterations, array.length lookup (even if cached by engine) or Math.max(a, b) runs 10k times unnecessarily. Hoist invariants outside loops: const len = array.length; for (let i = 0; i < len; i++) or curry functions to precompute constant parameters once.
NEVER create unbounded loops or queues - Prevents runaway resource consumption from bugs or malicious input. Set explicit limits (for (let i = 0; i < Math.min(items.length, 10000); i++)) or timeouts. Unbounded loops can freeze UI threads; unbounded queues cause OOM crashes. Fail fast with clear limits rather than degrading gracefully into unusability.
NEVER place try/catch in hot paths - V8 cannot inline functions containing try-catch blocks and marks the entire function as non-optimizable. A single try-catch in a hot loop causes 3-5x slowdown by preventing inlining, escape analysis, and other optimizations. Validate inputs before hot paths using type guards; move try-catch outside loops to wrap the entire operation; use Result types for expected errors.
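A sketch of the hoisting pattern: validate per item with a type guard that returns rather than throws, and keep the single try/catch outside the per-item work. `parsePrice` and `total` are hypothetical names for illustration:

```typescript
// Type guard in the hot path: returns NaN instead of throwing,
// so the per-item function stays inlinable.
function parsePrice(raw: unknown): number {
  if (typeof raw !== "string" || !/^\d+(\.\d+)?$/.test(raw)) return NaN;
  return Number(raw);
}

function total(rows: unknown[]): number {
  let sum = 0;
  try { // one try/catch wrapping the whole operation, not each iteration
    for (const row of rows) {
      const price = parsePrice(row);
      if (!Number.isNaN(price)) sum += price;
    }
  } catch {
    return 0;
  }
  return sum;
}
```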
Apply these tests to focus optimization efforts effectively:
A single-pass reduce can be harder to maintain than chained .filter().map(). Balance performance with team velocity.
This skill uses progressive disclosure to minimize context usage:
Follow the 4-phase audit workflow below for systematic performance analysis.
Load AGENTS.md to scan compressed rule summaries organized by category.
When you identify specific performance issues, load corresponding reference files for detailed ❌/✅ examples.
When users explicitly request a performance audit, load the template for consistent reporting:
Two modes of operation:
Audit Mode - Skill invoked directly (/accelint-ts-performance <path>) or user explicitly requests performance audit
Implementation Mode - Skill triggers automatically during feature work
Copy this checklist to track progress:
- [ ] Phase 1: Profile - Identify actual bottlenecks using profiling tools
- [ ] Phase 2: Analyze - Categorize issues by impact and optimization category
- [ ] Phase 3: Optimize - Apply performance patterns from references/
- [ ] Phase 4: Verify - Measure improvements and validate correctness
CRITICAL: Audit ALL code for performance anti-patterns. Do not skip code based on assumptions about usage frequency. Utility functions, formatters, parsers, validators, and data transformations are frequently called in loops, rendering pipelines, or real-time systems even if their implementation appears simple.
When profiling tools are available, use them to establish baseline measurements:
node --prof script.js && node --prof-process isolate-*.log
Whether profiling data is available or not: perform systematic static code analysis to identify ALL performance anti-patterns:
Output: Complete list of ALL identified anti-patterns with their locations and expected performance impact. Do not filter based on "severity" or "priority" - report everything found.
When generating audit reports (when skill is invoked directly via /accelint-ts-performance <path> or user explicitly requests performance audit), use the structured template:
For EVERY issue identified in Phase 1, categorize by optimization type:
Categorize ALL issues by optimization type:
| Issue Type | Category | Expected Gain |
|---|---|---|
| Nested loops, O(n²) complexity | Algorithmic optimization | 10-1000x |
| Repeated expensive computations | Caching & memoization | 2-100x |
| Allocation-heavy code | Allocation reduction | 1.5-5x |
| Sequential access violations | Memory locality | 1.5-3x |
| Excessive I/O operations | I/O optimization | 5-50x |
| Blocking async operations | I/O optimization | 2-10x |
| Property access in loops | Caching & memoization | 1.2-2x |
Quick reference for mapping issues:
Load references/quick-reference.md for detailed issue-to-category mapping and anti-pattern detection.
Output: Categorized list of ALL issues with their optimization categories. Do not filter or prioritize - list everything found in Phase 1.
Step 1: Identify your bottleneck category from Phase 2 analysis.
Step 2: Load MANDATORY references for your category. Read each file completely with no range limits.
| Category | MANDATORY Files | Optional | Do NOT Load |
|---|---|---|---|
| Algorithmic (O(n²), nested loops, repeated lookups) | reduce-looping.md, reduce-branching.md | — | memoization, caching, I/O, allocation |
| Caching (property access in loops, repeated calculations) | memoization.md, cache-property-access.md | cache-storage-api.md (for Storage APIs) | I/O, allocation |
| I/O (blocking async, excessive I/O operations) | batching.md, defer-await.md | — | algorithmic, memory |
| Memory (allocation-heavy, GC pressure) | object-operations.md, avoid-allocations.md | — | I/O, caching |
| Locality (sequential access violations, cache misses) | predictable-execution.md | — | all others |
| Safety (unbounded loops, runaway queues) | bounded-iteration.md | — | all others |
| Micro-opt (hot path fine-tuning, 1.1-2x improvements) | currying.md, performance-misc.md | — | all others (apply only after algorithmic fixes) |
Notes:
Step 3: Scan for quick reference during optimization
Load AGENTS.md to see compressed rule summaries organized by category. Use as a quick lookup while implementing patterns from the detailed reference files above.
Apply patterns systematically:
Example optimization:
// ❌ Before: O(n²) - nested iteration
for (const user of users) {
const items = allItems.filter(item => item.userId === user.id);
process(items);
}
// ✅ After: O(n) - single pass with Map lookup
// Performance: reduce-looping.md - build lookup once pattern
const itemsByUser = new Map<string, Item[]>();
for (const item of allItems) {
if (!itemsByUser.has(item.userId)) {
itemsByUser.set(item.userId, []);
}
itemsByUser.get(item.userId)!.push(item);
}
for (const user of users) {
const items = itemsByUser.get(user.id) ?? [];
process(items);
}
Measure performance gain:
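A minimal before/after timing sketch, assuming Node.js (`node:perf_hooks` is built in from v16 onward); a real audit should use a benchmark harness with warm-up runs rather than a single measurement. The `timeIt` helper is hypothetical:

```typescript
import { performance } from "node:perf_hooks";

// Runs fn once, logs elapsed wall-clock time, and returns fn's result
// so the timed call can be dropped into existing code unchanged.
function timeIt<T>(label: string, fn: () => T): T {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(2)}ms`);
  return result;
}
```

Usage: wrap the unoptimized and optimized versions with the same inputs, e.g. `timeIt("before", () => oldImpl(data))` vs `timeIt("after", () => newImpl(data))`, and compare.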
Verify correctness:
Document optimization:
// Performance optimization applied: 2026-01-28
// Issue: Nested iteration causing O(n²) complexity with 10k items
// Pattern: reduce-looping.md - Map-based lookup
// Speedup: 145x faster (5200ms → 36ms)
// Verified: All tests pass, manual QA complete
Deciding whether to keep the optimization:
Real-time systems (60fps rendering, live data visualization): Even 1.05x improvements matter in critical hot paths. Use frame timing profiler to verify impact on frame budget (16.67ms for 60fps).
If tests fail: Fix the optimization or revert. Performance bugs are still bugs.
Calibrate guidance specificity to optimization impact:
| Optimization Type | Freedom Level | Guidance Format | Example |
|---|---|---|---|
| Algorithmic (10x+ gain) | Medium freedom | Multiple valid approaches, pick based on constraints | "Use Map for O(1) lookup or Set for deduplication" |
| Caching (2-10x gain) | Medium freedom | Pattern with examples, cache invalidation strategy | "Memoize with WeakMap if lifecycle matches source objects" |
| Micro-optimization (1.1-2x) | Low freedom | Exact pattern from reference, measure first | "Cache array.length in loop: for (let i = 0, len = arr.length; ...)" |
The test: "What's the speedup and maintenance cost?"
Use this table to rapidly identify which optimization category applies.
Audit everything: Identify ALL performance anti-patterns in the code regardless of current usage context. Report all findings with expected gains.
| If You See... | Root Cause | Optimization Category | Expected Gain |
|---|---|---|---|
| Nested for loops over same data | O(n²) complexity | Algorithmic (reduce-looping) | 10-1000x |
| .filter() followed by .find() or .map() | Multiple passes over data | Algorithmic (reduce-looping) | 2-10x |
| Repeated array.find() or .includes() | O(n) linear search | Algorithmic (reduce-looping, use Set/Map) | 10-100x |
| Many if/else chains on same variable | Branch-heavy code | Algorithmic (reduce-branching) | 1.5-3x |
| Same function called with same inputs repeatedly | Redundant computation | Caching (memoization) | 2-100x |
| obj.prop.nested.deep accessed multiple times in loop | Property access overhead | Caching (cache-property-access) | 1.2-2x |
| localStorage.getItem() or sessionStorage in loop | Expensive I/O in loop | Caching (cache-storage-api) | 5-20x |
| Multiple await fetch() in sequence | Sequential I/O blocking | I/O (batching, defer-await) | 2-10x |
| await before conditional that might not need result | Premature async suspension | I/O (defer-await) | 1.5-3x |
| Many object spreads {...obj} or [...arr] | Allocation overhead | Memory (avoid-allocations) | 1.5-5x |
| Creating objects/arrays inside hot loops | GC pressure from allocations | Memory (avoid-allocations) | 2-5x |
| Object.assign() or spread when mutation is safe | Unnecessary immutability cost | Memory (object-operations) | 1.5-3x |
| Accessing array elements non-sequentially | Cache locality issues | Memory Locality (predictable-execution) | 1.5-3x |
| while(true) or unbounded queue growth | Runaway resource usage | Safety (bounded-iteration) | Prevents crashes |
| Function called with mostly same first N params | Repeated parameter passing | Micro-opt (currying) | 1.1-1.5x |
| try/catch inside hot loop | V8 deoptimization | Micro-opt (performance-misc) | 3-5x |
| String concatenation in loop with + | Quadratic string copying | Micro-opt (performance-misc) | 2-10x |
How to use this table:
Weekly Installs
106
Repository
GitHub Stars
7
First Seen
Jan 30, 2026
Security Audits
Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on: codex (97), claude-code (93), cursor (83), github-copilot (82), opencode (81), gemini-cli (80)