reflexion:memorize by neolabhq/context-engineering-kit
npx skills add https://github.com/neolabhq/context-engineering-kit --skill reflexion:memorize
Output must add precise, actionable bullets that future tasks can immediately apply.
First, gather insights from recent reflection and work:
- Reflection output from /reflexion:reflect
- Critique findings from /reflexion:critique

If scope is unclear, ask: “What output(s) should I memorize? (last message, selection, specific files, critique report, etc.)”
Extract only high‑value, generalizable insights:
Prefer specifics over generalities. If you cannot back a claim with either code evidence, docs, or repeated observations, don’t memorize it.
```
# Read current context file
@CLAUDE.md
```
Assess what's already documented:
For each insight identified in Phase 1, apply ACE’s “grow‑and‑refine” principle:
Generation → Curation Mapping:
Example Transformation:
Raw insight: "Using Map instead of Object for this lookup caused performance issues because the dataset was small (<100 items)"
Curated memory: "For dataset lookups <100 items, prefer Object over Map for better performance. Map is optimal for 10K+ items. Use performance testing to validate choice."
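Before memorizing a performance claim like the one above, a throwaway micro-benchmark can supply the evidence the curated memory cites. The sketch below is illustrative only; names such as `benchLookup` are not part of the kit:

```javascript
// Illustrative micro-benchmark: time plain-Object vs. Map lookups on a
// small (<100 item) key set before writing the claim into memory.
function benchLookup(store, keys, get) {
  const t0 = Date.now();
  let hits = 0;
  for (let i = 0; i < 100000; i++) {
    if (get(store, keys[i % keys.length]) !== undefined) hits++;
  }
  return { hits, ms: Date.now() - t0 };
}

const keys = Array.from({ length: 50 }, (_, i) => `k${i}`); // small dataset
const obj = Object.fromEntries(keys.map((k) => [k, k.length]));
const map = new Map(Object.entries(obj));

const objRun = benchLookup(obj, keys, (s, k) => s[k]);
const mapRun = benchLookup(map, keys, (s, k) => s.get(k));
console.log(`Object: ${objRun.ms} ms, Map: ${mapRun.ms} ms`);
```

Whichever structure wins, the measured numbers (not the intuition) are what justify the curated bullet.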
Ensure new memories don't dilute existing quality context:
- Consolidation Check
- Specificity Preservation
- Organization Integrity
If a potential bullet conflicts with an existing one, prefer the more specific, evidence‑backed rule and mark the older one for future consolidation (but do not auto‑delete).
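One way to implement that conflict rule is sketched below. The bullet shape (`text`, `evidence`, `specifics`) and the scoring are hypothetical stand-ins, not anything the kit defines:

```javascript
// Prefer the more specific, evidence-backed bullet; flag (never
// auto-delete) the one it supersedes. Scoring here is illustrative.
function resolveConflict(existing, incoming) {
  const score = (b) => (b.evidence ? 2 : 0) + (b.specifics ?? 0);
  if (score(incoming) > score(existing)) {
    return {
      keep: incoming,
      flagged: { ...existing, note: "consolidate: superseded by newer rule" },
    };
  }
  return { keep: existing, flagged: null };
}

const older = { text: "Prefer Object for lookups", evidence: false, specifics: 0 };
const newer = {
  text: "For dataset lookups <100 items, prefer Object over Map",
  evidence: true, // backed by a measurement
  specifics: 2,   // names a threshold and both data structures
};
console.log(resolveConflict(older, newer).keep.text);
```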
Update the context file with curated insights:
Where to write in CLAUDE.md: if the file is missing, create it with these sections (top‑level headings):
Project Context
Code Quality Standards
Architecture Decisions
Testing Strategies
Development Guidelines
Strategies and Hard Rules
Place each new bullet under the best‑fit section. Keep bullets concise and actionable.
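A minimal sketch of that placement step, assuming CLAUDE.md uses the top-level headings listed above (`addBullet` is a hypothetical helper, not a kit API):

```javascript
// Append a curated bullet at the end of a named top-level section of
// CLAUDE.md, creating the section at the end of the file if missing.
function addBullet(doc, section, bullet) {
  const lines = doc.split("\n");
  const i = lines.indexOf(`# ${section}`);
  if (i === -1) {
    return doc.trimEnd() + `\n\n# ${section}\n- ${bullet}\n`;
  }
  let j = i + 1; // scan to the next top-level heading (or end of file)
  while (j < lines.length && !lines[j].startsWith("# ")) j++;
  lines.splice(j, 0, `- ${bullet}`);
  return lines.join("\n");
}
```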
For each significant insight, add structured entries:
## [Domain/Pattern Category]
### [Specific Context or Pattern Name]
**Context**: [When this applies]
**Pattern**: [What to do]
```yaml
approach: [specific approach]
validation: [how to verify it's working]
examples:
- case: [specific scenario]
implementation: [code or approach snippet]
- case: [another scenario]
implementation: [different implementation]
```
**Avoid**: [Anti-patterns or common mistakes]
**Confidence**: [High/Medium/Low based on evidence quality]
**Source**: [reflection/critique/experience date]
After updating CLAUDE.md:
- **Coherence Check**
- **Actionability Test**: A developer should be able to use the bullet immediately
- **Consolidation Review**: No near‑duplicates; consolidate wording if similar exists
- **Scoped**: Names technologies, files, or flows when relevant
- **Evidence‑backed**: Derived from reflection/critique/tests or official docs
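Parts of that checklist can be automated as a lint pass over candidate bullets. The heuristics below (word-count threshold, backtick/file-name detection) are illustrative stand-ins, not kit behavior:

```javascript
// Flag bullets that are too long or that name no concrete technology,
// file, or flow. Thresholds and regexes are illustrative.
function lintBullet(bullet) {
  const issues = [];
  if (bullet.split(/\s+/).length > 30) issues.push("not concise");
  if (!/`[^`]+`|\.\w{2,4}\b/.test(bullet)) {
    issues.push("not scoped: name a technology, file, or flow");
  }
  return issues;
}

console.log(lintBullet("Be careful with performance."));
```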
Track the effectiveness of memory updates:
After effective memory consolidation:
```
# Memorize from most recent reflections and outputs
/reflexion:memorize

# Dry‑run: show proposed bullets without writing to CLAUDE.md
/reflexion:memorize --dry-run

# Limit number of bullets
/reflexion:memorize --max=5

# Target a specific section
/reflexion:memorize --section="Verification Checklist"

# Choose source
/reflexion:memorize --source=last|selection|chat:<id>
```
After a run, CLAUDE.md is created or updated, completing the /reflexion:reflect loop: reflect → curate → memorize (reference: https://arxiv.org/pdf/2510.04618). Remember: the goal is not to memorize everything, but to curate high-impact insights that consistently improve future agent performance. Quality over quantity: each memory should make future work measurably better.
Weekly Installs: 238
Repository: neolabhq/context-engineering-kit
GitHub Stars: 699
First Seen: Feb 19, 2026
Installed on: codex (231), opencode (231), github-copilot (229), gemini-cli (228), cursor (226), kimi-cli (226)