npx skills add https://github.com/open-horizon-labs/skills --skill salvage
cursor15
Extract learning from a session or piece of work before restarting. The insight: code is cheap now; learning is the asset.
Salvage is the bridge from Review back to Problem Space. When drift is detected, salvage captures what was learned so you can restart clean without losing understanding.
Invoke /salvage when:
Do not use when: Work is on track and converging. Salvage is for extraction before restart, not routine reflection.
Before extracting, name what happened:
"This session/approach is being salvaged because [reason]. The original aim was [aim]. What actually happened was [reality]."
Be direct. No judgment—just clarity.
Work through these extraction categories. Not everything will apply; extract what's present.
"I thought X, but actually Y."
Format as explicit constraints:
Guardrail: [boundary]
Reason: [why this matters]
Trigger: [when to revisit this constraint]
"If I had known about [X], I would have [Y] instead."
Local practices = practical wisdom, the kind you can only get from experience.
Good tribal knowledge is:
Synthesize the extraction into a restart kit:
## Salvage Summary
### Original Aim
[What we were trying to achieve]
### Why Salvaged
[Direct statement of what went wrong]
### Key Learnings
1. [Learning 1]
2. [Learning 2]
3. [Learning 3]
### New Guardrails
- [Guardrail 1]
- [Guardrail 2]
### Context for Restart
[What the next attempt should know before starting]
### Reusable Fragments
[Any code, patterns, or approaches worth keeping]
If Open Horizons MCP is available:
If no persistent storage is available, output the salvage summary for the user to capture manually.
Always produce a salvage summary in this structure:
## Salvage Report
**Salvaged:** [date/session identifier]
**Reason:** [why this work is being salvaged]
**Original Aim:** [what we were trying to do]
### Learnings
[Numbered list of key insights]
### New Guardrails
[Explicit constraints with reason and trigger]
### Missing Context
[What would have helped]
### Local Practices
[Hard-won wisdom to encode]
### Reusable Fragments
[Code or patterns worth keeping]
### Fresh Start Recommendation
[How to approach this next time]
/salvage
The auth refactor has changed direction 4 times. Originally simplifying
the flow, now it's more complex than before.
Output:
## Salvage Report
**Salvaged:** Auth refactor session
**Reason:** Approach reversed 4 times, complexity increased not decreased
**Original Aim:** Simplify authentication flow
### Learnings
1. The complexity isn't in the auth code—it's in the 6 different OAuth providers
2. Each provider has subtly different token refresh behavior
3. The "simple" unified flow breaks provider-specific error handling
### New Guardrails
- Guardrail: Don't abstract OAuth providers into single interface
- Reason: Provider differences are semantic, not syntactic
- Trigger: Revisit if we drop to 2 providers or fewer
### Missing Context
- The original auth was complex for a reason (provider edge cases)
- Should have read the PR that introduced each provider
### Local Practices
OAuth providers aren't interchangeable. The abstraction leak is the feature.
### Fresh Start Recommendation
Instead of unifying the flow, document each provider's quirks explicitly.
Consider: is "simplify" the right aim? Maybe "make visible" is better.
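The recommendation above — documenting each provider's quirks instead of unifying them — could be sketched as a plain per-provider registry. The provider names, fields, and thresholds here are illustrative assumptions, not taken from the skill:

```javascript
// Hypothetical quirk registry: each OAuth provider keeps its own
// token-refresh rules instead of being forced through one interface.
const providerQuirks = {
  google: { refreshEarlySeconds: 300, refreshRotatesToken: false },
  github: { refreshEarlySeconds: 60, refreshRotatesToken: true },
};

// Per-provider refresh decision: semantic differences stay visible
// at the call site rather than being hidden behind an abstraction.
function needsRefresh(provider, secondsUntilExpiry) {
  const quirks = providerQuirks[provider];
  if (!quirks) throw new Error(`Unknown OAuth provider: ${provider}`);
  return secondsUntilExpiry <= quirks.refreshEarlySeconds;
}
```

The point of the guardrail is that this table stays honest about differences, where a unified interface would have to paper over them.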
/salvage
Started as "add dark mode toggle." Now touching 14 files and the
theme system doesn't exist yet.
Output:
## Salvage Report
**Salvaged:** Dark mode implementation
**Reason:** Scope expanded from toggle to theme system
**Original Aim:** Add dark mode toggle to settings
### Learnings
1. No theme system exists—colors are hardcoded across components
2. A toggle without infrastructure is meaningless
3. This is actually two tasks: (1) build theme system, (2) add toggle
### New Guardrails
- Guardrail: UI feature requests need infrastructure check first
- Reason: "Add X" often implies "build system for X"
- Trigger: Any request that touches visual consistency
### Missing Context
- Should have grepped for color usage before starting
- No design tokens or CSS variables in codebase
### Local Practices
"Add [feature]" is not the same as "build [feature]." Check if the
infrastructure exists before estimating.
### Reusable Fragments
- Color mapping I started: styles/colors.js (incomplete but useful)
- Component audit list: 14 files that hardcode colors
### Fresh Start Recommendation
1. First: Create theme system (CSS variables, ThemeContext)
2. Then: Add dark mode as first theme variant
3. The toggle is the easy part—do it last
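Step 1 of that plan — a theme system built on CSS variables — might start as small as this sketch. The token names and light/dark palettes are illustrative assumptions:

```javascript
// Hypothetical design tokens: one object per theme, exposed as CSS variables.
const themes = {
  light: { "--bg": "#ffffff", "--fg": "#111111" },
  dark: { "--bg": "#111111", "--fg": "#eeeeee" },
};

// Apply a theme by writing its tokens onto a root element's style.
// In a browser, root would be document.documentElement; the function
// only needs a style.setProperty-like setter, which keeps it testable.
function applyTheme(name, root) {
  const tokens = themes[name];
  if (!tokens) throw new Error(`Unknown theme: ${name}`);
  for (const [cssVar, value] of Object.entries(tokens)) {
    root.style.setProperty(cssVar, value);
  }
  return tokens;
}
```

Once components read var(--bg) instead of hardcoded colors, the toggle really is the easy part: applyTheme(current === "dark" ? "light" : "dark", document.documentElement).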
This skill can persist context to .oh/<session>.md for use by subsequent skills.
If a session name is provided (/salvage auth-refactor): writes to .oh/auth-refactor.md directly.
If no session name is provided (/salvage): prompts "Save to session? [suggested-name] [custom] [skip]"
Reading: Check for an existing session file. Read everything — Aim, Problem Statement, Problem Space, Solution Space, Execute, Review — to understand what was attempted and what happened.
Writing: After producing the salvage report:
## Salvage
**Updated:** <timestamp>
**Outcome:** [extracted learnings, ready for restart]
[salvage report: learnings, guardrails, context for fresh start]
The salvage section is the capstone—it captures what was learned before the session ends or restarts.
Works anywhere. Produces salvage summary for manual capture. No persistence.
Reads .oh/&lt;session&gt;.md for full session context — the aim and guardrails frame what counts as "off-track".
Before extracting learnings: call oh_search_context with the failure domain + active phase. Ask: was this predictable from the corpus? Three outcomes change what gets written:
After extracting:
- oh_record_metis with approved learnings (new entries only — no duplicates)
- oh_record_guardrail_candidate for hard constraints discovered the hard way
- outcome_progress to record what was accomplished toward the outcome before restarting

Meta-signal: If salvage repeatedly surfaces similar learnings, the corpus has the knowledge but it's not being consulted. That pattern warrants /distill.
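Assuming a generic MCP client with a callTool({ name, arguments }) method (the shape used by the MCP SDKs), the post-extraction recording step could be sketched like this. The tool names come from the skill, but the argument shapes are assumptions:

```javascript
// Hypothetical recording step: push approved learnings, guardrail
// candidates, and outcome progress into the Open Horizons MCP tools.
async function recordSalvage(client, report) {
  for (const learning of report.learnings) {
    // New entries only — deduplication is assumed to happen upstream.
    await client.callTool({ name: "oh_record_metis", arguments: { learning } });
  }
  for (const guardrail of report.guardrails) {
    await client.callTool({
      name: "oh_record_guardrail_candidate",
      arguments: guardrail, // assumed shape: { guardrail, reason, trigger }
    });
  }
  await client.callTool({
    name: "outcome_progress",
    arguments: { progress: report.progress },
  });
}
```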
Comes after: /review (when drift is detected) or /execute (when you recognize thrashing).
Leads to: /aim or /problem-space with fresh understanding — you climb back up.
This is the feedback loop: Salvage is how learning flows back to the top of the framework.
After salvage, typically:
- /aim - Reclarify what we're actually trying to achieve
- /problem-space - Fresh start with new understanding

Remember: Salvage is not failure. It's learning made explicit. The only failure is losing what you learned.
Weekly Installs: 62
Repository: https://github.com/open-horizon-labs/skills
GitHub Stars: 1
First Seen: Jan 27, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Pass
Installed on: claude-code (48), opencode (29), gemini-cli (19), codex (18), github-copilot (15), cursor (15)