problem-space by open-horizon-labs/skills

npx skills add https://github.com/open-horizon-labs/skills --skill problem-space
Map the terrain where solutions live. What are we optimizing? What constraints do we treat as real? Which constraints can be questioned?
Problem space exploration precedes solution space. Understanding the terrain is the work. Jump to code too early and you'll build the wrong thing fast.
Invoke /problem-space when:
Do not use when: The problem is well-understood and you're already in execution. Problem space is for grounding, not for stalling.
What are we actually optimizing? Not the feature, the outcome.
"We are optimizing for [outcome]."
Be precise. "Build a login page" is a feature. "Reduce time-to-first-value for new users" is an objective. The aim IS the abstraction.
Ask:
List what we're treating as fixed. Be explicit about each constraint's nature:
Constraint: [the boundary]
Type: [hard | soft | assumed]
Reason: [why it exists]
Questioning: [could this be false?]
Hard constraints - Physics, regulations, signed contracts. These don't bend.
Soft constraints - Organizational decisions, technical debt, time pressure. These can be negotiated.
Assumed constraints - "We've always done it this way." These should be questioned.
The trap: Agents will talk themselves out of constraints. "For this prototype we don't have time" is often false when code generation takes 15 minutes, not a week. Ground yourself in what's actually fixed.
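The hard/soft/assumed distinction can be made concrete as a small data structure. A minimal Python sketch (the class and field names are illustrative, not part of the skill itself):

```python
from dataclasses import dataclass
from enum import Enum

class ConstraintType(Enum):
    HARD = "hard"        # physics, regulations, signed contracts
    SOFT = "soft"        # organizational decisions, time pressure
    ASSUMED = "assumed"  # "we've always done it this way"

@dataclass
class Constraint:
    boundary: str
    type: ConstraintType
    reason: str
    questioning: str  # could this be false?

    def is_negotiable(self) -> bool:
        # Only hard constraints don't bend; soft and assumed
        # constraints are candidates for questioning.
        return self.type is not ConstraintType.HARD

repo_config = Constraint(
    boundary="Config managed in repo",
    type=ConstraintType.ASSUMED,
    reason='"Best practice"',
    questioning="Could config live elsewhere?",
)
```

Each row of the constraints table in the output template maps one-to-one onto an instance like this.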
Where do solutions live? Map the space:
Every problem statement has embedded assumptions. Make them visible:
"We assume [assumption]. If this is false, [consequence]."
Common hidden assumptions:
Are we solving the real problem (X) or the user's attempted solution (Y)?
Signs of X-Y mismatch:
If potential X-Y problem detected:
"The user asked for [Y], but the underlying need might be [X]."
Always produce a problem space map in this structure:
## Problem Space Map
**Date:** [timestamp]
**Scope:** [what area this covers]
### Objective
[What we're optimizing for - the outcome, not the feature]
### Constraints
| Constraint | Type | Reason | Question? |
|------------|------|--------|-----------|
| [boundary] | hard/soft/assumed | [why] | [could this be false?] |
### Terrain
- **Systems:** [what's involved]
- **Stakeholders:** [who's affected]
- **Blast radius:** [what breaks if wrong]
- **Precedents:** [existing solutions to examine]
### Assumptions Made Explicit
1. [assumption] - if false: [consequence]
2. [assumption] - if false: [consequence]
### X-Y Check
- **Stated need (Y):** [what was asked for]
- **Underlying need (X):** [what might actually be needed]
- **Confidence:** [high/medium/low that Y=X]
### Ready for Solution Space?
[yes/no] - [why or what's missing]
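Because the map structure is fixed, it can be generated mechanically from structured inputs. A Python sketch of a renderer for the skeleton above (the function name and input shape are assumptions; only the objective, constraints, and readiness sections are shown):

```python
from datetime import date

def render_problem_space_map(scope, objective, constraints, ready, reason):
    """Render the fixed Problem Space Map skeleton as markdown.

    `constraints` is a list of dicts with keys:
    boundary, type, reason, question.
    """
    rows = "\n".join(
        f"| {c['boundary']} | {c['type']} | {c['reason']} | {c['question']} |"
        for c in constraints
    )
    return "\n".join([
        "## Problem Space Map",
        f"**Date:** {date.today().isoformat()}",
        f"**Scope:** {scope}",
        "### Objective",
        objective,
        "### Constraints",
        "| Constraint | Type | Reason | Question? |",
        "|------------|------|--------|-----------|",
        rows,
        "### Ready for Solution Space?",
        f"{ready} - {reason}",
    ])
```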
/problem-space
Deployments keep failing. We've added 3 config flags in the last month
and each one "fixes" something but creates new edge cases.
Output:
## Problem Space Map
**Date:** 2024-01-15
**Scope:** Deployment pipeline reliability
### Objective
We are optimizing for: Reliable deployments that succeed without manual intervention.
### Constraints
| Constraint | Type | Reason | Question? |
|------------|------|--------|-----------|
| Must use existing CI/CD | soft | Team familiarity | Could we evaluate alternatives? |
| Zero-downtime required | hard | SLA commitment | No |
| Config managed in repo | assumed | "Best practice" | Could config live elsewhere? |
| Current rollback approach | soft | Built incrementally | Is this the right rollback model? |
### Terrain
- **Systems:** GitHub Actions, Kubernetes, ArgoCD, 3 microservices
- **Stakeholders:** Dev team, SRE, downstream API consumers
- **Blast radius:** Customer-facing outage, 200+ API consumers
- **Precedents:** Similar system at [X] uses feature flags instead of config
### Assumptions Made Explicit
1. Config flags are the right mechanism for deployment variation - if false: we need environment-aware builds
2. Failures are config-related - if false: we have an architecture problem
### X-Y Check
- **Stated need (Y):** Fix deployment config issues
- **Underlying need (X):** Make deployments reliable
- **Confidence:** Medium - config is a symptom, not the disease
### Ready for Solution Space?
No - Need to understand if this is a band-aid problem (more config) or a redesign problem (change how deployments work).
/problem-space
App is slow. PM wants us to "optimize the database queries."
Output:
## Problem Space Map
**Date:** 2024-01-15
**Scope:** Application performance
### Objective
We are optimizing for: Page load time under 2 seconds for 95th percentile users.
### Constraints
| Constraint | Type | Reason | Question? |
|------------|------|--------|-----------|
| Must use current database | assumed | "Migration too expensive" | Have we actually costed this? |
| Can't change API contracts | soft | Downstream consumers | How many consumers actually use slow endpoints? |
| Feature set is fixed | assumed | PM specified | What if we removed/simplified features? |
### Terrain
- **Systems:** PostgreSQL, Redis cache, React frontend, Node backend
- **Stakeholders:** End users, mobile clients, third-party integrations
- **Blast radius:** User churn, support tickets, competitor migration
- **Precedents:** Similar app moved to read replicas, saw 3x improvement
### Assumptions Made Explicit
1. Database queries are the bottleneck - if false: frontend rendering or network latency is the issue
2. Optimization is cheaper than redesign - if false: N+1 queries need architectural change
3. Current feature set is needed - if false: could eliminate unused expensive features
### X-Y Check
- **Stated need (Y):** Optimize database queries
- **Underlying need (X):** Make app feel fast to users
- **Confidence:** Low - PM prescribed solution without diagnosis
### Ready for Solution Space?
No - Need performance profiling to identify actual bottleneck before optimizing anything.
This skill can persist context to .oh/<session>.md for use by subsequent skills.
If a session name is provided (/problem-space auth-refactor): writes to .oh/auth-refactor.md directly.
If no session name is provided (/problem-space):
"Save to session? [suggested-name] [custom] [skip]"
Reading: Check for existing session file. Read prior skill outputs—especially Aim and Problem Statement—to ground the exploration.
Writing: After producing output, write the problem space map to the session file:
## Problem Space
**Updated:** <timestamp>
[problem space map content]
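The write step above might look like the following sketch. The path and section header come from the spec; the replace-on-rewrite behavior (keeping only content before an existing Problem Space section) is an assumption:

```python
from pathlib import Path
from datetime import datetime, timezone

def save_problem_space(session: str, map_md: str, root: Path = Path(".")) -> Path:
    """Write or replace the '## Problem Space' section in .oh/<session>.md."""
    path = root / ".oh" / f"{session}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    section = f"## Problem Space\n**Updated:** {stamp}\n\n{map_md}\n"
    existing = path.read_text() if path.exists() else ""
    # Keep everything before any prior Problem Space section,
    # then append the fresh one (simplified: later sections are dropped).
    head, sep, _ = existing.partition("## Problem Space")
    path.write_text(head + section if sep else existing + section)
    return path
```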
Works anywhere. Produces problem space map through questioning. No persistence.
With a session: reads .oh/<session>.md for prior context (aim, problem statement).
When the RNA MCP server is available (oh_search_context tool present), enrich the problem space map with repo-local situated knowledge before presenting it to the human.
At Step 2 (Map Constraints): Call oh_search_context with the objective/domain and phase: "problem-space". Surface any guardrails that apply — these are already-settled constraints the team has established, not assumptions to question. Fold them into the constraints table as hard with source attribution. Present remaining guardrail candidates for human confirmation.
At Step 3 (Terrain / Precedents): Call oh_search_context with the problem domain. Surface relevant metis entries tagged to similar problem spaces or outcomes. Present as a short candidate list with provenance — human selects what to carry as precedents. Discard the rest; do not inject indiscriminately.
Format for surfaced candidates:
**Relevant metis/guardrails from this repo:**
- [metis title] (source: .oh/metis/filename.md) — [one-line relevance note]
→ Keep / Dismiss?
Human selects before the problem space map is finalized. What they select appears in Terrain → Precedents and Constraints. What they dismiss is not included.
Phase tag: Pass phase: "problem-space" to filter for phase-appropriate entries. Cross-phase metis (solution-space learnings, implementation notes) is noise here and must be excluded unless explicitly requested.
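The phase filtering described above could be sketched client-side as follows. Note that oh_search_context is an MCP tool, so the entry shape here is an assumption; this only illustrates excluding cross-phase metis:

```python
def filter_by_phase(entries, phase="problem-space", include_cross_phase=False):
    """Keep only metis entries tagged for the requested phase.

    Cross-phase entries (solution-space learnings, implementation
    notes) are noise here and are dropped unless explicitly requested.
    """
    if include_cross_phase:
        return list(entries)
    return [e for e in entries if e.get("phase") == phase]

entries = [
    {"title": "Config lives in repo", "phase": "problem-space"},
    {"title": "Use feature flags", "phase": "solution-space"},
]
```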
Comes after: /aim (you need to know your destination before mapping the terrain). Leads to: /problem-statement to frame the specific challenge, or /solution-space if already well-framed. Can loop back from: /salvage (constraints were wrong), /review (keeps hitting same blockers).
After problem space mapping, typically:
- /problem-statement - Crisp articulation of what needs solving
- /solution-space - Explore candidate implementations

Remember: Problem space is not about delay. It's about building the right thing. The constraint is alignment, not delivery. When execution is cheap, understanding is the leverage.
Weekly Installs
62
Repository
GitHub Stars
1
First Seen
Jan 27, 2026
Security Audits
Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on
claude-code: 46
opencode: 34
gemini-cli: 23
codex: 22
github-copilot: 20
cursor: 19