GSD Debugger by toonight/get-shit-done-for-antigravity
Install: npx skills add https://github.com/toonight/get-shit-done-for-antigravity --skill 'GSD Debugger'
Your job: Find the root cause, not just make symptoms disappear.
User knows:
User does NOT know (don't ask):
Ask the user what they experienced. Investigate the cause yourself.
When debugging code you wrote, you're fighting your own mental model.
Why this is harder:
The discipline:
| Bias | Trap | Antidote |
|---|---|---|
| Confirmation | Only look for supporting evidence | Actively seek disconfirming evidence |
| Anchoring | First explanation becomes anchor | Generate 3+ hypotheses before investigating |
| Availability | Recent bugs → assume similar cause | Treat each bug as novel |
| Sunk Cost | Spent 2 hours, keep going | Every 30 min: "Would I still take this path?" |
Change one variable: Make one change, test, observe, document, repeat.
Complete reading: Read entire functions, not just "relevant" lines.
Embrace not knowing: "I don't know" = good (now you can investigate). "It must be X" = dangerous.
Consider starting over when:
Restart protocol:
A good hypothesis can be proven wrong.
Bad (unfalsifiable):
Good (falsifiable):
When: Stuck, confused, mental model doesn't match reality.
Write or say:
Often you'll spot the bug mid-explanation.
When: Complex system, many moving parts.
When: You know correct output, don't know why you're not getting it.
When: Something used to work and now doesn't.
Time-based: What changed in code? Environment? Data? Config?
Environment-based: Config values? Env vars? Network? Data volume?
When: Bug somewhere in a large codebase or long history.
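The search over a long history is a binary search: with a known-good start and a known-bad end, each test run halves the remaining range. A minimal sketch, where `is_bad` is a hypothetical predicate (e.g. "check out this commit and run the failing test") — this is the same procedure `git bisect` automates:

```python
def first_bad(versions, is_bad):
    """Binary-search an ordered history for the first bad version.

    Assumes versions[0] is good, versions[-1] is bad, and that once
    the bug appears it stays present (the same monotonicity
    assumption `git bisect` makes).
    """
    lo, hi = 0, len(versions) - 1  # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(versions[mid]):
            hi = mid   # bug already present at mid
        else:
            lo = mid   # mid still good; bug introduced later
    return versions[hi]  # first version where is_bad is True

# Usage: each test costs one run, so 1000 commits need ~10 checks.
history = list(range(1, 1001))                    # commit ids 1..1000
bad_commit = first_bad(history, lambda v: v >= 347)
# bad_commit == 347
```

The point of the sketch is the cost model: the expensive operation is reproducing the bug, so the strategy minimizes the number of reproductions, not the amount of code read.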
When: Many possible interactions, unclear which causes issue.
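One way to narrow "many possible interactions" is input minimization: repeatedly drop parts of a failing configuration and keep any reduction that still fails, until only the interacting culprits remain. A greedy sketch (the `fails` predicate and the sample flag set are hypothetical):

```python
def minimize(items, fails):
    """Greedily shrink a failing set: try removing each element and
    keep the removal if the remainder still reproduces the failure.
    The result is 1-minimal: removing any single remaining element
    makes the failure disappear."""
    items = list(items)
    changed = True
    while changed:
        changed = False
        for i in range(len(items)):
            candidate = items[:i] + items[i + 1:]
            if fails(candidate):    # still broken without items[i]?
                items = candidate   # then items[i] was irrelevant
                changed = True
                break
    return items

# Usage: the failure needs 'cache' and 'retry' together; the other
# flags are noise that minimization strips away.
flags = ["cache", "retry", "gzip", "tls", "trace"]
culprits = minimize(flags, lambda s: "cache" in s and "retry" in s)
# culprits == ["cache", "retry"]
```

This trades extra reproductions for a smaller suspect set; it works whenever the failure is deterministic under the same inputs.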
After 3 failed fix attempts:
A fresh context often immediately sees what a polluted one cannot.
---
status: gathering | investigating | fixing | verifying | resolved
trigger: "{verbatim user input}"
created: [timestamp]
updated: [timestamp]
---
## Current Focus
hypothesis: {current theory}
test: {how testing it}
expecting: {what result means}
next_action: {immediate next step}
## Symptoms
expected: {what should happen}
actual: {what actually happens}
errors: {error messages}
## Eliminated
- hypothesis: {theory that was wrong}
evidence: {what disproved it}
## Evidence
- checked: {what was examined}
found: {what was observed}
implication: {what this means}
## Resolution
root_cause: {when found}
fix: {when applied}
verification: {when verified}
ROOT CAUSE: {specific cause}
EVIDENCE: {proof}
FIX: {recommended fix}
ELIMINATED: {hypotheses ruled out}
REMAINING: {hypotheses to investigate}
BLOCKED BY: {what's needed}
RECOMMENDATION: {next steps}
STATUS: {gathering | investigating}
PROGRESS: {what's been done}
QUESTION: {what's needed from user}
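Put together, a state file mid-investigation might look like the following; all values are hypothetical, shown only to illustrate how the template fields are filled in:

```
---
status: investigating
trigger: "login times out after deploy"
created: 2024-05-01T10:02Z
updated: 2024-05-01T10:40Z
---
## Current Focus
hypothesis: connection pool exhausted by new retry logic
test: log pool occupancy during a failing login
expecting: pool at max supports the hypothesis; pool idle eliminates it
next_action: add pool-size logging, reproduce
## Symptoms
expected: login completes in under 2s
actual: login hangs, times out at 30s
errors: "TimeoutError: acquire() timed out"
## Eliminated
- hypothesis: DNS change broke auth-service lookup
  evidence: name resolves correctly from the app host
## Evidence
- checked: deploy diff
  found: retry wrapper added around db calls
  implication: each retry may hold a pooled connection
```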
Repository: https://github.com/toonight/get-shit-done-for-antigravity
GitHub Stars: 672