```shell
npx skills add https://github.com/poteto/noodle --skill adversarial-review
```
Spawn reviewers on the opposite model to challenge work. Reviewers attack from distinct lenses grounded in brain principles. The deliverable is a synthesized verdict — do NOT make changes.
Hard constraint: Reviewers MUST run via the opposite model's CLI (codex exec or claude -p). Do NOT use subagents, the Agent tool, or any internal delegation mechanism as reviewers — those run on your own model, which defeats the purpose.
Read brain/principles.md. Follow every [[wikilink]] and read each linked principle file. These govern reviewer judgments.
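The wikilink traversal can be sketched in shell. Assumption not stated by the skill: that `[[name]]` resolves to `brain/name.md`; the real vault layout may differ.

```shell
# Sketch: collect [[wikilink]] targets from a principles file and map
# each to a candidate path. The brain/<name>.md mapping is an
# assumption, not a documented convention of this skill.
extract_wikilinks() {
  grep -o '\[\[[^]]*\]\]' "$1" | sed 's/^\[\[//; s/\]\]$//'
}

# Demo against a throwaway file, not the real brain/principles.md.
tmp=$(mktemp)
printf 'Prefer [[small-diffs]] over [[big-rewrites]].\n' > "$tmp"
for link in $(extract_wikilinks "$tmp"); do
  echo "would read: brain/$link.md"
done
rm -f "$tmp"
```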
Identify what to review from context (recent diffs, referenced plans, user message).
Determine the intent — what the author is trying to achieve. This is critical: reviewers challenge whether the work *achieves the intent well*, not whether the intent is correct. State the intent explicitly before proceeding.
Assess change size:
| Size | Threshold | Reviewers |
|---|---|---|
| Small | < 50 lines, 1-2 files | 1 (Skeptic) |
| Medium | 50-200 lines, 3-5 files | 2 (Skeptic + Architect) |
| Large | 200+ lines or 5+ files | 3 (Skeptic + Architect + Minimalist) |
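The tiers above can be expressed as a small function. Note the table's boundary at exactly 5 files is ambiguous (Medium says 3-5, Large says 5+); this sketch sends more than 5 files to Large.

```shell
# Sketch of the sizing table as a function: line count and file count
# in, reviewer roster out. Thresholds mirror the table; >5 files is
# treated as Large to resolve the 5-file overlap.
reviewers_for() {
  lines=$1; files=$2
  if [ "$lines" -ge 200 ] || [ "$files" -gt 5 ]; then
    echo "skeptic architect minimalist"
  elif [ "$lines" -ge 50 ] || [ "$files" -ge 3 ]; then
    echo "skeptic architect"
  else
    echo "skeptic"
  fi
}

# In a real run the counts could come from e.g. `git diff --shortstat`.
reviewers_for 42 2     # -> skeptic
reviewers_for 250 7    # -> skeptic architect minimalist
```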
Read references/reviewer-lenses.md for lens definitions.
Create a temp directory for reviewer output:
```shell
REVIEW_DIR=$(mktemp -d /tmp/adversarial-review.XXXXXX)
```
Determine which model you are, then spawn reviewers on the opposite:
If you are Claude — spawn Codex reviewers via codex exec:
```shell
codex exec --skip-git-repo-check -o "$REVIEW_DIR/skeptic.md" "prompt" 2>/dev/null
```
Use --profile edit only if the reviewer needs to run tests. Default to read-only. Run with run_in_background: true, monitor via TaskOutput with block: true, timeout: 600000.
If you are Codex — spawn Claude reviewers via claude CLI:
```shell
claude -p "prompt" > "$REVIEW_DIR/skeptic.md" 2>/dev/null
```
Run with run_in_background: true.
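The two branches above can be folded into one helper. This is a dry-run sketch that only prints the command it would run; the `spawn_cmd` helper and the `<prompt>` placeholder are illustrative, not part of the skill.

```shell
# Dry-run sketch: print the spawn command for one lens on the chosen
# opposite-model CLI. "<prompt>" stands in for the built reviewer
# prompt; swap the echo bodies for real execution.
REVIEW_DIR=$(mktemp -d /tmp/adversarial-review.XXXXXX)
REVIEWER_CLI=codex   # set to "claude" if you are Codex

spawn_cmd() {
  lens=$1
  case "$REVIEWER_CLI" in
    codex)  echo "codex exec --skip-git-repo-check -o $REVIEW_DIR/$lens.md <prompt>" ;;
    claude) echo "claude -p <prompt> > $REVIEW_DIR/$lens.md" ;;
  esac
}

for lens in skeptic architect minimalist; do
  spawn_cmd "$lens"
done
rmdir "$REVIEW_DIR"   # demo cleanup only; keep the dir in a real run
```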
Name each output file after the lens: skeptic.md, architect.md, minimalist.md.
Build each reviewer's prompt using the template in references/reviewer-prompt.md.
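Prompt construction might look like a placeholder substitution. The `{{LENS}}` and `{{WORK}}` markers here are hypothetical; the actual template structure in references/reviewer-prompt.md is not shown in this document.

```shell
# Fill a reviewer prompt template for one lens. {{LENS}} and {{WORK}}
# are hypothetical placeholders; check references/reviewer-prompt.md
# for the template's real markers. Values containing "/" would need
# escaping before being fed to sed.
build_prompt() {
  template=$1; lens=$2; work=$3
  sed -e "s/{{LENS}}/$lens/g" -e "s/{{WORK}}/$work/g" "$template"
}

tmpl=$(mktemp)
printf 'You are the {{LENS}} reviewer. Challenge: {{WORK}}\n' > "$tmpl"
build_prompt "$tmpl" skeptic "the auth refactor"
rm -f "$tmpl"
```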
Before reading reviewer output, log which CLI was used and confirm the output files exist:
```shell
echo "reviewer_cli=codex|claude"
ls "$REVIEW_DIR"/*.md
```
If any output file is missing or empty, note the failure in the verdict — do not silently skip a reviewer.
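The missing-or-empty check can be made explicit. The helper below is illustrative: it collects failed lenses so they can be noted in the verdict rather than silently dropped.

```shell
# Report reviewers whose output file is missing or empty, instead of
# silently skipping them in the verdict.
check_outputs() {
  dir=$1; shift
  failed=""
  for lens in "$@"; do
    [ -s "$dir/$lens.md" ] || failed="$failed $lens"
  done
  echo "$failed"
}

demo=$(mktemp -d)
echo "finding: unchecked error path" > "$demo/skeptic.md"
: > "$demo/architect.md"   # empty output counts as a failure
echo "failed reviewers:$(check_outputs "$demo" skeptic architect minimalist)"
# -> failed reviewers: architect minimalist
rm -rf "$demo"
```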
Read each reviewer's output file from $REVIEW_DIR/. Deduplicate overlapping findings. Produce a single verdict using the format in references/verdict-format.md.
After synthesizing the reviewers, apply your own judgment. Using the stated intent and brain principles as your frame, state which findings you would accept and which you would reject — and why. Reviewers are adversarial by design; not every finding warrants action. Call out false positives, overreach, and findings that mistake style for substance.
Append the Lead Judgment section to the verdict (see references/verdict-format.md).
Weekly Installs: 346
Repository: github.com/poteto/noodle
GitHub Stars: 124
First Seen: Mar 3, 2026
Security Audits: Gen Agent Trust Hub: Fail · Socket: Pass · Snyk: Fail
Installed on:
- codex: 336
- opencode: 333
- gemini-cli: 332
- kimi-cli: 331
- github-copilot: 331
- amp: 331