npx skills add https://github.com/getsentry/skills --skill iterate-pr
Continuously iterate on the current branch until all CI checks pass and review feedback is addressed.
Requires: An authenticated GitHub CLI (gh).
Requires: The uv CLI for Python package management; install guide at https://docs.astral.sh/uv/getting-started/installation/
Important: All scripts must be run from the repository root (where .git is located), not from the skill directory. Use the full path to each script via ${CLAUDE_SKILL_ROOT}.
scripts/fetch_pr_checks.py
Fetches CI check status and extracts failure snippets from logs.
uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py [--pr NUMBER]
Returns JSON:
{
"pr": {"number": 123, "branch": "feat/foo"},
"summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0},
"checks": [
{"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123},
{"name": "lint", "status": "pass"}
]
}
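The summary and checks fields drive the iteration loop. As a minimal sketch, consuming this payload might look like the following (the values mirror the example above):

```python
import json

# Payload in the documented shape; values mirror the example above.
payload = json.loads("""
{
  "pr": {"number": 123, "branch": "feat/foo"},
  "summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0},
  "checks": [
    {"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123},
    {"name": "lint", "status": "pass"}
  ]
}
""")

# Failing checks carry log_snippet and run_id for deeper inspection.
failed = [c for c in payload["checks"] if c["status"] == "fail"]
done = payload["summary"]["pending"] == 0

print([c["name"] for c in failed])  # ['tests']
print(done)                         # True
```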
scripts/fetch_pr_feedback.py
Fetches and categorizes PR review feedback using the LOGAF scale.
uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py [--pr NUMBER]
Returns JSON with feedback categorized as:
high - Must address before merge (h:, blocker, changes requested)
medium - Should address (m:, standard feedback)
low - Optional (l:, nit, style, suggestion)
bot - Informational automated comments (Codecov, Dependabot, etc.)
resolved - Already resolved threads
Review bot feedback (from Sentry, Warden, Cursor, Bugbot, CodeQL, etc.) appears in high/medium/low with review_bot: true — it is NOT placed in the bot bucket.
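As a sketch, the buckets above map onto the actions described later in this document (the bucket names are documented; the item bodies below are invented for illustration):

```python
# Illustrative fetch_pr_feedback.py output; bucket names are documented,
# item contents are invented for the example.
feedback = {
    "high": [{"body": "possible SQL injection", "review_bot": True, "thread_id": "PRRT_x"}],
    "medium": [],
    "low": [{"body": "nit: rename variable"}],
    "bot": [{"body": "Codecov report"}],
    "resolved": [],
}

auto_fix = feedback["high"] + feedback["medium"]  # address without prompting
ask_user = feedback["low"]                        # present as a numbered list
skipped = feedback["bot"] + feedback["resolved"]  # skip silently

print(len(auto_fix), len(ask_user), len(skipped))  # 1 1 1
```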
Each feedback item may also include:
thread_id - GraphQL node ID for inline review comments (used for replies via reply_to_thread.py)
scripts/reply_to_thread.py
Replies to PR review threads. Batches multiple replies into a single GraphQL call.
uv run ${CLAUDE_SKILL_ROOT}/scripts/reply_to_thread.py THREAD_ID "body" [THREAD_ID "body" ...]
Arguments are alternating (thread_id, body) pairs. Example:
uv run ${CLAUDE_SKILL_ROOT}/scripts/reply_to_thread.py \
PRRT_abc $'Fixed the null check.\n\n*— Claude Code*' \
PRRT_def $'Replaced with path-segment counting.\n\n*— Claude Code*'
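Since the script takes alternating (thread_id, body) pairs, the argument list can be built programmatically, as in this sketch (the thread IDs are placeholders; real IDs come from fetch_pr_feedback.py):

```python
# Hypothetical replies keyed by thread_id.
replies = {
    "PRRT_abc": "Fixed the null check.\n\n*— Claude Code*",
    "PRRT_def": "Replaced with path-segment counting.\n\n*— Claude Code*",
}

# Flatten into the alternating THREAD_ID body argument list the script expects.
args = [arg for thread_id, body in replies.items() for arg in (thread_id, body)]
assert len(args) % 2 == 0, "arguments must stay in (thread_id, body) pairs"

print(args[0], args[2])  # PRRT_abc PRRT_def
```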
gh pr view --json number,url,headRefName
Stop if no PR exists for the current branch.
Run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py to get categorized feedback already posted on the PR.
Auto-fix (no prompt):
high - must address (blockers, security, changes requested)
medium - should address (standard feedback)
When fixing feedback:
This includes review bot feedback (items with review_bot: true). Treat it the same as human feedback:
Prompt user for selection:
low - present numbered list and ask which to address:
Found 3 low-priority suggestions:
Which would you like to address? (e.g., "1,3" or "all" or "none")
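The selection answer ("1,3", "all", or "none") can be parsed with a small helper like this sketch:

```python
def parse_selection(answer: str, count: int) -> list[int]:
    """Turn answers like "1,3", "all", or "none" into zero-based indices."""
    answer = answer.strip().lower()
    if answer == "all":
        return list(range(count))
    if answer == "none":
        return []
    return [int(part) - 1 for part in answer.split(",")]

print(parse_selection("1,3", 3))   # [0, 2]
print(parse_selection("all", 3))   # [0, 1, 2]
print(parse_selection("none", 3))  # []
```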
Skip silently:
resolved threads
bot comments (informational only — Codecov, Dependabot, etc.)
After processing each inline review comment, reply on the PR thread to acknowledge the action taken. Only reply to items with a thread_id (inline review comments).
When to reply:
high and medium items — whether fixed or determined to be false positives
low items — whether fixed or declined by the user
How to reply: Use ${CLAUDE_SKILL_ROOT}/scripts/reply_to_thread.py. Batch all replies for a round into a single call:
uv run ${CLAUDE_SKILL_ROOT}/scripts/reply_to_thread.py \
PRRT_abc $'Fixed — description of change.\n\n*— Claude Code*' \
PRRT_def $'Not applicable — reason.\n\n*— Claude Code*'
Reply format:
End each reply with \n\n*— Claude Code*
Skip threads whose latest reply already ends with *- Claude Code* or *— Claude Code* to avoid duplicate replies on re-loops
Run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py to get structured failure data.
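Detecting an existing signature to avoid duplicate replies might look like this sketch (the thread shape is an assumption, not the script's actual data model):

```python
# Both signature variants used by the reply format above.
SIGNATURES = ("*- Claude Code*", "*— Claude Code*")

def already_replied(thread: dict) -> bool:
    """True if the thread's latest comment already carries the signature."""
    comments = thread.get("comments", [])
    return bool(comments) and comments[-1]["body"].rstrip().endswith(SIGNATURES)

print(already_replied({"comments": [{"body": "Fixed.\n\n*— Claude Code*"}]}))  # True
print(already_replied({"comments": []}))                                       # False
```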
Wait if pending: If review bot checks (sentry, warden, cursor, bugbot, seer, codeql) are still running, wait before proceeding — they post actionable feedback that must be evaluated. Informational bots (codecov) are not worth waiting for.
For each failure in the script output:
Read the log_snippet and trace backwards from the error to understand WHY it failed — not just what failed.
Do NOT assume what failed based on check name alone — always read the logs.
Do NOT "quick fix and hope" — understand the failure thoroughly before changing code.
Before committing, verify your fixes locally:
If local verification fails, fix before proceeding — do not push known-broken code.
git add <files>
git commit -m "fix: <descriptive message>"
git push
Poll CI status and review feedback in a loop instead of blocking:
a. Run uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py for current CI status and uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py for new review feedback
b. Address any new high/medium feedback immediately (same as step 3)
c. If changes were needed, commit and push (this restarts CI), then continue polling
d. Sleep 30 seconds (don't increase on subsequent iterations), then repeat from sub-step a
Once all checks pass: sleep 10, then run uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py. Address any new high/medium feedback — if changes are needed, return to step 6.
If step 7 required code changes (from new feedback after CI passed), return to step 2 for a fresh cycle. CI failures during monitoring are already handled within step 7's polling loop.
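The fixed-interval cadence of the polling loop can be sketched as follows (the status source here is a stub standing in for fetch_pr_checks.py output):

```python
import time

def poll(fetch_summary, max_iters=20, interval_s=30, sleep=time.sleep):
    """Poll until no checks are pending; fixed interval, no backoff."""
    summary = fetch_summary()
    for _ in range(max_iters):
        if summary["pending"] == 0:
            return summary
        sleep(interval_s)  # fixed 30s; don't increase on later iterations
        summary = fetch_summary()
    return summary

# Stub: two iterations still pending, then done (no real sleeping in the demo).
states = iter([
    {"pending": 2, "failed": 0},
    {"pending": 1, "failed": 0},
    {"pending": 0, "failed": 0},
])
result = poll(lambda: next(states), sleep=lambda s: None)
print(result)  # {'pending': 0, 'failed': 0}
```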
Success: All checks pass, post-CI feedback re-check is clean (no new unaddressed high/medium feedback including review bot findings), user has decided on low-priority items.
Ask for help: the same failure persists after 2 attempts, feedback needs clarification, or there are infrastructure issues.
Stop: no PR exists, or the branch needs a rebase.
If scripts fail, use gh CLI directly:
gh pr checks --json name,state,bucket,link
gh run view <run-id> --log-failed
gh api repos/{owner}/{repo}/pulls/{number}/comments